Columns: Response, Instruction, Prompt
Using the boto3 Resource method:

import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')
instance = ec2_resource.Instance('i-12345')
if instance.state['Name'] == 'running':
    print('It is running')

Using the boto3 Client method:

import boto3

ec2_client = boto3.client('ec2', region_name='ap-southeast-2')
response = ec2_client.describe_instance_status(InstanceIds=['i-12345'])
if response['InstanceStatuses'][0]['InstanceState']['Name'] == 'running':
    print('It is running')
I have the instance ID of an EC2 instance. How can I check whether that EC2 instance is running or not using an if statement? I am using Python and Boto3.
How to check if an ec2 instance is running or not with an if statement?
There is a simple fix. Just edit the file wp-config.php and add this code inside it. First try this:

define('FS_METHOD', 'direct');

Note: do not add this at the end of the file, but just below the database information near the top of the file. If that is not enough, also define your FTP credentials:

define('FTP_USER', 'username');            // Your FTP username
define('FTP_PASS', 'password');            // Your FTP password
define('FTP_HOST', 'ftp.example.org:21');  // Your FTP URL:Your FTP port

Also please read this blog post.
I host WordPress on AWS EC2 (Ubuntu) and encounter the following error while updating plugins: "To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host." The rwx permission has been granted to the user www-data. Here is what I do:

# Add a new group
groupadd www-pub
# Add the user 'www-data' to the group 'www-pub'
usermod -a -G www-pub www-data
# Change the ownership of '/var/www/' to 'ubuntu:www-pub'
chown -R ubuntu:www-pub /var/www
# Change the permissions of all the folders to 2775
find /var/www -type d -exec chmod 2775 {} +
# Change the permissions of all the files to 0664
find /var/www -type f -exec chmod 0664 {} +

As you can see, www-data has all the right permissions, but I am still required to enter the FTP credentials. What is the reason and how can I fix it?
WordPress needs the FTP credentials to update plugins
If you set up a key pair for your instance on creation, then there is no password for the user ubuntu, nor is password-based login enabled. You must log in using your key pair. To execute commands that require elevated permissions, use sudo in front of your command. The actual command to restart Apache depends on which Linux distribution and version you are using:

sudo systemctl restart apache2

or

sudo service apache2 restart
I am trying to restart Apache and I get:

ubuntu@ip-172-xx-xx-xx:~$ systemctl restart apache2
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to restart 'apache2.service'.
Authenticating as: Ubuntu (ubuntu)

This is a brand new instance and I never set the ubuntu user password. Is there a default password set on instance creation? How do I restart Apache in AWS Ubuntu?
How do I run systemctl restart apache2 in AWS Ubuntu when it requires a password [closed]
It is pretty straightforward, as shown in the link from your question. What are you having trouble with?

Minimal example. Imports: github.com/aws/aws-sdk-go/aws, github.com/aws/aws-sdk-go/service/ses, github.com/aws/aws-sdk-go/aws/credentials and github.com/aws/aws-sdk-go/aws/session.

awsSession := session.New(&aws.Config{
    Region:      aws.String("aws.region"),
    Credentials: credentials.NewStaticCredentials("aws.accessKeyID", "aws.secretAccessKey", ""),
})
sesSession := ses.New(awsSession)

sesEmailInput := &ses.SendEmailInput{
    Destination: &ses.Destination{
        ToAddresses: []*string{aws.String("[email protected]")},
    },
    Message: &ses.Message{
        Body: &ses.Body{
            Html: &ses.Content{Data: aws.String("Body HTML")},
        },
        Subject: &ses.Content{
            Data: aws.String("Subject"),
        },
    },
    Source: aws.String("[email protected]"),
    ReplyToAddresses: []*string{
        aws.String("[email protected]"),
    },
}

_, err := sesSession.SendEmail(sesEmailInput)
I'm using AWS to host my server, written in Go. I am stuck as I'm not sure how to use their AWS SES SDK to send an email. Any ideas?
How to integrate aws sdk ses in golang?
The error message:

Invalid length for parameter Key

is telling you that you need to specify a Key for your object (basically a filename). Like so:

aws s3 cp test.zip s3://my-bucket/test.zip
This works on my Linux box, but I can't get a simple AWS S3 CLI command to work on a Windows server (2012). I'm running a simple copy command to a bucket and I get the following error:

Parameter validation failed: Invalid length for parameter Key, value: 0, valid range: 1-inf

I googled this and couldn't find anything relevant, and I'm not the best at working with Windows servers. What does this error actually mean? Here's the command:

aws s3 cp test.zip s3://my-bucket

Version:

aws-cli/1.11.158 Python/2.7.9 Windows/2012Server botocore/1.7.16
AWS S3 cli not working on Windows server
You can disable and enable Lambda triggers with Update Event Source Mapping, using any of the following approaches, depending on how you are going to do it.

Using the AWS CLI: use the update-event-source-mapping command with the --enabled | --no-enabled parameters (see the example below).
Using an AWS SDK (e.g. Node.js): use the updateEventSourceMapping method with the Enabled: true || false attribute.
Using the AWS REST API: use UpdateEventSourceMapping with the "Enabled": boolean attribute.

Note: you need to grant the relevant permissions for each approach, using IAM roles/users or temporary access credentials.
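A minimal sketch of the CLI approach (the function name and the mapping UUID below are placeholders; look the UUID up first with list-event-source-mappings):

# Find the UUID of the event source mapping for the function
aws lambda list-event-source-mappings --function-name my-function

# Disable the trigger
aws lambda update-event-source-mapping --uuid a1b2c3d4-5678-90ab-cdef-11111EXAMPLE --no-enabled

# Re-enable it after maintenance
aws lambda update-event-source-mapping --uuid a1b2c3d4-5678-90ab-cdef-11111EXAMPLE --enabled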
Is there a way for us to disable and enable a Lambda trigger programmatically (e.g. for scheduled maintenance purposes)?
Disable and enable AWS lambda trigger programmatically
Please deploy this view from GitHub: "v_get_obj_priv_by_user". Once done, follow the steps below, where A_user is the user that has to be dropped and B_user is the user that ownership of the old tables needs to be mapped to.

If you wish to change the owner of all tables belonging to A_user, then:

select schemaname, tablename from pg_tables where tableowner like 'A_user';

For the tables retrieved above, run:

alter table schemaname.tablename owner to B_user;

Revoke all on the schemas where A_user has some privileges:

select distinct schemaname from admin.v_get_obj_priv_by_user where usename like 'A_user';

For the schemas retrieved above, run:

revoke all on schema XXXX from A_user;

Revoke all on the tables where A_user has some privileges:

select distinct tables from admin.v_get_obj_priv_by_user where usename like 'A_user';

For the tables retrieved above, run:

revoke all on all tables in schema XXXX from A_user;

Then:

drop user A_user;

If there are two databases in one cluster, please do this for both databases.
I am trying to drop a user from Redshift but it always fails with the same message:

user "XXX" cannot be dropped because the user has a privilege on some object

Following a Google search I found out that I need to revoke the user's permissions, so I ran several revoke queries, but I still fail with the same message. The queries I ran:

revoke all on schema YYY from XXX;
revoke usage on schema ZZZ from XXX;
revoke all on database LLL from XXX;

Any idea why I still get this failure message?
Can't drop user from Redshift
I was running into a symlinks issue deploying a Node Elastic Beanstalk app. It looks like symlinks are now supported for artifacts; check out the docs:

artifacts:
  enable-symlinks: yes

Adding this to the buildspec.yml file solved my issue.
I have some symlinks in my GitHub repo. When I have a CodeBuild project that clones directly from GitHub, symlinks are preserved. I switched so that CodePipeline listens for changes in my dev branch in GitHub and passes the artifacts to CodeBuild. Since making this switch, CodeBuild can't see the symlinks anymore. Is this by design, or am I perhaps missing something in how my CodePipeline is configured?
Will AWS Codepipeline pass symlinks to Codebuild in artifacts
Most likely you want to get the attribute DNSName for the load balancer whose reference is RestELB. So you will need something with Fn::GetAtt, like (untested):

"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "RestELB", "DNSName" ] },
      "EvaluateTargetHealth" : "Boolean",
      "HostedZoneId" : "String"
    },
    "HostedZoneName" : "example.net.",
    "Comment" : "A records for my frontends.",
    "Name" : "api.example.net.",
    "Type" : "A"
  }
}
What I am trying to do is connect the load balancer's DNS name to Route 53. Let's look at an example. Here is the load balancer from the template's Resources section:

"RestELB" : {
  "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
  "DependsOn": "AttachGateway",
  "Properties": {
    "LoadBalancerName": {"Fn::Join": ["",["Rest-ELB-", {"Ref": "VPC"}]]},
    "CrossZone" : "true",
    "Subnets": [{ "Ref": "PublicSubnet1" },{ "Ref": "PublicSubnet2" }],
    "Listeners" : [
      {"LoadBalancerPort" : "80", "InstancePort" : "80","Protocol" : "HTTP"},
      {"LoadBalancerPort" : "6060", "InstancePort" : "6060","Protocol" : "HTTP"}
    ],
  }
},

And here is the Route 53 record:

"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" :{
      "DNSName" : [ {"Fn::Join": ["", [{"ElasticLoadBalancer": "DNSName"},"."]]} ],
      "EvaluateTargetHealth" : "Boolean",
      "HostedZoneId" : "String"
    },
    "HostedZoneName" : "example.net.",
    "Comment" : "A records for my frontends.",
    "Name" : "api.example.net.",
    "Type" : "A",
    "TTL" : "900",
  }
}

Just putting {"ElasticLoadBalancer": "DNSName"} didn't work. Can someone suggest or give me the correct way to add this? Thanks!
Is it possible to connect a load balancer's DNS name to Route53 using an AWS CloudFormation template?
There is an account setting to enable IAM user access, in addition to the permissions on individual IAM users. On the billing home page, scroll down and look for "IAM User Access to Billing Information". You need to edit and update this setting to allow IAM user access.
I have an IAM user with Administrator permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

but when I try to access the "/billing" page AWS says:

You are not authorized to perform this operation. You are currently signed in as an IAM user that does not have permissions to the requested page.

I also tried to generate a specific policy like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1438861751000",
      "Effect": "Allow",
      "Action": [
        "aws-portal:ViewBilling"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

But it doesn't change anything; I still can't access the billing info. When I go to the policy simulation page it says that the permission is allowed for the user. I've seen the AWS guide and tried to follow it: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-permissions-ref.html

Is there something I'm missing? What could it be?
AWS permissions to access Account and Billing info
Inspired by @marcus and his GitHub repo, I just did this:

sudo wget -O /usr/lib64/libgcj.so.10 https://github.com/lob/lambda-pdftk-example/raw/master/bin/libgcj.so.10
sudo wget -O /usr/bin/pdftk https://github.com/lob/lambda-pdftk-example/raw/master/bin/pdftk
chmod a+x /usr/bin/pdftk

Works like a charm.
I am not able to install pdftk on Amazon Linux AMI release 2012.03. pdftk requires gcj 2.14 and the Amazon AMI provides the gcj 2.12 package. If I try to install gcj 2.14 I get conflicts with the existing gcj package. Please suggest a way to install pdftk without needing to upgrade the Amazon Linux AMI, as my application is already set up and running there. Your help will be appreciated.
How to install pdftk on Amazon Linux AMI release 2012.03
I suggest boto - it's an active project, and boto's new home is now on GitHub, so you can fork it and add/patch it as desired (not that you need to - it seems very stable). The author recently got a job that lets him hack on this part time for work; see "And Now For Something Completely Different...".

Update: Meanwhile the author, Mitch Garnaat, has fortunately joined the AWS team as well (see "Big News Regarding Python, boto, and AWS"), promoting this de facto AWS SDK for Python to a semi-official one:

"Building on this model, Mitch Garnaat has also joined the team. Mitch has been a member of the AWS community for over 6 years and has made over 2,000 posts to the AWS Developer Forums. He is also the author of boto, the most popular third-party library for accessing AWS, and of the Python and AWS Cookbook."
Googling reveals several Python interfaces to Amazon Web Services (AWS). Which are the most popular, feature-complete, etc.?
Python: Amazon AWS interface?
I think all the AWS functions can return a Promise out of the box, so you can just put the call into a try/catch:

let triesCounter = 0;
while (triesCounter < 2) {
    console.log(`try #${triesCounter}`);
    try {
        await kinesis.putRecord(params).promise();
        break; // 'return' would work here as well
    } catch (err) {
        console.log(err);
    }
    triesCounter++;
}
I want my function to execute X (=3) times until success. In my situation I'm running kinesis.putRecord (from the AWS API), and if it fails I want to run it again until it succeeds, but not more than 3 tries. I'm new to NodeJS, and the code I wrote smells bad.

const putRecordsPromise = function(params){
    return new Promise((resolve, reject) => {
        kinesis.putRecord(params, function (err, data) {
            resolve(err)
        });
    })
}

async function waterfall(params){
    try{
        let triesCounter = 0;
        while(triesCounter < 2){
            console.log(`try #${triesCounter}`)
            let recordsAnswer = await putRecordsPromise(params)
            if(!recordsAnswer){
                console.log("success")
                break;
            }
            triesCounter += 1;
        }
        // continue ...
    } catch(err){
        console.error(err)
    }
}

waterfall(params)

I promise the err result. Afterwards, if the err is empty, then all good; otherwise, continue running the same command. I'm sure there is a smarter way to do this. Any help would be appreciated.
nodejs retry function if failed X times
Since you are using the resource interface, your code will look like this:

import boto3

ec2 = boto3.resource('ec2', region_name='us-west-2')
for instance in ec2.instances.all():
    print instance.id, instance.state
I want to list the EC2 instances in my AWS account with the boto module. I am getting the error "You must specify a region". Here is the program:

import boto3

ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    print instance.id, instance.state

I didn't specify any default region. How can I specify it programmatically?
How to specify aws region within boto program?
A "Not Supported" message shows because PHP is not able to run under root using cPanel. To allow the scripts to run, first install and enable suPHP, then set the PHP handler to use suPHP; it should then allow you to run .php scripts under temp URLs.

Log in to cPanel WHM and install suPHP (choose EasyApache 4 from the menu, then choose "Customize"; in Apache Modules, search for "mod_suphp" and then click 'Next' to provision the installation).

Once suPHP has been installed you need to set the PHP handler to use it. Access "Software > MultiPHP Manager" in WHM as root, click on PHP Handlers, click on 'Edit' for the respective PHP version and select suPHP from the dropdown (usually set to CGI by default), then click Apply.

Now try accessing the PHP file from the temp URL and it should work.
I access the site from an IP address like http://xx.xx.xx.xx/~cpaneluser. With this I'm able to access the HTML files, but when I try to open a .php file it shows a "Not Supported" error. Please help me solve this issue. I have both cPanel and WHM access, and it's hosted on Amazon AWS EC2.
Cpanel Apache mod_userdir Showing Not Supported Error on PHP File Execution
Please try setting the region to "us-east-1". It worked for me before:

var sns = new AWS.SNS({ "region": "us-east-1" });
Here is my code for sending an SMS to a particular number with the AWS SMS service:

var AWS = require('aws-sdk');

AWS.config.update({
    accessKeyId: '{ID}',
    secretAccessKey: '{KEY}',
    region: 'us-east-2'
});

var sns = new AWS.SNS();

var params = {
    Message: 'this is a test message',
    MessageStructure: 'text',
    PhoneNumber: '+XXXXXXXX'
};

sns.publish(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else     console.log(data);           // successful response
});

But I got the following error in the console:

InvalidParameter: Invalid parameter: PhoneNumber Reason: +XXXXXX is not valid to publish
Send SMS with Amazon SNS in node js : Invalid parameter: PhoneNumber Reason: +XXXX is not valid to publish
Is it correct to not run manual VACUUM with auto_vacuum enabled? You generally do not need a manual vacuum of any kind. If autovacuum is not keeping up, make it run more often and faster; see the autovacuum documentation.

How can I monitor the progress and performance of auto_vacuum? Watch for growth of table bloat. There is, unfortunately, no pg_stat_autovacuum or similar. You can see autovacuum working in pg_stat_activity, but only instant-to-instant. Detailed analysis requires trawling through log files with autovacuum logging enabled.

How do I know it is not stuck in the same place as the manual VACUUM? Check pg_stat_activity. You don't know it's in the same place, and you can't even really tell if it's progressing or not, but you can see if it's running or not.

Lots of improvement could be made to admin/monitoring of vacuum, as you can see. We lack people who have the time, willingness and knowledge required to do it, though. Everyone wants to add new shiny features instead.

Do I still need to run ANALYZE on a regular basis? No.

Is there a way to enable automatic ANALYZE, similar to auto_vacuum? Autovacuum runs ANALYZE (or rather VACUUM ANALYZE) when required.
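As a rough way to check whether autovacuum workers are currently running (a sketch; the host, database and user are placeholders):

# Show currently running autovacuum workers and how long they have been at it
psql -h mydb.example.rds.amazonaws.com -U myuser -d mydb -c \
  "SELECT pid, state, now() - xact_start AS running_for, query
   FROM pg_stat_activity
   WHERE query LIKE 'autovacuum:%';"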
We run PostgreSQL 9.3 on the AWS RDS platform. Every night at 1am we've been running a global VACUUM ANALYZE job. Yesterday we observed severe degradation in performance, and as it turned out we had 5 VACUUM ANALYZE processes stuck for the past 5 days. Over the same period of time the disk utilization went up by 45 gigabytes. I killed them with pg_terminate_backend but that didn't have much impact; the processes looked dead but performance was still severely degraded. Since we are using AWS RDS, we performed a reboot with failover and performance drastically improved right away.

This morning I checked and found that VACUUM ANALYZE was stuck again for 5 hours. I killed it, but I suspect it is still there somewhere. Upon further investigation I confirmed that auto_vacuum is correctly enabled, which means we do not need to run manual VACUUM, but we may need to run ANALYZE on some or all of the tables.

In my research I found these articles: http://rhaas.blogspot.com/2011/03/troubleshooting-stuck-vacuums.html and http://wiki.postgresql.org/wiki/Introduction_to_VACUUM,_ANALYZE,_EXPLAIN,_and_COUNT.

In the end, I have the following questions:
Is it correct to not run manual VACUUM with auto_vacuum enabled?
How can I monitor the progress and performance of auto_vacuum? How do I know it is not stuck in the same place as the manual VACUUM?
Do I still need to run ANALYZE on a regular basis?
Is there a way to enable automatic ANALYZE, similar to auto_vacuum?
How to deal with a stuck PostgreSQL 9.3 VACUUM ANALYZE?
Don't know if you still need it, but here you go:

let credentialsProvider = AWSStaticCredentialsProvider(accessKey: "ACCESS KEY", secretKey: "SECRET KEY")
let configuration = AWSServiceConfiguration(region: .USWest2, credentialsProvider: credentialsProvider)

AWSS3.registerS3WithConfiguration(configuration, forKey: "defaultKey")
let s3 = AWSS3.S3ForKey("defaultKey")

let listRequest: AWSS3ListObjectsRequest = AWSS3ListObjectsRequest()
listRequest.bucket = "BUCKET"

s3.listObjects(listRequest).continueWithBlock { (task) -> AnyObject? in
    print("call returned")
    let listObjectsOutput = task.result
    for object in (listObjectsOutput?.contents)! {
        print(object.key)
    }
    return nil
}

(Thanks to Daniel for reminding me not to use deprecated code.) ;)
I am trying to figure out how to list all the objects from an AWS S3 bucket in Swift. I can't seem to find the information anywhere on the internet, but maybe I didn't look hard enough. If anyone could refer me to the code that will allow me to do this that would be great.
List all objects in AWS S3 bucket
Of course there will be latency when communicating between separate machines. If they are both in the same availability zone it will be extremely low, typically what you'd expect for two servers on the same LAN.

If they are in different availability zones in the same region, expect a latency on the order of 2-3 ms (per information provided at the 2012 AWS re:Invent conference). That's still quite low.

Using a VPC will not affect latency. That does not give you different physical connections between instances, just virtual isolation.

Finally, consider using Amazon RDS (Relational Database Service) instead of a dedicated EC2 instance for your MySQL database. The cost is about the same, and Amazon takes care of the housekeeping.
I am planning to run a web application and expecting traffic of around 100 to 200 users. Currently I have set up a single Small instance on Amazon. This instance consists of everything: the web server (Apache), the database server (MySQL) and the IMAP server (Dovecot). I am thinking of moving my database server out of this instance and creating a separate instance for it. Now my questions are:

Do I get latency in the communication between my web server and database server (both hosted on separate instances on Amazon)?
If yes, what is the standard way to overcome this? (Or do I need to set up a Virtual Private Cloud?)
Web server and database server hosted on separate instances of Amazon EC2
Answer from https://www.reddit.com/r/aws/comments/cwnbt1/aws_cloud9_server_refuses_to_connect/:

Once you start the Rails server, click the 'Preview' button. When this tells you that it refuses to connect, find the button that looks like two overlapping squares with an arrow to "pop out into a new window." Once it was in a new tab, it worked like a charm.

Hope this helps!
So I'm trying to make a website for school and I've been following this guy's tutorial on how to make a website. But for some reason when I get to lesson 32 and I enter the command

ec2-user:~/environment/blog $ rails server -b $IP -p $PORT

the website doesn't run and it says "somenumbersandletters.vfs.cloud9.us-east-2.amazonaws.com refused to connect" with an error. I've followed all the steps correctly (except for the directory he runs it from; I run it straight from blog instead of environment, because the other way it tells me I need to make a new app). I've tried disabling my firewall, and I've enabled cookies and searched the internet for a solution. I am very new to servers and coding and any help would be greatly appreciated! This is my terminal log.
AWS Cloud9 Server refuses to connect
SQS queues do not belong to a specific VPC. There is no networking involved when creating/configuring a queue. Access to SQS queues is entirely managed with IAM permissions.

With ECS, you will have to configure your task execution role properly. As an example, a policy like the following allows sending, receiving and deleting messages from a specific queue:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage",
        "sqs:SendMessage"
      ],
      "Resource": "arn:aws:sqs:<region>:<account>:<queue name>"
    }
  ]
}

See also Authentication and Access Control for Amazon SQS.
I have an ECS cluster inside a VPC. ECS has to read from an SQS queue. So, do I need to create the SQS queue in the same VPC for them to communicate? Also, say I wanted to communicate outside the VPC, how can I do that?
Do I need to create the SQS in the same VPC as the ECS is in?
MediaStore has a caching layer and lower latency than S3, so it is optimized for live streaming and just-in-time packaging of live and VOD content.
This is more of a philosophical question. For a simple video application, which AWS service is the best to choose, S3 or Elemental MediaStore? And at which point will one be better than the other? The question is mostly about performance, as I am aware of the different possibilities in the SDKs.

Background: I am making a video website for my company, which will be used for simple instructional videos (screen recordings of how to do stuff in different applications). The videos are mp4 videos made with iMovie and are around 30 s to 1 min long. On my website, I see no performance difference for the videos. Would I see a difference if the videos were 4K and 1 hour long? I like using MediaStore simply because it seems more appropriate for videos, but is there any difference?
AWS S3 vs Elemental MediaStore
This often happens when your bot build version differs from the current bot version. It should work if you refresh the page and choose "Latest" from the versions tag.
So I have recently gotten on board with AWS Lambda and I've been working on a bot since yesterday afternoon, but now all of a sudden this is happening: whenever I go to build the bot or save the intent, I just keep getting the message "The checksum value doesn't match for the resource named 'isRecyclableGarden'." isRecyclableGarden is one of the intents I am using within my code. I can't share the code as it's work code and I am fairly new to this. Any help on how I can work out how to validate the checksum again would be appreciated, as I can't actually edit or progress with this code! Pictured is the problem I am having.
The checksum value doesn't match for the resource named 'isRecyclableGarden'
You can see this error if the EC2 instance does not have the correct IAM role. Create an IAM role with the policy "AmazonEC2RoleforAWSCodeDeploy". You can't add an IAM role to an existing instance, so you'll have to launch a fresh one.

Also make sure you've installed the CodeDeploy agent for the correct region, e.g. for us-east-1:

apt-get -y install awscli ruby2.0
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto

http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-set-up-new-instance.html
I am working on a scenario where I need to push code from a Git repository to an AWS instance. To achieve this I am using the AWS CodeDeploy feature. But in the final step of the process, deploying the code, I am receiving the below error:

Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
AWS Deployment Failed due to "HEALTH_CONSTRAINTS"
Log in to your cPanel and go to the Simple DNS Zone Editor. Enter your subdomain name and the IP address of the AWS EC2 instance as an A record and click Add. Now you can open your subdomain/site_name and you will see the page which is hosted on the AWS EC2 instance.
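To confirm the record has propagated, you can check that the subdomain resolves to the instance's Elastic IP (sub.example.com is a placeholder for your actual subdomain):

# Should print the Elastic IP of the EC2 instance once DNS has propagated
dig +short sub.example.com A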
I have created a subdomain of my site; it's on BigRock. Now I have an EC2 instance with an Elastic IP and I am running a site from there. I can access that site by IP/SITE_NAME from the local browsers of my current machines too, so it runs on EC2 as well as on the current machine. Now I want to link that site to my subdomain. How can I do that? I don't want to redirect the subdomain; the browser should display the subdomain name itself and the contents from the AWS EC2 instance. Please help.
How to point subdomain to aws ec2
You should probably use the files key and not commands:

commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_make_changes.sh":
    mode: "000777"
    content: |
      #!/bin/bash
      mkdir -p /var/app/current/tmp/uploads/
      chmod 755 /var/app/current/tmp/uploads/

It will be triggered after the app deploy has finished.
I'm running a Rails 4.2 app on Elastic Beanstalk, and need to set log permissions and create the /tmp/uploads folder (plus permissions) after deploy. I was running two ebextensions scripts to do this, but on some occasions they would fail because the folder /var/app/current/ didn't yet exist. I'm presuming this is because the permissions and/or folders should be created on /app/ondeck/ first so that EB can copy the contents over to /var/app/current/, but I'm interested to see if there's a recommended and more foolproof approach to doing this. For reference, my two ebextension scripts were:

commands:
  01_set_log_permissions:
    command: "chmod 755 /var/app/current/log/*"

and

commands:
  01_create_uploads_folder:
    command: "mkdir -p /var/app/current/tmp/uploads/"
  02_set_folder_permission:
    command: "chmod 755 /var/app/current/tmp/uploads/"

Thanks, Dan
Elastic Beanstalk: what's the best way to create folders & set permissions after deploy?
The question is at a very high level, but I will attempt to answer it. AWS is all about providing computing infrastructure on an on-demand basis. EC2 and RDS are two different service offerings from AWS.

EC2: Elastic Compute Cloud - on-demand servers, either Linux or Windows. You are provided with an instance to which you can RDP / SSH. You are free to install a web server, mail server, DB server or app server. The instance's health is AWS's responsibility; everything inside the instance is your responsibility.

RDS: Relational Database Service. You are provided with a deployed DB server on which you can create your DB instance. You can't RDP / telnet to the RDS instance's physical server, but you can connect to the RDS instance via clients like SSMS, MySQL Workbench, etc., or via JDBC / ODBC DB APIs. You can connect to the RDS instance from any internet-enabled client system / application (provided appropriate firewall rules have been enabled). By extension of the previous point, you can connect to RDS from your app running in EC2.

PS: You can install a database server in EC2 and use / configure it as you wish, but remember that by doing so you hold the responsibility for the DB's uptime and health, as it becomes an app running in EC2. With RDS, AWS will take care of (manage) the DB instance's health and you just need to concentrate on your data, schema, tables, etc.
It seems I do not fully understand the difference between EC2 and RDS. Are they separate, or is EC2 like a container for RDS? And also, if I want to access RDS, will it go through EC2?
Difference between EC2 and RDS
You need to use Secret. You can use any of the static from... methods to get the secret. From there you can use the secretValueFromJson method to get the value. Example (secret for a Postgres DB):

import * as secretsmanager from '@aws-cdk/aws-secretsmanager';

const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, 'db-secret', 'db-secret-name');
const dbUser = dbSecret.secretValueFromJson('username').toString();
const dbPass = dbSecret.secretValueFromJson('password').toString();
const dbName = dbSecret.secretValueFromJson('dbname').toString();
I am having some trouble getting a specific Secrets Manager secret key value to pass it to my Lambda through CDK. After some time I finally realized that my SecretValue is only resolved when I actually deploy this to Lambda, and not while running locally through the SAM CLI. By doing cdk.SecretValue.secretsManager(secretId).toString() I get something like "{\"apiKey\":\"sdfsdf-sdfsdf-sddsf\"}", but I want to have the apiKey directly. Unfortunately, in my CDK code, I cannot JSON.parse(...secretsManager(..).toString()) as this will only be resolved once deployed. Before that, the value is simply {{resolve:secretsmanager:apiKey:SecretString:::}} (which seems to be a Token: https://docs.aws.amazon.com/cdk/latest/guide/tokens.html).

So I guess I would need some way to tell CDK how to use the rendered value, maybe by passing a callback that transforms the rendered result - is that possible? Are there any other tools I can use in my CDK setup that allow me to receive a specific key from a secret so that I can pass it to Lambda directly? I hope the problem is understandable. Thanks in advance for your help.
Pass AWS SM Secret Key to Lambda Environment with CDK
Convert Glue's DynamicFrame into Spark's DataFrame to add the year/month/day columns and repartition. Reducing the partitions to one will ensure that only one file is written into a folder, but it may slow down job performance.

Here is the Python code:

from pyspark.sql.functions import col, year, month, dayofmonth, to_date, from_unixtime

...

df = dynamicFrameSrc.toDF()

repartitioned_with_new_columns_df = (
    df
    .withColumn("date_col", to_date(from_unixtime(col("unix_time_col"))))
    .withColumn("year", year(col("date_col")))
    .withColumn("month", month(col("date_col")))
    .withColumn("day", dayofmonth(col("date_col")))
    .drop(col("date_col"))
    .repartition(1)
)

dyf = DynamicFrame.fromDF(repartitioned_with_new_columns_df, glueContext, "enriched")

datasink = glueContext.write_dynamic_frame.from_options(
    frame = dyf,
    connection_type = "s3",
    connection_options = {
        "path": "s3://yourbucket/data",
        "partitionKeys": ["year", "month", "day"]
    },
    format = "parquet",
    transformation_ctx = "datasink"
)

Note that the from pyspark.sql.functions import col can give a reference error; this shouldn't be a problem, as explained here.
The current set-up:

S3 location with JSON files. All files are stored in the same location (no day/month/year structure).
A Glue crawler reads the data into a catalog table.
A Glue ETL job transforms and stores the data into parquet tables in S3.
A Glue crawler reads from the S3 parquet tables and stores the data into a new table that gets queried by Athena.

What I want to achieve is for the parquet tables to be partitioned by day (1) and for the parquet tables for one day to be in the same file (2). Currently there is a parquet table for each JSON file. How would I go about it? One thing to mention: there is a datetime column in the data, but it's a Unix epoch timestamp. I would probably need to convert that to a 'year/month/day' format, otherwise I'm assuming it will create a partition for each file again. Thanks a lot for your help!
How to partition data by datetime in AWS Glue?
The message "RESOURCE:ENI" indicates that it's a problem with allocating an elastic network interface.Per thedocs, at2.mediumshould be able to allocate 3 ENIs. So, assuming that ECS assigns a distinct ENI to each container, that would be the reason that you can't assign more than three containers to an instance.But you're indicating that you're actually limited to 2 containers per instance. Which makes me wonder if you're somehow exceeding thelimit of ENIs per region. That shouldn't happen unless ENIs are being detached and not removed (which could possibly happen if your IAM permissions aren't correct). I recommend looking at the ENI page in the AWS Console to make sure that you don't have a lot of unattached ENIs.
I have 10 services/task definitions, each of which requires 512 MB of memory and 10 CPU units (from the container definition). I have three t2.medium instances, each of which has 4 GB of memory, so it should be no problem to launch up to 24 task instances. However those three instances run at most only 7 services (3/2/2). For the services that are not running, in the "Events" tab there are the following errors:

service integrityCheck was unable to place a task because no container instance met all of its requirements. The closest matching container-instance 3e2dbe6a-7a07-46f2-846b-ccccb9adaeee encountered error "RESOURCE:ENI".

I tried updating the AMI on the EC2 instances to the latest ECS-optimized one but it did not help. It seems that one EC2/container instance can't start more than 3 tasks? The strange thing is that it worked fine about a month ago (all 10 services were running) and those errors appeared ~20-26 days ago. Any idea? Each service uses the awsvpc network mode and the awslogs log driver. Here are my network interfaces listed:
It seems that more than three tasks per container instance cannot be placed?
You can't stop a DB instance that is in a Multi-AZ deployment: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html

Modify the instance so that it is no longer Multi-AZ. After this modification is complete, you should be able to stop it. You can switch it back to being Multi-AZ after it is restarted.
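If you prefer the CLI over the console, a sketch of that sequence looks like this (the DB instance identifier is a placeholder):

# Turn off Multi-AZ, wait for the modification to finish, then stop the instance
aws rds modify-db-instance --db-instance-identifier mydb --no-multi-az --apply-immediately
aws rds wait db-instance-available --db-instance-identifier mydb
aws rds stop-db-instance --db-instance-identifier mydb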
I spun up two RDS instances in the same VPC. Both are PostgreSQL. The m4.xlarge one has Multi-AZ enabled and is encrypted; the t2.micro has neither. That is the only difference between the two instances. It is strange that the t2.micro instance can be stopped but the m4.xlarge can't; the "Stop" option is grayed out for the m4.xlarge. Why can't I stop this one? Does it have to do with the Multi-AZ thing?
Why can't I stop RDS instance?
OK, so I got it working... First I switched from zsh to the bash shell via the terminal command:

exec bash

Then I ran:

pip --version

just to confirm that pip and Python were both in fact installed and working. From here I ran:

brew install awscli

This was the critical missing ingredient that I did not have on my first run-through. At the end of the install process a list of "Caveats" for completing the installation is printed to the screen. According to the Caveats I took the following two steps.

First, I added the following to ~/.bashrc to enable bash completion:

complete -C aws_completer aws

Then I added the following to ~/.zshrc to enable zsh completion:

source /usr/local/share/zsh/site-functions/_aws

Now I am able to run the command "aws" via either shell.
I followed all the instructions provided by Amazon for installing the AWS CLI, found here: http://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html

My machine is running the Zsh shell, so in step three I edited .zshrc instead of .bash_profile. The error message I am receiving is:

zsh: command not found: aws

Here is how the .zshrc file looks now:

export PATH="$HOME/.bin:$PATH"
export PATH="/usr/local/bin:$PATH"
export PATH=~/.local/bin:$PATH
eval "$(hub alias -s)"
export PATH="$PATH:$HOME/.rvm/bin" # Add RVM to PATH for scripting

I believe that the export PATH=~/.local/bin:$PATH might be redundant given the line above it that was already in place.
Cannot Launch Amazon AWS CLI from Command Line on Mac
Yes. You can use JMESPath syntax to filter the results of the aws autoscaling describe-auto-scaling-groups command down to only those groups matching some tag's key/value pair. This uses the --query parameter, which is available for filtering on most AWS CLI commands.

Example to query by a single tag. The example below filters results based on a tag where Key = 'Environment' and Value = 'Dev':

aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='Environment') && Value=='Dev']]".AutoScalingGroupName

Example to query by multiple tags. The example below filters results based on tags where Key = 'Environment' and Value = 'Dev', and Key = 'Name' and Value = 'MyValue'. This uses a pipe to query for the second tag on the autoscaling groups returned by the query for the first tag:

aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[? Tags[? (Key=='Environment') && Value=='Dev']] | [? Tags[? Key=='Name' && Value =='MyValue']]".AutoScalingGroupName

Further reading:
AWS Documentation - aws autoscaling describe-auto-scaling-groups
AWS Documentation - Controlling Command Output from the AWS Command Line Interface
Is there a way to list the available Auto Scaling groups under an account and filter them based on some tags? I am looking for something like aws ecs list-clusters, which gives a list of ECS clusters.
AWS-CLI: Ways to list down Auto Scaling groups
This question has been asked many times. What you need is Instance Metadata and User Data. Just run the following (and refer to the mentioned documentation) to get the public IP address:

curl http://169.254.169.254/latest/meta-data/public-ipv4
I created an amazon-web-services instance and used the Ubuntu 14.04 Amazon Machine Image. Now I can SSH into that machine and use the shell to run different commands. My question is: how can I find out the public IP of that machine from inside that SSH session? I tried using netstat and ifconfig but cannot find the public IP I logged in with via SSH. Is there somebody who can tell me how to find out the IP I used to SSH into the machine? Thanks in advance!
aws ec2: how to know public ip from inside ubuntu instance [closed]
You need to open the HTTP port in the AWS instance settings (all ports except SSH are closed by default in AWS). Go to console.aws.amazon.com, pick your instance and go to the last menu item, "Security groups". In the launch-wizard group, click on "Inbound" in the bottom menu, then "Edit", and add HTTP or any other port you want. And be sure you are using your public AWS IP to open it in the browser. Hope it helps!
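The same change can be made from the command line (a sketch; the security group ID is a placeholder for the group attached to your instance):

# Allow inbound HTTP (port 80) from anywhere on the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0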
This issue might seem very trivial but please try to suggest a solution if possible. I have deployed a Django app on an AWS EC2 host and I am able to run the following command successfully:

(venv)[ec2-user@ip-xxx-xx-xx-xx abc]$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
January 03, 2016 - 13:15:31
Django version 1.7.1, using settings 'abc.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

But I am not able to access http://127.0.0.1:8000/ from the browser. Googling suggests using nginx or gunicorn, but I am not sure if nginx, gunicorn etc. are to be used for this. Can someone please let me know how this can be accessed from a browser? Thanks.
Access Django App on AWS ec2 host
I had the same issue. It turns out that when you enable CodeCommit, the CLI looks for a remote called "codecommit-origin", and if you don't have a git remote with that specific name it will throw that error. Posting this for anyone else who stumbles upon the same issue.
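A quick way to check for and add that remote (a sketch; the repository URL is a placeholder for your actual CodeCommit clone URL):

# See which remotes are currently configured
git remote -v

# Add the remote name the CLI expects
git remote add codecommit-origin https://git-codecommit.us-west-2.amazonaws.com/v1/repos/my-repo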
When I try to deploy to my Elastic Beanstalk environment I am getting this Python error. Everything was working fine a few days ago.

$ eb deploy
ERROR: AttributeError :: 'NoneType' object has no attribute 'split'

I've thus far attempted to update everything, to no effect, by issuing the following commands:

sudo pip install --upgrade setuptools
sudo pip install --upgrade awscli

Here are the resulting versions I'm working with:

$ eb --version
EB CLI 3.10.0 (Python 2.7.1)
$ aws --version
aws-cli/1.11.56 Python/2.7.13rc1 Darwin/16.4.0 botocore/1.5.19

Everything looks fine under eb status:

$ eb status
Environment details for: ***
  Application name: ***
  Region: us-west-2
  Deployed Version: ***
  Environment ID: ***
  Platform: 64bit Amazon Linux 2016.09 v3.3.1 running Node.js
  Tier: WebServer-Standard
  CNAME: ***.us-west-2.elasticbeanstalk.com
  Updated: 2017-03-02 14:48:29.099000+00:00
  Status: Ready
  Health: Green

This issue appears to only affect this Elastic Beanstalk project. I am able to deploy to another project on the same AWS account.
Elastic Beanstalk Deploy ERROR: AttributeError :: 'NoneType' object has no attribute 'split'
The documentation is a little confusing, but you need to construct a filter name that includes the tag: prefix and your tag name. Here's a working example:

var AWS = require('aws-sdk');

var ec2 = new AWS.EC2({ region: 'eu-west-1' });

var params = {
    Filters: [
        { Name: 'tag:Project', Values: ['foo'] }
    ]
};

ec2.describeInstances(params, function (err, data) {
    if (err) return console.error(err.message);
    console.log(data);
});

This returns all instances that have the tag Project set to the value foo.
I am trying to use the describeInstances function in Amazon EC2 to get details about my instance using my tag ID. The documentation mentions using the filter:

tag:key=value - The key/value combination of a tag assigned to the resource, where tag:key is the tag's key.

I tried it in the following way:

var params1 = {
    Filters: [
        {
            Tags: [
                {
                    Key: key_name,
                    Value: key_value
                }
            ]
        }
    ]
};
ec2.describeInstances(params1, function(data, err) {
});

but I get an error: Unexpected Token at Tags. What is the correct way to use this API?
Correct usage of describeInstances amazon ec2
Copy all content from the current folder, but not the folder itself. For example, here I am copying all the content from the data folder, which is located under NewFolder, with tgsbucket as my bucket:

aws s3 cp data s3://tgsbucket --recursive

Copy the local folder itself: specify the folder name after the bucket name:

aws s3 cp data s3://tgsbucket/data --recursive
I've tried to copy my current folder to the bucket:

aws s3 cp path s3://bucket/NewFolder

but when I sync again using

aws s3 sync s3://bucket/NewFolder/ /home/test

all I get is an empty folder, meaning that my folder was not copied in the first place.
How to upload a folder to s3 through the command line?
Manuel, as mentioned, the return info is inside the Payload element in the returned JSON. Payload is a boto3 streaming object type whose contents you need to access through its read() method. The code I use to get the Python dictionary that I return from my Lambda functions is this:

payload = json.loads(response['Payload'].read())
statusCode = payload.get('statusCode')
message = payload.get('message')
results = payload.get('results')
I'm currently writing a Python script that interacts with some AWS Lambda functions. In one of the functions, my response contains a list which I need in my script. The problem is that when I use the invoke() function, the response is a JSON which contains request information.

response = aws_lambdaClient.invoke(FunctionName = 'functionName', Payload = payload)

The function that I'm using has this as a return:

return {'names': aList, 'status': 'Success!'}

If I print out the response, I get this:

{'ResponseMetadata': {'RequestId': 'xxxxxxxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Thu, 07 Nov 2019 14:28:25 GMT', 'content-type': 'application/json', 'content-length': '51', 'connection': 'keep-alive', 'x-amzn-requestid': 'xxxxxxxxxx', 'x-amzn-remapped-content-length': '0', 'x-amz-executed-version': '$LATEST', 'x-amzn-trace-id': 'root=xxxxxxxxx;sampled=0'}, 'RetryAttempts': 0}, 'StatusCode': 200, 'ExecutedVersion': '$LATEST', 'Payload': <botocore.response.StreamingBody object at 0x0000023D15716048>}

And I'd like to get:

{'names': aList, 'status': 'Success!'}

Any idea on how I can achieve this? Or should I find another way of getting the data (maybe putting the list I need in an S3 bucket and then getting it from there)?
Getting AWS Lambda response when using boto3 invoke()
If you are using API Gateway HTTP APIs (not sure if this is relevant for the REST APIs): let's say I have an endpoint at POST /products. I had to add another endpoint at OPTIONS /products and integrate it with a simple Lambda function that just returns HTTP 200 OK (or HTTP 204 No Content) and the "Access-Control-Allow-Origin": "*" header (or even better, specify the URL of your origin/client). This is because browsers issue a preflight OPTIONS request to the same endpoint before issuing the actual request, for all HTTP requests except GET and POST with certain MIME types.
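One way to confirm that the preflight is what fails is to send an OPTIONS request yourself and inspect the status and CORS headers (the URL mirrors the placeholder endpoint from the question):

# Simulate the browser's preflight request and show the response headers
curl -i -X OPTIONS \
  -H "Origin: http://127.0.0.1:8080" \
  -H "Access-Control-Request-Method: POST" \
  https://xxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/price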
I have an AWS Lambda function that returns the following response:

var responseBody = {
    cost: price
};
var response = {
    statusCode: 200,
    headers: {
        "Access-Control-Allow-Origin": "*"
    },
    body: JSON.stringify(responseBody),
    isBase64Encoded: false
};
callback(null, response);

But I get the following error in my frontend Angular application:

Access to XMLHttpRequest at 'https://xxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/price' from origin 'http://127.0.0.1:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.
Can't access API with Lambda and API Gateway
Your AWS credentials on your local machine can't be seen by the awscli program on your EC2 instance. If you want to run the S3 copy command on the EC2 instance, you have two options:

1) Run aws configure on the EC2 instance and define the credentials on that machine (not recommended), or
2) Create an IAM role with the required permissions to copy the files (for example, with the AmazonS3ReadOnlyAccess policy) and attach the role to your EC2 instance.
I'm trying to copy files from my S3 bucket to my EC2 instance. When I run this command inside my EC2 instance:

aws s3 cp s3://amazing-demobucket-1/file.txt ./file.txt

I get the following error:

Unable to locate credentials

But the aws configure files are in ~/.aws on my local machine. Do I need to configure inside the EC2 instance too? Or is there another command I can run outside my EC2 instance to download files from the S3 bucket onto the EC2 instance? Thanks for the help!
Unable to locate credentials when trying to copy files from s3-bucket to my ec2-instance
Network Access Control Lists (ACLs) mimic traditional firewalls implemented on hardware routers. Such routers are used to separate subnets and allow the creation of separate zones, such as a DMZ. They purely filter based upon the content of the packet; that is their job.

Security Groups are an added capability in AWS that provides firewall-like capabilities at the resource level. (To be accurate, they are attached to Elastic Network Interfaces, ENIs.) They are stateful, meaning that they allow return traffic to flow.

In general, the recommendation is to leave NACLs at their default settings (allow all traffic IN and OUT). They should only be changed if there is a specific need to block certain types of traffic at the subnet level.

Security Groups are the ideal way to control stateful traffic going in and out of a VPC-attached resource. They are THE way to create stateful firewalls; there is no other such capability provided by a VPC. If you wanted something different, you could route traffic through an Amazon EC2 instance acting as a NAT and then you would have full control over how it behaves.
From what I read, stateless firewalls are used more for packet filtering. Why is AWS NACL stateless? NACLs force too big a range of ports to be opened for the ephemeral ports. Is there a way to create stateful firewalls on AWS other than Security Groups? Security Groups feel too granular and may get omitted by mistake.
Why is AWS NACL stateless?
You are using the CMK to encrypt/decrypt your data, which is not what you should be using it for. The CMK is limited to encrypting up to 4 KB of data because it is meant to create and encrypt/decrypt the data key. Once you've created this data key, you then use it to encrypt your data without the use of AWS KMS; you could use OpenSSL with the data key, and this process is not dependent on KMS. Keep in mind that you have to handle the data key very carefully. Best practice is, once you've used it to encrypt data, to encrypt that data key using KMS and then store that encrypted key (as metadata) along with the encrypted data. The process of decrypting the data will start with you using KMS to decrypt the data key, then using OpenSSL, for example, with the decrypted data key as the key to decrypt your data (the XML payload).
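A rough sketch of that envelope-encryption flow with the AWS CLI and OpenSSL (the key alias and file names are placeholders; this illustrates the idea rather than a hardened implementation):

# Ask KMS for a data key (returns a plaintext key and an encrypted copy of it)
aws kms generate-data-key --key-id alias/my-key --key-spec AES_256 > datakey.json

# Encrypt the large XML payload locally, using the plaintext data key as a passphrase
openssl enc -aes-256-cbc -in payload.xml -out payload.xml.enc \
  -pass pass:"$(jq -r .Plaintext datakey.json)"

# Keep only the encrypted copy of the data key alongside the ciphertext, then delete datakey.json
jq -r .CiphertextBlob datakey.json > datakey.enc.b64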
I am trying to encrypt a large XML payload using the AWS KMS Encryption SDK. I came across this link, which states that there is a limit on the number of bytes of data that can be encrypted:

You can encrypt up to 4 kilobytes (4096 bytes) of arbitrary data such as an RSA key, a database password, or other sensitive information.

Does KMS not support encryption of data that is more than 4 KB? Is there a workaround to handle data of size more than 4 KB?
AWS KMS Encryption - Limits of Data Size
The body of viewer requests and origin responses is not available to Lambda@Edge functions -- only the headers. While it isn't entirely clear what you are trying to do once you get access to the data in the body, if that's something you need, then here's the AWS workaround: look into API Gateway, which does have access to the request body. You can deploy an API Gateway Regional endpoint and add that endpoint as a second origin to your CloudFront distribution. You can then use Lambda@Edge in an Origin Request trigger to divert those requests to the alternate origin (your new "API," which can generate the response you want, based on the POST request you receive).
I have a static website hosted on AWS CloudFront. On one route I need to accept the POST method because it is the redirect of the OAuth server, so I decided to develop a Lambda@Edge function. The idea is to register the Lambda on the 'Viewer Request' event and intercept the POST method, reading the body and copying the values into the headers in order to make them readable from my static website (I know I can access the Referrer header with JavaScript). I set up the Lambda and I can intercept the POST, letting all the other methods pass through. The problem is that I cannot find a way to read the body of the POST request; I googled it with no result. Any advice on how I can do it? Is there any parameter I have to configure on the CloudFront side?
Lambda@edge read body POST request
@bharathp's answer did the trick for me. On the AWS command line I had to specify the parameter as {"S":"s1"}:

aws dynamodb scan \
    --table-name aws-nodejs-typescript-dev \
    --filter-expression "id = :search" \
    --expression-attribute-values '{":search":{"S":"s1"}}'

In Node code it is just ":search": "s1":

const params = {
    TableName: process.env.DYNAMODB_TABLE,
    ExpressionAttributeValues: { ":search": "s1" },
    FilterExpression: "id = :search",
    ProjectionExpression: "id",
};

dynamoDb.scan(params, (error, result) => {
I am trying to filter the data returned by a DynamoDB scan operation using the Node.js AWS SDK, but the data returned has 0 items:

Response : {"Items":[],"Count":0,"ScannedCount":15}

I have tried with both FilterExpression and ScanFilter but am getting the same result.

FilterExpression:

var params = {
    TableName: tableName,
    FilterExpression: 'active = :active',
    ExpressionAttributeValues: {
        ':active': { S: '1' }
    }
};

ScanFilter:

var params = {
    TableName: tableName,
    ScanFilter: {
        'active': {
            "AttributeValueList": [{ "S": "1" }],
            "ComparisonOperator": "EQ"
        }
    }
};

Here is the Node.js code:

dynamodb.scan(params, onScan);

function onScan(err, data) {
    if (err) {
        console.error('Unable to scan the table. Error JSON:', JSON.stringify(err, null, 2));
    } else {
        if (typeof data.LastEvaluatedKey != 'undefined') {
            params.ExclusiveStartKey = data.LastEvaluatedKey;
            dynamodb.scan(params, onScan);
        }
        if (data && data.Items) callback(data.Items);
        else callback(null);
    }
}

I checked the same filter condition in the DynamoDB console and got the expected result.
DynamoDB Scan FilterExpression returning empty result
You need to install the repository-s3 plugin on all cluster nodes, then restart the nodes. Otherwise the plugin does not become usable.
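On Elasticsearch 5.x the install step on each node looks roughly like this (a sketch; paths and the service name depend on how Elasticsearch was installed):

# Run on every node from the Elasticsearch home directory, then restart the node
sudo bin/elasticsearch-plugin install repository-s3
sudo systemctl restart elasticsearch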
I'm doing an Elasticsearch restore. When using version 2.x, registering the S3 repository worked well with this script:

curl -XPUT 'http://ip:9200/_snapshot/repo_2016-12-14/?pretty' -d '
{
  "type": "s3",
  "settings": {
    "bucket": "patch-backup",
    "base_path": "elasticsearch/2016-12-14",
    "region": "ap-southeast-1",
    "access_key": "************",
    "secret_key": "*************"
  }
}'

But after upgrading to version 5.0, the above script no longer works and shows this error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[repo_2016-12-14] repository type [s3] does not exist"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[repo_2016-12-14] repository type [s3] does not exist"
  },
  "status" : 500
}
elasticsearch backup/restoration from/to s3 - register error (repository exception)
If you wish to use a specific VPC and subnet, just insert their values:

{
  "Resources": {
    "MyServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "SubnetId": "subnet-abc123",
        "ImageId": "ami-abcd1234"
      }
    }
  }
}

A subnet always belongs to a VPC, so specifying the subnet will automatically select the matching VPC.
I want my CloudFormation template to use existing subnets and VPCs; I don't want to create new ones. How do I parameterize these? When I look at the docs for AWS::EC2::VPC and AWS::EC2::Subnet, it seems these resources are only for creating new VPCs and subnets. Is that correct? Should I just point the instance resource directly at the existing VPC and subnets I want it to use?

For example, if I have an instance resource in my template and I point it directly at an existing subnet, like this:

{
  "Resources": {
    "MyServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": { "Ref": "InstanceType" },
        "SubnetId": { "Ref": "subnet-abc123" },
        ...

I get this error when validating the template:

Template contains errors.: Template format error: Unresolved resource dependencies [subnet-abc123] in the Resources block of the template

I tried to do this with mappings but am still getting an error:

"Mappings": {
  "SubnetID": {
    "TopKey": {
      "Default": "subnet-abc123"
    }
  }

And with this in the instance resource:

"SubnetId": { "Fn::FindInMap": [ "SubnetID", { "Ref": "TopKey" }, "Default" ] }

I get this error when trying to validate:

Template contains errors.: Template format error: Unresolved resource dependencies [TopKey] in the Resources block of the template
How do I specify subnet and VPC IDs in AWS CloudFormation?
It sounds like you installed the SSL certificate on your Elastic Load Balancer, so that's where SSL Termination is happening. So your load balancer is doing the SSL termination and always communicating with your server via HTTP. This means you have to check the 'x-forwarded-proto' header to determine if the original request is over HTTPS.There are several other ways to configure SSL on AWS, including termination on the web server, but SSL termination on the ELB is the generally preferred method on AWS. You just have to be aware that in this configuration the request between the ELB and the web server isn't actually over SSL so you have to check the header accordingly.
I'm trying to redirect HTTP requests to my site to HTTPS. It's been extraordinarily hard. My code is:var express = require('express'); var app = express(); app.use(function(req, res, next) { console.log('req.protocol is ', req.protocol); console.log('req.secure is ', req.secure); if (req.url !== '/health' && !req.secure) { console.log('redirecting .........'); return res.redirect('https://www.example.net/catch'); } next(); }); app.get('/catch', function(req, res) { res.send('Hello World!'); }); app.get('/', function(req, res) { res.send('Hello World!'); }); app.get('/health', function(req, res) { res.send('Hello World!'); }); app.listen(8080, function() { console.log('Example app listening on port 8080!'); });The load balancer health check goes to '/health'. Every other request that isn't a health check, and is HTTP (rather than HTTPS), should be caught and redirected to HTTPS. However, I end up in an infinite loop because req.protocol always returns 'http' for both HTTP and HTTPS requests. req.secure is therefore false every time and I end up in a loop. Any ideas why this is?
Req.secure in Node always false
AWS Transit Gateway now provides an option to do what you wish, although you will want to consider the costs involved -- there are hourly and data charges. There is a reference architecture published in which multiple VPCs share a NAT gateway without allowing traffic between the VPCs:https://aws.amazon.com/blogs/networking-and-content-delivery/creating-a-single-internet-exit-point-from-multiple-vpcs-using-aws-transit-gateway/
I have one VPC where i configuredNAT Gateway. Another VPC(s) do not have any "public subnet" nor IGW. I would like to share single NAT Gateway among many VPCs. I tried to configure Routing table but it does not allow to specify NAT Gateway from different VPC. As posible solution, I installed http/s proxy in VPC with IGW and configured proxy settings on every instance in different VPC. It worked, but I would like useNAT Gatewaydue to easier management. Is it possible to make this kind of configuration at AWS? There are few VPCs and I do not want to add NAT Gateway to each VPC.Zdenko
AWS: Share "NAT Gateway" among VPCs
I've just started setting this up and I realized quickly that it was allowing me to make selections that couldn't possibly be free. When setting up your free tier instance, look on the left hand side of the screen for "Your current selection is eligible for the free tier." Once you select something like "Multi-AZ Deployment" or use any DB Instance Class other than "db.t2.micro", it will slyly change the left column display to: "The following selections disqualify the instance from being eligible for the free tier: Multi-AZ Deployment". Just be careful in your selections and usage to maintain the free tier.
I have hosted a server app on AWS and RDS for relational DB. Though I opted for free account, RDS is being charged at $0.0025 per hour amounting to $18 a month.I read some documentation but still not able to figure this out. Is this the way it is or is there a way to get free RDS account for testing purpose?Thanks OpenTube
Free Amazon AWS/RDS Instance
The blog post you linked is still basically valid, here's what exactly you need to do:First put SDK into subfolder inside libraries folder (for ex. aws-sdk-for-php). This is the file awslib.php in libraries folder:class Awslib { function Awslib() { require_once('aws-sdk-for-php/sdk.class.php'); } }And then just use whatever AWS service you wish in the controller, let's say it's SQS:$this->load->library('awslib'); $sqs = new AmazonSQS(); $response = $sqs->list_queues(); var_dump($response->isOK());Don't forget to set your credentials and rename the sample config file.
Is there already a handy CI 2 library for AWS SDK 1.5.x? If not, what would be the steps to make it into one?I found a 3 year old posting about integrating Tarzan (the pre-pre-cursor to AWS SDK) to CI 1 here:http://blog.myonepage.com/integrating-tarzan-amazon-web-services-php-to. I am wondering if these instructions still hold? One difference I noticed was that the way AWS SDK 1.5.3 declares its Access Identifiers has changed and I am not quite sure how to proceed to inform CI about this.Thanks! mmiz
Integrating AWS SDK as a library in Codeigniter
This one is your best bet: https://github.com/livelycode/aws-lib
Is there a stable module for Amazon SES in Node.js?Thanks
Node.js: Send e-mails using AWS SES
Here is another solution. Since folders technically do not exist in S3 and are merely a UI feature, "folders" in S3 are ultimately called prefixes.You can trigger an EventBridge notification on an S3 folder with the following event pattern:{ "source": ["aws.s3"], "detail-type": ["Object Created"], "detail": { "bucket": { "name": ["<bucket-name>"] }, "object": { "key": [{ "prefix": "<prefix/folder-name>" }] } } }
I have to start a Step Functions state machine execution upon file upload to a folder inside a bucket. I got to know how we can configure EventBridge at the S3 bucket level. But on the same bucket there can be multiple file uploads. I need to get notified when an object is inserted into a particular folder inside the bucket. Is there any possible way to achieve this?
EventBridge notification on Amazon s3 folder
In the AWS Console go to the DynamoDB table, under Table Overview, then Stream Details, click Manage Stream, and enable both New and Old Images.
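The same setting can be applied from the command line. A hedged sketch using the AWS CLI (the table name is a placeholder taken from your error message; substitute your real table name):

```bash
aws dynamodb update-table \
  --table-name <dynamo-db-table-name> \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
```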
I'm having errors when executing the amplify push or amplify push function commands with the Amplify CLI.Here's the error in the console (censored project-specific identifiers):... UPDATE_FAILED storage[dynamo db table name] AWS::CloudFormation::Stack Tue Jan 19 2021 08:32:29 GMT+0800 (Philippine Standard Time) Embedded stack arn:aws:cloudformation:us-west-2:XXX:stack/amplify-XXX-XXX/XXX was not successfully updated. Currently in UPDATE_ROLLBACK_IN_PROGRESS with reason: Attribute: StreamArn was not found for resource: [dynamo db table name] ...I've already tried:Execute amplify init againExecute amplify configure, create a new IAM user, and use the newly created user's credentials.Going back to the commit where amplify push last worked, seeing the same error still.Recloned the repo and did the above steps. No luck.I'll appreciate any help. Thanks.
`amplify push` failing - Attribute: StreamArn was not found for resource
It is being caused by AWS CLI v2 using a 'pager', which is a different behaviour from v1. From Controlling command output from the AWS CLI - How to set the output's default pager program: The following example sets the default to disable the use of a pager in the config file:[default] cli_pager=
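Two equivalent ways to apply that setting without editing the file by hand (a sketch; both disable the pager for AWS CLI v2):

```bash
# Write cli_pager= into the default profile of ~/.aws/config
aws configure set cli_pager ""

# Or disable the pager just for the current shell session
export AWS_PAGER=""
```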
I have a script to update lambda code from the CLI. It involves several steps:## prepare code aws lambda update-function-code --function-name my-function --zip-file fileb://my-file --region eu-west-1 ## execute code: aws lambda invoke..My problem is that after executing update-function-code, the cli waits for an enter key press, while I don't really care about the result and I would like to go on to the execution.I've tried different things that haven't worked: Non interactive mode:bash -c `aws lambda...`Piping enter to the function:printf '\n' | aws lambda...Any idea?My aws cli version is 2.0.12
The cli for aws is blocking with update-function-code
Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role. Every IAM role requires a trust policy.You have to specify a trust policy when creating a role through the CLI. Identity-based policies (managed/inline) can be attached to a role afterwards by using the attach-role-policy or put-role-policy commands.The following trust policy lets the Lambda service assume this role. You have to provide this file as input to the command using the --assume-role-policy-document option.trust-policy.json{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }aws iam create-role --role-name Test-Role --assume-role-policy-document file://trust-policy.json aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess --role-name Test-Role
I am trying to create an IAM role with AWS managed policy, however it asks me for policy document.aws iam create-role --role-name test-role usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument --assume-role-policy-document is requiredI am trying to attach an aws managed policy likeAWSLambdaFullAccess
How do I create a role with AWS managed policy using aws-cli?
You don't need to use es6-promisify. You can do:try { const params = {Bucket: 'bucket', Key: 'key', Body: stream}; const data = await s3.upload(params).promise(); console.log(data); } catch (err) { console.log(err); }https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/using-promises.html
I'm using the upload method from aws-sdk to upload files to an S3 bucket from a React app in the browser.The original callback based upload method is as below:var params = {Bucket: 'bucket', Key: 'key', Body: stream}; s3.upload(params, function(err, data) { console.log(err, data); });I wrapped it with promisify to make it work like async-await as below:const AWS = require('aws-sdk'); const { promisify } = require('es6-promisify'); ... <in my React component> ... async uploadFile() { try { var params = { Bucket: <BucketName>, Key: <KeyName>, Body: <File Stream> }; const res = await uploadToS3Async(params); console.log(res); } catch (error) { console.log(error); } }Now when I'm calling the uploadFile function on some fired event, it produces the following error:TypeError: service.getSignatureVersion is not a function at ManagedUpload.bindServiceObject at ManagedUpload.configure at new ManagedUpload at upload at new Promise (<anonymous>)
AWS S3 upload not working after converting from callback to Async-Await format using Promisify
Try the following command:aws ec2 describe-instances --instance-ids $instance_id \ --query 'Reservations[*].Instances[*].PublicIpAddress' \ --output textIf the EC2 instance has a public IP address, this command will return it.Links:Details about the query parameter can be found here.Details about the describe-instances command can be found here.
I have an instance that I start through aws cli:aws ec2 start-instances --instance-ids i-00112223333444445Instance does not have a static public IP. How can I get instance public ip through CLI knowing the ID i-00112223333444445?
How to get public ip of an EC2 instance from aws CLI by instance id?
According to this answer, it seems as though it might be because of log files getting too large. Try running the command the OP mentioned in their answer, in order to find all large files:sudo find / -type f -size +10M -exec ls -lh {} \;
I recently ran a report on my EC2 server and was told that it ran out of space. I deleted the csv that was partially generated from my report (it was going to be a pretty sizable one), ran df -h, and was surprised to get this output:Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.8G 7.0G 718M 91% / devtmpfs 15G 100K 15G 1% /dev tmpfs 15G 0 15G 0% /dev/shmI was surprised not only by how little was available/how much space was used (I am on the /dev/xvda1 filesystem), but also surprised to see 2 alternative filesystems.To investigate what was taking so much space, I ran du -h in ~ and saw the list of all directories on the server. Their reported size in aggregate should not be even close to 7 GB...which is why I ask "what is taking up all that space??"The biggest directory by far was the ~ directory containing 165MB; all others were 30MB and below. My mental math added it up to WAY less than 7 GB. (If I understand du -h correctly, all directories within ~ ought to be included within that 165MB...so I am very confused how 7 GB could be full.)Anyone know what's going on here, or how I can clean up the space? Also, just out of curiosity, is there a way to utilize the devtmpfs/tmpfs filesystems from the same box? I am running on AWS Linux, with versions of Python and Ruby installed.
How to clean up aws ec2 server?
Just type aws configure again (or aws configure --profile <profile_name> to edit a specific profile). If you just confirm the suggested value by hitting enter, it will remain unchanged.Or change just the aws_secret_access_key by typing:$ aws configure set aws_secret_access_key <secret_key>Or you can edit the AWS credentials directly by editing the AWS credentials file on your hard drive. The aws_access_key_id and the aws_secret_access_key are stored in the ~/.aws/credentials file by default. You can use any editor to edit them, such as vim, emacs, or nano, e.g.$ nano ~/.aws/credentialsAdditionally, you can have credentials for many different AWS accounts in the same credentials file by using profiles. As a result, if you have one development account and one production account, the content of the file may look like:[development] aws_access_key_id = <key id of dev account> aws_secret_access_key = <secret access key of dev account> [production] aws_access_key_id = <key id of prod account> aws_secret_access_key = <secret access key of prod account>(naturally, you need to replace <key id of dev account>, <secret access key of dev account>, etc. with the actual IAM credentials for each account respectively)Ref: AWS CLI Configuration Variables
What is the command to edit secret key inaws configurein terminal?
How to edit AWS Credentials in terminal?
You can pass your myData directly instead of converting it into a MemoryStream, if the data is valid JSON with double quotes.In the function name you can use the ARN or just the name. Both work fine for me in the latest version, AWSSDK.Lambda -Version 3.3.103.31:static readonly string awsAccessKey = "access key here"; static readonly string awsSecretKey = "secret key here"; private static BasicAWSCredentials awsCredentials = new BasicAWSCredentials(awsAccessKey, awsSecretKey); private static AmazonLambdaConfig lambdaConfig = new AmazonLambdaConfig() { RegionEndpoint = RegionEndpoint.USEast1 }; private static AmazonLambdaClient lambdaClient = new AmazonLambdaClient(awsCredentials, lambdaConfig); public async Task<string> GetLambdaResponse(string myData) { var lambdaRequest = new InvokeRequest { FunctionName = "mylambdafunction", Payload = myData }; var response = await lambdaClient.InvokeAsync(lambdaRequest); if (response != null) { using (var sr = new StreamReader(response.Payload)) { return await sr.ReadToEndAsync(); } } return string.Empty; }
I'm trying to run a Lambda function from a console application. The idea is for it to run a quick fire & forget lambda function without waiting for lambda function to return. My code doesn't appear to be executing the lambda function at all though. I know the function works because I can run with the test. When I run the below code I just get a task cancelled exception.var jsonSerializer = new JsonSerializer(); var lambdaConfig = new AmazonLambdaConfig() { RegionEndpoint = RegionEndpoint.USEast2 }; var lambdaClient = new AmazonLambdaClient(lambdaConfig); using (var memoryStream = new MemoryStream()) { jsonSerializer.Serialize(myData, memoryStream); var lambdaRequest = new InvokeRequest { FunctionName = "MyFunction", InvocationType = "Event", PayloadStream = memoryStream }; var result = Task.Run(async () => { return await lambdaClient.InvokeAsync(lambdaRequest); }).Result;Does anyone have some insight into what I'm doing wrong?Thanks!
Calling AWS Lambda Function in C#
Try using the same expression in the GROUP BY:SELECT CAST(createdat AS DATE) FROM conversations GROUP BY CAST(createdat AS DATE)
I have a table in Athena AWS with a timestamp field. I want to group them by date. My SQL query is:SELECT CAST(createdat AS DATE) FROM conversations GROUP BY createdatBut my result is the following:As you can see the group by does not work, and the reason is that the new table has the name field_col0insteadcreatedat. I also tried:SELECT CAST(createdat AS DATE) FROM conversations GROUP BY _col0but I got and error.Does anybody has any suggestions? I will appreciate it
AWS Athena CAST with GROUP BY
Is there a reason why you are using Mapping in between? You could easily use !Sub instead:Resources: EC2Instance: Type: AWS::EC2::Instance Properties: InstanceType: !Ref InstanceType KeyName: !Ref KeyName Tags: - Key: Name Value: Test UserData: Fn::Base64: !Sub | #!/bin/bash ${PlatformSelect}
I have this under parameter section ,Parameters: PlatformSelect: Description: Cockpit platform Select. Type: String Default: qa-1 AllowedValues: [qa-1, qa-2, staging, production]I need to reference this value in my UserData. I’m using Mappings in between.Mappings: bootstrap: ubuntu: print: echo ${PlatformSelect} >>test.txt Resources: EC2Instance: Type: AWS::EC2::Instance Properties: InstanceType: !Ref ‘InstanceType’ KeyName: !Ref ‘KeyName’ Tags: - Key: Name Value: Test UserData: Fn::Base64: Fn::Join: - ‘’ - - | #!/bin/bash - Fn::FindInMap: - bootstrap - ubuntu - print - |2+This is not working. Not sure the way I refer it is wrong in first place!!Should I use something before it like, ‘${AWS::Parameters:PlatformSelect}’ ?
Reference Parameter Value in UserData in AWS Cloudformation
RancherOS is a minimal installation of the Linux kernel, Docker daemon, and generally as little as possible else. docker-compose is not part of the default console.Depending on what you're trying to do, you can create a RancherOS service with docker-compose syntax:https://rancher.com/docs/os/v1.2/en/system-services/adding-system-services/Or run actual docker-compose from a container:docker run docker/compose:1.10.0Or switch to one of the persistent consoles and install it locally:https://rancher.com/docs/os/v1.2/en/configuration/switching-consoles/
I am trying out RancherOS on AWS. I want to create and run a docker-compose file on that same RancherOS instance. When I try the docker-compose up command I get an error saying docker-compose is not recognized. Please can anyone help me with this?
Running docker-compose with Rancher OS
Use the aws s3 rm command with multiple --exclude options (I assume the last 5 files do not fall under a pattern):aws s3 rm s3://somebucket/ --recursive --exclude "somebucket/1.txt" --exclude "somebucket/2.txt" --exclude "somebucket/3.txt" --exclude "somebucket/4.txt" --exclude "somebucket/5.txt"CAUTION: Make sure you try it with the --dryrun option first; verify that the files to be deleted do not include the 5 files before actually removing them.
I can fetch the last five updated files from AWS S3 using the below commandaws s3 ls s3://somebucket/ --recursive | sort | tail -n 5 | awk '{print $4}'Now I need to delete all the files in AWS S3 except the last 5 files which are fetched from above command in AWS.Say the command fetches1.txt,2.txt,3.txt,4.txt,5.txt. I need to delete all from AWS S3 except1.txt,2.txt,3.txt,4.txt,and 5.txt.
How do I delete all except the latest 5 recently updated/new files from AWS s3?
var s3 = new AWS.S3({ accessKeyId: accessKeyId, secretAccessKey: secretAccessKey }), file = fs.createWriteStream(localFileName); s3 .getObject({ Bucket: bucketName, Key: fileName }) .on('error', function (err) { console.log(err); }) .on('httpData', function (chunk) { file.write(chunk); }) .on('httpDone', function () { file.end(); }) .send();
I am trying to download a file from s3 and directly put into into a file on the filesystem using a writeStream in nodejs. This is my code:downloadFile = function(bucketName, fileName, localFileName) { //Donwload the file var bucket = new AWS.S3({ params: { Bucket: bucketName }, signatureVersion: 'v4' }); var file = require('fs').createWriteStream(localFileName); var request = bucket.getObject({ Key: fileName }); request.createReadStream().pipe(file); request.send(); return request.promise(); }Running this function I get this error:Uncaught Error: write after endWhat is happening? Is the file closed before the write is finished? Why?
Trying to download a file from aws s3 into nodejs writeStream
It's a bit unintuitive, but the DNSName of AliasTarget should be just the regional S3 website endpoint, and not specific to the exact bucket you're pointing to.Example, specific to the sa-east-1 region:AliasTarget: DNSName: s3-website-sa-east-1.amazonaws.com HostedZoneId: Z7KQH4QJS55SO If you want your template to be region-agnostic, you could create a mapping in your template containing each region's S3 website endpoint and hosted zone ID and use Fn::FindInMap in your resource declarations.
I'm trying to create an static site in cloud formation using a bucket and an Alias type record, but I get the following error:Tried to create an alias that targets ., type A in zone Z7KQH4QJS55SO, but the alias target name does not lie within the target zoneThe zone Id (Z7KQH4QJS55SO) is fromAmazon Simple Storage Service Website EndpointsHere is the json for the Router 53 Record"<Resource Id>": { "Type": "AWS::Route53::RecordSet", "Properties": { "Comment": "...", "Name": { "Fn::Join": [ "", [ "subdomain", ".", "example.com", "." ] ] }, "Type": "A", "AliasTarget": { "DNSName": { "Fn::GetAtt": [ "<BucketName>", "DomainName" ] }, "HostedZoneId": "Z7KQH4QJS55SO" }, "HostedZoneId": { "Ref": "<HostedZoneResourceName>" } } }
Cloud Formation S3 bucket alias for static website issue
No need to add any lines in the PHP configuration file.This lists the modules available for install:$ yum search php | grep -i soapThis installs the module "php-soap"$ sudo yum install php-soapThen restart the server$ sudo service httpd restart $ sudo service php-fpm restart // if neededAll info taken from:How do I enable --enable-soap in php on linux?
I includedextension=php_soap.dllat the bottom of my PHP configuration file, however I get the error below. Where do I put the command?"Invalid command 'extension=php_soap.dll', perhaps misspelled or defined by a module not included in the server configuration"Using an AWS server
Install SOAP Extension PHP Config (AWS)
I finally resolved this issue by making this change to my CSS code.Before the change:.user-area{ background-image:url('<%= @user.background_image.expiring_url %>'); background-repeat:no-repeat; width:1025px !important; margin-top:100px !important; }After the change:.user-area{ /* I removed the background-image:url code and made it inline CSS on my div */ background-repeat:no-repeat; width:1025px !important; margin-top:100px !important; }I moved the background-image property out of the class and added it directly as inline CSS on my div, and then it works like a charm:<div class="user-area" style="background-image: url(<%= @user.background_image.expiring_url %>)"> </div>I am not saying this is the best solution, but it is enough for my code workflow.
What I want: I am trying to set a background image for a class. The image is stored on Amazon S3, and I am accessing it through a Paperclip object in Rails.CSS class:.user-area{ background-image:url('<%= @user.background_image.expiring_url %>'); background-repeat:no-repeat; width:1025px !important; margin-top:100px !important; }Output in the browser:.user-area{ background-image:url('https://xyz-customers.s3.amazonaws.com/photos/7/superbackground.jpg?AWSAccessKeyId=xxxxxxxxxxxxx&Expires=1402511741&Signature=xxxxxxxxxxxxxxxx'); background-repeat:no-repeat; width:1025px !important; margin-top:100px !important; }The problem: The image is not visible in the browser, but when I visit the Amazon S3 URL (the one generated in the CSS class) I am able to view the image. The browser also throws a 403 error for this file: Failed to load resource: the server responded with a status of 403 (Forbidden)
background-image:url not working for amazon s3 image
A common case and example for the use of custom metrics is instance memory reporting.There are several scripts around the web for custom CloudWatch metrics. I found this one, from the Amazon forums, very useful.#!/bin/bash export AWS_CLOUDWATCH_HOME=/home/ec2-user/CloudWatch-1.0.12.1 export AWS_CREDENTIAL_FILE=$AWS_CLOUDWATCH_HOME/credentials export AWS_CLOUDWATCH_URL=https://monitoring.amazonaws.com export PATH=$AWS_CLOUDWATCH_HOME/bin:$PATH export JAVA_HOME=/usr/lib/jvm/jre # get ec2 instance id instanceid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` memtotal=`free -m | grep 'Mem' | tr -s ' ' | cut -d ' ' -f 2` memfree=`free -m | grep 'buffers/cache' | tr -s ' ' | cut -d ' ' -f 4` let "memused=100-memfree*100/memtotal" mon-put-data --metric-name "FreeMemoryMBytes" --namespace "System/Linux" --dimensions "InstanceId=$instanceid" --value "$memfree" --unit "Megabytes" mon-put-data --metric-name "UsedMemoryPercent" --namespace "System/Linux" --dimensions "InstanceId=$instanceid" --value "$memused" --unit "Percent"Source:https://forums.aws.amazon.com/message.jspa?messageID=266893
What can I do by creating custom CloudWatch metrics? I couldn't get the idea behind creating custom metrics in CloudWatch after I read the docs.I’ve created a new metric:mon-put-data --metric-name MyMetric --namespace "MyService" --value 2 --timestamp 2011-03-14T12:00:00.000ZWhat can I get from this metric? I couldn't understand the custom metrics.
What can I use custom CloudWatch metrics for?
Elastic Load Balancing and Elastic IP are both region specific, so I would assume that Auto Scaling is region specific and only works between the Availability Zones in that region. The white paper on building fault tolerant applications doesn't explicitly state that you could auto-scale across regions, but it does say that you can across zones."Auto Scaling can work across multiple Availability Zones in an AWS Region, making it easier to automate increasing and decreasing of capacity."I would believe if they supported multi-region, they would explicitly say so.Thinking about this further, I'm not so sure it's even a good idea to auto-scale across regions. Auto scaling is more geared for a specific tier of your application.For example, if a region was to go down, you would not want some of your web servers to use services across a slow link to another region (potentially) across the country.Instead you would want Route 53 to route the traffic to an autonomous stack running its own auto-scaled layers in a separate region.See this hosting chart: everything from ELB down is region specific.
I hope you can provide a quick response to my question.Is it possible to create an auto scaling group which spans across regions? Consider this scenario - let's say all the availability zones in the west are unavailable. Can we configure auto scaling so that if the instances in US West are down, it creates an instance in an east zone?I don't think it is possible, because we need to specify the region for AWS_AUTO_SCALING_URL while using the command line scripts, which restricts the creation of launch configs and auto scaling groups to that region only.So we can only hope all the AZs in that region are not down, or move to a VPC. Is that right?
Auto Scaling - Across regions?
I wrote a script that might help with what you need, details here, source here.But first of all, and really, this is the most important thing, you need to consider what defines a 'user'. Whilst it might seem obvious in descriptive terms, in technical terms you need to start to talk about requests per second.Now that bugbear is out of the way, to run load tests against Amazon using ELBs there's a couple of things you might want to know about.At this sort of load you'll very likely need to ask Amazon to 'pre-warm' things. Last time I looked into this I found that an ELB is essentially a software load balancer running on a simple instance. Like all things, these instances have a throughput limit and tend to max out at around (very vague) 40 requests a second. They will autoscale, but the algorithm for this is not suited to load testing, so pre-warming is Amazon spinning up X ELBs beforehand, where X is based on the information you provide to them in your request (see my first point around requests per second, not users).ELBs can cache DNS, which in load testing can cause issues. If your test tool is java based (like JMeter) use: -Dsun.net.inetaddr.ttl=0Whatever solution you opt for, be very aware that your test itself can become the bottleneck; you should check for this first before blaming the application you're testing.
I have a PHP application which I have running on Amazon's Web Services. It's a relatively simple PHP script which basically does a simple write to an SQL database. This Database is an Xtra Large RDS instance. The PHP is running on a large EC2 instance behind a load balancer.What I would like to do is to stress test my script to simulate about 800 users all connected at the same time (yes, that truely is the estimate).I have heard about Siege, but I wasn't sure how to go about using it to test my application. If I try running it from my connection at home, I'm not sure that my PC / ADSL is even fast enough to create enough traffic to simulate 800 users attacking the EC2s (thus the RDS) all at once.Is it advisable to start another EC2 instance in another zone to simply "Siege" my application? Or perhaps running 2 EC2 instances, both sieging with 400 users each!?One hope that this would test the load balancing, the RDS and the EC2s thoroughly.Does anyone have experience with this kind of high-concurrent-user testing?Andy
Testing Amazon EC2, RDS and ELB performance
It is my opinion that No Reboot should prevent the image creation from rebooting the instance. If you are an API user, it also provides the --no-reboot argument to do it.
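For example, with the current AWS CLI the same option looks like this (a sketch; the instance ID and image name are placeholders):

```bash
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-backup-image" \
  --no-reboot
```

Note that with --no-reboot the filesystem is not quiesced, so the image is crash-consistent rather than fully consistent.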
Does Creating an Image of an Amazon EC2 Linux instance cause any downtime? Can I image a running server?
Does Creating an Image of an Amazon EC2 Linux instance cause downtime?
The command sudo amazon-linux-extras install postgresql13 installs only the client. This is not the server. You still have to set up the server separately from the client yourself.Thus, to install PostgreSQL 13, you have to install the client (if you haven't done so yet). It is needed because Amazon Linux 2 will then install the matching server (v13), not the default version 9:sudo amazon-linux-extras install postgresql13and now install the server (this should install v13, as it matches your client):sudo yum install postgresql-servernow you enable it:sudo systemctl enable postgresqlinitialize it:sudo /usr/bin/postgresql-setup --initdbstart it:sudo systemctl start postgresqland finally check its status:sudo systemctl status postgresql
Have an AWS EC2 instance which is running Amazon Linux AMI 2. Installed PostgreSQL usingsudo amazon-linux-extras install postgresql13Now, how to start it and configure it?I can seePackage postgresql-13.3-2.amzn2.0.1.aarch64 already installed...
AWS EC2 Amazon Linux 2 AMI Starting PostgreSQL
I figured it out.Apparently, you cannot access ElastiCache clusters from outside AWS by default. In order to do this, you need to create a VPN through AWS and connect to that in order to reach your desired cluster.The steps to do this are outlined in this AWS tutorial here, but in simpler terms all I did was the following:Create and import a certificate of authority using AWS Certificate Manager. You will use this certificate to authorize your VPN connection.Create a Client VPN endpoint and attach the key and certificate generated in the previous step to it.Associate the VPC used by your ElastiCache cluster with the VPN endpoint.Authorize all traffic on your VPN for all users.Add a route to the route table of your VPN endpoint to allow access from anywhere (0.0.0.0/0).Download the VPN client configuration file locally and connect to the VPN using "openvpn" (you may need to brew install this) with the certificate and key created in the first step.This worked for me and I'm glad I figured it out. Now I can connect to my Redis cluster from my local machine using "redis-cli"!
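The last two steps on the command line might look roughly like this. This is only a sketch: the .ovpn file name and cluster endpoint are placeholders, and it assumes the certificate and key have already been embedded in (or referenced by) the downloaded configuration file:

```bash
# Bring up the tunnel to the Client VPN endpoint
sudo openvpn --config downloaded-client-config.ovpn

# In another terminal, once the tunnel is up, reach the cluster as usual
redis-cli -h my-cluster.xxxxxx.0001.use1.cache.amazonaws.com -p 6379
```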
I recently created a Redis cluster on AWS elasticache and am having trouble connecting via redis-cli from my local machine. Every time I run the command:redis-cli -h <redis_cluster_domain> -p 6379the connection is never established and eventually exits due to timeout.Eventually, I figured it's blocking due to a setting on the security group, so I edited the inbound rules to allow all traffic from my IP address. Even after doing this I still cannot connect to the cluster. Any ideas why this might be?
Cannot Connect To AWS Elasticache Redis Cluster From Local Machine
The response from describe_instances() is:{ 'Reservations': [ { 'Groups': [ { 'GroupName': 'string', 'GroupId': 'string' }, ], 'Instances': [ { 'AmiLaunchIndex': 123, ...Notice that the response is a dictionary, where Reservations is a list containing Instances, which is itself a list.Therefore, the code really needs to loop through all Reservations and Instances.At the moment, your code is looping through the Reservations (incorrectly calling them instances), and is then only retrieving the first ([0]) instance from that Reservation.You'll probably want some code like this:for reservation in response['Reservations']: for instance in reservation['Instances']: print(instance['InstanceId'])
I am having an issue with pagination in boto3 & not getting all instances in the aws account.Only getting 50% of the instances with below (around 2000 where as there are 4000)Below is my codeimport boto3 ec2 = boto3.client('ec2') paginator = ec2.get_paginator('describe_instances') response = paginator.paginate().build_full_result() ec2_instance = response['Reservations'] for instance in ec2_instance: print(instance['Instances'][0]['InstanceId'])
Pagination in boto3 ec2 describe instance
Elastic Load Balancer is a distributed system. It does not have a single public IP address. Instead, when you create an ELB, you are given a DNS name such as ExampleDomainELB-67854125.us-east-1.elb.amazonaws.com. Amazon provides a facility to set up a DNS CNAME entry pointing, for example, www.exampledomain.com to the ELB-supplied DNS name.Also, the ELB directs traffic to one of your instances, so assigning a static IP address to an ELB is not feasible.So, as a solution, if you need to point your domain's 'A' record to your ELB in Route 53:Select 'Yes' for Alias.Set the Alias Target to your load balancer's DNS name.A second way is similar: create a CNAME in Route 53 and point its Alias Target to your ELB.This should help.
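If you prefer to script it, an alias record can also be created with the AWS CLI. The sketch below uses placeholder values for the load balancer name, your hosted zone ID, the record name, and the ELB's canonical hosted zone ID (read the last two values from the first command's output):

```bash
# Look up the ELB's DNS name and its canonical hosted zone ID (classic ELB shown)
aws elb describe-load-balancers \
  --load-balancer-names my-load-balancer \
  --query 'LoadBalancerDescriptions[0].[DNSName,CanonicalHostedZoneNameID]'

# UPSERT an alias A record in your own hosted zone pointing at the ELB
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLEZONE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.exampledomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2EXAMPLEELBZONE",
          "DNSName": "ExampleDomainELB-67854125.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```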
Is it possible to assign a static IP to an AWS load balancer without the need to move your NS records to Route 53?I basically just want to create an A record from my domain to point to the ELB.
AWS Static IP to Load Balancer?
As was commented on your related question, you really need to involve your network personnel to identify the correct solution.With --no-verify-ssl, the traffic should still be encrypted but it is not secure.With this option, you are explicitly disabling the mechanism designed to prevent misuse or forgery of an SSL certificate, and doing so makes it impossible for aws-cli to determine with reasonable confidence that the peer system with which it is communicating is indeed Amazon S3, not an impostor server, and not a man-in-the-middle observer/attack/exploit.A need (or perceived need) to disable this validation is a sign of a defect in the system that needs to be resolved, such as by adding your enterprise CA to your local system trust store (assuming that's the issue -- it's the only marginally legitimate explanation that comes to mind, and if that is the actual problem, then I would argue that your organization is manipulating TLS in an improper way).This workaround should be avoided.
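If the root cause really is a corporate CA or a TLS-inspecting proxy, a safer alternative than disabling verification is to point the CLI at a certificate bundle that includes that CA (a sketch; the bundle path is a placeholder for whatever your network team provides):

```bash
# Per command
aws s3 cp filename s3://bucketname/ --ca-bundle /path/to/corporate-ca-bundle.pem

# Or once per shell session
export AWS_CA_BUNDLE=/path/to/corporate-ca-bundle.pem
aws s3 cp filename s3://bucketname/
```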
Will the data be secured in flight, and is port 443 used for the transfer, if we copy data to S3 through the AWS CLI with the command below?aws --no-verify-ssl s3 cp filename s3://bucketname/
use of --no-verify-ssl to copy the data through aws cli to s3. secured or not?
There is no way Terraform will update your source code when detecting a drift on AWS.The process you mention is right:Report the manual changes done in AWS into the Terraform codeDo a terraform plan. It will refresh the state and show you if there is still a difference
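In practice that loop looks something like the sketch below (the security group reference is a placeholder for whatever was changed by hand):

```bash
# Pull the manual change (e.g. the extra security group) into the state file
terraform refresh

# After adding the manually attached security group ID to vpc_security_group_ids
# in your .tf files, verify there is no remaining drift
terraform plan
```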
I had provisioned some resources on AWS, which include EC2 instances as well, but then we attached some extra security groups to these instances manually. Terraform has now detected this and says it will roll the change back as per the configuration file.Let's say I had the below code which attaches an SG to my EC2:vpc_security_group_ids = ["sg-xxxx"]My problem now is how I can update the terraform.tfstate file so that it does not detach the manually attached security groups.I can solve it as below:I would refresh the terraform state file with terraform refresh, which will update the state file.Then I have to update my terraform configuration file manually with the security group IDs that were attached manually.But that is only practical for a small setup; what if we have a complex scenario? So do we have any other mechanism in terraform which would detect the drift and update it?Thanks !!
how to update terraform state with manual change done on resources
Does your lambda role have the DynamoDB policies applied?Go to IAM, go to Policies, choose the DynamoDB policy (try full access and then go back and restrict your permissions), from Policy Actions select Attach, and attach it to the role that is used by your Lambda.
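The same attachment can be done from the CLI. A sketch using the role name from your error message and the AWS managed full-access policy (tighten this to a scan-only policy once it works):

```bash
aws iam attach-role-policy \
  --role-name test_role \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
```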
I need to scan a dynamodb database but I keep getting this error:"errorMessage": "An error occurred (AccessDeniedException) when calling the Scan operation: User: arn:aws:sts::747857903140:assumed-role/test_role/TestFunction is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot"This is my Lambda code (index.py):import json import boto3 client = boto3.resource('dynamodb') table = client.Table('HelpBot') def handler(event, context): table.scan() return { "statusCode": 200, "body": json.dumps('Hello from Lambda!') }This is my SAM template (template.yml):AWSTemplateFormatVersion: '2010-09-09' Transform: 'AWS::Serverless-2016-10-31' Resources: MyFunction: Type: 'AWS::Serverless::Function' Properties: Handler: index.handler Runtime: python3.6 Policies: Version: '2012-10-17' Statement: - Effect: Allow Action: - dynamodb:Scan Resource: arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot
Not authorized to perform: dynamodb:Scan Lambda
I also found this to be counter-intuitive and confusing at first. However, this is actually the expected (and documented) behavior.When you attach a Cognito event to a function as a trigger, Serverless will create a user pool for you, without even being asked.Source:This will create a Cognito User Pool with the specified name.So in your case, one user pool is being created by the cognitoUserPool event, and the other is being created by your Resources section. The one created by Resources is correct (has the custom password policy), and the one created by the lambda trigger has default configuration. The fix is described under the "Overriding a generated User Pool" heading.You prefix the User Pool key in the Resources section with CognitoUserPool, which will cause both your trigger and your resource to refer to the same User Pool in the generated CloudFormation template.In your case, this means simply changing this:resources: Resources: MyUserPool: Type: "AWS::Cognito::UserPool"to this:resources: Resources: CognitoUserPoolMyUserPool: Type: "AWS::Cognito::UserPool"Tested with Serverless 1.26.0
I'm using serverless to deploy an AWS CloudFormation template and some functions, here is a part of my serverless.yml file:resources: Resources: MyUserPool: #An inline comment Type: "AWS::Cognito::UserPool" Properties: UserPoolName: "MyUserPool" Policies: PasswordPolicy: MinimumLength: 7 RequireLowercase: false RequireNumbers: true RequireSymbols: false RequireUppercase: false functions: preSignUp: handler: presignup.validate events: - cognitoUserPool: pool: "MyUserPool" trigger: PreSignUpAs you can see, both user pool names are the same, but when I run serverless deploy, 2 user pools with the same name are created.Is this a bug or am I missing something?
Serverless duplicates user pools instead of reusing by name
Add "file://" to testbootstraptable.jsonaws dynamodb create-table --cli-input-json file://testbootstraptable.json --region us-west-2Also, delete the following line as it is not correct: "NumberOfDecreasesToday": 0,ShareFolloweditedSep 8, 2018 at 13:57wassgren19k66 gold badges6565 silver badges7979 bronze badgesansweredOct 9, 2017 at 3:00John HanleyJohn Hanley78.1k66 gold badges103103 silver badges168168 bronze badgesAdd a comment|
i have been trying to create a dynamo db table using the following json(testbootstraptable.json) file:{ "AttributeDefinitions": [ { "AttributeName": "test1", "AttributeType": "S" }, { "AttributeName": "test2", "AttributeType": "S" } ], "TableName": "BOOTSTRAP_TEST_TBL", "KeySchema": [ { "AttributeName": "test1", "KeyType": "HASH" }, { "AttributeName": "test2", "KeyType": "RANGE" } ], "ProvisionedThroughput": { "NumberOfDecreasesToday": 0, "ReadCapacityUnits": 35, "WriteCapacityUnits": 35 } }I have tried multiple times with different variations based on google search but keep getting the following error:Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0) JSON received: testbootstraptable.jsonAWS Command:$ aws dynamodb create-table --cli-input-json testbootstraptable.json --region us-west-2
Creating dynamodb table using aws cli "--cli-input-json"
An Amazon CloudWatch custom metric is only created when data is stored against the custom metric. Therefore, you'll need to push a data value to make it appear, and then you will be able to create an alarm.You can push some data to CloudWatch with the AWS Command-Line Interface (CLI), eg:aws cloudwatch put-metric-data --namespace MongoDB --metric-name errors --value 0
I had already created 7 other metrics based on some log files I send to CloudWatch with no problems.Some time ago we had a problem with MongoDB connection, and I identified that through logs, so I'd like to create a Metric, so that I can create an Alarm based on it. I did create the Metric, but (of course) there are no data being fed into that Metic, because no more "MongoError" messages exists.But does that also mean that I can't even access the Metric to create the Alarm? Because this is what is happening right now. The Metric cannot be seen anywhere, only in the "Filters" section of the Logs, which won't allow me to create Alarms or create graphics or anything.I have already posted this on AWS forums but that usually doesn't help.
CloudWatch custom metrics not working as expected
If Timestamp is the partition key and not the sort key, then you have two problems:In a Query operation, you cannot perform a comparison test (<, >, BETWEEN, ...) on the partition key. The condition must perform an equality test (=) on a single partition key value and, optionally, one of several comparison tests on a single sort key value. For example:KeyConditionExpression: 'HashKey = :hkey and RangeKey > :rkey'You have a syntax error in your KeyConditionExpression, obviously. Keep in mind that Timestamp is a reserved word in DynamoDB, so you'll have to use ExpressionAttributeNames for that.(Assuming you have an Id partition key and a Timestamp sort key)var params = { TableName: "Table2", KeyConditionExpression: "Id = :id AND #DocTimestamp BETWEEN :start AND :end", ExpressionAttributeNames: { '#DocTimestamp': 'Timestamp' }, ExpressionAttributeValues: { ":id": "SOME VALUE", ":start": 1, ":end": 10 } };
Querying dynamoDB table with node.js. The DynamoDB table has key of Timestamp, represented by an integer. In this case, I left :timestampStart as 1 and timestampEnd as 10 as an example.var params = { TableName: "Table2", KeyConditionExpression:"Timestamp = :ts BETWEEN :timestampStart AND :timestampEnd", ExpressionAttributeValues: { ":ts":"Timestamp", ":timestampStart": 1, ":timestampEnd": 10 } };The :ts is not correct, I can see this. I want to return any rows found with a Timestamp value between timestampStart and timestampEnd.Error message:"errorMessage": "Invalid KeyConditionExpression: Syntax error; token: \"BETWEEN\", near: \":ts BETWEEN :timestampStart\"",
Querying DynamoDB with node.js between values
You can do that using the AWS CLI (v1.11.46 or newer).You can disassociate an IAM instance profile from a running or stopped instance using the disassociate-iam-instance-profile command.See the disassociate-iam-instance-profile CLI command documentation for more details.
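A minimal sketch of the two calls involved (the instance ID and association ID are placeholders; the association ID comes from the first command's output):

```bash
# Find the association between the instance and its instance profile
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0

# Remove the instance profile using the AssociationId returned above
aws ec2 disassociate-iam-instance-profile \
  --association-id iip-assoc-0123456789abcdef0
```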
I launched an instance with an iam roleI now want to remove the role from the instance via powershellI must be confused about the terminology here because I'm readingthis docand I don't see how to remove a role from an instance.I want to run this command on the EC2 instance that the role will be removed from.There can only be one role assigned to an instance so is there a way to dynamically get the iam role that has been assigned to an instance and remove it from the instance via the powershell api?
How do I remove a role from an EC2 instance?
You don't change EC2 instance state values directly. The state changes based on the actions you take to launch/start/stop/terminate instances. Look at the following Boto3 EC2 client methods:run_instances() start_instances() stop_instances() terminate_instances()
According to the Boto3docsand thisdiagramThere are 6 states for an EC2 instance:'pending'|'running'|'shutting-down'|'terminated'|'stopping'|'stopped'I was wondering how can programmatically set the state to one of these states.I have some code to view all the states of every instance in my ec2 instance.ec2 = boto3.resource("ec2", region_name="us-west-2") vpc = ec2.Vpc("vpc-123456") for instance in vpc.instances.all(): for tag in instance.tags: print(instance.state["Name"])I get the output ofrunning running running running ... ...I was wondering if I can change these states to something likependingorshutting-down.Something along the lines ofinstance.set("stopping"). I understand that perhaps if I set a instance to the state ofstoppingI will getstoppedstate the next time I check on this instance.
boto3 changing AWS ec2 instance state
You need to uninstall the previous java version. To do this, first install java 1.8 with:yum install java-1.8.0-openjdk.x86_64and then uninstall the previous java version, in my case:yum erase java-1.7.0-openjdkP.S.: java-1.7.0-openjdk is installed on the Amazon Linux AMI by default.Regards!
I tried sudo yum update but it just keeps java "1.7.0_75". I need 1.8 for it to work with another application but can't figure out how to upgrade it.Do I need to manually install it somehow? There's not much information on this on the internet as far as I can see.Specs:java version "1.7.0_75" OpenJDK Runtime Environment (amzn-2.5.4.0.53.amzn1-x86_64 u75-b13) OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)When I try update now:[ec2-________]$ sudo yum update Loaded plugins: priorities, update-motd, upgrade-helper amzn-main/latest | 2.1 kB 00:00 amzn-updates/latest | 2.3 kB 00:00 No packages marked for updateIs there anything else I need to do?Thanks.
How can I upgrade to Java 1.8 on an Amazon Linux Server?
You are only using node v16 locally, amplify for some reason uses a lower one. You could either downgrade the package (not recommended imho) or tell amplify to use a higher node version (recommended imho).frontend: phases: preBuild: commands: - nvm install 16Find the official documentation on how to change build settings here:https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html
I upgraded node on my local machine as well as migrated fromcreate-react-apptonextjs.When I pushed my code to AWS Amplify, I got this error:error[email protected]: The engine "node" is incompatible with this module. Expected version ">=12.22.0". Got "12.21.0" error Found incompatible module. info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.I've looked on stackoveflow and other blogs and I've tried everything, but I still get this error.My trials:Added engines: node to my package.json"engines": { "node": ">=12.22.0" }Ran these commands:sudo npm cache clean -f sudo npm install -g nDouble checked my node version:node -v v16.13.0Deleted and then installed the node modules folder with yarn installWhy is this error still occurring?
Why does AWS Amplify not recognize the updated Node version?
The terraform import subcommand takes a resource address in which the map key is a quoted string. Because that address is parsed by your shell rather than by the Terraform DSL, the shell interprets the quotes and brackets and mangles the address. You can work around this by wrapping the entire resource address in single quotes so it is passed as a literal string:terraform import 'aws_s3_bucket.bucket["logs"]' logs_bucketand this will resolve your issue.
I've got a map variable that identifies existing s3 buckets:resource "aws_s3_bucket" "bucket" { for_each = var.s3_replication bucket = each.value.source #other configuration } variable "s3_replication" { description = "Map of buckets to replicate" type = map default = { logs = { source = "logs_bucket", destination = "central_logs_bucket" }, security = { source = "cloudtrail_bucket", destination = "central_security_bucket" } } }Since these buckets already exist, I am trying to import them and then apply the a configuration to them to update the resources. Unfortunately, I am not able to figure out how to do a terraform import on these. I've tried:terraform import aws_s3_bucket.bucket["logs"] logs_bucket terraform import aws_s3_bucket.bucket[logs] logs_bucket terraform import aws_s3_bucket.bucket[0] logs_bucket terraform import aws_s3_bucket.bucket[0].source logs_bucket terraform import aws_s3_bucket.bucket[0[source]] logs_bucketAll failing with a different error. Any idea on how to import existing resources listed on a map?
Terraform Import of map resources
Based on the comments: the issue was caused by the CloudFront distribution in question not having an Alternate Domain Name set which matches the record in Route 53. From the docs:

The distribution must include an alternate domain name that matches the domain name that you want to use for your URLs instead of the domain name that CloudFront assigned to your distribution.
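To check which of your distributions actually declare the alias, you can list them together with their alternate domain names. The snippet below is a rough boto3 sketch for illustration only; the target domain is a placeholder, not a value from the question.

import boto3

# Placeholder: the subdomain the Route 53 alias record should point at.
target_domain = "app.example.com"

cloudfront = boto3.client("cloudfront")

# Walk all distributions and print the ones whose alternate domain names (aliases)
# include the target domain; only those appear as alias targets in Route 53.
paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    for dist in page["DistributionList"].get("Items", []):
        aliases = dist["Aliases"].get("Items", [])
        if target_domain in aliases:
            print(dist["Id"], dist["DomainName"], aliases)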
I'm creating an A record in my Route 53 hosted zone for a subdomain. When I select "Alias to CloudFront distribution", only the US East zone shows in the list, containing an undesired distribution. However, I want to point to another distribution which is not showing in the list. Any idea why it's not showing? The distribution is ready and can be accessed using the CloudFront URL, and it points to a static S3 hosting bucket. I created it using Amplify CLI hosting with the S3 + CloudFront option, if it matters. Thanks for any help.
Route 53 - cloudfront distribution not showing when creating A record
You can run it in a loop over the results of list-buckets. For example:

for bucket_name in $(aws s3api list-buckets --query "Buckets[].Name" --output text); do
  echo ${bucket_name}
  encryption_info=$(aws s3api get-bucket-encryption \
    --bucket ${bucket_name} 2>/dev/null)
  if [[ $? != 0 ]]; then
    echo " - no-encryption"
  else
    echo " - ${encryption_info}"
  fi
done

If a bucket has no encryption configured, get-bucket-encryption returns an error, so the loop above treats any error as meaning there is no encryption.
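If you prefer Python over a shell loop, here is a rough boto3 equivalent of the loop above; it is an illustration rather than part of the original answer, and it assumes that a missing encryption configuration surfaces as a ServerSideEncryptionConfigurationNotFoundError.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# List every bucket in the account and report its encryption configuration.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_bucket_encryption(Bucket=name)
        rules = config["ServerSideEncryptionConfiguration"]["Rules"]
        print(f"{name}: {rules}")
    except ClientError as err:
        # Buckets with no encryption configuration raise this specific error code.
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no encryption")
        else:
            raise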
My account has a few hundred buckets, and I need to be able to show the encryption status for all of them. I'd like to do this via the CLI. I see there is a get-bucket-encryption operation, but I can't figure out how to run it against all buckets rather than just a specific bucket.
AWS CLI to list encryption status of all S3 buckets
If you are using Spark > 2.0:

1. In PySpark:

Get the Spark version:
print("Spark Version:" + spark.version)
In Spark < 2.0: sc.version

Get the Hadoop version:
print("Hadoop version: " + sc._gateway.jvm.org.apache.hadoop.util.VersionInfo.getVersion())

2. In Scala:

Spark version:
println("Spark Version:" + spark.version)
In Spark < 2.0: sc.version

Hadoop version:
println("Hadoop version: " + org.apache.hadoop.util.VersionInfo.getVersion())
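As a complementary check (an aside, not from the answer above), you can also read a job's Glue version from the Glue API and then map it to the Spark/Hadoop versions listed in the AWS Glue release notes; the job name below is a placeholder.

import boto3

glue = boto3.client("glue")

# Placeholder job name; replace it with one of your own Glue jobs.
job = glue.get_job(JobName="my-glue-job")["Job"]

# GlueVersion (e.g. "2.0" or "3.0") determines which Spark/Hadoop build the job
# runs on; the mapping is documented in the AWS Glue version release notes.
print("Glue version:", job.get("GlueVersion", "0.9"))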
I am using AWS with the Basic support plan. I want to know which versions of Spark and Hadoop (HDFS) are used by AWS Glue jobs, so that I can set up the same environment on my local machine for development. Alternatively, if I know the Spark version, which corresponding Hadoop version do Glue jobs use (or vice versa)? Since I am on the Basic support plan, I can't raise a case with the support center. Any idea where I can check which Spark and Hadoop versions AWS Glue jobs use? Any help and suggestions are appreciated. Thanks!
How to check version of Spark and Hadoop in AWS glue?
Unfortunately, it seems that the library has been updated since the accepted answer was written and the solution is no longer the same. After some trial and error, this appears to be the more current way of handling the signing (using https://pkg.go.dev/github.com/aws/aws-sdk-go-v2):

import (
    "context"
    "net/http"
    "time"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
)

func main() {
    // Context is not being used in this example.
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        // Handle error.
    }

    credentials, err := cfg.Credentials.Retrieve(context.TODO())
    if err != nil {
        // Handle error.
    }

    // The signer requires a payload hash. This hash is for an empty payload.
    hash := "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    req, _ := http.NewRequest(http.MethodGet, "api-gw-url", nil)

    signer := v4.NewSigner()
    err = signer.SignHTTP(context.TODO(), credentials, req, hash, "execute-api", cfg.Region, time.Now())
    if err != nil {
        // Handle error.
    }

    // Use `req`
}
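As an aside (not part of this Go answer), the same IAM-signed GET can be issued from Python with botocore's SigV4 signer, which can be handy for quickly verifying that the endpoint and credentials behave as expected. The URL and region below are placeholders, and the third-party requests library is assumed to be installed.

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = boto3.Session()
credentials = session.get_credentials()

region = "us-east-1"  # placeholder region
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/items"  # placeholder endpoint

# Build the request, sign it for the execute-api service, then send it with the
# signed headers (Authorization, X-Amz-Date, X-Amz-Security-Token, ...).
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", region).add_auth(request)
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)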
Hello Stack Overflow AWS Gophers,

I'm implementing a CLI with the excellent cobra/viper packages from spf13. We have an Athena database fronted by an API Gateway endpoint, which authenticates with IAM. That is, in order to interact with its endpoints using Postman, I have to define AWS Signature as the Authorization method, supply the corresponding AWS id/secret, and then the Headers contain X-Amz-Security-Token and others. Nothing unusual, works as expected.

Since I'm new to Go, I was a bit shocked to see that there are no examples for doing this simple HTTP GET request with the aws-sdk-go itself... I'm trying to use the shared credentials provider (~/.aws/credentials), as demonstrated for the S3 client in the Go code snippets from re:Invent 2015:

req := request.New(nil)

How can I accomplish this seemingly easy feat in 2019 without having to resort to self-cooked net/http and therefore having to manually read ~/.aws/credentials or, worse, go with os.Getenv and other ugly hacks? Any Go code samples interacting as a client would be super helpful. No Golang Lambda/server examples, please, there's plenty of those out there.
API Gateway HTTP client request with IAM auth with Go
A better solution was to omit "Unit" altogether, which allowed AWS to choose the appropriate unit, not only in scale but in category.
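The question uses the JS SDK, but to make the fix concrete, here is a rough boto3 sketch of the same query with the Unit field simply left out; the instance ID and time range are placeholders rather than values from the question.

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Same shape of query as in the question, but with no "Unit" field so CloudWatch
# matches datapoints regardless of the unit they were published with.
response = cloudwatch.get_metric_data(
    StartTime=start,
    EndTime=end,
    MetricDataQueries=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "NetworkOut",
                    "Dimensions": [
                        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}  # placeholder
                    ],
                },
                "Period": 300,
                "Stat": "Average",
            },
        }
    ],
)
print(response["MetricDataResults"][0]["Values"])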
Using the JS AWS SDK and passing the following parameters:

{
  "StartTime": 1548111915,
  "EndTime": 1549321515,
  "MetricDataQueries": [
    {
      "Id": "m1",
      "MetricStat": {
        "Metric": {
          "MetricName": "NetworkOut",
          "Namespace": "AWS/EC2",
          "Dimensions": [
            {
              "Name": "InstanceId",
              "Value": "i-[redacted]"
            }
          ]
        },
        "Period": 300,
        "Stat": "Average",
        "Unit": "Gigabytes"
      }
    }
  ]
}

This is the output:

[
  {
    "Id": "m1",
    "Label": "NetworkOut",
    "Timestamps": [],
    "Values": [],
    "StatusCode": "Complete",
    "Messages": []
  }
]

The query closely matches the sample request found at https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html#API_GetMetricData_Examples

I am sure that the instance is a valid instance that has definitely had NetworkOut traffic during that date range. What reason could account for the lack of elements in the Values array?
Why is GetMetricData returning an empty set of values?
UPDATE: For AWSMobileClient ~> 2.12.0, you can fetch user attributes as follows.

AWSMobileClient.default().getUserAttributes { (attributes, error) in
    if error != nil {
        print("ERROR: \(error)")
    } else {
        if let attributesDict = attributes {
            print(attributesDict["email"])
            print(attributesDict["given_name"])
        }
    }
}
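If you ever need the same attributes outside the app, for example in a backend script, the Cognito API exposes them too. This is an aside rather than part of the iOS answer; the region and access token below are placeholders.

import boto3

# cognito-idp is region-scoped; the region and access token are placeholders.
client = boto3.client("cognito-idp", region_name="us-east-1")

# The access token comes from the user's sign-in session (e.g. forwarded by the app).
response = client.get_user(AccessToken="<access-token-from-signed-in-user>")

# UserAttributes is a list of {"Name": ..., "Value": ...} pairs.
attributes = {attr["Name"]: attr["Value"] for attr in response["UserAttributes"]}
print(attributes.get("email"), attributes.get("given_name"))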
The question is very simple: I've added user authentication to an iOS app using AWS Cognito and AWS Amplify. I have successfully implemented sign-in and sign-up, but how do I get user attributes such as email, full name, or phone number?
How to get AWS Cognito user attributes using AWSMobileClient in iOS?
It looks like AWS has recently released a feature to launch an instance with an encrypted volume based on a non-encrypted AMI: "Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step".

From the CloudFormation perspective, you need to override the AMI's block device configuration. For example, you can write:

BlockDeviceMappings:
  - DeviceName: "/dev/xvda"
    Ebs:
      VolumeSize: '8'
      Encrypted: 'true'

This will start an instance with an encrypted root EBS volume from a non-encrypted AMI, using the default KMS key.
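The same block-device override also works directly against the EC2 API, which can be useful for a quick test outside CloudFormation. The following boto3 sketch is illustrative only; the AMI ID and instance type are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Placeholders: use your own unencrypted AMI ID and instance settings.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "VolumeSize": 8,
                # Encrypt the root volume at launch; with no KmsKeyId given,
                # the default EBS KMS key is used.
                "Encrypted": True,
            },
        }
    ],
)
print(response["Instances"][0]["InstanceId"])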
I'm working on a CloudFormation template which creates a simple EC2 instance. Here I want to encrypt the root volume at launch time. It's possible to create a separate EBS volume, encrypt it, and attach it as the boot volume, but I couldn't find a way to encrypt it while launching. Is there any way to do this? Thanks in advance.
Encrypt root volume of EC2 while creating stack using CloudFormation
While Nate's answer is correct, it would lead to a lot of code duplication. A better solution in my opinion is to work with a list and loop over it.

Create a variable (in a variable.tf file) that contains the list of folders to create:

variable "s3_folders" {
  type        = "list"
  description = "The list of S3 folders to create"
  default     = ["folder1", "folder2", "folder3"]
}

Then alter the piece of code you already have:

resource "aws_s3_bucket_object" "folders" {
  count  = "${length(var.s3_folders)}"
  bucket = "${aws_s3_bucket.b.id}"
  acl    = "private"
  key    = "${var.s3_folders[count.index]}/"
  source = "/dev/null"
}
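For context (an aside, not taken from the answer), an S3 "folder" is just a zero-byte object whose key ends in "/", which is exactly what the resource above creates. Here is a rough boto3 sketch of the same idea, with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

bucket = "my-existing-bucket"  # placeholder bucket name
folders = ["folder1", "folder2", "folder3"]

# An S3 "folder" is just a zero-byte object whose key ends with "/";
# this mirrors what the aws_s3_bucket_object resource above creates.
for folder in folders:
    s3.put_object(Bucket=bucket, Key=f"{folder}/", Body=b"")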
How do I create multiple folders inside an existing bucket using Terraform? Example: bucket/folder1/folder2

resource "aws_s3_bucket_object" "folder1" {
  bucket = "${aws_s3_bucket.b.id}"
  acl    = "private"
  key    = "Folder1/"
  source = "/dev/null"
}
How to create multiple folders inside an existing AWS bucket
Try just splitting them onto separate lines in your application.properties:

amazonProperties.endpointUrl= https://s3.us-east-2.amazonaws.com
amazonProperties.accessKey= XXXXXXXXXXXXXXXXX
amazonProperties.secretKey= XXXXXXXXXXXXXXXXXXXXXXXXXX
amazonProperties.bucketName= your-bucket-name
I am currently trying to set up an S3 bucket with my Spring Boot webapp for adding/removing images. The guide I am following uses the following application.yml properties:

amazonProperties:
  endpointUrl: https://s3.us-east-2.amazonaws.com
  accessKey: XXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXX
  bucketName: your-bucket-name

How can I define these properties in my application.properties file? All help is very much appreciated, thank you!
Spring Boot - How to define application.yml properties as application.properties
<PropertyGroup>
  <PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest>
</PropertyGroup>

Adding the above code to the .csproj solved the problem.
I'm getting this error after deploying my API to AWS Elastic Beanstalk.

Steps to reproduce:
File -> New -> Project
ASP.NET Core Web Application
ASP.NET Core 2.0
Web API
F5: OK
Publish to AWS Elastic Beanstalk (via the AWS Toolkit for Visual Studio 2017)
HTTP Error 502.5 - Process Failure

Additional info:
Visual Studio 2017
64bit Windows Server 2012 R2 v1.2.0 running IIS 8.5

I've already tried looking up many other questions, but with no success.

Log:
2018-01-15T13:27:21.000Z Error 0:(0) IIS AspNetCore Module - Application 'MACHINE/WEBROOT/APPHOST/DEFAULT WEB SITE' with physical root 'C:\inetpub\AspNetCoreWebApps\app\' failed to start process with commandline 'dotnet .\WebApplication2.dll', ErrorCode = '0x80004005 : 8000808c.
HTTP Error 502.5 - Process Failure - .net core 2.0 to AWS EB
A VPC range of 10.0.0.0/16 means that all addresses starting with 10.0.x.x are part of the VPC.

When you create a subnet, you want it to be a portion of the VPC. People typically assign a range like 10.0.1.0/24 -- the /24 means that the subnet has every IP address starting with 10.0.1.x.

The error you received is because you tried to make a /16 subnet within a /16 VPC. This will work (as it did in your second try), but you can then only have one subnet.

Bottom line: use /24, or at least something smaller than /16 (which in CIDR actually means a bigger number!).
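If it helps to see the arithmetic, Python's standard ipaddress module can enumerate the /24 subnets that fit inside a /16 VPC. This is just an illustrative sketch, not part of the answer:

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# A /16 VPC can be carved into 256 non-overlapping /24 subnets.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))            # 256
print(subnets[0], subnets[1])  # 10.0.0.0/24 10.0.1.0/24

# Each /24 fits inside the VPC, so several can coexist without overlapping.
print(ipaddress.ip_network("10.0.1.0/24").subnet_of(vpc))  # True
print(subnets[0].overlaps(subnets[1]))                     # False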
I am trying to create a VPC with multiple subnets in AWS. I am fairly sure I understand how CIDR masks networks and available hosts. Unfortunately, whenever I try to design the VPC I get errors. This is my VPC design:

VPC: 10.0.0.0/16
Public subnet 1: 10.0.1.0/16

Error: Must be a valid CIDR block. Did you mean 10.0.0.0/16?

Then I assign my public subnet as 10.0.0.0/16 due to the error. Then I proceed to create my private subnet as 10.0.1.0/16 and get an error:

CIDR block 10.0.1.0/16 overlaps with pre-existing CIDR block 10.0.0.0/16

What am I doing wrong? I just want to create two private networks and one public network.
Error in creating multiple subnets in AWS VPC