"Is there a way I avoid this additional GET request?"

It sounds as if you are misinterpreting what you are reading. Unfortunately, you didn't cite the source, so it's difficult to go back and pick up more context; however, this is not referring to an "extra" request.

"It will then make a GET request with an If-Modified-Since header"

This refers to each time the object is subsequently requested by a browser. CloudFront sends the next request with If-Modified-Since: so that your origin server has the option of returning a 304 Not Modified response... it doesn't send two requests to the origin in response to one request from a browser.

If your content is always dynamic, return Cache-Control: private, no-cache, no-store and set Minimum TTL to 0.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
The AWS CloudFront documentation says: "If you set the TTL for a particular origin to 0, CloudFront will still cache the content from that origin. It will then make a GET request with an If-Modified-Since header, thereby giving the origin a chance to signal that CloudFront can continue to use the cached content if it hasn't changed at the origin."

I need to configure my dynamic content, and I have already set the TTL to 0. I want every request to always go to the origin. Is there a way to avoid this additional GET request with the If-Modified-Since header? Why this extra request every time?
CloudFront: how to avoid the If-Modified-Since header request every time
Instead of storing all the answers for a user in a single item, you can store each answer as its own item. For example, say the questions belong to a survey; then your table schema looks like this: SurveyId (hash key) | User (hash key) | Question (range key) | Answer. So if a survey contains n answers for a user, there will be n items in the DynamoDB table. To find the list of all answers for a user we can query on the hash key (SurveyId + User). A similar use case is discussed here. Depending on how frequently we exceed the maximum record size, we can store large items in S3.

Regards, Dinesh Solanki
I am developing an application that stores questions that people have answered in a NoSQL database. I am using AWS DynamoDB, but the record size limit is 400 KB. How would I store more than 400 KB of data? Would it be best to put a reference to the next record in the current record? If anyone has any other thoughts or ideas it would be great.
Amazon DynamoDB record size limit workaround
After you update the object in S3 you have to remove the object from the CloudFront cache so that CloudFront will go back to S3 to get the new version. This is called "cache invalidation". Since you aren't doing this, CloudFront isn't going back to check for a new version until the cache expires, which is why it is taking so long for the new version to show up. You can read about invalidating the CloudFront cache here.
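If you want to script it, an invalidation can also be created from the AWS CLI. A minimal sketch, where the distribution ID and paths are placeholders for your own values:

aws cloudfront create-invalidation \
    --distribution-id E1234EXAMPLE \
    --paths "/index.html" "/css/*"

Invalidation paths beyond the monthly free allowance are charged, so for frequently changing objects it is usually cheaper to version the object names (e.g. style.v2.css) or shorten the cache TTL instead.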
I have Amazon S3 and CloudFront enabled for the S3 content (it actually serves a static website). Any update to the bucket takes anywhere from 15 minutes to 1 day to show up. What can I do with the settings to make this faster?
Amazon CloudFront distribution slow update
This will happen if you created your SSL certificate in a different region from your Elastic Beanstalk instance. An easy gotcha!
I am new to AWS and need help selecting the AWS Certificate Manager provisioned certificate from the Elastic Beanstalk load balancer using the AWS Console. I deployed my Java application on a Linux instance using Elastic Beanstalk and that worked fine with HTTP. I provisioned a new wildcard certificate using AWS Certificate Manager. Under Elastic Beanstalk Configuration - Network Tier - Load Balancing settings (gear icon), I changed "Secure listener port" = 443 and "Protocol" = HTTPS. But the "SSL Certificate ID" dropdown does not list the certificate to pick. Please suggest what it is that I am missing here. I have read many suggestions to do this via the CLI, but I am not a CLI expert and wanted to use the console for simplicity.

EDIT-1: I can see the certificate under EC2 - Load Balancer - Listener tab if I try to add HTTPS, but not under Beanstalk. I am not sure whether I should add this listener under EC2 or not, but I think I need to add SSL through Beanstalk, as my application gets deployed into EC2 by Beanstalk.
AWS Certificate Manager Certificate is not visible to AWS Beanstalk from Console
When you give AmazonEC2FullAccess to the user, he will be able to see all the EC2 instances in the AWS account. Even if you don't provide him the key to pre-created EC2 instances, he will be able to take an AMI of a pre-created EC2 instance, launch it with a new key, and get access to that instance. He can also do the disk recovery procedure you mentioned in your use case. So you have some of the options below:

1. Do not provide AmazonEC2FullAccess. Ask him what specification he needs for the server, launch the EC2 instance to that specification yourself, and provide him ssh-jailed user access to that instance.
2. Set up CloudTrail so that you can monitor the resources created by that user for any suspicious activity: https://aws.amazon.com/cloudtrail/
3. Since he is a developer, just provide him deployment and git access to the application running on the EC2 instance.
I use Amazon EC2 to host some web sites and databases. I have a new developer joining me tomorrow. If I create an IAM user and attach the AmazonEC2FullAccess policy (arn:aws:iam::aws:policy/AmazonEC2FullAccess - "Provides full access to Amazon EC2 via the AWS Management Console.") to him, will he be able to access secrets stored inside the Linux EC2 instances created in the past? Basically, does this policy somehow allow access to pre-created Linux instances?

EDIT: what if he/she attempts a disk recovery procedure? For example, mounting the disk of a VM in a new EC2 instance.
AmazonEC2FullAccess and security
I'm assuming you are using the AWS Management Console. These operations are also possible using the Command Line Interface or AWS CloudFormation.

To resize an instance, you have to stop it, then go to Actions > Instance Settings > Change Instance Type. As you can see, this operation is not automatic. In AWS you don't autoscale an instance but an autoscaling group, which is a group of instances. So according to your memory/CPU usage, you can automatically start new instances (but not increase the size of the current ones).

To create an autoscaling group, go to Auto Scaling Groups in the EC2 menu. You will need to create a Launch Configuration first, which describes the properties of the instances you want to automatically scale. Then you will be able to define your scaling policies based on your CloudWatch alarms (CPU usage, instance status...).
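For reference, a rough CLI sketch of the same two steps (launch configuration, then the auto scaling group); every ID and name below is a placeholder you would replace with your own:

aws autoscaling create-launch-configuration \
    --launch-configuration-name my-launch-config \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --key-name my-key \
    --security-groups sg-12345678

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-launch-config \
    --min-size 1 --max-size 4 --desired-capacity 1 \
    --availability-zones us-east-1a us-east-1b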
I am currently on the t2.micro and I read that Amazon offers an auto scaling option that lets the server expand/shrink according to the traffic, which is perfect. So my questions are:

- What exactly should I do in order to enable auto scaling/resizing of the server when needed or when the traffic starts to spike?
- Is there an option that changes the instance type automatically?
- Auto scaling, I believe, means adding more instances and balancing the load between them, so does this mean I need a background in load balancing and all the jargon that comes with it, or does Amazon take care of that automatically?

I am totally new to the whole server maintenance/provisioning land, so please try to explain as simply as possible. Also, the only reason I went with Amazon is the automation capabilities it offers, but sadly their docs are very complex and many things could go wrong.
how to auto resize/scale amazon aws ec2 instance
Use this policy; it will grant full access to the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME/*"
        }
    ]
}
The profile I configured has AdministratorAccess, and the bucket has the following policy configured:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "myPolicy",
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::bucket-name/*",
                "arn:aws:s3:::bucket-name"
            ]
        }
    ]
}

In the grantee section: Everyone, with all four operations. I cannot imagine a bucket more open than that, so why do I still get the error:

A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
The aws-java-sdk allows you to do that. Similar to http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/JavaDocumentAPIWorkingWithTables.html you can do either

AmazonDynamoDB dynamoClient = new AmazonDynamoDBClient();
DescribeTableResult result = dynamoClient.describeTable("MyTable");
Long readCapacityUnits = result.getTable()
    .getProvisionedThroughput().getReadCapacityUnits();

or

AmazonDynamoDB dynamoClient = new AmazonDynamoDBClient();
DynamoDB dynamoDB = new DynamoDB(dynamoClient);
Table table = dynamoDB.getTable("MyTable");
Long readCapacityUnits = table.describe()
    .getProvisionedThroughput().getReadCapacityUnits();

DynamoDB is a higher-level wrapper, which sometimes has simpler APIs; AmazonDynamoDBClient is a rather direct implementation of the HTTP APIs. For more on autoscaling DynamoDB, see "How to auto scale Amazon DynamoDB throughput?"
Is it possible for my (Java) app to check DynamoDB's provisioned throughput for reads/writes? For stability reasons it would be useful if I could get these numbers programmatically. I am aware that if I get a ProvisionedThroughputExceededException then I have exceeded my limit, but is there a way to find out what my read/write limits are before that happens? I have also found some docs referring to describing limits, but this doesn't seem to correspond to anything I can use in code. This is the first time I've used DynamoDB, so if this is fundamentally bad practice, please say! Cheers
Is it possible to check what the provisioned throughput is for DynamoDB?
Create a new AMI image:

- Go to the AWS console
- Go to your EC2 dashboard: in the instances list, select your instance
- Right-click on your instance
- Select Image / Create Image

This will create an AMI image that you can reuse later.
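The same thing can be done from the AWS CLI if you prefer scripting it; a sketch with placeholder values:

aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-configured-ubuntu" \
    --description "Ubuntu with my compilers and packages pre-installed"

By default the instance is rebooted during image creation; there is a --no-reboot flag if that matters to you.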
So, I am using an Ubuntu AMI (Amazon Machine Image). I have installed a couple of compilers, software packages, etc. on this image. Now I want to save this image so that I can use it to spawn multiple identical instances. How can I create and save an image of my current instance in order to replicate it into multiple identical instances?
How do I save a customized AWS EC2 image (AMI)?
aws ec2 describe-instances \
    --query "Reservations[*].Instances[*].PublicIpAddress" \
    --output=text

Another way is:

curl --silent http://ipecho.net/plain

This will return the public IP of the instance.
My AWS command is:

> aws ec2 run-instances --image-id ami-346b2354 --count 1 --instance-type c4.large --key-name my-cali-key --security-group-ids sg-a168c7c4

When I trigger this, the JSON data that it returns has only the private IP address and NOT the public IP address.

............. DATA OMITTED ...............
"Groups": [
    {
        "GroupName": "launch-wizard-1",
        "GroupId": "sg-a168c7c4"
    }
],
"SubnetId": "subnet-cb2524ae",
"OwnerId": "012710546082",
"PrivateIpAddress": "172.31.17.252"
}
............. DATA OMITTED ...............

(Image) Browser view of the same instance with the public IP.

However, when I look in the browser I immediately notice a public IP address being associated automatically. How can I fetch the public IP of launched instances? Kindly do not confuse this with Elastic IPs and their association with instances. I know how to associate an Elastic IP, but the need here is different.
How to get Public IP via AWS CLI while launching
To copy locally:

aws s3 sync s3://origin /local/path

To copy to the destination bucket:

aws s3 sync /local/path s3://destination
I need to get the contents of one S3 bucket into another S3 bucket. The buckets are in two different accounts. I was told not to create a policy to allow access to the destination bucket from the origin bucket. Using the AWS CLI, how can I download all the contents of the origin bucket and then upload the contents to the destination bucket?
How to download all the contents of an S3 bucket and then upload to another bucket?
Which user are you logged in as (find out with whoami)? The user and group of these directories are root, and there is no write permission for "other". Possible solutions:

- Use sudo in front of the command to execute mkdir as root
- Change the owner/group to give your user/group write permission, using chown/chgrp
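Since your update says the error comes from PHP rather than from your shell, the user that needs write access is the web-server user, not your login user. A sketch of the second option, assuming the web server runs as www-data (it may be apache on Amazon Linux):

# give the web-server user's group write access to the web root
sudo chgrp -R www-data /var/www/html
sudo chmod -R g+w /var/www/html

# or simply hand over ownership
sudo chown -R www-data:www-data /var/www/html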
I am getting the following error in my code: "Message: mkdir(): Permission denied". I have tried other solutions which I found on Stack Overflow, and the following commands:

$ ls -ald /var/www
drwxrwsr-x 7 root root 4096 Apr 23 13:58 /var/www
$ ls -ald /var/www/html
drwxrwsr-x 4 root root 4096 Apr 26 10:02 /var/www/html
$

I referred to the AWS document below: AWS User Guide.

Update: I can create the directory from the command line. The problem is that when I execute my PHP code I get this error.
Message: mkdir(): Permission denied AWS ec2
Specify a prefix for the load, and all Amazon S3 objects with that prefix will be loaded (in parallel) into Amazon Redshift. Examples:

copy mytable FROM 's3://mybucket/2016/' will load all objects stored in mybucket/2016/*
copy mytable FROM 's3://mybucket/2016/02' will load all objects stored in mybucket/2016/02/*
copy mytable FROM 's3://mybucket/2016/1' will load all objects stored in mybucket/2016/1* (e.g. 10, 11, 12)

Basically, it just makes sure the object starts with the given string (including the full path). This also means that if you have something like mybucket/wallet and a mybucket/walletinventory, the rule can match both, so be careful with names when using the COPY command from S3.
Is it possible to copy all files under a root directory/bucket? Example folder structure:

/2016/01/file.json
/2016/02/file.json
/2016/03/file.json
...

I've tried with the following command:

copy mytable FROM 's3://mybucket/2016/*' CREDENTIALS 'aws_access_key_id=<>;aws_secret_access_key=<>' json 's3://mybucket/jsonpaths.json'
Redshift copy command recursive scan
The config files must be part of an .ebextensions directory added to your project sources. When deploying your code using the eb create / eb deploy command line, these commands use the git archive command to package your code and upload it to Elastic Beanstalk for deployment (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-deploy.html). When your .ebextensions directory is not under git control (e.g. if it's included in your .gitignore), the directory and its config files are not packaged and sent to Elastic Beanstalk. Be sure that you add and commit the .ebextensions directory before you deploy:

git add .ebextensions/*
git commit -m "add eb config files"
I have the following lines in my config file:

commands:
  install_packages:
    command: sudo yum -y install libxml2-devel libxslt-devel

I have also tried it this way:

packages:
  yum:
    libxml2-devel: []
    other package

Both of these seem to fail to install the packages. I am trying to install the pyusps Python module, and the installation fails without those packages. I can SSH in and install them manually, and then pyusps will install. I am not sure where I am going wrong here. Thanks
Packages not installing through Elastic Beanstalk config.yml
I found the answer in one of the issues of mailcomposer. You need to add one extra config option in the mail options:

var mail = mailcomposer(options);
mail.keepBcc = true;
I am sending a mail with attachments using mailcomposer and the sendRawEmail method of the AWS SDK. I am able to send the emails using the to and cc fields, but when I add an address in bcc, the mail does not get delivered. There is no failure though. Is there any extra configuration not mentioned in the docs that I might be missing?
Node.js: AWS SES sendRawEmail: mails not getting sent to BCC addresses
Relax. It is not very difficult, and there is no need to terminate the instance. Always keep multiple sessions open before changing keypairs, so that if you make a mistake you can use the other ssh sessions to restore access. In this example I am assuming the user is ubuntu, but this applies to any user.

1. Take an AMI of the machine in case something goes wrong.
2. Generate a new keypair. Keep the private key in a safe location. Let the public key be key.pub.
3. Edit the /home/ubuntu/.ssh/authorized_keys file and replace the contents of the file with the contents of key.pub.
4. Try ssh into the machine with the new private key.
5. If you can ssh with the new key, reboot the machine so that all active ssh sessions are closed.
6. If you cannot ssh with the new key, go back to step 2 and see what went wrong.
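A sketch of steps 2-4 from the command line; the key file names, user, and host are placeholders:

# 2. generate a new keypair locally
ssh-keygen -t rsa -b 4096 -f new-key        # produces new-key and new-key.pub

# 3. replace the authorized key on the instance, from an existing working session
cat new-key.pub | ssh -i old-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com \
    'cat > ~/.ssh/authorized_keys'

# 4. test the new key in a NEW terminal before closing the old session
ssh -i new-key ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com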
If someone else has possession of our private key for an ec2 instance, and so the key pair is no longer safe to use, how should we proceed?From what I understand, there is no way to replace the key pair once the instance has started running. The only option I think is to somehow replace the current ec2 instance with a new one (a copy, containing the exact same data/volume and hopefully with the same ip address), but I'm not sure how to do this safely (and preferably with minimal downtime of our servers). This way, a new key pair can be generated and the old one will become obsolete, once the old instance is terminated. Otherwise, another option may be to somehow disable the current key pair and add a new one, but I have no idea how to do that either (and it's probably not the best long term solution either).Can someone provide me instructions for the best solution in this scenario? And hopefully let me know if I'm on the right track. Working with aws can be dangerous so I want to make sure I'm doing this correctly.
Amazon AWS key pair is no longer safe to use. How do I disable/replace this key pair on an ec2 instance?
In your Firebase dashboard, there's an export button in the upper right corner. That will export your JSON data into a flat file, which you can then either import into another database that reads JSON data, or craft some code to massage into a compatible format. You can then use any database engine that works for you. Please also review the comment from Kato below, as there is a limit to how much can be exported from the dashboard.
I am a beginner and I am going to implement an Android application that uses Firebase, but I don't know what to do if I want to move all my database to another server like Amazon Web Services. Is it possible, and what type of database engine can I use?
How to move database from Firebase to another server?
You would forward port 5000 to different instance ports (since you can't bind 80 multiple times). You could then use an ELB to map across the ports; this post covers the specifics. You'd want to standardize the ports of service 1 across the cluster so that you could bind an ELB to it, i.e. ELB port 80 can't be mapped to both 5000 and 5001. So port 5000 would be forwarded on both instances.
Say I have an autoscaling group with an initial number of 2 instances. Assume that the instances of this autoscaling group are of the same type (hence the same amount of memory and CPU). The maximum number of instances isn't relevant in this case. I also have an ELB load-balancing among the instances of this group. Besides this, the instances of this autoscaling group are members of a fresh ECS cluster I created earlier. There is only one task definition in this case, with only one container, which would use 512 MB of RAM. This container also requires a port mapping from the host's 80 to the container's 5000.

Say I've spun up this autoscaling group and the 2 initial instances are now ready to be used. Then I try to spawn a service of 4 tasks based on the aforementioned task definition. Imagine that these tasks would perfectly fit the 2 instances if they were placed two per host (if the hosts had 1 GB of RAM each). Would this setup even be legitimate? If so, what would happen with the port mappings, given that there would be 2 identical containers on one host?
Several Amazon ECS tasks on the same instance host
Find and select the EC2 Auto Scaling group for the Elastic Beanstalk environment in the AWS Console. It should be named something like awseb-e-*-stack-AWSEBAutoScalingGroup-*. Press the "Edit" button. Change the "Desired capacity" to 0.
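The same change can be scripted with the AWS CLI; a sketch, with the group name being a placeholder for the real awseb-... name you find in the console:

aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name awseb-e-xxxxxxxx-stack-AWSEBAutoScalingGroup-XXXXXXXXXX \
    --min-size 0 --max-size 0 --desired-capacity 0

Keep in mind Elastic Beanstalk may restore its saved configuration the next time the environment is updated, so treat this as a temporary pause rather than a persistent setting.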
In other words I would like to temporarily turn off an environment (and its associated billing costs) but not delete it entirely.It seems if I set the [Configuration > Web Tier > Scaling > Minimum instance count] to 0 along with the related "Maximum instance count", AWS rejects those settings as invalid. Ditto for questionable values like 0.1.Any ideas for temporarily taking an Elastic Beanstalk environment out of service?
How do I set the instance count of an Elastic Beanstalk environment to 0?
If you want to use SSL on an EC2 instance directly, you must obtain and install a certificate through the application running on your instance (e.g. Apache, Nginx). There is nothing special required because your instance is running on AWS. You will not be able to use the free certificates provided by AWS Certificate Manager; they cannot be exported for use with services other than ELB and CloudFront.
I have an EC2 instance which runs a website I want to add an SSL certificate for. From Amazon's documentation and other sources the only way they have stated an SSL certificate can be added is through:CloudfrontElastic Load BalancingI am not already using these for my website due to the added cost of these services. Is there another method of adding an SSL certificate without using Cloudfront or ELB? Thanks.
Installing SSL certificates on AWS EC2 Instance not using Cloudfront or Elastic Load Balancing
"but the https thing is crossed out in red"

I'm not sure what you mean by that. Is it red in your browser's URL bar?

"Anyway to use Cloudfront or ACM to generate the SSL for API Gateway endpoint?"

Not currently. We're looking into integrating API Gateway with AWS Certificate Manager (ACM), but we don't yet have an estimated delivery date.

"everytime I select alias before adding CNAME record, the cloudfront endpoint doesnt appear"

We're hoping to address that when we add the ACM integration.
I've managed to set up a custom domain for my API Gateway endpoint, but the https indicator is crossed out in red. I generated my certificates from Let's Encrypt for *.mydomain.com. The API Gateway endpoint is api.mydomain.com/prod/. Is there any way to use CloudFront or ACM to generate the SSL certificate for an API Gateway endpoint? Must I use Let's Encrypt? I find it odd that there's no drop-in support for adding custom domains via Route 53 and API Gateway (every time I select Alias before adding the CNAME record, the CloudFront endpoint doesn't appear).
How to create an SSL AWS API Gateway endpoint with custom domain?
The API Gateway management console has a nice 'Enable CORS' feature, which you may have seen. As far as replicating it using the CLI, I'd suggest using the console feature, observing the resulting configuration, and using the same parameters in the CLI requests. The error you're seeing must be caused by incorrect escaping of the single quotes for the value '*', because just the character * would not be valid. Also, to point out another potential problem: the input --response-templates '{"application/json":"Empty"}' is valid but is not interpreted the same way as --response-models in a method-response object. That value will set the response body to "Empty" for all API calls that use that integration response. To do a passthrough, just don't set a value for --response-templates.
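For what it's worth, one way to write the escaping in bash so that the JSON value actually arrives as '*' (a sketch of the asker's own command, with the template flag dropped to get passthrough behavior):

aws apigateway put-integration-response \
    --region "$region" \
    --rest-api-id "$api_id" \
    --resource-id "$resource_id" \
    --http-method "$method" \
    --status-code 200 \
    --response-parameters \
    '{"method.response.header.Access-Control-Allow-Origin": "'\''*'\''"}'

Each '\'' sequence closes the single-quoted string, emits a literal single quote, and reopens it, so the parameter value ends up as "'*'", which is the literal value the console's Enable CORS feature uses.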
I'm attempting to set up CORS on API Gateway from the command line using the AWS CLI in a deployment script. I have created the POST resource with the following shell command (generated from Perl). I'm attempting to set the integration response to '*', much like enabling CORS in the console would do.

aws apigateway put-method-response \
    --region "$region" \
    --rest-api-id "$api_id" \
    --resource-id "$resource_id" \
    --http-method "POST" \
    --status-code 200 \
    --response-models '{"application/json":"Empty"}' \
    --response-parameters '{"method.response.header.Access-Control-Allow-Origin":true}'

When I run the following command to set the integration value:

aws apigateway put-integration-response \
    --region "$region" \
    --rest-api-id "$api_id" \
    --resource-id "$resource_id" \
    --http-method "$method" \
    --status-code 200 \
    --response-template '{"application/json":"Empty"}' \
    --response-parameters \
    '{"method.response.header.Access-Control-Allow-Origin": "'*'"}'

I get the following error:

A client error (BadRequestException) occurred when calling the PutIntegrationResponse operation: Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: *]

Can anyone tell me what this error is really referring to, or suggest a better way to go about an API Gateway deployment script?
api gateway CORS setup
s3/get-object can also take keyword arguments:

(require '[amazonica.aws.s3 :as s3])
(s3/get-object :bucket-name "my-bucket" :key "foo")

You can add additional keyword arguments for any accessors on GetObjectRequest. In this case, you want the method SdkClientExecutionTimeout to be called, so do this:

(s3/get-object :bucket-name "my-bucket"
               :key "foo"
               :sdk-client-execution-timeout 10000)
I use Amazonica to download an object from S3:

(require '[amazonica.aws.s3 :as s3])
(s3/get-object "my-bucket" "foo")

However, sometimes the download hangs. How can I set a timeout?
How do I set a timeout when getting an object from S3 with Amazonica?
Have you tried simply renaming the file with a .pem extension? i.e. get rid of the .txt? My .pem file is also a text file (though not named as such) and it works just fine.
I am trying to launch an AWS EC2 server. I got a key pair, but my key looks like privatekey.pem.txt. If I open it with a text editor it looks like a normal key, but how can I generate a .pem file from it?

-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAh89
...
AWS EC2 pem key in txt
By default, DescribeInstanceStatus only captures instances that are running. You can set the property IncludeAllInstances in the request to true to change this. From the documentation:

IncludeAllInstances
When true, includes the health status for all instances. When false, includes the health status for running instances only.
Default: false

Code example:

DescribeInstanceStatusRequest rr = new DescribeInstanceStatusRequest()
{
    IncludeAllInstances = true
};

Reference: AWS Documentation - DescribeInstanceStatusRequest
In the AWS console, you can see which instances are online, which are shutting down, and which are shut down. I'm trying to replicate this functionality in my application, but the EC2 API doesn't seem to cooperate. Here's what I'm doing:

DescribeInstanceStatusRequest rr = new DescribeInstanceStatusRequest();
rr.InstanceIds = new List<string>(new[] { instanceId });
var status = ec2.DescribeInstanceStatus(rr);
List<InstanceStatus> statusses = new List<InstanceStatus>();
foreach (var s in status.InstanceStatuses)
{
    if (s.InstanceId == instanceId)
    {
        statusses.Add(s);
    }
}
if (statusses.Any())
{
    var instanceStatus = statusses.First();
    ...
}

This works fine when the instance is online, but as soon as I request to shut it down, the instance disappears from the info. How do I get info for all instances, including those shutting down, shut down, and terminated?
How to use EC2 api to tell instance status?
Figured this out. Sometimes just posting the question makes you think about it differently! I wasn't passing the Key correctly.

dd.updateItem({
    'Key': {
        'hashAttributeName': { 'S': payload.identityId }
    },
    'TableName': 'Users',
    'UpdateExpression': 'SET testVal = :testVal',
    'ExpressionAttributeValues': {
        ':testVal': { 'S': 'This is a test' }
    }
});
Trying to create a Lambda to update DynamoDB from a Kinesis stream. Here is my update statement:

var response = dd.updateItem({
    'Key': { 'S': payload.identityId },
    'TableName': 'Users',
    'UpdateExpression': 'SET testVal = :testVal',
    'ExpressionAttributeValues': {
        ':testVal': { 'S': 'This is a test' }
    }
});

That generates 47 error messages:

InvalidParameterType: Expected params.Key['S'] to be a structure
UnexpectedParameter: Unexpected key '0' found in params.Key['S']
UnexpectedParameter: Unexpected key '1' found in params.Key['S']
UnexpectedParameter: Unexpected key '2' found in params.Key['S']
UnexpectedParameter: Unexpected key '3' found in params.Key['S']
...
UnexpectedParameter: Unexpected key '44' found in params.Key['S']
UnexpectedParameter: Unexpected key '45' found in params.Key['S']

The Users table exists and is currently empty. I've double checked that the identityId exists (and is valid). Can anyone see what I'm doing wrong here?
DynamoDB updateItem failing
If you're using the most recent (3+) release of the Elastic Beanstalk command line tool, the way to push updates is "eb deploy". Earlier versions used "eb push".
This is my first Python Flask app on AWS. It has caused headaches. The procedure that I have followed is:

mkdir myapp && cd myapp
virtualenv venv
source venv/bin/activate
pip install Flask SQLAlchemy twilio psycopg2
pip freeze > requirements.txt
mkdir .ebextensions
cd .ebextensions
nano application.config   # content of this file below

packages:
  yum:
    postgresql93-devel: []
option_settings:
  - option_name: MANDRILL_APIKEY
    value: my_value
  - option_name: MANDRILL_USERNAME
    value: my_email_address

cd ..
deactivate
eb init
eb create

After a whole range of problems, including with option settings and psycopg2, the above worked. Now the issue is how to update when I make changes to the app on my local machine. I have tried as follows:

git init
eb init
git add .
git commit -m "my first update"
git aws.push

which does not work and returns an error message saying that "git aws.push" is not a legal command (or something like that). I have also tried "eb push". So, 2 questions here:

1. Why is the above procedure with git failing?
2. What is the correct way to push updates or changes to Elastic Beanstalk?

Thank you, all help gratefully received.
How can I update a python Flask app on elastic beanstalk?
Your user data script is actually run. However, it runs in its own bash process, which dies at the end of your script. Exported variables are preserved only for the lifetime of your script, and they are visible only to child processes of your script. Since new connections to your EC2 instance are not children of the original script that ran the user data, they don't inherit the exported variables.
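One common workaround, sketched below, is to have the user data script write the variables somewhere that later login shells will source, for example a file under /etc/profile.d (the values are just the ones from the question):

#!/bin/bash
# persist the variables for future login shells instead of exporting them
# only inside the short-lived user-data process
cat > /etc/profile.d/mongodb.sh <<'EOF'
export MONGODB_HOST=www.mongodb.com
export MONGODB_PORT=12345
export MONGODB_USER=user
export MONGODB_PASS=pass
EOF
chmod 644 /etc/profile.d/mongodb.sh

Note this only affects interactive/login shells; a daemon started by init will still need the variables supplied through its own service configuration.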
I am deploying my code to AWS EC2. The documentation says there's something called "user data" or "user data scripts": you can enter this info when you're launching an EC2 instance and the script will be executed at instance startup. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts

The following is in my user data script:

#!/bin/bash
echo 1111 >> /home/ubuntu/1111.txt
export MONGODB_HOST=www.mongodb.com
export MONGODB_PORT=12345
export MONGODB_USER=user
export MONGODB_PASS=pass

So when I launch the instance with this user data script I would expect to see the environment variables being set, but they weren't. Is there something that I did wrong?
AWS EC2 set environment variables
No, currently you cannot specify EBS volumes in Data Pipeline's Ec2Resource object. Often the root volume is used as the staging directory for most Data Pipeline activities, and it is currently limited to 8 GB for Data Pipeline's default AMIs. So you can make your own AMI out of an EC2 instance with an increased EBS root volume and include that AMI in the resource object (Image-id field) of your Data Pipeline.

Tip: You can check the AMI ID of Data Pipeline-launched EC2 instances in the EC2 console. Use that AMI to create an EC2 instance with an increased EBS root volume and make an image (AMI) from that instance with the larger volume size. This way you don't need to choose from a list of AMIs, and you preserve the virtualization type required for launching particular instance types.
When I try to create an EC2 resource with an AWS Data Pipeline, I don't see an option for defining the EBS volume that will be associated with that compute engine. Is it possible to set the volume size? If yes, can someone give me an example?
How can I specify EBS Volume when adding a EC2 Resource to AWS Data Pipeline?
This is incorrect:

InvocationType: 'RequestResponse'

You should use:

InvocationType: 'Event'

From http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax: "By default, the Invoke API assumes 'RequestResponse' invocation type. You can optionally request asynchronous execution by specifying 'Event' as the InvocationType."
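In terms of the question's own snippet, only the params need to change; a sketch (the LogType comment reflects that Tail only applies to synchronous invocations):

var params = {
    FunctionName: 'cctv',          // name taken from the question
    InvocationType: 'Event',       // asynchronous: Invoke returns immediately
    LogType: 'None'                // 'Tail' is only honoured for RequestResponse calls
};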
I have an AWS Lambda function that needs ~30 seconds. When I connect it to API Gateway, it sends a 504 because of the 5-second timeout, so my EasyCron job is failing and will not retry (I only have a free plan). So I need an API that sends a correct 200 status. My idea: invoke the long-running Lambda via a short-running Lambda. The policy allows the invocation. Here is the code:

var AWS = require('aws-sdk'),
    params = {
        FunctionName: 'cctv',
        InvocationType: 'RequestResponse',
        LogType: 'Tail'
    },
    lambda;

AWS.config.update({region: 'us-east-1'});
lambda = new AWS.Lambda();

exports.handler = function (event, context) {
    'use strict';
    lambda.invoke(params, function (err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log(data);
        }
    });
    context.succeed('hey cron job, I think my lambda function is not called');
};

But I think context.succeed() aborts the execution of lambda.invoke(). Do you have any idea how to solve this?
aws lambda: invoke function via other lambda function
I'm afraid you might be out of luck:

"When you launch an instance, you should specify the name of the key pair you plan to use to connect to the instance. If you don't specify the name of an existing key pair when you launch an instance, you won't be able to connect to the instance. When you connect to the instance, you must specify the private key that corresponds to the key pair you specified when you launched the instance. Amazon EC2 doesn't keep a copy of your private key; therefore, if you lose a private key, there is no way to recover it. If you lose the private key for an instance store-backed instance, you can't access the instance; you should terminate the instance and launch another instance using a new key pair. If you lose the private key for an EBS-backed Linux instance, you can regain access to your instance. For more information, see Connecting to Your Linux Instance if You Lose Your Private Key."

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
I have lost the private key of my AWS instance. I searched for the option in the console panel.
How to recover a lost private key for an AWS EC2 instance?
I managed to solve this using the following code:

file = params[:file]
s3 = Aws::S3::Resource.new
obj = s3.bucket('my_bucket').object("my_path").upload_file(file.tempfile)
I'm trying to upload an image to S3 using the Ruby AWS SDK. I'm able to upload the base64 string if I don't set the content_type. If I do set the content_type to image/png, the upload is just the generic image thumbnail.

obj = #<Aws::S3::Object bucket_name="mybucket", key="test">
>> params[:file]
>> "data:image/png;base64,iVB...."
obj.put(body: params[:file], content_type: 'image/png', content_encoding: 'base64')

How can I upload a base64 string to S3? I'm also open to uploading bytes if that's more straightforward.
Upload bytes or base64 string to s3 ruby sdk
No, each Cloudsearch query is for data within a single domain.
My team's application has numerous data types across 11 tables in our application database. To implement an efficient keyword search across specific fields on all of these types, we are exploring AWS CloudSearch as one option. Our intention is to return relevant results across all record types for a given keyword search.My understanding is that each record type (each table) would end up in a separate CloudSearch domain. If that is the case, does the service allow for a search across multiple domains? Or would multiple requests need to be submitted and combined after they return?Please correct me if I am mistaken at any point above. I have searched the CloudSearch documentation generally for a hint about this, but have not come to any conclusion.Side Notes:Our alternative is a non-self-hosted ElasticSearch service, which would solve this problem. However, our application ecosystem is currently hosted exclusively within a handful of AWS services. The advantages and disadvantages to CloudSearch vs ElasticSearch are unclear in this regard. If an endorsement can be made with a technical reason relating to the above, I would appreciate it. Though, I respect that this is not the place for a general pros vs cons discussion.
Does AWS CloudSearch allow searching multiple domains in the same query?
This code worked for me. You can use it to receive and process DynamoDB events in a Lambda function:

public class Handler implements RequestHandler<DynamodbEvent, Void> {

    @Override
    public Void handleRequest(DynamodbEvent dynamodbEvent, Context context) {
        for (DynamodbStreamRecord record : dynamodbEvent.getRecords()) {
            if (record == null) {
                continue;
            }
            // Your code here
            // Write to Table B using DynamoDB Java API
        }
        return null;
    }
}

When you create your Lambda, add the stream from table A as your event source, and you're good to go.
I'm trying to create a DynamoDB trigger using DynamoDB Streams and AWS Lambda. I researched a lot but I couldn't find any way to read and process a DynamoDB Stream event in Java 8. I'm completely new to both these technologies so don't know how to work with this.Essentially, what I want to do is create a record in table B whenever a record is created in table A.Could any of you please point me to a code or post that handles this use case in Java?Thanks :)
Setup DynamoDB Trigger using Lambda
Due to the way Google handles client IDs, we actually recommend that customers use our generic OpenID Connect support when configuring their identity pool for Google login.

1. Go to the AWS IAM Console's identity provider section.
2. Create an OpenID Connect identity provider with provider URL https://accounts.google.com and Audience set to one of the client IDs.
3. Follow the steps to create the identity provider; later you will have an option to add additional client IDs.
4. Go to the Amazon Cognito Console.
5. Create or edit an identity pool and add the OpenID Connect identity provider to the pool (it should appear under OpenID Connect providers).

If at a later date you add iOS or web support, create your new client IDs in the Google console and add them to your OpenID Connect provider in the IAM console.
I have an Android project and I am trying to authenticate with AWS Cognito via Google Plus. I have set up Facebook authentication and that is working, but when I log in with Google Plus I get a 400 Unauthorized error. At the moment I have to set my identity pool to 'Enable access to unauthenticated identities' so that Google Plus users can use the app without getting an unauthorized exception. The token coming back from logging in to Google Plus is fine; it also gets the user's profile and details, so I think it has something to do with IAM and maybe the 'Google Client ID' in the 'Edit identity pool' section of the AWS dashboard. At the moment I have my OAuth 2.0 service account client ID from my Google Developers Console as the 'Google Client ID' in the 'Edit identity pool' section. Someone please help :)
Why Doesn't AWS let me authenticate with Google Plus?
A fix for this bug is built into version 3.2.1 and greater. Additionally, your message attributes array doesn't match the format shown in the documentation. Your attributes should look like this:

$attributes = [
    '<attribute name>' => [
        'DataType' => 'String',
        'StringValue' => '<attribute value>',
    ],
];
I need to implement sending messages to SQS with attributes. The body of the message is uploading fine, but I have a problem with attributes. Message attributes require an associative array with the name of the attribute, data type, and value. I get this kind of error:

AWS HTTP error: Client error: 400 InvalidParameterValue (client): The request must contain non-empty message attribute name.

The function for sending messages:

public function uploadMessage(DataTransferObjectInterface $dataTransferObject)
{
    $command = $this->client->getCommand(
        'SendMessage',
        [
            'QueueUrl' => $this->queueUrl->value(),
            'MessageBody' => $dataTransferObject->getBody()->value(),
            'MessageAttributes' => $dataTransferObject->getAttributes(),
        ]
    );
    $this->client->execute($command);
}

The function getAttributes() returns the $attributes array, and this is the test where I run the code:

$attributes = [
    'TestName' => [
        'Name' => 'test',
        'DataType' => 'string',
        'Value' => 'string',
    ]
];
$age = array("Peter" => "35", "Ben" => "9", "Joe" => "43");
$json = json_encode($age);
$body = Json::get($json);
$dto = new DataTransferObject($attributes, $body);
$uploader = new SQSManager($sqsClient, $queueUrl);
$uploader->uploadMessage($dto);

How should the $attributes array look?
Amazon SQS send message attributes in php
Use hostvars[inventory_hostname]['ec2_tag_xxx'], where xxx is the tag in question.
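A small sketch of how that looks inside a play, assuming the EC2 dynamic inventory (ec2.py) is being used and the tag from the question is literally named app, so the generated variable becomes ec2_tag_app:

- name: show which app runs on this host
  debug:
    msg: "This host runs {{ hostvars[inventory_hostname]['ec2_tag_app'] }}"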
I would like to pull values from AWS EC2 tags via Ansible. This works:

{{ hostvars[host]['ec2_private_ip_address'] }}

and will return the IP address, so I'm getting EC2 data. However, I have a tag called app on my EC2 instances with values like Cassandra or Postgres, and I need to find out, for each host I'm currently processing, what app is associated with that host. Any ideas on how to grab the value of an EC2 tag?
How to pull AWS tag value from host_vars with Ansible
Is it possible to limit the number of objects and the size of each object one can upload to S3? In general, no. You cannot do this through any kind of bucket policy. You can, however, limit an individual object upload size from browsers by using a pre-signed POST URL with a policy indicating a content-length-range. Alternatively, you could code this restriction into your JS client or a server that proxies your uploads. See the JS examples, a discussion of policies, and a related response on restricting object size via POST.
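For illustration, the policy document you sign for a browser-based POST upload can carry the size limit; a sketch with a placeholder bucket, prefix, expiry, and a 10 MB cap:

{
  "expiration": "2016-12-01T12:00:00.000Z",
  "conditions": [
    {"bucket": "YOUR_BUCKET_NAME"},
    ["starts-with", "$key", "uploads/"],
    ["content-length-range", 0, 10485760]
  ]
}

S3 rejects any POST whose file size falls outside the declared content-length-range, so the limit is enforced server-side rather than only in your JS.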
It is possible to use the JavaScript APIs to upload objects to S3, and it is possible to have fine-grained authorization using IAM policies. For instance, see this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ],
            "Effect": "Allow"
        }
    ]
}

inspired by their tutorial, which allows putting objects into the bucket YOUR_BUCKET_NAME. However, it is not clear to me whether it is possible to limit the number of objects and the size of each object one can upload. I have checked the list of conditions, but I didn't find anything useful on this.
Can I limit the size of an object put into S3 via the JavaScript API?
The problem is that the security group rules as currently constructed are blocking the AD traffic. Here are the key concepts:

- Security groups are whitelists, so any traffic that's not explicitly allowed is disallowed.
- Security groups are attached to each EC2 instance. Think of security group membership like having a copy of an identical firewall in front of each node in the group. (In contrast, network ACLs are attached to subnets. With a network ACL you would not have to specify allowing traffic within the subnet, because traffic within the subnet does not cross the network ACL.)

Add a rule to your security group which allows all traffic to flow within the subnet's CIDR block and that will fix the problem.
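For example, from the CLI (the security group ID and CIDR below are placeholders for your group and your subnet's CIDR block; this opens all protocols within the subnet):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "10.0.1.0/24"}]}]'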
I'm unable to join an EC2 instance to my Directory Services Simple AD in Amazon Web Services manually, per Amazon's documentation.

- I have a security group attached to my instance which allows HTTP and RDP only from my IP address.
- I'm entering the FQDN foo.bar.com.
- I've verified that the Simple AD and the EC2 instance are in the same (public, for the moment) subnet.
- DNS appears to be working (because tracert to my IP gives my company's domain name).
- I cannot tracert to the Simple AD's IP address (it doesn't even hit the first hop).
- I cannot tracert to anything on the Internet (same as above).
- arp -a shows the IP of the Simple AD, so it appears my instance has received traffic from the Simple AD.

This is the error message I'm receiving:

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "aws.bar.com":
The error was: "This operation returned because the timeout period expired." (error code 0x000005B4 ERROR_TIMEOUT)
The query was for the SRV record for _ldap._tcp.dc._msdcs.aws.bar.com
The DNS servers used by this computer for name resolution are not responding. This computer is configured to use DNS servers with the following IP addresses: 10.0.1.34
Verify that this computer is connected to the network, that these are the correct DNS server IP addresses, and that at least one of the DNS servers is running.
Why can't I join my AWS EC2 instance to Active Directory?
Quick answers:

- No, the zip file does not need to include the node_modules folder. EB will run npm install for you.
- There are several ways to run a script at start. npm start would be one; you can also run custom commands.
- Yes, EB will run npm start; see Configuration Options for node.js.

The best answer would be to take a look at one of Amazon's sample apps, such as nodejs-example-express from "Deploying an Express Application to Elastic Beanstalk".
Should the zip file of my application include the node_modules folder? Should I be zipping up a top-level folder that contains all my application files or should I not include the top root folder like the instructions for amazon lambda?Do I have to set the web application port to an environment variable like in heroku? Does the app start by calling npm start and looking at the package.json or do I have to have a file called server.js like in opsworks?How can I have it run a small migration script before it starts - can I just put that in npm start?Can I get it to run npm install on deployment rather than copying over the node_modules folders?
what's in a node.js elastic beanstalk zip file and how do I get it to run a script on deploy?
To do it once:

eb deploy {environment-name} --region {region-name}

To always deploy to it:

eb init --region {region-name}
eb use {environment-name}

Then use:

eb deploy
I'm a little new to Elastic Beanstalk on AWS, so forgive me if this is a bit of a newbie question. We've got an instance of our product in a new region (EU) and I'm unsure how to bind a specific git branch to deploy to that environment (using CLI 3). Is it something best set up in the config.yml? Many thanks!
Deploy AWS Elastic beanstalk to an environment in different region
What I do is set the Lambda function to an SNS message event: when I upload to an S3 bucket, I send an SNS message from my server to the configured URL, and the message is the entire S3 path to the file, so Lambda can download it, resize it, and then upload it with a thumb_ prefix (or whatever). Hope it helps! This is 4 months late, but I hope it helps future visitors.
Currently I have two buckets in S3 (let's call them buck and buck_thumb). Right now, when I upload an image to the buck bucket, it triggers a Lambda function that resizes the image into a thumbnail and uploads the thumbnail into the buck_thumb bucket. But now I want it to work like this: when I send an image URL into the buck bucket, it downloads the image and resizes it. Is there a way? Can I do this using only one bucket?
Image resize using AWS lambda in same bucket
I found the answer here: TTL is needed, even though the docs wrongly state that it is not. https://serverfault.com/questions/649004/aws-cloudformation-returning-invalid-request-when-trying-to-create-a-awsrout
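Concretely, the record set from the question only needs a TTL added (the 300 below is just an example value):

"RecordSets" : [
    {
        "Name" : "mydns.privatehostedzone.",
        "Type" : "CNAME",
        "TTL" : "300",
        "ResourceRecords" : [ {"Fn::GetAtt" : ["myELB","DNSName"]} ]
    }
]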
I have an AWS CloudFormation template that includes this:

"myELB" : {
    "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
    Blah Blah Blah
},
"DatabaseDNSRecord" : {
    "Type" : "AWS::Route53::RecordSetGroup",
    "DependsOn": ["myELB"],
    "Properties" : {
        "HostedZoneId" : "Z19Y4P1DDQJADI", # obfuscated obviously.
        "RecordSets" : [
            {
                "Name" : "mydns.privatehostedzone.",
                "Type" : "CNAME",
                "ResourceRecords" : [ {"Fn::GetAtt" : ["myELB","DNSName"]} ]
            }
        ]
    }
},

When I run it, I get the following cryptic error:

Error Message: 18:59:16 UTC-0500 CREATE_FAILED AWS::Route53::RecordSetGroup DatabaseDNSRecord Invalid request

Can someone suggest what the problem is here? I don't see what I'm doing wrong. myELB is successfully created.
Why does AWS Cloudformation say "Invalid request" when trying to create this RecordSetGroup?
Applications deployed on Elastic Beanstalk should be stateless, i.e. they should not store state on the EC2 instances, such as a file or an in-memory HTTP session. AWS Elastic Beanstalk uses autoscaling behind the scenes to add or remove instances from your pool, depending on your application's workload. When you change the Elastic Beanstalk configuration, it also might replace instances with new ones. Best practice is to store your application state in a shared store, available to all your instances. Amazon S3 is a very good candidate for this; you just need to slightly modify your application to let users' browsers securely upload the files to S3. Look at the answer to this question for more details: "Amazon S3 direct file upload from client browser - private key disclosure". You can also implement client-side upload without using a form, by directly using Amazon's JavaScript SDK in the browser.
I have deployed my PHP application on Amazon EC2 using an Elastic Beanstalk instance. The filesystem structure of my app looks like this:

MyApp
|-css
| |-...
|-js
| |-...
|-uploads
| |- image.png
| |- file.pdf
| |- ...
|-index.php
|-...

My app allows users to upload images. It is a simple application to web-manage some files and is currently only used by my client. So when files are uploaded I place them under the uploads folder as shown above. The problem is that my files don't last in this folder for long. After a day or two, I ssh in and find the uploads folder is empty. I am not sure what happens, but I suspect Elastic Beanstalk creates a new instance and consequently overwrites the uploads folder contents. How do I fix this situation? Thanks.
Amazon EC2, Elastic Beanstalk: My images disappear
The database connection count is exactly what it sounds like: "The number of database connections in use." It's a count, so it shouldn't be summed; maximum or average are the recommended statistics. It may be registering low because you have a very efficient database pool, have server-level caching, or are looking at the wrong database in your statistics.
What is the meaning of the DB Connections (Count) report on AWS RDS? I have gone through their documentation but didn't find my answer there. I am quite confused by the DB Connections report on my AWS: I can see only 1 connection, but I am sure that there are always 100-150 concurrent users on my website on different pages which use database operations. If the user concurrency is 100-150, why does it show me only 1 connection in the report?

Note: My website is working well with good performance.
What is DB Connections (Count) in AWS?
Yes, when you pay for a reserved instance, you will be billed whether you use it or not, and you could theoretically terminate and create a new instance every day (week, month, hour, etc.) and still only pay for the single instance that you previously agreed to pay for, for the term you agreed to pay.

It's a bit tricky, but you need to wrap your mind around the fact that a Reserved Instance is a billing construct only; it has nothing to do with any particular instance you may have running. Paying upfront gives you the right to run an instance at a pre-agreed price. If you buy a single RI and then spin up 2 instances, one will automatically get billed under the RI contract you have, and the other will be billed at the On-Demand hourly rate. If you delete one of them (either of them), the hourly On-Demand billing goes away and you continue to be billed on the RI only.

Also, if you decide you are never going to need the RI you terminated, you can sell (through Amazon) the unused portion of your RI if you find you no longer need it; I get most of my reserved instances this way, and you can usually save a bit of money (as the buyer) and recoup some of your losses as the seller.
I am trying to understand Amazon EC2 reserved instances pricing structure. It is my understanding that the Reserved Instances are no more than a different pricing for my instances.My question is what happens if I pay upfront for an instance and later for whatever reason I need to terminate it before all of the period of the instance is completed? Can I use the remaining money paid on another instance or that is gone?Any clarification will be appreciated,Thanks!
What happens if I decide to terminate an instance for which I paid upfront?
When restoring RDS from a snapshot, a new database instance is created. If you only wish to copy a portion of the snapshot:

1. Restore the snapshot to a new (temporary) database.
2. Connect to the new database and dump the desired tables using pg_dump.
3. Connect to your staging server and restore the tables using pg_restore (most probably deleting any matching existing tables first).
4. Delete the temporary database.

pg_dump actually outputs SQL commands that are then used to recreate tables and restore data. Look at the content of a dump to understand how the restore process actually works.
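A sketch of the dump-and-restore steps with placeholder endpoints, credentials, and table names; this dumps the wanted tables from the current staging DB (Database 2) and restores them into the instance just restored from the Database 1 snapshot:

# dump only the tables you want to keep from Database 2
pg_dump -h database2.xxxxxx.rds.amazonaws.com -U myuser -d mydb \
    -Fc -t table_one -t table_two -f tables.dump

# restore them into the snapshot-restored instance, dropping matching tables first
pg_restore -h restored-from-snapshot.xxxxxx.rds.amazonaws.com -U myuser -d mydb \
    --clean -t table_one -t table_two tables.dump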
I have two databases on Amazon RDS, both Postgres. Database 1 and 2I need to restore an instance from a snapshot of Database 1 for my Staging environment. (Database 2 is my current Staging DB).However, I want the data from a few of the tables in Database 2 to overwrite the tables in the newly restored snapshot. What is the best way to do this?
Backup specific tables in AWS RDS Postgres Instance
S3's putObject() assumes either a Buffer or a UTF-8 string. I should have sent the binary as is, not as a "binary string", meaning using new Buffer(...) instead of new Buffer(...).toString("binary").
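In terms of the question's own code, the fix is just keeping the Buffer (a sketch; other parameters unchanged, base64 payload truncated here):

var png_file = new Buffer( "iVBORw0KGgo...", "base64" ); // keep it as a Buffer, no .toString("binary")

s3.putObject( {
    Bucket: bucket,
    Key: prefix + file,
    ContentType: "image/png",
    Body: png_file   // the Buffer is sent byte for byte
}, callback );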
Amazon S3 interprets my binary data as non-UTF-8 and modifies it when I write to a bucket. Example using the official S3 JavaScript client:

var png_file = new Buffer(
    "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==",
    "base64"
).toString( "binary" );

s3.putObject( {
    Bucket: bucket,
    Key: prefix + file,
    ContentType: "image/png;charset=utf-8",
    CacheControl: "public, max-age=31536000",
    Body: png_file
    // , ContentLength: png_file.length
}, function( e ){
    if ( e ) {
        console.log( e );
    } else {
        s3.getObject( {
            Bucket: bucket,
            Key: prefix + file
        }, function( e, v ) {
            if ( e ) { console.log( e ) } else { console.log( v.ContentLength ); }
        } );
    }
} );

This returns 105, while the original png_file is 85 bytes. S3 somehow modifies my file, and I think it has to do with charsets. If I uncomment the ContentLength line, I get a 400 error on putObject(): "The Content-MD5 you specified did not match what we received." I get the same result if I calculate the MD5 hash myself (instead of letting the S3 library do it) with ContentMD5: crypto.createHash("md5").update(png_file).digest("base64"). This seems to confirm a difference between the data I send and the data S3 receives. I have read a similarly titled issue, but it didn't solve the problem.
Sending binary data to Amazon S3 (Javascript)
Edit, new solution: the AWS SDK provides a pre-signed URL feature. With this you can generate temporary URLs to one specific file, only for your users. The use case scenario is this: the S3 objects remain private. When a user requests his images, you pass each link through the signing handler, which generates the temporary links. (These are set to expire and are cryptographically signed, so it is very hard for someone to hijack such a link.) The end user can then load and view the files in his browser directly from Amazon like nothing happened.

Old solution: another workaround might be to store all files in S3 without public access. Then create an IAM role for your application and give that role access to the bucket. Then create a method which reads the S3 files within your application and serves them to the user if he has the rights. Simple example:

S3 file path: //media/group1/folder1/1.png
Request: MyWebMethod/GetFile?internalPath="folder1/1.png"

{
    Int userId = customAuthentication.getUserId();
    String CompletePath = "media/group" + userId + "/" + internalPath;
    var image = AwsSdkClient.getFileFromS3(CompletePath);
    return image;
}

So by default each user can request files only within his group. A more advanced way is not to fetch the file to your server at all, but to create some kind of streaming pipe to the file. (I am not a Django developer, so I don't know if this is feasible.)
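A minimal sketch of the pre-signed URL approach in the asker's stack (Python, boto3). The bucket name, the group attribute, and the key layout are assumptions based on the question, not a definitive implementation:

import boto3

def signed_url_for(user, filename, expires=300):
    """Return a temporary URL for a private S3 object belonging to the user's group."""
    s3 = boto3.client("s3")
    key = "media/{}/{}".format(user.group_name, filename)  # e.g. media/group1/folder1/1.png
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": key},
        ExpiresIn=expires,  # seconds until the link stops working
    )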
My web app uses Amazon S3 to store all the media files for my Django web app, with the help of the django-storages 3rd party app. My DB handles the folder and file hierarchy, and each user sees just the links that belong to his group. But there are no per-group permissions on the folders in S3. For example: user 1's group is group1, user 2's group is group2, and an S3 link looks like https://s3.amazonaws.com//media/group1/folder1/1.png. User 1 will see this on his page inside the app. User 2 won't see this link, but if he goes to this link directly, there are no permissions on the file and it can be downloaded easily. My goal is to restrict folder permissions to each group. Is there a solution based on my AWS keys that does this automatically? I'm lost...
Restrict s3 file to Django group user
After lots of searching, it was the excerpt from this article that caused a eureka moment: "If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named `config`. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the `aws configure` command that you can use to set credentials from the command line will put the credentials in the `config` file, not the credentials file.)" By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't found any interface for maintaining ~/.aws/credentials other than hand-editing just yet.
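As a quick sanity check that the CLI and an SDK are reading the same shared file, here is a hedged sketch with the Python SDK (the same shared-credentials mechanism the JavaScript SDK uses); the profile name and the bucket listing are only examples:

```python
# Once ~/.aws/credentials holds a [default] profile, SDK calls need no keys in code.
import boto3

session = boto3.Session(profile_name="default")  # assumed profile name
creds = session.get_credentials()
print(creds.access_key[:4] + "...")              # confirms the shared file was read

s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```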
In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via `aws configure` are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
Configure AWS credentials to work with both the CLI and SDKs
Actually, if you are using Riak, it wouldn't be a proxy, it would be a completely different endpoint. So you should do it this way, with the `base_url` option:

```php
$s3 = S3Client::factory([
    'base_url' => 'http://127.0.0.1:8080',
    'region'   => 'my-region',
    'key'      => 'my-key',
    'secret'   => 'my-secret',
    'command.params' => ['PathStyle' => true]
]);
```

Using `'command.params'` allows you to set a parameter used in every operation. You will need to use the `'PathStyle'` option on every request to make sure the SDK does not move your bucket into the host part of the URL, like it is supposed to do for Amazon S3. This was all talked about in an issue on GitHub.
Is there any RiakCS S3 PHP client library out there? The best I could find was the S3cmd command line client software. Also, I've seen there is a Riak PHP client, but it looks like it has nothing related to S3. I've installed aws-sdk-php-laravel and used the same credentials as for RiakCS S3, but it doesn't seem to work. Error message below: "The AWS Access Key Id you provided does not exist in our records." Thank you for any guidance or advice.
RiakCS S3 PHP client library
AWS Route 53 Health Checks and DNS failover are effectively Azure Traffic Manager. The big difference is that Traffic Manager will happily provide a service that covers Azure endpoints and non-Azure endpoints, where AWS can only cover services hosted on AWS.
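For illustration only, here is a rough boto3 sketch of the Route 53 side of this (a health check plus a failover record set); the domain, IP addresses and hosted zone ID are placeholders, not values from the question:

```python
import boto3, uuid

r53 = boto3.client("route53")

# Health check against a hypothetical primary endpoint
check = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Mark the primary record as failover PRIMARY, tied to the health check
r53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE",  # placeholder
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": check["HealthCheck"]["Id"],
        },
    }]},
)
```

A matching SECONDARY record set pointing at the backup endpoint completes the failover pair.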
What is the Microsoft Azure Traffic Manager analog in AWS for EC2 instances?
What is microsoft traffic manager analog in amazon infrastructure?
As @joshubrown pointed out correctly, there is currently no API to get customer messages or send replies. You can, however, set up a separate mail account for that purpose, which (depending on your own server architecture) gives you a whole set of protocols like POP3, IMAP, SMTP or even MAPI. Regular emails sent to the "scrambled" customer email addresses will be relayed to the consumer and will show up as replies in your Seller Central.
My company is working with the amazon MWS api, and we are receiving a lot of messages from the customers about their orders.Is there any way to get these messages using the web services and reply to them?
Get and Send messages through amazon api
You can find the run logs for this at /var/log/cfn-init.log. In there I could see that the `mkdir` commands had worked initially but subsequently failed because the directory already existed. It turns out that ebextensions run commands in alphabetical order, so I had to rename the commands to `01command1:`, `02command2:`, etc. From that point on it worked fine. Something else that was confusing me is that the .ebextensions directory in my local git repo was not appearing in the target instance directory; this is because once it has been run, the deployment deletes the directory.
I've linked a git branch to my Elastic Beanstalk environment, and using `git aws.push` it deploys correctly. I've now added a .ebextensions directory which contains a config script that should be creating a couple of directories. However, nothing appears to be happening. I understand that the .ebextensions directory should be copied across to the EC2 instance as well, but I'm not seeing it. I've checked eb-tools.log and it's not mentioned in the upload. Is there something additional that's required? The script contains:

```yaml
commands:
  cache:
    command: mkdir /tmp/cache
  items:
    command: mkdir /tmp/cache/items
  chmod:
    command: chmod -R 644 /tmp
```
Elastic Beanstalk .ebextensions config file not getting deployed with git aws.push
In my app, I simply switched to an AWS HTTP health check and set up a "Heartbeat" controller with an Index action; it contained no logic, but it returned a 200 when AWS hit /Heartbeat. Your IIS log will catch this request by default; your application infrastructure wasn't outlined in your question, but in my case the lack of any logic in the action was sufficient. See http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/gs-ec2classic.html#ConfigureHealthCheck. However, implementing your own IHttpModule is another option here. You're at the raw HttpApplication level at that point, so you don't tap into the MVC pipeline. I've used it for (among other things) forcing the https redirect in my AWS-hosted apps by looking for the "X-Forwarded-Proto" header.
We manage a .NET MVC application that is hosted on two AWS machines. An AWS load balancer manages the traffic to each machine. The load balancer is configured to perform a health check against each machine every 30 seconds. It does this with a TCP request to port 80. The application expects that behind every request is a human, so the health check causes side effects such as unnecessary writes to the database and bloated log files. The tell-tale sign of the health check request is that there is just one key-value pair in the `HttpContext.Request.ServerVariables` property: "Host: mysite.com". How should the application best identify this health check request so that it can respond in a way that causes no side effects? I am thinking of an `ActionFilter`, but is there a more accurate way of isolating the request other than checking for a lack of server variables?
Identify AWS Load Balancer TCP Health Check Request
Yes, it will be considered a GET, and you will have to pay the request pricing. Request pricing is detailed here: http://aws.amazon.com/s3/pricing/. However, this will be negligible compared to the data transfer (actual download) pricing, so you are better off issuing this request (and paying for it) before deciding to download the file (if that was your concern).
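If it helps, here is a hedged boto3 sketch of that existence check; it issues a HEAD request (billed in the same per-request tier as a GET) and transfers no object body. The bucket and key names are made up:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def object_exists(bucket, key):
    """Return (exists, size) without downloading the object."""
    try:
        meta = s3.head_object(Bucket=bucket, Key=key)
        return True, meta["ContentLength"]
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            return False, None
        raise  # other errors (permissions, etc.) should surface

exists, size = object_exists("my-bucket", "path/to/file.bin")
```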
before downloading a file from AWS s3, I would like to double check that the file is really there - the right way seems is to call S3GetObjectMetadataRequest - however, does this count as a get/list/...?
Is there a cost to do a "S3GetObjectMetadataRequest"?
This installation of ionCube worked just now for EC2 (and hopefully works as well for Elastic Beanstalk). The PHP version installed is 5.5; please change the 5.5 to your installed version if you have a different one (`php -v` shows the currently installed one):

```bash
# Download the current version of the ionCube loader
wget http://downloads2.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
# Unzip to /usr/local
sudo tar -xzf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local
# Add the installed module to the PHP config
echo 'zend_extension=/usr/local/ioncube/ioncube_loader_lin_5.5.so' | sudo tee /etc/php-5.5.d/ioncubeloader.ini
# Restart Apache (if necessary)
sudo service httpd restart
```

If you run `php -v` now, it should show ionCube installed:

```
PHP 5.5.12 (cli) (built: May 20 2014 22:27:36)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
    with the ionCube PHP Loader v4.6.1, Copyright (c) 2002-2014, by ionCube Ltd., and
    with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2014, by Zend Technologies
```
I have been trying to get one of these two loaders installed all evening without success. I have narrowed it down to creating a config file. I have put a .config file in a .ebextensions folder located in my root directory of my project, I'm not sure if it needs to be at the same level as my project. But in any case every time 403 error with the following message:"You don't have permission to access / on this server." If I remove the script the message goes away. I will also include a screenshot of where I can get to with out the .config file included and the reason why I need one of the loaders installed. Thanks in advance here is what my .config file looks like:# Install ioncube mkdir ion cd ion wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86.tar.gz tar xzvf ioncube_loaders_lin_x86.tar.gz mv ioncube/ioncube_loader_lin_5.4.so /usr/lib/php/modules/ioncube_loader.so touch /etc/php.d/ioncube.ini echo "zend_extension=/usr/lib/php/modules/ioncube_loader.so" >> /etc/php.d/ioncube.ini cd .. rm -rf ion/Which I got from here:https://forums.aws.amazon.com/thread.jspa?messageID=446182&#446182
AWS Elastic Beanstalk Installing IonCube or Zend Loader
In a Multi-AZ setup, any change you make to the instance will first be applied to the standby instance. Then failover occurs (during this phase your application won't be able to connect to the database) and the change is applied to the primary instance. The failover is the only downtime. More information: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
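For illustration, here is a small boto3 sketch of kicking off such a scaling change; the instance identifier and target class are placeholders:

```python
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",  # placeholder identifier
    DBInstanceClass="db.m1.large",       # example target class
    ApplyImmediately=True,               # otherwise it waits for the maintenance window
)
# Poll with describe_db_instances until DBInstanceStatus returns to "available";
# the brief failover is the only client-visible downtime.
```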
I have a question about scaling up a MySQL RDS Multi-AZ instance and the downtime involved. I think Amazon will first scale the slave, then do a failover (there is downtime here), and then scale the master. My question: is the failover the only downtime? Is there any downtime for redirecting requests back from the slave to the master after scaling the master? Thank you.
Amazon RDS Multi-AZ scaling downtime
I'm not aware of a quick yum/apt-get style install of X-Sendfile for Apache 2.4 on an EC2 instance at the time of writing this answer; however, compiling and installing the module yourself is super easy.

Prep work: download mod_xsendfile.c from the download section of https://tn123.org/mod_xsendfile/, then install GCC for compiling and httpd24-devel for apxs:

```bash
sudo yum install gcc
sudo yum install httpd24-devel.x86_64
```

Compile and install:

```bash
sudo apxs -cia mod_xsendfile.c
```

Edit your httpd.conf and add:

```
<IfModule mod_xsendfile.c>
    XSendFile on
    XSendFilePath /home/path/to/private/files/to/serve/
</IfModule>
```

Restart Apache 2.4:

```bash
sudo service httpd restart
```

Done! Check your phpinfo or apache_modules() to confirm all is good, and adjust the settings to your liking. Enjoy efficient downloads :)
How to install xsendfile for httpd 2.4 on amazon linux ami? Default package repositories from amazon and epel do not have a package for httpd 2.4, only for httpd 2.2. I would prefer not to compile the module if possible. Thank you.
How to install xsendfile for httpd 2.4 on amazon linux ami?
No, you cannot install X on Amazon Linux. Amazon Linux is meant for server roles, and hence X-related packages are not available in the Amazon Linux repositories. You need to explore Ubuntu/RHEL or some other flavour of Linux if you need X, or you can hack your own way of compiling X on Amazon Linux.
I am using AWS Linux for our server systems. I would like to get a desktop session there to make it easier to do certain desktop tasks such as manual testing and configuration of those servers. Is there a way to do that? My system is a default Amazon Linux 2 system and the version information looks like this:

```
# uname -a
Linux ip-172-31-0-160.us-east-2.compute.internal 4.14.243-185.433.amzn2.x86_64 #1 SMP Mon Aug 9 05:55:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
# echo `cat /etc/*release | grep "NAME\|VERSION"`
NAME="Amazon Linux" VERSION="2" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
```
Is there a way to install gnome, kde or any other X interface on an Amazon Linux System? [closed]
The documentation quote you provide comes from the SQS tutorial. The SQS API docs correctly describe the current return value; the SQS tutorial is simply out of date and needs to be corrected. I have created an issue to track this. If the write fails for any reason, the service will return an HTTP error code which, in turn, will cause boto to raise an SQSError exception. If no exception is raised, the write was successful.
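In practice that means treating "no exception" as success. A hedged boto 2.x sketch (the queue name and message body are made up):

```python
import boto
from boto.sqs.message import Message
from boto.exception import SQSError

conn = boto.connect_sqs()
q = conn.get_queue("my-queue")   # placeholder queue name

m = Message()
m.set_body("hello")
try:
    q.write(m)                   # returns the Message object, not True/False
except SQSError as e:
    print("write failed:", e.status, e.reason)
else:
    print("write accepted by SQS")
```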
All documentation of this method that I can find says that Queue.write returns True or False, depending on whether the write succeeded, but this doesn't square with reality.The docs say:The write method returns a True if everything went well. If the write didn't succeed it will either return a False (meaning SQS simply chose not to write the message for some reason) or an exception if there was some sort of problem with the request.But in fact the method simply returns the message that gets passed in. Here is the relevant source code fromhttps://github.com/boto/boto/blob/develop/boto/sqs/queue.py:def write(self, message, delay_seconds=None): """ Add a single message to the queue. :type message: Message :param message: The message to be written to the queue :rtype: :class:`boto.sqs.message.Message` :return: The :class:`boto.sqs.message.Message` object that was written. """ new_msg = self.connection.send_message(self, message.get_body_encoded(), delay_seconds) message.id = new_msg.id message.md5 = new_msg.md5 return messageMy question then is: How do I really tell if the write was successful?
How to tell if boto.sqs.Queue.write() succeeded?
The standard way to change the type of a running instance is:

1. Stop (do not terminate) the instance (stop-instances).
2. Modify the type of the instance (modify-instance-attribute).
3. Start the instance (start-instances).

There are a number of cautions that you should be aware of, including:

- This only works for EBS-based instances.
- The ephemeral storage (often mounted on /mnt) will be lost.
- You may have to re-associate an Elastic IP address if the instance is not in a VPC.

This can be done through the console, command line, or API calls. Here's an old article I wrote about changing instance types using the command line tools: http://alestic.com/2011/02/ec2-change-type

I definitely recommend converting from c1.medium to c3.large as you are thinking. Here's an article I wrote about that: http://alestic.com/2013/12/ec2-instance-type-c3

Since you're interested in SSD, note that the SSD on the c3.large is ephemeral storage. Data stored there will be lost irretrievably when the instance is terminated, stopped, or fails. You'll want to store only files you can afford to lose there (e.g., ones that are replicated elsewhere, backed up regularly, or possible to regenerate).
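As a rough illustration of the same stop/modify/start sequence via the API (the CLI subcommands named above map to these calls), here is a boto3 sketch; the instance ID and target type are placeholders, and the cautions above still apply:

```python
import boto3

ec2 = boto3.client("ec2")
iid = "i-0123456789abcdef0"  # placeholder instance ID

# 1. Stop the instance and wait until it is fully stopped
ec2.stop_instances(InstanceIds=[iid])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[iid])

# 2. Change the instance type (example target: c3.large)
ec2.modify_instance_attribute(InstanceId=iid, InstanceType={"Value": "c3.large"})

# 3. Start it again
ec2.start_instances(InstanceIds=[iid])
ec2.get_waiter("instance_running").wait(InstanceIds=[iid])
```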
Currently, I am using an Amazon EC2 c1.medium server to run my service. I heard that Amazon made a new server type called C3, which offers SSD disks. I was wondering whether I can change the instance type without moving my old data files; the current server uses a normal hard disk, so does that still work when the new type uses SSD? I didn't change the type yet because I worry about losing server data. I read the documents but couldn't find the answer. How does it work, and is it safe to do?
How can Amazon EC2 Server change instance type without moving data file?
To easily understand and visualize Heroku when hosting Django apps, I created this drawing for our startup ChattyHive. I hope it is helpful. Don't hesitate to ask me about any doubt or to suggest anything :) (Please right-click on it and "view picture" to see it full size, otherwise it will be too small!)
I want to change my static website (http://ingledow.co.uk) to a Django site on Heroku and Amazon using GitHub to store the code.I've been through the Django tutorial once, so I'm fairly new to the whole thing.Where would you guys start with this? Any useful books, code learning websites you could recommend to get started?Thanks David
Getting started with Django, Heroku and Amazon
I found the solution:

```php
$encode = hash_hmac('sha1', utf8_encode($string_to_sign), $secret_key);
```

is replaced by

```php
$encode = hash_hmac('sha1', utf8_encode($string_to_sign), $secret_key, true);
```

The final script to reproduce the documentation example is:

```php
$access_key = "AKIAIOSFODNN7EXAMPLE";
$secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
$string_to_sign = "GET\n\n\n1175139620\n/johnsmith/photos/puppy.jpg";
$encode = hash_hmac('sha1', utf8_encode($string_to_sign), $secret_key, true);
echo urlencode(base64_encode($encode)) . "\r\n";
```
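For cross-checking, the same raw-HMAC computation can be reproduced in Python; this is only a verification sketch, and the expected output is the documented value already quoted in the accompanying question:

```python
import base64, hashlib, hmac, urllib.parse

secret = b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
string_to_sign = b"GET\n\n\n1175139620\n/johnsmith/photos/puppy.jpg"

digest = hmac.new(secret, string_to_sign, hashlib.sha1).digest()  # raw bytes, not hex
signature = base64.b64encode(digest).decode()
# Should match the documented value: NpgCjnDzrM%2BWFzoENXmpNDUsSn8%3D
print(urllib.parse.quote(signature, safe=""))
```

The key point is the same as in the PHP fix: base64-encode the raw digest, not its hex representation.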
Background: I am trying to create a signed URL to fetch a resource from AWS S3. There are several examples in the AWS S3 documentation, and as a first step I am trying to replicate the last example on that page. My code is:

```php
$access_key = "AKIAIOSFODNN7EXAMPLE";
$secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
$string_to_sign = "GET\n\n\n1175139620\n/johnsmith/photos/puppy.jpg";
$encode = hash_hmac('sha1', utf8_encode($string_to_sign), $secret_key);
echo base64_encode($encode) . "\r\n";
```

The above code outputs `MzY5ODAyOGU3MGYzYWNjZjk2MTczYTA0MzU3OWE5MzQzNTJjNGE3Zg==`. According to the example, the result should be `NpgCjnDzrM%2BWFzoENXmpNDUsSn8%3D&`. I do understand that the result needs to be URL encoded, but I think I am still far off. Can I get some help please?
AWS S3 QueryString Authorization
Regarding "if our architecture might have any design flaws": well, keep in mind that we can't tell much from a generic diagram. But here are some notes:

1) MongoDB isn't as easy to scale as other databases such as DynamoDB, Riak or Cassandra. For example, if you ever exceed the capacity of a single master (no matter how many slaves you have, all writes go to the single master), you'll have to shard. But switching to sharding is very disruptive and very tedious to set up. If you don't expect to exceed the write capacity of one node, then you'll be fine on MongoDB.

2) What will you do for async tasks such as sending emails, creating long reports, etc.? It's possible to do these things in the request loop, and that's probably a fine way to get started. But as you add more boxes, the chances of failure go up. When a box dies, all its async tasks go away and nobody will know what they were. You can also have problems where one box gets heavily loaded with async tasks (using too much CPU or memory), and the problem gets worse and worse as it takes on more tasks and completes them more slowly. Also, a front end like ELB has a 60-second limit, which can cause problems if some of your requests could take longer. (Spin them off into async jobs with polling or something.)

3) ELB doesn't support WebSockets. Consider that if you think you might want WebSockets down the road.
We're planning to deploy a web application with Amazon OpsWorks, and I just wanted to check with you whether our architecture might have any design flaws. We have 4 components: a load balancer (Amazon's, preferably), Express based on Node.js, MongoDB, and ElasticSearch. Here's a communication diagram of our components: at the front is a load balancer which distributes HTTP requests to multiple web servers. The web server is stateless and can therefore be cloned whenever the load requires it. All web server instances are equal; session information is saved in MongoDB. In the "backend" we're planning to use the built-in cluster functionality of MongoDB and ElasticSearch. Each web server instance therefore only connects to a single MongoDB and ElasticSearch master instance, and MongoDB and ElasticSearch then scale accordingly. Furthermore, the ElasticSearch master speaks to the MongoDB master to retrieve data for building the index. As we see it, the most challenging task in setting up such a system is configuring OpsWorks with the MongoDB and ElasticSearch cluster. Many thanks in advance!
Server architecture for a scalable web application
The file is stored in the vault named by vaultName, whatever value you provide there. The data.archiveId is the identifier of the uploaded file, and body is the file itself. For a more general overview of Glacier, see: "How is data within Amazon Glacier organized?", "How do vaults work?", and "What is an archive?". Code example (as provided by hitautodestruct):

```javascript
var AWS = require('aws-sdk'),
    fs = require('fs'),
    glacier = new AWS.Glacier(),
    vaultName = 'YOUR_VAULT_NAME',
    // No more than 4GB otherwise use multipart upload
    file = fs.readFileSync('FILE-TO-UPLOAD.EXT');

var params = {vaultName: vaultName, body: file};

glacier.uploadArchive(params, function(err, data) {
    if (err) console.log("Error uploading archive!", err);
    else console.log("Archive ID", data.archiveId);
});
```
I found this example in the Amazon AWS docs:

```javascript
var glacier = new AWS.Glacier(),
    vaultName = 'YOUR_VAULT_NAME',
    buffer = new Buffer(2.5 * 1024 * 1024); // 2.5MB buffer

var params = {vaultName: vaultName, body: buffer};

glacier.uploadArchive(params, function(err, data) {
    if (err) console.log("Error uploading archive!", err);
    else console.log("Archive ID", data.archiveId);
});
```

But I don't understand where my file goes, or how to send it to the Glacier servers.
How to upload a file to amazon Glacier using Nodejs?
I'm not totally clear on the scenario here, but I think you're saying you did things in this order:

1. Create an EC2 instance with keypair #1.
2. Create a new keypair (#2).
3. Put the private key from keypair #2 on the new laptop.
4. Try to log in to the instance.

If that's what you're describing, then the problem is that keypair #2's public key has never been installed on the EC2 instance. You need the private key on your client and the matching public key on the server you're connecting to. Once the instance already exists, creating a new keypair in AWS will not update the key on it. You'd have to log in (with keypair #1) and put the new public key in the proper place.

I haven't done that myself in a while, but according to this page, you'd edit ~ec2-user/.ssh/authorized_keys (a text file) and append the public key from your key pair (which is in a text format, too) to the end of the file. You might have to restart the sshd daemon, which the command `sudo /sbin/service sshd restart` should do. But try logging in with the new key first; if you make a mistake editing the file, you could lock yourself out. (It's safer to create a new account and update its .ssh/authorized_keys, to avoid locking the ec2-user account out by mistake.)
Currently I have one computer properly set up to SSH into my EC2 instane, however I'm trying to connect another laptop as well. When I went to the AWS console to download another key pair and use it in Terminal to SSH, I get this error: Permission denied (publickey).I've already tried performing the commandchmod 400 /path/sshkey.pembut I still get a public key error. Does anybody know why this is?Thanks so much!p.s. the command I'm performing to SSH to my ec2 instance is:ssh -i /path/sshkey.pem[email protected]
creating a private key for AWS EC2 Instance
When you stop or terminate an EC2 instance, Amazon sends a soft shutdown request to the operating system to let it wrap up in a clean, safe manner. If the system does not indicate it is powering down within a short time (minutes), then Amazon effectively pulls the power plug, forcing a hard shutdown. I am not aware of any commitment from Amazon about how long this soft shutdown grace period is, so I would recommend you not assume or rely on having a specific minimum. Even if Amazon gives you 10 minutes today for one instance, they could easily reduce this to 3 minutes tomorrow when, say, they have a large demand for new instances. If you need to do important wrap-up before an instance shuts down, then send the instance a signal (web request or ssh command), wait for it to complete its task, then initiate the EC2 shutdown. If you are using, say, spot instances, where the instance can be shut down at any point by Amazon, then save your work frequently so that not much of it will be lost if the instance gets terminated suddenly. (Answer copied from my ServerFault answer to "Is there a maximum shutdown for instances in Amazon EC2?")
According to the docs found at theAmazon EC2 Instance FAQ:"Instances in the “shutting down” state for longer than usual will eventually be cleaned up by automated processes within the Amazon EC2 service. "Does anyone know how long "longer than usual" is? How long can an instance be in the "shutting down" state before it is killed automatically?I call a backup script from an init.d shutdown script, but when it tries to zip up and copy a large directory (500 MB) to S3, it seems like the EC2 instance shuts down before the backup completes. (I never experience this problem when backing up small directories during shutdown.)Is it possible that my EC2 instance is getting killed automatically because it is staying in the "shutting down" state too long? How long can I safely stay in "shutting down" state?
When is an EC2 instance in "shutting down" state killed automatically?
I ran into the same issue. After working with AWS support, I understood that "List of String" does not mean what we initially thought. Also, if you want to place the DB inside a VPC, you must not use `AWS::RDS::DBSecurityGroup` objects. Here is a full sample; it took me a while to get it working:

```json
"dbSubnetGroup" : {
  "Type" : "AWS::RDS::DBSubnetGroup",
  "Properties" : {
    "DBSubnetGroupDescription" : "Availability Zones for RDS DB",
    "SubnetIds" : [ { "Ref" : "subnetPrivate1" }, { "Ref" : "subnetPrivate2" } ]
  }
},
"dbInstance" : {
  "Type" : "AWS::RDS::DBInstance",
  "Properties" : {
    "DBInstanceIdentifier" : { "Fn::Join" : [ "", [ { "Ref" : "AWS::StackName" }, "DB" ] ] },
    "DBName" : "dbname",
    "DBSubnetGroupName" : { "Ref" : "dbSubnetGroup" },
    "MultiAZ" : "true",
    "AllocatedStorage" : "8",
    "BackupRetentionPeriod" : "0",
    "DBInstanceClass" : "db.m1.medium",
    "Engine" : "postgres",
    "MasterUserPassword" : "masteruserpassword",
    "MasterUsername" : "masterusername",
    "VPCSecurityGroups" : [ { "Ref" : "sgVpc" }, { "Ref" : "sgDB" } ]
  }
},
```
I'm trying to build a CLoudFormation script that launches an instance and a db into a vpc at the same time. the issue is the db requires two AZ's so i create a second subnet and now i just need to reference the two subnet physical ids in a 'MyDBSubnetGroup' var. I can get the logical IDs for the subnets i created but dont know how to ref those physical IDs. ANyone know? THanks!!Heres my code:"MyDBSubnetGroup" : { "Type" : "AWS::RDS::DBSubnetGroup", "Properties" : { "DBSubnetGroupDescription" : "Subnets available for the RDS DB Instance", "SubnetIds" : { "Fn::Join" : [ " ", [{"Ref" : "PublicSubnetAZ1"}, ", ", {"Ref" : "PublicSubnetAZ2"}, " " ]]} } },
how to construct a string of physical subnet ids to create db subnet group on the fly in a cloudformation script?
It turns out that the AWS commands call exit, which exits the .bat script prematurely. Prepending each AWS command with "call" did the trick. I found suggestions in this question: "Why does only the first line of this Windows batch file execute but all three lines execute in a command shell?"
I am fairly new to Amazon Cloud Auto Scaling (and AWS all together).I am currently trying to write a .bat script that will automatically create a launch configuration and then an auto scaler. The aim is for an image that was previously set up about a week ago by a coworker.The problem I encounter is that when I run the script, no commands past the launch configuration command is executed.The code is here:echo Beginning Auto Scale Up Process REM Create a launch config as-create-launch-config --image-id ami-xxxxxxx --instance-type t1.micro --user-data "Created by Launch Config reportingServerScaleUp-lc" --launch-config reportingServerScaleUp-lc echo Timer CompleteI am looking for suggestions to help me debug this issue. Or advice on how to solve it. After the "echo Timer Complete" I have a command to create an auto scaler. Though, not even the "echo Timer Complete" is executed. The console does return indicating that the launch configuration is created though :)Also, when I enter each command sequentially into the command line, each executes perfectly. The launch configuration is created as is the auto scale group.
.bat Script Terminating Before AWS Auto Scaling Commands Executed
You can't use the ~ character in Java to represent the current home directory, so change the destination to a fully qualified path, e.g. `file:///home/user1/hbase`. But I think you're going to run into problems in a fully distributed environment anyway, as the distcp command runs a map-reduce job, so the destination path will be interpreted as local to each cluster node. If you want to pull data down from HDFS to a local directory, you'll need to use the -get or -copyToLocal switches of the `hadoop fs` command.
I'm trying to back up a directory from hdfs to a local directory. I have a hadoop/hbase cluster running on ec2. I managed to do what I want running in pseudo-distributed on my local machine but now I'm fully distributed the same steps are failing. Here is what worked for pseudo-distributedhadoop distcp hdfs://localhost:8020/hbase file:///Users/robocode/Desktop/Here is what I'm trying on the hadoop namenode (hbase master) on ec2ec2-user@ip-10-35-53-16:~$ hadoop distcp hdfs://10.35.53.16:8020/hbase file:///~/hbaseThe errors I'm getting are below13/04/19 09:07:40 INFO tools.DistCp: srcPaths=[hdfs://10.35.53.16:8020/hbase] 13/04/19 09:07:40 INFO tools.DistCp: destPath=file:/~/hbase 13/04/19 09:07:41 INFO tools.DistCp: file:/~/hbase does not exist. With failures, global counters are inaccurate; consider running with -i Copy failed: java.io.IOException: Failed to createfile:/~/hbase at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1171) at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666) at org.apache.hadoop.tools.DistCp.run(DistCp.java:881) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79) at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Backup hdfs directory from full-distributed to a local directory?
You should check out Elastic Beanstalk. Essentially you just package up your WAR or other code file, upload it to a bucket via AWS's command line/Eclipse integration, and the deployment is performed automatically. See http://aws.amazon.com/elasticbeanstalk/
There are quite a few resources on deployments of AMI's on EC2. But are there any solutions to incremental code updates to a PHP/Java based website?Suppose I have 10 EC2 instances all running PHP / Java based websites with docroots local to the instance. I may want to do numerous code deployments to it through out the day.I don't want to create a new AMI copy and scale that up to new instances each time I have a code update.Any leads on how to best do this would be greatly appreciated. We use subversion as our main code repository and in the past we've simply done an SVN update/co when we were on one to two servers.Thanks.
code deployments on EC2
Entirely correct. sdk.class.php is a file that exists in SDK 1.x, but not in 2.x. The correct instructions are in the SDK 2 README.
Environment: MAC - Mountain Lion I am trying to use the AWS PHP SDK for a project. I followed the Amazon web site's SDK installation directions (through composer) -- using the followingLink to AWSI created the file compser.json. Contens:{ "require": { "aws/aws-sdk-php": "2.*" } }From the command line, I typed:curl -s "http://getcomposer.org/installer" | phpThenphp composer.phar installA new directory appeared "vendor" and inside it, the AWS SDK 2 was automatically installed.The problem is that I am expecting (per the code example I'm trying to follow), I am expecting to see the following file:vendor/aws/aws-sdk-for-php/sdk.class.phpBut it's not there. Could this be referencing an older version of the SDK?The automatically generated by the "php composer.phar install" command: vendor/autoload.php looks like this:<?php // autoload.php generated by Composer require_once __DIR__ . '/composer' . '/autoload_real.php'; return ComposerAutoloaderInit25a7292f83dd9a43a459f6c2e51befba::getLoader();Is it possible that the file: sdk.class.php is valid for version 1 of the SDK, but not version 2?
AWS PHP SDK with Composer - missing sdk.class.php
You can fetch all the tags:

```python
import boto
conn = boto.connect_ec2('asdf', 'asdfasdfasdfasdf')
tags = conn.get_all_tags()
for tag in tags:
    print tag.name, tag.value
```

Or you can get the tags associated with just one instance:

```python
reservation = conn.get_all_instances()[0]  # Yeah I don't know why they have these stupid reservation objects either...
instance = reservation.instances[0]
print instance.tags  # prints a dictionary of the tags {'Name': 'Given name'}
```

UPDATE Apr 2014: get_all_instances is going to change its behaviour in the near future. Funnily enough, it is going to start returning a list of EC2 instances. You should use get_all_reservations now to avoid code breakage during the next major version update.
Folks, I am trying to retrieve not only instance ids of my running machines, but also the aliased names which I've added to them in the aws console.Is this the proper way to do this? I am not getting back anything interesting....import boto botoEC2 = boto.connect_ec2('asdf','asdfasdfasdfasdf') rsv = botoEC2.get_all_instances() tags = botoEC2.get_all_tags() print tags dir (tags) print tags print tags.status print tags.pop print tags.count print tags.tagSet print tags.requestId print tags.index print tags. print tags.requestId print tags.index print tags.key_marker print tagsoutput: [Tag:ec2tag, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:ec2tag, Tag:Name, Tag:Name, Tag:Name, Tag:Name, Tag:Name]Thanks!
boto aws pulling down list of instances
Right now (v 0.3.5) it is not possible. I made a pull request on the GitHub project to add support for the 'api_params' parameter of boto, so you can pass parameters directly to the AWS API and use the 'Instances.Ec2SubnetId' parameter to run a job flow in a VPC subnet.
I'm using mrjob to run some MapReduce tasks on EMR, and I want to run a job flow in a VPC. I looked at the documentation of mrjob and boto, and none of them seems to support this.Does anyone know if this is possible to do?
mrjob: Is it possible to run a job flow in a VPC?
It turns out I was adding the folder name to the string after calling InitiateMultipartUploadRequest. Once I changed the key value to be consistent across the upload calls it began to work.
The following fails with this error message: "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed."UploadPartRequest uploadRequest = new UploadPartRequest() .WithBucketName(IniValues.Instance.TargetBucketName) .WithKey("junk/20070125.log") .WithUploadId(initResponse.UploadId) .WithPartNumber(i) .WithPartSize(partSize) .WithFilePosition(filePosition) .WithFilePath("C:\\InetTemp\\Logs\\20070125.log");The problem is with the ".WithKey("junk/20070125.log")". If I strip out the "junk/" it works perfectly.So the question is,how to upload a file to a specific AWS directory?All the documentation I found shows tha correct way to be to prepend the directory name and a forward slash. What am I missing?
With AWS S3 MultiPart upload to a named directory using C# and the .Net SDK
Now you can use the antiflood configuration with Swiftmailer (cf. http://symfony.com/doc/current/reference/configuration/swiftmailer.html#antiflood). Example:

```yaml
swiftmailer:
    transport: "%mailer_transport%"
    host:      "%mailer_host%"
    username:  "%mailer_user%"
    password:  "%mailer_password%"
    spool:
        type: file
        path: '%kernel.root_dir%/spool'
    antiflood:
        threshold: 1
        sleep: 1
```

This will send 1 email per second.
I am setting up SES to work with SMTP2. One of the limitations of SES accounts (atleast by default) is a 5 email per second limit.I want to setup a spooler, as described inthis article. I can use cron to trigger it every minute, which is fine for my purposes. My worry, though, is that a large number of emails will become queued in this spooler and my server will try to send them all at once.The article lists a method for limiting the total emails sent each execution, as well as a wayt o limit the execution time. Neither fits my use case though: limiting emails sent per second.Is there any way to limit the rate emails are sent from the spooler?
Limit swiftmailer emails per second in symfony2
"I see that CloudFront automatically uses the closest region to the end user"You're mixing up your information here. CloudFront automatically uses the closestedge locationto the end user.S3 buckets only live in a single region. If you tell CloudFront to create a distribution for an S3 bucket in theus-east-1region, then CloudFront will always pull from that bucket in that region.CloudFront thencopiesthe data from that region-locked bucket to anedge locationthat is closer to the end user.In that way, you only pay for S3 costs for the region the bucket is in, plus whatever the CloudFront costs are.Make sense?
I'm using CloudFront on top of a S3 bucket mostly to help me reduce the cost my hosting bill.10000 HTTP Requests on S3 cost $0.01 on US Standard Region and same requests cost 0.0075 on CloudFront's US region.I see that CloudFront automatically uses the closest region to the end user and therefor I see in some cases heavy use of some expensive regions such South America, Asia, Australia, etc. It makes sense since the idea of CloudFront is to decrease loading time.The point is that I originally expected to reduce the cost to a level that I'm not nearly near at and - probably by just looking at my numbers - I'll probably end up paying even more than before.My question is. Is there any way I can limit CloudFront availability only to US Region so everyone will default to it?I read theirFAQsand couldn't find any info related to it.
Is it possible to limit CloudFront to a specific region?
Relatively new, but I believe it should solve the problem: http://aws.typepad.com/aws/2012/08/amazon-s3-cross-origin-resource-sharing.html (S3 now supports Cross-Origin Resource Sharing).
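For reference, here is a minimal sketch of applying such a CORS rule with boto3; the bucket name and allowed origin are placeholders, and in practice you would restrict AllowedOrigins to the site hosting the player:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-subtitles-bucket",  # placeholder bucket name
    CORSConfiguration={
        "CORSRules": [{
            "AllowedMethods": ["GET"],
            "AllowedOrigins": ["https://example.com"],  # the page making the AJAX call
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```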
Is there any way to enable an AJAX request sent from a server to Amazon S3?I have a video player that loads subtitles using AJAX request and I would like to store the subtitles on Amazon S3, but obviously I've got an Cross-Origin error.
Cross domain AJAX request to Amazon S3
The simplest way to tag an EC2 resource is to use the #tags method:

```ruby
ec2.snapshots["snapshot-id"].tags["tag"] = "value"
```

If you want a handle to the tag object created, you can still use the TagCollection#create method. It expects the first param to be a resource:

```ruby
tag = ec2.tags.create(ec2.snapshots['snapshot-id'], 'tag')          # no tag value
tag = ec2.tags.create(ec2.snapshots['snapshot-id'], 'tag', 'value')
```
I'm pretty new to using Amazon's Ruby SDK (gem install aws-sdk), and am stuck trying to simply create a tag for a snapshotted resource. Here's what I'm doing:

```ruby
ec2.tags.create(:resource_id => "snap-7d3aa701", :key => "My Test Tag", :value => "My test value")
# ArgumentError: wrong number of arguments (1 for 2)
```

Note, ec2 = AWS::EC2.new (after I set my credentials). Any ideas what I'm doing wrong? I haven't been able to find a single example online of using the Ruby aws-sdk for tagging.
For AWS, how do you set tags to a resource with the ruby aws-sdk?
In the background (deep inside the Amazon jungle), ELBs are basically just very simple, small instances running load-balancing software, so they definitely have a performance limit, and this is probably what you are hitting. Now, ELBs are designed to scale with increasing load (requests, not connections, I believe), but this scaling only happens over a five-minute period, so if you have a synthetic test that ramps up in less time than that, you will hit problems.

Two solutions:

1. Ramp up very slowly, but that's boring.
2. Raise a support call with Amazon and ask for your ELB instances to be 'pre-warmed'. Tell them how much load you want to test to, and they will make adjustments accordingly.
I am performing load testing on my web-app behind the AWS ELB. I have tested two scenario 1) Check throughput directly generate load on tomcat instance 2) Check throughput by generate load on AWS ELB.I am using Apache Benchmark tool for load testing. I have observed that AWS ELB gives less req/sec than directly throughput on instance. I want to know that what is the problem in AWS ELB that causing the low throughput.
AWS ELB handling less request than single instance
Assuming that you are using the Orders API, the shipping charge for each item is stored in the shippingPrice property of the OrderItem type. You need to use the ListOrderItems operation to retrieve the order's items. See page 26 of the Orders API documentation for a description of the shippingPrice property, and page 25 for a description of the ListOrderItems operation.
I would like to get shipping costs of an order.I can get amount, customer information, purchase date of an order... but where is the shipping cost?
How to get shipping costs of an order with Amazon MWS Api?
Found the answer here: https://bbs.archlinux.org/viewtopic.php?id=135775. The Arch tomcat7 package is broken. I uninstalled tomcat7 (`pacman -R tomcat7`) and then copied the normal Tomcat 7 files from Apache.org to /usr/share/tomcat7. Everything works fine now.
I'm compiling the Amazon Web Services Elastic Beanstalk demo and attempting to run it (locally, on tomcat7) on a fresh install of Arch linux.Every time, it fails to the console with:Feb 18, 2012 2:31:41 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet [jsp] in context with path [/TryTwo] threw exception [java.lang.IllegalStateException: No Java compiler available] with root cause java.lang.IllegalStateException: No Java compiler available at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:228) at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:638) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:357) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)# which java /usr/bin/java # java -version java version "1.7.0_03-icedtea" OpenJDK Runtime Environment (IcedTea7 2.1) (ArchLinux build 7.b147_2.1-1-x86_64) OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)What am I doing wrong?
Compiling the AWS Elastic Beanstalk demo threw exception No Java compiler available
You will need to create a new EBS volume using the snapshot ID of the public data set; that way you won't need to pay for transfer. But be careful: some data sets are only available in one region, most likely denoted by a note similar to this: "These datasets are hosted in the us-east-1 region. If you process these from other regions you will be charged data transfer fees." You should then launch your EC2 instance in that same region.
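A hedged boto3 sketch of that restore-from-snapshot flow (the snapshot ID, availability zone, instance ID and device name are all placeholders):

```python
import boto3

# Stay in the region the data set lives in to avoid transfer charges
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a volume from the public data set's snapshot
vol = ec2.create_volume(SnapshotId="snap-0123456789abcdef0",
                        AvailabilityZone="us-east-1a")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it to your instance, then mount it from the OS as usual
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")
```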
Loading any of Amazon's listed public data sets (http://aws.amazon.com/datasets) would take a lot of resources and bandwidth. What's the best way to import them into AWS so you start working with them quickly?
How do you import Big Data public data sets into AWS?
It looks to me as if `[object lastModified]` simply returns an NSString and not an NSDate object, as stated in the documentation. NSDateFormatter can be used in this case to create an NSDate object from the string:

```objectivec
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
[dateFormatter setDateFormat:@"yyyy'-'MM'-'dd'T'HH':'mm':'ss'+'mm':'ss.SSS'Z'"];
NSDate *s3date = [dateFormatter dateFromString:[object lastModified]];
[dateFormatter release];
```

The Date Formatting guide has lots of handy examples. You may need to tweak the format string slightly, as I have not tested it.
I am comparing the date when a file was last modified for two files, one local and one on Amazon S3 server. I am using the AWS IOS SDK framework and can successfully request and receive response from the S3 server but I have trouble understanding the format of the returned s3 date.On my local machine the date format for lastModified is "2011-07-21 18:43:15 -0400" while for the file residing on the S3 server it is "2011-10-15T16:25:49.000Z".My local info is obtained using:NSFileManager *fm = [NSFileManager defaultManager]; NSDictionary *attr = [fm attributesOfItemAtPath:filePath error:nil]; NSDate *localDate = [attr objectForKey:NSFileModificationDate];while my S3 info is obtained usingfor (S3ObjectSummary *object in [listObjectsResult objectSummaries]) { NSDate *s3date = [object lastModified]; }Does anyone know if I can convert the date for the S3 file to a format that I can use to compare these two dates using:NSTimeInterval deltaSeconds = [s3Date timeIntervalSinceDate: localDate];or am I doing something wrong here? Right now my program crashes with[NSCFString timeIntervalSinceReferenceDate]: unrecognized selector sent to instance 0x200351360.probably because the s3 date format is not in proper format. I am quite new to using the AWS S3 SDK so all help is greatly appreciated. If anyone also knows of some good tutorials for this framework (apart from the demo code that comes with it), that would be great. Cheers, Trond
Format of date when retrieving lastModified
The AWS/EC2 namespace is reserved for EC2-published metrics, so it's not possible. I'm sure I read it in the documentation but I can't find the source today. Check the last post in this thread: https://forums.aws.amazon.com/thread.jspa?threadID=86835
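So the workaround is to publish under a namespace of your own. A small boto3 sketch, with an invented namespace, metric name and instance ID:

```python
import boto3

cw = boto3.client("cloudwatch")
cw.put_metric_data(
    Namespace="Custom/EC2",          # anything not starting with "AWS/"
    MetricData=[{
        "MetricName": "MyAppQueueDepth",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 42.0,
        "Unit": "Count",
    }],
)
```

Per-instance graphs then come from filtering on the InstanceId dimension within your own namespace.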
I am in the process of publishing several custom metrics to CloudWatch. When the metrics are in my own namespace, all goes well. I now want to publish a per-instance metric, similar to CPUUtilization, with dimensions ImageId=i-XXXXXXXX, in the AWS/EC2 namespace. Unfortunately, CloudWatch disagrees with me and gives me this error: "The value AWS/ for parameter Namespace is invalid." How do I add a custom metric to a specific instance? Is this possible at all? Many thanks,
How do I publish per-instance metrics in the AWS/EC2 namespace for Amazon CloudWatch?
You can manage Elastic Load Balancers from the graphical console; documentation is here. There's still no graphical interface for auto scaling. The Amazon command line documentation is here. You might also skim this tutorial on configuring auto scaling and load balancing.
In Amazon AWS, is it currently possible to configure load balancing and autoscaling with the web console / panel?The whole infrastructure seems to be configurable with a few clicks but I only found CLI tutorials to manage autoscaling. Is there no way to manage it with a graphical interface?I want to propose Amazon solutions to my company, but it is hard to defend the replacement of one command line mess with another command line mess.Thanks.PS: Trying to avoid third-party solutions please.
Amazon auto-scaling through web console/panel
There's a helper class for REST called SignedRequestHelper. You call it like so:

```csharp
SignedRequestHelper helper = new SignedRequestHelper(MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_KEY, DESTINATION);
requestUrl = helper.Sign(querystring);
```

There must be a similar one for SOAP calls in the above links.
i am trying to search in amazon product database with the following code posted in amazon webservice sample codes pageAWSECommerceService ecs = new AWSECommerceService(); // Create ItemSearch wrapper ItemSearch search = new ItemSearch(); search.AssociateTag = "ABC"; search.AWSAccessKeyId = "XYZ"; // Create a request object ItemSearchRequest request = new ItemSearchRequest(); // Fill request object with request parameters request.ResponseGroup = new string[] { "ItemAttributes" }; // Set SearchIndex and Keywords request.SearchIndex = "All"; request.Keywords = "The Shawshank Redemption"; // Set the request on the search wrapper search.Request = new ItemSearchRequest[] { request }; try { //Send the request and store the response //in response ItemSearchResponse response = ecs.ItemSearch(search); gvRes.DataSource = response.Items; } catch (Exception ex) { divContent.InnerText = ex.Message; }and getting the following errorThe request must contain the parameter Signature.and amazon documentation is not clear about how to sign the requests.any idea how to make it work???thx
Amazon Product Advertising API Signing Issues
AWS's Elastic Load Balancer does not support URL-based session stickiness. Be sure to check that you've set the ELB's stickiness policy. Also, ELB's stickiness doesn't actually look at the value of any cookie except for its own, called "AWSELB". When you configure a cookie-based stickiness policy, you're really configuring the lifetime of the stickiness to follow the lifetime of the specified cookie, but the actual server assignment is controlled by the AWSELB cookie.
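For completeness, a hedged boto3 sketch of configuring application-cookie stickiness on a classic ELB (the load balancer name, policy name and port are placeholders); note that this governs stickiness lifetime only and does not add support for jsessionid in the URL:

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# Tie stickiness lifetime to the JSESSIONID cookie
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-tomcat-elb",
    PolicyName="jsessionid-stickiness",
    CookieName="JSESSIONID",
)

# Attach the policy to the HTTP listener
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-tomcat-elb",
    LoadBalancerPort=80,
    PolicyNames=["jsessionid-stickiness"],
)
```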
I am currently trying to set up an Amazon load balancer for Tomcat workers, but I've run into one problem. I'm using sticky sessions, and the JSESSIONID cookie is available for most of the requests. But some requests carry the session information in the URL, like this: http://myserver.com/contextPath/someAction;jsessionid=BA6853C23F795BD5EEDAEA996E601BB8, and it does not work (the request is forwarded to the wrong worker). Does the AWS load balancer support jsessionid in the URL? If not, then maybe you know some workarounds? With Apache + mod_proxy_balancer I can, for example, define it like this: `ProxyPassMatch /.* balancer://mycluster stickysession=JSESSIONID|jsessionid`
Amazon Load Balancer sticky sessions configuration for jsessionid in URL
It doesn't look like there's any way to use Django's ORM with SimpleDB at the moment, unless you want to write all the code yourself. I'd suggest interfacing with SimpleDB using normal Python code (which would get called by your views or however you wish to do it). To do this, use boto. It's mature, stable and well-documented; I used it quite successfully in a Django project I recently undertook.
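To give a feel for that approach, here is a small boto 2.x SimpleDB sketch of the kind of helper module your Django views could call; the domain name, item IDs and attributes are invented for illustration:

```python
import boto

sdb = boto.connect_sdb()                   # reads AWS keys from env/boto config
domain = sdb.create_domain("myapp_items")  # returns the domain (creates it if missing)

def save_item(item_id, **attrs):
    # SimpleDB items behave like dictionaries of attribute/value pairs
    item = domain.new_item(item_id)
    for key, value in attrs.items():
        item[key] = value
    item.save()

def get_item(item_id):
    return domain.get_item(item_id)

save_item("42", title="Hello", owner="alice")
print(get_item("42"))
```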
Can somebody guide me to develop a django app with SimpleDB(Amazon's Database) as its database. I couldnt find any tutorials on searching. Can somebody help me by explaining the process involved in integrating Django with SimplDB for creating a small application. Or if somebody have any tutorials for it, please share it with me. Any help would be appreciated.
Integrating Django with Amazon's Database 'SimpleDB'
While the Step Functions approach above seems like the right solution generally for these kinds of orchestration scenarios, I wonder if in this simple case it would be sufficient to have a single Lambda listen to both S3 notifications and simply check whether the other file already exists? Something like:

```javascript
exports.handler = async function (event, context) {
    const s3Object = extractS3Object(event);
    if (isSecondFile(s3Object) && firstFileExists() ||
        isFirstFile(s3Object) && secondFileExists()) {
        // Do stuff
    } else {
        // Don't do stuff
    }
};
```

Update: this doesn't work as well if you need a strong guarantee that the job is only executed once, but it's still possible with a bit of a hack. You can:

1. Set your Lambda concurrency limit to 1.
2. Store the IDs of successfully processed jobs in a DynamoDB table.
3. Check the ID in the table before processing a job, and if it is found, skip it.

However, I would first think about whether it's possible to make your job idempotent, so that you don't have to enforce an "only-once" policy.
I have some work that needs two s3 objects. Object A is uploaded by another system; I have to generate the Object B.In fact, there is not one Object A, but several (A1, A2, A3). Each one is uploaded by an external system at any time. For each object A, another instance of the work has to be launched.On the other hand, Object B remains the same for a specified period time, after which I have to regenerate it. Generating the object takes time.I can useEventBridge Schedulerto generate the object B, and I can also use event bridge to fire events for each Object Ax that gets uploaded.My question is how docombine these two events, so that I can launch a job only after both Object B is generated, and Object Ax is uploaded, ensuring that for every object Ax that gets uploaded, exactly one job is launched.(Something similar toPromise.allin javascript)
Combine several events from event-bridge
It looks like Firefox has a bug in its HTTP/3 handling with some XHR requests. If you're in AWS and using CloudFront, you probably have the default HTTP/1.1, HTTP/2, and HTTP/3 protocols enabled. I disabled HTTP/3 and Firefox requests started working again. More information: there is no bug reported, but searching the Firefox source code for that error I find comments like this:

```cpp
case PR_END_OF_FILE_ERROR: // XXX document this correlation
    rv = NS_ERROR_NET_INTERRUPT;
```

and this:

```cpp
if ((NS_ERROR_NET_INTERRUPT == aError || NS_ERROR_NET_RESET == aError) &&
    SchemeIsHTTPS(aURI)) {
    // Maybe TLS intolerant. Treat this as an SSL error.
    error = "nssFailure2";
}
```
I have a website deployed in AWS but whenever I use firefox to access my website and make a request, it always blocked the request and throws an error "ns_error_net_interrupt" But when I access my website in chrome, everything is working fine. Is there anyone who has idea of what's going on?
ns_error_net_interrupt in firefox but not in chrome
You need to add a header to the request: `-H "x-amz-security-token: $AWS_SESSION_TOKEN"`. At least it worked for me with EKS:

```
❯ curl https://eks.us-west-2.amazonaws.com/clusters/<cluster-name> --user "$AWS_ACCESS_KEY_ID":"$AWS_SECRET_ACCESS_KEY" --aws-sigv4 "aws:amz:us-west-2:eks"
{"message":"The security token included in the request is invalid."}

❯ curl https://eks.us-west-2.amazonaws.com/clusters/<cluster-name> --user "$AWS_ACCESS_KEY_ID":"$AWS_SECRET_ACCESS_KEY" --aws-sigv4 "aws:amz:us-west-2:eks" -H "x-amz-security-token: $AWS_SESSION_TOKEN"
{ "cluster": {...} }
```
I am trying to usecurlto make a SIGv4 signed request to API Gateway, using temporary credentials from an assumed role. I have this working usingawscurl, which provides an option to pass the--security_token(session token).awscurl --service execute-api -X POST -d '{}' https://aaaaaaaa.execute-api.eu-west-1.amazonaws.com/endpoint --region eu-west-1 --access_key ${AWS_ACCESS_KEY_ID} --secret_key ${AWS_SECRET_KEY} --security_token "${SESSION_TOKEN}" {"response":{}}But I am unable to make the same request succeed using standardcurl:curl --aws-sigv4 "aws:amz:eu-west-1:execute-api" --user '${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}' -XPOST https://aaaaaaaa.execute-api.eu-west-1.amazonaws.com/endpoint -d'{}' {"message":"Forbidden"}does anyone know how to pass the session token usingcurl?
Use temporary credentials to sign AWS SIGv4 request using curl
I solved the issue. I used `import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";` instead.
How do I use aws-sdk in a Lambda? I'm trying to follow https://aws.amazon.com/premiumsupport/knowledge-center/lambda-send-email-ses/ but I cannot get `var aws = require("aws-sdk");` to work; I get the error "require is not defined in ES module scope, you can use import instead". How come AWS's own solution doesn't even work? EDIT: using `import { AWS } from 'aws-sdk';` doesn't work either; I get the error "Cannot find package 'aws-sdk' imported from /var/task/index.mjs".
How to use aws-sdk in an AWS lambda
You can absolutely mix Fargate and EC2 tasks in the same cluster. I recommend checking out capacity providers for this: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html
My main objective is to utilize GPU for one of our existing task being deployed through Fargate.We have existing load balancers for our staging and production environments.Currently we have two ECS Fargate clusters which deploy Fargate serverless tasks.We want to be able to deploy one of our existing fargate tasks with GPU, but becausefargate doesn't support GPU, we need to configure an EC2 task.To do this, I believe we need to create EC2 auto-scaling groups associated with both the staging and production environments that allow for deploying an EC2 instances with a GPU through ECS.I'm unsure whether or not we need to create a new cluster to house the EC2 task, or if we can put the EC2 task in our existing clusters (can you mix Fargate and EC2 like this?).We're using Terraform for Infrastructure as code. Any AWS documentation or relevant Terraform docs would be appreciated.
GPU support for task currently within AWS-Fargate cluster
You can remove arbitrary child constructs by ID, using the tryRemoveChild escape hatch method:

// remove the role
taskDefinition.Node().TryRemoveChild(jsii.String("TaskRole"))

// remove the reference to the role
t := taskDefinition.Node().DefaultChild().(awsecs.CfnTaskDefinition)
t.AddPropertyDeletionOverride(jsii.String("TaskRoleArn"))

The trick is identifying the construct ID. You sometimes need to look for it in the source code.
In AWS CDK v2 the ECS TaskDefinition L2 construct has an optional property TaskRole; if it is not specified, CDK's default behavior is to create a task role. However, I do not want a task role set for this resource. It is not actually required in AWS: the Task Definition can function without this property. How can I manage that in CDK? I can't see any way to unset that task role or prevent it from being generated in the first place. Do I need to step back to the L1 construct for this? My configuration:

taskDefinition := awsecs.NewEc2TaskDefinition(stack, jsii.String(deploymentEnv+service.Tag+"TaskDef"), &awsecs.Ec2TaskDefinitionProps{
	Family:      jsii.String(deploymentEnv + service.Tag),
	NetworkMode: awsecs.NetworkMode_BRIDGE,
	// TaskRole: what can I do here to fix this
	Volumes: &[]*awsecs.Volume{
		&efs_shared_volume,
	},
})
AWS CDK ECS Task Definition Without Task Role
There is no way to stop the cluster today. What I have seen done to reduce the bill was that the team edited the cluster to reduce the instance type to a t2.small (or smaller), which is significantly cheaper than the previous instance type. Then, when they needed to resume testing, they changed the instance type back to what they required.

One other option is to take a snapshot of your domain, delete the domain for the weekend, and finally restore it on Monday from the snapshot you took.
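For the scale-down approach, a rough sketch with the AWS CLI would be something like the following (the domain name and instance types are placeholders, and the shorthand syntax may need adjusting):

# Scale the domain down for the idle window
aws opensearch update-domain-config \
    --domain-name my-domain \
    --cluster-config "InstanceType=t3.small.search,InstanceCount=1"

# Scale it back up before the application comes online again
aws opensearch update-domain-config \
    --domain-name my-domain \
    --cluster-config "InstanceType=r6g.large.search,InstanceCount=3"

Keep in mind that a configuration change like this triggers a blue/green deployment of the domain, so it can take a while to complete at both ends of the window.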
I am looking at doing some cost savings in AWS and want to know if we can stop and then start the AWS OpenSearch service for a couple of days.

My scenario is that the application which uses the OpenSearch service (Elasticsearch) remains down for 2 days every week. During this time OpenSearch remains active and incurs costs.

I know one option to save costs is to downgrade the node type and reduce the number of nodes during the application downtime. But let me know if there are any other options where I can entirely "switch off" and "switch on" the service, just like we can do with EC2 instances.
Stop AWS OpenSearch service temporarily for cost savings
You can look at CloudWatch Container Insights. Container Insights reports CPU utilization relative to instance capacity. So if the container is using only 0.2 vCPU on an instance with 2 CPUs and nothing else is running on the instance, then the CPU utilization will only be reported as 10%.

Average CPU utilization, on the other hand, is based on the ratio of CPU utilization relative to the reservation. So if the container reserves 0.25 vCPU and it's actually using 0.2 vCPU, then the average CPU utilization (assuming a single task) is 80%. More details about the ECS metrics can be found here.
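As a rough illustration (the cluster and service names are placeholders), once Container Insights is enabled on the cluster you can pull the service-level CPU metric with the CloudWatch CLI:

# Enable Container Insights on an existing cluster
aws ecs update-cluster-settings \
    --cluster my-cluster \
    --settings name=containerInsights,value=enabled

# Average CPU units used by the service's tasks over the last hour (GNU date syntax)
aws cloudwatch get-metric-statistics \
    --namespace ECS/ContainerInsights \
    --metric-name CpuUtilized \
    --dimensions Name=ClusterName,Value=my-cluster Name=ServiceName,Value=my-service \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Average

For in-process load shedding, the task could poll the same metric through the CloudWatch API instead of the CLI, accepting a small delay since Container Insights aggregates its measurements roughly once a minute.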
What APIs are available for tasks running under an ECS Fargate service to get their own memory and CPU usage?

My use case is load shedding / adjusting: the task is an executor which retrieves work items from a queue and processes them in parallel. If load is low it should take on more tasks; if load is high, it should shed or take on fewer tasks.
Get mem and cpu usage from AWS fargate task
These are TCP RST packet counts. For a TCP connection to remain alive, either party should exchange some data before the idle timeout. On a UNIX OS (server/target), the idle timeout is governed by either the tcp_keepalive_time or the tcp_keepidle parameter. On the client it depends on how it's implemented, or it may use the same parameters if it's also a UNIX OS. If either of the parties fails to send any data, the connection is closed, after which if a client or a server sends anything they'll receive a TCP packet with the RST bit set and they'll know that the connection is no longer valid.

Client Reset Count: the total number of reset (RST) packets sent from a client to a target.
Target Reset Count: the total number of reset (RST) packets sent from a target to a client.
Load Balancer Reset Count: the total number of reset (RST) packets generated by the load balancer. This usually happens in cases where a target has started to fail or is being marked unhealthy, or for a connection request to a target which is already marked unhealthy.
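If you want to check or tune the keepalive behaviour on a Linux target, a quick sketch (the values shown are illustrative, not recommendations, and keepalive probes are only sent on sockets that enable SO_KEEPALIVE):

# Show the current keepalive settings
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes

# Example: start sending keepalive probes after 300 seconds of idle time
sudo sysctl -w net.ipv4.tcp_keepalive_time=300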
We have deployed a Network Load Balancer whose targets are nginx web servers using PHP-FPM. We are seeing the various reset counts shown in the image below. Could anyone help with understanding these counts?
Reset count metrics in AWS Network Load Balancer?
In version 1.92, try adding the following option:

-o compat_dir
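Applied to the mount command from the question, that would look like this (same options as before, with compat_dir added):

s3fs -o iam_role='MyS3Role' -o url='https://s3.us-east-1.amazonaws.com' \
     -o allow_other -o nonempty -o use_path_request_style \
     -o use_cache=/tmp -o umask=0002 -o compat_dir \
     mybucket /usr/test

compat_dir makes s3fs list "directories" that exist only as key prefixes, without the directory marker objects newer s3fs versions expect, which is a common reason folders copied from elsewhere don't show up.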
I am using s3fs to mount a bucket into an EC2 instance. The mount is successful, but strangely not all folders present in my bucket are visible within the mount on the EC2 instance. The data within the S3 bucket was copied from another EC2 instance.

pkg-config --modversion fuse
2.9.2
s3fs --version
Amazon Simple Storage Service File System V1.91 (commit:9a42822) with OpenSSL

The command I have used to mount the bucket:

s3fs -o iam_role='MyS3Role' -o url='https://s3.us-east-1.amazonaws.com' -o allow_other -o nonempty -o use_path_request_style -o use_cache=/tmp -o umask=0002 mybucket /usr/test
s3fs not showing all the folders from S3 bucket