Response | Instruction | Prompt
---|---|---|
Did you set the proper region when you created the new S3 instance in Node? Say, for example, your S3 bucket is in us-east-1. For optimal transfer speeds you'd want to make sure your S3 instance was set to that region, like:
const s3 = new AWS.S3({
accessKeyId: "xxx",
secretAccessKey: "xxx",
region: 'us-east-1'
});
Otherwise it can be incredibly slow. Someone can probably chime in with the specific reasons why this happens; I'd guess it has to do with having to keep looking up the actual region while doing multi-part requests, or possibly uploading to another region that's much further away from your destination region. | Been trying to figure out why uploading to Amazon S3 is amazingly slow using the putObject command (node.js library). The code below reads an entire directory of files and puts them to S3 asynchronously.
//Read a directory of files
fs.readdir(dir,function(err,files){
//Read each file within the folder
for(var i=0; i < files.length; i++){
var file = files[i];
//Read the File
fs.readFile(path.join(dir,file), function(err, data){
//Create a new buffer
var buffer = new Buffer(data, 'base64');
//Add the pdf to S3
s3.putObject({
'Bucket':bucket,
'Key':path.join(key,file),
'Body':buffer,
'ContentType':mime(file)
},function(err, data) {
//Wait for all the other files to be done
// and perform a callback
});
});
}
});
Tested with a number of different folders with similar results: 6 files all between 1-2 KB except for one at 63 KB (20+ seconds to upload); 4 files all exactly 3 KB (20+ seconds to upload). Uploading the same files using the AWS web interface takes around 3 seconds to complete (or less). Why is using the node.js API so slow? As per Amazon documentation I've even tried spawning multiple children to handle each upload independently. No changes in upload speed. | Upload to Amazon S3 using API for node.js Extremely Slow |
Problem solved. I gradually added the missing libraries to the project; the Apache HttpClient jar needs to be version 4.0 or later, with no earlier version on the classpath to contradict it. I imported httpclient-4.2.jar and it worked. Other than that, I just solved the exception that followed by importing joda-time-2.4.jar, and it's all up and running. (See the sketch below this Q&A.) | I've seen this type of error over here for exceptions that are thrown by various classes, though I haven't found the right solution for mine just yet. I'm trying to get the AWS Java SDK to work locally so I can write a test application that reads data from a Kinesis stream.
Problem is, when I run the init() static method I encounter the following error:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.http.impl.conn.DefaultClientConnectionOperator.<init>
(Lorg/apache/http/conn/scheme/SchemeRegistry;Lorg/apache/http/conn/DnsResolver;)V
Now, this is not the first error I've been thrown. I've been thrown four or five exceptions prior to this one, and the solution to all of them was just importing some jars into the project, e.g.: apache-httpcomponents-httpclient.jar, com.fasterxml.jackson.databind.jar, commons-codec-1.9.jar / commons-codec-1.9-javadoc.jar / commons-codec-1.9-sources.jar, httpclient-4.2.jar, httpcore-4.0.1.jar. I've seen in other threads around here that it could be the version of the httpcore library, however I imported the latest one. Any ideas how I can resolve this? I'm thinking about starting over, as my project seems to be a heap of imports I'm not sure I'll actually utilize. Furthermore, I can't debug the binary imports of the AWS SDK (or can I?). Cheers. | AWS Java SDK Error - java.lang.NoSuchMethodError |
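A minimal sketch for the Q&A above: if the project were built with Maven rather than hand-copied jars, the two libraries the answer mentions could be pinned declaratively. The AWS SDK coordinate itself is left out since the question doesn't show which artifact is in use; the versions below are just the ones named in the answer.

```xml
<!-- Illustrative only: pins HttpClient >= 4.2 and Joda-Time, the two jars the answer needed -->
<dependencies>
  <dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.2</version>
  </dependency>
  <dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.4</version>
  </dependency>
</dependencies>
```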
If you mean AWS RDS PostgreSQL: pg_dump and pg_restore. I know you don't like it, but you don't really have other options. With a lot of hoop jumping you might be able to do it with Londiste or Slony-I on a nearby EC2 instance, but it'd be ... interesting. That's not the most friendly way to do an upgrade, to say the least. What you should be able to do is ship WAL into RDS PostgreSQL, and/or stream replication logs. However, Amazon doesn't support this. Hopefully Amazon will adopt some part of 9.4's logical replication and logical changeset extraction features, or better yet the BDR project, but I wouldn't hold my breath. If you mean AWS EC2: if you're running your own EC2 instance with Pg, use replication, then promote the standby into the new master. Comments: ming.kernel asked Heroku support, who said they don't support a slave DB outside of Heroku; Craig Ringer replied that Slony or Londiste can probably still be made to work, but streaming replication is out of luck. | I want to migrate our postgres db from Heroku to our own Postgres on AWS. I have tried using pg_dump and pg_restore to do the migration and it works, but it takes a really long time to do this. Our database size is around 20GB. What's the best way to do the migration with minimal downtime? | Migrate database from Heroku to AWS |
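A minimal sketch of the pg_dump/pg_restore route from the answer above; host names, user names and database names are placeholders, and the custom-format dump plus --no-acl/--no-owner flags are a common choice rather than anything Heroku-specific.

```bash
# Dump from the source database (Heroku exposes ordinary Postgres credentials)
pg_dump -Fc --no-acl --no-owner -h source-host -U source-user -d sourcedb -f app.dump

# Restore into the target Postgres on AWS (RDS endpoint or your own EC2 instance)
pg_restore --verbose --clean --no-acl --no-owner \
  -h target-host.rds.amazonaws.com -U target-user -d targetdb app.dump
```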
I eventually found the solution. My inventory file was like:
[dbservers]
ubuntu@mydomain
But if I set the ssh user this other way it runs OK:
[dbservers]
mydomain ansible_ssh_user=ubuntu
| I've been playing around with Ansible a little bit and I'm able to launch my playbook against a Vagrant virtual machine, but the problems arise when I try to do the same process in an EC2 instance. I just can't sudo any task:
# ansible -i inventory/staging dbservers --sudo -a "apt-get update"
ubuntu@staging-ansible | FAILED | rc=100 >>
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
Any clue? I can sudo with ssh:
# ssh ubuntu@staging-ansible "sudo apt-get update"
sudo: unable to resolve host ip-10-0-0-61
Get:1 http://eu-west-1.ec2.archive.ubuntu.com precise Release.gpg [198 B]
Get:2 http://eu-west-1.ec2.archive.ubuntu.com precise-updates Release.gpg [198 B]
Get:3 http://security.ubuntu.com precise-security Release.gpg [198 B]
... | How to run tasks with sudo from ansible in EC2 instance |
What we do ourselves right now is to hook into the deployment hooks (ref) and use AWS instance roles to send out SNS/SES messages. There isn't an easy off-the-shelf item for this. Follow-up comment (Paulius Dragunas): deployment hooks do not work anymore for Chef 12; I've been using github.com/zuazo/chef_handler_sns-cookbook, but it doesn't completely fill my needs. (See the sketch below this Q&A.) | I would like to receive a notification via an SNS topic (or maybe an SQS queue) when an OpsWorks stack or app deployment is complete. The topic should include the stack ID, the deployment result (successful or unsuccessful), and perhaps the stack's public-facing DNS name. Surprisingly, this doesn't appear to be an off-the-shelf feature. Possible implementations: my deployment app could poll the stack's deployment status and block until the deployment is complete, at which point the app would take the responsibility of retrieving the stack's details and passing that into SNS; this is simple and straightforward but rather inelegant. Or I could write a Chef deployment hook to invoke the AWS API in a ruby_block, and attach this hook to the OpsWorks restart event; this is nice and clean, and all of my stack information is already provided to the recipe, but it introduces additional complexity to the overall deployment system. Any better options? | How to receive OpsWorks deployment notifications? |
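A rough sketch of the deploy-hook idea in the answer above. It assumes a deploy/after_restart.rb hook file in the application repository (one of the hook names Chef's deploy resource picks up), that the AWS CLI is installed on the instance, and that the instance profile allows sns:Publish; the topic ARN and the OpsWorks attribute lookup are assumptions, not confirmed names.

```ruby
# deploy/after_restart.rb -- illustrative OpsWorks/Chef deploy hook, not an official recipe
topic_arn = 'arn:aws:sns:us-east-1:123456789012:deployments' # placeholder ARN
stack     = (node[:opsworks][:stack][:name] rescue 'unknown-stack') # attribute name assumed
message   = "OpsWorks deployment finished on #{stack} at #{Time.now.utc}"

execute 'notify-deployment-topic' do
  # Credentials come from the instance role, as the answer suggests
  command "aws sns publish --topic-arn '#{topic_arn}' --message '#{message}'"
  ignore_failure true # a failed notification should not fail the deploy
end
```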
So far load balancers don't forward websocket headers. To make WS work you must have a public IP address and no other services in front of your application. | I developed our websocket project on WildFly. When we test it on localhost or within our local network, everything works fine. But when I deployed it on AWS, websockets don't work any longer. We can access other HTML pages, but when we connect to "ws://ip/project location", Chrome just reports a handshake error. I have experienced the same websocket problem on Jelastic hosting too. My questions: Why is it happening like this? Is the websocket protocol not stable enough? Is there any suitable hosting for websocket projects in Java? | Why websocket don't work on the cloud? |
Automatically mount the S3 bucket when the server boots by adding an entry to /etc/fstab using the following syntax:
s3fs#bucket-name /s3mnt fuse allow_other,_netdev,nosuid,nodev,url=https://s3.amazonaws.com 0 0
| I am using the s3fs utility to mount an S3 bucket on an EC2 instance. After crossing so many hurdles I am able to mount the S3 bucket. I have a few queries: If I mount an S3 bucket on an EC2 instance, do I need to make any entry in the fstab? If I mount an S3 bucket on an EC2 instance, I can see the files and folders in the mount point like /s3mnt, but I am not able to see the contents in the S3 bucket itself. Does the content disappear from the bucket? Thanks | Mounting a S3 bucket using s3fs utility AWS |
The easiest way to characterize a given application is to log your AWS API calls using CloudTrail: http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html Run your app through its paces and then you'll have a log of all the IAM info you need. You'll probably want to do this each time you upgrade Boto, as certain calls change the way they work over time (and being surprised by an IAM failure is not good news). (See the sketch below this Q&A.) | Boto is a very convenient way to use AWS services. I want to be very specific with my IAM users/groups/policies so that I can achieve fine-grained control over access. I know about the AWS policy generator, but there are so many AWS services, each with so many actions, that it's always frustrating to come up with a policy that is tailored to a particular use case. It usually requires lots of wasteful trial and error which I'd like to avoid. I'd love to see some sort of catalog that shows exactly what actions are needed for each and every boto method call. Is this wishful thinking? Or am I missing something obvious that would help me? | How to discover the *exact* correspondence between boto method calls and AWS service actions |
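A small sketch of turning CloudTrail output into a rough action list, assuming the gzipped CloudTrail log files have already been copied down from their S3 bucket into a local directory; the directory name is a placeholder.

```python
# Summarize which API actions an app actually used, from downloaded CloudTrail logs.
import glob
import gzip
import json

actions = set()
for path in glob.glob("cloudtrail-logs/*.json.gz"):
    with gzip.open(path, "rt") as fh:
        for record in json.load(fh).get("Records", []):
            # eventSource looks like "ec2.amazonaws.com", eventName like "DescribeInstances"
            service = record["eventSource"].split(".")[0]
            actions.add("{0}:{1}".format(service, record["eventName"]))

for action in sorted(actions):
    print(action)  # a rough starting point for an IAM policy's Action list
```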
It should be the web server tier for the TVM example. AWS Elastic Beanstalk added support for the worker tier in December 2013. | The tutorial for Token Vending Machine for Anonymous Registration - Sample Java Web Application is out of date. In particular, the current Beanstalk console has an option for Web Server or Worker (which is not covered in the tutorial) for the Environment Tier field under Environment Type. I presume for setting up a TVM I would want a server, but I wanted to confirm before saving the config. So, server or worker? | Environment Tier: Web Server or Worker for Setting up Elastic Beanstalk TVM? |
Don't use a static IP on the domain controller; instead try allocating an Elastic IP for it. Amazon's docs include a guide to setting up a DC: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ConfigWindowsHPC.html | I would like to set up 3 Windows 2012R2 EC2 instances: an AD Server (Domain Controller), a Web Server (IIS), and a Database Server (SQL Server). Setting up the servers individually is fine, but I would like to set up the AD Server so that it acts as a Domain Controller. I would just like the domain to run in the cloud; it is not necessary that the domain be accessible from a remote office. I have tried setting up the AD Server by installing the Windows feature. That part is working fine; however, when I try setting a static IP, I lose complete access to the machine. Note that when installing AD I have installed DHCP and DNS as well. The configuration is where I am getting stuck. Can anyone advise / point me to a good tutorial on setting up an architecture like this in AWS EC2? I am a developer striving to get better at dev ops. | Amazon Web Services - EC2 - Active Directory (Domain Controller) | Web Server (IIS) | DB Server (SQL Server) [closed] |
Here's what I did that worked. I changed all my image paths from "/assets/image.png" to:
<%= image_tag("image.png") %>
For background images I changed it to something like:
background: url(image-path('image.png')) no-repeat center center;
Then I did an assets precompilation:
rake assets:precompile RAILS_ENV=production
As for why this is needed, I read somewhere that there are some permissions issues with AWS S3 serving images if they are not listed in the public/assets/ folder for Rails projects. Since assets:precompile automatically creates new instances of these images in the public folder, you don't have this issue after you make the changes and list the image path dynamically. A commenter (Penguin) notes you can also write it more simply as: background: image_url('image.png') no-repeat center center; | I've uploaded a simple landing page to AWS Elastic Beanstalk based on Rails 4.1. The problem is that the images are not being loaded. http://localhost/assets/image.png shows me the image, but http://webinsight.co/assets/image.png does not exist. When I look in AWS S3, the image files are uploaded properly. Anybody else encountered the same problem before? My site: http://webinsight.co | Rails 4.1 Elastic Beanstalk cannot find image url |
Amazon does not provide an image of its distribution for use on other VM platforms. You may find these blog posts useful though; they provide detailed instructions for building Amazon Linux disk images, and in the last article the author provides direct links to images he built for VMware and VirtualBox. Amazon Linux is based on CentOS, so you could also start there. From the comment thread: the main pitfall of the prebuilt images is that it's a lot of work to maintain them yourself, since when Amazon Linux is patched or releases a new version you're out of date; using the Vagrant AWS provider and running dev boxes on EC2 is an alternative, though that defeats the purpose of developing locally without a network connection, so building on top of CentOS may be the best bet. | My company has decided to migrate our base server images from Ubuntu Server to Amazon Linux. In the past we would spin up an Ubuntu Server LTS box from vagrantbox.es to emulate an instance in our AWS stack, but Amazon only provides an AMI. According to the Amazon Linux AMI FAQ, updates are custom tailored depending on the EC2 region the AMI is launched in, which might cause issues with exporting an AMI to VDI. I've also read that Amazon Linux removes a lot of cruft from RHEL and Fedora to make it a server-optimized distribution. How can I emulate Amazon Linux in an environment where I might not have a persistent network connection? Apart from switching to yum, what pitfalls should I expect in running Amazon Linux locally? Is there some pre-built Vagrant box for Amazon Linux that gets around these pitfalls? | Vagrant alternative to Amazon Linux |
Check if it is running from the console. If everything is OK, install Apache 2 ITK MPM and add to the VirtualHost:
ServerName example.com
DocumentRoot /path/to/web/root
AssignUserId vhost-user vhost-group
| I run into a strange issue that I cannot debug.
Same code works fine on different servers, but on Amazon instance - not. Especially, ftp_connect() doesn't work.<?php
error_reporting(E_ALL);
$conn = ftp_connect("server.address");
var_dump($conn);
$login_result = ftp_login ($conn, "username", "pass");
?>
Output:
bool(false) Warning: ftp_login() expects parameter 1 to be resource, boolean given in /var/www/dev/ftp/index.php on line 8
I'm able to connect to this ftp server from command line so it's not any global firewall.
ftp_connect() just giving me false and that's it. It's starting to show warnings if I try to connect to non-existing address, but for existing - only silence and false.
Do you have any ideas how to debug it? | Php ftp_connect doesen't work on AWS server |
I think the only way is to parse the console output.#get the console output of the instance
aws ec2 get-console-output --instance-id <instance id> |\
#use jq to get the Output field
jq .Output -r |\
#use sed to find the interesting bits
sed -n -e '1,/-----BEGIN SSH HOST KEY KEYS-----/d; /-----END SSH HOST KEY KEYS-----/q; p'
Caveats, which might not matter depending on your application: the aws output is JSON, so we need to parse it; the output requires some sed massage; and the output may not be the same depending on the AMI. | When I connect to the AWS EC2 instance using ssh for the first time, I get an error like the one below because the host key is not stored in the ssh known_hosts file.
The authenticity of host 'x.x.x.x' can't be
established. ECDSA key fingerprint is
xx:yy:.... Are you sure you want
to continue connecting (yes/no)?
Now, I'm automating ssh. I often just add the StrictHostKeyChecking option to the ssh command to avoid this message.
But I feel that is not a very safe way and could allow a man-in-the-middle attack.
Is there any (or good) way to get host key safely on AWS EC2? | AWS EC2 : Safe way to get host public key |
The LWRP listed at https://github.com/opscode-cookbooks/aws works. It will help you add, in a recipe, the tags required for specific instances. The LWRP is aws_resource_tag. A commenter (stuart-warren) ended up setting up github.com/stuart-warren/chef-aws-tag instead. (See the sketch below this Q&A.) | I've been using various community cookbooks to set up a stack. I'm aware that AWS OpsWorks sets some tags for you (stack name, layer name), but I need to set some tags myself. There doesn't appear to be a way to set them through the OpsWorks API, so I'll assume I need to use a cookbook/recipe to set them somehow. Is there an existing method/cookbook to do so, or do I need to go and learn Chef? | How to set EC2 tags on AWS Opsworks |
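A rough sketch of a recipe using that LWRP. The attribute names (resource_id, tags) and the :update action are as I recall them from the aws cookbook's README, so treat them as assumptions and check the cookbook version you pin; the tag values are placeholders.

```ruby
include_recipe 'aws' # the opscode 'aws' cookbook that ships aws_resource_tag

aws_resource_tag 'tag this instance' do
  resource_id node['ec2']['instance_id'] # Ohai's EC2 plugin exposes the instance id
  tags('Environment' => 'staging',       # placeholder tag values
       'Team'        => 'ops')
  # With an IAM instance role you may be able to omit keys; otherwise set
  # aws_access_key / aws_secret_access_key here.
  action :update
end
```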
The correct format is reverse:<?php
// wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY
// HMAC(HMAC(HMAC(HMAC("AWS4" + kSecret,"20110909"),"us-east-1"),"iam"),"aws4_request")
$sign = hash_hmac('sha256', '20110909', 'AWS4wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY', true );
$sign = hash_hmac('sha256', 'us-east-1', $sign, true );
$sign = hash_hmac('sha256', 'iam', $sign, true );
$sign = hash_hmac('sha256', 'aws4_request', $sign, true );
$sign = str_split( $sign );
echo "152 241 216 137 254 196 244 66 26 220 82 43 171 12 225 248 46 105 41 194 98 237 21 229 169 76 144 239 209 227 176 231\n";
foreach( $sign as $t )
echo ord($t) . ' ';
This matches the given string beautifully. Hope this helps someone! (Per a comment by Jeremy Lindblom, the relevant part of the AWS SDK for PHP lives under src/Aws/Common/Signature/ in the aws-sdk-php repository.) | I'm having an issue with hash_hmac and AWS Signature Version 4. I'm using the example they laid out here: http://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html The output is from the AWS website. I want to match it, and I can't seem to see what I'm doing wrong. They wanted binary output and that's what I provide in each step. Here is my test file:
<?php
// wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY
// HMAC(HMAC(HMAC(HMAC("AWS4" + kSecret,"20110909"),"us-east-1"),"iam"),"aws4_request")
$sign = hash_hmac('sha256', 'AWS4wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY', '20110909', true );
$sign = hash_hmac('sha256', $sign, 'us-east-1', true );
$sign = hash_hmac('sha256', $sign, 'iam', true );
$sign = hash_hmac('sha256', $sign, 'aws4_request', true );
$sign = str_split( $sign );
echo "152 241 216 137 254 196 244 66 26 220 82 43 171 12 225 248 46 105 41 194 98 237 21 229 169 76 144 239 209 227 176 231\n";
foreach( $sign as $t )
echo ord($t) . ' '; | PHP hash_hmac not matching AWS Signature 4 example |
Here's what I do that works:- name: Add machine to elb
local_action:
module: ec2_elb
aws_access_key: "{{lookup('env', 'AWS_ACCESS_KEY')}}"
aws_secret_key: "{{lookup('env', 'AWS_SECRET_KEY')}}"
region: "{{ansible_ec2_placement_region}}"
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{elb_name}}"
state: present
The biggest issue was the access and secret keys. The ec2_elb module doesn't seem to use the environment variables or read ~/.boto, so I had to pass them manually. The ansible_ec2_* variables are available if you use the ec2_facts module. You can fill these parameters in yourself, of course. | I am trying to use Ansible to create an EC2 instance, configure a web server and then register it to a load balancer. I have no problem creating the EC2 instance, nor configuring the web server, but all attempts to register it against an existing load balancer fail with varying errors depending on the code I use. Has anyone had success in doing this? Here are the links to the Ansible documentation for the ec2 and ec2_elb modules: http://docs.ansible.com/ec2_module.html http://docs.ansible.com/ec2_elb_module.html Alternatively, if it is not possible to register the EC2 instance against the ELB post creation, I would settle for another 'play' that collects all EC2 instances with a certain name and loops through them, adding them to the ELB. | How do I create an AWS EC2 instance and assign it to an ELB using Ansible? |
Use the AWS Command Line Interface unified tool instead.
aws ec2 describe-instance-status --instance-ids i-01234567 --filters Name=instance-status.reachability,Values=passed
{
"InstanceStatuses": [
{
"InstanceId": "i-01234567",
"InstanceState": {
"Code": 16,
"Name": "running"
},
"AvailabilityZone": "us-west-2c",
"SystemStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
},
"InstanceStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
}
}
]
}
| I want to make sure the instance has passed the two status checks (system/instance reachability check) using the command line. When I run this:
ec2-describe-instance-status
ec2-describe-instance-status XX($InstanceID)
it would show running instances like:
INSTANCE $InstanceID $REGION running 16
But when I tried adding a filter to make sure the instance passed the status check:
ec2-describe-instance-status XX($InstanceID) --filter instance-status.reachability=passed
ec2-describe-instance-status XX($InstanceID) --filter "instance-status.reachability=passed"
ec2-describe-instance-status --filter instance-status.reachability=passed
nothing ever returned. I've double-checked that the instances are running fine and actually passed the 2 status checks, but why is nothing returned after applying the filters? Update: in response to Rico, I tried the -v option. ec2-describe-instance-status -v returns one item in the instanceStatusSet, with the fields:
<item>
<instanceId>i-XXX</instanceId>
<availabilityZone>us-east-1d</availabilityZone>
<instanceState>
<code>16</code>
<name>running</name>
</instanceState>
</item>
while
ec2-describe-instance-status --filter instance-status.reachability=passed -v
ec2-describe-instance-status --filter "instance-status.reachability=passed" -v
both return an empty instanceStatusSet... | Check instance status using filter with EC2 API |
You get the above error because the actual .jar file is not in your repository. The .lastUpdated files are not the actual .jar files, nor the latest updated version of them; they are just book-keeping files. For example, let's create a nonsense dependency:
<dependency>
<groupId>asdfasdf</groupId>
<artifactId>sadffweklsfduasfsdjf</artifactId>
<version>1.0</version>
</dependency>
and run mvn install on it. Maven then creates the directory ~/.m2/repository/asdfasdf/sadffweklsfduasfsdjf/1.0 and populates it with the sadffweklsfduasfsdjf-1.0.jar.lastUpdated and sadffweklsfduasfsdjf-1.0.pom.lastUpdated files. Inside one of the .lastUpdated files, you simply find something like:
1 #NOTE: This is an Aether internal implementation file, its format can be changed without prior notice.
2 #Fri Jan 03 09:12:05 IST 2014
3 http\://repo.maven.apache.org/maven2/.lastUpdated=1388733125394
4 http\://repo.maven.apache.org/maven2/.error=and not the actual .jar or .pom file.To get rid of this error, you need to make sure that Maven is actually able to reach the remote repository and download the actual Amazon AWS jar that you need.ShareFolloweditedSep 17, 2018 at 9:46Shridutt Kothari7,35433 gold badges4343 silver badges6262 bronze badgesansweredJan 3, 2014 at 7:19solaticsolatic5633 bronze badgesAdd a comment| | I am using Linux system.Undermylocal maven repodirectory/Users/username/.m2/repository/I have the following directory:com/amazonaws/android/core/amazon-aws-android-core/1.6.1Which means thefull pathof the directory is:/Users/username/.m2/repository/com/amazonaws/android/core/amazon-aws-android-core/1.6.1Underthe above directory, I have files:amazon-aws-android-core-1.6.1.jar.lastUpdated
amazon-aws-android-core-1.6.1.pom.lastUpdatedIn my project, mypom.xmlcontains :<dependency>
<groupId>com.amazonaws.android.core</groupId>
<artifactId>amazon-aws-android-core</artifactId>
<version>1.6.1</version>
<type>jar</type>
</dependency>But when I runmvn clean install, I always get the followingerror message:The following artifacts could not be resolved: com.amazonaws.android.core:amazon-aws-android-core:jar:1.6.1Why I get the above error & How to get rid of it? | amazon aws artifacts could not be resolved |
Try using the _POST_FLAT_FILE_PRICEANDQUANTITYONLY_UPDATE_DATA_ feed type and send a CSV file (tab delimited) in the body of the HTTPS request instead of an XML file. The first line of the file should be: sku price quantity (separated by tabs), followed by lines containing the values (separated by tabs). (See the example below this Q&A.) | I am using the Amazon API to update a product's quantity using the "_POST_INVENTORY_AVAILABILITY_DATA_" feed type, like:
<?xml version="1.0" encoding="utf-8" ?>
<AmazonEnvelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
<Header>
<DocumentVersion>1.01</DocumentVersion>
<MerchantIdentifier>$merchantID</MerchantIdentifier>
</Header>
<MessageType>Inventory</MessageType>
<Message>
<MessageID>1</MessageID>
<OperationType>Update</OperationType>
<Inventory>
<SKU>$SKU</SKU>
<Quantity>8</Quantity>
</Inventory>
</Message>
</AmazonEnvelope>
<?xml version="1.0"?>
<SubmitFeedResponse xmlns="http://mws.amazonaws.com/doc/2009-01-01/">
<SubmitFeedResult>
<FeedSubmissionInfo>
<FeedSubmissionId>6791310806</FeedSubmissionId>
<FeedType>_POST_INVENTORY_AVAILABILITY_DATA_</FeedType>
<SubmittedDate>2013-03-21T19:48:37+00:00</SubmittedDate>
<FeedProcessingStatus>_SUBMITTED_</FeedProcessingStatus>
</FeedSubmissionInfo>
</SubmitFeedResult>
<ResponseMetadata>
<RequestId>fd07bf18-4f6a-4786-bdf9-9d4db50956d0</RequestId>
</ResponseMetadata>
</SubmitFeedResponse>
But when I try to update 15k or more products at a time by loading products using a Magento collection, the quantity is not updated in Amazon even after a few hours. Is this the right method or do I need to use any other method? Can anyone help me? Thanks in advance. | Amazon MWS - Update product quantity |
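For reference, the flat file the answer above describes is just a tab-separated text file posted as the feed body. A tiny illustrative example (columns separated by tabs; SKUs and values made up):

```
sku	price	quantity
ABC-123	19.99	8
DEF-456	5.49	120
```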
If you use Java 1.7, you can use a try-with-resources block. The object will be closed automatically when leaving the block.
GetObjectRequest req = new GetObjectRequest(bucketName, fileName);
try(S3Object object = s3Client.getObject(req)) {
...
} catch(AmazonServiceException e) {
if (e.getErrorCode().equals("NoSuchKey")) { /* handle the missing key here */ }
}
If you use Java 1.6 or an earlier version, you need to do it in a finally block:
S3Object object = null;
GetObjectRequest req = new GetObjectRequest(bucketName, fileName);
try {
object = s3Client.getObject(req);
...
} catch(AmazonServiceException e) {
if (e.getErrorCode().equals("NoSuchKey")) { /* handle the missing key here */ }
} finally {
if (object != null) {
object.close();
}
}
| I'm using the AWS Java SDK to upload files to an AWS Management Console bucket. However, if there is no such file online the first time I try to get access to it, my code will catch the exception (NoSuchKey). Then I want to close the connection. The problem is I don't have any reference to close that connection because of the exception (the original reference will be null). Here is my code:
S3Object object = null;
GetObjectRequest req = new GetObjectRequest(bucketName, fileName);
try{
logconfig();
object = s3Client.getObject(req);
...
catch(AmazonServiceException e){
if(e.getErrorCode().equals("NoSuchKey"))
I was trying to use "object" as a reference to close the connection between my Eclipse and AWS, but apparently "object" is null when the exception happened.
Can anyone tell me how to do it?
Furthermore, because I can't close the connection, my console will have this warning every 60 seconds:
8351167 [java-sdk-http-connection-reaper] DEBUG org.apache.http.impl.conn.PoolingClientConnectionManager - Closing connections idle longer than 60 SECONDS | How to close AWS connection if there is no such key in the query |
The list bucket operation is on the bucket, so your policy must be updated to allow specific operations on the bucket. Using a policy like the one below would give users the ability to upload, get and delete their files, and to list their files:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":["s3:PutObject","s3:GetObject","s3:DeleteObject"],
"Resource":"arn:aws:s3:::__MY_APPS_BUCKET_NAME__/__USERNAME__/*"
},
{
"Effect":"Allow",
"Action":"s3:ListBucket",
"Resource":"arn:aws:s3:::__MY_APPS_BUCKET_NAME__",
"Condition":{"StringLike":{"s3:prefix":"__USERNAME__/"}}
}
]
}
| I have set up a webservice to create temporary credentials to a bucket in Amazon S3 so that another application can use these temporary credentials to upload files. I want to create individual folders in the bucket for each userId, and the temporary credentials should only allow access to the folder for that particular user. Here is the policy that I am currently using:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::myTestBucket/1/*"
]
}
]}
So far, this allows the calling applications to upload files to folder 1 in myTestBucket and not to folder 2, which is expected.
One of the applications now needs to list the objects within the folder for each of the users. Using this same policy, I get the following exception when I do this with the temporary credentials:
Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7BC4C177E8CA1762, AWS Error Code: AccessDenied, AWS Error Message: Access Denied
I had to implement it myself, but the official aws-sdk-js now supports browser side JavaScript.https://github.com/aws/aws-sdk-jsShareFollowansweredMar 3, 2014 at 12:38Cagdas AltinkayaCagdas Altinkaya1,72022 gold badges2020 silver badges3232 bronze badgesAdd a comment| | On a project, I am required to access Amazon's DynamoDB directly from the browser. There is aws-sdk-js for node.js, but not for the browser side JS, so I'm trying to access using Amazon's HTTP API. Are there any implementations for this?Are there any implementations available for the signing process? (http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html) | Accessing DynamoDB over the browser |
After much research it appears that the answer is no: there is no out-of-the-box solution provided by AWS. However, with a simple script running on each node, we can push data into CloudWatch and retrieve the data via the CloudWatch API.
#!/bin/bash
export JAVA_HOME=/usr/java/latest
export AWS_CLOUDWATCH_HOME=/opt/aws
cd /opt/aws
./bin/mon-put-data -n 'Custom/connCounts' -m 'ConnectionCounts' -v `netstat -anp | awk '{print $4" "$6}' |grep 'PORT_NUMBER ESTABLISHED' | wc -l` --aws-credential-file /opt/aws/.ec2config
| Is there an AWS API method (or other procedure) to determine the number of clients connected to a given Elastic Load Balancer? Reviewing the ELB API documentation there does not seem to be a way. CloudWatch also does not seem to provide a method. Hoping to find some solutions / workarounds. | Is there a way to get client connections count in AWS ELB |
Performance of containers is very close to bare metal (or, in that case, to VMs, since you will be running in VMs). Specifically: on volumes, disk I/O performance is native; outside of volumes, there is a tiny overhead when opening files, and another overhead when doing the first change to a file in the original image (as the file gets copied to the RW layer), but after that, performance is native; network connections go through an extra NAT layer, which should amount to <<1ms (rather 0.01 to 0.1ms) until you get thousands of requests per second, and then you can bypass the NAT layer with tools like Pipework; CPU performance is native; memory performance is native by default, but if you enable memory accounting+limiting there is an impact (a few %, up to 5-10% for memory-intensive workloads which grow and shrink their memory usage a lot). Status monitoring should be exactly the same as for regular apps. Network configuration: if your apps expose well-known TCP ports, you will be fine with Docker's port-mapping features; if you need large ranges of TCP ports, or dynamic allocation of ports, the above-mentioned Pipework will help. Don't hesitate if you have other questions! We also have an IRC channel (#docker on Freenode) and a mailing list (docker-user on Google Groups). | I want to know if this is a good idea to go with. I have a couple of Java services which run in different boxes in an AWS VPC right now. Recently I read about Docker and think it is really awesome. So my question is whether it is a good idea to replace these current boxes with Docker boxes and put my Java services on top of them, still in the VPC. The biggest benefit I could imagine is that by doing so it could save us the amount of work we spend on testing, integration, debugging and so on. But I do have concerns about things like: performance loss (if any)?
network configuring?
service status monitoring?I am really newbie on docker, so plz point me to any resource which you think might help, thx a lot. | docker container in AWS VPC, good idea or not? [closed] |
There were two possible solutions here:Set thedc_relay_domainsto*inupdate-exim4.conf.conf;Use SMTP authentication to ensure that the sender is allowed to have unrestricted access to the sending capabilities of Exim4.Going with option 2 is the only way to prevent an open relay and so I did that. I set an SMTP username and password in my mail clients (Outlook 2007, Thunderbird etc.) and uncommented thecram_md5_serverandlogin_serverauthenticator in Exims/etc/exim4/conf.d/auth/30_exim4-config_examplesfile. Then updated withdpkg-reconfigure exim4-config.ShareFollowansweredOct 22, 2013 at 8:43DoahhDoahh59011 gold badge88 silver badges2121 bronze badgesAdd a comment| | I am not very good with mail server configuration but I have an aws instance that can send mail to some domains such as mydomain.com. However, when I send to googlemail.com I get the error in the mail.log file:H=(blerg) [95.144.47.184] F=<[email protected]> rejected RCPT <[email protected]>: relay not permittedI have added the following into the DNS through Route53 but I am not sure that it quite what the error is referring to:mydomain.com. SPF "v=spf1 ip4:54.229.217.48"Does anyone have any pointers? I haven't managed to find out much that is helpful but I have played with Exim4's:dc_relay_netsanddpkg-reconfigure exim4'domains to relay mail for' 'IP addresses to relay mail for' but with no success. | AWS and Exim4 - 550 relay not permitted |
The Shutdown Behavior of EC2 instance created by Elastic beanstalk is to terminate on stop. If you can change those settings from your application end or API. I guess you problem is solved.ShareFollowansweredOct 9, 2013 at 14:40Jeevan DongreJeevan Dongre4,6291313 gold badges6868 silver badges132132 bronze badgesAdd a comment| | I have an EC2 instance that was built by elastic beanstalk and it is running fine within its environment.I need to take a snapshot of the EC2 volume in order to have a backup so as per what I assume is best practice, I go to the EC2 console and stop the EC2 instance that is running under elsatic beanstalk.The EC2 in instance begins to STOP and then TERMINATES!This is obviously a big problem. How do I STOP an EC2 instance temporarily that is running under EB? | Elastic Beanstalk terminates my EC2 instance when all I want to do was stop it temporarily |
ssh 54.218.73.244 fuser -k 7002/tcpShareFollowansweredSep 20, 2013 at 16:02Peter LyonsPeter Lyons144k3131 gold badges280280 silver badges275275 bronze badges0Add a comment| | I am trying to kill a process running inhttp://54.218.73.244:7002/i have used the commandfuser -k 7002/tcpit is not working the process continues to runI am using expressJS in server to run the server scripthow can i resolve this ? | How to kill a process running in a port in a remote aws server |
Rubber is essentially a Capistrano plugin to automate deployments to Amazon EC2. You don't have to manually install any of these packages; Rubber will install them for you (in the bootstrap phase). All you need to do is find the right recipe (template). You can find the list of recipes on Rubber's GitHub page: https://github.com/rubber/rubber/tree/master/templates For the exact configuration that you mentioned, the following template should work: complete_unicorn_nginx_postgresql | I am planning on using Rubber to deploy a Rails app on Amazon EC2. Do I need to install Ruby, Rails, Postgres, Nginx and Unicorn on the EC2 server before running Rubber? Or does Rubber do all of these installations on EC2? Please advise. Thanks. | First time using Rubber to deploy Rails app on Amazon EC2 |
You’re starting Sinatra in the development environment. When running in developmentSinatra only listens to requests from the local machine.There a few ways to change this, the simplest is probably to run in theproduction environment, e.g.:$ ruby myapp.rb -e productionYou could also explicitly set the bind variable if you wanted to keep running in development:set :bind, '0.0.0.0' # to listen on all interfacesShareFollowansweredSep 5, 2013 at 12:22mattmatt79k88 gold badges167167 silver badges198198 bronze badges2Awesome. the-e productionflag didn't work in my case but theset :bind, '0.0.0.0'opened access to the port 4567. Thank you!–alexhuang91Sep 5, 2013 at 17:211@user1296908 Are you usingmodular style? It doesn’t look like the command line flags work for modular style apps, only classic style. You can set theRACK_ENVenv variable from the command line if you want, modular apps will respect the setting then.–mattSep 5, 2013 at 17:54Add a comment| | I am trying to deploy a Ruby Sinatra api onto port 4567 of an EC2 micro instance.I have created a Security Group with the following rules (and created the instance with said security group):--------------------------------
| Ports | Protocol | Source |
--------------------------------
| 22 | tcp | 0.0.0.0/0 |
| 80 | tcp | 0.0.0.0/0 |
| 443 | tcp | 0.0.0.0/0 |
| 4567 | tcp | 0.0.0.0/0 |
--------------------------------I bound myapp.rb on port 4567 (the default, but for verbosity):set :port, 4567and ran the service:ruby myapp.rb
[2013-09-05 03:12:54] INFO WEBrick 1.3.1
[2013-09-05 03:12:54] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from WEBrick
[2013-09-05 03:12:54] INFO WEBrick::HTTPServer#start: pid=1811 port=4567Usednmapwhile ssh'd in the EC2 instance on localhost:Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:13 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00019s latency).
PORT STATE SERVICE
4567/tcp open tram
Nmap done: 1 IP address (1 host up) scanned in 0.08 secondsUsednmapwhile ssh'd in the EC2 instance on the external ip:Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:15 UTC
Nmap scan report for <removed>
Host is up (0.0036s latency).
PORT STATE SERVICE
4567/tcp closed tram
Nmap done: 1 IP address (1 host up) scanned in 0.11 secondsHow do I change the state of the port from closed to open? | Sinatra EC2 Deployment Security Group Error |
A PTR record is for reverse DNS lookup; reverse DNS resolution is the determination of the domain name associated with a given IP address using the Domain Name Service (DNS). DNS is not capable of acting like an HTTP redirect in any way. I think you should create an "A" record for www and set up a server-side redirect. (See the sketch below this Q&A.) | I have a problem with a PTR (Pointer) record in Route 53. I want http://tradehubz.com to redirect to http://www.tradehubz.com. From what I gathered, the way to do it is via the PTR record, which should do an HTTP Redirect (301)? [Screenshots in the original question: "Example of Configuration on Route53" and "Ping Result".] | Amazon Route 53 Pointer Record not working |
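A sketch of the server-side redirect suggested above, assuming nginx fronts the site (an equivalent Apache RewriteRule, or an S3/CloudFront redirect, would do the same job):

```nginx
server {
    listen 80;
    server_name tradehubz.com;  # bare apex only; www is served by the real site
    return 301 http://www.tradehubz.com$request_uri;
}
```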
It does make sense, but you can't do it. You can configure S3 and DNS to route one hostname to one bucket, but not to just part of a bucket. If, instead of folders, you were willing to use two different buckets, then it is easy to set up the redirection of the two hostnames to the two different buckets. (See the sketch below this Q&A.) | I want to dynamically route subdomains to S3 bucket folders. What's the best way to achieve this? Example: I have a bucket called 'example' which has two folders, one called 'A' and another one called 'B'. Now I want to dynamically route a.example.com to folder 'A' and b.example.com to folder 'B', and so on. Hope this makes sense. Thanks | Subdomains for S3 Bucket Folders |
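A sketch of the two-bucket variant from the answer above: name each bucket exactly after its hostname, enable static website hosting on both, and point DNS at the S3 website endpoints. The region in the endpoint is an assumption:

```
; illustrative DNS records (zone-file style)
a.example.com.  CNAME  a.example.com.s3-website-us-east-1.amazonaws.com.
b.example.com.  CNAME  b.example.com.s3-website-us-east-1.amazonaws.com.
```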
Finally got the solution:
for group in self.conn.get_all_security_groups():
for rule in group.rules:
print dir(rule)
for grant in rule.grants:
print dir(grant)
Thanks to the boto user mailing list. (See the sketch below this Q&A.) | I am able to list all the security groups using get_all_security_groups(). I am also able to list inbound rules for a security group, but I want to see the source as well for a rule (inbound rule) using boto. I tried to find out on Google but could not see any way to see the source for an inbound rule. If anyone knows, please share. | listing inbounds for a security group of AWS EC2 inctances using Python boto |
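A sketch that prints the actual rule fields instead of dir(); the attribute names are from boto 2's security group objects as I recall them (ip_protocol, from_port, to_port, grants, cidr_ip, group_id), so verify them against your boto version.

```python
# Print each inbound rule with its source (CIDR block or security group)
for group in conn.get_all_security_groups():
    for rule in group.rules:
        for grant in rule.grants:
            source = grant.cidr_ip or grant.group_id  # one of the two is set
            print "%s: %s %s-%s from %s" % (
                group.name, rule.ip_protocol, rule.from_port, rule.to_port, source)
```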
There isn't much difference if you are sending only a few emails. But if you are sending many emails daily, like user notifications, promotions etc., then Amazon doesn't like them being sent from EC2. Bulk emailing might get EC2 IP ranges blacklisted, so when you send bulk emails from EC2, AWS will issue a notice; I have seen that when I had a configuration issue with my script and sent a few hundred emails in a very short period. Amazon provides a way to remove these limitations on EC2 by submitting a request through the link given below: https://portal.aws.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request You might have to set up Elastic IPs for the EC2 instances, a DKIM signing mechanism, an SPF record, antispam, TLS etc. Sending email using the AWS SES APIs is very easy (at least from my point of view) compared to the above config, and if you are an EC2 user then SES is available free of charge. | I am confused about sending emails on EC2.
I want to know why we would need SES if we can send emails using sendmail like we normally use on VPS servers. What's the benefit of that? Am I missing something? | What is the difference between sendmail and Amazon SES on Ec2 |
I assume that you're talking about Amazon's Buyer-Seller Messaging Service. I've looked around and there does not seem to be any API to access that information. But you can relay those mails to a regular email client (which might be integrated into your web app through POP3 or IMAP). Your responses (sent through SMTP) will appear in Seller Central just like manually entered messages. From the comments: as of 2017 there still appears to be no API for this, so the options remain POP/IMAP plus SMTP; tools like ReplyManager and ChannelReply document POP/SMTP setups for exactly this. | I am working on getting all orders from Amazon and displaying them on my site. I just want to get the comments that are placed by customers against their order. I was looking at the Amazon MWS documentation but didn't find anything. I also tried to search on Google but nothing was found. Is it possible to get comments on orders? If so, then how? | Amazon MWS : Get comments of orders |
You should configure them through .ebextensions, as explained here: http://aws.typepad.com/aws/2012/10/customize-elastic-beanstalk-using-configuration-files.html See also this similar question: AWS Elastic Beanstalk, running a cronjob. (See the sketch below this Q&A.) | My application requires these 3 cronjobs to work properly:
00 00 * * * wget http://www.mysite.com/cron/archivebenefits > /dev/null 2>&1
*/15 * * * * wget http://www.mysite.com/cron/archiveevents > /dev/null 2>&1
*/15 * * * * wget http://www.mysite.com/cron/sendlists > /dev/null 2>&1
I'm using AWS Elastic Beanstalk and the problem is that I cannot set them up. As the cronjobs just make remote HTTP requests, I could also run them outside of my servers. Anyone have any ideas? NOTE: Using a separate EC2 instance for them is unfeasible due to the price. | Cronjobs and AWS Elastic Beanstalk |
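A sketch of the .ebextensions approach from the answer above: a config file in the application's .ebextensions directory that drops the three cron entries into /etc/cron.d on each instance. The file name and cron.d path are arbitrary choices, not required names.

```yaml
# .ebextensions/cron.config (illustrative)
files:
  "/etc/cron.d/mysite-cron":
    mode: "000644"
    owner: root
    group: root
    content: |
      00 00 * * * root wget http://www.mysite.com/cron/archivebenefits > /dev/null 2>&1
      */15 * * * * root wget http://www.mysite.com/cron/archiveevents > /dev/null 2>&1
      */15 * * * * root wget http://www.mysite.com/cron/sendlists > /dev/null 2>&1
```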
For more targeted results, you can use Browse Node IDs in aBrowseNodeLookuprequest.According to the documentation you can useBrowseNodeLookupiteratively to navigate through the browse node hierarchy to reach the node that most appropriately suits your search.Then you can use the browse node ID in anItemSearchrequest.Resourceshttp://docs.aws.amazon.com/AWSECommerceService/latest/DG/BrowseNodeIDs.htmlhttp://docs.aws.amazon.com/AWSECommerceService/latest/DG/BrowseNodeLookup.htmlhttp://docs.aws.amazon.com/AWSECommerceService/latest/DG/ItemSearch.htmlShareFollowansweredNov 3, 2013 at 23:58RafaSashiRafaSashi16.9k88 gold badges8585 silver badges9696 bronze badges1Hi please Help me when use signed url it working but after some time give error of time stamp.–HK_KhuntApr 25, 2015 at 5:57Add a comment| | I am currently using the amazon advertising api and I understand how you can specify the SearchIndex to tailor the results by index/department. (SearchIndex = All, Books, Toys, Kitchen, etc)In the search results, they list the ProductGroup which per theapi docsis:ProductGroup Product - category; similar to search indexI would like to be able to take the product group of a search result and show more results from that given group/index/department. In other words, given a result's ProductGroup, I want to search again with a more specific SearchIndex based on the ProductGroup. (My initial search uses "All" index).I can not simply throw one of the result's ProductGroup value and use it as the index because they do not match up 100%. For example, an item may have a product group of "Toy" or "Book" which is not the name of a SearchIndex (but 'Toys' and 'Books' are valid names).Is there a way to come up with a more specific SearchIndex value given the ProductGroup? I am aware of the list of allSearchIndex values listed by locale.One solution I am considering is taking all of the valid SearchIndex values listed in that link and mapping ProductGroups to them myself (It seems pretty straight-forward that a group value of 'Toy' indicates precense in a search index of 'Toys', etc), but I wanted to see if anyone more familiar with the API has a real solution for this. | Amazon product Api ~ Relationship between SearchIndex and ProductGroup? |
EDIT: see other answer about SES.AWS doesn't have a service like that. But you can easily write a service and run it on a Micro for less than 20 dollars a month.You could use a regular mail server, or a highly programmable one likeLamson.ShareFolloweditedOct 15, 2018 at 14:17Sasank Mukkamala1,5341414 silver badges2222 bronze badgesansweredMay 7, 2013 at 2:00BraveNewCurrencyBraveNewCurrency12.9k33 gold badges4343 silver badges5151 bronze badges21Why do you say that? SES natively supportsreceiving emails and saving them to an S3 bucketwhich seems to fit this use case.–Joe TaylorJan 11, 2016 at 14:46Did not know about that.–BraveNewCurrencyJan 12, 2016 at 20:51Add a comment| | I need to:Create a custom email address for a specific directory within S3 bucketStore all mail sent to the address, including any attachments, as a flat file in that bucketDoes AWS provide any capability like this? If not, is there any way to do it without standing up a new mail server?UpdateI've found a few SaaS tools that provide similar functionality:SMTP Logicprovides an email gateway that can, among other things, route attachments or archive copies of mail to S3.cloudmailinoffers attachment storage on S3 as a supplement to their primary email-to-webapp function.If you use Google Apps and have the ability to push Atom feeds to S3, you can use theGmail inbox feed. This might work for my specific case, but is not a very good general solution. | Email address for S3 bucket? |
Right click on the instance in the AWS console. Under "Instance Lifecycle", select "Stop". Wait for the instance to stop by refreshing the console or waiting for it to refresh. Once it's in "stopped" state, right click on the instance again, and click "Start". Note: this isnotan operating system reboot. You're actually stopping the reserved instance in the hypervisor and bringing it back up, which should route it to new hardware.The instance will come up on new hardware, and you'll have manually "scheduled" the maintenance.This is also how you'd increase the instance size, if you ever wanted more power than at1.micro. You'd stop the instance, "Change Instance Type", and start it again.ShareFolloweditedApr 16, 2013 at 13:23answeredApr 16, 2013 at 11:38ChristopherChristopher43.4k1111 gold badges8181 silver badges9999 bronze badges2@instancereboot is it instance reboot?.in the hint it is shwn as network maintenence and power maintenence.when i search i found that instance reboot is need to be done,can u help me to clear on this.–hackerApr 16, 2013 at 12:26@hacker: Yes. Stopping the instance and starting the instance again will reboot it. You could also use the "Reboot" command in the same menu in the console.–ChristopherApr 16, 2013 at 13:23Add a comment| | I am usingAWS t1 micro instanceto run some webservices of my application (LAMP server). And also one admin panel is there running with SQLite DB.Now I had overtaken my free tier limit. I have given a scheduled event as system maintenance, my instance is ebs backed, I want to do it manually before schedule. It is shown as system maintenance. Is it instance reboot or system reboot? I am getting confused.Can anybody help me in achieving this manually? | how to do system maintenance scheduled event in ec2 manually? |
Remove the socket and port number from your database.yml file and then try; it will work. (Also check whether you have installed the MySQL client libraries on your instance. A corrected database.yml sketch follows this Q&A.) | I have an EC2 server and an RDS server,
and a Ruby On Rails App
connecting to the rds with these settings worked for me in local ENV:host: myappnameandhash.us-west-2.rds.amazonaws.com
adapter: mysql2
encoding: utf8
reconnect: false
database: mainDb
pool: 20
username: root
password: xxxx
socket: /var/run/mysqld/mysqld.sock
port: 3306
but on my EC2 server I don't have that mysqld.sock file,
so I get this error: FATAL: failed to connect to MySQL (error=Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)). What do I need to install in order to have the socket? Thanks. Update: I removed the socket definition and the port.
I deploy using Capistrano. Now I ssh to my server and go to the "current" folder. There I try to run rake ts:start
and I get the following: rake aborted!
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
but I don't even have the socket definition in my database.yml file anymore.
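As a quick sanity check that the EC2 host can actually reach the RDS endpoint over TCP (which is what the driver uses once the socket setting is removed), something along these lines works; the hostname is the placeholder endpoint from the question:

import socket

host = "myappnameandhash.us-west-2.rds.amazonaws.com"  # RDS endpoint from database.yml
port = 3306

# Raises an exception if the security group or network path blocks the connection.
with socket.create_connection((host, port), timeout=5):
    print("TCP connection to RDS succeeded")

If this fails, the problem is the RDS security group or VPC routing rather than the Rails configuration.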
Use the putObject method, which is the replacement for putObjectFile:
<?php
// Simple PUT:
if (S3::putObject(S3::inputFile($file), $bucket, $uri, S3::ACL_PRIVATE)) {
echo "File uploaded.";
} else {
echo "Failed to upload file.";
}
?> | I have been using s3.php and the function putObjectFile() to put objects onto S3, but now I see this is deprecated. I've been looking around at the best class to use going forward. What would be the best AWS class to use for putting and getting objects? Also, if you have some basic examples, or a page with examples of how to do these two functions, it would be great. Thanks. | AWS S3 - Putting Objects and Getting Objects
Which (as far as I understand) would mean that if an AZ were unavailable then machines would be started in other zones. That's correct, and it would indeed be nice to have this option available within an Amazon VPC as well when running instances directly via the available Amazon EC2 API actions. Unfortunately both the RunInstances and the RequestSpotInstances API actions only allow you to specify the optional parameters SubnetId or LaunchSpecification.SubnetId respectively (the ID of the subnet in which to launch the [Spot] Instance), so they have no information about which VPC you would want to launch the instance into if no subnet is specified. Workaround: you can achieve the desired behavior indirectly via Auto Scaling by means of its CreateAutoScalingGroup API action; see the parameter VPCZoneIdentifier, "a comma-separated list of subnet identifiers of Amazon Virtual Private Clouds (Amazon VPCs)". This feature is also available via the AutoScalingGroup resource type within AWS CloudFormation. | I have a pretty standard stack: RDS and 2 EC2 instances using ELB. Because I wanted the ELB to be restricted to a particular IP range, I've launched the stack in a VPC, across 2 subnets for DR reasons. I use several ephemeral EC2 machines, which when not in a VPC I allowed to start up in any availability zone, which (as far as I understand) would mean that if an AZ were unavailable then machines would be started in other zones. Is there a way to emulate this in a VPC? Is there a way of saying "launch a machine in any subnet in a VPC"? If not, it's fairly easy to work around by picking a subnet at random and, if it fails, trying another. I just wondered if there was a supported method that's cleaner. I'm using Python and boto. Thanks. | Launch EC2 instance in any VPC subnet emulating "No Preference" option in non-VPC launch
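Since the asker mentions boto, here is a rough sketch of the Auto Scaling workaround described above using boto3; the group name, launch configuration, and subnet IDs are placeholders:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The group spans both VPC subnets, so instances can be launched in either AZ.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ephemeral-workers",        # placeholder
    LaunchConfigurationName="ephemeral-workers-lc",  # placeholder, created beforehand
    MinSize=0,
    MaxSize=4,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # one subnet per AZ
)

Auto Scaling then picks a subnet (and therefore an AZ) for each instance it launches, which approximates the "no preference" behavior.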
It really depends on your application. Generally, though, you can expect it to take 5-10 minutes for a new instance to come online, register with the ELB, and begin serving traffic. Autoscaling isn't really intended for bursting; it works better when you have predictable traffic patterns. But with custom CloudWatch metrics you can do some pretty cool, predictive things that autoscale based on external factors such as the volume of Twitter mentions, Google Analytics data, or the number of active user sessions. | I'm trying to decide what metric to use as a trigger for Elastic Beanstalk auto scaling to fire up a new instance, and what I'm leaning towards at the moment is response time: if a user doesn't get a response in, say, 4 seconds, another EC2 instance is fired up. What I'm struggling to find out, however, is how long it takes on average for Elastic Beanstalk to bring another instance online. I'm just concerned that if it gets to the point where the existing instances aren't coping with the load, are people going to be refused a connection and/or experience an extremely slow website for several minutes until auto scaling detects the problem and brings another instance online? If anyone has experience of this with an ecommerce solution I would love to hear what auto scaling configuration you find works to ensure a seamless user experience. | Elastic beanstalk auto scaling - how long to bring up a new instance
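The custom-metric idea mentioned above boils down to publishing your own numbers to CloudWatch and letting a scaling policy alarm on them. A minimal boto3 sketch (the namespace and metric name are invented for illustration):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one datapoint; an Auto Scaling policy can then alarm on this metric.
cloudwatch.put_metric_data(
    Namespace="MyShop/Frontend",             # placeholder namespace
    MetricData=[{
        "MetricName": "ActiveUserSessions",  # placeholder metric
        "Value": 137,
        "Unit": "Count",
    }],
)

Publishing ahead of an expected spike is what makes the scaling "predictive" rather than purely reactive.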
You can in fact use exactly the same naming scheme as you already have. The / character is perfectly valid in an S3 object name, and means nothing more than any other character like . or q. | I'm working on a website which is starting to generate a large volume of user-uploaded photos, which are then converted into multiple thumbnails of different sizes and stored. So far, these have been stored locally, but I would like to start storing and serving them via Amazon S3. I've read Amazon's bucket and file naming rules, which are clear, but I am wondering if there are other practical best practices for future maintainability. Until now, I've been doing this: user with GUID 31928 uploads image.jpg at 12-01-15 15:38:44; thumbnail "small" gets stored as /s/28/19/3/31928/120115153844.jpg, where the path is derived from the GUID and the image filename from the timestamp. This distributes files without creating massive folders, keeps everything sufficiently unique, and makes it possible for images to be matched against a GUID even manually. It's worked well so far. With S3, I'll probably be serving these images from a single bucket, but as the bucket cannot contain sub-folders, I'm curious as to how other people are storing large volumes of images. For example: hash (2fkoer983RoerWokfw.jpg), guid_hash (31928_2fkoer983RoerWokfw.jpg), guid_size_hash (31928_s_2fkoer983RoerWokfw.jpg), or something else? Am I over-thinking this? Any experience would be appreciated, thanks. | Sensible filenaming in Amazon S3
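To make the point concrete, the existing local path can be reused verbatim as the S3 key; a quick boto3 sketch, where the bucket name and local file path are placeholders:

import boto3

s3 = boto3.client("s3")

# The slashes are just characters in the key; the S3 console merely
# displays them as if they were folders.
s3.upload_file(
    Filename="/tmp/120115153844.jpg",          # placeholder local thumbnail
    Bucket="my-photo-bucket",                  # placeholder bucket
    Key="s/28/19/3/31928/120115153844.jpg",    # same scheme as in the question
)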
Use two Linux machines as VPN gateways, one in each VPC, and configure an IPsec VPN between them. That's all you need. | I am looking for a way to communicate between 2 VPCs in AWS without the use of VPN connections to and from a certain company (outside AWS), so that the traffic does not pass through the company's gateway. Simply said, I want to access an EC2 instance in one VPC from another VPC (both in AWS) without leaving the Amazon network (not going out on the internet, not even encrypted). Basically what I want to do is have one VPC acting as a "proxy" (let's call it PROX) and one acting as a "target" (called TARG). I want to connect a company through VPN to the PROX and, inside the PROX, route the requests to the TARG. Is this achievable? I would go for a traditional public-private single VPC, but I was asked to look into the previously described "architecture". | How to setup VPC to VPC connection without VPN?
If you want to automate the process, use the AWS SDK. For example, with the AWS PHP SDK:
use Aws\Common\Aws;
$aws = Aws::factory('/path/to/your/config.php');
$s3 = $aws->get('S3');
$s3->putObject(array(
'Bucket' => 'your-bucket-name',
'Key' => 'your-object-key',
'SourceFile' => '/path/to/your/file.ext'
));
More details: http://blogs.aws.amazon.com/php/post/Tx9BDFNDYYU4VF/Transferring-Files-To-and-From-Amazon-S3 and http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html | I have about 15 GB of data in 5 files that I need to transfer to an Amazon S3 bucket. They are currently hosted on a remote server that I have no scripting or shell access to; I can only download them via an HTTP link. How can I transfer these files to my Amazon S3 bucket without first having to download them to my local machine and then re-upload them to S3? | How to transfer files from a remote server to my Amazon S3 instance?
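The PHP snippet above uploads a file that is already on disk. If the goal is to avoid saving the download locally at all, one common pattern (not part of the original answer) is to stream the HTTP response body straight into S3. A hedged Python sketch, with the URL, bucket, and key as placeholders:

import boto3
import requests

s3 = boto3.client("s3")

url = "https://remote.example.com/files/archive1.bin"  # placeholder source URL
bucket = "my-target-bucket"                            # placeholder bucket
key = "imports/archive1.bin"                           # placeholder key

# Stream the response; upload_fileobj reads it in chunks, so the whole file
# never has to land on local disk (it still flows through this machine).
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    resp.raw.decode_content = True
    s3.upload_fileobj(resp.raw, bucket, key)

Running this on an EC2 instance in the same region as the bucket keeps the second hop fast and free of transfer charges.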
Your suggestion sounds like a good schema as long as there are enough users. As you know, Amazon automatically spreads your table over partitions for reliability and performance. I'm not 100% sure, but I think Query requests can only be worked on a single partition at a time. This matters because the provisioned throughput is evenly split over these partitions, meaning that frequent queries on the same item will only use a fraction of what you provisioned. | We are collecting time series events for users and need to be able to query over a time range. An example row might be: { user_id: 100, timestamp: 1352293487, location: "UK", rating: 5 }. We need to be able to query over a time range based on the timestamp for a particular user. Would I be correct in thinking we could utilise DynamoDB's Query operation and set user_id as the primary key and timestamp as the range key in order to efficiently query between two timestamp values? | Storing time series data in DynamoDB
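For reference, the query the asker describes (hash key user_id, range key timestamp) looks roughly like this in boto3; the table name is a placeholder:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("user_events")  # placeholder table

# All events for user 100 between two epoch timestamps, returned oldest first.
response = table.query(
    KeyConditionExpression=(
        Key("user_id").eq(100) & Key("timestamp").between(1352293487, 1352379887)
    ),
)
items = response["Items"]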
There was a bug in the signature version 3 HTTPS signer that omitted the session token from the request. A new release was published today (1.7.1) that addresses this issue. | I'm getting the error "The security token included in the request is invalid" when trying to get the hosted zones list from Route53 using the AWS Ruby SDK, even though I'm running my script from an instance that has a "full privileges" IAM role.
Here is the full trace:
/usr/lib/ruby/gems/1.8/gems/aws-sdk-1.6.9/lib/aws/core/client.rb:318:in `return_or_raise': The security token included in the request is invalid (AWS::Route53::Errors::InvalidClientTokenId)
from /usr/lib/ruby/gems/1.8/gems/aws-sdk-1.6.9/lib/aws/core/client.rb:419:in `client_request'
from (eval):3:in `list_hosted_zones'
from test.rb:7
And the test.rb file:
require 'rubygems'
require 'aws-sdk'
AWS.config()
r53 = AWS::Route53.new
resp = r53.client.list_hosted_zones
resp[:hosted_zones].each do |zone|
puts zone
end
It seems that the issue is related to the Route53 SDK methods, because I've tested other code to manage EC2 and Elastic Load Balancers with the same SDK and it works just fine. What do you think? Did I miss something? Thank you! | Getting error "The security token included in the request is invalid" when using AWS ruby sdk
Install AWS with the Composer package manager for PHP; it's a clear procedure and normally works out of the box. You will also get the benefit of being able to install other PHP-based packages easily, too. | So, I think I'm doing everything correctly here: I downloaded the newest AWS PHP SDK, then I copied config-sample.inc.php to config.inc.php and filled out the keys, etc. In my application, I require_once("../AWS/sdk.class.php") and I'm getting an error: "PHP Fatal error: Class 'CFCredentials' not found in /Applications/MAMP/htdocs/AWS/config.inc.php on line 50". I pass the sdk_compatibility_test.php test, so can someone help me figure out what the issue here is? I can't figure out how sdk.class.php gets access to the CFCredentials class, since it never includes/requires "utilities/utilities.class.php", but I imagine the devs at Amazon have it linked up somehow. I think I'm just missing something. Thanks! | AWS S3 PHP Fatal error: Class 'CFCredentials' not found
Check out Amazon's new Data Pipeline service (http://aws.amazon.com/datapipeline/). I'm successfully using it to do something very similar. | I want to dump all my DynamoDB tables' data to S3 files every hour. What is the best way to schedule an Elastic MapReduce job flow? Can I do it with Amazon Simple Workflow Service? | How to schedule an Elastic MapReduce Job Flow
You can deploy your WAR file using Elastic Beanstalk, which is a Platform as a Service offering from AWS. It helps you create a stack, i.e. a compute platform with all required software (app server, RDS, web server, installed software, etc.). If you already have an AWS environment present or an instance running, you can use AWS CodeDeploy. | How do I deploy a WAR file to AWS (Amazon Web Services)? Please kindly provide me some guidance on how to do this; it's an urgent one. Windows 7 (64-bit). Thanks & regards, Muthu | Deploy war to AWS
To get SSL to work between the load balancer and the Elastic Beanstalk instance I needed several things: (1) configure the EC2 load balancer to forward port 443 to port 443 (on SSL), which I already had in the question above, and (2) configure IIS on the EC2 instance like any other site with SSL: (a) install the SSL cert on the EC2 instance in IIS, and (b) add an https/443 binding with the SSL cert. The problem was I was expecting #2 for free. On Windows Azure this is pretty much free when you configure certificates on your instance, but as of now this is not the case on Amazon Elastic Beanstalk for Windows. I also would expect #2 to be scriptable so I could scale up or down without having to do #2 manually. I was looking for some easy way to tie in PowerShell scripts on my EB instances, but they apparently don't have this feature either. My final solution was to create a custom VM image (AMI) with the SSL cert installed and the https binding already added. If I do this I can deploy the Elastic Beanstalk image with my SSL setup already in place, which then allows me to scale up or down without any configuration. | I have a Windows/.NET Elastic Beanstalk instance with an SSL cert set up on the load balancer. By default this creates a port forwarding from https/443 to http/80. I would like to have 443/https on the load balancer forward to 443/https on the Beanstalk instance. I was trying to do what is documented here: I reconfigured the corresponding EC2 instance (EC2 --> Load Balancers --> Listeners) so that HTTPS forwards to HTTPS configured with my SSL cert. The problem is that when I try to make an HTTPS request after that, it just times out. It seems like the Elastic Beanstalk instance doesn't like me modifying the EC2 listeners. Any ideas? | Can aws elastic load balancer forward port 443 to port 443 for an elastic beanstalk instance?
From your code snippet, you don't get any more updates because you have fallen through your loop, since upload.isDone() is true. If you add
System.out.println("upload prog " + upload.getProgress().getPercentTransfered() + " state " + upload.getState());
after the end of your loop, you will see the Completed message. You probably see multiple 100% messages because the TransferManager is waiting for the upload to complete. | I'm using Amazon's provided high-level API to upload files to Amazon S3. I use a lightly-modified version of the example provided:
public Upload uploadFile() {
transferManager = new TransferManager(new BasicAWSCredentials("KEY", "SECRETKEY"));
upload = transferManager.upload(existingBucketName, keyName, new File(filePath));
return upload;
}
Meanwhile, from another thread, I'm measuring its progress:
while (!upload.isDone()) {
System.out.println("upload prog " + upload.getProgress().getPercentTransfered() + " state " + upload.getState());
Thread.sleep(200);
}
The progress reporting itself seems to be working well, as I'm getting progress that makes sense. However, once it reaches 100%, the upload stalls. It looks a lot like the call to isDone() is blocking, as it simply will not update.
OUTPUT:
upload prog 91.9009559586608 state InProgress
upload prog 95.31523296022095 state InProgress
upload prog 99.01403304524446 state InProgress
upload prog 100.0 state InProgress
upload prog 100.0 state InProgressThe percentage, once it gets to 100%, will not update again. It appears to update twice and then hang.If I check for the existence of the file externally, using Cyberduck, it appears to have uploaded the file successfully. | Amazon S3 Upload hangs on 100% |
The only way to rename an object is to copy the old object to a new object, and set the new name on the new copy. The REST call you need is detailed here. Syntax:
PUT /destinationObject HTTP/1.1
Host: destinationBucket.s3.amazonaws.com
x-amz-copy-source: /source_bucket/sourceObject
x-amz-metadata-directive: metadata_directive
x-amz-copy-source-if-match: etag
x-amz-copy-source-if-none-match: etag
x-amz-copy-source-if-unmodified-since: time_stamp
x-amz-copy-source-if-modified-since: time_stamp
<request metadata>
Authorization: signatureValue
Date: date
This implementation of the PUT operation creates a copy of an object
that is already stored in Amazon S3. A PUT copy operation is the same
as performing a GET and then a PUT. Adding the request header,
x-amz-copy-source, makes the PUT operation copy the source object into
the destination bucket.
Keep in mind the existing ACLs, however:
When copying an object, you can preserve most of the metadata
(default) or specify new metadata. However, the ACL is not preserved
and is set to private for the user making the request. | How can I change the key/name of an Amazon S3 object using REST or SOAP? | How can I change key/name of Amazon S3 object using REST or SOAP?
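The same copy-then-delete pattern, sketched with boto3 rather than raw REST (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# "Rename" = copy the object to the new key, then delete the old key.
s3.copy_object(
    Bucket=bucket,
    CopySource={"Bucket": bucket, "Key": "old/name.txt"},
    Key="new/name.txt",
)
s3.delete_object(Bucket=bucket, Key="old/name.txt")

As the answer notes, the copy gets a private ACL by default, so reapply any ACL or metadata you need on the new object.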
You can compute the messages-per-second rate as the next code sample shows:
from time import time, sleep

MAX_PER_SEC = 70
time_started = time()
messages_sent = 0.0
for user in _10000_users:
    msg = generate_message(user)
    # Rate condition: wait while sending one more message would exceed the cap
    while messages_sent / max(time() - time_started, 0.001) >= MAX_PER_SEC:
        sleep(0.1)
    ses.send_message(msg)
    messages_sent += 1
| I want to send 10000s of individual mails at once, possibly from a large for loop:
for user in _10000_users:
msg = generate_message( user)
if(ses.can_send_more_messages == False):
sleep( 0.1) #to throttle ourselves
ses.send_message( msg)
But I am worried about the 70 mails/second throttle on our SES account. So I want my program to respect this limit by inspecting the queue, waiting if it has exceeded the limit, and sending again only once it is clear. I am using boto in Python to interface with SQS, and I expect only a single machine to send messages, although in future multiple machines may send messages in parallel, ignorant of each other. How can I rate-limit emails to 70 per second by inspecting the queue or using a Python-specific technique? | Rate-limiting my program from sending too many SES emails via SQS
Be sure you know where your performance bottleneck is. If both instances are in the same Availability Zone, network latency should not be the largest performance issue. In fact, if you have instances that are at least large, network latency should be a non-issue due to the better NIC. To know for sure, measure your network utilization with a monitoring tool. If any of your working set (MongoDB documents that are used with any frequency) cannot fit in the RAM of the instance, that means you are touching EBS, and EBS is very, very slow compared to what MongoDB needs. I measured a single EBS volume using iozone recently and found it to be half as fast as my laptop's rotational hard drive. You can improve EBS performance substantially by striping multiple EBS volumes into a software RAID configuration. The bottom line when running MongoDB on AWS is that you need enough RAM to hold the MongoDB documents that you will touch with any frequency. | I'm new to AWS and I configured 2 EC2 instances: one for my MongoDB database and another one for my application. I'm using pymongo to make the connection, but sending data between the instances each time takes too much time. I would like to know if it's possible to have the MongoDB instance act as localhost for the application instance, using groups or something else, to get better performance. Or is it better to put the database on the same instance as my application and get more EBS? | AWS MongoDB EC2 instance as localhost with EC2 application instance
If it works with one JRE and not another, the problem is likely that you don't have the correct CA cert installed in your 1.7 JRE keystore. See this post for details: http://welocally.com/?p=1358. You can also just connect to the http:// version of the Dynamo endpoint and so avoid SSL altogether (and get a nice performance boost as a result). | There is a simple scan call going to DynamoDB from my code which works fine in Java 6 and not in Java 7. The Amazon forums mention this problem and recommend disabling certificate verification, which seems risky to me. Does anyone know what changed between Java 6 & 7 to cause this issue?
3-Jul-2012 3:51:27 PM com.amazonaws.http.AmazonHttpClient executeHelper
WARNING: Unable to execute HTTP request: peer not authenticated | AWS Java client does not authenticate dynamo endpoint on Java 7 |
Heroku runs in the US-East region, so as long as you set up there you shouldn't incur any transfer costs between dynos and other services. There are more details on the https://devcenter.heroku.com/articles/amazon_rds page; it relates to RDS, but a lot of it is general Amazon material like security groups, etc. | I'm considering using Heroku for a NodeJS app, and I was wondering if their dynos enjoy the free internal data transfer inside the AWS network. I want to use DynamoDB, ElastiCache, RDS, SQS and a bunch of other AWS offerings; if I can connect to all of them from Heroku, which region and AZ do I need to set them up in to talk to them for free from the Heroku dynos? | Do Heroku Dynos enjoy free data transfer inside the AWS network?
Starting out with a new project and not really knowing what to expect from the usage, I'd say the better option is to go with SimpleDB. It doesn't sound like your usage is going to be very high, so SimpleDB should be able to handle that with no problem. The real power of DynamoDB comes in when you really have a lot of load, and you don't seem to fall into that category. If you design your application correctly, switching between SimpleDB and DynamoDB should be a simple task if you decide at some point that SimpleDB is not working out. I do these kinds of switches all the time with other components in my software. Since both databases are NoSQL you shouldn't have a problem converting between the two; just make sure that any features you use in SimpleDB are available in DynamoDB, and design your schema for both, since DynamoDB has stricter requirements around indexes, so the two need to stay compatible. That being said, plenty of people have been using SimpleDB for their applications and I don't expect you would see any performance problems unless your product really takes off, at which time you can invest resources in moving to DynamoDB. Aside from all that, there is the pricing, as you already mentioned: SimpleDB is the obvious solution for your use case. | We are building a mobile app with a Rails CMS to manage it. What does our app look like? Every admin user of the app can set up one private channel with a very small amount of data, about 50 short strings. Users can then download the app, register a few different channels, and fetch the data from the server to their devices. The data will be stored locally and will not be fetched again unless the admin user updates the data (but we assume that won't happen often). Every channel will be available to no more than 500 devices. The users can contribute to the channel, but that data will be stored on S3 and not in the database. Two important points: most of the channels will be active for 5 months and for roughly 500 users, but most of the activity will happen over the same couple of days; and although every channel is for a small number of users (500), we hope :) to get to hundreds of thousands of admin users. Building the CMS with Rails, we saw that using SimpleDB is more straightforward than using DynamoDB. But, as we are not server experts, we saw the limitations of SimpleDB and we don't know if SimpleDB can handle the amount of data transfer that we will have (if our app succeeds). Another important point is that DynamoDB costs are much higher and not dependent on use, while SimpleDB will be much cheaper at the beginning. The questions are: can SimpleDB meet our needs, and could we migrate later to DynamoDB if our service grows in the future? | Amazon SimpleDB or DynamoDB
This has meanwhile been addressed in the AWS team response to the identical question asked in the AWS forum ("EC2 reports AMI: Unavailable"):
This is an AWS owned AMI that is no longer publicly available as it is
deprecated. This will not affect your currently running instance.
Additionally, if you create an EBS AMI of your running instance you
will create a point in time backup of your current configuration --
which you can use to launch duplicate instances from.
The current AWS provided Windows Server 2008 32-bit AMI is: ami-541dcf3d | I'm running two EC2 instances (Linux and Windows) on AWS, which were initiated from AMIs provided by Amazon. Everything works fine, but for a couple of weeks now I have noticed that on the Windows instance, under the "Description" tab, it says "AMI: Unavailable (ami-f0c9ff84)". I have not rebooted that EC2 instance for more than a month, and I'm curious whether everything will work seamlessly after a reboot. Is "AMI unavailable" a serious problem? Why is the AMI no longer available? I'm not sure if I need to take some action on my side (e.g. take an EBS snapshot in case of a failed reboot). Should I be afraid that restarting the instance will not work while it says "AMI unavailable"? Thanks in advance! | What does (EC2) 'AMI: Unavailable' mean and how should I handle it?
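The forum reply above suggests creating an EBS AMI of the running instance as a point-in-time backup; a short boto3 sketch of that step (the instance ID and image name are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Creates an AMI (plus EBS snapshots) from the running instance so it can be
# relaunched even though the original public AMI has been deprecated.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder
    Name="windows-backup-ami",         # placeholder
    NoReboot=True,                     # avoid stopping the instance during the snapshot
)
print(response["ImageId"])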
Answering my own question here! The code is wrong. If you're using the aggregate library to reduce, your output does not follow the usual key-value pair. It requires a "prefix".
if int(list[0][11:13])>=17 and int(list[0][11:13])<=19:
#This is the correct way of printing for aggregate library
#Print all as a string.
print "LongValueSum:" + "Express" + "\t" + list[3]The other "prefixes" available are: DoubleValueSum, LongValueMax, LongValueMin, StringValueMax, StringValueMin, UniqValueCount, ValueHistogram. For more info, look herehttp://hadoop.apache.org/common/docs/r0.15.2/api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html.Yes, if you want to do more than just the basic sum, min, max or count, you need to write your own reducer.I do not yet have the answer.ShareFolloweditedFeb 16, 2012 at 5:35answeredFeb 15, 2012 at 1:30DeyangDeyang51077 silver badges2020 bronze badges1@Deyang Hi, I am new with hadoop -python. I have also similar work to do, but I have multiple csv files in hadoop directory, I have written script which is running properly on local machine. When I run it on cluster, It gives an error as "Streaming Command Failed". Can you suggest how can read all the csv files from hdfs directory.–MegaBytesApr 22, 2015 at 6:17Add a comment| | I have a huge CSV file I would like to process using Hadoop MapReduce on Amazon EMR (python).The file has 7 fields, however, I am only looking at thedateandquantityfield."date" "receiptId" "productId" "quantity" "price" "posId" "cashierId"Firstly, my mapper.pyimport sys
def main(argv):
line = sys.stdin.readline()
try:
while line:
list = line.split('\t')
#If date meets criteria, add quantity to express key
if int(list[0][11:13])>=17 and int(list[0][11:13])<=19:
print '%s\t%s' % ("Express", int(list[3]))
#Else, add quantity to non-express key
else:
print '%s\t%s' % ("Non-express", int(list[3]))
line = sys.stdin.readline()
except "end of file":
return None
if __name__ == "__main__":
    main(sys.argv)
For the reducer, I will be using the streaming command: aggregate.
Questions: Is my code right? I ran it in Amazon EMR but I got an empty output. My end result should be: express, XXX and non-express, YYY. Can I have it do a divide operation before returning the result, just the result of XXX/YYY, and where should I put this code, a reducer? Also, this is a huge CSV file, so will mapping break it up into a few partitions, or do I need to explicitly call a FileSplit? If so, how do I do that? | Using Hadoop in python to process a large csv file
Avoid in-memory or temporary tables. Since MySQL does not write in-memory or temporary tables to disk, using these MySQL features will cause problems when trying to use the RDS Point In Time Restore feature. This operation relies on being able to recreate the DB instance by playing back the operations executed on the database. If during this playback some of the operations rely on information that is not present (since it was never committed to disk), MySQL cannot start up. When MySQL fails to start during this RDS restore operation, RDS sets the DB instance status to incompatible-restore. Note: due to the nature of how MySQL Read Replicas are created, using in-memory or temporary tables can also prevent successful creation of RDS MySQL Read Replicas. | I've got a strange issue with some MEMORY tables I'm running on RDS. I don't know if this is an issue specific to RDS, MySQL 5.1.57, or just PEBKAC on my part, but it's been a frustrating afternoon. No matter what value I give to max_heap_table_size, my MEMORY tables are always stuck at a max data length of 9360878. This has been determined using SHOW TABLE STATUS and just by inserting known amounts of data into the tables. I've tried setting that value in the RDS parameter group (I've tried rebooting even though I set the method to immediate) and I've tried to set the value at the query line using SET. I've tried every value from 16 megabytes to 16 gigabytes and it has no effect on max_data_length. I've also tried setting max_temp_table_size even though that shouldn't be in play with non-temporary MEMORY-engine tables, as I understand it. Can anyone point me in the right direction? I need the tables to be able to hold about 150M. | Amazon RDS Max_data_length on memory tables
Someone asked a question like this before; it turned out that the bucket had a global public read set on it using a bucket policy. Double check that there are no other ACLs, etc. that allow access. Also, it looks like there is no 'listing' access; is that what you want? Can you call GET on a bucket and get a listing of all files in it? (You should not be able to do this.) Don't know if this will help, but these examples use "StringLike" in the policy: http://www.techtricky.com/amazon-s3-how-to-restrict-user-access-to-specific-folder-or-bucket/ and https://forums.aws.amazon.com/search.jspa?objID=f76&q=stringlike&x=0&y=0 | I'm writing an app where I have a set of users, and each user will have a number of files associated with them in a 'directory' within an S3 bucket. Users will be authenticating using Amazon's STS, getting temporary security credentials that should allow them to access resources they own while not allowing them to access resources they do not (think "home" directories). Assuming the user already exists in the system (and is authenticated), and their file bucket is created (without a specified policy or ACL) using the naming scheme << my app's bucket >>/<< user's identifier >>/, then during a request for a user accessing a file, we grant temporary security credentials as follows using boto: get_federation_token(<< user's identifier >>, duration, policy=user_policy), where user_policy is:
user_policy = (r'{"Statement": [{"Effect":"Allow",
"Action":[
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:DeleteObject",
"s3:DeleteObjectVersion"],
"Resource":"arn:aws:s3:::/%s/*"}]}' % (<< user's identifier >>))I had thought I understood policies, but apparently I'm missing something. Using the above scheme, I'm able to get/put resources under the user's directory, but also the directories/resources belonging to other users. For the life of me, I can't get access properly segregated. I've played with bucket policies as well, but that didn't bear fruit.Any direction would be appreciated.Note: I'm stuck using STS, as we'll likely have too many users to create/use IAM users. | How to use AWS S3 policies to enforce ownership of resources for federated users? |
It appears the script is run as root. So if you have to communicate via SSH, make sure that the root account is able to make the connections. | I'm trying to launch an EC2 instance with a script in the user-data. The script contains an rsync call to a remote server, but this fails. I believe this is because I need to set up the user that runs the user-data script to be able to connect to the remote server. What user runs the user-data script? The stderr that I have logged is:
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [Receiver=3.0.7] | What user runs the user-data script when starting up a new ubuntu EC2 instance? |
Figured it out. This is the code I ended up using:
<?php
class AmazonMusicSearch extends AmazonECS {
protected $asin;
protected $detailPageUrl;
protected $ecs;
function __get($name) {
return $this->$name;
}
function __construct() {
$this->ecs = new parent(AZ_APP_ID, AZ_APP_SECRET, 'com', AZ_ASSOCIATE_TAG);
}
function searchByAsin($asin) {
$search = $this->ecs->responseGroup('Small')->category('Music')->search($asin);
$this->asin = $asin;
if(isset($search['Items']['Item']['DetailPageURL'])) {
$this->detailPageUrl = $search['Items']['Item']['DetailPageURL'];
} elseif(isset($search['Items']['Item'][0]['DetailPageURL'])) {
$this->detailPageUrl = $search['Items']['Item'][0]['DetailPageURL'];
} else {
return false;
}
return $this;
}
function detailPageFromAsin($asin) {
return $this->searchByAsin($asin)->detailPageUrl;
}
}
?> | I am using the PHP SOAP library to connect to Amazon and retrieve product cover art from ASINs. That much I've accomplished, but according to the Agreement (at least as far as I can tell; IANAL), any info I get from the API must be linked to its respective Product Detail Page on the Amazon retail site. I've browsed through the docs, but for the life of me I cannot figure out what method, etc. I need to use, short of constructing the URL manually (which is potentially unstable). Any insight? | Amazon Product Advertising API: Get Product Detail URL from ASIN
Try out the s3cmd sync option, something like the following:
s3cmd sync /mydir s3://mybucker/
sync does a version check on each file before uploading and only uploads files that have changed locally (see http://s3tools.org/s3cmd-sync). | s3cmd put --recursive /mydir s3://mybucker/
This is the way I currently upload my directory to S3. I run this every night to back up my stuff. However, it's getting too big. I only want to upload files that were modified instead of every file. How can I do that? | How can I use s3cmd to move only the files that changed to Amazon S3?
Install LWP::Protocol::https. See "What's the easiest way to install a missing Perl module?" from the SO Perl FAQ, "How to install CPAN modules" on CPAN, and perlmodinstall in the Perl documentation. | I'm running into an unexpected amount of difficulty trying to use ses-verify-email-address. I am using Ubuntu Hardy on AWS with Perl 5.8.8. After copying the Perl scripts and creating a key file, I got a "command not found" error. Then I installed the modules mentioned in the SES README: Digest::SHA, URI::Escape, Bundle::LWP, MIME::Base64, Crypt::SSLeay and XML::LibXML. It's not obvious that these installed correctly, and now when I run ses-verify-email-address.pl, I get the message "LWP will support https URLs if the LWP::Protocol::https module is installed." I've been using Python and know nothing about Perl. | Amazon SES setup problems
Windows Server 2008 R2 only supports 64-bit architecture; see the system requirements: http://www.microsoft.com/windowsserver2008/en/us/system-requirements.aspx. Amazon does not have a small instance that is 64-bit, which is why you need to use large or above. They also do not allow you to launch R2 on a micro instance even though micro instances can be 64-bit, probably for performance reasons (a commenter notes they have been able to launch R2 on micro as long as it doesn't include SQL Server). As for your second question, see: https://serverfault.com/questions/55355/whats-the-difference-between-windows-server-2008-2008-sp2-and-2008-r2 | A new Windows 2k8 R2 image is now available on Amazon. I have two questions about this: (1) it doesn't appear that 32-bit (Small) images are available; is this correct? (2) From a programmatic point of view, what advantages will R2 give me over the standard Windows image? | Amazon EC2 with Windows 2008 R2 [closed]
Have you looked at BasicAWSCredentials?
BasicAWSCredentials credentials = new BasicAWSCredentials(accessKeyId, secretKey);
AmazonSimpleDB mDB = new AmazonSimpleDBClient(credentials);
You can load accessKeyId and secretKey through the use of Properties. | I'm using Java.
Instead of using
AmazonSimpleDB sdb = new AmazonSimpleDBClient(new PropertiesCredentials(
new File("/AwsCredentials.properties")));
is there any way to store the credential information (the access key and secret key) in the program, something like
AmazonSimpleDB sdb = new AmazonSimpleDBClient("accesskey","secretkey");
It seems like this function does not exist. | Simple DB accessing
The latency should be similar to the latency of two computers in the same LAN. Just make sure that you are using the private IPs when connecting the two instances and not their public ones. A follow-up in the comments puts this in the range of 1 millisecond; you can verify it yourself by starting two instances and pinging one from the other. | What's the expected latency for a simple connection between a pair of Amazon EC2 instances in the same region? Thanks! | Amazon EC2 latency
The storage metrics are enabled by default for all customers, and they are reported once per day for all S3 buckets at no additional cost. However, the request metrics are not enabled by default, since they incur charges at the same rate as Amazon CloudWatch custom metrics. Because request metrics involve costs, you have to explicitly enable them on your S3 buckets; thus, there is currently no universal flag to enable the request-metrics feature on all S3 buckets. You also mentioned that you were able to view the request metrics after applying the filter. It's important to note that the filter serves a dual purpose: it not only filters the data but also enables the request metrics for your bucket. Based on the provided image, it appears that you created the filter to view request metrics for the entire bucket. However, you can also limit the filter scope using custom filter types such as Prefix, Object tags, and Access points.
Simple enough in the end, I just used sudo java -Dserver.port=80 -jar .jar & No setting up the code differently, just setting it to use port 80. Also found out I had to use sudo, otherwise you get a permission denied when allocating to the root.ShareFollowansweredMar 15, 2023 at 8:28Hywel GriffithsHywel Griffiths33511 gold badge55 silver badges1919 bronze badges12Yes, that will work, but keep in mind that that should only be used for internal/proof-of-concept/throwaway setup; you dont wan't your app to run as root only because you want to bind to port 80–Dusan BajicMar 15, 2023 at 8:52Add a comment| | I'm having trouble connecting Route 53 to an EC2 with an app running on the 8080 port. Currently, I can hit and use the app usinghttp://<EC2 name>:8080/<endpoint path>, but I obviously want to hit it with a domain name instead.I created a host zone .com, created a record, used record type A and set the ip as the ipv 4 provided by the EC2. I then copied the four ns- route traffic names across to the nameservers in registered domain. Unfortunately, when I hitwww.<domain name>/<endpoint path>I get 'did not match any documents'.Is this something I'm missing when setting up the route 53 or is it an issue to do with the port?Any help would be gratefully received. | Connecting Route 53 to EC2 and port |
Let's understand what the noncurrent version and current version are. Whenever a versioned bucket object is deleted, the current object version becomes noncurrent, and the delete marker becomes the current version. What is an expired delete marker? A delete marker with zero noncurrent versions is referred to as an expired object delete marker. So options 4 and 5 will serve your purpose: option 4 permanently deletes noncurrent objects, which makes the delete marker expired since there will be no noncurrent versions, and option 5 deletes expired delete markers. Note: lifecycle rule policies take time to take effect, as objects are queued and processed asynchronously. | I have an S3 bucket which is version-enabled. I want to permanently delete all the delete-marked version objects from the S3 bucket using a lifecycle rule. Which of the below options do we need to choose in order to permanently delete the versions of the objects? Also, the delete-marked objects may be current versions as well. | Permanently delete all delete marked objects in versioned S3 bucket
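The same pair of rules can be applied outside the console as well; a hedged boto3 sketch, with the bucket name and retention period as placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "purge-deleted-objects",
            "Status": "Enabled",
            "Filter": {},  # whole bucket
            # Permanently remove noncurrent versions after 1 day...
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            # ...and clean up the expired delete markers left behind.
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        }],
    },
)

As noted above, expect a delay before the queued expirations actually remove the objects.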
You can configure a Procfile to run multiple processes, like the main Django app, Celery, and Celery Beat, in parallel, as documented here:
web: <command to start your django app>
celery: celery -A <path_to_celery_app> worker
celery_beat: celery -A <path_to_celery_app> beat
(Place the Procfile in the root directory of your app, typically next to manage.py.) | In my Django web app I am running a calculation which takes 2-10 minutes, and the AWS server times out with a 504 server error. It seems that the best way to fix this problem is to implement a worker and offload the calculations. Through some research, it seems that Celery with a Redis server (maybe AWS SQS instead?) is best for Django. However, I only see tutorials for instances that are run locally: the Redis server is hosted by Railway and Celery is run in a separate terminal from Django (on the same local machine). I'm curious if/where Celery and Redis should be running on AWS. This answer says "you should try to run celery as a daemon in the background. AWS Elastic Beanstalk uses supervisord already to run some daemon processes. So you can leverage that to run celeryd and avoid creating a custom AMI for this. It works nicely for me." But don't the Celery and Redis servers still need to run somehow? Where does the Celery server run?
How do I start it?
How can I leverage supervisord to run daemon processes? The documentation isn't helping me very much with AWS integration. | Where to run Celery on AWS
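For completeness, the Celery app that those Procfile commands start is plain Python configuration; a minimal sketch, assuming a Redis broker URL such as the Railway-hosted one mentioned in the question (the URL and project name are placeholders):

from celery import Celery

# The broker URL points at whatever Redis you run (ElastiCache,
# a Railway-hosted Redis, etc.); it is a placeholder here.
app = Celery(
    "myproject",
    broker="redis://default:[email protected]:6379/0",
    backend="redis://default:[email protected]:6379/0",
)

@app.task
def long_calculation(x, y):
    # stand-in for the 2-10 minute calculation from the question
    return x + y

The web process then calls long_calculation.delay(...) and returns immediately, which avoids the 504 timeout.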
Would that still incur costs, or do you only pay for "non-rejected" requests? You do not pay for rejected requests. I have worked with the developers to confirm that the code that triggers the charges executes only after the request gets past the access controls. | I'm curious about using API Gateway resource policies to only allow a subset of IPs to access it. I am wondering, if someone outside of this IP range were to spam the endpoint, would that still incur costs, or do you only pay for "non-rejected" requests? Thanks | Resource policies and cost on AWS API Gateway
When creating the RDS instance you have the option to have RDS create an initial database. If you do not provide a value, RDS does not create a database at all. You'll need to connect to the instance and issue the command to create a database yourself, e.g. CREATE DATABASE mydb; | I created a MySQL database in AWS RDS, and this is the config setup:
DB instance ID
database-1
Engine version
8.0.28
DB name
-
So as you can see, there isn't a DB name. So now, when I go to create a table via my code, or even MySQL Workbench, since no DB name is included, it fails with: Database connection failed: Error: ER_BAD_DB_ERROR: Unknown database 'database-1'. I am trying this in code as well:
var mysqlconn = mysql.createConnection({
host: 'database-1.xxx.com,
user: 'username',
password: 'pw,
port: '3306',
database: 'database-1'
});
let createTable = "CREATE TABLE table_name (name VARCHAR(256),email VARCHAR(256),number INT(32))"
but it gives the above error. Any advice / help is much appreciated. | Database connection failed: Error: ER_BAD_DB_ERROR: Unknown database 'database-1'
Most likely amplify was successfully installed in ~/.amplify/bin (where ~ is your home directory) but not added to path. To fixCheck for successful installation ~/.amplify/binCheck $PATH to see if you see amplify directory (shouldnt be there)Add ~/.amplify/bin to your path to use it for e.g -> export PATH=$PATH:~/.amplify/binShareFolloweditedMay 2, 2023 at 7:49answeredMay 2, 2023 at 7:43Sudhir SrinivasanSudhir Srinivasan9344 bronze badgesAdd a comment| | I'm trying to get my amplify backend connected to my Next JS application. I've globally installed Amplify CLI via NPM and via CURL. But, despite it showing as successfully installed, it doesn't recognize the "amplify" command. Note: I'm using Mac OS.Commands Tried:npm install -g @aws-amplify/clicurl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELLBoth show succesful install results like below:Successful InstallBut...Whenever I run "amplify" as a command or "amplify configure" it doesn't recognize it in the terminal. What could be the cause of this? What am I doing wrong?Unrecognizable CommandOne thing I’m noticing is it does not indicate in terminal that it’s configured within my $Path. If that’s the root cause of the issue, how come the CURL command I ran doesn’t add it to my zshrc file?Thank you in advance! | AWS Amplify CLI | "Command Not Found" after Successful Install via NPM/CURL |
Top waits, Top SQL, etc. are all different dimensions that you can use to understand what's contributing to database load. Dimensions are not comparable with each other. It sounds like you want to diagnose what's contributing to the PostgreSQL "CPU" wait event; you can find more information on this topic in the official RDS docs on tuning with wait events. If the issue turns out to be suboptimal queries, then you can find the worst performers in the Top SQL tab (dimension) of Performance Insights. | Hello, I'm looking at Performance Insights in AWS RDS (Postgres 10). I slice by "Waits". When I look at Top databases, Top Applications, Top session types and Top Users, they are all actually higher than the SQL queries themselves. From these metrics, how do you tell what is bottlenecking the CPU? | How to debug high CPU AWS RDS Postgres?
This sounds like your site needs to route requests through your index page. This would cause an HTTP 404 error, which could be masked by CloudFront as the 403 error you're getting here. This can happen, for example, in React apps: if S3 receives a request directly for /example, it goes and looks in the bucket for an 'example' file which doesn't exist. You can handle this by redirecting your 404 errors to your index page, where they can be properly routed; in your S3 static website hosting settings, set the Error Document to index.html. | I am using an AWS S3 bucket, CloudFront, and Route53 (for details of how I have my setup, here is a link to an answer I wrote telling people how to set this all up).
If going to www.<MyWebsite>.com/about
I get this:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>PD94JP7DNG6TPDQF</RequestId>
<HostId>
qc0Fvl3fiS7igVeBEYfwvX19so0dH3hmIWNRBOcveK+j4DMmoPZQsxmbeA0XhFisy1BQvxmmrj8=
</HostId>
</Error>
But if I go to www.<MyWebsite>.com and use the navigation bar of my site to go to the "about" section, I get there just fine. So AWS doesn't like me hitting the about URL directly. What do I need to do to allow any subpage to be hit? This might be a duplicate of "Receive AccessDenied when trying to access a page via the full url on my website"; if so then I will mark this as duplicate, standby. | AWS not allowing me to access subpages of my static website
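The S3-side fix from the answer (serve index.html as the error document so deep links fall back to the app's router) can also be applied with boto3, roughly like this; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Static website hosting: unknown paths (404/403) fall back to index.html,
# letting the client-side router handle /about and other subpages.
s3.put_bucket_website(
    Bucket="my-website-bucket",  # placeholder
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},
    },
)

When CloudFront sits in front of the bucket, the analogous setting is a custom error response that maps 403/404 to /index.html with a 200 status.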
Found the answer to my question. Apparently, the SageMaker environment is using an old build of XGBoost, around version 0.9. As the XGBoost team makes constant upgrades and changes to their library, AWS was unable to keep up with it. That said, I was able to run my code below by downgrading the XGBoost library in my environment from 1.7 to 0.9, and it works like a charm.
t = tarfile.open('model.tar.gz', 'r:gz')
t.extractall()
model = pkl.load(open('xgboost-model', 'rb')) | I'm new to SageMaker and I trained a classifier model with the built-in XGBoost. It saved a "Model.tar.gz" to S3. I downloaded the file because I was planning to deploy the model elsewhere, so to experiment, I started by loading the file locally. I tried this code:
import pickle as pkl
import tarfile
t = tarfile.open('model.tar.gz', 'r:gz')
t.extractall()
model = pkl.load('xgboost-model', 'rb')
But it's only giving me this error:
XGBoostError: [13:32:18] /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:922: Check failed: header == serialisation_header_:
If you are loading a serialized model (like pickle in Python) generated by older
XGBoost, please export the model by calling `Booster.save_model` from that version
first, then load it back in current version. There's a simple script for helping
the process.
So I tried using the Booster.save_model function in a SageMaker notebook, but it doesn't work, nor does pickling the trained model. I also tried this code:
model = xgb.Booster()
model.load_model('xgboost-model')
but it's giving me this error: XGBoostError: std::bad_alloc. Any help would be greatly appreciated. | How do you locally load model.tar.gz file from Sagemaker?
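A hedged sketch of the workaround described in the answer above: with an xgboost version matching the training container (around 0.90) installed, unpickle the artifact once and re-export it with Booster.save_model so newer XGBoost versions can load it. File names are examples.

import pickle
import tarfile

import xgboost as xgb

with tarfile.open("model.tar.gz", "r:gz") as t:
    t.extractall()

with open("xgboost-model", "rb") as f:       # artifact name used by the built-in algorithm
    booster = pickle.load(f)                 # works because the installed xgboost matches the trainer

booster.save_model("xgboost-model.bin")      # version-portable export, as the error message itself suggests

# Later, under a current XGBoost version:
new_booster = xgb.Booster()
new_booster.load_model("xgboost-model.bin")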
CloudWatch metrics are a very "ephemeral" entity in AWS; they don't exist outside of their datapoints. So "importing" a metric is as simple as specifying its namespace, name, and dimensions, regardless of whether it already "exists" (i.e. there are datapoints with the same namespace, name, and dimensions) or not. So what you're doing in the code block in the question is exactly what's required, as long as the parameters match the datapoints that the AWS service emits. If your API is defined in CDK using the higher-level L2 constructs, you can use the provided abstractions instead. For example, HttpApi.metricServerError() would give you a reference to the metric emitted by a specific API. This is the preferred method of dealing with standard metrics in CDK. | The API Gateway in AWS creates the metrics 4XXError and 5XXError for different APIs in CloudWatch. I need to set alarms for these already existing metrics in CDK. I cannot find how you can import pre-existing CloudWatch metrics in CDK. Can anyone help me with how the code block would look for it? Currently, the code looks like this:
const externalPaymentFailedAlarm = new Alarm(
this,
`ExternalPaymentFailedAlarm`,
{
alarmDescription: `Alarm if external payment failed`,
metric: new Metric({
namespace: "v1/events",
statistic: "SampleCount",
metricName: "PUBLISH_SUCCESS",
period: Duration.minutes(1),
dimensionsMap: {
EventName: "external_payment_failed",
ServiceName: `${stage}-payment-failed`,
LogGroup: `${stage}-payment-failed`,
ServiceType: "AWS::Lambda::Function",
},
label: "External payment failed count",
}),
threshold: 1,
evaluationPeriods: 1,
datapointsToAlarm: 1,
}
);
externalPaymentFailedAlarm.addAlarmAction(new SnsAction(alertsTopic)); | Creating a Cloudwatch alarm on an existing Cloudwatch metric in CDK |
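For the 4XXError/5XXError case asked about above, a minimal aws-cdk-lib v2 sketch (assumed to run inside a Stack; 'my-api' is a placeholder REST API name, and HTTP APIs publish under an ApiId dimension with slightly different metric names):

import { Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

// Reference the existing datapoints emitted by API Gateway for a REST API named "my-api".
const serverErrors = new cloudwatch.Metric({
  namespace: 'AWS/ApiGateway',
  metricName: '5XXError',
  dimensionsMap: { ApiName: 'my-api' },
  statistic: 'Sum',
  period: Duration.minutes(5),
});

new cloudwatch.Alarm(this, 'Api5xxAlarm', {
  metric: serverErrors,
  threshold: 1,
  evaluationPeriods: 1,
  alarmDescription: 'Alarm if the API returns any 5XX responses',
});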
Don't use Finder's "Compress <FileName>" option on macOS, or you'll end up with a zip layout that Lambda can't resolve. Use the zip command instead. For example:
zip -r testFile.zip index.js node_modules utils
This will create a zip file containing all your required sub-files and folders. After that you can configure your handler in Lambda like this: testFile/index.handler | I tried to deploy a simple function using AWS Lambda. However, I got this error even though I set the handler correctly.
P.S.: I did not use serverless.yml or the CLI; I deployed it using the AWS Lambda console. Lambda handler: functions/fetchNest/handler.fetch. Error:
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'handler'\nRequire stack:\n- /var/runtime/index.mjs",
"stack": [
"Runtime.ImportModuleError: Error: Cannot find module 'handler'",
"Require stack:",
"- /var/runtime/index.mjs",
" at _loadUserApp (file:///var/runtime/index.mjs:951:17)",
" at async Object.UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:976:21)",
" at async start (file:///var/runtime/index.mjs:1137:23)",
" at async file:///var/runtime/index.mjs:1143:1"
]
handler.js:
module.exports.fetch = async event => {
// Get SSM creds.
(The folder structure was shown as a screenshot in the original post.) | "Error: Cannot find module 'handler'\nRequire stack:\n- /var/runtime/index.mjs"
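As a hedged illustration of how the zip layout has to line up with the handler string (paths here are examples, not taken from the question):

# Zip the *contents* of the function folder so handler.js sits at the archive root ...
cd functions/fetchNest
zip -r ../fetchNest.zip .
# ... then the Lambda handler setting is simply "handler.fetch".
# If you instead zip the parent folder itself, the file lives at
# functions/fetchNest/handler.js inside the archive and the handler must be
# "functions/fetchNest/handler.fetch".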
Monitoring/alarming on a single data point is always going to be hard/tricky, and is definitely a limitation of the service. I would say you should rethink your alarm. Why do you need to alarm on whether your Lambda executes? That is really monitoring whether the CloudWatch Events rule is working, which you should trust. I suggest you alarm on whether your Lambda throws errors, or monitor the results of the actions that your Lambda takes if possible. If you really must alarm on executions, maybe the best you could do is alarm when you have no executions for 2 datapoints instead of just one. This will be a lot more stable, but you may not be notified of the issue for 24 hours. (Follow-up from the asker: the intention of the alarm is to make sure the Lambda is indeed executing on the expected schedule. Alarming on 2 datapoints is not an option, since evaluation period times number of datapoints must be <= 24 hours per CloudWatch, so alarming on errors seems like the only option, ideally combined with a second alarm tracking invocations.) | I have a Lambda function which is scheduled to run once every 24 hours. I also have a CloudWatch alarm if the number of invocations drops below 1 every 24 hours. The issue here is that the invocation metric doesn't always show up in time for when the alarm condition is being evaluated. As a result, I have 0 invocations for a brief duration of the sliding 24-hour window (the alarm evaluation period). This results in the alarm changing its state, only to recover within 1 minute, since the metric is now available to be evaluated. Now, all of this would have been easy to tackle if CloudWatch supported evaluation periods greater than 24 hours, but alas, it doesn't. How do I tackle this situation? Am I approaching this problem correctly? If so, then how do I work around this CloudWatch limitation without introducing unnecessary complexity? | CloudWatch doesn't reliably monitor single datapoint within 24 hours
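A hedged boto3 sketch of the "alarm on errors instead" suggestion above (the function name, topic ARN, period and threshold are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="my-daily-job-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-daily-job"}],
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # hours with no invocations don't flap the alarm
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],
)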
"My Lambda function is on private subnets. The function shouldn't require any access to the internet." If your Lambda function is deployed in a VPC that does not have internet connectivity, then your Lambda function will be unable to reach the service endpoint (sns.us-east-2) over the public internet, as you would expect. If you want private connectivity to the service, then you need to provision a VPC interface endpoint for the service and deploy it in the same VPC as your Lambda. | I'm getting the following error when trying to call create_topic() in Boto3. It works locally in SAM running sam local invoke, but once deployed, it times out. ConnectTimeoutError: Connect timeout on endpoint URL: "https://sns.us-east-2.amazonaws.com/". Here's the code:
sns = boto3.client('sns')
topic_name = f'my-sns-topic-{ENVIRONMENT}'
topic = sns.create_topic(Name=topic_name)
notification_channel = {"SNSTopicArn": topic["TopicArn"], "RoleArn": "arn:aws:iam::my-role"}
My Lambda function is on private subnets. The function shouldn't require any access to the internet, so I think private subnets are OK(?). All my resources are in the same VPC. Does the Lambda function have to be on a public subnet to reach SNS? I tried adding a 0.0.0.0/0 route mapped to my internet gateway to the route table associated with the private subnet, but that didn't help. What am I missing? | Boto3 SNS ConnectTimeoutError: Connect timeout on endpoint URL
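A hedged boto3 sketch of the fix described in the answer above (VPC, subnet, security group IDs and region are placeholders): create an SNS interface endpoint in the same VPC and subnets as the function.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-2.sns",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow HTTPS (443) from the Lambda's security group
    PrivateDnsEnabled=True,  # lets sns.us-east-2.amazonaws.com resolve to the endpoint privately
)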
See https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html. Embed the SSL certificate ARN in the securelistener-alb.config file as follows:
option_settings:
aws:elbv2:listener:443:
ListenerEnabled: 'true'
Protocol: HTTPS
SSLCertificateArns: arn:aws:acm:us-east-2:1234567890123:certificate/#################################### | I have developed and deployed a Python application to AWS Elastic Beanstalk that works fine. When I modify the application bundle by adding the .ebextensions/https-reencrypt-alb.config file, the deployment of the application fails with an error as follows: "Unable to deploy application version: Configuration validation exception: You must specify an SSL certificate to configure a listener to use HTTPS." Contents of https-reencrypt-alb.config as follows:
aws:elbv2:listener:443:
DefaultProcess: https
ListenerEnabled: 'true'
Protocol: HTTPS
aws:elasticbeanstalk:environment:process:https:
Port: '443'
Protocol: HTTPS
I have a certificate created already, but creating a listener on port 443 fails (silently, after reporting "Pending create"). I assume this is failing because I have not been able to deploy the version with this HTTPS termination file included. I have successfully deployed two previous, and very similar, applications with HTTPS support (in June and August) and they work fine. Has something changed in Elastic Beanstalk/Route 53/Certificate Manager since then that requires a different deployment process? | https listener creation fails in AWS Elastic Beanstalk
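Equivalently (a hedged sketch; the environment name and certificate ARN are placeholders), the same listener options can be applied without redeploying the bundle by updating the environment directly:

aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elbv2:listener:443,OptionName=SSLCertificateArns,Value=arn:aws:acm:us-east-2:123456789012:certificate/example \
    Namespace=aws:elbv2:listener:443,OptionName=Protocol,Value=HTTPS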
You can create a .npmrc file with the config legacy-peer-deps=true. | I am trying to use an npm package that has a max version of 18.0, and my EB instance is on 18.2. I want to update the npm install command to add the --legacy-peer-deps flag. Any suggestions? | AWS Elastic Beanstalk NPM failing. Need to add in the --legacy-peer-deps
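A sketch of the suggestion above, assuming the file is committed at the root of the source bundle deployed to Elastic Beanstalk so the platform's npm install picks it up:

# .npmrc (at the root of the Elastic Beanstalk source bundle)
legacy-peer-deps=true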
You can handle the error based on the response: if the command response contains "Addon already exists" you can exit 0, and return an error if it is something else (it could be an aws cli permission issue or a wrong command).
resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
RESULT=$(aws eks create-addon --cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} --addon-name vpc-cni --addon-version v1.11.2-eksbuild.1 --service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} --resolve-conflicts OVERWRITE 2>&1)
if [ $? -eq 0 ]
then
echo "Addon installed successfully $RESULT"
exit 0
elif [[ "$RESULT" =~ .*"Addon already exists".* ]]
then
echo "Plugin already exists $RESULT" >&2
exit 0
else
echo "Encounter error $RESULT" >&2
exit 1
fi
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
}
 | I am installing CNI using null_resource in Terraform. Now, if the CNI is already installed, the Terraform script fails with the error: exit status 254. Output: │ An error occurred (ResourceInUseException) when calling the CreateAddon │ operation: Addon already exists.
resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
aws eks create-addon \
--cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} \
--addon-name vpc-cni \
--addon-version v1.11.2-eksbuild.1 \
--service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} \
--resolve-conflicts OVERWRITE
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
} | How to make Terraform continue with execution and ignore an error with resource creation during terraform apply? |
Unfortunately, you can't do that. There are few approaches to authentication, the JWT Cognito uses is one of them. The pros is you're not keeping track of the authorization on your side, but you include the expiration date in the token. You can't choose which tokens to revoke, only way is to rotate private key, but in that case you force all users on all devices to relog.In your case you need to store info about logged devices on server side, and additionally verify those with Cognito hooks.ShareFollowansweredJul 29, 2022 at 21:52karjankarjan97611 gold badge77 silver badges1919 bronze badgesAdd a comment| | I am developing a react native mobile app. I want my user to login in one device with once account. If a user tries to login to another mobile device with same account, he should be logout from the first mobile device. but official docs of AWS cognito provide two options either logout or global logout. In global logout it logs user out from device 1 and 2 both. what is expected If a user logs in second mobile device it should automatically be logout from the other one.Please see the attached SDK link.see hereWhat I have already tried?Through the AdminUserGlobalSignOut method, we are only able to revoke refresh tokens. It invalidates all refresh tokens that Amazon Cognito has issued to a user. The user's current access and ID tokens remain valid until they expire. By default, access and ID tokens expire one hour after they're issued. see detail for AdminUserGlobalSignOut herehttps://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminUserGlobalSignOut-property.See hereWe need to immediately invalidate the user's current access and ID tokens when invalidates all refresh tokens or successfully calls AdminUserGlobalSignOut, Don't wait to expire The user's current access and ID tokens. | How can logout from all devices in AWS Cognito? |
For us the solution was to not set the datasource dynamically from the environment variable we added in the interface. When we moved the target query to a fixed datasource, it all worked ok.ShareFollowansweredFeb 20, 2023 at 14:54PeterdkPeterdk15.8k2121 gold badges102102 silver badges141141 bronze badges1I had a similar problem and solved it in the same way–rodolkJan 31 at 22:22Add a comment| | I'm trying to set up alerts on one of my graphs. I'm using AMG (Amazon Managed Grafana). However, I'm getting "Failed to test the rule" notification. When I inspect HTTP response, it showsStatus Code: 500 Internal Server Error
{"message":"Failed to test rule"}URLhttps://g-39e1d60d36.grafana-workspace.us-east-1.amazonaws.com/api/alerts/testHere is my alert setup (even If I try something super simple, still getting the same issue):To me, it seems like Grafana internal error/bug, does anyone experience a similar issue and know the potential resolution? | Testing Grafana alert returns "Failed to test the rule" with 500 - Internal Server Error response |
Try to untaint the resource e.g.terraform untaint aws_vpc_peering_connection.aShareFollowansweredJul 6, 2022 at 14:05Oleksii IakovenkoOleksii Iakovenko3611 bronze badge0Add a comment| | I'm trying to create VPC Peering between two VPCs in two different accounts. One is managed by me and another one by others and I don't have access to it.
I'm using the next snippet of Terraform script.resource "aws_vpc_peering_connection" "a" {
peer_owner_id = var.a.aws_account_id
peer_vpc_id = var.a.vpc_id
vpc_id = aws_vpc.main.id
peer_region = "eu-west-1"
requester {
allow_remote_vpc_dns_resolution = false
}
}Next, it is going to be manually accepted by those who manage that account.
The problem is whether Peering is accepted or not Terraform wants toreplacethat Peering connection:# module.vpc.aws_vpc_peering_connection.a is tainted, so must be replaced
-/+ resource "aws_vpc_peering_connection" "a" {
~ accept_status = "active" -> (known after apply)
~ id = "pcx-00000000000000000" -> (known after apply)
# (5 unchanged attributes hidden)
+ accepter {
+ allow_classic_link_to_remote_vpc = (known after apply)
+ allow_remote_vpc_dns_resolution = (known after apply)
+ allow_vpc_to_remote_classic_link = (known after apply)
}
# (1 unchanged block hidden)
}I have already tried to prevent the replacement by usinglifecyclelifecycle {
ignore_changes = all
}But it doesn't help... | VPC Peering is replaced all the time by Terraform |
cdk deploysynthesizestheCloudAssemblyartifacts intocdk.outeach time before deploying. Caching wouldn't help there.However, the CDK apparentlycaches zipped artifacts(before uploading to S3), so in theory you could save.zip-ing time by cachingcdk.out/.cache.ShareFollowansweredJun 30, 2022 at 11:09fedonevfedonev23.4k22 gold badges3131 silver badges4444 bronze badges2Try one time to make an arbitrary deployment withcdk deploy. This produces assets and templates incdk.outused for the deployment. Now, try successively - re-runningcdk deploy. - removingcdk.outand re-runningcdk deployThe latter takes much more time than the former, which I think invalidates your statement that caching would not help.–Emile TenezakisSep 11, 2023 at 5:42@EmileTenezakis Repeat deploys are faster because CDK can sometimes skip the S3 artefact upload, context lookups, and CloudFormation steps. But it always generates new local artefacts and hashes. The CDK re-synthesizes on each deploy to see whether there have been code/asset changes that require artefact redeploy. Ask yourself: how else would the CDK handle cache invalidation?–fedonevSep 11, 2023 at 8:25Add a comment| | I am using BitBucket pipelines to deploy an app to AWS using the Python CDK. As part of the process the cloud assemblycdk.outdirectory is created as documents in theAWS docs.I am wondering if there is any benefit in caching this directory so that it's reused between pipeline runs, just like we cachepipdependencies for example, or just let it be created from scratch on every pipeline run. | AWS CDK and caching the cdk.out directory in build pipelines |
Make sure your execution role does not have any permission boundaries. By default, the SageMakerFullAccess policy allows create app permissions - see this statement -{
"Effect": "Allow",
"Action": [
"sagemaker:CreatePresignedDomainUrl",
"sagemaker:DescribeDomain",
"sagemaker:ListDomains",
"sagemaker:DescribeUserProfile",
"sagemaker:ListUserProfiles",
"sagemaker:*App",
"sagemaker:ListApps"
],
"Resource": "*"
},You can add an inline policy such as below to make sure your role has permissions to create app -{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCreateApp",
"Effect": "Allow",
"Action": "sagemaker:CreateApp",
"Resource": "*"
}
]
}ShareFollowansweredJun 16, 2022 at 22:10durga_surydurga_sury1,02266 silver badges1010 bronze badgesAdd a comment| | I created a aws sagemaker user profile using terraform. I tried to launch the sagemaker studio from the user profile but was confronted with this error:SageMaker is unable to use your associated ExecutionRole [arn:aws:iam::xxxxxxxxxxxx:role/sagemaker-workshop-data-ml] to create app. Verify that your associated ExecutionRole has permission for 'sagemaker:CreateApp'. The role has sagemaker full access policy attached to it, but that policy doesn't have the createApp permission which is weird. Are there any policies I can attach to the role with the sagemaker createApp permission, or do I need to attach a policy to the role through terraform? | How to add sagemaker createApp to user profile executionrole? |
Starting with python3.8, AWS Lambda removed many Binaries like tar, find file, cat, which etc.So if you're using any of these binaries in our package, we can add them using layers.Method 1:Add libraries in layers in the custom directory, and let lambda know the directory by setting LD_LIBRARY_PATH and PATH environment Variables.Let's say you've added your binaries incustom_binsdirectory then you've to setPATHas/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin:/opt/custom_binsand if you've added your custom libraries incustom_libsdirectory then you've to setLD_LIBRARY_PATHas/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib:/opt/custom_libsMethod 2:By Default, Lambda will detect libraries in the/opt/libpath and binaries in/opt/bin, as we know layers will be available in/optdirectory, we can create a layer with/liband/bindirectories containing appropriate libraries & binaries.Layer Structure:custom_libs/
├── bin
│ ├── curl
│ └── bin_2
│ └── bin_3
└── lib
├── libbz2.so.1
├── lib_2
├── lib_3
└── magicHere, we need not set any environment variables as we uploaded our libraries & binaries in the path detected by lambda by default.Note: If your binaries require a magic file then you can add a magic file and specify a magic file path using theMAGICenvironment variable.ShareFollowansweredJun 3, 2022 at 6:57DilLip_ChowdaryDilLip_Chowdary1,07177 silver badges2020 bronze badgesAdd a comment| | I have deployed my app with python36 runtime long ago, and now AWS Deprecated this runtime.So while migrating from python36 to python38 or python39, I observed that AWS removed many in-built libraries in Amazon Linux 2 which is used for Python38 or python39.Here we need to add binaries if we're using them in our App.how to add custom binaries to the lambda package? Suggest ways to do it, if you know.References:AWS Lambda Deprecation Schedules | How to add custom libraries and binaries in AWS Lambda? |
Create a new,plain vanilla appto handle the standalone, non-pipeline deploy scenario:// bin/dev-app.ts
const app = new cdk.App();
new MyBusinessLogicStack(app, 'DevStack', props)Tell the CLI to deploy thedev-appwith an explicitapp command:cdk deploy --app 'npx ts-node bin/dev-app.ts'You now have two "apps": one that deploys the pipeline and the new one that deploys a standalone "business logic stack".ShareFollowansweredMay 17, 2022 at 12:56fedonevfedonev23.4k22 gold badges3131 silver badges4444 bronze badges3Is this really needed, though? You can just deploy the stages/stacks manually, the key is to supply the correct name.–gshpychkaMay 18, 2022 at 6:58@gshpychka Not strictly necessary, but it does have advantages for development use cases. A dev env deployed as a separate app: (1) can be deployed in isolation without touching the pipeline-linked production env; (2) can be configured to use cheaper resources; and (3) is compatible withwatch modefor faster deploy iterations.–fedonevMay 18, 2022 at 10:44(1) and (3) still applies to deploying the Pipeline's infra stacks manually, doesn't it?–gshpychkaMay 18, 2022 at 11:55Add a comment| | Currently I'm using@aws-cdk/pipelinespackage for quick and easy setup of CI/CD for my service.However during the experimentation/development phase, I would like to manually callcdk deployfor my stack with business logic components, so the deployment loop would be a lot faster, as I don't need pipeline self-mutation steps and also I don't want to push everything to repository each time.Unfortunately I'm not able to achieve this. After trying to call manuallynpx cdk deploycommand in the repository root folder, it's simply deploying the stack, that contains the pipeline resources.I was also trying to achieve this by calling stack name directly:npx cdk deploy -c config=dev <full-stack-name>And it fails withNo stacks match the name(s) [...]message.Is this possible? I believe it's quite important use case, since deploying through proper CI/CD pipeline takes at least 2-3 minutes and it ruins my focus. | AWS-CDK Pipelines trigger deployment from local terminal |
Execute the same command but tell node to add more memory:node --max-old-space-size=8192 <yourscript>The problem is likely that this is set by environment variables, so your machine probably has a different value to the EC2.Note that you would not be able to increase the allocated memory (8192MB in this example) more than the available memory of the machine. So if you are only running a.smallthen you are inherently not going to be able to match your own powerful machine.ShareFollowansweredMay 16, 2022 at 7:06TobinTobin1,78211 gold badge1717 silver badges2525 bronze badges11I just tried export NODE_OPTIONS="--max-old-space-size=8192" and than tried npm run build and now waiting for it to build.–Running momentsMay 16, 2022 at 7:08Is there a way to check how much memory i need?–Running momentsMay 16, 2022 at 7:11Easiest is to tune it by adding / removing until you and the machine are happy.–TobinMay 16, 2022 at 7:13Everytime I try to run npm run build it just crashes so I guess I will have to upgrade–Running momentsMay 16, 2022 at 7:15yeah; just relaunch it with more memory. please can you mark as correct if this has helped you so others can find it too :)–TobinMay 16, 2022 at 7:24|Show6more comments | I can see that the build folder has been made but get this error message. I dont know wether I should worry or not. I does not happen when i do it on my local machine?
yeah something def go wrong as I only get these 5 file made?
How can i prevent this
Command ran:npm run buildicon.ico logo192.png logo512.png manifest.json robots.txtFATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0xb09c10 node::Abort() [/usr/bin/node]
2: 0xa1c193 node::FatalError(char const*, char const*) [/usr/bin/node]
3: 0xcf8dbe v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xcf9137 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xeb09d5 [/usr/bin/node]
6: 0xec069d v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
7: 0xec339e v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
8: 0xe848da v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
9: 0x11fd75c v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
10: 0x15f2099 [/usr/bin/node] | Problem doing npm run build on AWS server | JavaScript heap out of memory |
Is policy 1 or policy 2 the preferred policy?Policy 2.Is it better to have explicitdenystatements (along withallowstatements) in the same policy?Yes - if you want to deny any IAM actions,alwaysprefer explicit deny policies.An explicit deny in any policy overrides any allows.As long as the original deny policy is protected from changes, any new policies added that attempt to allow the unwanted actionswill not work.However, when usingnot_actionswith the allow effect, any new policies added to allow the unwanted actions will actually reverse the effect of the original policy &will allow the undesired actions. With no explicit deny, there is nothing stopping someone else from adding an explicit allow (most of the time).The 2nd policy is always preferred.I only usenot_actionswhen I want todenya lot of things to make the resulting IAM policy smaller, as I know it is guaranteed to be future proof if the policy is well-secured.ShareFolloweditedMay 3, 2022 at 20:11answeredMay 3, 2022 at 20:05Ermiya EskandaryErmiya Eskandary21.2k33 gold badges4242 silver badges5050 bronze badges31Thanks for editing my question and for the answer as well!–DmitrySemenovMay 3, 2022 at 20:18Thanks for editing my question and for the answer as well!–DmitrySemenovMay 3, 2022 at 20:191@DmitrySemenov No worries, enjoy using AWS :)–Ermiya EskandaryMay 3, 2022 at 20:19Add a comment| | Policy 1:data "aws_iam_policy_document" "kms_policy" {
statement {
sid = "AllowEKSKMSAccess"
actions = [
"kms:*"
]
not_actions = [
"kms:Delete*",
"kms:ScheduleKeyDeletion",
"kms:Revoke*",
"kms:Disable*",
]
resources = ["*"]
}
}Policy 2:data "aws_iam_policy_document" "kms_policy" {
statement {
sid = "AllowEKSKMSAccess"
effect = "Allow"
actions = [
"kms:*"
]
resources = ["*"]
}
statement {
sid = "DenyEKSKMSDeletion"
effect = "Deny"
actions = [
"kms:Delete*",
"kms:ScheduleKeyDeletion",
"kms:Revoke*",
"kms:Disable*",
]
resources = ["*"]
}
}I want to prevent 4 actions from within the role associated with a managed EKS node group.Is policy 1 or policy 2 the preferred policy?Is it better to have explicitdenystatements (along withallowstatements) in the same policy? | not_actions vs explicit deny & allow statements in an IAM policy |
It seems to be something related toversion 8.0.2, which I'm sure AWS will fix at some point.As a workaround you can install another Amplify CLI version and you will be able to use it.Example:npm i -g @aws-amplify/[email protected]ShareFollowansweredApr 26, 2022 at 9:38olayer9olayer92133 bronze badges1I'm working through the AWS WildRydes walkthru on my own. I tried various (more recent) versions to get it working and ended up corrupting my EC2 instance (which I had to destroy and start over). If you already followed the step in the walkthru, do:npm uninstall -g @aws-amplify/clithen enter the command above from this answer and you'll be good to go.–danApr 28, 2023 at 13:06Add a comment| | ec2-user:~/environment/wild-rydes (master) $ amplify initDownloading release fromhttps://d2bkhsss993doa.cloudfront.net/8.0.2/amplify-pkg-linux-x64.tgznode:internal/buffer:959super(bufferOrLength, byteOffset, length);^RangeError: Array buffer allocation failedat new ArrayBuffer ()at new Uint8Array ()at new FastBuffer (node:internal/buffer:959:5)at createUnsafeBuffer (node:internal/buffer:1062:12)at allocate (node:buffer:410:10)at Function.allocUnsafe (node:buffer:375:10)at Function.concat (node:buffer:553:25)at Extract. (/home/ec2-user/.nvm/versions/node/v16.14.2/lib/node_modules/@aws-amplify/cli/lib/binary.js:124:37)at Extract.emit (node:events:538:35)at finishMaybe (/home/ec2-user/.nvm/versions/node/v16.14.2/lib/node_modules/@aws-amplify/cli/node_modules/readable-stream/lib/_stream_writable.js:624:14) | Does anyone know why I'am getting error during amplify init ? I checked the memory is not full? |
As far as I know, you cannot do this using the JS AWS SDK "postToConnection" API. Best you can do is write your own poor's man fragmentation and send the chunks as independent messages.const splitInChunks =
(sizeInBytes: number) =>
(buffer: Buffer): Buffer[] => {
const size = Buffer.byteLength(buffer);
let start = 0;
let end = sizeInBytes;
const chunks: Buffer[] = [];
do {
chunks.push(buffer.subarray(start, end));
start += sizeInBytes;
end += sizeInBytes;
} while (start < size);
return chunks;
};WheresizeInBytesmust be smaller than 32KB. Then you iterate over the chunks:await Promise.all(chunks.map(c => apiGatewayClient.postToConnection({ data: JSON.stringify(c), connectionId: myConnectionId })Which may run into rate limits depending on the number of chunks, so consider sending the requests serially and not in parallelFinal remark:Buffer.prototype.subarrayis very efficient because it does not reallocate memory: the new chunks point at the same memory space of the original buffer. Think pointer arithmetic in C.ShareFollowansweredDec 20, 2022 at 7:25enanoneenanone9491111 silver badges3030 bronze badgesAdd a comment| | I am using native javascript websocket in browser and we have an application hosted on AWS where every request goes through API gateway.
In some cases, request data is going upto 60kb, and then my websocket connection is closing automatically. In AWS documentation, I found out below explanation of this issuehttps://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.htmlAPI Gateway supports message payloads up to 128 KB with a maximum frame size of 32 KB. If a message exceeds 32 KB, you must split it into multiple frames, each 32 KB or smaller. If a larger message is received, the connection is closed with code 1009.I tried to find how I can split a message in multiple frames using native javascript websocket but could not find any config related to frames in documentation or anywhere elseAlthough I find something related to message fragmentation but it seems like a custom solution that I need to implement at both frontend and backendhttps://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#message_fragmentation | Split websocket message in multiple frames |
You shouldn't need to interact with the v2IdleConnectionReaperanymore, and that's why the public interface has changed to reflect this.There is a key difference between AWS Java SDK v1 & v2 in regards to the S3 client.V1'sAmazonS3Clientimplements theAmazonS3custom interface which provides theshutdownmethod to be implemented.This wasn't enforced & was an optional method so I assume the daemon thread was needed to prevent leaks in case the S3 client was not closed at all anywhere.V2'sS3Clientimplements theAutoClosableinterface, an inherent interface injava.langafter version 7.WithAutoClosable, the AWS SDK is clearly communicating the expectation of your application managing the closing & cleanup of the S3 client. This is preferably done by delegating the closing of the client to the JVM by enclosing it in atry-with-resourcesstatement or if needs be, explicitly viaS3Client.close().The AWS Java SDK v2 utilises newer features of the Java language & as such, it expects you to handle your resources correctly in line with modern Java development practices.Close your S3 client objects (& object content streams!) correctly & you will be fine.ShareFolloweditedApr 29, 2022 at 21:33answeredApr 29, 2022 at 19:45Ermiya EskandaryErmiya Eskandary21.2k33 gold badges4242 silver badges5050 bronze badgesAdd a comment| | I'm using S3Client from Java SDK v2. to upload/download files from AWS S3 in a distributed web application.I had a problem withidle-connection-reaperdaemon thread preventing/delaying the class from being unloaded during shutdown. I did some investigations and I figured out that in AWS Java SDK v1, this could be resolved by callingIdleConnectionReaper.shutdown()method.I imported apache client to my project<!-- https://mvnrepository.com/artifact/software.amazon.awssdk/apache-client -->
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>apache-client</artifactId>
<version>2.17.162</version>
</dependency>And I would like to do the same thing using AWS Java SDK v2. The problem is thatshutdown()method is no longer static and exposed.They changed the class to a Singleton and the only exposed APIs are:public synchronized boolean registerConnectionManager(HttpClientConnectionManager manager, long maxIdleTime)
public synchronized boolean deregisterConnectionManager(HttpClientConnectionManager manager)deregisterConnectionManager()calls shutdown internally but I don't know whatHttpClientConnectionManagerI should give as an argument for both methodsMy question is: Is there another approach to shutdown that daemon thread or I should stick to the new implementation ofIdleConnectionReaper? If so, what are exactlyHttpClientConnectionManagerparameter in bothregisterConnectionManagerandderegisterConnectionManagermethods? | Shutdown IdleConnectionReaper in AWS Java SDK v2 |
Probably too late to be really helpful but moving the App Runner to a VPC sends all outgoing traffic to the VPC.
The two options given in the docs areAdding NAT gateways to each VPCSetting up VPC endpointsDocumented within the first bullet point of theConsiderations when selecting a subnetsectionhttps://docs.aws.amazon.com/apprunner/latest/dg/network-vpc.htmlShareFolloweditedAug 8, 2022 at 18:33answeredJul 7, 2022 at 19:16Lucian ThorrLucian Thorr2,14511 gold badge2222 silver badges3131 bronze badges2It is a bit confusing though that it is possible to associate the VPC Connector to the public subnets of the VPC. The public subnets allow internet access through the Internet Gatway, so logically that should allow the App Runner Service to access the internet.–adonigDec 18, 2022 at 13:11The documentation is not very clear on how the NAT Gateway needs to be set up so it works with App Runner. Here's how we got it working:github.com/aws/apprunner-roadmap/issues/192–TuureJun 9, 2023 at 8:25Add a comment| | I've set up an AWS App Runner service, which works fine. Currently for networking it's configured as public access, but I'd like to change this to a VPC so that I can connect the service to an RDS instance without having to open the database up to the world.When I change the networking config to use my default security group, the service is unable to access the Internet. Cloning a git repo from Bitbucket brings up the error:ssh: Could not resolve hostname bitbucket.org: Try again... and trying to runnpm installbrings up:npm ERR! network request to https://registry.npmjs.org/gulp failed, reason: connect ETIMEDOUT 104.16.24.35:443My security group has an outgoing rule allowing all traffic out to any destination. My RDS instance is in the same VPC/security group and I'm able to connect to this without issue (currently I've opened up port 3306 to the world). Everything else I've read from a bunch of Googling seems fine: route tables, internet gateways, firewall rules, etc.Any help would be much appreciated! | AWS App Runner service cannot access Internet when added to a VPC |