This is basically a sensible design decision, as you described it. Instead of getting into "callback hell", the author of the example flattened the code by using waterfall. The alternative code would look like this:

```javascript
s3.getObject({Bucket: srcBucket, Key: srcKey}, function(response) {
  gm(response.Body).size(function(err, size) {
    // do resize with ImageMagick
  });
});
```

which is less readable, and can get much more complex and much less readable as steps are added to the processing.
I'm reviewing Amazon Lambda's example code for resizing images in S3 buckets. The example code (snipped for clarity):

```javascript
// Download the image from S3, transform, and upload to a different S3 bucket.
async.waterfall([
  function download(next) {
    // Download the image from S3 into a buffer.
    s3.getObject({Bucket: srcBucket, Key: srcKey}, next);
  },
  function transform(response, next) {
    gm(response.Body).size(function(err, size) {
      // do resize with ImageMagick
    });
  }
]);
// ... more handling code ...
```

shows they are using async waterfall. However, each of these ordered steps seems to rely on the results of the previous function, so in essence this is a sequential operation. What is the benefit of using async waterfall here? Is this something to do with Lambda's execution engine at Amazon, or just a sensible design decision in Node?
Use of async in Amazon Lambda example?
If you're using Apache then you could proxy all the /images URLs using Apache's built-in proxy support. You would want to add something like this to your Apache configuration:

```
ProxyRequests Off
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
ProxyPass /images/ http://images.s3-website-us-east-1.amazonaws.com/
```

This actually creates a reverse proxy. When the Apache server sees a request that matches /images/, it will reach out to the S3 URL to fetch the object and return it to the client.
We are currently moving our website to EC2 using OpsWorks. Right now we reference all media files for the site with relative URLs in the code, e.g. "/images/image1.jpg" and so on. Is there a way to alias the /images directory in our code to an S3 bucket? What we'd like to do is move the large images out to S3 so they aren't stored with each of our EC2 instances. I know we can create an alias DNS entry for the bucket itself, but then we'd have to reference that full URL inside our code, for instance http://images.s3-website-us-east-1.amazonaws.com. We'd rather keep the URL references in the code relative rather than absolute. We are using AWS Linux for the PHP/Apache front-end EC2 instances.
Utilizing AWS S3 for website static images
The issue was resolved by removing a large number of assets that were not needed or used. For instance, I had the entire Bootstrap project source and a few third-party library project source trees and their corresponding files, instead of just the src files that I was actually using.
I'm experiencing the below error when deploying to Elastic Beanstalk. This is a Ruby app running Rails 4.1.9 and Ruby 2.1.4 on Puma. The stack trace is as follows:

```
Errno::ENOMEM: Cannot allocate memory - node (in /var/app/ondeck/app/assets/javascripts/my_javascript.js)
```

My JavaScript file is pretty basic; it looks like this:

```javascript
//= require jquery
//= require jquery_ujs
//= require ../../../vendor/assets/components/bootstrap/dist/js/bootstrap.min
//= require ../../../vendor/assets/components/thirdpartylib.js
... and then basic functions
```

Unsure why exactly this is failing. I have not changed anything in the JavaScript file or the vendor assets. Any ideas on how to resolve this will be greatly appreciated.
Elastic Beanstalk [Rails] deploy issue - Cannot allocate memory
The problem turned out to be Timecop freezing to an earlier time as part of the test framework. This was solved by arranging things so the freeze wasn't necessary here. Discovered thanks to CircleCI's attentive support.
I'm running tests on CircleCI and getting this error about the time being wrong when posting to AWS S3 (via Paperclip's S3 integration):

```
AWS::S3::Errors::RequestTimeTooSkewed: The difference between the request time and the current time is too large.
```

I've tried setting up NTP or doing a curl request to update the time, but even root is not permissioned as it's a shared environment.

```
sudo date -s "$(curl -s --head http://google.com | grep ^Date: | sed 's/Date: //g')"
date: cannot set date: Operation not permitted
```

It works fine in my local environment and in production. How can I get these tests to pass?
CircleCI with AWS: RequestTimeTooSkewed error
You will want to store the Solr indexes on an EBS volume, which you can attach to the server. S3 is meant for serving files directly out to the internet (such as images and CSS files), or for general file storage (such as backups). It is not meant to be used as a mounted disk for a database. Solr likes very high IO, so SSD-backed EBS volumes are great for this. You can also make snapshots of an EBS volume to back up its data. If you set up Solr slaves, you can also get away with using the server's ephemeral storage. This is a large partition that comes with most instance types. It is volatile storage, meaning all of the data is lost if the server is shut down. However, it is free and quite fast. It is perfect for a slave which replicates its data from a master Solr instance backed by EBS.
I am using Solr on Amazon EC2, and I am hoping to configure the Solr instance so that it automatically stores data in Amazon S3 instead of anywhere on the server. However, I couldn't find any useful information on how to implement this. Does anyone know how? If this can't be achieved using Amazon S3, what cloud storage do you recommend? Thanks in advance.
Storing solr data with amazon s3
You need to compile Spark against YARN to use it. Follow the steps explained here: https://spark.apache.org/docs/latest/building-spark.html

Maven:

```
build/mvn -Pyarn -Phadoop-2.x -Dhadoop.version=2.x.x -DskipTests clean package
```

SBT:

```
build/sbt -Pyarn -Phadoop-2.x assembly
```

You can also download a pre-compiled version here: http://spark.apache.org/downloads.html (choose a "pre-built for Hadoop" package).
I am trying to run a fat jar on a Spark cluster using spark-submit. I made the cluster using the "spark-ec2" executable in the Spark bundle on AWS. The command I am using to run the jar file is:

```
bin/spark-submit --class edu.gatech.cse8803.main.Main --master yarn-cluster ../src1/big-data-hw2-assembly-1.0.jar
```

In the beginning it was giving me the error that at least one of the HADOOP_CONF_DIR or YARN_CONF_DIR environment variables must be set. I didn't know what to set them to, so I used the following command:

```
export HADOOP_CONF_DIR=/mapreduce/conf
```

Now the error has changed to:

```
Could not load YARN classes. This copy of Spark may not have been compiled with YARN support.
Run with --help for usage help or --verbose for debug output
```

The home directory structure is as follows:

```
ephemeral-hdfs  hadoop-native  mapreduce  persistent-hdfs  scala  spark  spark-ec2  src1  tachyon
```

I even set the YARN_CONF_DIR variable to the same value as HADOOP_CONF_DIR, but the error message is not changing. I am unable to find any documentation that highlights this issue; most of it just mentions these two variables and gives no further details.
Spark Submit Issue
To do this kind of test you need to patch connect_to_region(). When this method is patched, return a MagicMock() object that you can use to test all of your function's behavior. Your test case can be something like this one:

```python
class MyTestCase(unittest.TestCase):
    @patch("boto.sqs.connect_to_region", autospec=True)
    def test_test(self, mock_connect_to_region):
        # grab the mocked connection returned by the patched connect_to_region
        mock_con = mock_connect_to_region.return_value
        # call client
        self.test_client.get('/test')
        # test connect_to_region call
        mock_connect_to_region.assert_called_with("eu-west-1", aws_access_key_id="asd", aws_secret_access_key="asd")
        # test get_queue()
        mock_con.get_queue.assert_called_with('my_queue')
        # finally test send_message
        mock_con.send_message.assert_called_with(mock_con.get_queue.return_value, json.dumps({'data': 'asd'}))
```

Just some notes: I wrote it in a white-box style and checked all the calls your view makes; you can make it looser and omit some checks. Use self.assertTrue(mock_con.send_message.called) if you just want to check that the call happened, or use mock.ANY as an argument if you are not interested in some argument's content. autospec=True is not mandatory but very useful: take a look at autospeccing. I apologize if the code contains some error... I cannot test it now, but I hope the idea is clear enough.
My webapp wants to send a message to AWS SQS with boto, and I'd like to mock out sending the actual message and just check that send_message is called. However, I do not understand how to use Python mock to patch a function that the function being tested calls. How could I achieve mocking out boto's con.send_message as in the pseudo-like code below?

views.py:

```python
@app.route('/test')
def send_msg():
    con = boto.sqs.connect_to_region("eu-west-1", aws_access_key_id="asd", aws_secret_access_key="asd")
    que = con.get_queue('my_queue')
    msg = json.dumps({'data': 'asd'})
    r = con.send_message(que, msg)
```

tests.py:

```python
class MyTestCase(unittest.TestCase):
    def test_test(self):
        with patch('views.con.send_message') as sqs_send:
            self.test_client.get('/test')
            assert(sqs_send.called)
```
How to patch a function that a Flask view calls
Another alternative would be to read directly from your CarrierWave object and use send_data as opposed to send_file. You'll get the added benefit of CarrierWave using its own local cache if your file exists there as well. Anyway, you could try using send_data with your example like this:

```ruby
send_data(my_file.file.read, filename: 'your_txt_file.txt',
          type: 'text/plain', disposition: 'attachment',
          stream: 'true', buffer_size: '4096')
```

You should specify disposition: 'attachment' if you want the file to be downloaded directly by the user; if you would prefer the file to render within the user's own browser, you could use disposition: 'inline'.
I want to download a file uploaded to AWS S3.

Controller:

```ruby
def download
  send_file my_file.url
end
```

Actually I have tried all the code found in similar posts:

```ruby
send_file open(my_file.url).read
```

also without read. Nothing works.
Send_file aws s3 carrierwave
The Amazon Mobile Analytics client requires Cognito to facilitate authentication and authorization when submitting data. This is used to increase the security of submitting data from mobile clients to ensure valid credentials are sending the data for a particular app. If Cognito is not used, the data submission call will fail due to invalid permissions.
Compared to competing analytics services, Amazon Mobile Analytics appears to require many more configuration and integration steps. For example, in Flurry Analytics, the setup is pretty simple:

```objectivec
[Flurry startSession:@"<app-id>"];
[Flurry logEvent:@"<event-name>"];
// Optionally, set the userID
[Flurry setUserID:@"userid"];
```

I was hoping the equivalent in Amazon Mobile Analytics would be something like this for unauthenticated users:

```objectivec
[AWSLogger defaultLogger].logLevel = AWSLogLevelVerbose;
AWSMobileAnalytics* analytics = [AWSMobileAnalytics mobileAnalyticsForAppId:@"<app-id>"];
id<AWSMobileAnalyticsEventClient> eventClient = analytics.eventClient;
id<AWSMobileAnalyticsEvent> event = [eventClient createEventWithEventType:@"ScreenView"];
[eventClient recordEvent:event];
```

However, after running that code and putting the app in the background to upload and send off the event, no errors or other log messages are given. Both Amazon's quick start guide and this tutorial (http://www.nickyap.info/mobile-analytics/) step you through extra steps configuring Amazon Cognito, even for tracking unauthenticated users. This in turn requires extra AWS permissions for creating user roles, etc., which my AWS account doesn't have. Has anyone tried using Amazon Mobile Analytics without configuring Cognito first? Or is that an absolute requirement?
Using Amazon Mobile Analytics without configuring Amazon Cognito
Okay people, I was suspecting Amazon Identity Management here; I found the reason in this forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=390680

If the sender of the SQS message used IAM credentials, then the SenderId will be the IAM user id, which is different from the AWSAccessKeyID. If the sender of the SQS message used the "root" AWS credentials, then the SenderId will be the root account number. This information is what is provided by IAM.

For the root user:

```java
AmazonIdentityManagement iam = new AmazonIdentityManagementClient(new BasicAWSCredentials(accessKey, secretKey));
GetUserResult getRootUserResult = iam.getUser();
System.out.println("RootUserId: " + getRootUserResult.getUser().getUserId());
```

prints out a number like "123412341234", whereas for a particular IAM user:

```java
GetUserRequest getUserRequest = new GetUserRequest().withUserName("sqsuser");
GetUserResult getUserResult = iam.getUser(getUserRequest);
System.out.println("UserId: " + getUserResult.getUser().getUserId());
```

prints out "AIDADEADBEEF123".
The scenario is the following: I have an Amazon SQS queue that several AWS accounts are writing to. I am giving write permissions to a specific list of Amazon account IDs. Each time I get a message from this queue, I also ask for the "SenderId" attribute. That's how I can identify which customer placed the job in the queue, so I know who to bill.

My problem is: for customer A with AWS account id 12345, the SenderId is 12345. It looks like an account ID. For customer B with AWS account id 67890, the SenderId is AFFFFCCUQBTZHXXMBGLTBK, which looks totally random... out of the blue...

There is nothing in the Amazon AWS documentation that says the sender ID can be anything other than the Amazon account ID. Well, except for anonymous access, which is not the case here:

"Q: What is the "SenderId" attribute value of a message in the case of anonymous access? Amazon SQS provides the IP address when the AWS account ID is not available, such as when an anonymous user sends a message."

From the official API page, this is what is said about the SenderId:

"SenderId - returns the AWS account number (or the IP address, if anonymous access is allowed) of the sender."

Any ideas about why this inconsistency? P.S.: why do I care? If I can't map the SenderId I cannot bill :(
Amazon SQS SenderId inconsistency
I'm going to take a guess that you are using sendmail to send the emails, correct? According to the Amazon SES docs:

"(Optional) If you are sending email through Amazon SES from an Amazon EC2 instance, you may need to assign an Elastic IP Address to your Amazon EC2 instance for the receiving ISP to accept your email. For more information, see Amazon EC2 Elastic IP Addresses."

I'd suggest moving away from sendmail and using Boto to integrate with SES, or submitting your emails via SMTP to Amazon from your Python app.

Update: The comment from Michael - sqlbot made me think to investigate further; here is what I found. I remember learning about reverse DNS lookups on email, and found the following:

"Amazon now has a new email policy in which outbound SMTP traffic is blocked (beyond minuscule usage). In order to be able to send email directly from EC2 you also need to provision an Elastic IP address for your instance. Amazon will work to keep that Elastic IP off of the common anti-spam lists."

as well as:

"Amazon has announced a new private beta where they will set PTR records for your Elastic IP address."

Taken from "Sending Email from EC2". The article explains all the details of delivering email from an EC2 instance and its problems. Even though it's old, I believe the reverse lookups apply now more than ever to fight spam.
I have a small application running on Python Flask on EC2. Customers have been complaining about not getting system emails. I have the production application load balanced on EC2. I just noticed that the reliability of emails seems to improve once each server has an Elastic IP. Since I have a load balancer, I don't usually assign Elastic IPs to the production machines; I only assign one to the test machine, and I move that IP address over to production when I don't require a test environment, to keep it without additional charge. Do I need an Elastic IP for each server that talks to the email service (SES) when required? Note: SES is not available in my AWS region.
Sending emails to AWS SES from EC2 with no elastic ip
You can get this info through the get-caller-identity API of the STS service. In Ruby:

```ruby
Aws::STS::Client.new(your_options).get_caller_identity[:account]
```
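For comparison only (not part of the original Ruby answer), the same STS lookup in Python with boto3 looks roughly like this; credentials are assumed to come from the usual environment/config chain:

```python
# Minimal sketch: STS GetCallerIdentity returns the account ID for the
# credentials currently in use; no special IAM permissions are required.
import boto3

sts = boto3.client("sts")
account_id = sts.get_caller_identity()["Account"]
print(account_id)
```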
Using the fog library for Ruby, how can I get the account ID of the current authentication? I use access_key_id and secret_access_key to authenticate.
Fog AWS: How to get account ID
Did you consider deploying each microservice in a separate Docker container and deploying these containers on AWS Elastic Beanstalk? http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
We have developed multiple micro-services using Dropwizard, so each micro-service has its own embedded Jetty server. Has anyone had experience with deploying an embedded server to Elastic Beanstalk for auto-scaling? Thanks for your time.
How to deploy an embedded server to Elastic-Beanstalk?
The configuration is now called UseHttp. It's in the Amazon.Runtime.ClientConfig class, which is the base for all of the client-specific config classes.

```csharp
var config = new AmazonS3Config
{
    UseHttp = true,
    ...
};
var client = new AmazonS3Client(config);
```

This works for all clients (unless they don't support HTTP; you can still set it, but it will be ignored).
It looks like there used to be an option in the AmazonS3Config class to specify the communication protocol to use, but I don't see that anymore. Where did it go?I'm trying to do some benchmarking, and one of the things I want to test is HTTP vs HTTPS. Since a lot of our objects are fairly small (less than 512k), I'm wondering if the HTTPS handshaking is contributing to our slowness in uploading.
How can I force HTTP-only mode in the C# AWS SDK?
By 'anonymous user', do you mean 'unauthenticated user'? If so, then you have two options (#1 and #2 below). If not, then you have one option (#1 below). All of this assumes, of course, that you cannot persuade the uploader himself to modify the ACLs on these objects.

1. Delete the objects. As the bucket owner, you can always delete objects (and stop paying for them).
2. Become the object owner and grant the bucket owner (you) full control. Anyone can be the unauthenticated user and hence the object owner.

Here is an example of how to do #2 for bkt/cat.jpg using node.js and the AWS JavaScript SDK. This code invokes putObjectAcl as the unauthenticated user and gives the bucket owner (you) full control over the object.

```javascript
var aws = require('aws-sdk');
var s3 = new aws.S3();
var p = { Bucket: 'bkt', Key: 'cat.jpg', ACL: 'bucket-owner-full-control' };
s3.makeUnauthenticatedRequest('putObjectAcl', p, function(e, d) {
  if (e) console.log('err: ' + e);
  if (d) console.log('data: ' + d);
});
```

Unfortunately, the awscli does not appear to support unauthenticated S3 calls, otherwise I would have proposed using that to modify the ACLs of the object. Note that the canned ACL of bucket-owner-full-control gives both the object owner and the bucket owner full control.
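If you would rather script option #2 from Python than node.js, a roughly equivalent unauthenticated call can be made with boto3 by disabling request signing. This is a sketch, reusing the hypothetical bkt/cat.jpg names from the answer, and it assumes the object's ACL allows the anonymous user to write its ACL, as in the scenario above:

```python
# Sketch: issue PutObjectAcl as the anonymous user so that the object owner
# grants the bucket owner full control over the object.
import boto3
from botocore import UNSIGNED
from botocore.client import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))  # unsigned = anonymous
s3.put_object_acl(Bucket="bkt", Key="cat.jpg", ACL="bucket-owner-full-control")
```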
I have a sample web page that has allowed anonymous users to upload objects and create folders in my S3 bucket.Unfortunately I had not set any specific bucket policies or ACLs before doing this.Now I have the problem where an anonymous user has created a folder and uploaded objects which I (as the root user) cannot download or access. I plan to set up a new bucket policy before more users can upload objects, but right now I need access to these current objects owned by anonymous.Can someone tell me how I can do this?
Amazon S3 - How do I download objects owned by anonymous user?
You can create a distribution for a specific folder; this was just announced in mid-December:

"Amazon CloudFront Now Allows Directory Path as Origin Name - Date: December 16, 2014. Details: When you specify the origin for a CloudFront distribution - the Amazon S3 bucket or the custom origin where you store the original version of content - you can now specify a directory path in addition to a domain name. This makes it easier for you to deliver different types of content via CloudFront without changing your origin infrastructure. Learn more by reading our announcement."

https://aws.amazon.com/about-aws/whats-new/2014/12/16/amazon-cloudfront-now-allows-directory-path-as-origin-name/
Is it possible to assign a unique domain name for an S3 bucket folder - not the entire bucket but just a folder in the bucket?For example if I have a bucket s3.amazonaws.com/my.domain.tv and a folder Vasilis/ within that bucket I want to point a domain name vasilis.domain.tv at s3.amazonaws.com/my.domain.tv/Vasilis.I have found how I can do it for an entire bucket but I couldn't find anything for folders.Also, can I do it with cloudfront, i.e. create a distribution for a specific folder in a bucket?
DNS name for S3 bucket folder
This depends purely on how you intend to use the environments (or what you intend to use them for), and has pretty much nothing to do with the technology implications of RDS. The easiest way would be to have individual RDS instances; the direct advantages would be:

- Separation of concerns
- The ability to scale the environments individually (i.e. to stress test or load test an environment; the PROD instance could be an xlarge while staging could be a micro)

You need to understand that the cost implication would be at least 2x. The other approach would be to create different databases in the same instance and treat/consider them as individual environments.
What is the best method for setting up distinct Production and Staging databases in Amazon RDS? Is it advisable to spin up one RDS instance for Production and another for Staging and keep them entirely separate, or does it work just as well to just use one RDS instance with a Production database and a Staging database?
Amazon RDS Staging vs Production DB
They are case-sensitive, but metric generation can differ based on your metric filter setup:

- If you have three filters publishing to separate metrics, e.g. LogMetrics/Metric1, LogMetrics/Metric2, LogMetrics/Metric3, then the entries with different casing will be collected into different metrics.
- On the other hand, if you have set up your filters to use the same metric, then all log entries will be collected into this one metric.

It depends on your use case which way you set up your filters. In your case, collecting all error messages into one metric is probably better because you can also define an alarm on that metric if the number of errors goes above a given threshold.

To verify that patterns are case-sensitive, you can test them by:

Using the CloudWatch Console:

1. Go to https://console.aws.amazon.com/cloudwatch/home#logs:
2. Select a log group
3. Click on Create Metric Filter
4. On this page you can test any pattern against your log streams or against any custom text content that you enter into the text area. It will show the number of matches, the extracted values, etc.

Using the TestMetricFilter API call: see http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_TestMetricFilter.html
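For the second verification route, the TestMetricFilter call is also exposed in the Python SDK; a hedged boto3 sketch (the pattern and sample log lines here are made up for illustration):

```python
# Sketch: test a filter pattern against sample messages without touching any
# real log group, to confirm that pattern matching is case-sensitive.
import boto3

logs = boto3.client("logs")
result = logs.test_metric_filter(
    filterPattern="ERROR",
    logEventMessages=[
        "ERROR something broke",
        "Error lowercase variant",
        "all fine here",
    ],
)
# If matching is case-sensitive, only the first message should be returned.
print([m["eventMessage"] for m in result["matches"]])
```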
The AWS documentation states that CloudWatch metric filters are case-sensitive, so I created 3 CloudWatch Logs metrics, with filter patterns "ERROR", "Error", and "error", to ensure that I am informed of any errors written to my log files no matter the source. When I tested the metrics by forcing an error that resulted in the word "ERROR" appearing in a log, all 3 metrics were triggered, when I only expected the one with filter "ERROR" to trigger. Does this mean that the filters are actually case-insensitive, contrary to the documentation? This would clearly be handy (fewer metrics), but I want to be sure first. TIA
AWS Cloudwatch Metric Filters: are they actually case-insensitive?
I think there are two possible questions here, so I'll answer them both.

If the question is "can I access RDS from a Cognito-authenticated app": RDS does not use AWS credentials for authentication, it uses database credentials. Cognito only vends credentials for AWS services, so you cannot use Cognito credentials to access an RDS database.

If the question is "can I link RDS to Cognito so I can query, manipulate or analyze my end users' datasets": all access to users' datasets is done through Cognito. There is no export or link feature that allows you to use RDS.
I am developing an Android application using AWS's RDS and AWS Cognito services. My question is: is there any way that I can connect RDS with Cognito? Please help.
Is it possible to connect Amazon Web Services RDS with Amazon Web Services Cognito?
You can definitely do this, but here's what you'll need to do:

- Make sure IIS is configured to route any incoming connection on a particular IP address to your site. This is distinct from IIS specifically listening for a particular hostname (e.g. mywebsite.com).
- As an alternative to the above, you could also manually set your DNS on your local computer and then use your web browser to visit mywebsite.com. From IIS's perspective, a user will have requested mywebsite.com just as if public DNS were set.
- As far as the IP address you visit, your instance will either have an ephemeral public IP address, which will be reset when the instance is stopped and started, or an Elastic IP address, which persists across restarts.
- As @Anthony Manzo mentioned, you'll need to make sure that the security group associated with this instance allows port 80. In addition, you may want to disable Windows Firewall completely, or check that it allows port 80 in all three "zones" (Windows Firewall has 3 different zones to manage).
I just created a new site in IIS on Amazon's EC2 and I was wondering if there is a way to access it publicly without assigning a domain. In detail: I created a new site, dev.example.com, which is accessible when I am logged into my instance. Is there a way to access it from outside by doing, let's say, 54.xxx.xx.xxx:80:dev.example.com? I don't know if that's even possible, so any hints are appreciated.
How to access a site on AWS EC2 without a domain name
Ok, here's what I found thanks to @sebsto suggesting the Policy Simulator: I need both PutObjectAcl and PutBucketAcl. Now sync works.
I can't figure out why I get a 403 permission denied error when viewing a page. I am using the AWS CLI with the following command:

```
aws s3 sync [source] [s3 destination] --acl public-read --recursive --delete --profile [my_profile]
```

In IAM my policy is as follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["bucket_location"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["bucket_location"]
    }
  ]
}
```

The paths are correct since it does upload the files, but it looks like it's ignoring the --acl public-read option. When I use the cp command, it seems to run okay. I just like to use sync for the --delete option for cleanup. Any ideas?
AWS S3 CLI ACL public-read gives me 403 with sync command
Amazon now supports using a TCP header to pass the source along, as discussed in this article. Apache does not at this time support proxy protocol natively. If you read the comments, there are source patches to allow Apache to handle it, or you could switch to nginx.
We are using Amazon Elastic Load Balancer and have 2 Apache servers behind it. However, we are not able to get the X-Forwarded-For headers on the application side. I read a similar post, but could not find a solution in it: "Amazon Elastic load balancer is not populating x-forwarded-proto header". This is how the ELB listeners are configured:

```
HTTP 80   HTTP 80   N/A  N/A
TCP 443   TCP 443   N/A  N/A
```

Should changing the 443 port to HTTPS (Secure HTTP) instead of TCP populate the headers? The other option is SSL (Secure TCP). If this works, I would also like to know why and what makes the difference.
AWS ELB not populating x-forwarded-for header
It is found in the documentation:

```groovy
compile 'com.amazonaws:aws-android-sdk-s3:2.+'
```

http://docs.aws.amazon.com/mobile/sdkforandroid/developerguide/setup.html
Is there any way to add an AWS S3 compile line to build.gradle to load the jar files into an Android project? I can add them to the libs folder after downloading the full zip package from Amazon, but I want to do it using a dependency. I've tried something like this, but have had no luck:

```groovy
dependencies {
    compile 'com.amazonaws:aws-android-sdk:+'
}
```

The only solution that produced results was using the Java SDK, but I want to use the new Android lib:

```groovy
dependencies {
    compile 'com.amazonaws:aws-java-sdk:+'
}
```

Where can I find and read more info about how to build the compile line, and where do I find this line in the Maven repo? I need the S3 and Core libs for my project.
AWS S3 Gradle dependency line
The different Amazon EC2 Availability Zones are in different physical locations. While the connections between availability zones are quite good, it is still a WAN connection. From the RabbitMQ docs:

"RabbitMQ clusters do not tolerate network partitions well. If you are thinking of clustering across a WAN, don't. You should use federation or the shovel instead." (emphasis mine)

https://www.rabbitmq.com/partitions.html

In short, a 1 minute or so interruption in connectivity will cause a network partition to be created. While this would be an unusual event for EC2, it can and sometimes will happen.
Currently I am using one availability zone in my ec2 launch config. It is important that I don't get network partitions in my app, as rabbitmq does not handle network partitions well when clustering and HA is used (which I am using).I am very fuzzy on the concept of network partitions. Would it be safe for me to use two availability zones?
Will using two availability zones in EC2 introduce network partitions?
I came across a similar problem when I was doing XML parsing in Python (2.7). Finally, I figured out that it was caused by an inaccurately defined LD_LIBRARY_PATH environment variable. Here was my situation: the XML parsing library libexpat.so was confused between the MATLAB version (libexpat.so.1.5.0) and the system version (libexpat.so.1.6.0). The ImportError arose when the MATLAB version of libexpat.so was loaded. After I defined LD_LIBRARY_PATH precisely, that is, excluding the MATLAB library path, everything went smoothly.
I am having some problems running the AWS CLI on Ubuntu 14.04. I keep getting the following error:

```
Traceback (most recent call last):
  File "/usr/local/bin/aws", line 15, in <module>
    import awscli.clidriver
  File "/usr/local/lib/python2.7/dist-packages/awscli/clidriver.py", line 16, in <module>
  File "/usr/local/lib/python2.7/dist-packages/botocore/session.py", line 27, in <module>
    import botocore.credentials
  File "/usr/local/lib/python2.7/dist-packages/botocore/credentials.py", line 23, in <module>
    from botocore.compat import total_seconds
  File "/usr/local/lib/python2.7/dist-packages/botocore/compat.py", line 118, in <module>
    import xml.etree.cElementTree
  File "/usr/lib/python2.7/xml/etree/cElementTree.py", line 3, in <module>
    from _elementtree import *
ImportError: PyCapsule_Import could not import module "pyexpat"
```

When I do ls -l /usr/lib/python2.7/*/pyexpat* I get:

```
-rw-r--r-- 1 root root 69200 Mar 23 01:57 /usr/lib/python2.7/lib-dynload/pyexpat.x86_64-linux-gnu.so
```

Any help is much appreciated.

EDIT: Somehow the problem was that I had to run the aws commands with sudo.
Pyexpat import error when running aws cli
Here, throughput is the data sent over the network. When you specify some limit (20 in your case), only that number of rows is transferred at that time. In the case of no limit, a maximum of 1 MB of data will be sent per page. The number of read capacity units consumed by a query depends on the size of your result: for read operations, 4 KB = 1 unit, and for write operations, 1 KB = 1 unit. For example, if your query returned 15 KB of data, the read units consumed will be 15/4 = 4 read units (rounded up).
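To make the rounding rule concrete, here is a tiny sketch of the read-unit math described above (it also assumes the standard DynamoDB rule that eventually consistent reads cost half as much, which the answer itself does not mention):

```python
# Sketch: read-capacity math for a query result, using the 4 KB read unit size.
import math

def read_units(result_size_kb, eventually_consistent=False):
    # DynamoDB rounds the result size up to the next 4 KB boundary...
    units = math.ceil(result_size_kb / 4.0)
    # ...and eventually consistent reads are billed at half the rate.
    return units / 2.0 if eventually_consistent else units

print(read_units(15))        # 4 units, matching the 15 KB example above
print(read_units(15, True))  # 2.0 units with eventual consistency
```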
I have a small doubt regarding read capacity unit consumption when I query a DynamoDB table with a LIMIT set on it. Say my query expression could return 100 matching items if I iterate with LastEvaluatedKey, but the limit is set to 20 and I don't iterate over all pages (I want the top 20 only). How many read capacity units will be consumed? Is it going to be for 100 items or only for the retrieved 20 items? I have read the documentation but could not find anything clearly mentioning the paginated case.
DynamoDB provisioned throughput for paginated query
```powershell
$objects = Get-S3Object -bucketname $S3_Bucket -SecretKey $S3_SecretKey -AccessKey $S3_AccessKey -Region $S3_Region -KeyPrefix 8.9.2014

foreach($key in $objects.key)
{
    $filename = $key -replace "8.9.2014/"
    Copy-S3Object -Bucket $S3_Bucket -Key $key "$S3_Folder_Destination\$filename" -SecretKey $S3_SecretKey -AccessKey $S3_AccessKey -Region $S3_Region
}
```

See: https://forums.aws.amazon.com/thread.jspa?messageID=441291
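For anyone doing the same backup from Python rather than PowerShell, the loop translates fairly directly to boto; a sketch where the bucket name, prefix, and destination folder are placeholders:

```python
# Sketch: download every object under a prefix to a local folder with boto 2.
import os
import boto

conn = boto.connect_s3()  # credentials come from env vars or ~/.boto
bucket = conn.get_bucket("my-bucket")

dest = "backup"
if not os.path.isdir(dest):
    os.makedirs(dest)

for key in bucket.list(prefix="8.9.2014/"):
    filename = key.name.replace("8.9.2014/", "")
    if not filename:  # skip the "folder" placeholder key itself
        continue
    key.get_contents_to_filename(os.path.join(dest, filename))
```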
I have created this function to pull objects from my S3 bucket. It works, but because of the -Key parameter I can only do one file at a time. Is there any way to back up the entire contents of the bucket without writing multiple Copy-S3Object cmdlets?

```powershell
function CopyFromS3ToFolder($S3_Bucket, $S3_Folder_Destination, $S3_Key, $S3_SecretKey, $S3_AccessKey, $S3_Region)
{
    # http://docs.aws.amazon.com/powershell/latest/reference/Index.html (Amazon Simple Storage Service)
    # version AWSToolsAndSDKForNet_sdk-2.0.11.0-ps-2.0.11.0-tk-1.6.5.2
    Write-Host "Copying from S3 to Local Directory"
    Write-Host "Folder Name :$S3_Folder_Destination"
    Copy-S3Object -BucketName $S3_Bucket -LocalFile $S3_Folder_Destination -SecretKey $S3_SecretKey -AccessKey $S3_AccessKey -Region $S3_Region -Key $S3_Key
}
```
How to backup the contents of an S3 Bucket locally using PowerShell
AWS S3 operations are eventually consistent. Data is stored redundantly in several places, and it can take time for every location to be updated. So if your read happens to hit the first place that was updated, you see the effect of the operation right away. If it hits another node, the updates may not have propagated to it yet.
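Given that behaviour, a common workaround is to retry the read briefly until the new key becomes visible. A rough Python sketch of the idea (bucket and key names are placeholders, not from the original question):

```python
# Sketch: poll for an object after a copy/move, since the new key may not be
# immediately visible under S3's eventual consistency.
import time
import boto

def wait_for_key(bucket, key_name, attempts=10, delay=0.5):
    for _ in range(attempts):
        key = bucket.get_key(key_name)  # returns None while the key is not visible yet
        if key is not None:
            return key
        time.sleep(delay)
    raise RuntimeError("key %s not visible after %d attempts" % (key_name, attempts))

conn = boto.connect_s3()
bucket = conn.get_bucket("my-bucket")
obj = wait_for_key(bucket, "moved/file.txt")
print(obj.name)
```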
I can't seem to find the answer to this online anywhere. I'm using the AWS S3 Ruby gem to call move_to on my objects in S3. Immediately after moving, I read the S3 object in its new location. Once in a blue moon, this read will throw a No Such Key error; only once in a while, probably a few times in thousands of runs of this code. Can anyone confirm that move_to is a synchronous call? If it's not synchronous, how do I ensure that the object exists in its new location before reading it?
Are S3 methods synchronous?
Exactly that: AWS expects a base64-encoded string to be passed as the UserData value. Why the tools don't do this for you, I don't know. So, instead of the string:

```
/srv/user-data.sh
```

use a base64-encoded version of the string (using an online encoder, I got the following):

```
L3Nydi91c2VyLWRhdGEuc2g=
```

I'm guessing the final JSON should look something like this:

```json
{"UserData": "L3Nydi91c2VyLWRhdGEuc2g=", "InstanceType": "m1.small"}
```

Creating tags is pretty straightforward. Here's a link to the 'aws' CLI command documentation: http://docs.aws.amazon.com/cli/latest/reference/ec2/create-tags.html

You'll need to determine the resource AMI id:

```
aws ec2 create-tags --resources ami-78a54011 --tags Key=Name,Value=myname
```
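If you are generating the request from a script anyway, the encoding step is a one-liner. A Python sketch (it encodes the path string from the question; in practice you would usually encode the script's contents rather than its path):

```python
# Sketch: base64-encode user data before passing it to request-spot-instances.
import base64
import json

user_data = "/srv/user-data.sh"  # the question passes a path; normally this would be the script body
launch_spec = {
    "UserData": base64.b64encode(user_data.encode("utf-8")).decode("ascii"),
    "InstanceType": "m1.small",
}
print(json.dumps(launch_spec))  # paste this into --launch-specification
```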
I am trying to create a new spot request using the AWS CLI, but it gives me an error if I pass a shell script as user-data to the spot request. I tried this:

```
aws ec2 request-spot-instances --spot-price "0.04" --instance-count 1 --launch-specification "{\"UserData\": \"/srv/user-data.sh\",\"InstanceType\": \"m1.small\"}"
```

and this gives me an error:

```
Invalid BASE64 encoding of user data (400 response code)
```

Also, how can I tag my spot request with a Name & Value?
Passing script as user data into AWS EC2 spot instance
You should be using the Redshift endpoint, if you have your security group settings configured correctly. You might want to set your group to 0.0.0.0/0 during testing (this opens up your cluster to the entire internet, and you can lock it down later once it works). You also need to make sure you have the correct ODBC/JDBC driver. I recommend either NetBeans (comes with a connection driver), SQL Workbench, or Aginity for Redshift. The default port is 5439. I think you are using the wrong driver (a MySQL driver, not a Postgres driver), because your error says MySQL server. See http://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-prereq.html

On a Mac, I believe you could also try a terminal psql client. Something like:

```
psql -h endpoint.aws.com -p 5439 -U username --password
```
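Since Sequel Pro only speaks the MySQL protocol, another option on a Mac is to connect from Python with a Postgres driver. A sketch using psycopg2; the host, database name, and credentials are placeholders for your own cluster endpoint values:

```python
# Sketch: connect to Redshift with a Postgres driver instead of a MySQL client.
import psycopg2

conn = psycopg2.connect(
    host="mycluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # the cluster endpoint
    port=5439,                                                    # Redshift default port
    dbname="dev",
    user="username",
    password="password",
)
cur = conn.cursor()
cur.execute("SELECT current_date;")
print(cur.fetchone())
conn.close()
```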
I am using a Mac and I typically use Sequel Pro to interact with SQL databases. Usually I use MySQL, but I understand Redshift uses Postgres. When I try to connect to my Redshift DB, should I use the IP, or the "endpoint"? Also, when I try to connect, I get this error from Sequel Pro:

```
Unable to connect to host {{my_db_host}}, or the request timed out.
Be sure that the address is correct and that you have the necessary privileges,
or try increasing the connection timeout (currently 10 seconds).
MySQL said: Can't connect to MySQL server on '{{my_db_host}}' (4)
```

Can anyone offer advice on how to get connected? Thanks
How to connect to aws Redshift db from mac
This error means that launching your environment timed out while waiting to hear back the EC2 instance. The instance did not report whether it successfully launched the environment or not. I would recommend taking snapshot logs to see detailed error messages from the instance.
What does this error mean, please?

```
Stack named 'awseb-eea9ufee4ak-stack' aborted operation. Current state: 'CREATE_FAILED'
Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
(Service: AmazonCloudFormation; Status Code: 400; Error Code: OperationError; Request ID: null)
```
Error when starting Elastic Beanstalk environment
You need to change the permissions on your PEM key on the other computers:

```
chmod 0400 LinuxDemo.pem
```

See "Trying to SSH into an Amazon EC2 instance - permission error".
I have an AWS Linux instance with a LinuxDemo.pem key. I can access it from my own workstation no problem, but if I try to access it from home, or if another colleague tries to, we get the following result:

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE!                  @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'LinuxDemo.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: LinuxDemo.pem
Enter passphrase for key 'LinuxDemo.pem':
Permission denied (publickey).
```

The same command is run on all workstations:

```
ssh -i LinuxDemo.pem ec2-user@<IP_Address>
```

How can I make it so that others can access this instance? This is important.
How to get onto AWS Instance from another PC
"Increase / Get a bigger hosting plan." I would not do that. The reason is, storage is cheap, while the other components of a "bigger hosting plan" will cost you dearly without providing an immediate benefit (more memory is expensive if you don't need it).

"Get my static content stored on Amazon S3." This is the way to go. S3 is very inexpensive; it is a no-brainer. Having said that, since we are talking video here, I would recommend a third option:

3. Store the video on AWS S3 and serve it through CloudFront. It is still rather inexpensive by comparison, given the spectacular bandwidth and global distribution. CloudFront is Amazon's CDN for blazing fast speeds to any location.

If you want to save on bandwidth, you may also consider using Amazon Elastic Transcoder for high-quality compression (to minimize your bandwidth usage). Traditional hosting is way too expensive for this.
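To give a feel for how little code the S3 route takes, here is a hedged Python sketch that uploads a video with a public-read ACL so it can be served from S3 (or fronted by CloudFront). The bucket name and file paths are placeholders:

```python
# Sketch: push a static video file to S3 and make it publicly readable.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "local/videos/clip.mp4",   # local file
    "my-media-bucket",         # destination bucket (placeholder)
    "videos/clip.mp4",         # object key
    ExtraArgs={"ACL": "public-read", "ContentType": "video/mp4"},
)
# The object is then reachable at https://my-media-bucket.s3.amazonaws.com/videos/clip.mp4
# or, better, through a CloudFront distribution that uses the bucket as its origin.
```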
I would like a little guidance from you all. I have a multimedia-based site which is hosted on traditional Linux-based LAMP hosting. As the site is mostly image/video content, there are around 30,000+ posts and the database size is around 20-25 MB, but the file system usage is about 10 GB, and bandwidth of around 800-900 GB (of the allowed 1 TB) gets used every month. Now, after a little brainstorming and looking at my alternatives, I have come up with two options:

1. Increase / get a bigger hosting plan.
2. Get my static content stored on Amazon S3.

While the first plan is the simpler option, I am actually leaning toward the second one, i.e. storing my static content on Amazon S3. The website I have is totally custom-coded and based on PHP+MySQL. I went through http://undesigned.org.za/2007/10/22/amazon-s3-php-class/ and it gave me a fair idea. I would love to know the pros/cons of hosting static content on S3. Please give your inputs.
What are the pros and cons to using AWS/S3 for static content?
The same question was posted on the Amazon AWS forums; you can check it there. There is no clear documentation on the AWS Simple Workflow Framework. An AWS SWF workflow executes asynchronously, which is why the generated code's return type is void. If you want the result, you can get it using:

```java
GetWorkflowExecutionHistoryRequest historyRequest = new GetWorkflowExecutionHistoryRequest();
historyRequest.setDomain(domain);
historyRequest.setExecution(workflowExecution);
historyRequest.setReverseOrder(true);
History workflowExecutionHistory = service.getWorkflowExecutionHistory(historyRequest);
```

If you want the result as soon as it is available, you could create a thread and read the data once the result is populated, but running a thread continuously like that is not a good approach.
I have started learning Amazon Web Services with the Simple Workflow Service. I have completed the Eclipse setup for development and successfully completed the hello world workflow application from here. To use the same application on a web platform, I tried creating an AWS web project and calling the workflow methods from a servlet. The servlet runs without any error and the output is printed to the console. If I want the workflow to return the string message which is printed on the console, what changes are needed?
AWS SWF - Return result of workflow
You can control the location of a new bucket by specifying a value for the location parameter in the create_bucket method. For example, to create a bucket in the ap-northeast-1 region you would do this:

```python
import boto.s3
from boto.s3.connection import Location

c = boto.s3.connect_to_region('ap-northeast-1')
bucket = c.create_bucket('mynewbucket', location=Location.APNortheast)
```

In this example, I am connecting to the S3 endpoint in the ap-northeast-1 region, but that is not required. Even if you are connected to the universal S3 endpoint, you can still create a bucket in another location using this technique.

To access the bucket after it has been created, you have a couple of options:

- You could connect to the S3 endpoint in the region you created the bucket in and then use the get_bucket method to look up your bucket and get a Bucket object for it.
- You could connect to the universal S3 endpoint and use the get_bucket method to look up your bucket. In order for this to work, you need to follow the more restricted bucket naming conventions described here. This allows your bucket to be accessed via virtual hosting style addressing, e.g. https://mybucket.s3.amazonaws.com/. This, in turn, allows DNS to resolve your request to the correct S3 endpoint. Note that the DNS records take time to propagate, so if you try to address your bucket in this manner immediately after it has been created, it might not work. Try again in a few minutes.
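To illustrate the first option above, looking the bucket up afterwards from the regional endpoint might look like this (a sketch reusing the hypothetical bucket name from the answer):

```python
# Sketch: reconnect to the regional endpoint and fetch the bucket created above.
import boto.s3

c = boto.s3.connect_to_region('ap-northeast-1')
bucket = c.get_bucket('mynewbucket')  # raises S3ResponseError if it does not exist
for key in bucket.list():
    print(key.name)
```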
I created 3 different buckets, 1 using the AWS management console and 2 using the boto API. The bucket created using the AWS management console was created in the Tokyo region, whereas the ones created using boto were created in the us-east-1 region. When I access my bucket using boto, how does it find out the correct region in which the bucket was created? Also, how does it choose which region to create a bucket in? I have gone through the connection.py file in the boto source code but I am not able to make any sense of it. Any help is greatly appreciated!
How does boto choose aws region to create buckets?
It took me a while to figure this out, for the same reasons: no good examples in the documentation. Here is how I managed to get it working, however:

```ruby
items = bucket.objects.with_prefix(prefix).page(:next_token => { :marker => marker }, :per_page => 100)
items.each do |item|
  puts item.key
end
```

items is a PageResult object. I eventually figured it out using a combination of the AWS docs and reading the source code.
I found there is no good example in the aws-sdk documentation of how to list S3 objects with the marker and max-keys options. In Java, I can do it with:

```java
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
    .withBucketName(bucket)
    .withPrefix(s3Prefix)
    .withMarker(s3Marker)
    .withMaxKeys(40));
```

but in Ruby, I can only find the with_prefix method and no way to fill in the other options. Please help by telling me how to list the objects with marker or max-keys.
Ruby: use aws-sdk to list s3 objects with marker and max-keys
Unwrapping the API from the ComputeServiceContext:

```java
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("accessKey", "secretAccessKey")
    .buildView(ComputeServiceContext.class);
ComputeService computeService = context.getComputeService();
AWSEC2Api ec2Api = context.unwrapApi(AWSEC2Api.class);
```

Building the API directly:

```java
AWSEC2Api ec2Api = ContextBuilder.newBuilder("aws-ec2")
    .credentials("accessKey", "secretAccessKey")
    .buildApi(AWSEC2Api.class);
```
Using jclouds, up to version 1.6.x it was possible to access the native EC2 provider API by using the following idiom:

```java
AWSEC2Client ec2Client = AWSEC2Client.class.cast(context.getProviderSpecificContext().getApi());
```

Actually, I copied it from the documentation page: http://jclouds.apache.org/guides/aws/. It turns out that in the latest release this method has been removed. Is there an alternative method/way to access the provider-specific features (security groups, key pairs, etc.)?
How to access the native provider API with jclouds 1.7
The problem was the filters. Apparently using them here is a bad decision and there is no point whatsoever in doing so. Solution:

```java
/**
 * Returns a list with the public IPs of all the active instances, which are
 * returned by the {@link #getActiveInstances()} method.
 *
 * @return a list with the public IPs of all the active instances.
 * @see #getActiveInstances()
 */
public List<String> getPublicIPs() {
    List<String> publicIpsList = new LinkedList<String>();

    // if there are no active instances, we return immediately to avoid extra
    // computations.
    if (!areAnyActive())
        return publicIpsList;

    DescribeInstancesRequest request = new DescribeInstancesRequest();
    request.setInstanceIds(instanceIds);

    DescribeInstancesResult result = ec2.describeInstances(request);
    List<Reservation> reservations = result.getReservations();
    List<Instance> instances;

    for (Reservation res : reservations) {
        instances = res.getInstances();
        for (Instance ins : instances) {
            LOG.info("PublicIP from " + ins.getImageId() + " is " + ins.getPublicIpAddress());
            publicIpsList.add(ins.getPublicIpAddress());
        }
    }
    return publicIpsList;
}
```
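For completeness (this is not part of the accepted Java fix), the same listing is only a few lines in Python with boto, again without any filters; the region and instance IDs below are placeholders:

```python
# Sketch: collect the public IPs of specific instances with boto 2,
# mirroring the Java solution above (no filters, just instance IDs).
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")                        # example region
reservations = conn.get_all_instances(instance_ids=["i-12345678"])    # hypothetical IDs

public_ips = []
for reservation in reservations:
    for instance in reservation.instances:
        if instance.ip_address:  # None for instances without a public IP
            public_ips.append(instance.ip_address)
print(public_ips)
```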
I am trying to get the public IPs of all my Amazon EC2 instances using the Java SDK. I have been searching through the documentation and found that I need to use a DescribeInstancesRequest, a DescribeInstancesResult and a Filter to achieve my purpose. However, I do not understand how to complete the circle. The DescribeInstancesResult does not seem to have what I need, and I do not know how to effectively print the instance IPs that I want. So far, this is my code:

```java
public List<String> getPublicIPs() {
    DescribeInstancesRequest request = new DescribeInstancesRequest();
    request.setInstanceIds(instanceIds);

    List<Filter> filters = new LinkedList<Filter>();
    filters.add(new Filter("ip-address"));
    request.setFilters(filters);

    DescribeInstancesResult result = ec2.describeInstances(request);

    // what now!?
    return null;
}
```

How do I complete it? What am I missing?
Find the Public IP of all my EC2 instances using Java?
You're correct, this type of static content should not be part of your repository and certainly should not be stored on an EC2 instance's volumes. AWS's best practice for this use case is to use S3 and link directly to S3 objects from your HTML code. S3 is a natively HTTP-enabled object storage service. In order to use S3 as a web server, you must create a bucket on S3. You can either use the S3-provided URL <bucket-name>.s3-website-<AWS-region>.amazonaws.com to link to your content from your web pages, or you can use your own domain name. In the latter case, your bucket must be named after your domain name and you must enable the "Website Hosting" option at the bucket level. This is required to let S3 know how to map HTTP requests to buckets. A high-level scenario is described here: http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html and more details are provided by the S3 documentation. As an added benefit, storage in S3 costs less money than EBS storage.
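As a rough illustration of the "Website Hosting" option described above, enabling it from Python might look like this; the bucket name is a placeholder chosen to match a hypothetical domain:

```python
# Sketch: create a bucket named after the domain and turn on static website hosting (boto 2).
import boto

conn = boto.connect_s3()
bucket = conn.create_bucket("static.example.com")
bucket.configure_website(suffix="index.html", error_key="error.html")
# The site is then served from the bucket's website endpoint, e.g.
# static.example.com.s3-website-us-east-1.amazonaws.com (point a CNAME at it).
```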
I have a Python/Flask application that I've deployed on Elastic Beanstalk. I have been deploying updates via git aws.push, which includes my static JS libraries, CSS, and images. I now have about 1 GB of static content in the form of images. I want to serve that content from the same location as my application, that is, from the same place I was serving it before, in a /static/img/ folder. However, I obviously don't want to add the images to source control or deploy them with the git macro. Ideally, I would like to connect to the instance where the files are hosted and upload them manually. However, I do not know how to do this. I have searched through the S3 bucket associated with the Elastic Beanstalk app, but there is no sign of my app there, only a repository of zipped deployments. I could create a new bucket and handle things that way, but I haven't been able to map a domain to a new bucket. Whenever I try to add a CNAME record to the bucket, it is rejected because "URL/IP cannot be added as a CNAME." In any case, the process that seems most intuitive is to manually put unversioned static content in place next to the versioned, deployed code.
How to manually upload static content with elastic beanstalk and s3
I'd wager that PowerShell is having difficulty parsing that comma and loses the ParameterValue afterwards. You may want to try wrapping the whole section after --parameters in a string (double-quoted, so $version still resolves):

```powershell
aws cloudformation create-stack --stack-name Cloud-$version --template-body C:\awsdeploy\MyCloud.template --parameters "ParameterKey=BuildNumber,ParameterValue=$version"
```

Or, failing that, try running the line explicitly in the cmd environment.

If you're interested in an alternative solution, AWS has implemented their command line tools in a separate utility called AWS Tools for PowerShell. create-stack maps to New-CFNStack, as shown in the New-CFNStack documentation. It looks like this would be the equivalent call:

```powershell
$p1 = New-Object -Type Amazon.CloudFormation.Model.Parameter
$p1.ParameterKey = "BuildNumber"
$p1.ParameterValue = "$version"

New-CFNStack -StackName "cloud-$version" `
             -TemplateBody "C:\awsdeploy\MyCloud.template" `
             -Parameters @( $p1 )
```
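If the shell quoting keeps fighting you, a third route (not mentioned in the original answer) is to make the call from Python instead; a hedged boto3 sketch of the same create-stack request, reusing the template path and parameter name from the question and a made-up version value:

```python
# Sketch: create the stack via boto3, passing BuildNumber without any shell quoting issues.
import boto3

version = "42"  # stands in for the value read from Read-Host
with open(r"C:\awsdeploy\MyCloud.template") as f:
    template_body = f.read()

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="Cloud-" + version,
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "BuildNumber", "ParameterValue": version}],
)
```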
I'm trying to create a PowerShell script that, among other things, creates an AWS CloudFormation stack. I'm having trouble with the aws cloudformation create-stack command, however; it doesn't seem to be picking up the parameters. Here is the snippet giving me trouble:

```powershell
$version = Read-Host 'What version is this?'
aws cloudformation create-stack --stack-name Cloud-$version --template-body C:\awsdeploy\MyCloud.template --parameters ParameterKey=BuildNumber,ParameterValue=$version
```

The error I receive is:

```
aws : At C:\awsdeploy\Deploy.ps1:11 char:1
+ aws cloudformation create-stack --stack-name Cloud-$version --template-bo ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

A client error (ValidationError) occurred when calling the CreateStack operation:
ParameterValue for ParameterKey BuildNumber is required
```

I know the CloudFormation script is OK because I can execute it without issues via the AWS explorer. The parameters section looks like this:

```json
"Parameters" : {
    "BuildNumber" : {
        "Type" : "Number"
    }
},
```

I've tried the following, none of which seem to help:

- replacing $version with a static value
- changing the parameter type from Number to String
- trying to pass the parameter list in JSON format

No dice on any of these; same error. It's like it's just not accepting the parameters for some reason. Any ideas?
AWS: A client error (ValidationError) occurred when calling the CreateStack operation: ParameterValue for ParameterKey ... is required
I had a similar problem once because I was using context variables in my 500.html template. By default, Django does not provide any context to the 500 error page, and that leads to a "double" error, where rendering the error page itself creates an error. From the Django documentation:

"The default 500 view passes no variables to the 500.html template and is rendered with an empty Context to lessen the chance of additional errors."

https://docs.djangoproject.com/en/dev/topics/http/views/#the-500-server-error-view

So if you use any context variables in your 500 error page, that's probably what happened. Not sure if this helps in your case, though... If that was the problem, the solution would be to write a custom error-handling view with a minimal context to render static files, etc. (as described in the documentation above).
I don't understand why Django is not using my 500.html template for server errors. I deployed my app on Elastic Beanstalk, and while all 404 requests are handled by the 404.html template, 500 errors show the standard Apache error:

```
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, root@localhost and inform them of the time the error occurred,
and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Apache/2.2.25 (Amazon) Server at myapp.elasticbeanstalk.com Port 80
```

What could it be? (I've got both templates in the same place.)
Django 500.html template not used for internal server error on Amazon Elastic Beanstalk
Yes, by default the calls go to a "default region". Unless your tables are in the same region, this is expected. Better to set the region always.
I have created tables in AWS DynamoDB and granted the right to the EC2 role in the policy settings. However, when I run the code below with the Java API, no tables are returned. Does anyone know what I should do?

```java
AWSCredentialsProvider credentialsProvider = new ClasspathPropertiesFileCredentialsProvider();
AmazonDynamoDB dynamoDB = new AmazonDynamoDBClient(credentialsProvider);
return dynamoDB.listTables().getTableNames().toString();
```
dynamodb cannot get list of tables
Glacier itself is designed to make it virtually impossible for any application to complete a multipart upload without an assurance of data integrity. See http://docs.aws.amazon.com/amazonglacier/latest/dev/api-multipart-complete-upload.html

The API call that returns the archive id is sent with the "tree hash" - a SHA-256 of the SHA-256 hashes of each MiB of the uploaded content, calculated as a tree coalescing up to a single hash - and the total bytes uploaded. If these don't match what was actually uploaded in each part (each of which was also validated against SHA-256 hashes and sub-tree hashes as it was uploaded), then the "complete multipart" operation will fail. It should be virtually impossible, by the design of the Glacier API, for an application to "successfully" upload a file that isn't intact and still get an archive id back.
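If you want to double-check locally before deleting anything, the tree hash described above is straightforward to compute yourself and compare against the SHA256TreeHash Glacier reports for the archive. A sketch using only hashlib (no SDK assumptions):

```python
# Sketch: compute a Glacier-style SHA-256 tree hash of a local file.
# Hash each 1 MiB chunk, then repeatedly hash concatenated pairs until one hash remains.
import hashlib

MIB = 1024 * 1024

def tree_hash(path):
    # Leaf hashes: one SHA-256 digest per 1 MiB chunk.
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(MIB)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).digest())
    if not hashes:  # empty file: hash of the empty string
        return hashlib.sha256(b"").hexdigest()
    # Coalesce pairwise up the tree.
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes), 2):
            if i + 1 < len(hashes):
                paired.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
            else:
                paired.append(hashes[i])  # odd leftover is carried up unchanged
        hashes = paired
    return hashes[0].hexdigest()

print(tree_hash("backup.tar"))  # compare with the archive's reported SHA256TreeHash
```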
I am using Python and Boto in a script to copy several files from my local disks, turn them into .tar files and upload them to AWS Glacier. I based my script on http://www.withoutthesarcasm.com/using-amazon-glacier-for-personal-backups/#highlighter_243847, which uses the concurrent.ConcurrentUploader.

I am just curious how sure I can be that the data is all in Glacier after successfully getting an ID back. Does the ConcurrentUploader do any kind of hash checking to ensure all the bits arrived? I want to remove files from my local disk but fear I should be implementing some kind of hash check... I am hoping this is happening under the hood. I have tried and successfully retrieved a couple of archives and was able to un-tar them. Just trying to be very cautious.

Does anyone know if there is checking under the hood that all pieces of the transfer were successfully uploaded? If not, does anyone have any example Python code of how to implement an upload with hash checking? Many thanks!

Boto ConcurrentUploader docs: http://docs.pythonboto.org/en/latest/ref/glacier.html#boto.glacier.concurrent.ConcurrentUploader

UPDATE: Looking at the actual Boto code (https://github.com/boto/boto/blob/develop/boto/glacier/concurrent.py), line 132 appears to show that the hashes are computed automatically, but I am unclear what the [None] * total_parts means. Does this mean that the hashes are indeed calculated, or is this left to the user to implement?
Guarantee of data integrity with Python Boto and Amazon Glacier concurrent.ConcurrentUploader?
FYI - This blog explains how it is done in Java... very simple.java.awsblog.com/post/Tx1VE22EWFR4H86/Accessing-Private-Content-in-Amazon-CloudFront
How do I create a CloudFront signed URL using the AWS SDK? This really seems like it should be easy to do, but I just fail to see it. I generally understand how it works and could probably throw together plain Java code to do it myself. It seems weird that the AWS SDK does not provide a method for this. Earlier question but with C#: cloudfront private time limited url. This link explains in theory how such CloudFront URLs are generated, but without code examples. This link explains how it is done with Java, but it apparently uses the JetS3t library instead of the AWS SDK; at least I have been unable to locate the used CloudFrontService class in the AWS SDK Javadoc. This link demonstrates how it is done for S3 using the AWS SDK. This blog post I found referenced in another related question contains source code for a Java class CloudFrontSecurityProvider to do the signing, and it is not very complicated.
Cloudfront limited time (signed) URL using Java AWS SDK
First of all, if you are using Amazon Linux or any RPM-based Linux distro, you should use "httpd" rather than "apache2"; the "apache2" naming is used in the Debian family. I think you can resolve this issue by installing the httpd-devel package. Try yum search httpd-devel and choose the required version, then do yum install httpd-devel <package version>. Sometimes just running yum install httpd-devel can do the job.
I'm trying to get started with an AWS website, and used the free tier Amazon Linux installation. I installed python3.3 from source, but the wsgi it comes with is for python2.6, so I tried installing mod_wsgi3.3 from source as well, at which point I get ./configure --with-python=/usr/local/bin/python3 checking for apxs2... no checking for apxs... no checking Apache version... ./configure: line 1704: apxs: command not found ./configure: line 1704: apxs: command not found ./configure: line 1705: apxs: command not found ./configure: line 1708: /: is a directory ./configure: line 1877: apxs: command not found configure: creating ./config.status config.status: error: cannot find input file: Makefile.in and for the life of me, I have not found a single helpful online source to tell me how to get apxs installed on this system. Suggestions have all been for Ubuntu, and hence sudo apt-get install apache2{}-dev, where '{}' can be replaced with nothing, or -worker or -threaded or -prefork; none of these have worked on my system (using sudo yum install instead). Is there a different package name I should be looking for? If so, what is it / where do I find potential packages? sudo yum search apache doesn't yield any apache2. Please help.
Where can I find apxs on amazon linux?
Install forever and use a start script.$ npm install -g foreverI have several scripts for managing my production environment - the start script looks something like:#!/bin/bash forever stopall export MAIL_URL=... export MONGO_URL=... export MONGO_OPLOG_URL=... export PORT=3000 export ROOT_URL=... forever start /home/ubuntu/apps/myapp/bundle/main.js exit 0Conveniently, it will also append to a log file in ~/.forever which will show any errors encountered while running your app. You can get the location of the log file and other stats about your app with:$ forever listTo get your app to start on startup, you'd need to do something appropriate for your flavor of Linux. You can maybe just put the start script in /etc/rc.local. For Ubuntu see this question. Also note you really should be bundling your app if using it in production. See this comparison for more details on the differences.
I have a simple Meteor app that I'm running on an Amazon EC2 server. Everything is working great. I start it manually with my user via meteor in the project directory. However, what I would like is for this app to run on boot and be immune to hangups. I try running it via nohup meteor &, but when I try to log out of the EC2 instance, I get the "You have running jobs" message. Continuing to log out stops the app. How can I get the app to start on startup and stay up (unless it crashes for some reason)?
Keep meteor running on amazon EC2
OK, the difference here is that Data Transfer OUT From Amazon EC2 To Internet represents any data that goes outside of EC2, over any protocol: HTTP, TCP, RDP, etc. Data Transfer OUT From Amazon EC2 To Using a public or Elastic IP address, which you mention, is in reality data transfer within EC2. If you use an internal IP address you don't get charged anything. However, if you use an Elastic or public IP (this includes using the public DNS name) you get charged $0.01/GB. So I believe if you are sending data outside of AWS, the billing that matters is Data Transfer OUT From Amazon EC2 To Internet at $0.12/GB for up to 10 TB/month, $0.09/GB for the next 40 TB, and so forth... The difference between HTTP and RDP is simply that they are different protocols; if you look at the network stack, they both happen to run on top of TCP. Additional comments: public or Elastic IP address data transfer doesn't have to stay within EC2. If you do transfer data within EC2 using public IP addresses (in other words, over the public side), Amazon will charge you something (less than EC2 traffic to another cloud provider, for example). Note that if you use private addresses within a region, Amazon doesn't charge you anything, and note that you can't do data transfer using private addresses across regions.
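To make the difference concrete, here is a quick back-of-the-envelope calculation using the rates quoted above; the 500 GB figure is purely hypothetical and the prices should be checked against the current pricing page:

# Rough monthly cost sketch using the tiered rates quoted in the answer above.
gb_out = 500                      # hypothetical: 500 GB transferred in a month
internet_cost = gb_out * 0.12     # "to Internet", first 10 TB tier -> $60.00
public_ip_cost = gb_out * 0.01    # same volume within EC2 via public/Elastic IPs -> $5.00
private_ip_cost = gb_out * 0.00   # private addresses inside a region are free
print(internet_cost, public_ip_cost, private_ip_cost)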
I'm trying to estimate data transfer pricing on AWS, but I'm confused by the distinction of: Data Transfer OUT From Amazon EC2 To Internet and Data Transfer OUT From Amazon EC2 To Using a public or Elastic IP address. Is this the difference of calling by IP (http://54.128.10.9) vs. using the public DNS name (e.g. http://ec2-54-128-10-9.compute-1.amazonaws.com)? The above IP and DNS are made up for this question. The first costs $.12/GB while the second is only $.01/GB, so it's a big difference. Also, can anyone tell me if an RDP (Remote Desktop Protocol) connection is any different than a standard HTTP call? I wouldn't think so, but would love to hear from someone who's been billed for this before.
Does anyone know the Data transfer type for RDP connections to AWS?
Once in the console (e.g. in the bucket folder), you can just start typing the name of the object you are looking for. The list will refresh with the top file being the one you're searching for.
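If the bucket is very large, or you want to search outside the console, a programmatic listing by key prefix is often quicker. A sketch with boto, where the bucket name and prefix are placeholders:

import boto

# List keys that start with a given prefix, roughly what the console's
# type-to-filter behaviour does for you.
conn = boto.connect_s3()                 # reads credentials from env vars / boto config
bucket = conn.get_bucket('my-bucket')    # placeholder bucket name
for key in bucket.list(prefix='images/logo'):
    print(key.name, key.size)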
I have uploaded an image to S3 on Amazon Web Services. I just wanted to search for the image in the admin console of S3. I cannot find any search options there. Is there any other way?
How to search a file in the console of AWS-S3?
Your inventory file is probably executable. Also, inventory files are not in YAML format (so the .yml extension on your inventory is misleading). Try: mv inventory.yml inventory, then chmod a-x inventory, and it should behave better.
This is my inventory.yml file:[hosts] somedns1.aws.com somedns2.aws.com somedns3.aws.com somedns4.aws.com somedns5.aws.comBut I'm getting --list ([Errno 8] Exec format error). It looks fine as per this link; any idea?
What is the proper syntax for an ansible inventory file?
There are 2 things you need to do: set the request's encoding to binary, and pass a binary buffer to S3. Give this a try:var options = { uri: "http://files.parse.com/[the rest of the url]", encoding: 'binary', }; request(options, function (error, response, body) { if (!error && response.statusCode == 200) { s3.putObject({ "Body": new Buffer(body, 'binary'), "Key": "thumbnail2.jpg", "Bucket": "[my-bucket]" }, function (error, data) { console.log(error || data); }); } });
Okay. NodeJS using the request module. Downloading a resource in Parse (which is in S3) and I want upload to my S3 bucket (behind a CloudFront endpoint) using the aws-sdk node module. Here is my code:var AWS = require('aws-sdk'); var request = require('request'); AWS.config.loadFromPath('./aws-config.json'); var s3 = new AWS.S3(); var url = "http://files.parse.com/[the rest of the url]"; request(url, function (error, response, body) { console.log(response); if (!error && response.statusCode == 200) { s3.putObject({ "Body": body, "Key": "thumbnail2.jpg", "Bucket": "[my-bucket]" }, function (error, data) { console.log(error || data); }); } });If I open the parse url I see the image. If I open the url that is in my bucket, I get a broken image.
Using NodeJS and request module and aws-sdk to move images from Parse to S3/CloudFront
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html"When you allocate an EIP [within EC2-classic], it's for use only in EC2-Classic. When you allocate an EIP [within VPC], it's for use only in a VPC."
Is it possible to migrate/move an Elastic IP from standard scope to VPC scope in Amazon Web Services? I have hundreds of domains pointing to a particular EIP and it would be super-sweet if it were possible to attach it to the VPC and not lose it.
Is it possible to migrate/move Amazon's EIP from standard to VPC scope?
Keep it simple, do it all in Redshift. First, use "CREATE TABLE … AS" to save all current history into a permanent table.CREATE TABLE admin.query_history AS SELECT * FROM stl_query;Second, using psql to run it, schedule a job on a machine you control to run this every day.INSERT INTO admin.query_history SELECT * FROM stl_query WHERE query > (SELECT MAX(query) FROM admin.query_history);Done. :) Notes: You need an 8.x version of psql if you haven't set this up yet. Even if your job doesn't run for a few days, stl_query keeps enough history that you'll be covered. As per your comment, it might be safer to use starttime instead of query as the criteria.
In Redshift, there's an STL_QUERY table that stores queries that were run over the last 5 days. I'm trying to find a way to keep more than 5 days' worth of records. Here are some things that I've considered: Is there a Redshift setting for this? It would appear not. Could I use a trigger? Triggers are not available in Redshift, so this is a no-go. Could I create an Amazon Data Pipeline job to periodically "scrape" the STL_QUERY table? I could, so this is an option. Unfortunately, I would have to give the pipeline some EC2 instance to use to run this work. It seems like a waste to have an instance sitting around to scrape this table once a day. Could I use an Amazon Simple Workflow job to scrape the table? I could, but it suffers from the same issues as 3. Are there any other options/ideas that I'm missing? I would prefer some other option that does not involve me dedicating an EC2 instance, even if it means paying for an additional service (provided that it's cheaper than the EC2 instance I would have used in its stead).
How do I keep more than 5 day's worth of query logs?
You can disable staging on a Hive Activity to run any arbitrary Hive script: stage = false. Do something like:{ "name": "DefaultActivity1", "id": "ActivityId_1", "type": "HiveActivity", "stage": "false", "scriptUri": "s3://baucket/query.hql", "scriptVariable": [ "param1=value1", "param2=value2" ], "schedule": { "ref": "ScheduleId_l" }, "runsOn": { "ref": "EmrClusterId_1" } },
I would like to automate my Hive script every day; in order to do that I have one option, which is Data Pipeline. But the problem is that I am exporting data from DynamoDB to S3 and manipulating this data with a Hive script. I am giving this input and output in the Hive script, and that's where the problem starts, because a HiveActivity has to have an input and output, but I have to give them in the script file. I am trying to find a way to automate this Hive script and am waiting for some ideas. Cheers,
Automating Hive Activity using aws
Directly through the console or the API you can't. It is possible to create a key pair using external tools and import it. You can try to create the key pair with a passphrase and then import it using the console or the API. For more information take a look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
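As a rough illustration of the import route, here is a sketch with boto. It assumes you already generated a key pair locally (for example with ssh-keygen, which will happily protect the private half with a passphrase); the key name, region and path are placeholders:

import boto.ec2

# Import a locally generated public key so EC2 can place it on new instances.
# The matching (passphrase-protected) private key never leaves your machine.
conn = boto.ec2.connect_to_region('us-east-1')
with open('/home/me/.ssh/id_rsa.pub') as f:     # placeholder path
    public_key_material = f.read()
conn.import_key_pair('my-passphrase-key', public_key_material)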
Is it possible to use the AWS Management Console to create an ssh key pair that requires a passphrase? I can create a key pair and associate it with an ec2 instance, but I'm never given the option to also set a passphrase for the key.
AWS ssh key pair with passphrase
Just delete the instances you are not using any more. Not really sure what it means to cancel a service as billing is usage based.
I have an AWS account that I created to do some research. I have started some of its services. Now I want to cancel some services like EC2 and RDS. There is an option in the Manage Account section for "Cancel Selected Service", but in its dropdown menu there is nothing to select.
How to stop a particular service in AWS
From within the instance, you can query for metadata about the instance by sending requests to http://169.254.169.254. More information can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
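The question is about Node, but the endpoint itself is language-agnostic; any HTTP client running on the instance can read it. A quick Python sketch of the same idea:

import urllib2  # use urllib.request on Python 3

# The metadata service is only reachable from inside the instance itself.
BASE = 'http://169.254.169.254/latest/meta-data/'

public_ip = urllib2.urlopen(BASE + 'public-ipv4').read()
instance_id = urllib2.urlopen(BASE + 'instance-id').read()
print(public_ip, instance_id)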
How can I get the IP address (elastic IP) of the current Node.js host/server on Amazon EC2?Calling req.connection.address() is useless because EC2 uses an elastic IP. In fact, the IP that shows up using the ifconfig command is not the same as the IP which was used to access the server from outside (which is what I need). How can I get the elastic IP automatically?
Node.js - Getting the host IP address while on Amazon EC2
That's because by default, boto base64-encodes the payload of a message before sending it and decodes it upon reading. This is mainly for historical reasons; in the early days of SQS there were a lot of restrictions on what kinds of characters could be in an SQS message. That's not really the case anymore, so the encode/decode probably isn't necessary. To get around it, just use the boto.sqs.message.RawMessage class rather than boto.sqs.message.Message.
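A minimal sketch of the RawMessage approach; the queue name mirrors the one in the question and the payload is a placeholder:

import boto.sqs
from boto.sqs.message import RawMessage

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('test_temp_queue')   # queue name from the question
queue.set_message_class(RawMessage)         # skip the base64 encode/decode round trip

data = 'x' * 250000                         # placeholder payload close to the 256 KB limit
m = RawMessage()
m.set_body(data)
queue.write(m)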
According to the SQS docs, it is possible to post messages with up to 256KB of data. I configured my queue to 256KB of data, but when I post using boto, I max out at ~196,000 bytes. Anything over this, I get the following response from SQS:boto.exception.SQSError: SQSError: 400 Bad Request <?xml version="1.0"?><ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"> <Error><Type>Sender</Type><Code>InvalidParameterValue</Code><Message> Value for parameter MessageBody is invalid. Reason: Message body must be shorter than 262144 bytes.</Message> <Detail/></Error><RequestId>dd24151b-d499-5bb1-acd0-5f776011e960</RequestId></ErrorResponse>Small python program to illustrate:from boto.sqs.message import Message from boto.sqs.connection import SQSConnection from boto.sqs.message import Message import sys sqs_conn = SQSConnection(AWS_KEY, AWS_SECRET) data = mylong_256kb_string print sys.getsizeof(data) current_queue = sqs_conn.create_queue('test_temp_queue') m = Message() m.set_body(data) current_queue.write(m)
AWS boto sqs - why can't I post messages bigger than 196 000 bytes?
The default document root on an EC2 instance is: /var/www/html You should see the standard Amazon EC2 index.html file in there.
When I ssh into my EC2 instance and run ls -a I see:.. .bash_logout .bash_profile .bashrc .ssh .viminfoand yet when I open the URL associated with the instance, I see the standard welcome page, so it appears there must be an index.html located in the instance.
Why does my ec2 instance appear empty?
It's likely that the management interface only listens on localhost by default. There is usually a flag to enable it on all interfaces, but it's usually a bad (security) idea to do so.Alternately, you can access the management interface via SSH Port Forwarding:$ ssh -L 7474:localhost:7474 elastic.ip.addressOnce connected, you can point your browser at "localhost:7474" to see the remote management interface. Everything is encrypted, etc.
I've been fiddling around with this for awhile and figured I'd see if anyone can help me out. I have a EC2 instance running Apache/Ubuntu 12.04 and have successfully installed Neo4j 1.9.1. I didn't use Puppet or any Cloud Formation template for that matter, I simply installed Java 7 along with the stable Neo4j debian package and it's running perfectly fine locally if anyone else is having problems with Puppet. When I run#curl http://localhost:7474, I get the following:root@ip-xx-xxx-xx-xxx:~# curl http://localhost:7474 { "management" : "http://localhost:7474/db/manage/", "data" : "http://localhost:7474/db/data/" }root@ip-xx-xxx-xx-xxx:~# :7474/db/data/My problem is I cannot resolve a connection with my elastic IP or public DNS, they both work as I am able to SSH to the instance and the "It Works" Apache message shows, however when trying to access port 7474, I get a timeout error:http://elastic.ip.address:7474I do have port 7474 as well as port 80 open to the world within my security group and still am unable to resolve a connection, so I'm at a loss. Any help at all would be much appreciated!
AWS EC2 instance - Apache/Ubuntu 12.04 and Neo4j 1.9.1
The UserPreferenceDemo app creates the DynamoDB table in the US-WEST-2 region. If you change the region selector drop-down in the top right of the AWS Management Console to US West 2 (Oregon), you should be able to see the table.
I am working with the UserPreferenceDemo, one of Amazon's Android SDK sample apps. This app utilizes a TVM (token vending machine) and DynamoDB. Out of the box, the app seems to work as it should: it creates a table, populates it, and you can view it in the app. However, the table does not seem to show up in my DynamoDB console. I am very confused; has anyone had a similar issue?
DynamoDB table from AWS Android SDK sample app is not in console
Apparently I misinterpreted the Intellisense description:// // Summary: // Identifies the range of bytes in the assembled archive that will be uploaded // in this part. Amazon Glacier uses this information to assemble the archive // in the proper sequence. The format of this header follows RFC 2616. An example // header is Content-Range:bytes 0-4194303/*.You're not supposed to include the name of the header itself so this line:string Range = "Content-Range:bytes " + FileStream.Position.ToString() + "-" + (FileStream.Position + Size - 1).ToString() + "/*";Should be:string Range = "bytes " + FileStream.Position.ToString() + "-" + (FileStream.Position + Size - 1).ToString() + "/*";Derp.
I can't figure out why I keep getting an invalid Content-Range from AWS Glacier. It looks to me like my format follows RFC 2616 but I keep getting an error. Help?Here's the code:using (var FileStream = new System.IO.FileStream(ARCHIVE_FILE, FileMode.Open)) { while (FileStream.Position < FileInfo.Length) { string Range = "Content-Range:bytes " + FileStream.Position.ToString() + "-" + (FileStream.Position + Size - 1).ToString() + "/*"; var request = new Amazon.Glacier.Model.UploadMultipartPartRequest() { AccountId = "-", VaultName = VAULT_NAME, Body = Amazon.Glacier.GlacierUtils.CreatePartStream(FileStream, Size), UploadId = UploadId, Range = Range, StreamTransferProgress = Progress }; //request.SetRange(FileStream.Position, FileStream.Position + Size - 1); response = GlacierClient.UploadMultipartPart(request); } }
Keep getting "Invalid Content-Range" response from AWS Glacier Multipart Upload
You could do something like this.files: "/opt/elasticbeanstalk/tasks/systemtaillogs.d/webapp.conf": mode: "000755" owner: root group: root content: | /var/app/support/logs/*.log /var/log/httpd/error_log /var/log/httpd/access_log /var/log/messages /var/app/current/code-igniter/application/logs/*log "/opt/elasticbeanstalk/tasks/publishlogs.d/webapp.conf": mode: "000755" owner: root group: root content: | /var/app/support/logs/*.log.* /var/app/support/logs/*.gz /var/log/httpd/*.gz /var/app/current/code-igniter/application/logs/*log
I'm using Amazon Elastic Beanstalk for my Symfony 2.1 app (using the Linux AMI with Apache) and I activated log file rotation to Amazon S3. All is working properly, but I want to know if there is any way to add other logs (that are in other locations) to the rotation system. Thanks in advance!
Rotate other logs to Amazon S3
I recommend using SES. Amazon sets up the DNS records and adds signatures to the messages, greatly reducing the chance they will be flagged as spam. And it's easier to do than setting up your own SMTP server. There's even an Amazon AWS SDK for node.js that supports SES. If you use SES you do not need to open port 25. You don't need to open any incoming ports; you connect to SES via a normal https URL. (You don't need to open any incoming ports to use SMTP or sendmail to send mail out, either.)
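For comparison, sending through SES from code is a single authenticated HTTPS call. A sketch in Python with boto (the Node SDK call is analogous); the addresses are placeholders and the source address must be verified in SES first:

import boto.ses

# SES is reached over HTTPS, so no inbound ports and no local MTA are needed.
conn = boto.ses.connect_to_region('us-east-1')
conn.send_email(
    source='admin@example.com',          # placeholder; must be SES-verified
    subject='New inquiry from the website',
    body='Name: ...\nMessage: ...',
    to_addresses=['me@example.com'],     # placeholder recipient
)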
My web (Node.js) application has a form that people fill out to send an inquiry to the web admin (myself @ gmail). I used nodemailer before when I wasn't on AWS. The old server had SMTP. The amount of email going from the server to my gmail is small, very small. Now, I've moved to AWS EC2. I would like to keep using nodemailer for sending out email in the code. To set up a mail server, or otherwise enable sending mail, should I: 1) Use sendmail? What's the drawback? Will it be blocked by gmail? 2) Set up my own SMTP server (postfix)? But I don't need to do bulk email or receive emails though... 3) Use the AWS SES service. Also, do I need to open up port 25 on my server in order to send out email? Thanks.
Should I use AWS SES, sendmail or setup SMTP for my node.js application?
You are correct: you must always query by hash key, unless you do a full table scan. Doing a full table scan, you can look at each and every entry in your table and compare their LastLoginDate. This can quickly become unscalable depending on how many users you have.
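To make the trade-off concrete, a scan with a filter on LastLoginDate looks roughly like this with boto's dynamodb2 layer. This is only a sketch: the table and attribute names follow the question, the filter kwarg syntax is assumed from boto's dynamodb2 docs, and note that the filter is applied after DynamoDB reads every item, so you still pay read capacity for the full scan:

from datetime import datetime, timedelta
from boto.dynamodb2.table import Table

users = Table('User')                                # table name from the question
cutoff = (datetime.utcnow() - timedelta(days=30)).strftime('%Y-%m-%d')

# Scan reads every item; the filter only trims what is returned to you.
stale = users.scan(LastLoginDate__lte=cutoff)
for user in stale:
    print(user['UserName'], user['LastLoginDate'])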
I'm planning to have a User table with UserName as the hash key and a LastLoginDate attribute (among others). I would like to be able to query the table for something like: all users that have not logged in for the last month. How would I do this with DynamoDB? I have been looking at local secondary indexes, and thought of making LastLoginDate a secondary index. But as I understand the documentation, secondary indexes only help order results for the same hash key, and in my case each user will have a unique UserName. Does this make such a secondary index pointless? Thanks in advance!
Querying DynamoDB with unique hash attributes?
Because OpsWorks is currently in beta there are still a few issues; one of them is an issue with deployments. Apparently custom deployments are not run when initialising a new PHP instance (not sure about other layer types, as I have only used PHP).
Using Amazon's OpsWorks I'm able to get a PHP App Server initialised; it downloads our project from git and sets it up. I've got a custom recipe being run on Setup that works and downloads composer, but this gets run before the git repository is downloaded, so it is too early to try and change permissions. I've currently got a recipe in Deploy that changes permissions on some files that were created as part of downloading our git project; however, this recipe doesn't seem to get fired when setting up a new instance, and I can only run it by manually deploying an app. How can I have a recipe run after the git project has been downloaded by Chef, when an instance is created (so when the site auto-scales and a new instance is fired up, the recipe is run to set file permissions correctly)?
How do I run chef recipes after app server is fired up
Okay! I took a thorough look into the problem and found that it's Xcode that is responsible for causing the app to take so much storage space. I'm not certain what goes on under the hood, but when the app is connected to Xcode and is running, the storage size increases. If it's not connected to Xcode, the app runs normally and won't take any undesired storage. I guess it's because Xcode might be saving logs/snapshots of the app; I'm not sure. So, it seems that my app is safe (Thank GOD!) and I hope Apple won't have any objections when I submit it for release. @Yangfan Zhang: Flurry seems to be safe and is not responsible for the issue, as I haven't removed any of the mentioned libraries while doing the investigation.
I had a quick search for this but unfortunately couldn't find any answer. My iPhone app is still under development. I just noticed from my iPhone's Settings->General->Usage that this app is taking over 900MB of space. I'm shocked and don't understand why. I'm not saving any downloaded images, strings or any kind of large database. The original app size is < 10MB. It contains only a few images added to the project folder. I'm using NSUserDefaults to save a few parameters (used for simple app/user settings). Other external/3rd party libraries I used are: SBJson, Amazon Web Services (for uploading images), Flurry, KTPhotoBrowser, ASIHTTPRequest (I know this has been deprecated; so far it's working well for me, and after the first release I'm planning to replace it with the best alternative), Facebook, Twitter (not integrated yet). Any ideas what could be the reason? Am I missing something or doing it wrong? Thanks in advance. Update: I deleted the app and re-installed it. Now after the first launch it takes 13.2MB, where 7.4MB is the app size and 5.8MB is taken by Documents and Data. It seems that the storage being used increases with time.
iPhone app storage growing [closed]
After puzzling about this for quite some time, by chance I found the solution 1 min after posting. The host has to be 0.0.0.0: app.run(port=8080, host='0.0.0.0')
I have an AWS instance running. Serving through a SimpleHTTPServer works.[ec2-user@ip-XXXXX ~]$ python -m SimpleHTTPServer 8080 Serving HTTP on 0.0.0.0 port 8080 ... p54A5C877.dip0.t-XXX.org - - [07/Mar/2013 12:36:45] "GET / HTTP/1.1" 200 -But then with flask, somehow the request does not channel through.>>> from flask import Flask >>> >>> app = Flask(__name__) >>> >>> @app.route('/') ... def hello_world(): ... return 'Hello World!' ... >>> if __name__ == '__main__': ... app.run(port=8080) ... * Running on http://127.0.0.1:8080/ => no request catched
AWS with flask (port channel)
S3GetObjectRequest has a property called urlConnection. You can call cancel on the urlConnection property to cancel your download request.
While downloading files from Amazon S3, I have tried to cancel/stop the download, but I can't find any solution, so kindly suggest one. Thanks.S3GetObjectRequest *downloadRequest = [[[S3GetObjectRequest alloc] initWithKey:path withBucket:SECRET] autorelease]; downloadRequest.delegate=self; [s3 getObject:downloadRequest]; NSData *myData2; myData2 = response.body; [myData2 writeToFile:filePath atomically:YES];This is the code snippet I am using for the download.
How to stop/cancel the download request in aws s3
That's the way that SQS queues work by default (short polling). If you haven't changed any settings after setting up your queue, the default is to get messages from a weighted random sampling of machines. If you're using more than one machine and want all the messages you can consume at that moment (across all machines), you need to use long polling.See the Amazon documentation here.I don't think boto supports that directly ATM.
I am very new to AWS SQS queues and I am currently playing around with boto. I noticed that when I try to read a queue filled with messages in a while loop, after 10-25 messages are read the queue does not return any more messages (even though the queue has more than 1000 messages). It starts returning another set of 10-25 messages after a few seconds, or on stopping and restarting the program. while true: read_queue() // connection is already established with the desired queue. Any thoughts on this behaviour, or can someone point me in the right direction? Just reiterating, I am just a couple of days old to SQS!! Thanks
Reading data consecutively in a AWS SQS queue
Just ran into the same issue. An AWS instance recognizes only the key which was specified during instance creation. All later changes to the key list will not affect an already created instance. Edit: actually, here the problem was an incorrect export to a .ppk file using PuTTYgen. See comments below.
I already looked for a solution but nothing seems to be helpful. I'm doing everything that is supposed to be done and my instance keeps returning the message "Server refused our key". Here's what I've been doing: 1) Create the instance; 2) Download the .pem key; 3) Use PuTTYgen to transform it into a private .ppk (SSH-2 RSA); 4) Associate an Elastic IP with the instance; 5) Connect on port 22 with the correct auth key generated in the 3rd step; 6) Server asks for a username, I enter "ubuntu" (using 12.04.1 LTS); 7) Server returns "Server refused our key". I've tried to reboot a hundred times, tried SSH-1 RSA, tried the public key instead of the private key, tried keys with a passphrase, tried everything. Is anyone else experiencing this? Edit: Thought it might be a security problem; here are my rules if that helps: https://i.stack.imgur.com/Y3I2s.png
AWS EC2 keep refusing my private SSH 2 key
The server sends a wildcard certificate for *.s3.amazonaws.com. This certifies all subdomains of the domain s3.amazonaws.com. The certificate is valid for your working example almaconnect.s3.amazonaws.com, but not for your second example almaconnect.dev.s3.amazonaws.com. Create a bucket called e.g. almaconnectdev to work around this problem. With the distribution of Firefox 3.5, all major browsers allow only a single level of subdomain matching with certificate names that contain wildcards, in conformance with RFC 2818. In other words the certificate *.mydomain.com will work for one.mydomain.com or two.mydomain.com but NOT one.two.mydomain.com. Resources: Wikipedia Wildcard Certificates, RFC 2818 on IETF.org
I am having two bucketshttps://almaconnect.dev.s3.amazonaws.com/andhttps://almaconnect.s3.amazonaws.com/The first one when I hit gives non-secure result and asks me to add an exception in the browser. The 2nd one works fine. I am wondering what issue there can be. Please, help me guys....Thanks, Amit Chaudhary
https security exception for amazon s3 bucket
Have you seen/reviewed these details from our knowledge base? https://support.cloudflare.com/hc/en-us/articles/200168926-How-do-I-use-CloudFlare-with-Amazon-s-S3-Service- It includes the steps to correctly point at an Amazon S3 bucket. For privacy reasons, though, I'd recommend you open a support ticket with CloudFlare directly via https://support.cloudflare.com so our support team can look at your account's specific details to offer suggestions. p.s. Disclaimer: I work at CloudFlare.
I am working on creating a website for my brother (my first website) and have decided to host the whole site through Amazon S3. I have done the usual setup so far: Create a bucket with the name of the desired domain (www.website.com). Make the bucket a website and assign the index document and error page. Upload all of the content and make it public. The website works great through the bucket endpoint link http://www.website.com.s3-website-us-east-1.amazonaws.com/ Following other tutorials on Stack Overflow, I then attempted to create a CNAME through my DNS provider. Unfortunately, my domain registrar (1and1.com) would not let me enter the Amazon endpoint link, saying that the URL was too long. Further down in the comments section of the tutorial I was following was a comment from someone who got around this problem by using cloudflare.com. So, I signed up for cloudflare.com and changed the nameservers via 1and1.com to the CloudFlare ones. After everything propagated I attempted to create a CNAME on CloudFlare to the Amazon bucket endpoint, but I can't get it to work. How can I get my Amazon S3 bucket on my domain name?
Assigning a CNAME to my static-website-bucket in amazon S3 using cloudflare
One post to many comments. You just need the following tables: 1. Post table: post-id (hash key); 2. Comment table: post-id (hash key), comment-id (range key). To get all the comments for a post, you query the comment table by just giving the hash key (post-id). On the Java side, you will have 2 classes (Post, Comments) mapped with the DynamoDB Mapper. DynamoDBMapper (AWS SDK for Java - 1.7.1): http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodb/datamodeling/DynamoDBMapper.html
Is it possible to create a relationship between two tables using the DynamoDB Java persistence model? I have the following one-post-to-many-comments relationship:@DynamoDBTable(tableName="Post_MyApp") public class Post { private String id; private String title; private Set <Comment> comments; //... Getters and setters and dynamo annotations }I have a separate table of comments, where Comment is another DynamoDB table/entity. My idea is to create a table of post_comments, like in SQL, with all the comments of a post. Is this the right way to do this with DynamoDB, or is there a better way?
One-to-many in DynamoDB with Java Object Persistence Model
After a lot of searching I found DynamoDBInputFormat and DynamoDBOutputFormat in one of Amazon's libraries. On Amazon Elastic MapReduce there is a library called hive-bigbird-handler which contains input and output formats for DynamoDB. The full class names are: org.apache.hadoop.hive.dynamodb.write.DynamoDBOutputFormat and org.apache.hadoop.hive.dynamodb.read.DynamoDBInputFormat. I hope these classes will be useful to the community.
I have to process some data which is persisted in Amazon DynamoDB using Hadoop MapReduce. I was searching the internet for a Hadoop InputFormat for DynamoDB and couldn't find one. I'm not familiar with DynamoDB, so I'm guessing there is some trick related to DynamoDB and Hadoop? If there is an implementation of this InputFormat anywhere, could you please share it?
DynamoDB InputFormat for Hadoop
I got the issue resolved. It was not about firewall or permission settings as I thought. I was trying to do putObject on a bucket which I should have created before doing this. In spite of there being no such bucket, AWS gives the message Access Denied, which is weird. Hope this helps someone.
I have full permission to my s3 User login:{ "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ] }There is no group policy attached to this user. But when I give putObject() command from my java program, I receive Access Denied message. What can be the issue. As I told my user login has administrator access as well as AmazonS3FullAccess. Thanks for help in advance.
s3 putObject access denied
In answer to your questions: Since AWS Elastic Beanstalk supports deployment of Java web apps, there won't be any problem deploying your Play! project. You don't have to be a Scala pro to use Play!. You'll get used to Scala while using Play! and eventually become proficient in it; then, if you want, you can study it properly. And of course you can develop your application for AWS and test it locally. The Eclipse IDE has a great plugin for that. http://aws.amazon.com/eclipse/
I started with web development 2 months ago with Python/GAE. We switched from GAE to Amazon AWS and Java Play!. Will I run into problems if I want to deploy my app on AWS? At the moment I can use Elastic Beanstalk and it's a one-click solution. Is Elastic Beanstalk compatible with Play!? I don't know Scala. Because of the well-written tutorial I have no problems using Scala for the templating / routing system. But maybe I will run into future problems. Would you recommend me to learn Scala if I want to use Play!? I can run my app locally without deploying it, which gives me a really good workflow. Would it be possible to develop for AWS and test my app locally?
Java Play! Framework Development
It turns out you can pass arbitrary parameters, including environment variables, to the container via the 'JVM Command Line Options' field in the 'container' area of the configuration: -Dgrails.env=DesiredEnvironmentName. Works like a charm; I'm now using a single .war for all environments.
I run several environments of my Grails application up in Elastic Beanstalk. It would be a big timesaver to not have to build and upload different .war files just for the different environments (I have all the environmental differences passed in as system properties in the 'container' configuration area, so there is no external config file). As per this articlehttp://mrhaki.blogspot.ca/2011/02/grails-goodness-one-war-to-rule-them.html, it is possible to use a single .war and set the environment dynamically by passing the grails.env property, but it doesn't seem possible to do so as beanstalk limits you to a predefined set of named system properties (JDBC_CONNECTION_STRING, PARAM1, PARAM2, etc)What would be my best approach here?
How to use a single .war for several grails environments in AWS Elastic Beanstalk?
Pretty simple. Download the EC2 API; there is a CLI with it. Keep EC2_PRIVATE_KEY and EC2_CERT as your environment variables, where they are the private key and certificate files that you generate from the EC2 console. Then call ec2-reboot-instances instance_id [instance_id ...] Done. Refer: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RebootInstances.html Edit 1: "Do I download this directly onto my Linux box? And how do I access the CLI on the Linux box of the EC2 API? Sorry to ask so many questions, just need to know detailed steps of how to do this." Yes, download it from here. If you have unzipped the API in /home/naishe/ec2api, you can call /home/naishe/ec2api/bin/ec2-reboot-instance <instance_id>. Or even better, set the unzipped location as your environment variable EC2_API_HOME and append $EC2_API_HOME/bin to your system's PATH. Also, try investing some time in the Getting Started doc, which is amazingly simple.
Can someone elaborate on the details of how to remotely start an EC2 instance? I have a Linux box set up locally, and would like to set up a cronjob on it to start an instance in Amazon EC2. How do I do that? I've never worked with APIs; if there are ways to use APIs, can someone please explain how to do so...
Amazon EC2 Instance Remotely Start
If you want to signal an error, return a non-zero code from your Python script. You can write any logging to stderr and Hadoop will capture that in the task logs. You can also send status to the reporter, and update counters, by prefixing the stderr lines with reporter:status:<msg> or reporter:counter:<group>,<name>,<increment>
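Putting those pieces together, a minimal mapper sketch that reports progress, bumps a counter for bad records, and bails out with a non-zero exit code might look like this (the tab-separated input format and counter names are just illustrative):

#!/usr/bin/env python
import sys

for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')
    if len(fields) != 2:
        # Counters and status lines go to stderr with the reporter: prefix.
        sys.stderr.write('reporter:counter:MyJob,BadRecords,1\n')
        sys.stderr.write('Malformed input line: %r\n' % line)
        sys.exit(1)   # non-zero exit marks the task attempt as failed
    key, value = fields
    sys.stderr.write('reporter:status:processing %s\n' % key)
    print('%s\t%s' % (key, value))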
What is the best practice for reporting exceptions in Hadoop streaming with Python scripts? I mean: let's say I have a mapper script that can't understand its input, how do I signal Hadoop to terminate the job and report an error message? Do I use logging and finish off with sys.exit?
Hadoop streaming: reporting error
I'm assuming you're using the SDK for PHP directly. Most SDKs don't play nicely in CI unless wrapped up. I highly recommend using the amazon-s3 library (or rather, the spark).
I created a file called awslib.php and put it in the application/libraries folder. These are the contents of awslib.php:<?php class Awslib { function Awslib() { require_once('sdk-1.5.6.2/sdk.class.php'); } }Also in the libraries folder is the PHP SDK, as a folder named sdk-1.5.6.2. On my home controller I am loading the library and instantiating the S3 class:$this->load->library('awslib'); $s3 = new AmazonS3();When I load my homepage I get this error:Fatal error: Class 'AmazonS3' not found in /var/www/application/controllers/home.php on line 23Why isn't it working? Note: the problem isn't with S3; I can get it to work fine when I store it outside CodeIgniter and load the demo files that come with the SDK.
How to use Amazon s3 as a Codeigniter library?
Do your development on a local instance of Tomcat, an IDE like IntelliJ will automatically update your changes. Once you have reached a reasonable milestone, e.g. completed a story, then redeploy your war.
I love the simplicity of Amazon Elastic Beanstalk. However, it seems rather difficult to hot-reload code. In particular, reading about the WAR file format (Sun), it states: One disadvantage of web deployment using WAR files in very dynamic environments is that minor changes cannot be made during runtime. Any change whatsoever requires regenerating and redeploying the entire WAR file. This is bad. I like to program in Clojure, which involves lots of testing / code reloading. My question: what is the right way to do hot code reloading in Amazon Elastic Beanstalk? Thanks!
Amazon Elastic BeanStalk, WAR Files, Hot Reloading
There is nothing like a default AMI for Amazon EC2, and no concept of selecting a default (or rather the region-specific) AMI amongst the otherwise identical AMIs with different IDs per region either (a region-independent AMI ID would be a nifty improvement though). This is usually solved by adding a respective mapping to your script, and thus depends on the scripting environment in use (a simple map should always be available somehow) - e.g. AWS CloudFormation uses the very same approach itself; see the sample EC2ChooseAMI.template, which is an example of using Mappings to select an AMI based on region and instance type. The AWSRegionArch2AMI map achieves what you desire, plus offering a choice of architecture as well (which hints at why a default AMI ID might not be as easy to implement as it might look at first sight).
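Outside CloudFormation the same idea is just a lookup table in your launch script. A sketch in Python with boto, where the AMI IDs, key name and instance type are placeholders you would fill in yourself per region:

import boto.ec2

# One logical image, one concrete AMI ID per region (placeholder IDs).
AMI_BY_REGION = {
    'us-east-1': 'ami-xxxxxxxx',
    'eu-west-1': 'ami-yyyyyyyy',
    'ap-southeast-1': 'ami-zzzzzzzz',
}

region = 'eu-west-1'
conn = boto.ec2.connect_to_region(region)
conn.run_instances(AMI_BY_REGION[region], instance_type='m1.small', key_name='my-key')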
The ec2-run-instances command needs an AMI ID, and the ID is different across regions. Is there any way to specify that I need an AMI that will be suitable for region x / zone y and instance_type z? In other words, I need a way to use some "default" AMI so that I can write a script that will work across all EC2 regions.
Is there any kind of a 'default' AMI ID across all EC2 regions?
This has been addressed in the recentAmazon S3team postAmazon S3 Performance Tips & Tricks:First: for smaller workloads (<50 total requests per second), none of the below applies, no matter how many total objects one has!S3 has a bunch of automated agents that work behind the scenes, smoothing out load all over the system, to ensure the myriad diverse workloads all share the resources of S3 fairly and snappily. Even workloads that burst occasionally up over 100 requests per second really don't need to give us any hints about what's coming...we are designed to just grow and support these workloads forever.S3 is a true scale-out design in action.S3 scales to both short-term and long-term workloads far, far greater than this. We have customers continuously performing thousands of requests per second against S3, all day every day.[...] We worked with other customers through our Premium Developer Support offerings to help them design a system that would scale basically indefinitely on S3. Today we’re going to publish that guidance for everyone’s benefit.[emphasis mine]You may want to read the entire post to gain more insight into the S3 architecture and resulting challenges for reallymassive workloads(i.e., as stressed by the S3 team, it won't apply at all for most use cases).
How fast can we download files from Amazon S3, is there an upper limit (and they distribute it between all the requests from the same user), or does it only depend on my internet connection download speed? I couldn't find it in their SLA.What other factors does it depend on? Do they throttle the data transfer rate at some level to prevent abuse?
On what factors does the download speed of assets from Amazon S3 depends?
The answer is that you cannot assign a tag until the instance is actually created. In order to tag this, I have used a listener daemon to watch new instances and tag them once they've started.
I'd like to be able to include a tag when making a spot request via PHP. When creating on-demand instances, you can create the instance, then use its instance ID to issue the following:$ec2->create_tags($instance_id, array( array('Key' => 'Name', 'Value' => 'MyTestMachine'), ));However, when issuing a spot bid, the instance isn't started right away, so you'd have to create a watcher to deal with this... unless you can add a tag in the request phase. I haven't found any documentation showing how this would go or what it would look like; does it exist?
AWS EC2 Spot Instance PHP add tag when making spot request
NO - your secret access key is secret for a reason. Never pass it over the wire, or you'll give anyone who sniffs it full access to your AWS account - they could use it to shut down all your instances, delete entire S3 buckets - everything. The signature is a "signed request": you take the content of the request and create a Keyed-Hashing for Message Authentication code (HMAC) hash using your secret as the hash key. Since your secret key is only known to you and Amazon, when Amazon receives the request they will also take the contents of your request and hash it based on your secret key - if they get the same hash as your signed request, then they know the request was not tampered with. If the hashes are different, then the request may have been maliciously tampered with or compromised, so they will reject it. More details here: https://www.jokecamp.com/blog/examples-of-creating-base64-hashes-using-hmac-sha256-in-different-languages/ including code for calculating the HMAC.
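For the SES AWS3-HTTPS scheme specifically, the string that gets signed is, to my understanding, the value of the Date header you send with the request. A Python sketch of computing the base64-encoded HMAC-SHA256, with the access key and secret key obviously placeholders:

import base64
import hashlib
import hmac
from email.utils import formatdate

secret_key = b'YOUR-SECRET-ACCESS-KEY'      # placeholder; never ship or log this
date_header = formatdate(usegmt=True)       # e.g. 'Tue, 25 Jun 2013 05:10:54 GMT'

digest = hmac.new(secret_key, date_header.encode('utf-8'), hashlib.sha256).digest()
signature = base64.b64encode(digest).decode('ascii')

headers = {
    'Date': date_header,
    'X-Amzn-Authorization': 'AWS3-HTTPS AWSAccessKeyId=YOUR-ACCESS-KEY-ID, '
                            'Algorithm=HmacSHA256, Signature=' + signature,
}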
I am using Amazon SES to try and send emails via an HTTP POST such as:https://email.us-east-1.amazonaws.com/?Action=SendEmail&Source=user%40example.com&Destination.ToAddresses.member.1=allan%40example.com&Message.Subject.Data=This%20is%20the%20subject%20line.&Message.Body.Text.Data=Hello.%20I%20hope%20you%20are%20having%20a%20good%20day.However, the HTTP header asks for X-Amzn-Authorization, which consists of:X-Amzn-Authorization: AWS3-HTTPS AWSAccessKeyId=<Your AWS Access Key ID>, Algorithm=HmacSHA256, Signature=<Signature>I was wondering how to calculate the signature? Is it simply my Secret Access Key? As shown here on the Amazon documentation site.
How can I create the HMAC signature required to send Amazon SES emails via HTTP?
EC2 provides security groups, which are essentially a firewall external to the machine. The default security group will allow SSH and RDP connections. If you want requests for port 8080 to be received by the VM, update the security group settings for the VM. You can do this interactively from the Amazon Management Console.You also need to configure the firewall running in the windows VM, but it appears you did this when you added the service to the 'Permission Groups'.
I'm fairly new to EC2, hopefully someone can point me into the correct direction. I have a WCF Service hosted in Windows Service and would like to run this on EC2. I set up an EC2 account with Windows Server 2008 with SQL Server Express. I put my service out there and ran it, I'm able to test and connect to it from the browser with the private IP on the VM, but when I try to connect to the service from my computer with the Public IP, I'm not able to do so.Am I missing some important configuration or am I totally off? Any help would be greatly appreciated. I'm testing this with port 8080 and added that to the Permission Groups. I also tried to assigned an elastic IP to the instance. Thanks in advance.
WCF service in Windows service on Amazon EC2
You'll need to do this manually, however a single ejabberd server can handle quite a lot of traffic. Each server adds a significant amount of available connections to your cluster, so it's not a common task.That said, I'd really be careful running ejabberd in EC2. I've been doing it for about a year, and we fight mnesia network partitioning pretty regularly. Clustered ejabberd servers don't work very reliably in the EC2 network.
Using Ejabberd in EC2 as an XMPP server to send real-time information to clients... How is it possible to set up clustering so that if the load on the server gets too high, Auto Scaling will create a new EC2 instance that is part of the Ejabberd cluster? The documentation I've read suggests that you must already have the machines and manually configure each new one to be added to the cluster. Surely though you don't have to be running redundant EC2 instances just in case?
Automatic Ejabberd clustering with EC2 (Amazon Web Services) [closed]
Yes, yes and more yes. Here are some good things to google/hunt down on SO and SF: --ec2 command line tools, --making your own AMIs from running instances (to save tedious and time-consuming startup gumf), --route53 APIs for doing DNS magic, --Ubuntu cloud-init for startup scripts, --32-bit micro instances are your friend for dev work as they fall in the free usage bracket
I'm still cheap.I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or conversely accepts git push updates), and then has me ready to fire up an instance of the application server. Is it also possible to tie that server to a URL (e.g ec2.1.mydomain.com) so that I can hit my web app with a browser?Furthermore, is there a way that I can run a command line utility to fire up my instance when I'm ready to test, and then to shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.
Using Amazon AWS as a development server.
They appear to use Vanity.
I really like the design of the auto-generated documentation website for Amazon's PHP SDK. See here:http://docs.amazonwebservices.com/AWSSDKforPHP/latest/Do they use an open-source tool that I could use to create documentation for my own code? Something like phpdocs?
What is the documentation software that automatically creates Amazon PHP SDK?
http://www.allbuttonspressed.com/projects/django-mediagenerator provides asset versioning. It provides a function and a template tag to give you the versioned filename from the actual filename. It also has some nice extras like js/css minification and concatenation.
[I'm using AWS but I think this question is relevant to all CDNs] I'm looking to seamlessly deploy my Django server to the AWS cloud. All static content (e.g. images, javascript, etc.) goes to the Amazon CloudFront CDN. The problem is that I'm trying to make the upgrade as "atomic" as possible, while I have very little control over the timing of CDN object invalidation. According to TFM, the solution is to version my objects, i.e. rename them adding a version id, e.g. arrow_v123.png. Obviously if the new server points to arrow_v124.png, I have complete control over the timing of the entire distribution. I checked, and from what I can tell the big boys are doing that - Facebook static content objects have a hashed name (and path). BUT HOW DO I AUTOMAGICALLY DO THIS IN DJANGO? I need to somehow: generate a new version number; change all the names of all the objects that are static; and change all the templates and Python code to use those new names. Or somehow integrate it with the development process: I edit a picture or a javascript file, I save it and it gets a new name?!?! and all references to it are auto-corrected?!?! I'm using Fabric for deployments, so it makes sense that I need to modify my fabfile somehow. Please help. Tal.
Django seamless upgrades with CDN
When you spin an EC2 instance up, the root volume is ephemeral - that is, when the instance is terminated, the root volume is destroyed** (taking any data you put there with it). It doesn't matter how you partition that ephemeral volume and where you tuck your data on it - when it is destroyed, everything contained in that volume is lost.So if the data in the volume is entirely transient and fully recoverable/retrievable from somewhere else the next time you need it, there's no problem; terminate the instance, then spin a new one up and re-acquire the data you need to carry on working.However, if the data is NOT transient, and needs to be persisted so that work can carry on after an instance crash (and by crash, I mean something that terminates the instance or otherwise renders it inoperable and unrecoverable) then your data MUST NOT be on the root volume, but should be on another EBS volume which is attached to the instance. If and when that instance terminates or breaks irretrievably, your data is safe on that other volume - it can then be re-attached to a new instance for work to continue.** the exception is where your instance is EBS-backed and you swapped root volumes - in this case, the root volume is left behind after the instance terminates because it wasn't part of the 'package' created by the AMI when you started it.
I've launched an instance of the Basic 32-bit Amazon Linux AMI, which has an 8GB volume as its root device. If I terminate it, the EBS volume is destroyed as well. What I'd like to know is whether or not my data is protected (for example, the Apache document root, or MySQL data) if the server crashes? A lot of tutorials seem to indicate that another EBS volume should be created and my data stored on that, but I'm not really seeing why two EBS volumes are needed. Or is the current setup okay for a web server? Many thanks in advance for your help!
LAMP server on EC2 (Amazon Linux Micro Instance)
Isn't your document root public_html instead of www? So why did you put your pages under the www directory?
I am using this guide: http://codingthis.com/platforms/linux/how-to-host-simple-content-with-amazon-elastic-cloud-computing-ec2/ I have a folder named public_html in my /home/ec2-user directory with a test.html file. What I have done so far:sudo yum -y install httpd php sudo chkconfig httpd on chmod 755 /home/ec2-user (I HAVE NO IDEA WHAT THIS DOES) sudo nano /etc/httpd/conf/httpd.conf (changed DocumentRoot to DocumentRoot /home/ec2-user/public_html) TLDR: How do I make it load my content (my html file) instead of the Apache test page? EXTRA: I have a security group enabled for my instance with rules:ICMP Allow ALL TCP Allow ALL UDP Allow ALL TCP port 80 (Http)
Amazon EC2 How Do I host My Own Content? Stuck on having a working test apache page
What you really want is the equivalent of the OpenListings report from the Product Advertising API, and in MWS that is the RequestReport call with a report type of _GET_MERCHANT_LISTINGS_DATA_. This returns all the inventory a seller has listed on Amazon, and from there you're close to getting your ASINs from that list. You can find more details in their documentation. Also, I advise you not to use the Product Advertising API anymore, as Amazon deprecated it and it will be out of use this time next year.
I am using Amazon MWS and the XML Feeds API (unfortunately I think you need a seller account to view these links). I need to get a list of all the items we sell on Amazon (to cross-reference product names across our other selling channels). So technically this means I need to either: get a direct list of all products in our seller marketplace (name, SKU), OR look up a list of ASIN numbers and get the product name + SKU back. I can get a list of ASINs we sell via the Inventory feed, but it doesn't give me the product name. There doesn't seem to be any way in the MWS or XML API to do these simple tasks!! The only way I've found to look up an ASIN is using this API, which is from the Product Advertising API of all places... http://docs.amazonwebservices.com/AWSECommerceService/2010-11-01/DG/index.html?ItemLookup.html It just seems really bizarre to me that I can't use MWS (or the XML API), and I'd like to know if this is the only way before I continue with the 'Product Advertising API'.
Lookup item by ASIN with Amazon MWS?
You can use the following NuGet package: PM> Install-Package Nager.AmazonProductAdvertising Example:var authentication = new AmazonAuthentication("accesskey", "secretkey"); var client = new AmazonProductAdvertisingClient(authentication, AmazonEndpoint.US); var result = await client.GetItemsAsync(new string[] { "B00BYPW00I", "B004MKNBJG" });
Using the Amazon Product Advertising API I am searching for 2 different UPCs:// prepare the first ItemSearchRequest // prepare a second ItemSearchRequest ItemSearchRequest request1 = new ItemSearchRequest(); request1.SearchIndex = "All"; //request1.Keywords = table.Rows[i].ItemArray[0].ToString(); request1.Keywords="9120031340270"; request1.ItemPage = "1"; request1.ResponseGroup = new string[] { "OfferSummary" }; ItemSearchRequest request2 = new ItemSearchRequest(); request2.SearchIndex = "All"; //request2.Keywords = table.Rows[i+1].ItemArray[0].ToString(); request2.Keywords = "9120031340300"; request2.ItemPage = "1"; request2.ResponseGroup = new string[] { "OfferSummary" }; // batch the two requests together ItemSearch itemSearch = new ItemSearch(); itemSearch.Request = new ItemSearchRequest[] { request1,request2 }; itemSearch.AWSAccessKeyId = accessKeyId; // issue the ItemSearch request ItemSearchResponse response = client.ItemSearch(itemSearch); foreach (var item in response.Items[0].Item) { } foreach (var item in response.Items[1].Item) { }Is it possible to combine these two separate requests into one request and just have the first request return 2 items by settingkeywords = "9120031340256 and 9120031340270"Does anyone know how to do this?Do I need to specifically search the UPC?
Amazon Product Advertising API - searching for multiple UPCs
The item_lookup call only gets basic information. You will need to use the ResponseGroup parameter to specify what information you want Amazon to return (see Amazon's ItemLookup documentation for more information). If you want to get additional product attributes, you should request the "Medium" response group. Your API call would look like:

node = api.item_lookup('B001OXUIIG', ResponseGroup='Medium')

Under ItemAttributes, you should see the UPC field (which is "092633186909" for the item you are looking for).
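A short sketch of reading the UPC back out of that call, assuming the same python-amazon-product-api setup as in the question; the attribute path (Items.Item.ItemAttributes.UPC) is an assumption based on the Product Advertising API's XML layout, so check it against the raw response:

import amazonproduct

api = amazonproduct.API('AWS_KEY', 'SECRET_KEY', 'us')   # placeholder credentials
node = api.item_lookup('B001OXUIIG', ResponseGroup='Medium')

# The objectified response mirrors the XML:
# ItemLookupResponse > Items > Item > ItemAttributes > UPC
upc = str(node.Items.Item.ItemAttributes.UPC)
print(upc)   # expected: 092633186909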
I've been trying to create a python script that uses a product's ASIN to return (via Amazon's API) its UPC. The module I've attempted to use thus far is python-amazon-product-api (http://packages.python.org/python-amazon-product-api/), but it doesn't appear that this module supplies the UPC (or, at least, I can't find it under a product's attributes). Is this possible using this module? If not, what should I switch to? Here's what I have so far:

import amazonproduct

SECRET_KEY = 'xxx'
AWS_KEY = 'yyy'

api = amazonproduct.API(AWS_KEY, SECRET_KEY, 'us')
node = api.item_lookup('B001OXUIIG')

And, again, the UPC doesn't appear to be under node.Items.Item.ItemAttributes. Thanks in advance for the help!
Using Amazon's API to find a product's UPC (Python)
My recommendation would be to have an intermediate web service (probably sitting on an EC2 instance) handle all the communication between your iPhone app and SimpleDB.

This solves your immediate question, because now the intermediate web server can add a consistent timestamp.

But... more importantly, this solves the security problem of what to do with your AWS keys (now they live on a server you control). You really don't want to have them embedded in your iPhone app... :-)
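A minimal sketch of what that intermediate layer could do, assuming Python and the classic boto library's SimpleDB support; the domain name, item layout, and attribute names are made up for illustration:

import time
import boto

def save_game_entry(item_name, attributes):
    # The server stamps the record, so every entry shares one clock
    # no matter what the user's device thinks the time is.
    attributes['timestamp'] = str(int(time.time()))   # server-side UTC epoch seconds

    conn = boto.connect_sdb()               # AWS keys live in the server's config
    domain = conn.get_domain('game_data')   # hypothetical domain name
    domain.put_attributes(item_name, attributes)

A cleanup cron job on the same server can then compare that stored timestamp against its own time.time() and flag entries whose delta exceeds your timeout, without ever trusting a client clock.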
I am working on a game on the iPhone that will use SimpleDB for game data storage, and I'd like to be able to time out some database entries.

I figure I'll have some cron job that looks at the timestamp of each entry and, if its delta is greater than X, I'll time out that entry (change a var in it so it is once again available for selection by users).

My issue is I don't know how to unify the timestamps. What do I do if a user has their system clock screwed up and it's off by a couple hours or even days? What I'd like is to get the system time from the SimpleDB servers when I insert, but I believe I read that SimpleDB doesn't provide any sort of timestamp.

My only current thought is to have all clients get a timestamp from some random online source at startup and then determine timestamps based off that time and system ticks, but I'd really prefer keeping this internal to my program and the AWS servers, as I can't guarantee what the third-party timestamp source would provide and when.

Can anyone suggest a solution to this issue, or perhaps alert me to an accessible timestamp that is maintained by the AWS servers? I'm still in the design phase, so I haven't done much work with the AWS messages; do their response packets maybe contain the time somewhere?

Thanks! -Skyler
SimpleDB timestamp on insert
No, unless you duplicate your keys into multiple S3 buckets. This is because S3 uses the Host header value as a reference to the bucket.

I guess you could be sneaky and take advantage of the different URL styles. But it's a horrible suggestion and I would never implement it.

http://www.mybucketdomain.com/foo.jpg
http://www.mybucketdomain.com.s3.amazonaws.com/foo.jpg
http://s3.amazonaws.com/www.mybucketdomain.com/foo.jpg
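If it helps to see the three styles side by side, here is a small sketch, assuming Python with the requests library and a publicly readable bucket named after the CNAME'd domain; the bucket and object names are placeholders:

import requests

bucket = 'www.mybucketdomain.com'   # bucket whose name matches the custom domain
key = 'foo.jpg'

urls = [
    'http://%s/%s' % (bucket, key),                    # via your own CNAME record
    'http://%s.s3.amazonaws.com/%s' % (bucket, key),   # virtual-hosted style
    'http://s3.amazonaws.com/%s/%s' % (bucket, key),   # path style
]
for url in urls:
    # All three should return the same object, because S3 resolves the bucket
    # from the Host header (or the first path segment in the path style).
    print(url, requests.get(url).status_code)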
Is it possible to set up Amazon's Simple Storage Service (S3) to use custom domains (storage-01.example.com, storage-02.example.com, storage-03.example.com, ...) without using CloudFront? I don't really care about having an 'edge' network, but do want the browsers to make parallel requests for assets. Thanks!
Amazon S3 Multiple Custom Domain Without Cloudfront
In this case you can use an Error Document (part of S3's static website hosting configuration).
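A minimal sketch of setting index.html as both the index and the catch-all error document, assuming Python and boto3; the bucket name is a placeholder:

import boto3

s3 = boto3.client('s3')

# Serve index.html for the root and for any missing key, so requests like
# /about.html or /welcome.html fall back to it.
s3.put_bucket_website(
    Bucket='mybucket',
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'index.html'},
    },
)

Note that S3 still returns an error status code along with the error document, so this is a fallback rather than a true rewrite.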
Is it possible to use mod_rewrite on Amazon S3?

www.example.com/about.html -> mybucket.s3.amazon.com/index.html
www.example.com/welcome.html -> mybucket.s3.amazon.com/index.html
www.example.com/contact.html -> mybucket.s3.amazon.com/index.html
Mod_rewrite on Amazon S3
I'm sure there are other implementations of this kind of script, but here's mine: http://www.capsunlock.net/2009/10/deleting-old-ebs-snapshots.html
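For a sense of the shape such a script can take, here is a minimal sketch, assuming Python and boto3; the volume ID and the seven-day retention window are placeholders, and pagination of the snapshot listing is omitted:

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client('ec2')
cutoff = datetime.now(timezone.utc) - timedelta(days=7)   # keep one week of snapshots

# Take today's snapshot of the volume (run this daily from cron).
ec2.create_snapshot(VolumeId='vol-0123456789abcdef0',   # placeholder volume ID
                    Description='daily backup')

# Delete snapshots we own that are older than the cutoff.
for snap in ec2.describe_snapshots(OwnerIds=['self'])['Snapshots']:
    if snap['StartTime'] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap['SnapshotId'])

One caveat: deleting by age alone will also remove snapshots of other volumes owned by the account, so in practice you would filter by volume ID or by a tag before deleting.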
What is the best way to automate (daily) snapshots of my EBS volumes (2) and manage them?

By 'manage' I mean that I am looking for a script that will not only create daily backups (I am guessing a cron job will be involved) but will also delete snapshots that are older than x days, so as to avoid excessive data usage.

I believe that such scripts do exist somewhere out there, but I can't seem to pin one down.

Ty
Script to automate creation and management of EC2 EBS snapshots
You can absolutely use Azure Storage (which includes tables, blobs, and queues) with no compute instances. Storage costs $0.11 or less per GB depending on quantity (down from $0.15/GB), and you'll pay for bandwidth usage (inbound is now free, previously $0.10/GB; outbound is $0.12/GB, down from $0.15). You'll also pay $0.01 per 100,000 storage transactions (previously per 10,000).

Regarding Azure tables specifically, you can have as many tables as you'd like within your storage account. Tables are schema-less, with up to 100 TB per storage account.

You can find more pricing info here. You can sign up for a 90-day trial, including storage, here.
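For a rough sense of scale, here is a worked example using only the figures quoted above; the usage numbers are made up, and actual Azure pricing has changed since this was written:

# Hypothetical month for a small app, priced with the figures quoted above.
stored_gb = 10
outbound_gb = 5
transactions = 1000000

cost = (stored_gb * 0.11                      # storage at $0.11/GB
        + outbound_gb * 0.12                  # outbound bandwidth at $0.12/GB (inbound is free)
        + (transactions / 100000.0) * 0.01)   # $0.01 per 100,000 transactions
print('%.2f USD/month' % cost)                # prints 1.80 USD/month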
I have an iPhone app that must use an external cloud DB to sync data between users. The data is structured, so BLOB storage will not do. So far the only alternatives I see are:

Amazon SimpleDB

MS Azure Storage (Tables)

I didn't get whether I could use just Azure Storage with no Azure compute instances.

Are there any other similar providers?
What is the cheapest cloud non-BLOB storage? [closed]