Response | Instruction | Prompt
---|---|---|
The bootstrap action you have looks fine and is probably working. It's just that you are probably assuming it will download the file to the same directory where you land when SSHing to the cluster, which is /home/hadoop, but that is not the case. The working directory of bootstrap actions is somewhere under /var/lib/bootstrap-actions, if I remember correctly. It would be easier to find the downloaded file if you change "." to an explicit path such as "/home/hadoop". You could also create some other new directory to download the file into as part of this script (using "sudo mkdir" and "sudo chown" if necessary). | I'm trying to download a Postgres driver to each node of my cluster. I wrote the following bootstrap action, but it doesn't seem to have worked:
#!/bin/bash
aws s3 cp s3://path/to/driver/jars/postgresql-9.4.1210.jre7.jar .
I know this must be an easy thing to do, but I can't seem to find an obvious example. | How to write a bootstrap action to download a file to each node in EMR? |
Twilio evangelist here. By default Twilio is going to make a POST request to the Url, and I'm guessing your web server can't serve a .xml file in response to a POST request. You can send the Method param to tell Twilio to make a GET request instead:
callParams.put("To", "#number");
callParams.put("From", "#number");
callParams.put("Url", "https://myserveraddress/play.xml");
callParams.put("Method", "GET");Here is a longer sample:http://twilio.com/docs/api/rest/making-calls#example-5Hope that helps.ShareFollowansweredSep 16, 2016 at 2:20Devin RaderDevin Rader10.3k11 gold badge2020 silver badges3232 bronze badgesAdd a comment| | I have a twilio account and am able to make calls. I am also able to use twiML Bins to do some text-to-voice. I however would like to call people hand play the recording stored on my amazon server, my java code is as follows:callParams.put("To", "#number");
callParams.put("From", "#number");
callParams.put("Url", "https://myserveraddress/play.xml");The xml code is as follows:<?xml version="1.0" encoding="UTF-8" ?>
<Response>
<Play>https://myserveraddress/jazz.mp3</Play>
</Response>
My mp3 is stored in the same location as my XML. But when I try to make a call, the Twilio debugger tells me: Error - 11200. HTTP retrieval failure. Any help will be appreciated. | Twilio not communicating with my xml file stored on amazon S3
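For reference, a minimal Python sketch of the answer's fix, using the Twilio Python helper library instead of the Java one (not part of the original answer). The account SID, auth token, phone numbers and XML URL are placeholders; the key point is passing method="GET" so Twilio fetches the TwiML with a GET request.

# Placeholder credentials, numbers and URL; substitute your own.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

call = client.calls.create(
    to="+15551234567",
    from_="+15557654321",
    url="https://myserveraddress/play.xml",
    method="GET",  # tell Twilio to fetch the TwiML with GET instead of the default POST
)
print(call.sid)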
I've managed to make a dict of tags with lists of values:
- hosts: localhost
tasks:
- ec2_remote_facts:
region: eu-west-1
register: ec2_facts
# get all possible tag names
- set_fact: tags="{{ item.keys() }}"
with_items: "{{ ec2_facts.instances | map(attribute='tags') | list }}"
register: tmp_tags
# get flattened list of tags (for some reason lookup() returns string, so we use with_)
- assert: that=true
with_flattened: "{{ tmp_tags.results | map(attribute='ansible_facts.tags') | list }}"
register: tmp_tags
# get unique tag names
- set_fact: tags="{{ tmp_tags.results | map(attribute='item') | list | unique }}"
- set_fact: my_tags="{{ {} }}"
# get all possible values for a given tag
- set_fact:
my_tags: "{{ my_tags | combine( {''+item: ec2_facts.instances | map(attribute='tags.'+item) | select('defined') | list | unique}) }}"
with_items: "{{ tags }}"
- debug: var=my_tags
(A plain-Python sketch of the same idea follows this row.) | I'm trying to figure out a way to assign variables in Ansible based on tags I have in AWS. I was experimenting with ec2_remote_tags, but it's returning a lot more information than I need. It seems like there should be an easier way to do this and I'm just not thinking of it. For example, if I have a tag called "function" that creates the tag_function_api group using dynamic inventory, I want to assign a variable "function" to the value "api". Any ideas on an efficient way to do this? | Assign ansible vars based on AWS tags
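If the Ansible gymnastics above feel heavy, here is a rough sketch of the same idea in plain Python with boto3 (not from the original answer): build a dict of tag names to the unique values seen across all instances. The region is an assumption.

import boto3

# Assumed region; adjust as needed.
ec2 = boto3.client("ec2", region_name="eu-west-1")

tag_values = {}  # tag name -> set of values seen across instances
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            for tag in instance.get("Tags", []):
                tag_values.setdefault(tag["Key"], set()).add(tag["Value"])

# e.g. {'function': ['api', 'worker'], 'env': ['prod']}
print({key: sorted(values) for key, values in tag_values.items()})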
Blocking email domains is tough, because neither whitelisting nor blacklisting is a good option. By whitelisting certain domains, you disallow people with email domains that are unknown to you (but might be perfectly valid), while with blacklisting you have to update the list of blacklisted domains on a daily basis, since new "10 minute email" domains emerge every day. Please note that temporary email addresses are a way of saying: "Hey, I don't trust this website with my own email address", so you're most probably not going to trick users who are willing to hide their real address, since they have a valid reason to do so. Can't you adopt and implement something like OpenID? | My application sends alerts and emails using AWS mail services, but today AWS sent me a notification that says my bounce rate is over 20% and it should be below 10%. The app doesn't have any unverified mail addresses except mailinator.com (a disposable mail service). Should I block those mail domains? | Aws bounce error, Temporary mail addresses like mailinator.com or etc. causes bounce or not?
I faced the same error; it turns out you have to specify the role ARN, not the role name. So instead of --role roleName, put --role arn:aws:iam::1234567891:role/service-role/roleName. You can find your role ARN by clicking on the role name in the Roles tab; at the top you'll find the role ARN. AWS really needs to fix their documentation for almost all of their services. (A boto3 sketch of the same call follows this row.) | I am creating a Node.js application and deploying it as a Lambda function on AWS. I am following this link: http://docs.aws.amazon.com/lambda/latest/dg/with-on-demand-https-example-create-iam-role.html I am now stuck at steps 2.2-2.3. Step 2.2 has the JSON with the policy that needs to be attached to the role. When I use the below command (step 2.3) to create the Lambda function:
aws lambda create-function --region us-east-1 --function-name LambdaFunctionOverHttps --zip-file fileb://LambdaFunctionOverHttps.zip --role execution-role-arn --handler LambdaFunctionOverHttps.handler --runtime nodejs4.3
Then I get the below error:
An error occurred (ValidationException) when calling the
CreateFunction operation: 1 validation error detected: Value
'execution-role-arn' at 'role' failed to satisfy constraint: Member
must satisfy regular expression pattern:
arn:aws:iam::\d{12}:role/?[a-zA-Z_0-9+=,.@-_/]+
I even created the file "execution-role-arn" which had the JSON from step 2.2. How can I resolve this error and create the Lambda function? | How to attach policy to a role while creating an AWS lambda function in nodejs in AWS CLI? Facing error while attaching role
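The same point holds outside the CLI: wherever you create the function, the role must be the full ARN. A hedged boto3 sketch follows; the account ID, role name and zip path are placeholders.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

with open("LambdaFunctionOverHttps.zip", "rb") as f:
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName="LambdaFunctionOverHttps",
    Runtime="nodejs4.3",
    # Must be the full role ARN, not just the role name.
    Role="arn:aws:iam::123456789012:role/service-role/lambda-execution-role",
    Handler="LambdaFunctionOverHttps.handler",
    Code={"ZipFile": zipped_code},
)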
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html: "Important: User data scripts and cloud-init directives only run during the first boot cycle when an instance is launched." (Commenters note that you can work around this by running the commands from /etc/rc.local, or by updating the cloud-init configuration so the user data runs on every boot.) | I have an EC2 instance that I'm starting with a very simple user data script:
#!/bin/bash
aws s3 cp s3://<bucket-name>/myconf.conf /etc/httpd/conf.d/myconf.conf
The instance has an associated IAM role that allows access to the bucket, and if I SSH into the running instance manually I can sudo-execute the command to copy the file from S3 to the local filesystem. However, if I delete the file, stop the instance, add the user data and start the instance again, the file hasn't been copied down from S3 when I log back in. Any ideas? Thanks | AWS EC2 instance user data bash script not working
This problem happens because the EDP is not in active status. When the EDP is 'inactive' or its schedule is closed, you cannot rerun its steps anymore. I don't know how to schedule this kind of EDP again; a workaround is to clone the EDP and activate it. | My data pipeline has many activities (ShellCommandActivity), one of which has failed due to a programmatic issue. However, when I try to re-run the failed activity after fixing the programmatic issue (failure & rerun mode is cascade, schedule type is on-demand), I get the below error: "The given input is not valid: Set status 'RERUN' is not allowed on finished objects Activity2 (ShellCommandActivityId_vGL6K) (Service: DataPipeline; Status Code: 400; Error Code: InvalidRequestException; Request ID: 9a0cd59b-6a02-11e6-8592-cbb9c966228d)". I have all access including administrator access. I have gone through all the posts and the documentation but was not able to find an answer. | AWS Data Pipeline - Error when trying to re-run a failed activity
Setting acceleration on an S3 bucket is not (yet) supported by CloudFormation. These things typically lag by a few weeks/months. Updates are usually announced on the "What's New" page and in the docs (that you already linked). I feel bad for the CFN team - they're always playing catch-up. One option noted in the comments: if you want everything included in your CloudFormation stack, you can provision the setting via a Lambda-backed custom resource that makes the API call (see dwolla.com/updates/bootstrapping-cloudformation-attributes for a longer write-up; a boto3 sketch of that call follows this row). | My Serverless project creates an S3 bucket, and I would like it to have Transfer Acceleration turned on by default. I have tried this:
"UploaderS3Bucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "${uploaderBucket}-${aws-environment-lower}-${stage}",
"Accelerate": "Enabled",
"CorsConfiguration": {
"CorsRules": [
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT",
"POST",
"GET",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposedHeaders": [
"Etag"
],
"MaxAge": "3000"
}
]
}
}
but that isn't an accepted property, and I can't find anything appropriate in the AWS docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html Is there any way to do this during the resources deployment? | How to create S3 bucket via CloudFormation with Transfer Acceleration on by default? |
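A hedged boto3 sketch of the API call a custom resource (or any script) could make to fill the gap; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Enable Transfer Acceleration on an existing bucket (bucket name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="my-uploader-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Verify the setting took effect.
print(s3.get_bucket_accelerate_configuration(Bucket="my-uploader-bucket"))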
There is no Azure Scheduler equivalent in AWS, but you can achieve your use case using AWS Lambda. Please check this AWS guide for doing the same: https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html | I would like to configure recurrent calls (e.g. every minute) to an HTTP/HTTPS endpoint in AWS. What is the easiest way of accomplishing this?
In Azure I would configure an Azure Scheduler job for this. Is there anything like that in AWS? | What is the closest in AWS to Azure Scheduler jobs?
Yes, you can set a JSON body and also message attributes. I have verified this. Here is a log of the headers I received from the SQS daemon on EB (my custom fields are "abc" and "def"):
2017-09-10 16:19:53,689; INFO ; headers received:
X-Aws-Sqsd-Attr-Abc: 205
X-Aws-Sqsd-Attr-Def: 2017-09-10T16:19:53.537679+00:00
X-Aws-Sqsd-Msgid: bfd25652-9923-4c4c-86f2-9fea9fa2fas
X-Aws-Sqsd-Receive-Count: 1
X-Aws-Sqsd-Path:
X-Aws-Sqsd-Queue: myqueue
Content-Length: 16
User-Agent: aws-sqsd/2.3
X-Aws-Sqsd-First-Received-At: 2017-09-10T16:19:53Z
X-Aws-Sqsd-Sender-Id: AIDAJP6NVOXNJ7HY7QYOM
X-Aws-Sqsd-Sent-At: 2017-09-10T16:19:53Z
Host: localhost
Content-Type: application/json
See the docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-daemon I had trouble getting message attributes to come through, and it turned out (verified by an AWS support technician) that there is either an error in AWS's implementation or in their documentation: the problem was that I was using an underscore in my message attribute key names, which is supposedly supported but in practice makes the HTTP headers fail to include the message attributes. (A boto3 sketch of sending such a message follows this row.) | I am planning to deploy my Node.js application as a webserver + worker combination in EB. The webserver will insert a JSON document (in the request body) into an SQS queue; the worker then reads the queue and does some work. The problem is that I need headers in my worker as well. Is there any way to set headers on the request so that I can use them in the worker? | Custom headers when using AWS EB Worker
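For reference, a hedged boto3 sketch of sending a message with attributes to the worker queue; the queue URL and attribute names are placeholders, and the attribute names avoid underscores, which (per the answer) is what kept the X-Aws-Sqsd-Attr-* headers from appearing.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/myqueue"  # placeholder

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"hello": "worker"}),
    # Attribute keys without underscores, so the SQS daemon forwards them
    # as X-Aws-Sqsd-Attr-<Name> HTTP headers to the worker.
    MessageAttributes={
        "abc": {"DataType": "Number", "StringValue": "205"},
        "def": {"DataType": "String", "StringValue": "2017-09-10T16:19:53+00:00"},
    },
)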
Figured it out. If you specify a ProfileCredentialsProvider(), the AWS SDK will look for a configuration file, regardless of precedence. Simply creating an S3 client like this:
AmazonS3 s3Client = new AmazonS3Client();
will check the various locations specified for credentials. (A boto3 sketch of the same idea follows this row.) | I am trying to upload a file to S3. The code to do so is below:
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
String key = String.format(Constants.KEY_NAME + "/%s/%s", activity_id, aFile.getName());
s3Client.putObject(Constants.BUCKET_NAME, key, aFile.getInputStream(), new ObjectMetadata());
The problem I am having is that my ProfileCredentialsProvider cannot access my AWS keys. I have set my environment variables:
AWS_ACCESS_KEY=keys go here
AWS_SECRET_KEY=keys go here
AWS_ACCESS_KEY_ID=keys go here
AWS_DEFAULT_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=keys go here
And as per Amazon's documentation, the environment variables have precedence over any configuration files. This leads me to ask: why are my keys not being grabbed from my environment variables? | AWS ProfileCredentialsProvider not able to get credentials
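The same lesson applies outside Java: let the SDK's default credential chain find the environment variables instead of forcing a profile-file provider. A boto3 sketch of the equivalent upload (bucket name, key and file are placeholders):

import boto3

# No explicit credentials: boto3's default chain picks up
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and AWS_DEFAULT_REGION) from the environment.
s3 = boto3.client("s3")

with open("report.pdf", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="activities/42/report.pdf", Body=f)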
I found a way to publish to AWS. Steps:
1. Select your startup project
2. Go to Build > Publish <Selected Startup Project>
3. Create a profile and provide a location on a local drive (C:/ or D:/)
4. After the publish is successful, just copy the entire content of the folder and paste it into the AWS wwwroot folder
Your website is live now :) | My website is using ASP.NET Core 1.0 and Angular 2, and I have an account on Amazon AWS where I have a running EC2 instance (Windows). Now I want to deploy this website to that instance. My project structure is like:
website
.
...src
.
..Bussinesslayer
..DataAccesslayer
..webapp <------startup project
.
..wwwroot
How do I deploy this site on AWS? | Deploy asp.net core 1.0 website on aws
You have 2 options. If you want to use the same authorizer function for both stages, you can parse the input passed to the function, which includes the stage:
{
"type":"TOKEN",
"authorizationToken":"<caller-supplied-token>",
"methodArn":"arn:aws:execute-api:<regionId>:<accountId>:<apiId>/<stage>/<method>/<resourcePath>"
}
If you want to use different functions per stage, you can make use of stage variables. Note: you will have to use the CLI or SDK to add an authorizer with a stage variable. An example with the CLI:
aws apigateway update-authorizer --rest-api-id <apidId> --authorizer-id <authorizerId> --patch-operations '[{"op":"replace","path":"/authorizerUri","value":"arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<region>:<accountId>:function:${stageVariables.authorizer}/invocations"}]'
(A Python sketch of option 1 follows this row.) | We have a custom authorizer for Auth0 configured in API Gateway. We want it to load different configuration values based on the stage it is invoked from. Is there a known way to handle this? | Custom Authorizer + Stages configuration values
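A minimal Python sketch of option 1, parsing the stage out of methodArn inside the authorizer. The environment lookup and token check are illustrative placeholders, not part of the original answer.

# Illustrative per-stage settings; substitute your own.
ENVIRONMENTS = {
    "prod": {"auth0_domain": "prod.example.auth0.com"},
    "dev": {"auth0_domain": "dev.example.auth0.com"},
}

def handler(event, context):
    # methodArn looks like:
    # arn:aws:execute-api:<region>:<account>:<apiId>/<stage>/<method>/<resourcePath>
    arn_resource = event["methodArn"].split(":")[5]
    stage = arn_resource.split("/")[1]
    config = ENVIRONMENTS.get(stage, ENVIRONMENTS["dev"])
    # ... validate event["authorizationToken"] against config, then return a policy ...
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {"Action": "execute-api:Invoke", "Effect": "Allow", "Resource": event["methodArn"]}
            ],
        },
    }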
Aurora is also a type of Amazon RDS based on MySQL.
How did you migrate data from RDS (which one?) to Aurora on RDS? Did you use Amazon DMS to migrate data from MySQL/MariaDB/Aurora RDS to Aurora RDS? You said you restored a snapshot (note it's impossible to restore Aurora from a non-Aurora snapshot). I had a performance issue with MariaDB and Aurora when I migrated data from a non-RDS MariaDB through Amazon DMS. It was extremely slow! The migration between MariaDB and RDS via DMS went without problems, with no error/warning logs, but it just worked extremely slowly: almost all queries took 100 times longer than on a standard (much smaller) EC2 instance with MariaDB. I tried increasing IOPS, resizing RDS, changing parameters, etc. Nothing helped! My solution was to not use DMS migration (which changed a lot in the table creation schemas). I did a mysqldump on the EC2 instance with MariaDB and restored it into a new MariaDB RDS. Everything started working as expected with good performance. | I have created one instance of Amazon Aurora in the Sydney region and restored my RDS snapshot onto it. I am executing one simple query on one of my tables, which has roughly 6k records, and it returns a very slow result. I have not changed any parameter in the default parameter group, which is linked to my instance. This query runs perfectly on my existing RDS instance with the same parameters in 0.200 sec and returns a quick response, but the same query takes about 0.350 sec on Aurora. My query plan (EXPLAIN) shows no issue; it uses the PRIMARY index to get a result. So I can't understand why it is so slow. Do I need to configure parameters? They claim that Aurora is 5x faster than RDS; how do I check?
Thanks. | Amazon Aurora is slow compared to Amazon RDS [closed]
The last comment from Jeff led me to the answer. Thanks Jeff!
String cognitoIdentityId = "your user's identity id";
String openIdToken = "open id token for the user created on backend";
Map<String,String> logins = new HashMap<>();
logins.put("cognito-identity.amazonaws.com", openIdToken);
GetCredentialsForIdentityRequest getCredentialsRequest =
new GetCredentialsForIdentityRequest()
.withIdentityId(cognitoIdentityId)
.withLogins(logins);
AmazonCognitoIdentityClient cognitoIdentityClient = new AmazonCognitoIdentityClient();
GetCredentialsForIdentityResult getCredentialsResult = cognitoIdentityClient.getCredentialsForIdentity(getCredentialsRequest);
Credentials credentials = getCredentialsResult.getCredentials();
AWSSessionCredentials sessionCredentials = new BasicSessionCredentials(
credentials.getAccessKeyId(),
credentials.getSecretKey(),
credentials.getSessionToken()
);
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);
...
(A boto3 sketch of the same calls follows this row.) | I am trying to authenticate a Java app to AWS services using a developer-authenticated Cognito identity. This is very straightforward in the AWS mobile SDKs (see their documentation), but I can't seem to find the equivalent classes in the Java SDK. The main issue I am having is that the Java SDK classes (such as WebIdentityFederationSessionCredentialsProvider) require the client code to know the ARN of the role being assumed. With the mobile SDK, it uses the role configured for the federated identity. That's what I'd prefer to do, but it seems the Java SDK doesn't have the supporting classes for that. | Amazon Cognito developer authenticated identity with Java SDK
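The same flow is available in other SDKs; for comparison, a hedged boto3 sketch. The identity ID and OpenID token are placeholders produced by your backend (via GetOpenIdTokenForDeveloperIdentity), and the region is an assumption.

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Both values are placeholders minted by your backend.
response = cognito.get_credentials_for_identity(
    IdentityId="us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    Logins={"cognito-identity.amazonaws.com": "<open id token>"},
)

creds = response["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)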
Take a snapshot of your existing database, either manually or using the CLI or PowerShell, taking note of the DBSnapshotIdentifier. Using PowerShell it looks like this:
New-RDSDBSnapshot -DBSnapshotIdentifier "NameOfYourNewSnapshot" -DBInstanceIdentifier "YourExistingDbIdentifier"
Okay, now that you have a snapshot, you need to change your CloudFormation template to use the DBSnapshotIdentifier. Change your existing template to create a SQL Server database and specify a new property, DBSnapshotIdentifier:
"MyDB" : {
"Type" : "AWS::RDS::DBInstance",
"Properties" : {
"DBSecurityGroups" : [
{"Ref" : "MyDbSecurityByEC2SecurityGroup"}, {"Ref" : "MyDbSecurityByCIDRIPGroup"} ],
"AllocatedStorage" : "20",
"DBInstanceClass" : "db.t2.micro",
"Engine" : "sqlserver-ex",
"MasterUsername" : "MyName",
"MasterUserPassword" : "MyPassword",
"DBSnapshotIdentifier" : "NameOfYourNewSnapshot"
}
}
That should be it: when you run your stack it will drop and re-create your database from your snapshot, so be sure to cater for the downtime. Docs:
http://docs.aws.amazon.com/powershell/latest/reference/Index.html
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-dbsnapshotidentifier
(A boto3 sketch of the snapshot step follows this row.) | I'm pretty new to using AWS. I wanted to take a snapshot of the current SQL Server instance and create another instance from that snapshot (so that all the existing databases and data get migrated) with more storage capacity, using AWS CloudFormation. I saw a template on Amazon like https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/RDS_MySQL_With_Read_Replica.template but couldn't tailor it to my needs. I don't want all the EC2 instance and extra things; just my existing snapshot ID and the new SQL Server RDS instance details, which will be cloned from the snapshot ID. | AWS RDS Cloud formation template for SQL Server
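If you prefer Python over PowerShell for the snapshot step, a hedged boto3 sketch (identifiers and region are placeholders); the waiter just blocks until the snapshot is available before you update the stack.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Identifiers are placeholders.
rds.create_db_snapshot(
    DBSnapshotIdentifier="NameOfYourNewSnapshot",
    DBInstanceIdentifier="YourExistingDbIdentifier",
)

# Block until the snapshot is available before referencing it in the template.
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier="NameOfYourNewSnapshot")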
Why not try?
var os = require("os");
var hostname = os.hostname();
It will return the Docker container's hostname. If you haven't set a hostname explicitly (using something like docker run -h hostname image command), then it will return the Docker host's hostname. Alternatively, you could do this using a deployment tool like Puppet, Ansible, etc. and template the file when you deploy the container. | We're running a Node.js application inside a Docker container hosted on an Amazon EC2 instance. To enable monitoring for the Node.js app with Datadog we are using the datadog-metrics library and integrating it with our application. We basically need to save the below JavaScript code into a file called example_app.js:
var metrics = require('datadog-metrics');
metrics.init({ host: 'myhost', prefix: 'myapp.' });
function collectMemoryStats() {
var memUsage = process.memoryUsage();
metrics.gauge('memory.rss', memUsage.rss);
metrics.gauge('memory.heapTotal', memUsage.heapTotal);
metrics.gauge('memory.heapUsed', memUsage.heapUsed);
metrics.increment('memory.statsReported');
}
setInterval(collectMemoryStats, 5000);
Although we are able to successfully publish metrics to Datadog, we're wondering if this can be automated. We want to build this into our Docker image, hence we need an automatic way to pick up the hostname, or at the very least be able to use the Docker host's name if possible, because until now we have been specifying the "myhost" and "myapp" values manually. Any better way to fetch the AWS instance hostname value into %myhost? | Automatic way to pick up the hostname inside docker container
At the moment there is no complete solution for this. You have to either use the newly introduced AWS Cognito User Pools or create your own. I would also recommend checking out the project https://github.com/danilop/LambdAuth, which is worth trying. | I am trying to authenticate users via AWS Cognito/IAM services from my webapp. I have implemented Facebook and LinkedIn login and I'm wondering how I could use AWS to implement username+password login via my UI. Is there a way to set it up so that all I have to do is drop in a button for username+password login on my view, which will authenticate users and redirect back to my backend service (similar to Facebook/LinkedIn), and where I can put in an endpoint URL? Do let me know if I need to be clearer. Edit 1: I have already tried the developer-authenticated identities workflow (enhanced flow). I don't want to do the part where I create the user in my user pool by calling the AWS Cognito Identity API; I'd like AWS to do the user creation by itself. Is this possible? Edit 2: Another alternative is to create a Lambda which does what I want, but that is similar to the code that already lives on my backend. | AWS Authentication
You do not need to flatten it. You can load it with the copy command after defining a jsonpaths config file to easily extract the column values from each JSON object. With your structure you'd create a file in S3 (s3://bucket/your_jsonpaths.json) like so:
{
"jsonpaths": [
"$.user_id",
"$.metadata.connection_type",
"$.metadata.device_id"
]
}
Then you'd run something like this in Redshift:
copy your_table
from 's3://bucket/data_objects.json'
credentials '<aws-auth-args>'
json 's3://bucket/your_jsonpaths.json';
If you have issues, see what is in the stv_load_errors table. Check out the Redshift copy command docs and examples. (Commenters note that you can automate this for every file put in the bucket by enabling S3 event notifications and triggering a Lambda function that issues the COPY.) | I have a JSON file on S3 that I want to transfer to Redshift. One catch is that the file contains entries in such a format:
{
"user_id":1,
"metadata":
{
"connection_type":"WIFI",
"device_id":"1234"
}
}
Before I save it to Redshift I want to flatten the file to contain the columns:
user_id | connection_type | device_id
How can I do this using AWS Data Pipeline? Is there an activity that can transform JSON to the desired form? I do not think that transformSql will support JSON fields. | Flattening JSON file while transferring from S3 to RedShift using AWS Pipeline
This is a session issue on the AWS ELB. Enable sticky sessions on the ELB and this issue will be resolved. Here is the developer guide: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html | Our system uses an AWS Elastic Load Balancer. We are encountering a maddening issue where our HTML video tags fail to play randomly. I can't reliably reproduce the issue unless I bypass the ELB, which makes me suspect it, naturally. I've verified that the same files are on both of our IIS servers, and I have verified that the MIME types are the same on both. The video files are H.264 MP4s, but they will sometimes work, so I don't think it has anything to do with Chrome's support of the codec. Anybody have an idea of what I can do, or where to look next? | HTML Video tag in Chrome fails to play intermittently against AWS ELB
The question is the namespace of the Application Load Balancer, aws:elbv2 (different from the classic Elastic Load Balancer namespace, aws:elb): http://docs.aws.amazon.com/pt_br/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html This works for an ALB:
option_settings:
- namespace: aws:elbv2:loadbalancer
option_name: ManagedSecurityGroup
value: sg-XXXXXXXX
- namespace: aws:elbv2:loadbalancer
option_name: SecurityGroups
value: sg-XXXXXXXX
| I'm trying to set an existing security group for the ELB in my Elastic Beanstalk application with .ebextensions. For some reason, .configs like
option_settings:
aws:elb:loadbalancer:
SecurityGroups: sg-abcd1234
don't seem to do anything. Also, since that existing SG is strictly defined, I don't want to use ManagedSecurityGroup since that would modify the existing SG. Any ideas how to achieve this? Help would be highly appreciated. | In Elastic Beanstalk, how to set existing security group to load balancer with .ebextensions?
If you are referring to bot spam, consider using a good ol' CAPTCHA: "Protect your website from spam and abuse while letting real people pass through with ease." noCAPTCHA reCAPTCHA is Google's new version of reCAPTCHA, and it makes it easier to prove your users are real humans without having to type in the classic distorted text image. This version simply offers a checkbox that says "I'm not a robot." When you check the box, it performs a number of tests using a "risk analysis engine" to determine if you're human or not. (A Python sketch of the server-side verification follows this row.) | I am using AWS Lambda, API Gateway and SES to process a contact form. Is there any recommendation on how to secure the contact form against spamming? Thank you. Michael | How to secure a contact form against spamming which is processed by AWS Lambda and API Gateway
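If you go the reCAPTCHA route, the token your form submits still has to be verified server-side, which fits naturally in the Lambda that handles the contact form. A hedged Python sketch using Google's siteverify endpoint; the secret is a placeholder.

import json
import urllib.parse
import urllib.request

RECAPTCHA_SECRET = "your-recaptcha-secret"  # placeholder

def is_human(recaptcha_response_token):
    """Verify a reCAPTCHA token with Google's siteverify endpoint."""
    data = urllib.parse.urlencode(
        {"secret": RECAPTCHA_SECRET, "response": recaptcha_response_token}
    ).encode()
    req = urllib.request.Request("https://www.google.com/recaptcha/api/siteverify", data=data)
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read().decode())
    return result.get("success", False)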
Please findthis linkfrom AWS Blogs which is the exact use case for Cognito User Pool you are looking for and a very detailed step by step guide to implement it. I tried it and it worked, only difference is the deployment code consumes much more resources on S3 and lambda than Js code hence a pricing factor. If you are looking for similar use case of java script here is ademoimplementation for same. It should solve your problem.BRShareFollowansweredMay 24, 2016 at 14:27JeetJeet5,60988 gold badges4545 silver badges7878 bronze badgesAdd a comment| | I am exploring Amazon Cognito for one of my work related needs, I foundthis-aws cognito blog post about to access user pools via java scriptandthis-aws cognito sdk on java, alsothis-use cases to access user pools in javascriptI am lookingsimilarexamples or something similar reference in Java, my objective is to change such java code to Lambda functions, and all such user pools access would be done from API Gateway(On backend Java based lambda functions would work). | How to Access User Pools using the Amazon Cognito Identity SDK for Java |
API Gateway does not support this use case today; each method and path must be explicitly defined in your API definition.Supporting such passthrough proxies is a request we have heard from other customers and we may consider supporting it in future updates to the service.UPDATE 09/20/2016: I'm happy to announce that we've launched a set of features to allow for proxying of requests as described above. See ourannouncementfor more details.ShareFolloweditedSep 20, 2016 at 23:56answeredMay 18, 2016 at 18:19Bob KinneyBob Kinney8,96011 gold badge2828 silver badges3535 bronze badges3While not ideal, you could perform this routing/redirection in Lambda, could you not? Either by returning a redirect or proxying a call to the target server, perhaps?–ericpeters0nJun 29, 2016 at 23:28@ericpeters0n API Gateway would still need to support variable or "greedy" resource paths, which it does not. Currently you need to define every resource/method in your API Gateway definition and map this to an integration (e.g. Lambda).–Bob KinneyJul 1, 2016 at 16:04Gist (for single-level path):create-resource --path-part '{proxy+}' --parent-id {ID of / root resource};put-method --request-parameters method.request.path.proxy=true;put-integration --type HTTP_PROXY --uri 'http://downstream.domain/{basefile}' --request-parameters integration.request.path.basefile=method.request.path.proxy;create-deployment–Janaka BandaraApr 5, 2021 at 16:03Add a comment| | I'm trying to do some vary basic routing with API Gateway.
I need to achieve the following scenario:user makes request xxxx-execute-api.eu-west-1.amazonaws.com/prod/api1/a/b/../n?param1=val1&parma2=val2...¶mn=valn request should go toapi1.back.end/a/b/../n?param1=val1&parma2=val2...¶mn=valnuser makes request xxxx-execute-api.eu-west-1.amazonaws.com/prod/api2/a/b/../n?param1=val1&parma2=val2...¶mn=valn request should go toapi2.back.end/a/b/../n?param1=val1&parma2=val2...¶mn=valnuser makes request xxxx-execute-api.eu-west-1.amazonaws.com/prod/*****/a/b/../n?param1=val1&parma2=val2...¶mn=valn request should go toapi3.back.end/a/b/../n?param1=val1&parma2=val2...¶mn=valnThe routing should be done based on first path index after stage, and everything else after that should be passed to the http backend (like a transparent proxy).In other words, if path index 1 isapi1, forward request toapi1.back.end with full URI after path index 1; if path index 1 isapi2, forward request toapi2.back.end with full URI after path index 1; if path index 1 isanything elsethan the explicit values api1 or api2, forward request toapi3.back.end with full URI after path index 1;How would I achieve this, without adding any extra layers (lambda, cloudfront, ec2, etc.) ?Thank you! | Use API Gateway as http proxy with uri (request path+variable query params) passthrough |
It is not possible to do what you want using your current schema with plain SQL. If you can have application logic when creating your SQL query, you could dynamically create the SELECT statement. Option A: load the whole JSON in your app, parse it and obtain the required information this way. Option B: when storing values in your database, parse the JSON object and add the discovered keys to another table; when querying your Redshift cluster, load this list of values and generate the appropriate SQL statement from it. Here's hoping these workarounds can be applied to your situation. (A small Python sketch of generating such a query follows this row.) | I have a varchar(65000) column in my AWS Redshift database which is used to store JSON strings. The JSON key/value pairs change frequently and I need to be able to run a daily report to retrieve all key/value data from the column. For example:
create table test.json(json varchar(65000));
insert into test.json
select '{"animal_id": 1, "name": "harry", "animal_type": "cat", "age": 2, "location": "oakland"}' union
select '{"animal_id": 2, "name": "louie","animal_type": "dog", "age": 4}' union
select '{"animal_id": 3, "gender": "female"}' union
select '{"animal_id": 4, "size": "large"}' ;With the above data I can write the below query to get the attributes I know are there however if a new attribute is added tomorrow, my report query will not pick up that new key/value pair. Is there any way to do aSELECT *type query on this table?SELECT
json_extract_path_text(JSON,'animal_id') animal_id,
json_extract_path_text(JSON,'name') name,
json_extract_path_text(JSON,'animal_type') animal_type,
json_extract_path_text(JSON,'location') location,
json_extract_path_text(JSON,'age') age,
json_extract_path_text(JSON,'gender') gender,
json_extract_path_text(JSON,'size') size
FROM test.json
ORDER BY animal_id; | Querying JSON Strings in AWS Redshift |
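A small Python sketch of the spirit of Option B (not from the original answer): scan the JSON strings, collect every key that appears, and generate the json_extract_path_text SELECT for the report. The sample rows and the table/column names follow the question.

import json

# Sample rows as exported from the json column of test.json.
rows = [
    '{"animal_id": 1, "name": "harry", "animal_type": "cat", "age": 2, "location": "oakland"}',
    '{"animal_id": 3, "gender": "female"}',
]

keys = []
for row in rows:
    for key in json.loads(row):
        if key not in keys:
            keys.append(key)  # preserve first-seen order

select_list = ",\n    ".join(
    f"json_extract_path_text(json, '{k}') AS {k}" for k in keys
)
query = f"SELECT\n    {select_list}\nFROM test.json\nORDER BY animal_id;"
print(query)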
Since the default cache behavior can't (AFAIK) be removed, this seems like a clever "serverless" solution:
1. Create a bucket in S3. The name won't matter; don't put anything in it.
2. Add a second origin to your CloudFront distribution, selecting the new bucket as the origin.
3. Create a second cache behavior with path pattern /assets/* pointing to your original origin.
4. Change the default cache behavior to use the new S3 origin (the unused, empty bucket).
CloudFront will forward requests for /assets/* to your existing server, where they will be handled as now, but all other requests will be sent to the empty bucket, which has no content and no permissions, so the response will be 403 Forbidden. Optionally, add an appropriate robots.txt file to that otherwise-empty bucket and make it publicly readable, so CloudFront will serve it to any crawlers that visit your distribution, disallowing them from indexing. This should hopefully prompt them to remove any already-indexed results and not try to index the assets or any other paths they might have learned by crawling the previously exposed content at the "wrong" URL. | I have gone through the process of creating a CloudFront distribution with the Origin Domain Name pointing to my main Rails application, where assets (images, CSS, JS, etc.) are located at /assets. However, by default, the CloudFront distribution is mirroring the entire domain (including dynamic pages). How can I limit it to just the /assets sub-tree? | How do I limit AWS CloudFront so that it only serves requests from a single directory on my domain?
Not with API Gateway directly, but since API Gateway uses the Velocity template engine under the hood, you might consider downloading and running the Velocity engine on your own computer to debug your templates. | I am trying to create an API Gateway mapping template that transforms this:
{
"ref": "refs/heads/master"
}
into this:
{
"download_url":"http://example.com/master"
}
So I tried this:
{
"branch": $input.path($.ref).substring($input.path($.ref).lastIndexOf('/')+1)
}
Testing this method I get a simple error: "Execution failed due to configuration error: Unable to transform request". Now, of course I would like to know why this failed. But more importantly: how can I debug this? Is there any way to get a more descriptive error message for a mapping template? | Is there a way to debug a mapping template in aws apigateway
You cannot connect them, as they are completely separate databases. However, you can put a simple user interface on top of your local DynamoDB database. I use the SQLite Browser: http://sqlitebrowser.org/. Once you have it installed, open the .db file located in the folder where you are running DynamoDBLocal.jar. You should be able to see all your tables and the data within them. You won't be able to see DynamoDB-specific things like your provisioned capacity, but I think this will give you enough of what you're looking for. Does this help? (A boto3 sketch of querying DynamoDB Local directly follows this row.) | Hello, thanks for viewing my question first! I am running Amazon DynamoDB locally and all databases are saved locally. With the local DynamoDB I have to show everything with a lot of code, but I feel the interface of the web service is much better, in which I can perform operations and see the tables directly and clearly. So may I ask how I can connect them, so that I can practice the coding and check the status easily? Looking forward to your reply and thank you so much! Sincerely | How to synchronize the local DynamoDb and Amazon DynamoDb web service
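Another lightweight way to poke at DynamoDB Local without the web console is to point an SDK client at the local endpoint. A hedged boto3 sketch; the port assumes the default DynamoDBLocal.jar settings, the table name is a placeholder, and the credentials are dummies (DynamoDB Local ignores them, but boto3 still wants something set).

import boto3

dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # default DynamoDBLocal.jar port
    region_name="us-west-2",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

print(dynamodb.list_tables()["TableNames"])
print(dynamodb.scan(TableName="MyTable", Limit=5).get("Items", []))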
You can delete the AMI without deleting any instances that were created using that AMI; your question makes it sound like that's not possible. You can easily browse the EC2 instances in the AWS web console and see what AMI was used to create them, or you can use the aws ec2 describe-instances command to list all your instances. The output of that command will include the ID of the AMI used to create each instance. (A boto3 sketch of the lookup and cleanup follows this row.) | I have launched several EC2 instances with my custom AMI. Now I want to completely delete the AMI, so first I need to terminate all instances running that AMI. Is it possible to do this with AWS's API? For down-voters: the recommended cleanup process from AWS instructed me to terminate all instances running the AMI (http://aws.amazon.com/articles/637). The reference may be obsolete, but there is no need to down-vote the question. | How to find EC2 instances running a certain AMI in order to delete both AMI and EC2 instances? [closed]
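A hedged boto3 sketch of the cleanup described above: find instances launched from the AMI, terminate them, then deregister the image. The AMI ID is a placeholder, and this really does terminate instances, so treat it as illustrative.

import boto3

AMI_ID = "ami-0123456789abcdef0"  # placeholder
ec2 = boto3.client("ec2")

# Find every instance that was launched from this AMI.
reservations = ec2.describe_instances(
    Filters=[{"Name": "image-id", "Values": [AMI_ID]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)  # destructive!

# Once the instances are gone, the AMI itself can be deregistered.
ec2.deregister_image(ImageId=AMI_ID)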
Connection draining refers to open TCP connections with the client; it has nothing to do with sessions on your instance. You may be able to do something with keep-alives if you do a TCP passthrough instead of an HTTP listener. The best route is to set up sessions to be shared between your instances and then disable stickiness on the load balancer. | We are kind of lost using the AWS ELB connection draining feature. We have an Auto Scaling group and an application that has independent sessions (a session on every instance). We configured the ELB listener over HTTP on port 80, forwarding to port 8080 (the port where the application is deployed), and we created an LBCookieStickinessPolicy. We also enabled connection draining for 120 seconds.
The behavior we want: we want to scale down an instance, but since the session is stuck to each instance, we want to "maintain" that session for 120 seconds (or whatever the connection draining configuration is).
The behavior we have: we have tried to deregister, set to standby, terminate, stop, and set an instance to unhealthy. But no matter what we do, the instance shuts down immediately, causing the session to end abruptly. We also changed the ELB listener configuration to work over TCP, with no luck. Thoughts? | ELB Connection Draining Configuration
I want to do the same. As far as I can tell, it's not possible out of the box with the latest Grafana (2.6 at the time of writing); see the related issue. A pull request implements it, and it's currently tagged as 3.0-beta1, so I expect we'll both be able to do what we want come version 3.0. EDIT: I installed 3.0-beta-1 and was able to use custom metrics. | I am new to Grafana. I am setting it up to view data from CloudWatch for a custom metric. The custom metrics namespace name is JVMStats, the metric is JVMHeapUsed, and the dimension is the instance id. If I configure these, I am not able to get the graph. Can you please advise me on how to get the data? Regards,
Karthik | Grafana - Configure Custom Metrics from Cloudwatch
The exception you are seeing means that the identity pool is not set up to allow unauthenticated identities. But since you are using a Facebook token and still getting this error, it seems the token may not have been set correctly on the credentials provider. This blog might be useful: https://mobile.awsblog.com/post/Tx92ASFNST8JPV/Using-Amazon-Cognito-with-Swift-sample-app-developer-guide-and-more | I am exploring AWS for iOS. I am trying to use the following things: 1. DynamoDB, 2. Cognito, 3. Facebook login. I had the AWS DynamoDB scan working when there wasn't any login integrated.
After integrating login with Facebook, I am configuring Facebook with Cognito like this:
if let fbToken = FBSDKAccessToken.currentAccessToken().tokenString{
regionType: CognitoRegionType,
identityPoolId: CognitoIdentityPoolId)
credentialsProvider.logins = [AWSCognitoLoginProviderKey.Facebook.rawValue: fbToken]
}
But after configuring this I no longer have access to DynamoDB. It says: "Unauthenticated access is not supported for this identity pool". Note: login is necessary in my case. | Unauthenticated access is not supported for this identity pool, while using DynamoDB
No, you cannot preserve the message id when moving between queues. SQS gives you the id and the timestamp: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ImportantIdentifiers.html What you can do in your application is either: (1) have the ability to pick from both queues (in normal operation you just use the normal queue, and you enable the DLQ when reprocessing), or (2) stamp the message id and the timestamp into the message body when you requeue, and in your application first look in the message body before looking at the message itself (you still need to use the SQS-assigned message id to acknowledge the processing, though). (A boto3 sketch of option 2 follows this row.) | Is there a way to move a message from one queue to another? We have a case where messages may end up in the DLQ due to a resource not being available. Once the issue is resolved we'd like to move the message back to the original queue and have it processed again. For our tracking purposes it'd be nice if the original MessageId and SentTimestamp were preserved. The closest thing I've found is creating a new SendMessageRequest object and copying the contents of the message over, but this will create a brand new message with a new id and timestamp. When a message is moved to the DLQ, the id and timestamp are preserved. Isn't it possible to just reverse this action somehow? | Move SQS Message To Different Queue
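A hedged boto3 sketch of option 2: drain the DLQ, stamp the original SQS id and timestamp into the body, and re-send to the main queue (queue URLs are placeholders). The new message still gets a fresh MessageId, which is exactly the limitation described above.

import json
import boto3

sqs = boto3.client("sqs")
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq"     # placeholder
MAIN_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10,
                               AttributeNames=["SentTimestamp"])
    messages = resp.get("Messages", [])
    if not messages:
        break
    for msg in messages:
        body = {
            "originalMessageId": msg["MessageId"],
            "originalSentTimestamp": msg["Attributes"]["SentTimestamp"],
            "payload": msg["Body"],
        }
        sqs.send_message(QueueUrl=MAIN_URL, MessageBody=json.dumps(body))
        sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])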
Yes, it is possible; AWS provides a way for it. Please refer to these two documents: 1) Signing and Authenticating REST Requests, 2) GET Object. Hope it will help you :) | I know how to download a file using PHP, but regarding Amazon S3's API: I just want to download using my own code. The API is great, but I want to create my own set of code. I know I have to set my bucket as "public". Is this possible - downloading without the API? I haven't tried it yet. | Can I download file from my Amazon S3 account without using S3 API?
In order for you to retrieve all of your products without input parameters, you can use the Reports API to request an inventory report or active listings reports or any of the report types here:http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html#ReportTypeCategories__ListingsReportsYou can call the Reports API just like the Products API, but there are extra steps involved. You first request the report using theRequestReportoperation, then you'll get back aGeneratedReportId. Take that Id and call theGetReportoperation and you'll get back the report once it's available. If you need more than a report, but need to work with the data in some other way, you can just write a routine in whatever language you're using to parse out the data in memory.Have you seen the client libraries? They do most of the work already, just plug in your keys.https://developer.amazonservices.com/gp/mws/api.html/188-4747010-1589520?ie=UTF8&group=bde§ion=reports&version=latestShareFolloweditedFeb 23, 2016 at 18:13answeredFeb 23, 2016 at 14:20ScottGScottG10.9k2525 gold badges8282 silver badges113113 bronze badges0Add a comment| | I have integrated the MWS API for my store. The issue is I was not able to get list of all products which I have submitted from feeds and also available products in Amazon store in account.I have tried all the api of MWS no any api giving all products.In Listmatchingproducts api it needs query parameter but for product listing there should not be query parameter required.So for all product listing which api will be used and how? | How to display all products without using ListMatchingProducts in the Amazon MWS Products API? |
I also have the same issue and found this :How to retrieve Amazon Returned Item from MWSAccording to this we can't get status as "Returned" using MWS. :(ShareFolloweditedMay 23, 2017 at 12:25CommunityBot111 silver badgeansweredFeb 16, 2016 at 13:47Pushpender SharmaPushpender Sharma29433 silver badges1515 bronze badgesAdd a comment| | I am using amazon mws api, I am trying to get ORDER status of amazon order.
But It does not provide me the Returned Order status. It only provide a very few order statuses.I am only getting following order status from amazon mws order api call.I need to know how Can I get the order status as returned? | Amazon mws api, I changed order status to returned , But still getting Shipped from API |
CloudWatch itself does not have a native export feature that will send data periodically to S3. As you suggest, you would need to develop a script that pulls the CloudWatch metrics you wish to store (in this case ELB metrics) using the AWS CLI and copies them to your S3 bucket on a regular basis. Using the get-metric-statistics command, the script would get the statistics for the specified metric and store the data in your S3 bucket. See also "Elastic Load Balancing Dimensions and Metrics". (A Python sketch of such a script follows this row.) | Is there a way to get CloudWatch metrics directly into S3? I don't need logs, but ELB metrics. I would like them logged to S3 on a regular basis (ideally as CSV). Right now I'm thinking of writing my own script to do it, but maybe there's an automatic way to put it in S3 (or Redshift)? | AWS CloudWatch Metrics directly to S3
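A hedged Python sketch of the script described above: pull one ELB metric with get_metric_statistics and drop it into S3 as CSV. The load balancer name, bucket name and one-day window are placeholders.

import csv
import io
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

end = datetime.utcnow()
start = end - timedelta(days=1)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Sum"],
)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Timestamp", "Sum"])
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    writer.writerow([point["Timestamp"].isoformat(), point["Sum"]])

s3.put_object(Bucket="my-metrics-bucket", Key=f"elb/{end:%Y-%m-%d}.csv", Body=buf.getvalue())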
You should include the dependency jars in a lib subdirectory. If you have a class mypackage.LambdaFunctionHandler, the zip file should have this basic structure:
.
|-- mypackage
| +-- LambdaFunctionHandler.class
+-- lib
|-- myjar1.jar
+-- myjar2.jar
| What is the standard practice for creating an AWS Lambda jar? Should we bundle the dependencies as jars within the zip file, or should the dependencies be unjarred and included as classes? As far as I know it is the first option that holds true, but this doubt came to mind when I was following the AWS thumbnail tutorial, which eventually created a jar that had classes for dependencies (like Jackson) rather than bundling the Jackson jar in the artifact. Is there a sample AWS Lambda zip file that I can download and try (one that has dependencies bundled as .jar files and not as .class files)? | Building AWS Lambda jar
The problems were:
1. I didn't set the correct permissions.
2. I just didn't wait long enough for the data to be processed by the Lambda function.
(A Python version of the same handler follows this row.) | I am streaming data to Amazon Kinesis, and I use AWS Lambda to handle the data and write it to DynamoDB. My Lambda code:
var doc = require('dynamodb-doc');
var dynamo = new doc.DynamoDB();
exports.handler = function(event, context) {
//console.log('Received event:', JSON.stringify(event, null, 2));
event.Records.forEach(function(record) {
// Kinesis data is base64 encoded so decode here
var payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
console.log('Decoded payload:', payload);
var tableName = "_events";
var datetime = new Date().getTime().toString();
dynamo.putItem({
"TableName": tableName,
"Item" : {
"eventID" : record["eventID"],
"eventName" : payload
}
}, function(err, data) {
if (err) {
console.log("dynamodb error: " + err);
context.done('error putting item into dynamodb failed: '+err);
}
else {
console.log('great success: '+JSON.stringify(data, null, ' '));
context.succeed('K THX BY');
}
});
});
// context.succeed("Successfully processed " + event.Records.length + " records.");
};
When I run a test, data is successfully saved to DynamoDB. But when I stream the real data, it doesn't happen, while the logs show that the data was received by the lambda function.
Also console.log() function doesn't work in putItem() block, so I have no idea how to debug this problem. | Amazon Lambda won't write to DynamoDB |
Everyone means everyone, with or without an AWS account - so unless you want to be on the hook for potential abuse, it's probably not a good idea unless you restrict it with appropriate conditions, i.e. by IP address. | When specifying permissions on an Amazon AWS SQS queue you can specify a principal or say Everyone. What does Everyone mean? This is not explicitly documented anywhere I could find. Does this mean "anyone in my AWS account" or "the whole world including evil hackers"? | What scope does an "Everyone" principal have in Amazon SQS permissions?
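To illustrate the condition the answer suggests, here is a hedged boto3 sketch of an "Everyone" grant restricted by source IP. The queue URL, ARN, CIDR range and action are placeholders I made up for the example.

import json
import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'   # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",                       # this wildcard is what "Everyone" means
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",        # placeholder
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}     # placeholder CIDR
    }]
}
# attach the access policy to the queue
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={'Policy': json.dumps(policy)})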
I faced the same issue: botocore.exceptions.DataNotFoundError: Unable to load data for: ec2/2016-04-01/service-2. I figured out that the directory was missing. Updating botocore by running the following solved my issue:
pip install --upgrade botocore | I am using boto3 in my project and when I package it as an rpm it raises an error while initializing the ec2 client. <class 'botocore.exceptions.DataNotFoundError'>: Unable to load data for: _endpoints. Traceback -
Traceback (most recent call last):
File "roboClientLib/boto/awsDRLib.py", line 186, in _get_ec2_client
File "boto3/__init__.py", line 79, in client
File "boto3/session.py", line 200, in client
File "botocore/session.py", line 789, in create_client
File "botocore/session.py", line 682, in get_component
File "botocore/session.py", line 809, in get_component
File "botocore/session.py", line 179, in <lambda>
File "botocore/session.py", line 475, in get_data
File "botocore/loaders.py", line 119, in _wrapper
File "botocore/loaders.py", line 377, in load_data
DataNotFoundError: Unable to load data for: _endpoints
Can anyone help me here? Probably boto3 requires some runtime resolutions which it is not able to get in the rpm. I tried using LD_LIBRARY_PATH in /etc/environment, which is not working:
export LD_LIBRARY_PATH="/usr/lib/python2.6/site-packages/boto3:/usr/lib/python2.6/site-packages/boto3-1.2.3.dist-info:/usr/lib/python2.6/site-packages/botocore: | boto3 throws error when packaged under rpm
It's fairly hard to give any solution without understanding your requirements, e.g. peak load, dependencies of these applications, etc. Just based on the preliminary info, you can try Amazon ECS/Docker so that you can deploy multiple applications on a single host. | I'm fairly new to AWS and, before I realised how awesome it is, the cost of running EC2 instances hit me back to reality. So here's my problem: I have about 130 APIs (Spring Boot) to run my application, and so far I've built them into about 15 modules. For example, the settings module has all the APIs related to changing the username and the password. I then uploaded these modules through Elastic Beanstalk into about 5 applications, each consisting of 3 environments. Now I get this feeling that I'm doing it all wrong, because the cost spikes up to $300 a month. Since I'm from India, Amazon doesn't support reserved instances here. It'd be a great help if you could guide me through what should be done instead. Any help would be great. Thanks. | How should i deploy all my APIs?
You can put the ${rdsInstanceName} in the environment section of a function's s-function.json file, then access it using process.env.MyRdsInstanceName within Lambda:
"environment": {
"MyRdsInstanceName": "${rdsInstanceName}"
...
}
and reference this stage/region-specific variable in your Lambda using something like:
var myRdsInstanceName = process.env.MyRdsInstanceName;
Hope this helps. | Is there a way to use a Serverless env variable in s-resources-cf.json? I create an RDS instance in s-resources-cf.json that's used by some of my lambdas. Instead of putting the db name and password into s-project.json or s-variables-env.json I'd like to reference env vars and have them filled in as part of the deployment, similar to how vars in s-variables-env.json can be referenced in s-resources-cf.json using ${}. | Using env variables in serverless s-resources-cf.json
You can set up an internal Elastic Load Balancer to round-robin requests to the slaves. Then configure two connections in your code: one that points directly to the master for writes and one that points to the ELB endpoint for reads. Or, if you're adventurous, you could set up your own internal load balancer using Nginx, HAProxy, or something similar. In either case, your LB will listen on port 3306. | I am new to AWS. I have a MySQL RDS instance and I just created 2 read replicas. My application is written in Java, and what I have done up until now is use JDBC to connect to the one AWS instance, but now how do I distribute the work around the 3 servers? | AWS rds - How to read from a read replica inside of a Java application?
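The question is about JDBC, but the two-connection idea the answer describes is language-agnostic; here is a minimal Python sketch with PyMySQL as a stand-in. The hostnames, credentials and queries are placeholders, not real endpoints.

import pymysql

# one connection straight at the master for writes...
write_conn = pymysql.connect(host='master.example.rds.amazonaws.com',
                             user='app', password='secret', db='mydb')
# ...and one at the load-balanced replica endpoint for reads
read_conn = pymysql.connect(host='internal-read-elb.example.amazonaws.com',
                            user='app', password='secret', db='mydb')

with write_conn.cursor() as cur:
    cur.execute("INSERT INTO events (name) VALUES (%s)", ("signup",))
write_conn.commit()

with read_conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM events")
    print(cur.fetchone())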
"Does it make sense to store that information in a local database?" Yes. Actually this sounds exactly like a typical caching setup. I would recommend looking into Redis instead of using a relational database for this.
"How do I then automatically run this function every 1 to 5 minutes in the background?" Probably a cron job. You would have to provide more information, like where your application is running (AWS EC2 or somewhere else?) and whether it is running on Linux or Windows, before I could give a more detailed recommendation. | I'm working on a site that pulls product price data from Amazon.com and Walmart. I'm guessing that in the future it will also pull data from other places. My first idea was to pull the data directly from Amazon (using their Product Advertising API) and then display the data on the site for every single visitor who landed on the page. That's not a bad idea if there aren't many product prices I'll be retrieving (or if the number of site visitors is low). I think that I will run into problems once the site gets busy and if I increase the number of products whose price I want to pull. Using the Amazon and Walmart APIs, I was able to make successful REST API calls and parse the XML returned to obtain the information that I needed. Does it make sense to store that information in a local database, update it say every 1-5 minutes, and then get the site visitors to pull the pricing information from my local database instead of making an API call to Amazon and Walmart? If I do go this route and create a function that uses the Amazon and Walmart API to pull price data, how do I then automatically run this function every 1 to 5 minutes in the background, 24/7/365? | Storing Amazon API data in Local Database
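A rough sketch of the caching pattern the answer recommends, using Redis. The key names, the 5-minute TTL and the fetch_from_amazon callback are my own choices for illustration; the cron-driven script would call refresh_price, while page views only ever call get_price.

import json
import redis

cache = redis.Redis(host='localhost', port=6379)

def refresh_price(product_id, fetch_from_amazon):
    # cron job path: hit the upstream API, keep the result for 5 minutes
    price = fetch_from_amazon(product_id)
    cache.setex('price:%s' % product_id, 300, json.dumps(price))

def get_price(product_id):
    # web request path: read from the cache only, never call the upstream API
    cached = cache.get('price:%s' % product_id)
    return json.loads(cached) if cached else None

A crontab line such as */5 * * * * python refresh_prices.py would then keep the cache warm.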
First, use IAM roles. That removes 90% of your credentials. Once you've done that, you can store (encrypted!) credentials in an S3 bucket and carefully control access. Here's a good primer from AWS: https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2 | I've been using AWS CodeDeploy with GitHub as the revision source. I have a couple of configuration files that contain credentials (e.g. New Relic and other third-party license keys) which I do not want to add to my GitHub repository, but I need them on the EC2 instances. What is a standard way of managing these configurations? Or, what tools do you use for the same purpose? | How do I handle configuration files containing credentials in AWS?
From ec2 - create, terminate, start or stop an instance in ec2:
Parameter: assign_public_ip (added in 1.5)
Choices: Yes / No
Comments: when provisioning within vpc, assign a public IP address. Boto library must be 2.13.0+
As long as your Ansible version is >= 1.5, you should be able to use this parameter.
- ec2:
...
image: ami-123456
...
vpc_subnet_id: subnet-12345678
assign_public_ip: yes | I am trying to create a public IP address and DNS name to be used for auto scaling groups that launch instances into an Amazon VPC. Is it true that I have to use the "assign_public_ip" parameter in the ec2_lc module? If yes, then how can I assign it in an Ansible script? I have created all the scripts for auto scaling, launch configuration and load balancing. It is just that I cannot log in to an instance unless and until I assign a public IP or public DNS name to it. | Assign Public IP address on AWS while creating Autoscaling groups that launch instances into an Amazon VPC in Ansible
A reservation is basically a set of instances. Instances share the same reservation ID if you launch them together. From the Amazon documentation:
reservation-id - The ID of the instance's reservation. A reservation
ID is created any time you launch an instance. A reservation ID has a
one-to-one relationship with an instance launch request, but can be
associated with more than one instance if you launch multiple
instances using the same launch request.
When launching an instance, you can launch multiple instances with one launch request. These instances then belong to one reservation. A reservation can be understood as an atomic launch of one or many instances. Do not mistake reservations for reserved instances; these are completely different concepts. As for your other question: yes, the security groups returned for the reservation match the groups of each instance in the reservation (all belong to the same security groups). | When running a DescribeInstancesRequest I get a list of reservations. Each reservation contains a list of security groups and a list of instances. So my question is: by what criteria are instances grouped into reservations, and can I count on the security groups returned by the reservation being the security groups of the instances in the reservation? | What is the role of the Reservation class in the AWS EC2 Java SDK?
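The same grouping is easy to see from Python (shown here with boto3 rather than the Java SDK the question uses): DescribeInstances returns a list of reservations, each wrapping the instances that were launched together in one request. The region is a placeholder.

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')
for reservation in ec2.describe_instances()['Reservations']:
    print(reservation['ReservationId'])
    for instance in reservation['Instances']:
        # every instance carries its own security groups as well
        print('  ', instance['InstanceId'], instance.get('SecurityGroups'))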
Well, for anyone interested in this in the future - after doing some research I finally discovered the proper syntax. It references the AWS API universal update structure, for which I unfortunately wasn't able to find documentation anywhere. Hint: analyse the XHR requests sent from your browser while working in the AWS administration console. Assuming use of aws-php-sdk-v3:
$sdk->createApiGateway()->updateIntegration([
'restApiId'=>'<your restApiId here>',
'resourceId' => '<specific resource id here>',
'httpMethod' => 'POST',
'patchOperations' => [
[
'op' => 'replace',
'path' => '/requestTemplates/application~1json',
'value' => '{"response":"Hello, Kitty!"}'
]
]
]);
The path parameter references a JSON-pointer string as described here. The op parameter is obvious enough - but when using copy or move there must also be a from parameter filled in with a JSON-pointer to the source. The value is just the raw string you want to write somewhere. Other possibilities and combinations are obvious. Good luck! | I'm desperately trying to find out how to change the mapping template for the integration request of a POST request in API Gateway with PHP SDK v3. I've googled for hours and it seems there's no further documentation for that, nothing.
The only thing is the official AWS documentation for it, and it's very brief. It seems really simple - let's call an update method, fill in a new application/json response and we're done - but there are four candidate API methods for doing that: UpdateMethod, UpdateMethodResponse, UpdateIntegration, UpdateIntegrationResponse, and for all of them there is the same documentation:
$result = $client->update<whatever>([
'httpMethod' => '<string>', // REQUIRED
'patchOperations' => [
[
'from' => '<string>',
'op' => 'add|remove|replace|move|copy|test',
'path' => '<string>',
'value' => '<string>',
],
// ...
],
'resourceId' => '<string>', // REQUIRED
'restApiId' => '<string>', // REQUIRED
]);
So, does anyone know: Which method is suitable for doing that? What to fill in in these four 'universal' fields? Has anybody ever done that through the v3 API? Any help is appreciated, thank you very much. | Updating API gateway integration request mapping template AWS PHP SDK v3
I would remove the provider key. The carrierwave-aws gem readme (I'm guessing you are using that or something similar) does not even mention the provider key. That might have been an old requirement that has since been deprecated. | I have been bumping my head on the wall trying to get this working in production. For some reason, it works locally but not up on Heroku. I keep getting this error message: ArgumentError in Sessions#index - invalid configuration option :provider. At first I assumed it was because of this, but after further digging I found out it's pointing to my initializers/aws.rb:
CarrierWave.configure do |config|
config.storage = :aws
config.aws_bucket = 'thehatgame'
config.aws_acl = :public_read
config.aws_authenticated_url_expiration = 60 * 60 * 24 * 365
config.aws_credentials = {
:provider => 'AWS',
:access_key_id => ENV['SECRET_KEY'],
:secret_access_key => ENV['SECRET_ACCESS_KEY'],
:region => ENV['S3_REGION']
}
end
Any help is welcomed. I did find a link to a similar question, but that didn't work either. | invalid configuration option `:provider'
Most of the time this issue occurs due to insufficient IAM permissions on the instance and the CodeDeploy service. You need to check the /var/log/aws/codedeploy-agent/codedeploy-agent.log file for detailed information. Also, in the /etc/codedeploy-agent/conf/codedeployagent.yml file you can set :verbose: true to get more info in the log file. These are the IAM policies you need to update:
// Policy Role for Code Deploy
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"autoscaling:PutLifecycleHook",
"autoscaling:DeleteLifecycleHook",
"autoscaling:RecordLifecycleActionHeartbeat",
"autoscaling:CompleteLifecycleAction",
"autoscaling:DescribeAutoscalingGroups",
"autoscaling:PutInstanceInStandby",
"autoscaling:PutInstanceInService",
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
// Policy Trust for Code Deploy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"codedeploy.us-west-2.amazonaws.com",
"codedeploy.us-east-1.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
// Instance Role for EC2 Instance
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:Get*",
"s3:List*"
],
"Effect": "Allow",
"Resource": "*"
}
]
} | I am new to Amazon CodeDeploy. I am getting an error when deploying: Deployment Failed - No hosts succeeded. I checked the codedeploy-agent service on my Linux machine and it's running.
How can I fix this issue? | Amazon EC2 Code Deploy No hosts succeeded |
I'm assuming you are looking to return from the lambda handler function. If so, while not elegant, you can do this:
json.loads(json.dumps(value, cls=JSONEncoder))
Not awesome, because eventually Lambda will convert that structure back into a string (not sure if there's a way to just skip the intermediate step of converting to a Python structure). | When you query data with your Lambda function from DynamoDB and there are binary or decimal/number types in it, my default setup below prompts a JSON encoding error saying it cannot deal with Binary or Decimal. I can work around this with a small piece of code that I attach to json.dumps(data, indent=2, cls=JSONEncoder):
class JSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, decimal.Decimal):
if o % 1 > 0:
return float(o)
else:
return int(o)
return super(JSONEncoder, self).default(o)
But using json.dumps() in front of your return statement double-JSON-formats the response, which leads to escaped signs. To reproduce the problem, just return data and it prompts the mentioned error. How can I affect my return statement so that it converts to JSON? UPDATE: The problem is solved, dirtily, by manually changing the items with:
test = operations['Items'][0]
test['id'] = float(test['id'])
But this seems messy. | AWS Lambda python custom response encoding
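Putting the answer above together with the question's encoder, here is a complete handler sketch. The table name is a placeholder and the scan is only there to produce Decimal values to convert.

import decimal
import json
import boto3

dynamodb = boto3.resource('dynamodb')

class JSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 > 0 else int(o)
        return super(JSONEncoder, self).default(o)

def handler(event, context):
    items = dynamodb.Table('my-table').scan()['Items']     # placeholder table
    # round-trip through the encoder so Lambda serializes the result once,
    # avoiding the double-encoded, escaped output described in the question
    return json.loads(json.dumps(items, cls=JSONEncoder))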
You can't query across dimensions, but you can create additional summary metrics to roll up the same data. For example, if you were recording messages processed on behalf of your customers, you might put metric data twice for each message - once to the summary message count, and once to the message count with the customer as a dimension (a sketch follows this entry). It's a bit redundant, but it works. If you take a look at the metrics set up in CloudWatch for an Elastic Load Balancer, you'll see the same metrics recorded several ways to summarize by availability zone, load balancer, load balancer and AZ, etc. | I recently added the ability for my application to upload custom metrics to AWS CloudWatch so that we can better monitor the performance characteristics of the system. I am now trying to create a report by querying those collected custom metrics using the AWS CloudWatch CLI. However, I've come across a seemingly insurmountable problem, namely the inability to aggregate statistics across dimensions for custom metrics emitted using PutMetricData, as per this article. Is anyone aware of a way to specify dimension values using something like wildcards or regular expressions (e.g. *, ?, .+ etc.)? | Query AWS CloudWatch custom metrics across dimensions
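A sketch of the double-write the answer describes, using boto3. The namespace, metric name and the Customer dimension are placeholders of my own; the point is that the same data point is recorded once as an overall summary and once per customer.

import boto3

cloudwatch = boto3.client('cloudwatch')

def record_message(customer_id, count=1):
    cloudwatch.put_metric_data(
        Namespace='MyApp',                      # placeholder namespace
        MetricData=[
            # summary metric: no dimensions, aggregates across all customers
            {'MetricName': 'MessagesProcessed', 'Value': count, 'Unit': 'Count'},
            # per-customer metric: same value, with the customer as a dimension
            {'MetricName': 'MessagesProcessed', 'Value': count, 'Unit': 'Count',
             'Dimensions': [{'Name': 'Customer', 'Value': customer_id}]},
        ])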
The userAdminAnyDatabase role allows the user to grant access (for itself, or any other users) to any other database; however, that does not automatically grant that admin user read/write permission on all those databases (though it can bestow them upon itself). You can resolve your authentication issue by granting the user the additional role readAnyDatabase:
db.createUser(
{
user: "test1",
pwd: "password",
roles: [ { role: "userAdminAnyDatabase", db: "admin" }, {role:"readAnyDatabase",db:"admin"} ]
}
)
Link to MongoDB docs: Create a User Administrator | I created an admin user:
> db.createUser(
... {
... user: "administrator",
... pwd: "password",
... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
... }
... )
Successfully added user: {
"user" : "administrator",
"roles" : [
{
"role" : "userAdminAnyDatabase",
"db" : "admin"
}
]
}
and now I'm trying to use it to log in with:
ubuntu@***ip number***:/etc$ sudo mongo --port 27017 -u administrator -p password --authenticationDatabase admin
This is what it returns:
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:27017/test
2015-10-27T15:33:25.670+0000 E QUERY Error: 18 Authentication failed.
at DB._authOrThrow (src/mongo/shell/db.js:1271:32)
at (auth):6:8
at (auth):7:2 at src/mongo/shell/db.js:1271
Mongo is installed on an Amazon EC2 machine with Ubuntu. What is missing? | Remote and local authentication fails on Mongo DB 3.0.7 (installed on Amazon EC2)
Schoolboy error - turns out I needed to allow AWS traffic through to my Postgres server via the security group. | var pg = require("pg");
exports.handler = function(event, context) {
var conn = "blanked out for SO";
var client = new pg.Client(conn);
client.connect();
userName = event.userName;
var client = new pg.Client(conn);
client.connect();
var query = client.query({
text: 'SELECT address from users where userName= $1',
values: [userName]
});
query.on("row", function (row, result) {
result.addRow(row);
});
query.on("end", function (result) {
var jsonString = JSON.stringify(result.rows);
var jsonObj = JSON.parse(jsonString);
client.end();
context.done(null, jsonObj);
});
};
I'm using the above code to return one row from a table. I execute locally using lambda-local and have uploaded it to execute in AWS; I keep getting a timeout from AWS/local. I believe it has got to do with the query.on: if I add a context.done(null, "success") at the end just before the last brace it will return a success.
How do I get it to return the row from the query? | AWS Lambda postgres query is timing out
I've faced a similar problem recently with ruby 2.2.2. I was writing to AWS S3 with theaws-sdkgem. I found the solution onthis issue on aws-sdk GitHub.There is a memory leak in theStringIOclass shipped with ruby 2.2.0 to 2.2.2. This class is used byaws-sdkwhen sending files to S3. This bug wasreported and fixedon 2.2.3.Hopefully, upgrading to ruby 2.2.3 will fix your problem.ShareFollowansweredOct 12, 2015 at 11:16haradwaithharadwaith2,7101515 silver badges2020 bronze badges2Hi haradwaith, tried this solution mentioned on github link you provided. Seems like it is working will have to monitor for a day to be sure.–anshul410Oct 12, 2015 at 13:36Hi, this resolved the issue. Though this answer does not detail memory leak debugging in ruby, I am accepting this answer as the issue I faced is resolved.–anshul410Oct 13, 2015 at 9:50Add a comment| | I have a production rails application server, the memory usage of rails worker process of which increases from ~300 MB to ~1.2GB in 3-4 days.How can I debug this memory leak.
I am using rvm 2.2.2 and my application server is deployed in AWS:ElasticBeanstalk . I am using puma web server.Please provide detailed answer. | Ruby production server memory leak |
According to the documentation, DynamoDBContext.SaveAsync takes a type T and a CancellationToken. It does not take any form of delegate type at all. What you want to do is:
public async Task SaveAsync<T>(T entity, CancellationToken ct)
{
await context.SaveAsync<T>(entity, ct);
Console.WriteLine("entity saved");
} | I'm trying to save my administrator class object to DynamoDB using the Context.SaveAsync method:
// Save admin to DynamoDB.
context.SaveAsync(admin,(result)=>{
if (result.Exception == null)
{
Console.WriteLine("admin saved");
}
});
but it keeps bothering me with the following error: cannot convert `lambda expression' to non-delegate type `system.threading.cancellationtoken'. How do I handle this issue? I'm using Xamarin Studio for OS X. | Problems with SaveAsync task in DynamoDB for C#
You need to modify your code to create the target directory on your local filesystem if it does not already exist. It should look something like this:
use File::Path qw[make_path];
sub export_bucket {
my ( $conn, $bucket, $directory ) = @_;
$bucket = $conn->bucket($bucket);
my $response = $bucket->list();
print $response->{bucket} . "\n";
for my $key ( @{ $response->{keys} } ) {
print "\t" . $key->{key} . "\n";
_export_file( $conn, $bucket, $key->{key}, $directory . '/' . $key->{key} );
}
}
sub _export_file {
my ( $conn, $bucket, $name, $path ) = @_;
print "Downloading $name file", "\n";
my $test = $bucket->get_key_filename( $name, 'GET', $path );
print Dumper($test);
my $acl = $bucket->get_acl($name);
print Dumper($acl);
## get path directory part
my ($dir_part) = $path =~ /(.+)\/[^\/]+$/;
unless ( -d $dir_part ) {
make_path($dir_part);
}
open my $acl_file, '>', $path . '.acl';
print $acl_file $acl;
close $acl_file;
} | I am trying to fetch files from S3 using the Amazon::S3 module in Perl. I am successfully able to download files which are not prefixed, but unable to fetch prefixed files like test/abc.txt. I am using the code below.
sub export_bucket {
my ($conn, $bucket, $directory) = @_;
$bucket = $conn->bucket($bucket);
my $response = $bucket->list();
print $response->{bucket}."\n";
for my $key (@{ $response->{keys} }) {
print "\t".$key->{key}."\n";
_export_file($conn,$bucket,$key->{key}, $directory.'/'.$key->{key});
}
}
sub _export_file {
my ($conn,$bucket,$name,$path) = @_;
print "Downloading $name file","\n";
my $test = $bucket->get_key_filename($name,'GET',$path);
print Dumper($test);
my $acl = $bucket->get_acl($name);
print Dumper($acl);
open my $acl_file, '>', $path.'.acl';
print $acl_file $acl;
close $acl_file;
}
Suggest what changes I should make so that when a prefixed file/folder comes I am able to download the folder as well. Thanks. | Fetch and upload files and prefixed files to s3 using perl
When this question was asked in late 2015, non-U.S. mobile numbers were not supported when sending SMS from SNS, and at the time, that was the correct answer to the original question. As noted in the comments, this is no longer the case: SNS announced global SMS capabilities in June 2016. See also http://docs.aws.amazon.com/sns/latest/dg/sms_supported-countries.html. Historical reference from the Internet Archive "Wayback Machine", captured in September 2015: "SMS notifications are currently supported for phone numbers in the United States. SMS messages can be sent only from topics created in the US East (N. Virginia) region. However, you can publish messages to topics that you create in the US East (N. Virginia) region from any other region." http://web.archive.org/web/20150919111507/http://docs.aws.amazon.com/sns/latest/dg/SMSMessages.html | I am using the AWS SNS service to push/send messages to US mobile numbers, and to do so used the method below. a) load SNS:
$sns = new A2Sns(array(
'key' => 'aaaaaaaaaa',
'secret' => 'bbbbbbbbbbbbbbbbb',
'region' => 'us-east-1'
));
b) create topic
c) set topic attribute
d) create subscription
But the same method is not working for Indian mobile numbers. Is it really possible? If so, what do I have to do for that? | AWS SNS for India
While Mircea's answer works, it wasn't ideal for my use case because it runs the migration on all the instances in the stack during the deploy. This will thrash your database if you have a lot of instances defined in your stack. What I ended up doing in the end was to use a custom cookbook that overrides only the migrate attribute, setting it to true for one and only one node. This forum post gave me the inspiration. I already had custom cookbooks enabled for my stack, and for this method to work you'll need to do the same. I then defined a deploy cookbook in my custom cookbooks repository that had only one file, deploy/attributes/customize.rb, containing:
migrate_node = 'rails-app1'
current_hostname = node[:opsworks][:instance][:hostname]
application = <your application short name>
if migrate_node == current_hostname
normal[:deploy][application][:migrate] = true
else
normal[:deploy][application][:migrate] = false
end
That code just hard-codes 'rails-app1' as the node to run the migrations, and then checks to see if the current node is that one. If so, it queues up the migration for that node. If not, it ensures the migration does not run on that node. | I have a few Rails stacks set up on AWS OpsWorks, and I primarily use the OpsWorks console web app to deploy my code to the stack from GitHub. On the 'Deploy app' page on OpsWorks, there is a 'Migrate database' switch that defaults to off. Database migrations in Rails are idempotent, so it never hurts to run the migration, but it can most definitely hurt if you forget to run the migration when it needed to be run. Is there any way I can have that switch default to 'Yes' to always run migrations? I don't want to do it with a custom recipe because I'd like the migration to run on one and only one instance during the deploy. Is there some configuration option that I'm missing so that the database migrations automatically run when I deploy code to the stack through the OpsWorks console? | How to always run migration during OpsWorks deployments to Rails stacks
"x-amz-server-side-encryption-customer-algorithm" and "x-amz-server-side-encryption-customer-key" should be used at server side when signing the URL and the client don't need to add any header to the requests.I don't know the PHP syntax but in Java SDK it works like this:generatePresignedUrlRequest = new GeneratePresignedUrlRequest(BUCKET_NAME, TOKEN)
.withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm())
.withKmsCmkId("YOUR_KMS_KEY_ID");ORgeneratePresignedUrlRequest.addRequestParameter("x-amz-server-side-encryption", "aws:kms");
generatePresignedUrlRequest.addRequestParameter("x-amz-server-side-encryption-aws-kms-key-id", "YOUR_KMS_KEY_ID");When signing for GET method you shouldn't do anything spacial.For more info you can look at this guide:Generating Amazon S3 Pre-signed URLs with SSERazShareFollowansweredNov 4, 2018 at 11:47Raz ZelingerRaz Zelinger68699 silver badges2525 bronze badges1very useful answer, but it gives me https://{bucketname}.s3.amazonaws.com URL which gives me a privacy error in the browser is there a way to generates3.amazonaws.comURL–Ahmed E. EldeebJul 21, 2019 at 11:26Add a comment| | I have created a signed URL for my s3 object.The object is stored using 'Server-Side Encryption with Customer-Provided Encryption Keys'.Now, When my client browses to the signed URL he gets :The object was stored using a form of Server Side Encryption. The correct parameters must be provided to retrieve the objectI need somehow make my client send the "x-amz-server-side-encryption-customer-algorithm" and "x-amz-server-side-encryption-customer-key" headers before reaching the URL.Any idea how can I achieve that ? | How to get Amazon s3 Encrypted object with signed URL? |
Sorry that you are having problems using Restcomm on Amazon Cloud. When you purchase Restcomm, you are only presented with the default region, which is US East (N. Virginia). Depending on the type of setup you use (one-click or custom) you should be able to configure the instance as needed. It is also possible to start the instance using the EC2 console by searching for the AMI in the region mentioned above. When you launch the instance, you will be able to configure the subnet and security groups that will allow you to successfully use Restcomm. Here is a screenshot showing how to choose the appropriate subnet; if you are not familiar with Restcomm, I suggest you leave this as default so that you can concentrate on familiarizing yourself with the platform. The screenshot shows the default settings. | I tried to install Restcomm for VoIP Innovations on AWS using the default setup but it didn't work. This is the error message: "Your recent Restcomm for VoIP Innovations launch failed. Your requested instance type (m1.large) is not supported in your requested Availability Zone (us-east-1e). Please retry your request by not specifying an Availability Zone or choosing us-east-1a, us-east-1c, us-east-1b. (Service: AmazonEC2; Status Code: 400; Error Code: Unsupported; Request ID: 9654025e-42e7-402f-93f7-969d2eb04845)" I tried small/medium too, with no luck :( Any clue on how to force it onto 1a/b/c rather than 1e? Thanks! | How to select a specifc Availability Zone on AWS to get Restcomm working?
I was struggling with this for hours, and right after I posted the question here I found the answer. The trick is that apparently you can't use AWS root keys for this; you have to create an IAM user and give SQS permissions to that user. | So, I have an AWS Access Key Id and its respective AWS Secret Key. Furthermore, from the AWS dashboard I have created a queue in SQS and put a test message in it. I have downloaded boto for Python. However, when I try to run even the most basic command, I get an error:
import boto.sqs
conn = boto.sqs.connect_to_region('us-west-2',
aws_access_key_id = settings.AWSAccessKeyId,
aws_secret_access_key = settings.AWSSecretKey)
print conn.get_all_queues()
exit()I get the following error:Traceback (most recent call last):
File "my_prog.py", line 43, in <module>
print conn.get_all_queues()
File "/usr/local/lib/python2.7/dist-packages/boto/sqs/connection.py", line 446, in get_all_queues
return self.get_list('ListQueues', params, [('QueueUrl', Queue)])
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1186, in get_list
raise self.ResponseError(response.status, response.reason, body)
boto.exception.SQSError: SQSError: 403 Forbidden
<?xml version="1.0"?>
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<Error>
<Type>Sender</Type>
<Code>OptInRequired</Code>
<Message>The AWS Access Key Id needs a subscription for the service.</Message>
<Detail/>
</Error>
<RequestId>45255e1e-aaff-548b-9d71-105bda134530</RequestId>
</ErrorResponse>
The keys are correct, I am using them successfully in other contexts. | I have an AWS access key and I have created an SQS queue but still can't access SQS using python boto
Most likely what is happening is that your lambda function is not keeping up with the data rate coming into Kinesis. The way lambda functions with Kinesis event streams work, there is only one (single-core) lambda function attached to each shard, so you are only getting 3 functions. You can see if the function is falling behind by looking at the iteratorAgeMilliseconds metric on Kinesis. This, coupled with a look at the average execution duration of your lambda function and the lambda event source batch size, should give you a good idea of how much data your lambda function is actually processing per second: (event source batch size) * (average size of each record) / (average duration of lambda invocation) * (number of shards) = total bytes/second processed. You can use this to determine how many shards of Kinesis you need to keep up with the load (a small calculator sketch follows this entry). Also, you may want to look into a "fan out" setup, wherein you have one lambda function reading events off of the stream and then directly invoking another lambda function with the events. This gets you away from the shard affinity in lambda. | We're using Kinesis as a buffer for Lambda, which then inserts into Redshift. The Lambda function creates a file in S3 and does a COPY in Redshift to insert the data. We're seeing very high delays in data coming out of Kinesis and we're worried this is resulting in data older than 24 hours being dropped. We currently have 3 shards running, and are nowhere near our maximum throughput. In the same space of time we've also seen an increase in the amount of data going into Kinesis. However, as we are only using about a third of our write throughput, we shouldn't be throttled. There are no fluctuations in any of the Lambda or Redshift metrics. The attached files show the stats from our Kinesis stream. What could be causing this to happen, and how would I go about fixing it? | Increased Kinesis latency resulting in low gets and high delays via Lambda
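The answer's formula written out as a small Python calculator; the numbers in the example call are made up purely for illustration.

import math

def required_shards(batch_size, avg_record_bytes, avg_invocation_secs,
                    incoming_bytes_per_sec):
    # bytes/second a single shard's Lambda chain can drain
    per_shard = batch_size * avg_record_bytes / avg_invocation_secs
    # each shard is served by exactly one Lambda at a time, so round up
    return math.ceil(incoming_bytes_per_sec / per_shard)

# e.g. 100-record batches of ~1 KB records, 2 s per invocation, 500 KB/s incoming
print(required_shards(batch_size=100, avg_record_bytes=1000,
                      avg_invocation_secs=2.0, incoming_bytes_per_sec=500000))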
Redis is a basic component on which you can build a queuing system. That said, implementing a true guaranteed-delivery system on top of Redis is not trivial, especially if you need transactional behavior (a sketch of the usual pattern follows this entry). Here are some queuing systems implemented with Redis in various languages: http://python-rq.org/ https://github.com/resque/resque https://github.com/ask/celery https://pypi.python.org/pypi/rpqueue Similar things could be developed in Go, but when it comes to a true guaranteed-delivery semantic, the devil is in the details. You will probably be better served by a dedicated queuing system, such as RabbitMQ or ActiveMQ. While they are more complex, they offer more features, and probably better guarantees. Here is a Go client for RabbitMQ: https://github.com/streadway/amqp You might also be interested in looking at disque (a dedicated queuing solution from the Redis author), and the corresponding Go client at https://github.com/EverythingMe/go-disque Finally, beanstalkd is another lightweight solution; you can find the Go client at https://github.com/kr/beanstalk | So I've been looking into building an application that relies on something similar to a messaging bus. The idea is to be extremely fault tolerant. I have a queue of tasks that need to be performed, and here are the steps that I believe would be the end goal in a queue-based system. A/B server initially adds an item to the queue to be worked on. C server listening on the queue sees a new item on the queue and begins to work on it. I believe it should lock the item with a timeout (if the server crashes, etc., I need other workers to be able to work on it). One of two things happens now: C server fails to respond to the task, or it has taken long past the timeout, and the queue unlocks it and passes it to server D for processing - OR - C server completes the task and ultimately removes the entry from the queue. I was looking into different solutions, and I see a lot of people using Redis as a backend for performing this operation, but its queue is fairly simplistic. For example, RPOPLPUSH will remove the key from the queue. What happens if the server crashes? The queue now thinks it processed that item, and we have a lost task. What steps are recommended for ensuring task completion, and noting task failures so they can be reprocessed by another server? I intend to write the tasks in Go and I'm open to using cloud services such as AWS. | Queue based processing
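Not a full guaranteed-delivery system, but a Python sketch of the usual Redis pattern behind the libraries the answer lists: RPOPLPUSH moves the task onto a "processing" list instead of deleting it, and a reaper re-queues tasks whose lease has expired. The key names and lease length are my own choices.

import time
import redis

r = redis.Redis()
QUEUE, PROCESSING = 'tasks', 'tasks:processing'

def work_one(handler, lease_seconds=60):
    task = r.rpoplpush(QUEUE, PROCESSING)        # atomic move, nothing is lost on crash
    if task is None:
        return
    r.set('lease:%s' % task.decode(), int(time.time()) + lease_seconds)
    handler(task)
    r.lrem(PROCESSING, 1, task)                  # ack: done, drop from the processing list
    r.delete('lease:%s' % task.decode())

def requeue_expired():
    # run periodically: hand crashed/slow workers' tasks back to the main queue
    for task in r.lrange(PROCESSING, 0, -1):
        lease = r.get('lease:%s' % task.decode())
        if lease is None or int(lease) < time.time():
            r.lrem(PROCESSING, 1, task)
            r.lpush(QUEUE, task)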
The parameters to AWS.config are access_key_id and secret_access_key, without the aws_ prefix. http://docs.aws.amazon.com/AWSRubySDK/latest/AWS.html#config-class_method | I have written a rake task that does a copy_to from one directory in a bucket to another directory within the same bucket. When I test it locally it works fine, but when I deploy it to an environment it returns AWS::S3::Errors::AccessDenied: Access Denied. I assume that it has something to do with the AWS credentials on the environment I am deploying to. I am also confident that the problem is to do with the copy_to, as I accessed the bucket from the rails console and had no issues. My copy_to statement is as follows:
creds = YAML::load_file(Rails.root.join("config", "s3.yml"))
AWS.config(aws_access_key_id: creds[:access_key_id],
aws_secret_access_key: creds[:secret_access_key])
s3.buckets['test-bucket'].objects['path to file'].copy_to('new_path') | AWS::S3::Errors::AccessDenied: Access Denied when trying to do copy_to |
Windows Explorer doesn't allow you to create file names starting with a dot. I believe the reason for this lies in DOS file names, which had separate fields for name and extension, and the file name could not be empty. The only workaround I know is to use mkdir to create directories starting with a dot:
mkdir .ebextensions
Similarly, if you have trouble creating a file starting with a dot, you can use:
echo > .config
This will create an empty file named .config which you will be able to modify with Notepad, for example. | I would like to update the httpd conf in Elastic Beanstalk so I can set AllowOverride all and hence allow myself to execute the following .htaccess code:
RewriteEngine On
RewriteRule ^/?category/([^/d]+)/?$ searchPage.php?crs_category=$1 [L,QSA]
The .htaccess file is located in the root. To configure httpd conf I am trying to follow the guide below: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html My problems are as follows: My computer does not allow me to create a folder called .ebextensions - it won't accept the dot in front of ebextensions. I will place the .config file inside the .ebextensions folder with the following code:
<Directory />
Options None
AllowOverride All
Order deny,allow
Deny from all
</Directory>
<Directory /path/to/your/htdocs/>
AllowOverride All
</Directory>
I am not sure if I am following/executing things properly, and would appreciate any guidance. | configuring httpd conf in aws
I don't think there is any way to do this with DynamoDB. The batch API does not support it. It is a bug in boto that the put_item method of the BatchTable object accepts the overwrite parameter. If you check the code, you can see that it does nothing with that parameter; it is ignored because there is nothing it can do with it. DynamoDB just doesn't support this. At least not yet. | I'm using boto's DynamoDB v2 and I'm writing items to a table in batch. However, I'm unable to prevent DynamoDB from overwriting attributes of existing items. I'd rather have the process fail. The table has the following schema:
from boto.dynamodb2.table import Table, HashKey, RangeKey
conn = get_connection()
t = Table.create(
'intervals',
schema=[
HashKey('id'),
RangeKey('start')
],
connection=conn
)
Say I insert one item:
item = {
'id': '4920',
'start': '20',
'stop': '40'
}
t.put_item(data=item)
Now, when I insert new items with batch_write, I want to make sure DynamoDB will not overwrite the existing item. According to the documentation, this should be achieved with the overwrite parameter of the put_item method of the BatchTable class (which is the one used as a context manager in the example below):
new_items = [{
'id': '4920',
'start': '20',
'stop': '90'
}]
with t.batch_write() as batch:
for i in new_items:
batch.put_item(data=i, overwrite=False)
However, it doesn't. The stop attribute in my example gets the new value 90, so the previous value (40) is overwritten. If I use the table's own put_item method, the overwrite parameter works: setting it to True replaces the stop value, while setting it to False results in a ConditionalCheckFailedException. How can I get that exception when using batch_write? | boto DynamoDB: How can I prevent overwriting items with batch_write?
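Since the batch API cannot be made conditional, one possible workaround (shown here with boto3 rather than the boto 2 layer from the question, and with the question's table/key names) is to fall back to individual conditional puts and skip the ones that already exist:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('intervals')

def put_if_absent(item):
    try:
        table.put_item(Item=item,
                       # only write when no item with this hash key already exists
                       ConditionExpression='attribute_not_exists(#pk)',
                       ExpressionAttributeNames={'#pk': 'id'})
    except ClientError as e:
        if e.response['Error']['Code'] != 'ConditionalCheckFailedException':
            raise
        print('skipped existing item', item['id'], item['start'])

You lose the batching efficiency, but you get the ConditionalCheckFailedException behaviour the question asks for.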
You can use AmazonS3.generatePresignedUrl(String, String, Date) to generate a presigned URL and pass it to Picasso. Here is an example: "Generate a Pre-signed Object URL using AWS SDK for Java". Though the example is for the Java SDK, it's applicable to the AWS Android SDK. One caveat noted in the comments: the presigned URL changes every time the Amazon S3 credentials expire, so Picasso will load images from the network rather than from its cache unless you keep a map of your presigned URLs. | I am storing my images on Amazon S3. I use the following code to download an image from Amazon S3:
S3ObjectInputStream content = s3Client.getObject("bucketname", url).getObjectContent();
byte[] bytes ;
bytes = IOUtils.toByteArray(content);
bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
bitmap = Bitmap.createScaledBitmap(bitmap, width, height, true);
bitmap = CommonUtilities.getRoundedCornerBitmap(bitmap, 30);
cache.put(url, new SoftReference<Bitmap>(bitmap));
return bitmap;
While going through the Picasso documentation, I read that to load images we simply need to do:
Picasso.with(context).load("http://i.imgur.com/DvpvklR.png").into(imageView);
So how do I download Amazon S3 images through Picasso? | How do I use the Picasso library while downloading images from Amazon S3?
The reason you're having this issue is that you're running artisan config:cache before the "release" - on /var/app/ondeck. If you run eb ssh, you'll see that your app is living inside /var/www. You need to run config:cache using a post-deploy hook - however it seems that this isn't officially supported yet. Here's a workaround: http://junkheap.net/blog/2013/05/20/elastic-beanstalk-post-deployment-scripts/ | I used to have a working deployment system to Amazon Beanstalk with EC2 servers, and recently I added some optimization post commands to my scripts such as:
composer dump-autoload
sudo php artisan optimize --force
sudo php artisan route:cache
Now on one of my API endpoints, strangely, I get half of the data and then at the end I have an error: file_put_contents(/var/app/ondeck/storage/framework/sessions/34325rfeq4324qfgr4): failed to open stream: No such file or directory. What's causing this and how do I fix it in the EC2 deployment setup? EDIT: I just found out something! If, on the server that's giving me the error, I run the command below to clear the config cache, my error disappears. So how exactly do I fix this so that I can still run php artisan config:cache and not have it break?
php artisan config:clear | Laravel storage/framework/sessions on EC2 gives failed to open stream
I have found no documentation on the maximum length of an ARN overall; it's service-specific more often than not, with maximum lengths for each element in the ARN, presumably combining within each service to define the maximum, as this forum answer suggests. A quick search indicates that you'll see a maximum of 2048 here, or 256 here, or the oddly-sized non-power-of-2 111 here, or... you get the idea. It varies by service. The longest ARNs I have encountered have been from S3, where an ARN can include a key prefix, so those could theoretically exceed 1,024, though I've not encountered any that actually approached that length. Bearing in mind that ARNs, or at least many of their elements, are case-sensitive, I tend to go with VARBINARY() with a length suited to the expected size range for the service in question. I would expect many applications to be quite comfortable somewhere below 255, but that's an architectural decision specific to your application and the AWS services involved. | What's the best datatype to store an ARN in MySQL? I'm guessing a VARCHAR with a large character limit would be best. Is there a limit to how long ARNs can be? How long of a VARCHAR should I have? | Best Data Type for Storing AWS ARNs in MySQL?
You've linked documentation particular to CloudFormation, so a bunch of the complexity is probably associated with that context. Here's the stand-alone documentation for the CloudWatch Logs agent: Quick Start, Agent Reference. If you're on Amazon Linux, you can install the 'awslogs' system package via yum. Once that's done, you can enable the logs plugin for the AWS CLI by making sure you have the following section in the CLI's config file:
[plugins]
cwlogs = cwlogs
E.g., the system package should create a file under /etc/awslogs/awscli.conf. You can use that file by setting the AWS_CONFIG_FILE=/etc/awslogs/awscli.conf environment variable. Once that's all done, you can run:
$ aws logs push help
and
$ cat /path/to/some/file | aws logs push [options]
The agent also comes with helpers to keep various log files in sync. | tl;dr: the configuration of the CloudWatch agent is #$%^. Any straightforward way? I wanted one place to store the logs, so I used the Amazon CloudWatch Logs agent. At first it seemed like I'd just add a Resource saying something like "create a log group, then a log stream and send this file, thank you" - all declarative and neat, but... According to this doc I had to set up a JSON configuration that created a BASH script that downloaded a Python script that set up the service that used a generated config in yet another language somewhere else. I'd think logging is something frequently used, so there must be a declarative configuration way, not this 4-language crazy combo. Am I missing something, or is the ops world so painful? Thanks for ideas! | A sane way to set up CloudWatch logs (awslogs-agent)
The creation of roles for EMR (for example, the default roles) only needs to be done once per account per region; it is not a step that needs to be performed regularly. If you wanted to create the roles via boto, you could manually create them using the IAM API (http://boto.readthedocs.org/en/latest/ref/iam.html) and build the roles in accordance with the default policies as defined at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-iam-roles-defaultroles.html | I'm using Elastic MapReduce with boto. Everything was working fine, but since this week I'm getting this error: InstanceProfile is required for creating cluster. I'm trying to fix this issue, and it seems that now we need to create a default role for Elastic MapReduce. I did this using awscli with the commands below, but isn't there another way to do this (for example with boto)? If there isn't, is it possible to create, for example, a Python function that executes these 3 commands: 1 - pip install awscli, 2 - aws configure, 3 - aws emr create-default-roles? After using the commands above I also needed to add this to the MapReduce job function:
ami_version="2.4.9",
job_flow_role="EMR_EC2_DefaultRole",
service_role="EMR_DefaultRole" | InstanceProfile is required for creating cluster - create python function to install module |
A good starting point would be to store your instances in a variable:
$ec2 = Get-EC2Instance
$ec2.instances
will display all the info you can get. The same can be done like this:
(Get-EC2Instance).Instances
or filter only what you need:
(Get-EC2Instance).Instances | select-object ImageId,InstanceId,VpcId | Is there a way of getting an instance's description (e.g. instance ID, AMI ID, VPC ID, etc.) and tags using PowerShell? I tried Get-EC2Instance | select * but it doesn't give all the information I need. | Use powershell to get instance description and tags
In order to use $autoloader->registerNamespace('Aws'), the AWS lib you seek must be on your PHP include path, which probably includes your ./library directory. Instead, you have the AWS lib buried down in ./library/Eplan/AmazonCloudSearch, which almost certainly is not on your PHP include_path. Try moving the AWS library up two levels, directly into the ./library directory. As noted in the comments, you can also add the include path in your application.ini file with includePaths.myPath = APPLICATION_PATH "/../wherever/it/may/be". | I'm trying to implement the Amazon Web Services PHP SDK in my Zend 1 project, but it seems to fail loading the classes. I have the library in library/Eplan/AmazonCloudSearch and after investigation it seems that in order to load the namespace I need to call the registerNamespace method of Zend_Loader_Autoloader::getInstance(), so I've got this at the top of the autoloader (I also tried to put it in the bootstrap without luck):
require_once 'Zend/Loader/Autoloader.php';
$autoloader = Zend_Loader_Autoloader::getInstance();
$autoloader->registerNamespace("Aws");Namespaces of the AWS library are like this:Aws\namespaceThe errors I get are likeWarning: include_once(Aws/Common/Aws.php): failed to open stream: No such file or directory in /srv/www_nfs_desarrollo/vhosts/desarrollo.techmaker.net/httpdocs/library/Zend/Loader.php on line 134Autoloader full code:http://pastebin.com/gS9mcntKI've been the full day struggling my head trying to solve this without luck, any ideas? | Zend 1 3d party NameSpaces autoload don't working |
+50A few suggestions:Try to enclose your commands in quotes, that's a requirementAlso, not sure if $NODE_HOME is working - could you run simple test like echo $NODE_HOME > /tmp/test.txt?ShareFollowansweredMay 11, 2015 at 17:06sap1enssap1ens2,90711 gold badge2727 silver badges3131 bronze badgesAdd a comment| | Mypackage.jsonhas:"scripts": {
"start": "node_modules/.bin/coffee server.coffee",
"test": "NODE_ENV=test node test/runner.js",
"coverage": "NODE_ENV=test COVERAGE=1 node test/runner.js -R html-cov test/ > ./test/coverage.html",
"testw": "fswatch -o test src | xargs -n1 -I{} sh -c 'coffeelint src server.coffee ; npm test'",
"db:drop": "node scripts/drop-tables.js",
"encryptConfig": "node_modules/.bin/coffee config/encrypt.coffee",
"decryptConfig": "node_modules/.bin/coffee config/decrypt.coffee",
"postinstall": "npm run decryptConfig"
},
When I deploy to Elastic Beanstalk, I'd like to run the postinstall, but apparently it doesn't do that. Okay, no problem. I created a file called .ebextensions/00.decrypt.config which has:
commands:
00-add-home-variable:
command: sed -i 's/function error_exit/export HOME=\/root\n\nfunction error_exit/' /opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh
container_commands:
02-decrypt-config:
command: $NODE_HOME/bin/npm run decryptConfig
However this doesn't seem to run either. What am I doing incorrectly? | How can I run an npm script for an AWS Elastic Beanstalk Deployment?
Upgrading to the latest mongodb (3.0.2) helped resolve this issue for me. P.S. - make sure you kill the already-running mongod process using killall -15 instead of pkill -9, as the latter could cause damage. | | We just migrated our infrastructure on AWS from one account to another.
The mongo version installed on the server is 2.4.9
I am new to MongoDb and faced the following 2 errors when I ran the web app:
{"name":"MongoError","errmsg":"exception: FieldPath field names may not start with '$'.","code":16410,"ok":0}
and
{"name":"MongoError","errmsg":"exception: the $cond operator requires an array of 3 operands","code":16019,"ok":0}
The web app was working on our previous instances. Can anyone point me in the right direction? | MongoError exception: FieldPath field names may not start with '$'
When you use Amazon Cognito, the service takes care of all the steps necessary to create a unique identifier for your app's users and retrieve temporary, limited-privilege AWS credentials. This means that you can follow security best practices and use these temporary, limited-privilege credentials instead of having to hardcode credentials into your app. You can still use AccessKey and SecretKey with AWSStaticCredentialsProvider in the AWS Mobile SDK for iOS, but we discourage its use in production apps for security concerns. (The asker later noted that using the default IAM role for the Cognito identity pool resolved their issue, so the AccessKey approach was not needed.) | | I am using DynamoDB, or any AWS service, for the first time and thus have very little idea how they work. I have seen the documentation where it's mentioned that to use DynamoDB, using IAM and Cognito Sync is the better way. I have a very simple requirement: I have an iOS app in which a user can register and log in. The functionality is provided by a third-party SDK, but I want to store the user information in the DynamoDB table named Users. I am not sure if I need Cognito Sync or IAM for this. So my question is, is it possible to use DynamoDB in iOS without using these two extra features? If yes, then is it possible to do it with the AWS mobile SDK, or do I need some other way such as using RESTful APIs? | Using DynamoDB without Cognito API
Thanks everyone. It seems that AWS added a new set of IPs, which were added to the necessary files since we access it behind a proxy. | | I had an instance which I could smoothly ssh into. However, suddenly today I was not able to do it after starting. Since I had an EBS and had edited the /etc/fstab file, some of the answers at stackoverflow said it has some issues with the ssh_config file. I detached the existing EBS and mounted it to a micro instance so as to fix the broken volume. However, I was unable to ssh into the new micro.
I terminated the instance in 1. Created a new one with similar config. Still cannot ssh.
Just for the sake of it, I created one without an EBS.
I can ping both 2 and 3 but not ssh.
The exact problem I am facing is this: aws ssh connection refused. Anyone been there, done that? | Issues while trying to ssh instance over AWS:
You can prevent users from downloading the file directly using an HTTP referrer policy. You should restrict the HTTP referrer to the domain where you host the embedded media player. This article explains it very well if you're using Amazon AWS. Keep in mind that the referrer can be spoofed using third-party software or browser extensions, but it should prevent the casual user from downloading the file. Note that this works only for S3: you have to pass the Referer header through CloudFront, and it will become part of the cache key, which will significantly reduce the cache hit ratio. | | I am using an Amazon CloudFront distribution for video files on my website.
To read those files I am using signed url method with canned policy to read video file from amazon cloudfront distribution. Below is the example of signed url.http://cloudfront-domainname/VideoFileName.mp4?Expires=1427805933&Signature=signature-of-policy-statement&Key-Pair-Id=cloudfront-key-pair-idIf I directly paste this url in address bar, I am able to download the video.
How can I prevent the user from downloading this video while the video still plays in the HTML5 media player? | how to prevent downloading video from amazon cloudfront using signed urls
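To make the referrer-restriction idea from the answer above concrete, here is a rough boto3 sketch that attaches such a policy to the S3 origin bucket. The bucket name and referrer pattern are made-up placeholders, and, as noted, the Referer header can be spoofed and must be forwarded by CloudFront for the condition to ever be evaluated.
import json
import boto3

s3 = boto3.client('s3')

# Hypothetical bucket and site; replace with your own values.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowEmbeddedPlayerOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-video-bucket/*",
        "Condition": {"StringLike": {"aws:Referer": "https://www.example.com/*"}}
    }]
}

s3.put_bucket_policy(Bucket='my-video-bucket', Policy=json.dumps(policy))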
Yes, your understanding of the first two scenarios is correct. To throw more light on this, Amazon Cognito has two concepts when dealing with multiple providers: linking and merging. Linking is when, on a single device, you are logged in with provider A and already have an identityId, and you log in with provider B. In this scenario the identityId will remain the same and provider B will be linked to the existing identity. Now let us say that you are logged in with provider A on device X and with provider B on device Y. Both these end users (identities) will have their own unique identityIds. Now if you log in with provider A on device Y, it will result in a merge of these two identities and return you an identityId which will have both providers associated with it. I hope this clears any confusion around using multiple providers. | | I've been having some issues understanding the Amazon AWS Cognito workflow for adding multiple login providers; here is some pseudo code to demonstrate my questions:
{Code to get CognitoCachingProvider}
Device Cognito ID = A
{Code to get Google Token}
withLogin(Google Token)
if(Identity is changed)
identityListener(
Device Cognito ID = ID in Cognito Pool)
else(
Device Cognito ID = a;
cognitoprovider.setLogin (Google Token);
)
withLogin(Facebook Token);
if(identity is changed)(
*****Device Cognito ID = ID in cognito Pool;*****
cognitoprovider.setLogin(Google TOken);
cognitoprovider.refresh();)
else
(Cognitoprovider.setLogin(Facebook Token);
cognitoprovider.refresh();)So my real question is in the second step. Let's say that I want to bind both Facebook and Google to a specific Cognito ID.Three examples:1) There is no Cognito ID assigned - assign Google+ and Facebook
2) There is a cognito ID assigned with Google and no Facebook
- The acquisition of the Google Login should not affect the Cognito ID
- The acquisition of the Facebook Login is simply added as another provider
3) There is no cognito ID assigned with Google but one with facebook:
- The acquisition of the Google Login creates a new and separate Cognito ID that is immediately overwritten by the Facebook Login Token's associated Cognito ID.
Is that correct? | Android Amazon AWS Cognito Workflow confirmation
I understand this post is 3 years old, but it is possible to do this now with the AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/ec2/stop-instances.html):
aws ec2 stop-instances --instance-ids
aws ec2 start-instances --instance-ids
This could also be done from a Lambda function, and scheduled. | | I'm trying to stop and then immediately start (NOT REBOOT) my Amazon EC2 server from within my instance. I have CLI (Command Line Interface Tools) and am running a Windows 2012 server. Basically, I want to ec2-stop-instances from a batch, and then ec2-start-instances right after. But I want the start-instances to run after a minute or so. Is there a way to send the command and ask Amazon to wait a minute before it is run? That way, running the batch script will stop then start the instance. Again, I can't use reboot; for some reason it does not work for my needs. | Stopping then starting EC2 from command line
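A rough sketch of the Lambda variant mentioned in the answer above, assuming boto3 and a hypothetical instance id. The function's timeout would need to be raised to cover the stop/wait/start cycle, and the scheduling itself would be done with a CloudWatch Events / EventBridge rule.
import time
import boto3

# Hypothetical instance id; replace with your own.
INSTANCE_IDS = ['i-0123456789abcdef0']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    # Wait until the instance is fully stopped before starting it again.
    ec2.get_waiter('instance_stopped').wait(InstanceIds=INSTANCE_IDS)
    time.sleep(60)  # roughly the one-minute delay the asker wanted
    ec2.start_instances(InstanceIds=INSTANCE_IDS)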
I had a similar issue today and here are the steps I followed to investigate. I modified line no. 133 at .git/AWSDevTools/aws/dev_tools.py to print the exception, like:
except Exception, e:
print e
* Please be careful with the spaces, as Python does not work if the indentation is off. I ran the command git aws.push again
and here is the exception printed :-BotoServerError: 403 Forbidden
{"Error":{"Code":"SignatureDoesNotMatch","Message":"Signature not yet current: 20150512T181122Z is still later than 20150512T181112Z (20150512T180612Z + 5 min.)","Type":"Sender"},
The issue was a time difference between the server and my machine; I corrected it and it started working fine. Basically, the printed exception helps you find the exact root cause; it may be related to the secret key as well. | | When I try to push incremental changes to the AWS Elastic Beanstalk solution I get the following:
$ git aws.push
Updating the AWS Elastic Beanstalk environment None...
Error: Failed to get the Amazon S3 bucket name
I've already added FULLS3Access to my AWS user's policies. | elastic beanstalk: incremental push git
It was easy:
eb deploy -r us-east-1
Done! :) | | Is it possible to deploy an Elastic Beanstalk application to several regions? Is there a way to change the eb config to deploy 2 applications at once when you do eb deploy? I need to deploy the same code to the us-west and us-east regions. | Deploy elastic beanstalk application to several regions
It is very common to run Emacs locally (e.g. on your Mac) and edit files on remote systems using TRAMP, an excellent built-in library. To edit a remote file over SSH, find-file using a pattern like
//ssh:user@host:path/to/file
In this case path/to/file is a path on the remote system relative to your home directory. As you might expect, starting this value with / lets you specify an absolute path. I think that AWS forces you to specify a .pem key file for its SSH connections. The easiest way to make this work with Emacs is to add your AWS machine to ~/.ssh/config, e.g.
Host example
HostName example.com
User ubuntu
IdentityFile ~/path/to/example.pem
and then edit //ssh:example:path/to/file in Emacs. Your SSH configuration settings should take effect. It is also possible to use multiple hops, which lets you chain together TRAMP methods, e.g. "SSH to server example.com and then edit file some_file.txt using sudo". | | I am new to emacs and was trying to use it when editing files on an AWS server. The problem is that when I ssh from the terminal (on my Mac) and try to use the Meta or Esc keys, they don't work. The meta key just causes characters like this --> √≈ß to appear. The esc key causes nothing to happen. Does anyone know how to fix this? | Using Emacs on AWS Ubuntu system - Meta and Esc keys don't work
No -- it is more accurate to say that the size of getObjectSummaries() is the number of objects in the page of results from listObjects() that you received. Otherwise, yes -- it is correct that each object summary in the list should correspond to an S3 object. AWS pages the results of large result sets at 1,000 results per page. In the general case, you will need to expect to make this call more than once: if isTruncated() is true, call getNextMarker() to get the next marker to use in your next ListObjectsRequest to listObjects(); if false, this last response is the last one you need to handle, and you can finish counting your objects at this point. With that in mind, this isn't a great solution for very large buckets since you have to enumerate the whole bucket. Here's an alternative solution using s3cmd that you may be able to call from Java. | | I need to know the number of files that are stored under an S3 bucket. Currently, ObjectListing doesn't have a method such as count or numberOfObjects. However, it has a method that will return a List of S3ObjectSummary:
public java.util.List<S3ObjectSummary> getObjectSummaries()
Since it is a List, I can call the size() method, but is it accurate and right to assume that the size of the getObjectSummaries() List is the same as the number of objects stored under a bucket? | Will getObjectSummaries get the count of objects stored in a S3 Bucket?
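The question is about the AWS Java SDK, but for illustration here is the same enumerate-every-page idea sketched in Python with boto3 (the bucket name is a placeholder). The point is identical to the answer above: you must walk every page of results, so counting this way scales poorly for huge buckets.
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

count = 0
for page in paginator.paginate(Bucket='my-bucket'):  # placeholder bucket name
    count += page.get('KeyCount', 0)

print('objects in bucket:', count)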
Amazon Simple Email Service (SES) is an outbound-only email sending service. It takes the place of an SMTP server and provides extra capabilities that improve deliverability of your email. You pass an email and recipient list to SES via an API call or by treating it as your SMTP server. SES then sends the email to the recipients. It does not alter the contents of the email, so it cannot insert fields such as "Dear ,". SES is not a list manager. It does not maintain a "list" of subscribers. | | I have just registered the Amazon SES email service and verified my domain as well as the sender email address. Now I can't find where I can import my list of email IDs. These are my subscriber email addresses, to which I want to send mail. Can anyone let me know how to import the list? I can't find any option on the dashboard. | How to Import email list on Amazon SES?
You are getting an error because exchange is a keyword, used to move the data in a partition from a table to another table that has the same schema but does not already have that partition; for details see the Hive Language Manual and HIVE-4095. | | I am using the latest AWS Hive version, 0.13.0.
FAILED: ParseException: cannot recognize input near 'exchange' 'string' ',' in column specification
I am getting the above error when I run the below (create table) query.
CREATE EXTERNAL TABLE test (
foo string,
exchange string,
bar string) ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/home/hadoop/test/';
If I rename exchange to something like 'xch' it creates the table successfully. Any reason? | FAILED: ParseException: cannot recognize input near 'exchange' 'string' ',' in column specification
Here's how to create an SQS connection that connects to fake_sqs:region = boto.sqs.regioninfo.SQSRegionInfo(
connection=None,
name='fake_sqs',
endpoint='localhost', # or wherever fake_sqs is running
connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
aws_access_key_id='fake_key',
aws_secret_access_key='fake_secret',
is_secure=False,
port=4568, # or wherever fake_sqs is running
region=region,
)
region.connection = conn
# you can now work with conn
# conn.create_queue('test_queue')
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs | | I'm currently in need of connecting to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. Currently in Java and node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST", aws_secret_access_key="TEST", port=9324, is_secure=False);
and then:
queue = connAmazon.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance? | boto.sqs connect to non-aws endpoint
When using Amazon SES in sandbox/test mode, all from/to/cc addresses must be verified email addresses. The error "Email address is not verified" means that at least one of the email addresses is not verified. It could be the TO, FROM, CC, or BCC. In your case, ensure that both addresses are verified and/or that the sending domain is a verified domain. | | I can send emails with the SMTP option in .NET, but I need to use the .NET SDK to send emails via Amazon. It gives me an error that says "Email address is not verified", even though I am sure that it is verified. By the way, I am using a test account (sandbox). What am I doing wrong, or am I missing anything? Here is my code:
var sesClient = new AmazonSimpleEmailServiceClient("AKIAJHXXXXXXXXXXX", "RVGdbCKXILwjUIKSexKlwXXXXXXXXXXXX",Amazon.RegionEndpoint.USEast1);
var dest = new Destination
{
ToAddresses = new List<string>() { "[email protected]" },
CcAddresses = new List<string>() { "[email protected]" }
};
var from = "[email protected]";
var subject = new Content("You're invited to the meeting");
var body = new Body(new Content("Please join us Monday at 7:00 PM."));
var msg = new Message(subject, body);
var request = new SendEmailRequest
{
Destination = dest,
Message = msg,
Source = from
};
var verify = sesClient.VerifyEmailAddress(new VerifyEmailAddressRequest { EmailAddress = "[email protected]" });
try
{
var response = sesClient.SendEmail(request);
}
catch (Exception ex)
{
throw ex;
} | cannot send emails with Amazon SES .NET SDK |
This is still a work in progress. Please see: https://github.com/GoogleCloudPlatform/kubernetes/pull/2672
...
"createExternalLoadBalancer": true
...
}This doesn't seem to work for AWS. I'm getting the following error when running the service create:requested an external service, but no cloud provider suppliedI know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, but so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB.What's the recommended way of doing this on AWS? | Web facing application on Kubernetes and AWS |
When you create a pre-signed URL, that is done completely locally. You could do it "by yourself", but it is much easier to use the SDK, and there would be no practical differences. See that there is no "sign" action on the S3 API. However, you can not sign at the "bucket level", as the signature is checked per-object. I believe signing a whole bucket would not be feasible. (It was confirmed in the comments that url_for is resolved locally and does not invoke the remote service.) | | I want to be able to serve URLs to clients that are "signed" and so are only valid for 24 hours (for example).
However, I don't want to call S3 for every URL generated:
AWS::S3::S3Object.new(bucket, name).url_for(:read, :secure => true, :expires => expires_in).to_s
Instead, I want to generate the URL by myself (I have the file name and the bucket link, so I can build it myself). However, I want to sign the URL at the bucket level (say, once a day for all the files in a given bucket). Is this possible? | Amazon S3 secure URL at the bucket level
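The question uses the Ruby SDK, but the same point holds in any SDK: the signing happens locally, with no request to S3. For illustration only, here is a boto3 sketch (bucket and key are placeholders); note that each URL still signs a single object, not the whole bucket.
import boto3

s3 = boto3.client('s3')

# No network call is made here; the URL is computed locally from your credentials.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'some/file.txt'},  # placeholders
    ExpiresIn=24 * 3600,
)
print(url)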
There could be a lot of reasons because of various configuration errors, but the most common problem is when you neglect to attach an internet gateway to your VPC.
(VPC) can't communicate with the Internet. You can enable access to
the Internet from your VPC by attaching an Internet gateway to the
VPC, ensuring that your instances have a public IP address, creating a
custom route table, and updating your security group rules.http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.htmlShareFollowansweredNov 22, 2014 at 11:39E.J. BrennanE.J. Brennan46.2k88 gold badges9191 silver badges118118 bronze badges1There's an internet gateway, it was created when the wizard was run–mlbiamNov 23, 2014 at 1:53Add a comment| | I used the VPC setup wizard to create a type #2 VPC (public and private subnet). I create a jump box in public and a server in the private subnet. I can SSH into the jump box and from there into the server running on the private subnet. However once I'm on server on the private subnet, I can't get to the internet or run yum. What am I missing? Everything I'm reading says I should be done and able to start accessing the internet from the private subnet without doing anything special to routing tables. | Why can't I access the internet from my private subnet on an AWS VPC? |
One option is an Overlay Network such asweave. This is very easy to set up, lets containers talk to each other on different hosts or even different datacenters, and lets you choose what to connect and what to isolate. Please note: I work on weave.Why might you need an add-on?
As described inthe docs, by default Docker creates a single-host bridge and gives each container a private IP address. So now you have two problems: how to set up routes between bridges on different machines, and how to hide or change those IP addresses because they can't be used on the public internet.You could just use the host's network, with thedocker runcommand-line option--net=host, but this loses some of the encapsulation you get from containers.There are active discussions going on aboutDocker network driverswhich would make solutions easier to implement, but no code as yet.Amazon havelaunched their own container clustering servicewhich promises to let containers talk to each other, but again it's not available yet.ShareFollowansweredNov 18, 2014 at 9:51BryanBryan11.8k33 gold badges5555 silver badges7979 bronze badgesAdd a comment| | I want to ping a docker container on Host A (EC2 instance) from another docker container on Host B (another EC2 instance). What are the steps I need to follow? | Ping docker containers on different hosts with ip addresses |
Okay. It seems like the AmazonIdentityManagementClient listInstanceProfiles() call does the trick. Something like the following should work. Sorry for the bother.
public Collection<String> getIAMRolesRange() {
AmazonIdentityManagementClient identityManagementClient = new AmazonIdentityManagementClient(new BasicAWSCredentials(awsAccount.getAccessKeyId(), awsAccount.getAccessSecret()));
ListInstanceProfilesResult listInstanceProfilesResult = identityManagementClient.listInstanceProfiles();
List<String> iamRoles = new LinkedList<String>();
for(InstanceProfile instanceProfile: listInstanceProfilesResult.getInstanceProfiles()) {
iamRoles.addAll(Collections2.transform(instanceProfile.getRoles(), iamRoleToStringFunction));
}
return iamRoles;
}
| | There are several notes on how to run an instance with a given IAM role and how to create one. But what about retrieving such data from the EC2 service using the Amazon client (Java SDK) or HTTP requests via the Amazon API? Can I get such a list of IAM roles somehow (they were previously created in the EC2 console by the devOps team, so I must somehow expose them in another web application)? Thanks in advance. | How can I get list of IAM Roles from EC2 using Java SDK or Amazon API?
After additional research I found that it is possible to have SQL Server CLR stored procedures in AWS. In the David Iffland article "How To Use SQL CLR in Amazon AWS RDS" there is a step-by-step instruction for how to do it. There is an improvement in the new version of the AWS DB Parameter Group, and it just requires changing the flag clr enabled to 1. Note that this may no longer be the case for SQL Server 2017 on RDS. | | I have 2 SQL Server CLR stored procedures and we are moving our database server to AWS. I would like to know if those CLR stored procedures will work after the move to AWS. Can I use SQL Server CLR stored procedures in AWS? Do I need to do anything special, or maybe I need to rewrite them in T-SQL? | SQL Server CLR stored procedures in AWS
You need to set the entire public dns address as your host.(e.g.)c <- RS.connect(host = "ec2-X-X-X-X.{availability_zone}.compute.amazonaws.com")ShareFolloweditedNov 27, 2015 at 11:47jangorecki16.5k44 gold badges8282 silver badges165165 bronze badgesansweredOct 27, 2014 at 19:25Garth MilesGarth Miles4622 bronze badges0Add a comment| | Currently trying to connect to an Amazon AWS server via IP address on port 6311. I've set up Rserve as a daemon on the AWS server and have checked that it is in fact listening on port 6311 by calling the netstat command, but when I run the follow from my local R client:c <- RS.connect(host = "x.x.x.x")I get this error message:- cannot connect to x.x.x.x:6311The local client does have RSClient installed, we've verified that Rserve is installed and running correctly on the host server.Does anyone have any suggestions on how to connect to a remote server using this method?? | Rserve connection from local R client to Rserve host on AWS Server |
Answer found here:https://forums.aws.amazon.com/message.jspa?messageID=579018#579018In short the security on the load balancers were off.ShareFollowansweredNov 3, 2014 at 17:24Steve HoltSteve Holt8599 bronze badgesAdd a comment| | Hello I am i doing a proof of concept with AWS's EC2 and Loadbalancer. I have a wildfly quickstart running on 2 different EC2 instances. They work fine, in that i can go to them directly in my browser and get the sites to come up. One says hello server 1 and the other 2. Running on port 8080.I have a load balancer set up and it sees my instances and the healthcheck i have in place says they're working.The configuration is: 80 (HTTP) forwarding to 8080 (HTTP)When i go to the dns entry + health check path (HTTP:80/wildfly-helloworld/HelloWorld) for the load balancer in my browser it times out.The bizarre thing again is that it shows my instances as "In Service" and healthy.Also security on the load balancer is allows ALL inbound and outbound traffic.Any suggestions?Thanks | AWS Load Balancer |
PHP doesn't automatically resolve a string containing multiple path levels to children of an object like you are attempting to do. This will not work even if $obj contains the child hierarchy you are expecting:
$obj = ...;
$path = 'level1->level2->level3';
echo $obj->$path; // WRONG!You would need to split up the path and "walk" through the object trying to resolve the final property.
Here is an example based on yours:<?php
$obj = new stdClass();
$obj->name = 'Fred';
$obj->job = new stdClass();
$obj->job->position = 'Janitor';
$obj->job->years = 4;
print_r($obj);
echo 'Years in current job: '.string($obj, 'job->years').PHP_EOL;
function string($obj, $path_str)
{
$val = null;
$path = preg_split('/->/', $path_str);
$node = $obj;
while (($prop = array_shift($path)) !== null) {
if (!is_object($obj) || !property_exists($node, $prop)) {
$val = null;
break;
}
$val = $node->$prop;
// TODO: Insert any logic here for cleaning up $val
$node = $node->$prop;
}
return $val;
}Here it is working:http://3v4l.org/9L4gcShareFollowansweredOct 8, 2014 at 1:16itsmejodieitsmejodie4,17811 gold badge1818 silver badges2020 bronze badges1Hi, sorry it's not working for you. As you can see from the link I provided it works on a standard object. You might try experimenting with changing!property_exists($node, $prop)toisset($node->$prop)–itsmejodieOct 9, 2014 at 0:50Add a comment| | I am have written a helper function to "cleanup" callback variables for input into MySQL. This is the function that I wrote:public function string($object, $objectPath) {
if (!empty($object->$objectPath) || $object->$objectPath !== '') {
$value = $object->$objectPath;
} else {
return 'NULL';
}
if (!empty($value) || $value != '') {
return "'".str_replace("'","''",$value)."'";
} else {
return 'NULL';
}
}Now,$objectis always an object returned by the call, and$objectPathis always a string to points to a given value. Here's where the problem comes in. This works:$value = $this->db->string($object, 'foo');However, this does not work:$value = $this->db->string($object, 'foo->bar->foo1->bar1');Whenever$objectPathis more than "one layer" deep, I get the following error from (Amazon's) client library:Fatal error: Call to undefined method MarketplaceWebServiceOrders_Model_Order::getFoo->Bar() in /path/to/Model.php on line 63The code block that the error refers to is this:public function __get($propertyName)
{
$getter = "get$propertyName";
return $this->$getter(); // this is line 63
}$objectis not XML, so I can't useSimpleXMLElementandXPath.What is the problem with my code? Is it that am I concatenating an object and a string? If so, how can I make that possible? How can I get this function to do what I intended it to do?By the way, I'm using PHP 5.4.27. | PHP: how to resolve a dynamic property of an object that is multiple levels deep |
SHORT VERSION: It seems that you don't need a kernel id at all if your AMI is HVM, so long as you set your options right.
LONG VERSION: If you create your AMI using a boto call like:
ami_id = conn.register_image(
name='some_name',
description='some_description',
architecture='x86_64',
root_device_name='/dev/sda1',
snapshot_id=snapshot_id,
delete_root_volume_on_termination=True)It seems to work if the instance's original ami is the most recent hvm ami listed in the aws console. But stopped working once aws updated its default ami's. I assumed its because something on the backend picks up the right kernel id or something. Either way this working is VERY CONFUSING!However if you set the virtualization_type to hvm it seems consistently works without a kernel id.ami_id = conn.register_image(
name='some_name',
description='some_description',
architecture='x86_64',
virtualization_type='hvm',
root_device_name='/dev/sda1',
snapshot_id=snapshot_id,
delete_root_volume_on_termination=True)On the other hand if your instance is paravirtual it seems that so long as you specify the kernel you don't need to specify the virtualization_type in the boto call.ShareFollowansweredOct 7, 2014 at 23:25TristanMatthewsTristanMatthews2,49144 gold badges2525 silver badges3535 bronze badgesAdd a comment| | SHORT VERSION:For AWS how do you find the kernel id from a given ami id, or from an instance launched with that ami.LONG VERSION:I have an aws instance where all drives are ebs backed. I'm trying to launch an exact copy of it from snapshots of the drives.The first step in this process is to create a new ami from the root volume snapshot. When I have done this previously I've just googled the ami id and found somewhere that had the Kernel ID posted for the standard ubuntu ami I had selected from the aws console, but that doesn't seem to be work this time.A lot of searching, reading the documentation, and aws forums makes it sound like the kernel file should be populated in the instance description, but for me (and a lot of other people in the forums) its blank. I tried launching a new (from the console) instance [Amazon Linux AMI 2014.09 (HVM) - ami-08842d60] the kernel field is blank for that one also.If I create a brand new machine, snapshot it, and then leave the kernel as default the ami works just fine, but default doesn't work for any of the older ami's I have tried.Any one have any idea what the process for finding kernek ID for an ami is these days? | Launch an aws instance from snapshot, can't find kernel ids |