Columns: Response (string, 8 to 2k characters), Instruction (string, 18 to 2k characters), Prompt (string, 14 to 160 characters)
Your CloudFormation resource pgDB should be of the type AWS::RDS::DBInstance, and therefore it instructs CloudFormation to create one RDS instance. Each RDS instance can contain a variable number of databases or database schemas. CloudFormation does not provide a means to provision the instance upon creation. To create Postgres databases on an RDS instance you have to use, for example, an EC2 instance to provision the database using stored SQL dumps or by running plain commands, such as CREATE DATABASE, with psql. There is already a question regarding RDS provisioning. You can create multiple RDS database instances by defining another AWS::RDS::DBInstance resource with its own parameters. By default, you can have 40 RDS Postgres instances on an account (see the limits in the FAQ).
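For illustration, a second instance could be declared alongside pgDB like this (a sketch only; the Storage2/DBName2 parameter names are hypothetical and would need matching Parameters entries):
"pgDB2": {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
        "AllocatedStorage": { "Ref": "Storage2" },
        "DBInstanceClass": { "Ref": "DBInstanceClass" },
        "DBInstanceIdentifier": "my-second-instance",
        "DBName": { "Ref": "DBName2" },
        "Engine": "postgres",
        "MasterUsername": { "Ref": "DBUser" },
        "MasterUserPassword": { "Ref": "DBPassword" },
        "DBSubnetGroupName": { "Ref": "myDBSubnetGroup" },
        "VPCSecurityGroups": [{ "Fn::GetAtt": [ "myDBEC2SecurityGroup", "GroupId" ] }]
    }
}
Each such resource creates a separate RDS instance; databases within one instance still have to be created with psql or similar tooling.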
I have a requirement to create multiple databases in Postgres RDS provided by AWS using CloudFormation. I am able to create a single database. Below is a snippet of my template:
"pgDB": {
    "Properties": {
        "AllocatedStorage": { "Ref": "Storage" },
        "DBInstanceClass": { "Ref": "DBInstanceClass" },
        "DBInstanceIdentifier": { "Ref": "DBInstanceName" },
        "DBName": { "Ref": "DBName" },
        "DBParameterGroupName": { "Ref": "myDBParamGroup" },
        "DBSubnetGroupName": { "Ref": "myDBSubnetGroup" },
        "Engine": "postgres",
        "MasterUserPassword": { "Ref": "DBPassword" },
        "MasterUsername": { "Ref": "DBUser" },
        "VPCSecurityGroups": [{ "Fn::GetAtt": [ "myDBEC2SecurityGroup", "GroupId" ] }]
    }
}
Create multiple databases in AWS Postgres RDS using Cloudformation
Those aren't the nameserver settings. If you registered your domain through Amazon, click "Registered Domains" on the left side of the page, and click "Add or edit name servers" in the details page for the appropriate domain. If you registered your domain somewhere else, you'll need to change the nameservers there.
I purchased my domain name on Route53 and tried to point the nameservers to Bluehost's (where my WordPress site is located). I set the nameserver settings in Route53, but it has been 2 days and it doesn't seem like anything has propagated - I get no response via dig or ping either. I've been instructed that the nameservers for Bluehost are ns1.bluehost.com and ns2.bluehost.com. Here is what my Route53 settings look like:
Route53 pointing nameservers to Bluehost not working
One way, though I'm not convinced it's optimal: a Lambda that's triggered by a CloudWatch Event (say every second, or every 10 seconds, depending on your rate limit), which polls SQS to receive (at most) N messages and then "fans out" to another Lambda function with each message. Some pseudo code:
# Lambda 1 (scheduled by CloudWatch Event / e.g. CRON)
def handle_cron(event, context):
    # in order to get more messages, we might have to receive several times (loop)
    for message in queue.receive_messages(MaxNumberOfMessages=10):
        # Note: the Event InvocationType so we don't want to wait for the response!
        lambda_client.invoke(FunctionName="foo", Payload=message.body, InvocationType='Event')
and
# Lambda 2 (triggered only by the invoke in Lambda 1)
def handle_message(event, context):
    # handle message
    pass
Every day, I will have a CRON task run which populates an SQS queue with a number of tasks which need to be achieved. So (for example) at 9AM every morning, an empty queue will receive ~100 messages that will need to be processed. I would like a new worker to be spun up every second until the queue is empty. If any task fails, it's put at the back of the queue to be re-run. For example, if each task takes up to 1.5 seconds to complete: after 1 second, 1 worker will have started message A; after 2 seconds, 1 worker may still be running message A and 1 worker will have started running message B; after 100 seconds, 1 worker may still be running message XX and 1 worker will pick up message B because it failed previously; after 101 seconds, no more workers are spun up until the next morning. Is there any way to have this type of infrastructure configured within AWS Lambda?
Rate-limiting a Worker for a Queue (e.g.: SQS)
You can use credentials from one account ("Account A") to make API calls to an Amazon SQS queue in a different account ("Account B"). Simply add a policy to the SQS queue in the target account (Account B). Here is a sample policy from the Using Identity-Based Policies (IAM) Policies for Amazon SQS documentation: "The following example policy grants AWS account number 111122223333 the SendMessage permission for the queue named 444455556666/queue1 in the US East (Ohio) region."
{
  "Version": "2012-10-17",
  "Id": "Queue1_Policy_UUID",
  "Statement": [
    {
      "Sid": "Queue1_SendMessage",
      "Effect": "Allow",
      "Principal": { "AWS": "111122223333" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-2:444455556666:queue1"
    }
  ]
}
For putting messages on SQS queues and getting a connection, we need to have a key for the account/user. I want to feed messages to two queues which are available in two different AWS accounts. How can I achieve that? As far as I understand, we can set up only one set of access key credentials, hence we cannot talk to two queues available in two different AWS accounts. Any help would be appreciated. Thanks!
Feeding SQS Queues available in two different AWS Accounts
The latest AWS Application Load Balancer (ALB) can do the trick. This works for me. Follow the steps here:
1. Set up the ALB. AWS documentation here; follow the steps up until the Listeners tab: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
2. On the Listeners tab:
2.1. Modify rules in the Listeners ID
2.2. Add Rule
2.3. Point to redirect URL
This should redirect you to the external URL.
I have a working domain, x.y, which is tied to an EC2 instance on a VPC. I want a path on that domain, x.y/z, to be routed to an external, non-AWS (IPv4*) microservice. Can this be done with ALB? I have followed Use Path-Based Routing with Your Application Load Balancer to set up target groups, but can't seem to link them to anything past EC2 instances? (*: Would be great to extend with ports, or even (sub)domains, paths, etc.)
Path-based routing to external resource with AWS Application Load-Balancer (ALB)
It should be .ebextensions, not .ebextentions (note the typo). http://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/deployment-beanstalk-custom-netcore.html Also, in your example, the .ebextensions folder should live under foo.zip, not site.zip.
I've made a really simple .NET Core app and deployed it to Elastic Beanstalk under the IIS/Windows platform. This is the layout of the bundle I am uploading to AWS:
foo.zip
  aws-windows-deployment-manifest.json
  site.zip
    foo.dll
    web.config
    Microsoft.AspNetCore.Hosting.dll
    ... other dependencies
This works great. But I want to change the IdleTimeout on the app pool to 0 (the default is 20). To that end, I created an .ebextentions folder, and added a file 01_Idle_Timeout.config with the following content:
commands:
  set_idle_time:
    command: c:\windows\system32\inetsrv\appcmd.exe set config /section:applicationPools "/[name='DefaultAppPool'].processModel.idleTimeout:0.00:00:00"
I've tried placing this directory under foo.zip. I've tried placing it under site.zip. It just won't take effect. I've remoted into the Elastic Beanstalk instance and ran the command manually to make sure that it works, and it does. But somehow the .ebextentions won't process it. Am I missing something simple?
How to package ebextentions to Elastic Beanstalk with a .NET Core app?
Unfortunately, you can't use commas in Lambda environment variables. This is an AWS limitation and not a Serverless issue. For example, browse the AWS console and try to add an environment variable that contains a comma. When you save, you will get the following error:
1 validation error detected: Value at 'environment.variables' failed to satisfy constraint: Map value must satisfy constraint: [Member must satisfy regular expression pattern: [^,]*]
The error message says that the regex [^,]* must be satisfied, and what this small regex explicitly says is to not (^) accept the comma (,). Any other char is acceptable. I don't know why they don't accept the comma and this is not explained in their documentation, but at least their error message shows that it is intentional. As a workaround, you can replace your commas with another symbol (like #) to create the env var and replace it back to a comma after reading the variable, or you will need to create multiple env vars to store the endpoints.
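A sketch of the workaround, using the shape of the question's serverless.yml (the '#' separator is an arbitrary choice):
provider:
  name: aws
  runtime: nodejs4.3
  environment:
    # commas replaced with '#' to satisfy the [^,]* constraint
    MONGO_URI: "mongodb://mongo-6:27000#mongo-7:27000#mongo-8:27000/db-dev?replicaSet=mongo"
In the handler you would then restore the commas before connecting, e.g. const uri = process.env.MONGO_URI.split('#').join(',').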
I am trying to add a MongoDB cluster as part of a Serverless deployment, but I can't set the environment variable. Here is part of the serverless.yml file:
service: serverless-test
plugins:
  - serverless-offline
provider:
  name: aws
  runtime: nodejs4.3
  environment:
    MONGO_URI: "mongodb://mongo-6:27000,mongo-7:27000,mongo-8:27000/db-dev?replicaSet=mongo"
How do I pass the MONGO_URI to contain the cluster as a comma-separated value? Any advice is much appreciated.
Comma separator in Lambda function environment settings using the Serverless Framework
AWS CodePipeline is connected to GitHub via the new "Integrations" concept: https://github.com/integrations/aws-codepipeline This concept was announced here: https://developer.github.com/changes/2016-09-14-Integrations-Early-Access/ GitHub Integrations authenticate using JSON Web Tokens and private/public keys, so I'm not sure if AWS are technically correct in describing that as "OAuth" or not. Details here: https://developer.github.com/early-access/integrations/authentication/#as-an-integration
I am currently fiddling around with AWS CodePipeline for the first time and have set up the Source and the Build steps so far with a demo project. I have connected the Source step with a GitHub account (a system account we use) with admin access to all repos. As the documentation states, the OAuth scopes admin:repo_hook and repo are required for this; they are granted and the connection is fine. As the title of this question already states: the integration works just fine - when I push a new commit on master to GitHub, the pipeline starts working and runs through smoothly. My question however is: how? As the docs state: "To integrate with GitHub, AWS CodePipeline uses OAuth tokens". However, when looking in my GitHub settings, I would have expected to find the application listed as an "OAuth application" directly on the repository or in the organization's "OAuth applications", but neither is the case! Thus, I am wondering how CodePipeline recognizes my new commit. Is it polling the SCM or some other sort of magic? I did not find any webhooks either. Thank you in advance!
AWS CodePipeline recognizes my new GitHub commit fine - but how?
The reason the default hello Lambda function is not listed in the AWS Lambda console is that your Lambda function was uploaded to the default region (us-east-1), while the Lambda console displays the functions of another region. To set the correct region for your functions, you can use the region field of the serverless.yml file. Make sure the region property is directly under the provider section, with a 2/4 space indent, like this:
provider:
  region: eu-west-1
Alternatively, you can specify the region at deployment time, like so:
sls deploy --region eu-west-1
I am new to the Serverless Framework. Well, at least to the latest version, which depends heavily on CloudFormation. I installed the framework globally on my computer using:
npm install -g serverless
I then created a service using:
serverless create --template aws-nodejs --path myService
Finally, I ran:
serverless deploy
Everything seems to deploy normally, it shows no error in the terminal. I can see the CloudFormation files in a newly created, dedicated S3 bucket. However, I cannot find the default hello Lambda function in the AWS Lambda console. What am I missing? Are the CloudFormation files not supposed to create Lambda functions upon deployment?
Serverless Framework: how to deploy with CloudFormation?
Though I never found any official AWS statement on this matter, I strongly believe that accessing private resources (VPCs, subnets) from an always-public entity (as API Gateway is) would require much more effort (testing) regarding the product's security. I don't believe their plan is to keep it like this forever, though. In the same article you linked, they state (my emphasis): "Today, Amazon API Gateway cannot directly integrate with endpoints that live within a VPC without internet access." My guess is that "tomorrow" API Gateway access to private resources will exist and, yes, our lives will be easier (and cheaper, btw). At the end of the day, and given that my assumption is right, I believe it was the right decision: launch a useful (but more limited) version first and learn from it. EDIT: Since November 2017, API Gateway integrates with private VPCs. https://aws.amazon.com/pt/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
I have just read the following article, and I really don't get why AWS API Gateway doesn't support VPCs out of the box, so that we have to proxy all the requests through a Lambda function. Does anyone have an idea about why that is?
Why does AWS API Gateway not support VPCs?
First, line 9 contains a JSON syntax error: the brackets {} around your Role string should be removed:
"Roles": [ "arn:aws:iam::710161973367:role/Cognito_CFIAuth_Role" ],
Second, AWS::IAM::Policy's Roles property accepts "The names of AWS::IAM::Roles to attach to this policy", not full ARNs, so your line should be:
"Roles": [ "Cognito_CFIAuth_Role" ],
You also need a missing closing bracket } at the end of your example.
The following CloudFormation template gives an error on line 9:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Policy to allow send receive message from SQS Queue",
  "Resources" : {
    "MyPolicy" : {
      "Type" : "AWS::IAM::Policy",
      "Properties" : {
        "PolicyName" : "CFUsers",
        "Roles": [ { "arn:aws:iam::710161973367:role/Cognito_CFIAuth_Role" } ],
        "PolicyDocument" : {
          "Version" : "2012-10-17",
          "Statement": [
            {
              "Sid": "Sid1482400105445",
              "Effect": "Allow",
              "Principal": { "AWS": "arn:aws:iam::710161973367:role/Cognito_CFIAuth_Role" },
              "Action": [ "SQS:SendMessage", "SQS:ReceiveMessage", "SQS:DeleteMessage", "SQS:GetQueueUrl" ],
              "Resource": "arn:aws:sqs:ap-south-1:710161973367:CFI-Trace"
            }
          ]
        }
      }
    }
  }
I want role Cognito_CFIAuth_Role to have message send/read/delete privileges on the SQS queue CFI-Trace. How do I attach SQS operation privileges to an IAM role?
Attach policy to an IAM Role
I ran some tests locally and on AWS machines. My conclusion: www.supremenewyork.com blocks traffic that originates from AWS. It is easy to block traffic from AWS using IP tables. AWS publishes IP Address Ranges, and it is easy to write a simple script like AWS Blocker to block all traffic from AWS IPs. Why do some vendors block traffic from AWS? Increasing DDoS traffic and bot attacks from AWS-hosted machines. Many attackers exploit compromised machines running in AWS to launch their attacks. I have seen too many such incidents. AWS does its best to thwart such attempts. But if you see most of the attacks coming from a set of IP ranges, naturally you will try to block traffic from those IPs. I suspect the same in this case. The website is not pingable because ICMP traffic is blocked from all IPs. There is nothing you can do (unless you go through a VPN) to access the vendor website from your EC2 machine.
I was browsing the web using Firefox on my EC2 instance located in Ashburn, Virginia (IP addr: 54.159.107.46). I visited www.supremenewyork.com and it did not load (other websites like Google did load). I did some research and found the IP of Supreme's site: 52.6.25.180. I found out that the location of that IP is ALSO IN ASHBURN, Virginia, which could only mean that Supreme is using AWS to host their site. This is an issue for my instance because I want to connect to Supreme using it, but because the IPs are in the same server building or in Amazon's IP range, I can't. Is there a workaround to this issue? Please help. By the way: I tried pinging Supreme's IP from my EC2 instance - 100% packet loss. NOTE THAT I CAN ACCESS SUPREME FROM MY HOME COMPUTER: IT IS NOT DOWN. Is there a security problem because I am trying to connect to their site?
EC2 Instance cannot connect to other Amazon Web Service
Just reuse the same values used to specify the FunctionName and Name properties in the AWS::Lambda::Alias resource. For example, assuming your resource is specified like this in your template:
"LambdaAliasDev" : {
  "Type" : "AWS::Lambda::Alias",
  "Properties" : {
    "FunctionName" : { "Ref" : "MyFunction" },
    "FunctionVersion" : { "Fn::GetAtt" : [ "TestingNewFeature", "Version" ] },
    "Name" : { "Ref" : "MyFunctionAlias" }
  }
}
You would combine the function and alias into a single string using the Fn::Join intrinsic function, like this:
"ApiGatewayStageDev": {
  "Type": "AWS::ApiGateway::Stage",
  "Properties": {
    "StageName": "dev",
    "Description": "Dev Stage",
    "RestApiId": { "Ref": "ApiGatewayApi" },
    "DeploymentId": { "Ref": "ApiGatewayDeployment" },
    "Variables": {
      "lambdaAlias": {
        "Fn::Join": [ ":", [ { "Ref": "MyFunction" }, { "Ref": "MyFunctionAlias" } ] ]
      }
    }
  }
}
Assuming MyFunction is Foo and MyFunctionAlias is dev, this would set lambdaAlias to Foo:dev as desired.
I need to set a stage variable for an API Gateway stage. This stage variable has to be just the Lambda function and alias (Foo:dev). It cannot be the full ARN. The variable is then used in Swagger to integrate API Gateway with a Lambda function with a specific alias. It looks like the only thing I can get out of the AWS::Lambda::Alias resource is the ARN. How do I just get the name and alias? This is the stage resource; "lambdaAlias" gets set to the full ARN of the alias.
"ApiGatewayStageDev": {
  "Type": "AWS::ApiGateway::Stage",
  "Properties": {
    "StageName": "dev",
    "Description": "Dev Stage",
    "RestApiId": { "Ref": "ApiGatewayApi" },
    "DeploymentId": { "Ref": "ApiGatewayDeployment" },
    "Variables": {
      "lambdaAlias": { "Ref": "LambdaAliasDev" }
    }
  }
}
How to get just the function name and alias from CloudFormation AWS::Lambda::Alias?
In the parameters you're passing to the putObject() function, include a Metadata key which contains key/value pairs of the metadata you want to store with the S3 object. Example:
s3.putObject({
  Key: 'sea/animal.json',
  Metadata: {
    MyKey: 'MyValue',
    MyKey2: 'MyValue2'
  },
  Body: '{"is dog":false,"name":"otter","stringified object?":true}'
}, function (err, data) {
  // ...
});
See: putObject - Class: AWS.S3 — AWS SDK for JavaScript
I need to mock AWS S3 when using putObject(). When calling the function I need to create the file with user metadata values. I tried to find some code examples over the web, but I found only this base code:
var AWSMock = require('mock-aws-s3');
AWSMock.config.basePath = '/tmp/buckets/' // Can configure a basePath for your local buckets
var s3 = AWSMock.S3({
  params: { Bucket: 'example' }
});

s3.putObject({Key: 'sea/animal.json', Body: '{"is dog":false,"name":"otter","stringified object?":true}'}, function(err, data) {
  s3.listObjects({Prefix: 'sea'}, function (err, data) {
    console.log(data);
  });
});
Unfortunately, it does not include the user metadata map.
How to mock S3 putObject() with user metadata?
If you created an instance with --instance-initiated-shutdown-behavior terminate, then it will terminate itself when stopped. All you need to do is run shutdown -h now at the end of your script when it's done. So how do you start an instance at a specific time? There are third-party services that do exactly that, like GorillaStack. OR create a CloudWatch scheduled event. It's basically a cron in the cloud that can run a Lambda function, which in turn may start your instance.
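A sketch of the scheduled-Lambda half of this in Python with boto3 (the AMI ID, script path and instance type are placeholders; the AMI is assumed to have the PHP script baked in):
import boto3

def handler(event, context):
    # Launch a throwaway instance that terminates itself once its user-data script
    # finishes and calls `shutdown -h now`.
    ec2 = boto3.client('ec2')
    ec2.run_instances(
        ImageId='ami-12345678',            # hypothetical AMI with the PHP script installed
        InstanceType='c4.2xlarge',
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior='terminate',
        UserData='#!/bin/bash\nphp /opt/scripts/daily.php\nshutdown -h now\n',
    )
A CloudWatch Events rule with a cron expression (e.g. cron(0 0 * * ? *)) would then trigger this function every day at midnight.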
I seek a method to implement a cron-like system on an AWS EC2 instance: every morning I have some tasks to run, and they use a lot of RAM (about 8 GB for each script). I don't want to pay for a full-time c4.2xlarge instance, that's the point. What I'm thinking about:
At 00h each day, create a c4.2xlarge instance
When the system is running, run a PHP script
When the PHP script ends, terminate the instance
How do I automate these actions?
Programmatically create EC2 instance, run command, and terminate
Solved it by using the following steps:
1. Changed the IP address in the admin panel > General Settings to my EC2 host IP address, with port 8080.
2. Using SSH, logged into EC2, changed to the user kaa (password: kaa), and ran: sudo /usr/lib/kaa-sandbox/bin/change_kaa_host.sh host_ip
3. Downloaded the new SDK and created a new app. Data was received in MongoDB.
I have the Kaa Sandbox installed on AWS using the default values 'localhost' and port '27017' in the log appender. Is this correct? Now running the Java SDK for "My first Kaa app" gives the following error on macOS. Error message: INFO org.kaaproject.kaa.client.channel.impl.channels.DefaultOperationTcpChannel - Can't sync. Channel [default_operation_tcp_channel] is waiting for CONNACK message + KAASYNC message. Is this a problem with the IP address/port mentioned in the log appender, or is this a problem with MongoDB? Is MongoDB installed by default with the Kaa Sandbox on AWS, or is it missing and needs to be installed separately? The error message also includes: [main] INFO org.kaaproject.kaa.client.channel.impl.DefaultChannelManager - Failed to find operations service for channel [default_operation_tcp_channel] type TransportProtocolId [id=1456013202, version=1]
Kaa Java SDK not syncing with KAA Sandbox MongoDB on AWS
As per Amazon DynamoDB Pricing: "Free Tier* - As part of AWS's Free Tier, AWS customers can get started with Amazon DynamoDB for free. DynamoDB customers get 25 GB of free storage, as well as up to 25 write capacity units and 25 read capacity units of ongoing throughput capacity (enough throughput to handle up to 200 million requests per month) and 2.5 million read requests from DynamoDB Streams for free."
Now, what are ReadCapacityUnits and WriteCapacityUnits? As per the DynamoDB documentation for working with tables:
ReadCapacityUnits: one read capacity unit = one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size.
WriteCapacityUnits: one write capacity unit = one write per second, for items up to 1 KB in size.
Calculating ReadCapacityUnits / WriteCapacityUnits: first, you need to calculate how many writes and reads per second you need. 1 million evenly spread writes per day = 1,000,000 (no. of writes) / 24 (hours) / 60 (minutes) / 60 (seconds) = 11.6 writes per second. A DynamoDB write capacity unit can handle 1 write per second, so you need 12 write capacity units. Similarly, to handle 1 million strongly consistent reads per day, you need 12 read capacity units.
Summary: you have a big Pandora's box as a free tier. If you haven't used up your free tier allotment (25 write capacity units, 25 read capacity units, 2.5 million Streams read requests, 25 GB of storage), you can run this application for free on DynamoDB.
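The same arithmetic as a tiny script, for illustration only (the one-million-requests-per-day figure mirrors the example above):
import math

requests_per_day = 1000000
per_second = requests_per_day / (24 * 60 * 60)       # ~11.6 requests per second
write_capacity_units = math.ceil(per_second)          # 12 WCU for items up to 1 KB
read_capacity_units = math.ceil(per_second)           # 12 RCU for strongly consistent 4 KB reads
print(write_capacity_units, read_capacity_units)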
The AWS free tier allows 25 read/write capacity units per account. I am not able to understand how this is allotted. I tried with a maximum of 8 tables and a minimum of 2 tables (single DB only). AWS is allotting 3 read/write units per table no matter what the number of tables in the DB is. (I didn't go above 8 as it might cost me for going above the allowance of 25 R/W (not sure).) Can someone clarify: if I make only 2 tables for my DB, can I increase the R/W limit for each to 12? There is an option to do that, but it results in an increase in price (Services -> DynamoDB -> Tables -> Choose Table -> Capacity).
AWS Free tier Read/Write allowance/table
Yes, that is the way if all you want is to check whether the table exists. However, if you intend to create the table when it does not exist, you could use the API: TableUtils#createTableIfNotExists
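A sketch of that API in use (the table definition below is a placeholder; assumes a 1.11.x Java SDK where TableUtils lives in com.amazonaws.services.dynamodbv2.util):
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.*;
import com.amazonaws.services.dynamodbv2.util.TableUtils;

public class EnsureTable {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        CreateTableRequest request = new CreateTableRequest()
                .withTableName("my-table")   // hypothetical table name
                .withKeySchema(new KeySchemaElement("id", KeyType.HASH))
                .withAttributeDefinitions(new AttributeDefinition("id", ScalarAttributeType.S))
                .withProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
        // Returns true if the table was created, false if it already existed.
        boolean created = TableUtils.createTableIfNotExists(client, request);
        System.out.println("created: " + created);
    }
}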
I tried to search the web for any other way / function, and I came out with this:
public static boolean isTableExist(Table table) {
    try {
        table.describe();
    } catch (ResourceNotFoundException e) {
        return false;
    }
    return true;
}
Is there any chance to get rid of the try catch?
Is this the best practice for checking if a DynamoDB table exists?
Not yet, but we are planning on addressing this in the near future. Can't provide an ETA. But it would be similar to Lambda in that there will be a metric counter for throttled requests.
I've been using CloudWatch to track metrics against API Gateway and Lambda, and it shows throttled calls for Lambda, but is there any way to see the number of calls that are throttled earlier by API Gateway?
Is there a way to track calls throttled by API Gateway?
I finally figured this out. The problem was that my IAM user, which contains the vmimport role, did not have access to my S3 bucket. Once I granted my IAM user access to my S3 bucket (by setting a bucket policy in S3), the import-image command kicked off the process successfully. To set the bucket policy in S3, right-click on your bucket (i.e. the top-level bucket name in S3), then click "Properties". Then from the right-hand menu that gets displayed, open "Permissions", and click "Add bucket policy". A small window will come up where you can put in JSON for a policy. Here is the one that worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476979061000",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::MY-AWS-account-ID:user/myIAMuserID" },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/*"
      ]
    }
  ]
}
You'll need to replace "MY-AWS-account-ID" with your AWS account ID, and "myIAMuserID" with your IAM user ID that contains the vmimport role. This document talks about how to get your AWS Account ID, and this document talks more about granting permissions in S3.
When running the AWS (Amazon Web Services) import-image task:
aws ec2 import-image --description "My OVA" --disk-containers file://c:\TEMP\containers.json
I get the following error:
An error occurred (InvalidParameter) when calling the ImportImage operation: User does not have access to the S3 object. (mys3bucket/vms/myOVA.ova)
I followed all of the instructions in this AWS document on importing a VM (including Steps 1, 2, and 3). Specifically, I set up a vmimport role and the recommended policies for the role. What am I doing wrong?
AWS Import-image User does not have access to the S3 object
Never mind, my syntax was wrong.
GlobalSecondaryIndexes:
  - IndexName: "bid-uid-index"
    KeySchema:
      - AttributeName: "bid"
        KeyType: "HASH"
      - AttributeName: "uid"
        KeyType: "RANGE"
    Projection:
      ProjectionType: "ALL"
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
Changing it to the above fixed the errors.
I am trying to create a table using the Serverless Framework, and even though I have specified Projection for the GSI, Serverless is complaining that property Projection cannot be empty. Am I getting the syntax wrong? If I remove the GSI section it works pretty fine.
Table1:
  Type: "AWS::DynamoDB::Table"
  Properties:
    AttributeDefinitions:
      - AttributeName: "uid"
        AttributeType: "S"
      - AttributeName: "bid"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "uid"
        KeyType: "HASH"
      - AttributeName: "bid"
        KeyType: "RANGE"
    GlobalSecondaryIndexes:
      - IndexName: "bid-uid-index"
      - KeySchema:
          - AttributeName: "bid"
            KeyType: "HASH"
          - AttributeName: "uid"
            KeyType: "RANGE"
      - Projection:
          - ProjectionType: "ALL"
      - ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TableName: "Table1"
Property Projection cannot be empty
I'd add this as a comment, but I don't have enough reputation. :\ Looking at the code, the Lambda is probably shutting down before your callbacks complete, which is why you receive the first logging but not the rest. And yes, you should get an error if the require('https') failed, so that's probably not the case. Can you post the rest of your code? Where do you invoke context.done in your Lambda or, in newer versions of Node, do you do the callback to the handler?
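A minimal sketch of the fix this answer points toward: keep the invocation alive until the HTTPS response is fully consumed, then signal completion from the 'end' handler (the handler wiring is assumed; the feed URL is taken from the question below):
var urlPrefix = 'https://api.rss2json.com/v1/api.json?rss_url=http://news.com/feed/';
const https = require('https');

exports.handler = function (event, context, callback) {
    var speechOutput = "Your headlines are:";
    https.get(urlPrefix, (res) => {
        var body = '';
        res.on('data', (data) => { body += data; });
        res.on('end', () => {
            var result = JSON.parse(body);
            result.items.forEach((article) => { speechOutput += ", " + article.title; });
            // Only end the invocation once the response has been assembled.
            callback(null, speechOutput);
        });
    }).on('error', (e) => callback(e));
};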
I have the following code, which is going to form part of an Alexa skill. It works fine locally using node.exe, but when I put it into Lambda the fetch returns nothing, and speechOutput just has "Your headlines are:". Can anyone suggest why it does not work?
var speechOutput = "Your headlines are:";
var urlPrefix = 'https://api.rss2json.com/v1/api.json?rss_url=http://news.com/feed/';
const https = require('https');
//console.log(https.get(urlPrefix));
https.get(urlPrefix, (res) => {
    var body = '';
    res.on('data', function(data) {
        body += data;
    });
    res.on('end', function() {
        var result = JSON.parse(body);
        jItems = result.items;
        for ( var i=0 ; i < jItems.length ; i++ ) {
            var article = jItems[i];
            speechOutput += ", " + article.title;
        }
        //console.log(speechOutput);
    });
}).on('error', function(e) {
    console.log('Error: ' + e);
});
Code working locally but not on AWS Lambda
It looks like I should be using the config name returned by the command eb config list and not the environment name. For example, in my case the environment name was our-env-test, so when I run the command eb config list I get back our-env-test-config. Now when I run the command eb config get our-env-test-config:
>> eb config get our-env-test-config
Configuration saved at: /Users/me/our-env/.elasticbeanstalk/saved_configs/our-env-test-config.cfg.yml
The resulting config file is stored in the .elasticbeanstalk directory. NOTE: even before all this, you need to initialize the directory with the appropriate EB project by running the eb init command.
I have set up an Elastic Beanstalk application and environment using the AWS web console. Everything works well and as needed. Now we would like to grab all the configuration for this environment so that we can set up this environment again, possibly using the EB CLI for maintenance and deployment purposes (we are looking to transition to a different AWS account and clone it over there). I tried eb config get using the EB CLI but I get the error:
git:(master) ✗ eb config get our-env-test
ERROR: Elastic Beanstalk could not find any saved configuration with the name "our-env-test".
Download/Retrieve existing Elastic Beanstalk environment configuration
Yes, a program running inside ECS can access S3 similarly to a program running on an EC2 server. You need to set up the proper IAM roles when launching ECS. See this link: Amazon ECS IAM Role guide for developers
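For reference, an ECS task role is attached in the task definition via taskRoleArn; a stripped-down sketch (the account ID, role name and image are placeholders):
{
  "family": "my-s3-task",
  "taskRoleArn": "arn:aws:iam::123456789012:role/myS3AccessRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "memory": 256
    }
  ]
}
With a task role in place, the SDK inside the container picks up temporary credentials automatically, just as it does with an instance profile on plain EC2.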
From the AWS documentation, a program running on an AWS EC2 instance created with the correct IAM role can use the AWS SDK to get a temporary aws_access_key_id and aws_secret_access_key for accessing the desired AWS resources. Does that apply to programs running inside a Docker container on that instance? In my case, the container is started via AWS ECS and the program inside the container needs to access S3. If that is not general practice, what is the proper way of accessing a file stored in S3 from inside a container?
How to access other AWS services from inside a AWS ECS container on a EC2 host?
If it's in the local filesystem, the URL should be file://user/hadoop/wet0. If it's in HDFS, that should be a valid path. Use the hadoop fs command to take a look, e.g.:
hadoop fs -ls /home/hadoop
One thing to look at: you say it's in "/home/hadoop", but the path in the error is "/user/hadoop". Make sure you aren't using ~ in the command line, as bash will do the expansion before Spark sees it. Best to use the full path /home/hadoop.
I'm trying to read in a text file on Amazon EMR using the Python Spark libraries. The file is in the home directory (/home/hadoop/wet0), but Spark can't seem to find it. Line in question:
lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
Error:
pyspark.sql.utils.AnalysisException: u'Path does not exist: hdfs://ip-172-31-19-121.us-west-2.compute.internal:8020/user/hadoop/wet0;'
Does the file have to be in a specific directory? I can't find information about this anywhere on the AWS website.
Spark/Hadoop can't find file on AWS EMR
When you sign in, language selection will appear in the top right. When you sign in as IAM user, it will be at the lower half of the screen.
I'm trying to figure out how to launch an ec2 instance in a language other than English. I've tried setting different regions but that doesn't do any good. I've also found mentions of a language menu in the ec2 dashboard but again no luck.Does anyone know how to do this or if it's possible? According to AWS documentation they support 19 different languages, Spanish been one of them, although I'm not quite sure what exactly that means.
How to launch an AWS EC2 instance in a different language
Try doing this:
rdd.coalesce(1, shuffle = true).saveAsTextFile(...)
My understanding is that the shuffle = true argument will cause this to occur in parallel, so it will output a single text file, but do be careful with massive data files. Here are some more details on this issue at hand.
I use the following Scala code to create a text file in S3, with Apache Spark on AWS EMR.
def createS3OutputFile() {
    val conf = new SparkConf().setAppName("Spark Pi")
    val spark = new SparkContext(conf)
    // use s3n !
    val outputFileUri = s"s3n://$s3Bucket/emr-output/test-3.txt"
    val arr = Array("hello", "World", "!")
    val rdd = spark.parallelize(arr)
    rdd.saveAsTextFile(outputFileUri)
    spark.stop()
}

def main(args: Array[String]): Unit = {
    createS3OutputFile()
}
I create a fat JAR and upload it to S3. I then SSH into the cluster master and run the code with:
spark-submit \
  --deploy-mode cluster \
  --class "$class_name" \
  "s3://$s3_bucket/$app_s3_key"
I am seeing this in the S3 console: instead of files there are folders. Each folder (for example test-3.txt) contains a long list of block files. Picture below:
How do I output a simple text file to S3 as the output of my Spark job?
Write to a file in S3 using Spark on EMR
Enable CloudWatch Logs for API Gateway: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cloudwatch-logs/
I created an API with AWS API Gateway. This API provides a method that calls an AWS Lambda function. When I call this API method manually using a REST client, it works properly, and the Lambda function is called. I also have a device that periodically pushes some data to a server via HTTP(S). When configured to push data to an HTTPS server running on an EC2 instance, it works properly. But when I configure the device to push data to API Gateway, the Lambda function is never called. I tried sniffing the traffic via Wireshark, and I can see that requests are indeed sent by the device and that the API responds, but I can't view the contents of the requests and responses since they are encrypted. My guess is that API Gateway returns some kind of error that prevents the Lambda from being called. Unfortunately, the device does not provide any logs. Is there any way on the AWS side to see what is going on?
How to diagnose AWS API Gateway errors when there is no logging client-side
I originally thought the arrow function was the culprit. However, AWS Node.js 4.3.2 DOES support the arrow function, as mentioned in this post about the Node.js 4.3.2 runtime on Lambda.
NEW (correct) ANSWER
Does the event.js file start with 'use strict';? You must use strict mode for a class declaration in Node.js 4.3.2 (see the Mozilla Developer Network page about strict mode). Hoping this will help...
ORIGINAL (incorrect) ANSWER
module.exports = Products
I believe the arrow function () => {} is not yet implemented in the Node.js version you are using (4.3). See this answer: "Arrow functions are supported in Node.js since version 4.4.5". If updating your Node.js version is not an option for you, you could replace:
exports.handler = (event, context, callback) => {
    console.log('done');
}
with
exports.handler = (event, context, callback) = function() {
    console.log('done');
}
This question already has answers here: Node.js support for => (arrow function) (4 answers). Closed 7 years ago.
I have code that works in Node.js v6.4: just two files, index.js:
// ------------ Index.js ------------
'use strict';
var Event = require('./models/event.js');
exports.handler = (event, context, callback) => {
    console.log('done');
}
and event.js:
// ------------ Event.js ------------
class Event {
    static get dynamoDBTableName() {
        return
    }
    get hashValue() {
        return
    }
    parseReference(reference) {
        return
    }
}
exports.Event = Event
When running index.handler on AWS Lambda, which uses Node.js 4.3, it throws an error:
Syntax error in module 'index': SyntaxError
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:373:25)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/var/task/index.js:16:13)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
I think it's something wrong with exports.Event = Event. Is there some trick to fix this? I'm new to Node.js. Any help would be appreciated. I don't think it's a SyntaxError with (event, context, callback) => { }, because the AWS Lambda sample code runs well with this syntax.
AWS Lambda exports class works in node.js v6.4 but not in node.js v4.3, how to fix this? [duplicate]
Polling yourself used to be the only available option, but the AWS SDK for Java 1.11.25 release introduced the com.amazonaws.waiters package; see Waiters in the AWS SDK for Java for an overview/introduction. Note that waiters will still poll under the hood, but they abstract that logic away to offer 'convenience' API methods to wait in a blocking way via run() or in a callback-oriented way via runAsync(). Regarding your explicit use case, you should look into AmazonCloudFormationWaiters.stackDeleteComplete().
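A rough sketch of that waiter in use (the stack name is a placeholder; assumes an SDK version that ships the waiters package):
import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.DescribeStacksRequest;
import com.amazonaws.waiters.WaiterParameters;

public class WaitForDeletion {
    public static void main(String[] args) {
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.defaultClient();
        // Blocks (polling under the hood) until the stack no longer exists.
        cfn.waiters()
           .stackDeleteComplete()
           .run(new WaiterParameters<>(new DescribeStacksRequest().withStackName("my-stack")));
        // At this point it is safe to remove the stack's record from the database.
    }
}
The runAsync() variant takes a callback instead, which fits the "delete the DB entry when the stack is gone" flow without blocking a request thread.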
I'm working on an AWS CloudFormation management platform which allows users to launch, update and delete stacks on CloudFormation. When a stack is launched, I create a DB entry to associate it with a Template (collection of resources to be created) and a Customer. Users are able to call and view the latest events happening to their stack, i.e. "CREATION_IN_PROGRESS", "CREATION_COMPLETED". Currently, when a stack is deleted, I delete it from the DB immediately, providing no further information to the user aside from "Your stack is being deleted". The callback that is currently available when executing a deleteStack() already returns once the stack deletion is initiated. I would like to provide more information and events whilst it is being deleted and, when the stack is completely deleted, delete it from my DB. The only way I can see to make this happen is executing a function to check the stack's existence on a timed interval and, once it is gone, delete it from the database. Am I wrong to assume this, or does anyone reading this have a better idea or implementation? Any information is welcome.
AWS Cloudformation callback when a stack is completely deleted
I think there is no way to avoid billing when you have tables in DynamoDB. What you can do is export the data from DynamoDB now and import it back when needed. You can delete all the tables after you are done exporting. More info: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
Does anybody know how to disable AWS DynamoDB from the console? I have 100 or so tables and my project is shut down for 4 months, but it will start again after this. So I don't want a read/write capacity unit bill on each table. I am therefore looking for a way to stop the DynamoDB instance, or something that can deactivate my tables for this period of time.
How can we disable Dynamodb on AWS console?
There is no command to tell Amazon S3 to archive a specific object to Amazon Glacier. Instead, Lifecycle Rules are used to identify objects. The Lifecycle Configuration Elements documentation shows each rule consisting of:
Rule metadata that includes a rule ID and a status indicating whether the rule is enabled or disabled. If a rule is disabled, Amazon S3 will not perform any actions specified in the rule.
Prefix identifying objects by the key prefix to which the rule applies.
One or more transition/expiration actions with a date or a time period in the object's lifetime when you want Amazon S3 to perform the specified action.
The only way to identify which objects are transitioned is via the prefix parameter. Therefore, you would need to specify a separate rule for each object. (The prefix can include the full object name.) However, there is a limit of 1000 rules per lifecycle configuration. Yes, you could move objects one at a time to Amazon Glacier, but this would actually involve uploading archives to Glacier rather than 'moving' them from S3. Also, be careful -- there are higher 'per request' charges for Glacier than S3 that might actually cost you more than the savings you'll gain in storage costs. In the meantime, consider using the Amazon S3 Standard - Infrequent Access storage class, which can save around 50% of S3 storage costs for infrequently-accessed data.
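For illustration, a per-object rule of this kind might look like the following when passed to aws s3api put-bucket-lifecycle-configuration (the object key, rule ID and the 60-day period are placeholders), keeping the 1000-rule limit in mind:
{
  "Rules": [
    {
      "ID": "archive-report-2016-01",
      "Filter": { "Prefix": "report-2016-01.csv" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 60, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
Because the prefix here is the full object key, the rule effectively targets a single object.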
I have 1000s of large (5 - 500Mb, most are ~100Mb) files in an S3 bucket, no organisation at all - no "directories". These files all have different expiration times (some expire after 60 days, others after 90, etc.) after which I would like to move them to the Glacier storage class.I have looked at the Life Cycle feature, but I cannot find how to apply a specific rule to one file. They appear to only work by using prefixes and I would rather not change my naming convention.I have tried - using the PHP SDK - to do a copyObject with the 'StorageClass' argument set to "GLACIER", but that predictably gave an exception. I guess the documentation is up to date and there really is no such value :-)I really hope I'm missing something, because I would hate to have to download these files and then upload them to Glacier 'manually'. I'd also be missing the easy restore features from the S3 console.
Can I programmatically move S3 objects to the Glacier storage class?
From the official documentation: "When the callback is called, the Lambda function exits only after the Node.js event loop is empty." Since you are calling the callback, but your Lambda function invocation is not ending, it appears you still have something on the event loop. Your function isn't really doing anything except creating a Redis connection. I'm guessing you need to close the Redis connection when you are done with it, in order to clear the event loop and allow the Lambda invocation to complete.
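A sketch of what that looks like in the question's handler: close the connection and only then invoke the callback (the cache URL is copied from the question below):
'use strict'
const redis = require('redis');

function handler(data, context, callback) {
    const client = redis.createClient({
        url: 'redis://cache-url.euw1.cache.amazonaws.com:6379',
    });
    // ... use the client ...
    client.quit(() => {
        // Connection closed, so the event loop can drain and the invocation can finish.
        callback(null, { status: 'result' });
    });
}

exports.handler = handler;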
I'm trying to write an AWS Lambda function which uses Redis. When I run the code below:
'use strict'

function handler (data, context, callback) {
  const redis = require("redis")
  const _ = require("lodash")

  console.log('before client')
  const client = redis.createClient({
    url: 'redis://cache-url.euw1.cache.amazonaws.com:6379',
  })
  console.log('after client')

  callback(null, {status: 'result'})
  console.log('after callback')
}

exports.handler = handler
I get an answer like this:
{ "errorMessage": "2016-09-20T15:22:27.301Z 07d24e0b-7f46-11e6-85e9-e5f48906c0da Task timed out after 3.00 seconds" }
and the logs look like:
17:22:24 START RequestId: 07d24e0b-7f46-11e6-85e9-e5f48906c0da Version: $LATEST
17:22:26 2016-09-20T15:22:26.014Z 07d24e0b-7f46-11e6-85e9-e5f48906c0da before client
17:22:26 2016-09-20T15:22:26.134Z 07d24e0b-7f46-11e6-85e9-e5f48906c0da after client
17:22:26 2016-09-20T15:22:26.135Z 07d24e0b-7f46-11e6-85e9-e5f48906c0da after callback
17:22:27 END RequestId: 07d24e0b-7f46-11e6-85e9-e5f48906c0da
17:22:27 REPORT RequestId: 07d24e0b-7f46-11e6-85e9-e5f48906c0da Duration: 3001.81 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 24 MB
17:22:27 2016-09-20T15:22:27.301Z 07d24e0b-7f46-11e6-85e9-e5f48906c0da Task timed out after 3.00 seconds
which, IMHO, means that the callback was called but nothing happened. When I remove the client's initialization I see a proper response. Any ideas?
AWS Lambda and Redis client. Why can't I call callback?
You can build distributed locking mechanisms on AWS using DynamoDB with strongly consistent reads. You can also do something similar using Redis (ElastiCache). Alternatively, you could use Lambda scheduled events to send a request to your load balancer on a cron schedule. Since only one back-end server would receive the request, that server could execute the cron job. These solutions tend to break when your autoscaling group experiences a scale-in event and the server processing the task gets deleted. I prefer to have a small server, like a t2.nano, that isn't part of the cluster and schedule cron jobs on that.
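A minimal sketch of the DynamoDB-based lock idea, using a conditional write so that only one server wins for a given job (the table and attribute names are hypothetical):
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client('dynamodb')

def try_acquire_lock(job_name):
    """Return True if this server won the lock for the job, False otherwise."""
    try:
        # The conditional put succeeds for exactly one caller per job name.
        dynamodb.put_item(
            TableName='cron_locks',
            Item={'job': {'S': job_name}},
            ConditionExpression='attribute_not_exists(job)',
        )
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return False
        raise
In practice you would also store a timestamp or TTL on the item so stale locks from terminated instances eventually expire.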
I have a Laravel application where the Application servers are behind a Load Balancer. On these Application servers, I have cron jobs running, some of which should only be run once (or run on one instance).I did some research and found that people seem to favor a lock-system, where you keep all the cron jobs active on each application box, and when one goes to process a job, you create some sort of lock so the others know not to process the same job.I was wondering if anyone had more details on this procedure in regards to AWS, or if there's a better solution for this problem?
AWS - Load Balanced Instances & Cron Jobs
You won't be able to use the load-balancer-generated cookies. There is no way to set the domain of the cookie to something different from the domain to which the cookie was requested. Solutions:
Generate your own cookie and use that to determine stickiness.
Switch to using path-based routes instead of subdomains.
Update your application so that it does not rely on stickiness to function properly.
The AWS ELB redirects differently for subdomains (in my specific case it's language subdomains like ko.mydomain.com and es.domain.com).I'm currently using the "Enable load balancer generated cookie stickiness" option. I understand the reason for this is the cookie it saves is based on the subdomain that is being accessed.How can I make the stickiness work across subdomains?
AWS ELB routes differently based on subdomain
There isn't an out-of-the-box metric for something like that, as CloudWatch by default only has access to hypervisor-level metrics rather than OS-based metrics such as RAM usage or process-related statistics. To augment the data in CloudWatch you could write a small script that checks whether the process is running and then calls PutMetricData to upload that metric to CloudWatch. Something like this should work:
#!/bin/bash
process_name=$1
DATE=`date +%Y-%m-%dT%H:%M:%S.000Z`
processes_running=`pidof ${process_name} | wc -w`
aws cloudwatch put-metric-data --metric-name ${process_name}_running --namespace "MyService" --value ${processes_running} --timestamp $DATE
Then just call that with cron or something every minute (or however often you want to update CloudWatch - max resolution is 1 minute though, more frequent calls will be aggregated). Then you just need to create an alarm that performs some action (such as using SNS to send an email to all subscribed addresses, but potentially also performing some action such as rebooting the instance).
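To complete the picture, an alarm on that custom metric could be created from the CLI along these lines (the metric/alarm names and the SNS topic ARN are placeholders):
aws cloudwatch put-metric-alarm \
  --alarm-name my-worker-not-running \
  --namespace "MyService" \
  --metric-name my-worker_running \
  --statistic Minimum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:eu-west-1:123456789012:my-alerts
Subscribing your email address to the SNS topic then delivers the alert whenever the process count drops below 1.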
I have an EC2 instance in AWS with CentOS 6, and on it I only have supervisor, which maintains a single PHP script. In some cases this script fails and I can see something like this:
$ sudo /usr/local/bin/supervisorctl status
my-worker    EXITED    Aug 19 10:19 AM
I would like to receive an alert email about it, because my script hasn't worked since Aug 19. I tried to find something related to health checks, but health checks are available only for load balancers. I also tried to find something in CloudWatch but couldn't find a relevant metric. Any idea how I can receive an email when my worker falls down?
Alert email when worker fails on AWS EC2
Amazon S3 supports the If-Modified-Since header already, as well as the related If-None-Match (which uses an ETag instead of a date). So, the way to load images only if they were changed is to actually use If-Modified-Since, or If-None-Match if you have the ETag. However, since you are talking about loading them in a browser, most browsers will already be doing this unless you have done something funky to disable browser caching. See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html#RESTObjectGET-requests-request-headers for details on the supported headers. Are you experiencing a situation where the browser is still loading the images from S3 even if they haven't been changed? If so, do you have more details on that, e.g. the browser, version, and something like a Chrome network tab HAR file illustrating the symptoms? By default, it should just work on both sides with no custom changes. I just replicated this by uploading a fresh PNG image file to S3. In a fresh browser window, I opened the dev tools and loaded the network tab. I ensured that 'disable caching' was UNticked, and 'preserve log' was ticked (to keep the log over multiple F5 refreshes). I loaded the image and then hit F5 to reload twice. The result was: as you can see, the first load came back with a 200 status, the other requests received 304.
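For a quick check outside the browser, the conditional request can be exercised with curl (the bucket and key below are placeholders):
# First request: note the Last-Modified header in the response.
curl -sI https://my-bucket.s3.amazonaws.com/images/logo.png

# Conditional request: S3 answers "304 Not Modified" with no body if the object hasn't changed.
curl -sI -H "If-Modified-Since: Tue, 20 Sep 2016 10:00:00 GMT" \
     https://my-bucket.s3.amazonaws.com/images/logo.png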
Is there any way to load images from S3 to the browser only if they were changed, by sending the If-Modified-Since header? It should be enabled by default on browsers and S3, but tests show that the images are loading on every refresh.
S3 stored image with If-Modified-Since header not working
By default, one replica shard is created per primary shard, and since you have only one node, the replicas cannot be assigned. You simply need to pass number_of_replicas: 0 when you create your index:
PUT /my-index
{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 0
    }
  }
}
If you later want to increase the number of replica shards because you add new nodes, you can do it like this:
PUT /my-index/_settings
{
  "index" : {
    "number_of_replicas" : 1
  }
}
But because I'm hosting on AWS, I'm not allowed to call /_settings or /_cluster, so how can I assign those shards? The error I'm getting when I POST to /_cluster:
{ "Message": "Your request: '/_cluster' is not allowed." }
How to assign shards in AWS Elasticsearch?
There is no single CLI command that you can run to get the monthly pricing; you will have to integrate the AWS Price List API to get the pricing. https://aws.amazon.com/blogs/aws/new-aws-price-list-api/ This is also a good tool for getting the costs of EC2 instances: https://github.com/colinbjohnson/aws-missing-tools/blob/master/_deprecated/ec2-cost-calculate/ec2-cost-calculate.sh
I want to get the details of every instance available, along with their respective monthly costs and usage details. Is there any CLI command that can fetch the cost details and export them to a CSV file? How do I achieve this through the CLI? Thanks in advance.
How to get the Monthly Cost of aws ec2 instance using CLI tools
No. If you call your Lambda function, whatever calls it will get the return, not the Echo. What you are asking for is "push notification". There is a very long thread of people requesting this on the ASK forum. It is the most requested feature for the ASK, but Amazon has never indicated they are considering doing this. But, then, it is their policy not to indicate what they are doing anyway. Personally, I do not think they will ever do this. There are too many security and privacy concerns. Some people have created hacks whereby they run an agent on a computer hooked to their Echo by Bluetooth. They push a request to the computer and the computer plays a message over the Echo. That's the closest I've seen.
I want to have Alexa speak a response to an intent, but by manually invoking the Lambda function that contains the Alexa skill code, rather than by speaking the intent directly to the Echo.Could you, for instance, send a JSON payload that comprises an intent request to the Lambda function somehow (by AWS-SDK or via a rule on an IoT "thing") and expect the Lambda function to execute and the Echo to play the speech response?
Is it possible to invoke an AWS Lambda function with a payload to make Alexa speak?
So far, the best way to accomplish this that I've found is to clone the Pipeline, make it On-Demand (instead of Scheduled) and activate that one. This new Pipeline will activate and run immediately. This seems cumbersome, however; I'd be happy to hear a better way.
I have a scheduled AWS Data Pipeline that failed partway through its execution. I fixed the problem without modifying the Pipeline in any way (I changed a script in S3). However, there seems to be no good way to restart the Pipeline from the beginning. I tried Deactivating/Reactivating the Pipeline, but the previously "FINISHED" nodes were not restarted. This is expected; according to the docs, this only pauses and un-pauses execution of the Pipeline, which is not what we want. I tried Rerunning one of the nodes (call it x) individually, but it did not respect dependencies: none of the nodes x depends on reran, nor did the nodes that depend on x. I tried activating it from a time in the past, but received the error: "startTimestamp should be later than any Schedule StartDateTime in the pipeline (Service: DataPipeline; Status Code: 400; Error Code: InvalidRequestException; Request ID: <SANITIZED>)". I would rather not change the Schedule node, since I want the Pipeline to continue to respect it; I only need this one manual execution. How can I restart the Pipeline from the beginning, once?
How to restart an AWS Data Pipeline
You can delete the item by specifying the SaveBehavior as CLOBBER, without worrying about the version:
DynamoDBMapper mapper;
mapper.delete(object, new DynamoDBMapperConfig(DynamoDBMapperConfig.SaveBehavior.CLOBBER));
I have a Dynamo table that uses optimistic locking via the DynamoDBVersionAttribute to ensure that only one worker at a time has a document reserved. However, when I want to clean up a document, the delete throws a ConditionalCheckFailedException when I don't set the version in the DynamoDBMapper. At that point, I don't care what version the document is, and I want to delete it no matter what version it is. Is there a way to force the delete without worrying about the version? I want to overcome the exception without having to query Dynamo for the document.
Force delete on table that uses optimistic locking with version number
This has been driving me nuts for a week! I got it to work by using Custom IoT Rule instead of IoT Button on the IoT Type. The default rule name is iotbutton_xxxxxxxxxxxxxxxx and the default SQL statement is SELECT * FROM 'iotbutton/xxxxxxxxxxxxxxxx' (xxx... = serial number). Make sure you copy the policy from the sample code into the execution role - I know that has tripped up a lot of people.
Has anyone successfully set up their AWS IoT button? When stepping through with default values I keep getting this message: "Please correct validation errors with your trigger." But there are no validation errors on any of the setup pages, or on the page with the error message. I hate asking a broad question like this, but it appears no one has ever had this error before.
Trying to set up AWS IoT button for the first time: Please correct validation errors with your trigger
We resolved the issue by adding the below policy in the trust relationship of the custom role:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "<ARN of role that has to assume the custom role>"
  },
  "Action": "sts:AssumeRole"
}
In our application, we access the AWS APIs with custom roles. In the developer environment, we provide the access key and secret key in the app.config and it works great. In the prod environment, we have set up an IAM role with the necessary permissions for the custom roles, and the EC2 instance is launched with that IAM role. When we try to switch roles using the code, we get the below error.
Message: User: arn:aws:sts::XXXXXXXXX:assumed-role//i-0490fbbb5ea7df6a8 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXXXXXXXXX:role/
Code:
AmazonSecurityTokenServiceClient stsClient = new AmazonSecurityTokenServiceClient();
AssumeRoleResponse assumeRoleResponse = await stsClient.AssumeRoleAsync(new AssumeRoleRequest
{
    RoleArn = roleArn,
    RoleSessionName = sessionName
});
var sessionCredentials = new SessionAWSCredentials(assumeRoleResponse.Credentials.AccessKeyId,
    assumeRoleResponse.Credentials.SecretAccessKey,
    assumeRoleResponse.Credentials.SessionToken);
AmazonS3Client s3Client = new AmazonS3Client(sessionCredentials);
Policy details:
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::account_id:role/role-name"
Any help on this would be great. Thanks in advance.
AWS Assume role with EC2 instance IAM role not working
You need to deploy an API in order to get an endpoint URL. The same API may be deployed under different guises; you might call it "dev" for a development deployment or "prod" for production purposes. The API can only be accessed in this way once deployed, so:
Go to "APIs > Resources"
Use the Actions button, "Actions > Deploy API"
Deploy it as, e.g., "dev"
Then, under "APIs > Stages", select the deployment and you will see the URL in a banner at the top, "Invoke URL: https://...amazonaws.com/dev"
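If you prefer the CLI, a rough equivalent of the console steps above (the API ID is a placeholder):
aws apigateway create-deployment --rest-api-id abc123defg --stage-name dev
The invoke URL then follows the pattern https://{rest-api-id}.execute-api.{region}.amazonaws.com/{stage-name}.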
How can I get the endpoint or URI from AWS API Gateway? I see only the ARN in the management console.
How to get endpoint/URI from AWS API Gateway?
You have to create method request parameters for your query string parameters, and then you need to create a mapping template to map your query string parameters to the integration request body. The mapping template will be something like this:
{
  "email": "$input.params('email')",
  "name": "$input.params('name')"
}
I use API-Gateway to map rest requests to some Lambda functions. It works fine for post methods, where i send my information in the body as JSON and access it in the lambda like somodule.exports.handler = function(event, context, cb) { var email = event.email; var name = event.name; }Now i wanted to create a GET, with query strings. On the request side on API-Gateway its fine you can select the query string names, but for the life of me I cant figure out what to do on the Integration Request side. How do i get my query strings into my lambda so that i can access them like above. Or are they accessed differently.I went through the docs, and still dont understand it. You would think this is like the most basic use case and they have an example, but no.Please can somebody help meThanks
AWS API-Gateway GET Method Parameter Mapping
That should do the trick. Note that boto's get_all_instances returns Reservation objects, so use get_only_instances to work with the instances directly, and the final loop stops the instances rather than terminating them:

# open connection to ec2
conn = get_ec2_conn()
# get a list of all instances
all_instances = conn.get_only_instances()
# get running instances that carry the `Name` tag
instances = conn.get_only_instances(filters={'tag-key': 'Name', 'instance-state-name': 'running'})
# make a list of the filtered instance IDs
keep_ids = [i.id for i in instances]
# keep only the instances that are not in the filtered list
instances_to_stop = [inst for inst in all_instances if inst.id not in keep_ids]
# run over your `instances_to_stop` list and stop each one of them
for instance in instances_to_stop:
    conn.stop_instances(instance_ids=[instance.id])

And in boto3:

# open connection to ec2 (ec2 resource)
conn = get_ec2_conn()
# get a list of all instances
all_instances = list(conn.instances.all())
# get running instances that carry the `Name` tag
instances = list(conn.instances.filter(Filters=[
    {'Name': 'instance-state-name', 'Values': ['running']},
    {'Name': 'tag-key', 'Values': ['Name']}]))
# make a list of the filtered instance IDs
keep_ids = [i.id for i in instances]
# keep only the instances that are not in the filtered list
instances_to_stop = [inst for inst in all_instances if inst.id not in keep_ids]
# run over your `instances_to_stop` list and stop each one of them
for instance in instances_to_stop:
    instance.stop()
I'm using this script by mlapida posted here:https://gist.github.com/mlapida/1917b5db84b76b1d1d55#file-ec2-stopped-tagged-lambda-pyThe script by mlapida does the opposite of what I need, I'm not that familiar with Python to know how to restructure it to make this work. I need to shutdown all EC2 instances that do not have a special tag identifying them.The logic would be: 1.) Identify all running instances 2.) Strip out any instances from that list that have the special tag 3.) Process the remaining list of instances to be shutdownAny help is greatly appreciated.
Shutdown EC2 instances that do not have a certain tag using Python
You can start your Kinesis consumer again (or a different one) with different settings for the shard iterator, see GetShardIterator. The usual setting is LATEST or TRIM_HORIZON (oldest):

{ "ShardId": "ShardId", "ShardIteratorType": "LATEST", "StreamName": "StreamName" }

But you can change it to a specific time within the stream's retention period (24 hours by default):

{ "ShardId": "ShardId", "ShardIteratorType": "AT_TIMESTAMP", "StreamName": "StreamName", "Timestamp": "2016-06-29T19:58:46.480-00:00" }

Keep in mind that usually the Kinesis consumer saves its checkpoints in a DynamoDB table, so if you are using the same Kinesis application you need to delete those checkpoints first.
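If you just want to test reading from a point in the past outside the KCL, here is a minimal sketch using boto3; the stream name, shard id and the two-hour offset are placeholders for your own values:

import boto3
from datetime import datetime, timedelta

kinesis = boto3.client('kinesis')

# iterator positioned two hours in the past (placeholder stream/shard)
resp = kinesis.get_shard_iterator(
    StreamName='my-stream',
    ShardId='shardId-000000000000',
    ShardIteratorType='AT_TIMESTAMP',
    Timestamp=datetime.utcnow() - timedelta(hours=2)
)
iterator = resp['ShardIterator']

# read a first batch of records from that point onwards
records = kinesis.get_records(ShardIterator=iterator, Limit=100)
for record in records['Records']:
    print(record['SequenceNumber'], record['Data'])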
How do we read from AWS Kinesis stream going back in time?Using AWS Kinesis stream, one can send stream of events and the consumer application can read the events. Kinesis Stream worker fetches the records and passes them toIRecordProcessor#processRecordsfrom the last check point.However If I have a need to read the records going back in time, such as start processing records from 2 hours ago, how do I configure my kinesis worker to fetch me such records?
AWS Kinesis read from past
No, there is not. You can get user information for a given user name by calling AdminGetUser from your backend with your developer credentials, but not from an Android client.
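On the backend side, a minimal sketch with boto3 (the pool id and user name are placeholders) could look like this:

import boto3

cognito = boto3.client('cognito-idp')

# requires developer (IAM) credentials with the cognito-idp:AdminGetUser permission
resp = cognito.admin_get_user(
    UserPoolId='us-east-1_XXXXXXXXX',
    Username='some-user-id'
)

# attributes come back as a list of {'Name': ..., 'Value': ...} pairs
attributes = {a['Name']: a['Value'] for a in resp['UserAttributes']}
print(attributes.get('email'))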
Is there any way to get information about the user in AWS Cognito pool (on android) who is not logged in, knowing his ID? I tried that code:AppHelper.getPool().getUser(username).getDetailsInBackground(detailsHandler);However it works only for username who is currently logged in.
AWS Cognito- get user information with ID
Please use "withComparisonOperator(ComparisonOperator.NE)" for NOTEQUALS condition. Refer the below example.DynamoDBSaveExpression saveExpression = new DynamoDBSaveExpression(); Map expected = new HashMap(); expected.put("Status", new ExpectedAttributeValue(new AttributeValue(status)).withComparisonOperator(ComparisonOperator.NE)); saveExpression.setExpected(expected); dynamoDBMapper.save(obj, saveExpression);
Is it possible to do a conditional write/update using DynamoDB mapper when you want to have condition like: Write if attribute is not equal to X? I want to do a conditional write that does something like this:DynamoRecord record = mapper.load(DynamoRecord.class, hashKey); if (record.getSomeAttr() != terminationValue) { mapper.save(newRecord); }The attribute always exists. It can have multiple values and represents a condition after which the updates to the record should stop.I readthis articleandAWS documentationand a bunch of other posts but seems like only==operator is supported along with exists check. Is it possible to do a!=conditional write using a mapper? If so, any pointers are much appreciated.Thanks.Summary from @notionquest's answerUseComparisonOperator.NEIf the attribute is a boolean, annotate it using@DynamoDBNativeBooleanfor correct results while usingExpectedAttributeValue(new AttributeValue().withBool(true)). Seeherefor details.
Conditional write item if attribute not equals with DynamoDBMapper
It's safe to assume you are using the 'standard' endpoint. All of this primarily applies to it, and not the regional endpoints. S3 is atomic and eventually consistent. The documentation gives several examples, including this: "A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list." Occasionally delays of many hours have been seen, and my anecdata agrees with this statement that well over 99% of the data exists within 2 seconds. You can enable read-after-write consistency, which "fixes" this, by changing your endpoint from s3.amazonaws.com to s3-external-1.amazonaws.com (note that endpoint_url needs the scheme):

s3client = boto3.client('s3', endpoint_url='https://s3-external-1.amazonaws.com')
I'm using Python and boto3 to work with S3.I'm listing an S3 bucket and filtering by a prefix:bucket = s3.Bucket(config.S3_BUCKET) for s3_object in bucket.objects.filter(Prefix="0000-00-00/", Delimiter="/"):This gives me an iterable of S3 objects.If I print the object I see:s3.ObjectSummary(bucket_name='validation', key=u'0000-00-00/1463665359.Vfc01I205aeM627249')When I go to get the body though I get an exception:content = s3_object.get()["Body"].read()botocore.exceptions.ClientError: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.So boto just gave me the key, but then it says it doesn't exist?This doesn't happen for all keys. Just some. If I search for the invalid key in the AWS console it doesn't find it.
S3 Key Not Present Immediatly After Listing
Roles may not be assumed by root accounts.This error means exactly what it says.You cannot assume a role while using a root account, under any circumstances. You have to use an IAM account.There is no other workaround for this. The behavior is by design.
com.amazonaws.AmazonClientException: com.amazonaws.AmazonServiceException: Roles may not be assumed by root accounts. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;I created a role and it's Trust Relationship is :{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<awsID>:root", "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }I even tried creating a policy and assigned it to my role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::secorbackup" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::secorbackup/*" ] } ] }Nothing seems to work. I'm getting the same error. I am using pinterest/secor for log persistence from kafka to s3. Any suggestions?
write to AWS S3 programatically -returns- Roles may not be assumed by root accounts.
You need to include VPC (subnet) information in your scripts or Auto Scaling group configuration: http://docs.aws.amazon.com/autoscaling/latest/userguide/asg-in-vpc.html
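A minimal boto3 sketch of creating the group with the subnet information attached (all names, the AMI id, the security group and the subnet ids are placeholders for your own values):

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_launch_configuration(
    LaunchConfigurationName='my-launch-config',
    ImageId='ami-12345678',          # your custom image
    InstanceType='c4.xlarge',
    SecurityGroups=['sg-12345678']   # VPC security group ids
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='my-asg',
    LaunchConfigurationName='my-launch-config',
    MinSize=0,
    MaxSize=4,
    # this is the part the error is complaining about: subnets in your VPC
    VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222'
)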
So I am following this link:Autoscale based on SQS queue sizeto create an autoscaling group for my instances. I have read many articles about this problem that I am getting and many people are getting the same problem, but theirs occurs when they try to use "t1.micro". Whereas, I am using "c4.xlarge" instance type and I already have a VPC defined for my Image. Why am I still getting this error:Launching a new EC2 instance. Status Reason: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. Launching EC2 instance failed.Does anybody have a solution for this?
EC2 Instance creation fails due to VPC Issues
The TransferManager uses File objects to support things like file locking when downloading pieces in parallel. It's not possible to use an OutputStream directly. If your requirements are simple, like downloading small files from S3 one at a time, stick with getObject. Otherwise, you can create a temporary file with File.createTempFile and read the contents into a byte array when the download is done.
I'm implementing an helper class to handle transfers from and to an AWS S3 storage from my web application.In a first version of my class I was using directly aAmazonS3Clientto handle upload and download, but now I discoveredTransferManagerand I'd like to refactor my code to use this.The problem is that in my download method I return the stored file in form ofbyte[]. TransferManager instead has only methods that useFileas download destination (for exampledownload(GetObjectRequest getObjectRequest, File file)).My previous code was like this:GetObjectRequest getObjectRequest = new GetObjectRequest(bucket, key); S3Object s3Object = amazonS3Client.getObject(getObjectRequest); S3ObjectInputStream objectInputStream = s3Object.getObjectContent(); byte[] bytes = IOUtils.toByteArray(objectInputStream);Is there a way to useTransferManagerthe same way or should I simply continue using anAmazonS3Clientinstance?
Download file to stream instead of File
This can be achieved by keeping the objects private and handing out expiring (pre-signed) URLs. Disable public access to the files from the S3 management console on AWS; you only need to build expiring URLs if you want to restrict access. Since there is presumably a unique key available for each user, you can generate a signed URL for each object programmatically. Here is one example available for reference.
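For example, a short boto3 sketch that creates a time-limited URL for a private object (the bucket name is a placeholder, the key is taken from your example):

import boto3

s3 = boto3.client('s3')

# URL is valid for one hour; the object itself stays private
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-private-bucket',
            'Key': 'clients/img/j84jaljvkeh774d/myimage.jpg'},
    ExpiresIn=3600
)
print(url)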
We are facing a use case where we need to storeconfidentialimages of the user on S3. Now as S3 is accessible over HTTP and if we give a read access to the objects they will be available to the world via web. We need to restrict the images/files only to that user. So the possible solutions we thought are:URL masking in some way.(No idea exactly how)storing the files/images by creating unique encrypted s3 keyseg:http://bucket.s3.amazon.com/clients/img/j84jaljvkeh774d/myimage.jpgIn the first one we may not get the cloudfront or cdn benefits as it might involve a independent proxy server.The second one,is in a way secure as it would be difficult to predict the keyname,if its unique to a user.UsingACl and bucket policieswon't completely solve the problem. Also,if we write a policy which restricts IP addresses, the mobile app which uses the same API backend would end up not working as those would have requests originating from different IP's.We know we cannot completely secure them,but do we have an approach to deal with such a scenario?Please share your inputs.
(End)User level access to objects in S3
Create a Rails endpoint that can accept the SNS notifications.
Enable S3 event notifications to SNS.
Configure the SNS topic that is receiving the S3 events to push to your Rails application's endpoint.
Disclaimer, I'm a Rails newb. So I may be doing this wrong altogether.I need to display the latest image in the view of a rails app immediately after it is uploaded to an AWS S3 bucket (from another source). Rather than repeatedly updating / polling for latest image, I think it'd be less taxing and costly to get a notification from AWS when a new image has been uploaded.I looked into SNS and it seems like perhaps an HTTP notification with the rails url as endpoint could be an option. But I'm not sure how to set that up.Any ideas or suggestions?
How to Receive Push Notifications from AWS S3 to Rails App?
It turns out my suspicions about permissions were correct - you also need to add a Lambda.addPermission call with the following pattern:

{ FunctionName: functionArn, StatementId: Date.now().toString(), Action: 'lambda:InvokeFunction', Principal: 'sns.amazonaws.com', SourceArn: topicArn }
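For reference, the same call from a Python script with boto3 looks like this (the function and topic ARNs are placeholders):

import time
import boto3

lambda_client = boto3.client('lambda')

lambda_client.add_permission(
    FunctionName='arn:aws:lambda:us-east-1:123456789012:function:my-function',
    StatementId=str(int(time.time())),   # any unique statement id
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn='arn:aws:sns:us-east-1:123456789012:my-topic'
)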
I'm setting up an SNS subscription for Lambda, using the Nodeaws-sdk. The call returns successfully - it gives me a subscription ARN, and when I look in the web console it appears. However, when I publish a message to the topic, nothing happens. I tried setting up the same subscription in the web console (all the fields look exactly the same) and itdoeswork.Is there something that the console does behind the scenes I'm not aware of? Sets permissions on the SNS topic/Lambda, anything like that?
SNS -> Lambda subscription doesn't work when set via API, but does when set by admin console
PutLogEvents is designed to put several events per call, by definition (as per its name: PutLogEvent"S") :) The CloudWatch Logs agent does this batching on its own and you don't have to worry about it. However, please note: I don't recommend generating too many logs (e.g. don't run debug mode in production), as CloudWatch Logs can become pretty expensive as your log volume grows.
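If you do end up calling the API yourself, a minimal batching sketch with boto3 looks like this; the group and stream names are placeholders, and a stream that already has events also needs the last sequence token passed in:

import time
import boto3

logs = boto3.client('logs')

# accumulate records in memory, then ship them in one PutLogEvents call
events = [
    {'timestamp': int(time.time() * 1000), 'message': 'INFO something happened'},
    {'timestamp': int(time.time() * 1000), 'message': 'ERROR something broke'},
]

logs.put_log_events(
    logGroupName='my-app',
    logStreamName='web-1',
    logEvents=events    # up to 10,000 events / 1 MB per call
)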
I am trying to find centralized solution to move my application logging from database (RDS).I was thinking to use CloudWatchLog but noticed that there is a limit for PutLogEvents requests:The maximum rate of a PutLogEvents request is 5 requests per second per log stream.Even if I will break my logs into many streams (based on EC2, log type - error,info,warning,debug) the limit of 5 req. per second is still very restrictive for an active application.The other solution is to somehow accumulate logs and send PutLogEvents with log records batch, but it means then I am forced to use database to accumulate that records.So the questions is:May be I'm wrong and limit of 5 req. per second is not so restrictive?Is there any other solution that I should consider, for example DynamoDB?
AWS CloudWatchLog limit
You have to own the domain you are requesting an ACM certificate for. Since you don't own the amazonaws.com domain, you can't request a certificate for that domain.
I'm trying to request an ssl cert fromAmazon's Certificate Managerservice and apply it to my ELB, however after entering the default DNS name for my ELB:my-aws-elb-XXXXXXX.us-west-1.elb.amazonaws.comThe request fails without giving any useful error messages. I did see a notice about having sufficient IAM rules. I enabled the Full ACM Manager permission policy for my IAM user however I'm not sure how that links up with making requests from the Amazon web console.Is it not possible to use the default DNS or do I need my own domain name?
Default Elastic Load Balancer DNS works with Amazon Certificate Manager?
From the documentation here, it appears that the ELB will always route traffic to eth0: "When you register an instance with an elastic network interface (ENI) attached, the load balancer routes traffic to the primary IP address of the primary interface (eth0) of the instance." So I think your only solution is to swap eth0 and eth1 on your Palo Altos such that eth0 is the interface in your public subnet that you want ELB traffic routed to.
I am creating a load balancer in front of two Palo Alto's that are acting as Next Gen Firewalls for a web application behind them. These firewall devices have three ENI's:eth0: a management ENI placed in an internal management subneteth1: a public ENI placed in an isolated, public subnet.eth2: a private ENI, placed in a DMZ subnet where the web application is also located.When creating the ELB, I've selected the public subnet as the service location. After adding the instances to the ELB, and receiving an InService status, I navigate to the ELB address to find the management interface now exposed (eth0).I can't seem to locate a way to manually specify the ENI for traffic on the ELB. Is this possible? If not, how am I to configure the ELB with eth1 only?
ELB with Multiple ENI Instances
You'll have to open port 80 on the server's firewall, and either run your server on port 80 or forward port 80 to port 8080. You'll need to look up the instructions for doing that based on what version of Linux you are running, but it is probably going to be an iptables command. You'll also need to open port 80 on the EC2 server's security group.
I'm running Bitnami MEAN on an EC2 instance. I can host my app just fine on port 3000 or 8080. Currently if I don't specify a port I'm taken to the Bitnami MEAN homepage. I'd like to be able to access my app by directly from my EC2 public dns without specifying a port in the url. How can I accomplish this?
Amazon EC2 instance of Bitnami MEAN - how to host app on port 80?
I don't believe Elastic Transcoder can do what you want. The best solution for the video processing itself is to write a Python script or similar that you can run on Elastic Beanstalk or a normal EC2 instance (using Docker may be a good idea to get a proper image with all the tools you will need). Here is a solution I use for transcoding, which is a similar problem:
a web page allows users to upload video directly to S3 (see fineuploader)
S3 triggers an SQS message
an Elastic Beanstalk worker tier server runs a Python script that checks the SQS queue and processes the job
for any job, use ffmpeg to generate the frames (Google "ffmpeg video to frames"; see also the sketch below)
if you want to keep the large pictures, upload them back to S3, or process the images first (resize) and then upload
optionally, if you upload the large pictures to S3, you could use a Lambda function just for the image resizing side
I wish I could show you the code for the different parts, but my solution is more elaborate and does other things, so it is not easy to extract and modify to show what you need - I hope you get inspiration to do it yourself.
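A minimal frame-extraction sketch, assuming ffmpeg is installed on the worker; the file and directory names are placeholders:

import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str) -> None:
    # dump every frame of the video as a numbered PNG
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ['ffmpeg', '-i', video_path, f'{out_dir}/frame_%06d.png'],
        check=True
    )

# e.g. called from the worker after downloading the video from S3
extract_frames('input.mp4', 'frames')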
As mentionned in aprevious question, I'm looking for the best way to extract frames from videos using AWS.I came accross AWS Elastic Transcoder and tried to figure out if I could use it. The only option that could have been interesting is the thumbnails generation, but it is limited to 1 per second and I need all the frames of the video.Do you think there is way to do what I need with Elastic Transcoder ?Thanks
AWS Elastic Transcoder to extract frames from videos?
Yes, that is the correct approach. Otherwise, you would be forced to roll it out to every system that used it at the same time, with no opportunity to test first, if desired. My local practice, which I don't intend to imply is The One True Way™, yet serves the purpose nicely, is to append -yyyy-mm for the year and month of the certificate's expiration date to the end of the name, making it easy to differentiate between them at a glance... and using this pattern, when the list is sorted lexically, the entries are coincidentally sorted chronologically as well.
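In practice that is just two calls: upload the renewed certificate under a new name, then point the load balancer listener at the new ARN. A rough boto3 sketch, where the certificate name, file paths, load balancer name and port are placeholders:

import boto3

iam = boto3.client('iam')
elb = boto3.client('elb')   # classic ELB, as used by Elastic Beanstalk here

with open('foobar.crt') as cert, open('foobar.key') as key, open('chain_bundle.crt') as chain:
    resp = iam.upload_server_certificate(
        ServerCertificateName='foo.bar-2016-03',   # new name with expiry suffix
        CertificateBody=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read()
    )

new_arn = resp['ServerCertificateMetadata']['Arn']

# switch the HTTPS listener of the environment's load balancer to the new cert
elb.set_load_balancer_listener_ssl_certificate(
    LoadBalancerName='my-eb-load-balancer',
    LoadBalancerPort=443,
    SSLCertificateId=new_arn
)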
I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now its time to renew -- what is the correct process for replacing the current certificate with the updated cert?When I try repeating an upload using the same command as before:aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crtI receive:A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with namefoo.baralready exists.Is the best practice to simply upload using aDIFFERENTname then switch the load balancers to the new certificate? This makes perfect sense - but I wanted to verify I'm following thecorrectapproach.EDIT 2015-03-30I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.The question remains however, is this the correct approach?
Renewing IAM SSL Server Certificates
I assume you have processing you didn't mention that takes so long that you can't add multiple books in one invocation of the Lambda function. You can both fan out and recursively invoke your Lambda function. There are benefits and drawbacks to both. If you fan out to too many invocations, too often, your DynamoDB writes could spike above provisioned write capacity. If you recursively call your function, you will not be able to return a value to the caller (assuming that the whole chain of calls takes more than five minutes).
I'm using AWS Lambda in NodeJS. With this lambda, I want toadd a bookin DynamboDB. It works fine.Now, I want to do it for alist of books. I have some ideas but I don't know if it's possible in AWS lambda.idea 1 : fork several lambaI'm wondering if it's possible to have a "master" Lambda that have a list of books to add, andforeach booksinvoke a lambda function "insert book". The maximum timeout is 5 minutes so it's possible to make an asynchronous invocation from the "master" lambda in order to not wait for all forked lambda process ?idea 2 : recursive invocationsCreate a generic lambda that process the first book of a list of books passed as input. At the end of the process, remove the book from the list (if OK) and invoke the same lambda with the updated List.Note : the first invocation need to get the list of books.Many thanks for your help !Romain.
Recursive invocations / fork invocations in AWS Lambda
Your lambda function will be running with a specific role. Create a policy that grants access to the s3 resource and add it to the role.Example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::my-bucket/file.txt" } ] }
I have a file in an S3 bucket for which I would like to restrict access, so that it can only be accessed from within a specific Lambda function. I tried writing a Bucket policy (subbing in my info for region, account, etc.) to accomplish this:{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1457474835965", "Action": "s3:*", "Principal": "*", "Effect": "Deny", "Resource": "arn:aws:s3:::my-bucket/file.txt", "Condition": { "ArnNotEquals": { "aws:SourceArn": "arn:aws:lambda:region:account:function:FunctionName" } } } ] }However access to the file was still denied to the Lambda function when it was invoked. How can I accomplish what I am trying to do?
Restricting S3 bucket access to an AWS Lambda function
It is pretty difficult for most people to spoof an ip address. It is even more difficult for them to guess the IP address you're allowing through the AWS security group to the instance. Still harder is completing a TCP handshake with a spoofed IP, so I'd say you're pretty safe.
For example, I have an instance, and using a Security Group allowing income traffic from only my own IP address. My question is: if an attacker got the instance IP address, is there still any way he can attack(something like DDOS) my instance?
Is it completed safe if I set the security group only allow my own IP?
I've seen this exact symptom before. The issue may be with the date/time stamp being generated by the s3client in your Php code. Turn on debugging for the s3client and see if the date/time stamp in the request is being produced with "GMT" (correct) or something else. If its something else, that could be the issue, you could compare it to CyberDuck if it will show you the debug output for the request its making.
Im facing a mystery on S3 bucket policy, and it is preventing my a Php S3Client from PutObject. The error im getting is:Error executing "PutObject" on "https://s3.amazonaws.com/reusable-system-dev/b1e42024e33d62b852d3c94c85f68c72.jpeg"; AWS HTTP error: Client error response [url]https://s3.amazonaws.com/reusable-system-dev/b1e42024e33d62b852d3c94c85f68c72.jpeg[status code] 403 [reason phrase] Forbidden AccessDenied (client): Access DeniedBut, with the same Access key and Secret key, i can PutObject using a CyberDuck client! Is there any hidden S3 policy that could cause such a strange behavior?
aws s3Client PutObject Access Denied, but CyberDuck can PutObject Successfully
AWS iOS SDK for Modelling doesn't support array of arrays.You have to define a dictionary in between any nested arrays. So instead of array/object/array/array you slip in an extra "awshack" object: array/object/array/awshack-object/array{ "$schema": "http://json-schema.org/draft-04/schema#", "title": "QuestionsModel", "type": "array", "items": { "type": "object", "properties": { "section_name": { "type": "string" }, "options" : { "type" : "array", "items" : { "type" : "object", "properties" : { "awshack" : { "type" : "array", "items" : { "type" : "string" } } } } } } } }In the mapping template the "awshack" is slipped in outside the innermost loop.#foreach($items in $question.options.L) {"awshack" : [#foreach($item in $items.L) "$item.S"#if($foreach.hasNext),#end #end #if($foreach.hasNext),#end ]}#if($foreach.hasNext),#end #endAmazon confirms this limitation.
(Here's my Model scheme:{ "$schema": "http://json-schema.org/draft-04/schema#", "title": "QuestionsModel", "type": "array", "items": { "type": "object", "properties": { "section_name": { "type": "string" }, "options" : { "type" : "array", "items" : { "type" : "array", "items" : { "type" : "string" } } } }Here's the Mapping template:#set($inputRoot = $input.path('$')) [ #foreach($question in $inputRoot) { "section_name" : "$question.section_name.S", "options" : [ #foreach($items in $question.options.L) { [ #foreach($item in $items.L) { "$item.S" }#if($foreach.hasNext),#end #end ] }#if($foreach.hasNext),#end #end ] }#if($foreach.hasNext),#end #end ]Although this syntax correctly maps the data it results in "options" being an empty array.Without the "options" specified then my iOS app receives valid JSON. But when I try various syntaxes for "options" then I either get invalid JSON or an "Internal Service Error" and CloudWatch isn't much better offeringUnable to transform response.The options valid is populated with this content:{L=[{"L":[{"S":"1"},{"S":"Dr"}]},{"L":[{"S":"2"},{"S":"Mr"}]},{"L":[{"S":"3"},{"S":"Ms"}]},{"L":[{"S":"4"},{"S":"Mrs"}]},{"L":[{"S":"5"},{"S":"Prof."}]}]}which is provided by a Lambda function.I can only conclude, at this point, that API Gateway VTL doesn't support nested arrays.
how to handle nested lists in AWS APIG Mapping Template in VTL
The moment you have a single action that triggers 10k actions, you need to try to find a way to tell the user that "OK, I got it. I'll start working on it and will let you know when it's done".So to bring that work into the background, a domain event should be raised from your user's action which would be queued into SQS. The user gets notified, and then a worker can pick up that message from the queue and start sending emails and push notifications to another queue.At the end of the day, 10k messages in batches of 10 are just 1k requests to SQS, which should be pretty quick anyway.Try to keep your messages small. Don't send the whole content of an email into a queue message, because then you'll get unnecessary long latencies. Keep the content in a reachable place or just query for it again when consuming the message instead of passing big content up and down the network.
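If it helps, here is roughly what the batching itself can look like; the sketch uses boto3 for brevity (the PHP SDK has an equivalent sendMessageBatch call), and the queue URL and message bodies are placeholders:

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/notifications'

# keep the payload small: ids the worker can use to look up the real content
notification_ids = [str(i) for i in range(10000)]

# SQS accepts at most 10 messages per SendMessageBatch call
for start in range(0, len(notification_ids), 10):
    chunk = notification_ids[start:start + 10]
    sqs.send_message_batch(
        QueueUrl=queue_url,
        Entries=[{'Id': str(i), 'MessageBody': nid} for i, nid in enumerate(chunk)]
    )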
I have a Symfony2 app that under some circumstances has to send more than 10.000 push and email notifications.I developed a SQS flow with some workers polling the queues to send emails and mobile push notifications.But now, I have the problem that, when in the request/response cycle I need to send to SQS this task/jobs (maybe not that amount) this task itself is consuming a lot of time (response timeout is normally reached).Should I process this task at background (I need to send back a quick response)? And how to handle possible errors with this scenario?NOTE: Amazon SQS can receive 10 messages at one request and I already using this method. Maybe should I build a simple SQS Message with a lot of notifications jobs (max. 256K) to send less HTTP requests to SQS?
Sending thousands notifications on PHP (Symfony2) using SQS
You should clone the "parse-server-example" since it sounds like you need a starter project to run locally. Then you can start adding your existing Parse code to this new project.The "parse-server" repository contains the source code for the Parse Server npm package, which is used by the "parse-server-example" to run a server.
I am doing migration from my Parse hosted applications to ParseServer running on AWS or Heroku slowly.The Heroku "guide to deploy to Heroku and MongoLab" uses "Parse-server-example" on GitHub while the ParseServer wiki mentions cloning "Parse-server" instead. In particular, "Parse-server" repository does not contain the "cloud\main.js" file which is the cloud function file.If I want to run ParseServer locally or on AWS, which one of those two should I use? What are the differences between them?
What is the different between Parse-server and Parse-server-example on ParsePlatform on GitHub?
sudo pip install --ignore-installed awsebcli
sudo pip install awsebcliException: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 209, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 725, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 752, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove .....
Problems installing the latest version of the EB CL
There are 2 possible options to achieve what you want.Option 1:If your backend EC2 intances are in a public subnet, you could pre-allocate a pool of Elastic IP addresses and whitelist them with your private resource.Since your EC2 instances are created by an Auto Scaling group (I assume), you would then have a script that runs on your EC2 instance that would select an Elastic IP address from your pool and associate it with the instance.A problem occurs if your pool of Elastic IP addresses runs out.Option 2:If your EC2 instances are in a private subnet, then you would have all outbound traffic from your EC2 instances go through a NAT.You would allocate a single Elastic IP address and whitelist that Elastic IP address with your private resource.If you associate the Elastic IP address with your NAT, then your private resource will see the traffic from all your EC2 instances as originating from the whitelisted IP address.Additional CommentsSince you have the public facing ELB, your backend EC2 instances should be in private subnets for security purposes.This, along with the extra scripting required for option 1, makes option 2 the preferred choices.
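For option 1, the instance-side script can be a small boto3 snippet run at boot. This is only a sketch: it simply picks any unassociated Elastic IP from the account, and it assumes the instance role allows ec2:DescribeAddresses and ec2:AssociateAddress:

import boto3
import urllib.request

# the instance can discover its own id from the metadata service
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()

ec2 = boto3.client('ec2')

# pick the first Elastic IP in the pool that is not associated yet
addresses = ec2.describe_addresses()['Addresses']
free = next(a for a in addresses if 'AssociationId' not in a)

ec2.associate_address(InstanceId=instance_id, AllocationId=free['AllocationId'])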
Our current website uses elastic beanstalk to create instances, but we need to whitelist the IPs so they can talk to a private resource.How do you do this? The EBS uses a VPC with a public subnet.Thanks!
How do you allocate STATIC addresses to an EBS (beanstalk) within a VPC?
For "on demand" EC2Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour.Seehttps://aws.amazon.com/ec2/pricing/This used to be the case...but now (as of Oct 2021)Each partial instance-hour consumed is billed per-second for instances launched in Linux, Windows, or Windows with SQL Enterprise, SQL Standard, or SQL Web instances If your instance is billed by the second, then you're billed for a minimum of 60 seconds each time a new instance is started—that is, when the instance enters the running stateSome instances do still have a minimum charge period of one hour, but in the case of the OP's question the answwer is now 1 minute minimumSeehttps://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-hour-billing/
For example the list price for a nano instance is $0.0065 per Hour.If I start and immediately terminate a nano instance, did it cost me $0.0065?The reason this question relates to programming is because I am programmatically launching instances and I want to know if it's going to cost alot or be a trivial expense if my code does live launching as I'm testing.
What does it cost to start and immediately terminate an Amazon EC2 instance?
Meanwhile I hacked a bit around in the MWS client sources. And it's really the case that the quota values are not exposed via the response nor via the WebServiceClient. So I slightly modified the source code of MarketplaceWebServiceClient.java to remember the quota values for the last received response. Somewhere around line 2100 it reads

postResponse = httpClient.execute(method, httpContext);

and after that line I inserted

quotaMax = postResponse.getFirstHeader("x-mws-quota-max").getValue();
quotaRemaining = postResponse.getFirstHeader("x-mws-quota-remaining").getValue();
quotaResetsOn = postResponse.getFirstHeader("x-mws-quota-resetsOn").getValue();

This does the trick for me and I can get the quota values directly from the client.
I'm using the Java MWS API from Amazon. Recently I received anInternal Errorwhile requestingGetOrderwhich was due to throttling limits.How can I determine the throttling limits?In thedocsI seeAmazon MWS provides header values in each call response that show the hourly quota for the current operation; the number of calls remaining in tha quota; and the date and time when the quota will reset. For example:x-mws-quota-max: 3600x-mws-quota-remaining: 10x-mws-quota-resetsOn: Wed, 06 Mar 2013 19:07:58 GMTBut I can't figure out how to get this metadata from the response. I expected them to be in theGetReportResponsewhich I receive from callinggetReport(GetReportRequest). It seems this data is not present. At least I wasn't able to get them.But what I can see from the log output is:org.apache.http.wire - << "x-mws-quota-max: 80.0"org.apache.http.wire - << "x-mws-quota-remaining: 79.0"org.apache.http.wire - << "x-mws-quota-resetsOn: 2016-01-23T09:19:00.000Z"This data indeed seems to be present somewhere. How can I get this information from the response?
Getting Amazon MWS throttling limits
Header mapping from response bodies was recently added to API Gateway. You can see examples in our documentation. Your mapping should be: integration.response.body.Location

Edit: Apologies for misreading. To remove the Location from the response body, you would need to have a mapping template with an empty JSON body.
I am developing a POST lambda function. I want to return "Location" in the header. So I configure API Gateway, like below:when I call the API, I am receving the "Location" correct in the header, but I am still receiving it on return message. Look below:[My python code:def os_create_subscription (event, context): customer_id = event["customer-id"] subscription_id = 12345 header_location = ("/customers/%s/subscriptions/%d" % (customer_id, subscription_id)) result = {"Location": header_location} return resultSo i would like to have the Location just in the header. Is there anyway to do this?
Returning header content - API Gateway + AWS Lambda
It turns out the best way to perform an "OR" operation is to use the new, string-based filtering commands.AttributesToGetis replaced byProjectionExpression, andScanFilteris replaced byFilterExpression.Note that with the new commands you will not be able to usereserved wordsand must work around any reserved words by defining those keys withExpressionAttributeNamesand any values withExpressionAttributeValues.dynamodb.scan({ // Define table to scan "TableName": "fm_tokens", // Only return selected values (optional) "ProjectionExpression": "user_id", // Write your expression, similar to SQL syntax "FilterExpression": "#token = :tkn AND "+ "token_time >= :expiration AND "+ "(attribute_not_exists(expiration_time) OR expiration_time >= :timestamp)", // Use this to avoid reserved words by defining variables with the name instead "ExpressionAttributeNames": { "#token": "token" }, // Same as with names, but for values - must include value type (S, N, etc.) "ExpressionAttributeValues": { ":tkn": {"S": tkn}, ":expiration": {"N": expiration.toString()}, ":timestamp": {"N": unix_timestamp.toString()} } }
I am attempting to scan a DynamoDB table from Lambda (Node.js) to check if a token has expired. I would like to filter the data to exclude items whereexpiration_timeis set and is less than the current time. I currently get a false positive for units where noexpiration_timeis set.How can I check for an expired timestampornoexpiration_timeattribute set at all?dynamodb.scan({ "TableName": "fm_tokens", "AttributesToGet": ["user_id"], "ScanFilter": { "token": { "AttributeValueList": [{"S": tkn.toString()}], "ComparisonOperator": "EQ" }, "token_time": { "AttributeValueList": [{"N": expiration.toString()}], "ComparisonOperator": "GE" }, "expiration_time": { "AttributeValueList": [{"N": unix_timestamp.toString()}], "ComparisonOperator": "GE", } } }
Find items where DynamoDB attribute is not set *or* greater than x
Don't configure the origin as S3 at all -- configure it as a Custom origin and then use the bucket's website endpoint hostname as the origin server hostname. At that point, you should be able to configure an Origin Custom Header that CloudFront will send to the origin -- which happens to be the bucket's web site endpoint. User-Agent is not on the list of custom headers that CloudFront won't forward, so you should be able to send a custom user agent string -- acting somewhat like a static password -- in the requests from CloudFront to S3, and configure your bucket to only allow that custom user agent. It could still theoretically be spoofed, but since it's a random string that you made up, nobody knows that value except you, S3, and CloudFront, and it would be very tricky for someone to spoof an unknown value, particularly since S3 simply denies access, without explanation.
I have a S3 bucket, serving as static website.I have a cloudfront distribution, pointed towards HTTP endpoint of the bucket.I want to limit access to my S3 bucket to only Cloudfront.I guess I can do this by adding Principal: arn:iam:cloudfront ....But this allows direct S3 access, not HTTP Endpoint access.When I configure Cloudfront to serve S3 bucket directly, it doesn't show subdirectory index.htmls. In order to reach mysite.com/blog/, I have to type mysite.com/blog/index.htmlFor this reason, I have to use HTTP endpoint of the S3 as if the site is not on S3 but on an Apache server.Now I can't restrict access via arn:iam:cloudfront. Because Cloudfront becomes yet another web crawler, S3 becomes yet another web server.They suggest adding custom headers so that the server understands it's the cloudfront. But S3 doesn't support custom headers.Restricting user agent to CloudFront and Principal to AWS: * does a brief work but it doesn't stop UserAgent spoofing.How can I solve this problem?
Amazon S3 access from Cloudfront through HTTP
Did you set wait: True? It will wait for the instance to go to the running state. I never had issues with the following. I was able to get the public IP after register. If you still have issues, use wait_for for the IP to be available. Or post your script here.

- name: Start the instance if not running
  ec2:
    instance_ids: myinstanceid
    region: us-east-1
    state: running
    wait: True
  register: myinst
I see the Ansible EC2 Module's capability to provision / start / stop / terminate. However is there a way to lookup / query for the instance details likePrivate IP,Public IPetc.I am looking at the use case to obtain the Public IP [not the Elastic IP] which keeps changing during stop/start and update the Route53 DNS records accordingly.Any ideas ?
Ansible EC2 Module to Retrieve information about the Instance
It is highly recommended to deploy your application with the dependencies pre-installed. Having your deployment process depend on github and packagist is fragile at best and not recommended. Ideally, you would never run Composer anywhere other than on your development and CI environment. Your CI environment should produce a fully deployable release package (including all dependencies etc.) for staging/production environments.
I have a PHP application that runs on Docker (php:5.6-apache image). I use AWS Elastic Beanstalk Multicontainer Docker Environment to deploy the app to the cloud (using Dockerrun.aws.json v2).My problem is that I can't find a good workflow to update the composer dependencies after deploying.Below the contents of my Dockerrun.aws.json:{ "AWSEBDockerrunVersion": 2, "volumes": [ { "name": "php-app", "host": { "sourcePath": "/var/app/current/php-app" } } ], "containerDefinitions": [ { "name": "php-app", "image": "php:5.6-apache", "essential": true, "memory": 512, "portMappings": [ { "hostPort": 80, "containerPort": 80 } ], "mountPoints": [ { "sourceVolume": "php-app", "containerPath": "/var/www/html", "readOnly": true } ] } ] }What is the recommended way to runcomposer installon Elastic Beanstalk Multicontainer Docker Environment?
How to install composer dependencies on Elastic Beanstalk Multicontainer Docker Environment
Couple of things:Try replacing your smart quotes“ ”in the first two lines:$ export AWS_ACCESS_KEY_ID="foo" $ export AWS_SECRET_ACCESS_KEY="bar"Your default region string is incomplete. Try this:$ export AWS_DEFAULT_REGION="us-east-1"
I am trying to spin up an AWS Cluster. I am running the same code I always am but it is no longer working. The code is this, and I am running it in the command line on mac osx.$ export AWS_ACCESS_KEY_ID=“foo” $ export AWS_SECRET_ACCESS_KEY=“bar” $ export AWS_DEFAULT_REGION= "us-east-1d" $ /Users/xxxxx/Downloads/spark-1.5.2-bin-hadoop2.6/ec2/spark-ec2 -k username -i /Users/xxxxx/Downloads/this_is_file_being_read.pem -s 10 launch clusterI get the error'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)Is there anything I can do to get the file read? I don't know whats happening as I have already been running this code and it worked fine.
Creating EC2 Cluster: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
(I work at Aerospike). Specifying a DNS entry instead of an IP address in the mesh-seed-address-port is currently not supported. So you will have to use an IP address (or a list of IP addresses). This is something that we may support at some point in the future.
I am trying to configure aerospike to work in AWS. The recommended settings are to use Hearbeat mode mesh. Now I'm trying to use DNS saytrial.example.cominstead of IP inmesh-seed-address-port192.168.1.1003002in this config but am unable to do so. The problem is the cluster visibility is shown False in AMC. Can someone please help?
Aerospike config to use DNS instead of private ip
Yes,minOccurs="0"means optional, however...In general, applications often have additional requirements beyond those specified in the XSD of its XML input. When parameters are required only in certain circumstances, an XSD will list them as optional in the general case but then check them out-of-band wrt the XSD. (XSD 1.1 provides some additional expressiveness viaxs:assertionfor conditionally requiring elements/attributes, but it's not widely adopted yet.)Note thatbullet_point1does not appear in any of the XSDs or documentation you've linked to your question. Ifbullet_point1is derived fromBulletPointin the XSD, then it clearly is a downstream application that's making the additional requirement and issuing the error when it's unmet.
I am looking for some clarification regarding how to read the XSD that Amazon uses to validate their XML product feed.This is theXSDused (and the innerProductone) and these are thedocs.When I submit a basic product feed without a description or BulletPoint feed I get this error:A value was not provided for "bullet_point1". Please provide a value for "bullet_point1". This information appears on the product detail page and helps customers evaluate products.A value was not provided for "product_description". Please provide a value for "product_description". This information appears on the product detail page and helps customers evaluate products.Here are the relevant XSD sections:<xsd:element name="Description" minOccurs="0"> <xsd:simpleType> <xsd:restriction base="xsd:normalizedString"> <xsd:maxLength value="2000"/> </xsd:restriction> </xsd:simpleType> </xsd:element> <xsd:element name="BulletPoint" type="LongStringNotNull" minOccurs="0" maxOccurs="5"/>My understanding is thatminOccurs="0"means its not required yet it clearly is. I have looked through a few other inner XSD (such asBase) for these fields in case they were overridden but did not see anything.Is this the wrong XSD? Am I reading this wrong?
XSD validation clarification for Amazon MWS product feed
Hadoop 2.x allows you to set the map and reduce settings per job so you are setting the correct section. The problem is the Java opts Xmx memory must be less than the map/reduce.memory.mb. This property represents the total memory for heap and off heap usage. Take a look at the defaults as an example:http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-hadoop-task-config.html. If Yarn was killing off the containers for exceeding the memory when using the default settings then this means you need to give more memory to the off heap portion, thus increasing the gap between Xmx and the total map/reduce.memory.mb.
I run a MR job withone Masterandtwo slaverson the Amazon EMR, but got lots of the error messages likerunning beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 3.7 GB of 15 GB virtual memory used. Killing containeraftermap 100% reduce 35%I modified my codes by adding the following lines in the Hadoop 2.6.0 MR configuration, but I still got the same error messages.Configuration conf = new Configuration(); Job job = Job.getInstance(conf, "jobtest2"); //conf.set("mapreduce.input.fileinputformat.split.minsize","3073741824"); conf.set("mapreduce.map.memory.mb", "8192"); conf.set("mapreduce.map.java.opts", "-Xmx8192m"); conf.set("mapreduce.reduce.memory.mb", "8192"); conf.set("mapreduce.reduce.java.opts", "-Xmx8192m");What is the correct way to configure those parameters(mapreduce.map.memory.mb,mapreduce.map.java.opts,mapreduce.reduce.memory.mb,mapreduce.reduce.java.opts) on Amazon EMR? Thank you!
How to configure Hadoop parameters on Amazon EMR?
It looks like there is no direct data transfer pipeline for pushing data into elasticsearch from Redshift. One alternative approach is to first dump the data in S3 and then push into elasticsearch.
I'm working on something related to Amazon elasticsearch service.For that,I need to get data from Amazon Redshift.The data to be tranfered is huge i.e. 100 GB.Is there any way to get it directly form Redshift or is it a two step process like Redshift->s3->elasticsearch?
Is it possible to transfer data from Redshift to Elasticsearch?
I like your idea to build a wrapper that can use either the local file system or S3. I'm not aware of anything existing that would provide that for you, but would certainly be interested to hear if you find anything.An alternative would be to use some sort of S3 file system mount, so that your application can always use standard file system I/O but the data might be written to S3 if your system has that location configured as an S3 mount. I don't recommend this approach because I've never heard of an S3 mounting solution that didn't have issues.Another alternative is to only design your application to use S3, and then use some sort of S3 compatible local object storage in your development environment. There are several answers tothis questionthat could provide an S3 compatible service during development.
I'm looking for the best way to switch between using the local filesystem and the Amazon S3 filesystem.I think ideally I would like a wrapper to both filesystems that I can code against. A configuration change would tell the wrapper which filesystem to use. This is useful to me because a developer can use their local filesystem, but our hosted environments can use Amazon S3 by just changing a configuration option.Are there any existing wrappers that do this? Should I write my own wrapper? Is there another approach I am not aware of that would be better?
What is the best approach in nodejs to switch between the local filesystem and Amazon S3 filesystem?
Try passing RAILS_ENV=production before shoryuken. If you pass it after, it won't work.

RAILS_ENV=production bundle exec shoryuken -r path_to_my_worker.rb -C config/shoryuken.yml --rails
The shoryuken gem is a background worker for rails applications that reads from aws SQS.I can run the shoryuken worker in my local and it's working fine. When I run it in production environment in AWS it does not work. How do you run shoryuken in production environment? I'm also thinking that this might be an issue with my aws security groups. We are using VPC. Should I allow the SQS port? If so, what port is SQS running in? I also wonder why it's asking about port 5432 which is the port of our Postgres DB.bundle exec shoryuken -r path_to_my_worker.rb -C config/shoryuken.yml --rails RAILS_ENV=production could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
Can't run rails shoryuken gem in production environment that reads from SQS
I'm afraid it can't be done, or at least there is no documentation on how to do it. I think instance metadata is calculated upon instance configuration and launch. I would suggest retrieving your hostname through the OS itself - it is the HOSTNAME environment variable or the output of the hostname command, e.g.:

PHP: echo gethostname();
Bash script: echo `hostname`; echo $HOSTNAME;
I have changed the hostname on my ec2 instance following the steps here:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-hostname.htmlHowever, the metadata service still returns the old hostname even after i rebooted my instance. How can I make the metadata return the new hostname?
How to get ec2 metadata to reflect the new hostname?
+502 things need to happen:you need a custom layeryou need to pull in a recipe/cookbook that contains the newer nodejsThe easiest way to do this is to use berkshelf as outlined here:http://docs.aws.amazon.com/opsworks/latest/userguide/cookbooks-101-opsworks-berkshelf.html#opsworks-berkshelf-opsworksIn the berksfile add the supermarket.chef.io as a source and the nodejs as a recipe.You can specify the node version in the opsworks stack config.Use the recipe in the custom layer and you should be set.
I am trying to set up an OpsWorks stack to with a Node.js layer that uses the latest version of Node (4.1.1). I am fairly new to Chef and I am not sure where in the cookbooks repo I would need to make changes to pull down and install Node 4.1.1, instead of their default which is 0.12.7.Any help is appreciated.
How do I configure OpsWorks to deploy a not-officially-supported version of Node.js?
Several people have had this same issue, and there are a few things to double check and a few tricky parts in that AWS Blog post that aren't well explained.
Double check the IAM User that you created, and make sure it has the correct IAM policy. You can use the AWS-provided "AWSCodeDeployDeployerAccess" policy if you don't want to write your own.
Check out this post in the AWS Developer Forum. The TL;DR is that the deployment group must be all lower case. For some reason GitHub down-cases the deployment group name in the API call, which will cause a name mismatch with your deployment group in AWS.
Make sure that you set your "environments" property to the name of your deployment group when you set up your "GitHub Auto-Deployment" service. The blog post doesn't say that they need to match, but if you look at the screenshots, the author does in fact use the same string for both the "environments" property in the Auto-Deployment service and the Deployment Group property in the AWS CodeDeploy service.
If you're still having a hard time setting up the GitHub hook or CodeDeploy in general, I encourage you to take my AWS CodeDeploy course.
Oops, we weren’t able to send the test payload: AWS Code Deploy doesn't support the push event.Above error shown to me when I am trying to test my hook service "Code Deploy For AWS". Also when I commit my code it should automatically deploy my new code, but it fails. Can you help me out for above?
GitHub Aws Code deployment shows "AWS CodeDeploy doesn't support the push event"
I was able to get the Host ID by callingAmazonS3Exception.getErrorResponseXml(). We're still working with Amazon to determine the root cause.
We have some Scala code running in Elastic Beanstalk (using Tomcat) that accesses S3 using the Java AWS SDK. It was working perfectly for months. Then, a few days ago, we started seeing some strange errors. It can read and write to S3 about a third of the time. The other two thirds of the time, it gets an access denied error when reading from S3.The exceptions look like this:com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6CAC5AB616FC6F23)All S3 operations use the same bucket. The IAM role has full access to S3 (allowed to do any operation using any bucket).We contacted Amazon support and they can't help us unless we provide a host ID and request ID that they can research. But the exception only has a request ID.I'm looking for one of two things: either a solution to the access denied errors, or a way to get a host ID we can give to Amazon support. I already tried callings3Client.getCachedResponseMetadata(getObjectRequest), but it always returnsnullafter the getObject call fails.
Intermittent Access Denied error from AWS S3
Those are just soft limits - just raise a support ticket for limit increase and you would be all set.You can navigate to EC2 -> Limits in the side bar.
Documentation says: Rules per network ACL : 20This is the one-way limit for a single network ACL, where the limit for ingress rules is 20, and the limit for egress rules is 20.I want to know can we increase the rules limit.
Can we increase the limit of ACL rules in AWS
For the Singapore region: the CloudWatch IP can be found if you ping the endpoint monitoring.ap-southeast-1.amazonaws.com via any AWS server. For any other region in AWS, please refer to the link below. http://docs.aws.amazon.com/general/latest/gr/rande.html#cw_region The above page lists the endpoints of all the AWS services.
I have a server(Java/Tomcat running) which was creating huge outbound traffic. This server can not be accessed from outside world only internal network server can access it. i.e. inbound is allowed only from internal network.To solve huge outbound traffic we have blocked all outbound traffic via aws security group except internal network servers.But now it has also stopped aws custom monitoring scripts to send data to cloudwatch.Sowhat is the ip rangethat I need to open in outbound rules to send traffic to cloudwatch?
Deny all outbound traffic except cloudwatch on AWS
It appears that Cross-Region Replication in Amazon S3 cannot be chained. Therefore, it cannot be used to replicate from Bucket A to Bucket B to Bucket C. An alternative would be to use the AWS Command-Line Interface (CLI) to synchronise between buckets, e.g.:

aws s3 sync s3://bucket1 s3://bucket2
aws s3 sync s3://bucket1 s3://bucket3

The sync command only copies new and changed files. Data is transferred directly between the Amazon S3 buckets, even if they are in different regions -- no data is downloaded/uploaded to your own computer. So, put these commands in a cron job or a Scheduled Task to run once an hour and the buckets will nicely replicate! See: AWS CLI S3 sync command documentation
I am trying to set up cross-region replication so that my original file will be replicated to two different regions. Right now, I can only get it to replicate to one other region. For example, my files are on US Standard. When a file is uploaded it is replicated from US Standard to US West 2. I would also like for that file to be replicated to US West 1. Is there a way to do this?
AWS cross region replication to multiple regions?
I was able to resolve this with the help of a few folks on the AWS forum. It appears that the API Gateway GET method expects an empty body. By default, if you are following the README sample that comes with the generated JS SDK, passing 'undefined' or just '{}' as the body of a GET request causes a non-matching payload, which results in an incorrect signature being calculated. For now, I just made a small tweak in /lib/apiGatewayCore/sigV4Client.js by hardcoding body = ''. This should be treated as a temporary workaround, as it may affect other API Gateway methods that require a non-empty 'body'. In my case, I only had GET methods.
I have created sample GET and POST APIs on Amazon API Gateway following their official documentation. I have generated a JS SDK for these APIs, which I am using to call these APIs from a client-side JS file hosted on S3. This works flawlessly without any 'Authorization Type'. Now, when I set the 'Authorization Type' for the GET method to 'IAM', I am required to pass IAM credentials in order for it to work. In spite of passing my AWS account's root credentials, I am getting this in the response headers:

x-amzn-ErrorType: InvalidSignatureException: http://internal.amazon.com/coral/com.amazon.coral.service/

And finally it returns a 403 error code. My question is: has anyone successfully attempted to use the generated JavaScript SDK from Amazon API Gateway with IAM authentication? Can you point out where I might be going wrong? Thanks.
Amazon API Gateway IAM authenticated example with generated JS SDK
I found the answer to my own question: "CloudSearchDomain" is documented in a separate section from "CloudSearch". I feel silly. http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudSearchDomain.html
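The same split between the domain-management client and the document/search client exists in the other SDKs, so it is easy to miss. As an illustration only (the original question is about the JavaScript SDK, and the search endpoint URL below is a placeholder), this is what using the document-level client looks like with boto3 in Python:

import boto3

# CloudSearchDomain talks to a specific domain's search endpoint, not the regional API,
# so the endpoint URL must be passed explicitly when creating the client.
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-mydomain-xxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# Run a simple search against the domain and print how many hits were found
response = client.search(query="star wars", queryParser="simple", size=10)
print(response["hits"]["found"])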
I cannot find the API call to search the search domain in the AWS-SDK documentation for NodeJS / JavaScript. The CloudSearch developer guide suggests that the AWS-SDK be used to perform search queries, yet I cannot find any such API call in the SDK. Link to the AWS-SDK documentation: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudSearch.html
How do I search my CloudSearch domain using the AWS-SDK for JavaScript?
The AWS CLI credentials are set in the credentials file and can be overridden with environment variables. To create different profiles you can use the built-in config tool:

aws configure --profile user2

Then, when you use aws to call elasticbeanstalk, you can specify this new profile:

aws --profile user2 elasticbeanstalk ...blah...blah...blah...
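Those named profiles live in ~/.aws/credentials and are shared by the CLIs and SDKs alike, so once user2 is configured you can also reuse it outside the aws command. As a small illustrative sketch (the profile name is the one created above), this is how you can confirm from Python which account a profile resolves to:

import boto3

# Uses the [user2] profile from ~/.aws/credentials instead of the default one
session = boto3.Session(profile_name="user2")
sts = session.client("sts")

# get_caller_identity is a handy sanity check that the right credentials are in use
identity = sts.get_caller_identity()
print("Account:", identity["Account"])
print("ARN    :", identity["Arn"])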
I deployed an application on EB with my own AWS account, but I need to do the same with another one. I have the user name, access key and secret access key for the AWS account I need to deploy from, but I don't even know how to switch out of my account to do it. I've been able to sign into the AWS CLI with those credentials, but I'm having trouble using the aws elasticbeanstalk CLI, so help deploying my application through that would be helpful as well. Thanks!
How do I change users in the EB CLI?
Only the identity ID is maintained between pages; credentials are not. You will need to cache the Facebook token and supply it to the credentials object when you transition between pages to get AWS credentials. You will also need to track the expiry of the Facebook token so you can refresh your cached token if it has expired. This forum post has more details on the process: https://forums.aws.amazon.com/thread.jspa?threadID=179420&tstart=25
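To make the flow concrete, here is roughly what "supply the cached token and get fresh AWS credentials" looks like. The original question is about the browser JavaScript SDK (AWS.CognitoIdentityCredentials), but the same token-for-credentials exchange is easier to show compactly with boto3; the identity pool ID, region and token below are placeholders:

import boto3

# Placeholders -- use your own identity pool and the Facebook access token you cached
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"
FACEBOOK_TOKEN = "cached-facebook-access-token"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Exchange the provider token for a Cognito identity and temporary AWS credentials
identity = cognito.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={"graph.facebook.com": FACEBOOK_TOKEN},
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": FACEBOOK_TOKEN},
)["Credentials"]

# The credentials expire; track this so you know when to refresh the provider token
print("Expires at:", creds["Expiration"])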
I'm using Facebook to create an authenticated identity via AWS Cognito. That's all working fine and I can log in and synchronise data. However, if I navigate away from my sign-in page - but remain in my site - the underlying AWS.config.credentials object is then null and I can't synchronise any data via a different page. I suspect I'm missing something obvious but can't see it from the Amazon docs and don't know what! Edit: Sorry - should have added - this is via the JavaScript SDK.
AWS.config.credentials are null between page requests
After a lot of research I found that I was specifying the wrong region. My region is 'us-west-2', but I was passing 'us-west-2c', which is not a region but an availability zone. After changing the region to 'us-west-2' it works:

AWS.config(
  region: 'us-west-2',
  access_key_id: 'xxxxxx',
  secret_access_key: 'xxxxxxxxx'
)
ec2 = AWS::EC2::Client.new
resp = ec2.start_instances({
  instance_ids: ["i-xxxxxxxxx"],
  additional_info: "String"
})
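The region/availability-zone distinction trips a lot of people up: 'us-west-2' is a region, while 'us-west-2a'/'us-west-2b'/'us-west-2c' are availability zones inside it, and SDK clients expect the region. The original code is Ruby, but as a quick illustrative check (sketched in Python here for brevity) you can list both to see which names are valid where:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Regions are what you configure the client with...
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# ...while availability zones like 'us-west-2c' only exist inside a region
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
print("Zones in us-west-2:", zones)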
I am using the aws-sdk gem. I want to stop and start an AWS instance using the 'aws-sdk' gem. Below is my code to start an already stopped Amazon instance, but it is giving me the error SocketError: getaddrinfo: Name or service not known

ec2 = AWS::EC2::Client.new(
  region: 'us-west-2c',
  credentials: {:access_key_id => 'XXXXXXXXX', :secret_access_key => 'XXXXXXXXXXX'}
)
resp = ec2.start_instances({
  instance_ids: ["i-xxxxxx"],
  additional_info: "String"
})

Please help.
Thanks,
aws-sdk gem: SocketError: getaddrinfo: Name or service not known
Eventually I found the query below to satisfy my requirements.

WITH users AS (
    SELECT user_id,
           date_trunc('day', min(timestamp)) AS activated_at
    FROM table
    GROUP BY 1
), events AS (
    SELECT user_id,
           action,
           timestamp AS occurred_at
    FROM table
)
SELECT DATE_TRUNC('day', u.activated_at) AS signup_date,
       TRUNC(EXTRACT('EPOCH' FROM e.occurred_at - u.activated_at) / (3600 * 24)) AS user_period,
       COUNT(DISTINCT e.user_id) AS retained_users
FROM users u
JOIN events e
  ON e.user_id = u.user_id
 AND e.occurred_at >= u.activated_at
WHERE u.activated_at >= getdate() - INTERVAL '11 day'
GROUP BY 1, 2
ORDER BY 1, 2

It produces a slightly different table than I described above (but is better for my needs):

signup_date  user_period  retained_users
-----------  -----------  --------------
2015-05-05   0            80
2015-05-05   1            60
2015-05-05   2            40
2015-05-05   3            20
2015-05-06   0            100
2015-05-06   1            80
2015-05-06   2            40
2015-05-06   3            20
I'm trying to analyze user retention using a cohort analysis based on event data stored in Redshift. For example, in Redshift I have:

timestamp         action        user id
---------         ------        -------
2015-05-05 12:00  homepage      1
2015-05-05 12:01  product page  1
2015-05-05 12:02  homepage      2
2015-05-05 12:03  checkout      1

I would like to extract the daily retention cohort. For example:

signup_day  users_count  d1   d2  d3  d4  d5  d6  d7
----------  -----------  ---  --  --  --  --  --  --
2015-05-05  100          80   60  40  20  17  16  12
2015-05-06  150          120  90  60  30  22  18  15

Where signup_day represents the first date we have a record of a user action, users_count is the total number of users who signed up on signup_day, d1 is the number of users who performed any action a day after signup_day, etc...

Is there a better way to represent the retention analysis data? What would be the best query to achieve that with Amazon Redshift? Is it possible to do with a single query?
Cohort analysis with Amazon Redshift / PostgreSQL
You create AMIs directly from EC2 instances, not from snapshots. Snapshots are for EBS volumes. Check that you created your AMI correctly from a running EC2 instance on which you have Apache/Tomcat installed and running (and configured to start automatically on reboot). No, you do not have to use Puppet/Chef or any other CM tool. You can do what you want in a couple of ways:

1. The simplest way is to create an AMI from your running EC2 instance and then configure your Auto Scaling group to launch new instances from that AMI based on some metric.
2. Use a base AMI without Apache/Tomcat or your software and then bootstrap new instances at launch time to download and configure everything needed.

The disadvantage of #1 is that your AMIs will get out of date quickly. The disadvantage of #2 is that your instances will take longer to come into service. I would recommend a combination of #1 and #2: capture a new AMI every few months to serve as your base AMI for launching, and update the instance at launch time via a user-data init script.
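To make the combined approach concrete, here is a rough boto3 sketch of the two moving parts: capture an AMI from the instance that is already configured, then register a launch configuration that uses that AMI and runs a small user-data script on boot to pull the latest application build. All IDs, names, bucket paths and the script body are placeholders, not the asker's actual setup:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# 1. Capture a reusable image from the instance that already has Apache/Tomcat set up
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Name="tomcat-base-2015-06",
    NoReboot=True,
)

# 2. Boot-time script so instances launched months later still get the latest build
#    (placeholder paths; boto3 normally base64-encodes this user data for the API)
user_data = """#!/bin/bash
aws s3 cp s3://my-deploy-bucket/app.war /var/lib/tomcat/webapps/ROOT.war
service tomcat restart
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="tomcat-lc-v1",
    ImageId=image["ImageId"],
    InstanceType="t2.micro",
    UserData=user_data,
)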
There are a number of questions about auto scaling, but none of them talk about scaling out the software stack installed on these servers. AWS Auto Scaling only scales out the resources, not the software on them. In my case I am looking to scale out the Tomcat server (and Apache HTTPD server) installed on the first instance so that it is part of the new instance that the AWS Auto Scaling service creates. I followed the regular process to set up scaling for my application on Amazon Web Services EC2 instances:

Created a snapshot from the existing instance with the exact configuration of the running instance - Success
Created an AMI from the above snapshot - Success
Created an Auto Scaling group and launch configuration - Success
Scaling policy is to create a new instance when CPU >= 65% two times - Success

The above procedure only creates a new instance, but it does not copy the software stack present on the image. How do I accomplish auto scaling in such a way that when AWS auto scaling happens, the Tomcat server that is part of the AMI is also copied and started up in the new scaled-out instance? Do I definitely have to use Puppet/Chef or any such tools to achieve this? Or is there an option in AWS using the command line? Please note that the Elastic Load Balancer automatically adds the new instance as per the launch configuration, but it shows 'Out of Service' since there is no Apache server installed on the new scaled-up instance.
How to scale out Tomcat on AWS EC2 instance?