Columns: Response, Instruction, Prompt
This can be a bit difficult since the command line options for Lambda require that you use `aws lambda get-policy` in order to find out which resources are allowed to perform the `lambda:InvokeFunction` action on a given function. These permissions aren't shown as part of the Lambda configuration for `aws lambda get-function-configuration`. Use bash and jq to get a list of functions and print their allowed invokers, like this:

```bash
aws lambda list-functions | jq '.Functions[].FunctionName' --raw-output | while read f; do
  policy=$(aws lambda get-policy --function-name ${f} | jq '.Policy | fromjson | .Statement[] | select(.Effect=="Allow") | select(.Action=="lambda:InvokeFunction") | .Condition.ArnLike[]' --raw-output)
  echo "FUNCTION ${f} CAN BE INVOKED FROM:"
  echo ${policy}
done
```

This will list the ARNs of the resources that are allowed to use the `lambda:InvokeFunction` action on all Lambda functions returned from `list-functions`.
How to know which S3 bucket triggers which Lambda without going through all the Lambdas?
How to know which S3 bucket triggers which Lambda?
All subnets within a VPC can communicate with each other by default. In fact, the only way to prevent this is by defining network ACLs that deny traffic. So, yes, an instance in one private subnet can connect to an instance in another private subnet (in the same VPC). Just use the private IP address to connect.
I have a VPC; inside there is a public subnet and two private subnets. I configured security groups as well as route tables, and I can access EC2 instances in the two private subnets from the instance in the public subnet. Now I want to know if I can directly connect to the instances in one private subnet from the instances in the other private subnet. If yes, how? Thanks, Philip
aws private subnets connectivity
This is very likely to be a side effect of your API Gateway endpoint being configured as Edge Optimized instead of Regional, because with an edge-optimized API, there is a hidden CloudFront distribution provisioned automatically... however, the CloudFront distribution associated with your API is not owned by your account, but rather by an account associated with API Gateway.

"Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution that is created and managed by API Gateway." — Amazon API Gateway Supports Regional API Endpoints

This creates a conflict that prevents the wildcard distribution from being created. Subdomains that mask a wildcard are not allowed to cross AWS account boundaries, because this would potentially allow traffic for a wildcard distribution's matching domains to be hijacked by creating a more specific alternate domain name; but, as you noted from the documentation, you can do this within your own account.

Redeploying your API as Regional instead of Edge Optimized is the likely solution (a small boto3 sketch of setting up a Regional custom domain is below). If you still want the edge-optimization behavior, you can create another CloudFront distribution with that specific subdomain for use with the API. This would be allowed, because you would own the distribution. Regional APIs are still globally accessible.
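A Python/boto3 sketch of creating the custom domain as Regional, assuming the region, domain name, ACM certificate ARN, API ID and stage are placeholders for your own values:

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")  # assumption: your region

# Create (or recreate) the custom domain as REGIONAL so API Gateway does not
# provision a hidden CloudFront distribution for it.
domain = apigw.create_domain_name(
    domainName="foo.example.com",  # placeholder domain
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/abc",  # placeholder
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map a base path to an existing API stage (IDs are placeholders).
apigw.create_base_path_mapping(
    domainName="foo.example.com",
    restApiId="a1b2c3d4e5",
    stage="prod",
)

# This is the hostname to target from your Route 53 alias/CNAME record.
print(domain["regionalDomainName"])
```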
In API Gateway I've created one custom domain, foo.example.com, which creates a CloudFront distribution with that CNAME. I also want to create a wildcard domain, *.example.com, but when attempting to create it, CloudFront throws an error:

CNAMEAlreadyExistsException: One or more of the CNAMEs you provided are already associated with a different resource

AWS in its docs states that:

"However, you can add a wildcard alternate domain name, such as *.example.com, that includes (that overlaps with) a non-wildcard alternate domain name, such as www.example.com. Overlapping domain names can be in the same distribution or in separate distributions as long as both distributions were created by using the same AWS account."
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-wildcard

So I might have misunderstood this; is it possible to accomplish what I've described?
CloudFront - overlapping Alternate Domain Names
DMS is a data migration service. A view is a virtual table (represented by a SQL code/object) and it does not contain any data by itself like a table does.
I'm using AWS DMS to migrate data from one Postgres database to another Postgres database. Everything works fine, except one thing: the views are not replicated on my target database. I've read that this cannot be done between heterogeneous databases (i.e. from Oracle to Postgres) using DMS, but I imagine that this is possible somehow when we're using the same database. Does someone know how to replicate the views using AWS DMS from Postgres to Postgres?
Replicate views from Postgres to Postgres on AWS DMS
So the problem was with forwarding of cookies, especially the XSRF_TOKEN cookie. Cookies aren't forwarded by default through CloudFront; you have to set up a whitelist to do that. Just edit the CloudFront distribution; the setting is in the Behaviors section. Other cookies to consider forwarding are laravel_session if you use cookie sessions and remember_* if you use the remember-login feature.
Alright hello, I have deployed my Laravel app on AWS ELB and I set up a CloudFront distribution for my app. Now I am facing a CSRF token mismatch. I know that this error can be caused by multiple config values that may be wrong. I have managed to fix this issue by myself in the past, but it was a long time ago and I don't know what I did and where. So if you have some tips on what could be wrong and where, then definitely send them my way. Thanks.

EDIT: The exception happens after switching to CloudFront. My problem is to get it working with CloudFront.
AWS Cloudfront causing CSRF Token Mismatch Exception
I installed it like this. I downloaded the PHP source code of the currently installed version on my Amazon Linux 2:

```bash
wget http://php.net/get/php-7.2.8.tar.bz2/from/a/mirror
```

Unpacked it and went into php-7.2.8/ext/imap/. Compiled the extension:

```bash
phpize
./configure
```

I got some errors. Some U8T_CANONICAL stuff, so:

```bash
sudo yum install libc-client-devel
```

Then libc-client.a was not found, so I created a symlink for it:

```bash
cd /usr/lib
sudo ln -s /usr/lib64/libc-client.a
```

Then some other IMAP library error, so:

```bash
sudo yum install uw-imap-static
```

I got some other errors, so the working configure line was:

```bash
./configure --with-kerberos --with-imap-ssl
make
```

SUCCESS!

```bash
cd php-7.2.8/ext/imap/modules
sudo cp imap.so /usr/lib64/php/modules/
```

Created an ini file to load it:

```bash
sudo vi /etc/php.d/30-imap.ini
```

and added this content to the file:

```
extension=imap
```

Restarted the PHP service (you might need to restart httpd depending on your PHP installation):

```bash
sudo systemctl restart php-fpm
```

phpinfo now contains:

```
imap
IMAP c-Client Version 2007f
SSL Support enabled
Kerberos Support enabled
```
I need to install php-imap on an Amazon EC2 Linux 2 instance. All the PHP stuff is inside the amzn2extra-lamp-mariadb10.2-php7.2 extra, but the php-imap package is missing. Any advice? Thanks
installing php-imap on amazon ec2 linux 2
Here's the overall strategy:

Let PyPDF2 handle the decoding

PyPDF2 will be much smarter at determining how to decode the file than you will be. PdfFileReader can read from a stream or a path to a file, so you can read the file from S3 and prepare it as a byte stream, and let PdfFileReader do the hard work.

Preparing the byte stream

To prepare the file stream as a byte stream you can use BytesIO from the io module (available in Python 2.6+ and Python 3):

```python
from io import BytesIO
```

For your code example:

```python
from io import BytesIO

import boto3
from PyPDF2 import PdfReader

s3 = boto3.resource("s3")
obj = s3.Object(bucket_name, itemname)
fs = obj.get()["Body"].read()
reader = PdfReader(BytesIO(fs))
```
I am trying to get a PDF file stored in one of my S3 buckets in AWS, and get some of its metadata like number of pages and file size. I successfully get the PDF file from the S3 bucket, getting this when calling print(obj):

```
s3.Object(bucket_name='somebucketname', key='somefilename.pdf')
```

When using PyPDF2.PdfFileReader() I try using the raw file, a UTF-8 decoded file, and an ISO-8859-1 decoded file. The ISO-8859-1 decoded file is the only one that doesn't raise an exception, but when trying to pass it into PdfFileReader as a parameter I get an error, and this traceback:

```
Traceback (most recent call last):
  File "s3_test.py", line 18, in <module>
    pdfFile = PdfFileReader(parse3)
  File "/usr/local/lib/python3.6/site-packages/PyPDF2/pdf.py", line 1081, in __init__
    fileobj = open(stream, 'rb')
ValueError: embedded null byte
```

Am I using the wrong encoding type to decode this PDF file, or is it something else, like the first argument of PdfFileReader has to be a file path? Is there an easier way to access an S3 PDF object's metadata without having to jump through hoops to get there?

Python script:

```python
import boto3
from PyPDF2 import PdfReader

s3 = boto3.resource('s3')
obj = s3.Object(bucket_name, itemname)
parse3 = obj.get()['Body'].read().decode("ISO-8859-1")
pdfFile = PdfReader(parse3)
```
Issue with PyPDF2 and decoding pdf file from S3
In assume_role_policy, can you change the "Principal" line as mentioned below? You currently have ec2.amazonaws.com.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
```
I am trying to provision an ECS cluster with Terraform; everything seems to work well up until I am creating the ECS service:

```hcl
resource "aws_ecs_service" "ecs-service" {
  name            = "ecs-service"
  iam_role        = "${aws_iam_role.ecs-service-role.name}"
  cluster         = "${aws_ecs_cluster.ecs-cluster.id}"
  task_definition = "${aws_ecs_task_definition.my_cluster.family}"
  desired_count   = 1

  load_balancer {
    target_group_arn = "${aws_alb_target_group.ecs-target-group.arn}"
    container_port   = 80
    container_name   = "my_cluster"
  }
}
```

and the IAM role is:

```hcl
resource "aws_iam_role" "ecs-service-role" {
  name               = "ecs-service-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "ecs-service-role-attachment" {
  role       = "${aws_iam_role.ecs-service-role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
```

I am getting the following error message:

```
aws_ecs_service.ecs-service: 1 error(s) occurred:
aws_ecs_service.ecs-service: InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
```
Terraform: ECS service - InvalidParameterException
Operations on objects: pre-signed URLs are supported for GET and PUT. They are not supported for LIST, COPY, or DELETE.

The reason you are getting SignatureDoesNotMatch is that the operation is part of the signature. You cannot change the operation from GET to DELETE and expect the signature to match.
Hi, I'm generating the S3 presigned "GET" URLs to display images using code modified from https://gist.github.com/kelvinmo/d78be66c4f36415a6b80. Ideally I should also be able to generate a presigned delete URL, put it in the browser, and the image would get deleted. I would like to modify this for the delete operation; there seems to be no info online on how to do this with a presigned URL aside from the AWS docs, which are vague but say it's possible. I haven't managed to find any online tutorials using presigned URLs for delete. https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html

I tried just changing the Get to Delete in the request as many docs say, but this creates an incorrect signature:

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method

It looks like S3 is matching the DELETE signature with the PUT signature and saying it doesn't match, so how to do a delete?! Any clues or links would be helpful. I'm assuming the current GET script is sending the wrong parameters or something.
Generate presigned s3 Url for DELETE operation
There is no managed MongoDB service provided by AWS. However, there are managed MongoDB services which provide hosting on AWS (in addition to Azure, GCP etc.); MongoDB Atlas is an example. MongoDB Atlas provides a managed MongoDB service with options to host on AWS, and you may opt to use that. You can choose the region of your preference and then use the VPC Peering feature to make the application servers in your existing VPC/account communicate with the MongoDB Atlas setup. You can read more about all this at https://www.mongodb.com/cloud/atlas
I am looking to use MongoDB for my project but don't want to take on the administrative overhead of managing Mongo services. As my project is currently hosting most of its components on AWS, I am looking for a managed MongoDB service (if any) provided by AWS. AWS provides DynamoDB as a managed service and it's well documented, but accessing a managed MongoDB service on AWS is not very clear to me. I have read about the MongoDB managed service 'Atlas', but I am not sure whether I can access it as a service from my existing AWS instances. Please provide your inputs for the best practice suitable for this scenario.
Is there any managed MongoDB service that AWS provides? [closed]
If it's not a production instance, you can start with a t2.medium instance. If it's a production instance, start with an m5.large. Attach a new EBS volume of size 10 GB and configure MongoDB to use this new volume as the data directory. This helps to scale up your storage easily at a later point in time. Make sure you format your EBS volume with an XFS filesystem before installing MongoDB, which is required for best performance by Mongo. Also, later, if you feel like increasing the instance size when your traffic increases, just use the "instance modify" option to get it done.
We plan to use MongoDB in production for our website, which will be hosted (for the start) on a single EC2 instance. What is the recommendation for MongoDB, which will have around 25k documents at the start with low traffic? So far I am not used to AWS, therefore I have no comparison to other dedicated hosters. The "storageSize" of the collection in question will be around 400 MB, "totalIndexSize" maybe around 20 MB.
What is the recommended EC2 instance size for MongoDB
Suggested approach: a scheduled Lambda function that fires every 3 months and performs 4 steps (a sketch follows below):

1. Start up your instance.
2. Use the EC2 Run Command API to remotely execute a command on your Lightsail instance.
3. Monitor the command until complete.
4. Shut down the instance.

Some prerequisites:

- Create a Lambda function and grant it permissions with an IAM role to use ssm:*, ec2:startinstances and ec2:stopinstances (this will allow your Lambda function to communicate with your Lightsail instance and also monitor and send commands).
- Make the Lambda function a scheduled function, so you can trigger it every 3 months automatically.
- Have SSM Agent installed on your instance like this.
- Give your instance the appropriate IAM permissions for SSM communications through an instance policy ("ec2messages:*", "ssm:updateinstanceinformation", "ssm:listassociations"); this will allow the instance to communicate with AWS SSM.

Now write your Lambda function using the AWS SDK and it'll work like a charm. If you're worried about costs, unless you run one mother of a script, you should fall within the free tier, as you get 400,000 GB-seconds of compute time per month. This means you can run a Lambda function with 1 GB of memory for 400,000 seconds every month for free.

PS: I mentioned EC2 a lot. I'm aware you're using Lightsail, but as it's just a wrapper for EC2, I imagine the same functionality is available; correct me if I'm wrong.
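A minimal Python/boto3 sketch of that Lambda, assuming the instance is reachable through EC2/SSM; the region, instance ID and script path are placeholders you would replace (a purely Lightsail-managed instance may need the Lightsail API for the start/stop calls instead):

```python
import time
import boto3

REGION = "us-east-1"                 # assumption: adjust to your region
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

ec2 = boto3.client("ec2", region_name=REGION)
ssm = boto3.client("ssm", region_name=REGION)

def handler(event, context):
    # 1. Start the instance and wait until it is running.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    # 2. Run the renewal script remotely via SSM Run Command.
    cmd = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["/home/ec2-user/renew-certs.sh"]},  # placeholder script
    )
    command_id = cmd["Command"]["CommandId"]

    # 3. Poll until the command reaches a terminal state.
    time.sleep(5)  # give SSM a moment to register the invocation
    while True:
        inv = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
        if inv["Status"] in ("Success", "Failed", "Cancelled", "TimedOut"):
            break
        time.sleep(10)

    # 4. Shut the instance down again.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    return inv["Status"]
```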
I'm looking to automatically renew my SSL certificates for a website I'm hosting on GitLab Pages using certbot auto. I already have this working, but I have to keep my Lightsail instance running continuously. I'm just looking for an automatic way to boot up my Lightsail instance every 3 months, run a little script once booted, and then power down again. At the moment it's costing me $5 a month, and I'm only using it for a few minutes each time. Is there a way to automatically schedule the bootup of a Lightsail instance every 3 months?
Is there a way to automatically schedule the bootup of a Lightsail instance every 3 months?
I found the solution: wait a few hours.
I created a hosted zone for my domain, transferred DNS service to Amazon Route 53, checked the response from Route 53, and it showed:

```
DNS request sent to Route 53
mywebsite.com.br. IN NS
EDNS0 client subnet IP 24
DNS response code NOERROR
Protocol UDP
Response returned by Route 53
ns-9999.awsdns-99.org.
ns-9999.awsdns-99.co.uk.
ns-999.awsdns-99.com.
ns-99.awsdns-99.net.
```

Which is correct, and I created a record that points at my Beanstalk, like this:

```
DNS response code NOERROR
Protocol UDP
Response returned by Route 53
MYBEANSTALKIP
MYBEANSTALKIP
```

All seems correct, except the fact I can't access it, and when I try to ping or open it in a browser, all it shows is an incorrect URL message. I even tried to use https://www.whatsmydns.net/#A/ to check it; nothing at all. Did I miss any step?
Response returned by Route53 shows no error, yet when I try to access it in a browser, no response
Yes, you need to use filters with the describe_vpcs API. The code below will list all VPCs which match both the Name tag value and the CIDR block:

```python
import boto3

client = boto3.client('ec2', region_name='us-east-1')

response = client.describe_vpcs(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': [
                '<Enter your VPC name here>',
            ]
        },
        {
            'Name': 'cidr-block-association.cidr-block',
            'Values': [
                '10.0.0.0/16',  # Enter your CIDR block here
            ]
        },
    ]
)

resp = response['Vpcs']

if resp:
    print(resp)
else:
    print('No vpcs found')
```

The CIDR block is the primary check for a VPC. I would suggest using the CIDR filter alone instead of combining it with the Name tag, as then you can prevent creating VPCs with the same CIDR blocks.
I can create a VPC really quickly like this:

```python
import boto3 as boto

inst = boto.Session(profile_name='myprofile').resource('ec2')

def createVpc(nid, az='us-west-2'):
    '''Create the VPC'''
    vpc = inst.create_vpc(CidrBlock='10.' + str(nid) + '.0.0/16')
    vpc.create_tags(
        Tags=[
            {
                'Key': 'Name',
                'Value': 'VPC-' + nid
            },
        ]
    )
    vpc.wait_until_available()

createVpc('111')
```

How can I check whether a VPC with CidrBlock 10.111.0.0/16 or Name VPC-111 already exists before it gets created? I actually want to do the same check prior to any AWS resource creation, but VPC is a start. Best!

EDIT: found that vpcs.filter can be used to query a given VPC's tags, e.g.:

```python
fltr = [{'Name': 'tag:Name', 'Values': ['VPC-' + str(nid)]}]
list(inst.vpcs.filter(Filters=fltr))
```

which returns a list object like this: [ec2.Vpc(id='vpc-43e56b3b')]. A list with length 0 (zero) is a good indication of a non-existent VPC, but I was wondering if there is a more boto/AWS way of detecting that.
Boto3: How to check if VPC already exists before creating it
Your create-deployment call will return a deployment ID. Use that in aws deploy get-deployment --deployment-id XXX to see the status and info of the deployment: http://docs.aws.amazon.com/cli/latest/reference/deploy/get-deployment.html

You can use aws deploy wait deployment-successful --deployment-id XXX to wait for completion: http://docs.aws.amazon.com/cli/latest/reference/deploy/wait/deployment-successful.html (a boto3 equivalent is sketched below).
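If the Bamboo step can run Python instead of shelling out to the AWS CLI, the same create-then-wait pattern is available in boto3; a sketch, with the region, application, deployment group and revision location as placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")  # assumption: region

# Trigger the deployment (all values are placeholders for your setup).
deployment_id = codedeploy.create_deployment(
    applicationName="my-app",
    deploymentGroupName="my-deployment-group",
    revision={
        "revisionType": "S3",
        "s3Location": {"bucket": "my-bucket", "key": "app.zip", "bundleType": "zip"},
    },
)["deploymentId"]

# Block until CodeDeploy reports success; this raises if the deployment fails,
# which makes the Bamboo build go red instead of green.
codedeploy.get_waiter("deployment_successful").wait(deploymentId=deployment_id)
```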
I would like to know if it's possible to track the deployment status of CodeDeploy by using the CLI. Currently, I'm using Bamboo to trigger the CodeDeploy deployment via the CLI using aws deploy create-deployment ... My Bamboo plan will show green the moment the deployment is triggered instead of checking whether the actual deployment succeeded. Is there a way to let Bamboo/the command line verify whether the actual deployment was successfully deployed? Many thanks!
AWS CodeDeploy deployment tracking
This will probably be a case of your result data set exceeding the 1 MB limit:

"If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria."

Check the result for the LastEvaluatedKey field and use it for the next scan operation, passing it as ExclusiveStartKey (the loop is sketched below).
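The question's code is Node.js, but the pagination pattern is identical in any SDK; here is a minimal Python/boto3 sketch of it, with the region and table name as placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # assumption: region

def scan_all_pages(table_name="my-table"):  # placeholder table name
    """Keep scanning until DynamoDB stops returning a LastEvaluatedKey."""
    items = []
    kwargs = {
        "TableName": table_name,
        "FilterExpression": "#type = :type",
        "ExpressionAttributeNames": {"#type": "type"},
        "ExpressionAttributeValues": {":type": {"S": "page"}},
    }
    while True:
        resp = dynamodb.scan(**kwargs)
        items.extend(resp["Items"])
        last_key = resp.get("LastEvaluatedKey")
        if not last_key:
            break
        kwargs["ExclusiveStartKey"] = last_key  # resume where the last page stopped
    return items
```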
In a DynamoDB table, I have an item with the following scheme:

```
{
  id: 427,
  type: 'page',
  ...other_data
}
```

When querying on the primary index (id), I get the item returned as expected. With a scan operation inside the AWS DynamoDB web app to get all items with type page, 188 items including this missing item are returned. However, performing this scan operation inside Lambda with the AWS SDK, only 162 items are returned. Part of the code looks like:

```javascript
const params = {
  TableName: <my-table-name>,
  FilterExpression: '#type = :type',
  ExpressionAttributeNames: { '#type': 'type' },
  ExpressionAttributeValues: { ':type': 'page' }
};

dynamodb.scan(params, (error, result) => {
  if (error) {
    console.log('error', error);
  } else {
    console.log(result.Items); // 162 items
  }
});
```

What is missing here?
DynamoDB scan leaves valid item out
According to the AWS documentation about hosted Direct Connect, you can only get a sub-1G connection through your ISP.
I need to have a 5 Gbps Direct Connect connection to my Amazon VPC from my servers residing at an ISP data centre. I can't wait for more than 1 week to set it up. Is it possible to get the 5 Gbps through a hosted Direct Connect connection?
AWS hosted Direct Connect through ISP
AWS support solved the problem. Here's their answer:

When Beanstalk is deploying an application, it keeps your application files in a "staging" directory while the EB Extensions and Hook Scripts are being processed. Once the pre-deploy scripts have finished, the application is then moved to the "production" directory. The issue you are having is related to the "manage.py" file not being in the expected location when your "01_collectstatic" command is being executed.

The staging location for your environment (Python 3.4, Amazon Linux 2017.03) is "/opt/python/ondeck/app".

The EB Extension "commands" section is executed before the staging directory is actually created. To run your script once the staging directory has been created, you should use "container_commands". This section is meant for modifying your application after the application has been extracted, but before it has been deployed to the production directory. It will automatically run your command in your staging directory.

Can you please try implementing the container_commands section and see if it helps resolve your problem? The syntax will look similar to this (but please test it before deploying to production):

```yaml
container_commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
```
I am editing my .ebextensions .config file to run some initialisation commands before deployment. I thought these commands would be run in the same folder as the extracted .zip containing my app, but that's not the case. manage.py is in the root directory of my zip, and if I run the commands:

```yaml
01_collectstatic:
  command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
```

I get:

```
ERROR: [Instance: i-085e84b9d1df851c9] Command failed on instance. Return code: 2 Output: python: can't open file 'manage.py': [Errno 2] No such file or directory.
```

I could do

```yaml
command: "python /opt/python/current/app/manage.py collectstatic --noinput"
```

but that would run the manage.py that was successfully deployed previously instead of the one that is being deployed at the moment.

I tried to check the working directory of the commands run by the .config by doing command: "pwd", and it seems that pwd is /opt/elasticbeanstalk/eb_infra, which doesn't contain my app. So I probably need to change $PYTHONPATH to contain the right path, but I don't know which path it is.

In this comment the user added the following to his .config file:

```yaml
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myapp.settings
    PYTHONPATH: "./src"
```

because his manage.py lives inside the src folder within the root of his zip. In my case I would do PYTHONPATH: "." but it's not working.
Run the right scripts before deployment on Elastic Beanstalk
You can check here: https://forums.developer.amazon.com/articles/2749/how-do-i-suppress-my-alexa-skill.html

Summary: skill suppression means your skill will no longer be available to end users. There are two options for skill suppression:

Hard Take Down Suppression: disables your skill for current users that have it enabled and also makes the skill unavailable for enablement by new users.

Soft Hidden Suppression: the skill remains active for current users that have it enabled, but the skill is unavailable for enablement by newer users.

To suppress your skill, please sign into your developer account and file a contact-us request that includes the skill name, application ID, type of suppression request, and reasoning for suppression: https://developer.amazon.com/appsandservices/support/contact/contact-us

The skill, once back in 'development' in the developer portal, can be resubmitted at a later point with or without changes.
I need to remove an Alexa skill from the Amazon Alexa console; it went Live months ago. But I cannot find any buttons or functions in the Alexa console to remove it. It's strange that a developer cannot remove his own developed skill from Amazon Alexa.
How to remove an Alexa skill from the Amazon Alexa console?
I linked to it in your other thread: these are the supported event sources. Notice that CloudWatch Events is one of the possible event types. You could set up a Lambda to, for example, run every minute and poll an SQS queue (sketched below). You cannot directly trigger a Lambda off of an SQS queue.
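A minimal Python/boto3 sketch of that polling approach, assuming a CloudWatch Events schedule (for example rate(1 minute)) invokes the handler and the region and queue URL are placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # assumption: region
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def handler(event, context):
    """Invoked on a schedule; drains one batch of messages per run."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=5,  # short long-poll so we stay well inside the Lambda timeout
    )
    for msg in resp.get("Messages", []):
        print("processing", msg["Body"])  # replace with real work
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```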
Does AWS Lambda provide support for listening to an SQS queue? I found some examples which say one can do that, but I am not sure if AWS Lambda explicitly provides support for it. When I create the Lambda function, I do see one blueprint for SQS. So, is it supported?
AWS Lambda integration with SQS
```python
import boto3

iam = boto3.resource('iam')

def isPasswordEnabled(user):
    login_profile = iam.LoginProfile(user)
    try:
        # Loading the login profile raises an exception if the user has no
        # console password (i.e. no login profile exists).
        login_profile.create_date
        return True
    except Exception:
        return False
```

```
>>> isPasswordEnabled('user1')
True
>>> isPasswordEnabled('user2')
False
```
Among the users in IAM, I want to programmatically get the list of all password-enabled users. From the AWS Console, I can easily spot them. But how do I get their list programmatically? I want to use Python boto to do that. I was reading up here: http://boto3.readthedocs.io/en/latest/reference/services/iam.html#iam, but by most of the ways listed in this doc, I can only see the option of using 'PasswordLastUsed', which would be null in three cases:

1. The user does not have a password
2. The password exists but has never been used
3. There is no sign-in data associated with the user

So just by checking whether 'PasswordLastUsed' is null I cannot claim that the user does not have a password and, thereby, cannot get all the users with passwords. Am I missing something here? Any other way or any other Python resource I can use to do this?
Using boto3, how to check if AWS IAM user has password?
If you set up CloudTrail you could have a Lambda function monitor the logs and notify you on the tagging event PutBucketTagging.

Example Lambda function: https://github.com/retailnext/aws-lambda-cloudtrail-alert
CloudTrail documentation: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
I'd like to have an AWS Lambda triggered when a tag is added to an existing object in an S3 bucket (in the same way as we can do it for object create and remove). Any way to do that?
Is there any event generated when tag is added to an existing S3 object?
Your template works perfectly fine for me, except that I had to specify the ports for the App security group:

```yaml
Resources:
  SecurityGroupBastion:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Bastion security group
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: tcp
          FromPort: 22
          ToPort: 22
      VpcId: vpc-abcd1234
  SecurityGroupApplication:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Application security group
      SecurityGroupIngress:
        - SourceSecurityGroupId: !Ref SecurityGroupBastion
          IpProtocol: tcp
          FromPort: 22
          ToPort: 22
```
I have the following security group in a YAML template. I'd like to have the "SecurityGroupApplication" security group allow incoming connections from "SecurityGroupBastion". However, the validate-template function of the AWS client is telling me unhelpful information like "unsupported structure". OK, but what is wrong with the structure? Ideas?

```yaml
Resources:
  SecurityGroupBastion:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Bastion security group
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: tcp
          FromPort: 22
          ToPort: 22
      VpcId: !Ref vpcId
  SecurityGroupApplication:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Application security group
      SecurityGroupIngress:
        - SourceSecurityGroupId: !Ref SecurityGroupBastion
          IpProtocol: tcp
```
How do I add a cloudformation security group ingress rule that refers to another security group?
Check the documentation and forums; it could be that FIFO queues are not available in all regions currently.
I am trying to create a FIFO SQS queue (just learning); looking at the docs here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-properties-sqs-queues-prop

My simple CF template is:

```yaml
Resources:
  FifoQueue:
    Type: "AWS::SQS::Queue"
    Properties:
      FifoQueue: True
      QueueName: "FifoQueue.fifo"
```

I get the following error: Unknown Attribute FifoQueue. If I delete the last line, for the QueueName, I get:

"The name of a FIFO queue can only include alphanumeric characters, hyphens, or underscores, must end with .fifo suffix and be 1 to 80 in length."

Does anybody have an example of creating a FIFO queue with CloudFormation?
Creating FIFO SQS queue with cloudformation
You need to use the topic() function in the AWS IoT SQL query, like this:

```sql
SELECT * as data, topic() as topic FROM 'desired/+/topic'
```

In this case, your event will include the original message in the 'data' field and the topic it was published on in the 'topic' field. You can also pass an integer as a parameter to the topic() function to return only a sub-group (a single segment of the topic).

More detail in the official documentation: http://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html#iot-function-topic
I've got a Lambda function called by an IoT rule and I would like to know the topic name from inside this Lambda function. So far I'm only able to retrieve the message data from the event parameter. Nothing in the context parameter either. I haven't found anything in the documentation... Is it even possible?
Is it possible to retrieve the topic name inside a Lambda function called by an IoT rule
Cognito User Pools does not support SRP authentication from the .NET SDK. You will not be able to use AuthFlowType.USER_SRP_AUTH with the InitiateAuth API call. If you want to sign in using a USERNAME and PASSWORD directly, you can look at the Admin Authentication flow, which uses the AdminInitiateAuth API and the ADMIN_NO_SRP_AUTH flow (illustrated below).
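To illustrate the shape of that call, here is a Python/boto3 sketch (the .NET SDK's AdminInitiateAuthRequest takes the equivalent parameters); the region, pool ID, client ID and credentials are placeholders, and the app client must have the admin auth flow enabled:

```python
import boto3

idp = boto3.client("cognito-idp", region_name="us-east-1")  # assumption: region

# ADMIN_NO_SRP_AUTH sends the username/password directly. This is meant for
# server-side code: the caller needs IAM permission for AdminInitiateAuth.
resp = idp.admin_initiate_auth(
    UserPoolId="us-east-1_EXAMPLE",   # placeholder user pool ID
    ClientId="1example23456789",      # placeholder app client ID
    AuthFlow="ADMIN_NO_SRP_AUTH",
    AuthParameters={"USERNAME": "jdoe", "PASSWORD": "Passw0rd!"},  # placeholders
)

# Assuming no additional challenge is required, the JWTs are returned here.
tokens = resp["AuthenticationResult"]
print(tokens["IdToken"])  # the JWT to pass around and validate downstream
```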
I am writing a console POC to demo AWS Cognito authentication (User Pool, not Federated Identities) as our API Gateway authentication mechanism (not hosted in AWS). This is being written in C#. I have successfully created a user and confirmed them, but now I need to authenticate to retrieve a JWT that I can pass around and validate downstream. The following code:

```csharp
using (var client = new AmazonCognitoIdentityProviderClient())
{
    var initAuthRequest = new InitiateAuthRequest();
    initAuthRequest.AuthParameters.Add("USERNAME", username);
    initAuthRequest.AuthParameters.Add("PASSWORD", password);
    initAuthRequest.ClientId = clientId;
    initAuthRequest.AuthFlow = AuthFlowType.USER_SRP_AUTH;

    var response = client.InitiateAuth(initAuthRequest);
    WriteLine("auth ok");
}
```

yields this exception:

```
An unhandled exception of type 'Amazon.CognitoIdentityProvider.Model.InvalidParameterException' occurred in AWSSDK.Core.dll
Additional information: Missing required parameter SRP_A
```

I cannot find a way in the .NET SDK of generating an SRP header; can anyone help? Thanks, KH
Authentication AWS Cognito SRP
It depends. API Gateway is mostly used to give temporary access to Lambda functions in environments that are not secure (i.e. browsers, desktop apps, NOT servers). If your environment is secure, as in it runs on an EC2 instance with an IAM role, or on another server with securely stored credentials, then feel free to use the SDK and call the Lambda function directly. If you need to expose your Lambda function to the entire internet, or to authorised users on the web, or to any user that has the potential to grab the access key and secret during transit, then you will want to stick API Gateway in front. With API Gateway you can secure your Lambda functions with API keys, or through other authorisers such as Amazon Cognito, so that users need to sign in before they can use the API endpoint. This way they only gain temporary credentials, rather than permanent ones that shouldn't be available to anyone.
I'm trying to call a Lambda function from NodeJS. After research I know 2 ways to do it:

1. Assign the Lambda function to AWS API Gateway and call that API.
2. Call the Lambda function through the AWS SDK.

What are the pros and cons of API Gateway and the AWS SDK? And when should each way above be used?
Amazon Web Services - Should I use AWS API Gateway or the AWS SDK?
In the security groups, assign incoming access to other security groups by specifying a security group ID instead of IP addresses. In the web console, if you start typing "sg" in the source field it will pop up a list of your security groups to choose from. Using a security group ID as the source allows all resources that belong to that security group to have access.Alternatively, if you just want one rule that allows access to every resource in your VPC you would specify your VPC's IP range.
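For reference, the same kind of rule can also be added programmatically; a boto3 sketch, with the region and both security group IDs as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region

# Allow every resource in the source group to send any traffic to members of
# the target group (the IDs below are placeholders).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the group being opened up
    IpPermissions=[
        {
            "IpProtocol": "-1",  # -1 = all protocols and ports
            "UserIdGroupPairs": [
                {"GroupId": "sg-0fedcba9876543210"}  # placeholder: the source group
            ],
        }
    ],
)
```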
I am setting up EMR clusters on demand, and have a Windows EC2 server as a workstation and a Linux EC2 server as a secondary server, all in the same VPC. I would like to avoid having to set security group rules each time an instance comes up with a new IP. How would I simply allow any traffic to flow freely between all servers in the same VPC? Thanks!

EDIT: Thanks for the replies. I know this is not good practice in production, but we are dealing with some issues tracking down functionality which we believe is caused by ports; this is just an exploration phase, and this will help us. Thanks!
AWS: How to allow all TCP traffic between all instances in same VPC?
The problem was the VPC. Even though I had a simple VPC with just a public subnet, Beanstalk could not talk to the instance and so could not deploy the ECS task definition and Docker containers to the instance. By creating two subnets, a public and a private one, and having a NAT instance in the public subnet that acts as the router for the instances in the private subnet, the setup worked for me and I could deploy the ECS task definition successfully to the EC2 instance in the private subnet.
I want to deploy a multi-container application to Elastic Beanstalk. I get the following error:

Error 1: The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.

I have set up the VPC with just the public subnet and a security group that allows all traffic, both inbound and outbound. I know this is not encouraged for production-level deployment, but I have reduced the complexity to find the cause of the error. So, the load balancer and the EC2 instance are inside the same public subnet that is attached to the internet gateway. They both share the same security group allowing all the traffic.

Before the above error, I also get another error stating:

Error 2: No ecs task definition (or empty definition file) found in environment

Having said that, I have bundled my Dockerrun.aws.json file with the .ebextensions folder inside the source bundle which Beanstalk uses for deployment. After all these errors, it boils down to two questions:

1. I cannot understand why the "No ecs task" error appears when I have packaged my Dockerrun.aws.json file containing containerDefinitions.
2. Since there is no ECS task running, there is nothing running in the instance. Is this why Beanstalk and the ELB cannot communicate with the instance? (Assuming my public subnet and all-traffic security group are not the problem.)
Elastic Beanstalk multi-container Docker fails
First create different profiles using the CLI (this works from 1.3.0; it won't work in 1.0.0, and I'm not sure which you are using since you mention both):

```
serverless config credentials --provider aws --key 1234 --secret 5678 --profile your-profile-name
```

Then in your serverless.yml file you can set the profile you want to use:

```yaml
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  profile: your-profile-name
```

If you want to automatically deploy to different profiles depending on the stage, define variables and reference them in your serverless.yml file:

```yaml
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.defaultStage}
  profile: ${self:custom.profiles.${self:provider.stage}}

custom:
  defaultStage: dev
  profiles:
    dev: your-profile-name
    prod: another-profile-name
```

Or you can reference your profile name in any other way. Read about variables in the Serverless Framework: you can get the name of the profile to use from another file, from the CLI, or from the same file (like in the example I gave). More about variables: https://serverless.com/framework/docs/providers/aws/guide/variables/
I am trying to use Serverless 1.0 with several AWS credentials (on my PC, 1.3.0 is installed). I found some descriptions saying that "admin.env" can change credentials, in Stack Overflow or GitHub issues, but I can't find how to write admin.env or where to put it. Is there any good documentation for admin.env?
How to change aws credentials in Serverless 1.0?
You should be able to create "tables": https://docs.aws.amazon.com/quicksight/latest/user/tabular.html

To create a non-aggregated view of the data, add fields only to the Value field well. This shows data without any aggregations. To create an aggregated view of the data, choose the fields you want to aggregate by, and then add them to the Group by field well.
I've been evaluating Amazon QuickSight recently, and it doesn't look like there is a way to create a report that contains a simple table of data. While I expect I'll mostly be using it to create visualizations, I also want some simple tables in my reports/dashboards. Did I miss something / is there a way to create a simple table in QuickSight? Note that the pivot table option provided in QuickSight is not really what I'm looking for (pivot tables are intended for comparing things in a matrix, not so much for just displaying data), and also that I want to be able to display a table in an analysis/dashboard, not in the data-import view you get when uploading data to QuickSight.
How can I display a simple table in a report in Amazon Quicksight? [closed]
You cannot keep a Lambda running forever. A Lambda function's lifetime is limited to 300 seconds; after 300 seconds, your function dies. Instead, you can invoke the same Lambda function on a schedule with cron expressions using CloudWatch Events (a sketch of the wiring is below). You can learn more about Lambda limits from here.
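A rough Python/boto3 sketch of wiring up such a schedule, where the region, rule name, schedule expression and Lambda ARN are all placeholders:

```python
import boto3

events = boto3.client("events", region_name="us-east-1")  # assumption: region
lmb = boto3.client("lambda", region_name="us-east-1")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:updateCommands"  # placeholder

# Fire every 5 minutes instead of looping with setInterval inside the function.
rule = events.put_rule(
    Name="update-commands-schedule",
    ScheduleExpression="rate(5 minutes)",
)

# Allow CloudWatch Events to invoke the function, then attach it as the target.
lmb.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="allow-cloudwatch-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(
    Rule="update-commands-schedule",
    Targets=[{"Id": "update-commands-lambda", "Arn": LAMBDA_ARN}],
)
```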
I was experiencing difficulties when invoking the same Lambda function continuously by using the setInterval function.

Lambda function:

```javascript
var MongoClient = require('mongodb').MongoClient,
    format = require('util').format;

function funUpdateCommand(event, context, callback) {
    var mongoUrl = 'mongodb://**.**.**.**:*****/DBname';
    // var mongoUrl = 'mongodb://127.0.0.1:27017/DBname';

    MongoClient.connect(mongoUrl, function(err, db) {
        if (err) throw err;
        var collection = db.collection('device');

        var interval = setInterval(function() {
            collection.find({
                "deviceCommand.command": "getAudio",
                "deviceCommand.timestamp": { $lte: new Date((new Date) * 1 - 60000 * 2) }
            }).toArray(function(err, results) {
                if (err) {
                    console.log(err);
                } else {
                    for (var i = 0; i < results.length; i++) {
                        collection.update(
                            { _id: results[i]._id },
                            { $set: { "deviceCommand.command": " " } },
                            function(err, results) {}
                        );
                    }
                }
            });
        }, 5000);
    });

    context.succeed("Successfully uploaded");
}

exports.handler = funUpdateCommand;
```

I am trying to update some of the documents in my MongoDB. I need to run the AWS Lambda function as a continuous background job, but when using setInterval it returns a timeout error. How can I continuously run my AWS Lambda function using setInterval?
How to use setInterval in aws lambda function
The Configuring the AWS Command Line Interface documentation page lists various places where configuration files are stored, such as:

Linux: ~/.aws/credentials
Windows: C:\Users\USERNAME\.aws\credentials

There is also a default profile, which sounds like something that might be causing your situation:

Linux: export AWS_DEFAULT_PROFILE=user2
Windows: set AWS_DEFAULT_PROFILE=user2

I suggest checking to see whether that environment variable has been set.
I seem to be having difficulty deleting the access key profile I created for a test user using aws configure --profile testuser. I have tried deleting the entries in my ~/.aws directory, however when I run aws configure I get the following error:

botocore.exceptions.ProfileNotFound: The config profile (testuser) could not be found

A workaround is adding [profile testuser] to my ~/.aws/config file, but I don't want to do that. I want to remove all traces of this testuser profile from my machine.
aws configure delete access key profile
The proper way to use !Or/!Equals inside a Conditions block with YAML is as follows:

```yaml
Conditions:
  CreateBetaResources: !Or [!Equals [!Ref "Environment", beta], !Equals [!Ref "Environment", eubeta]]
  CreateStagingResources: !Equals [!Ref "Environment", staging]
  CreateProdResources: !Or [!Equals [!Ref "Environment", prod], !Equals [!Ref "Environment", euprod]]
```

Do not include the list item indicator (-) before calling the !Equals function.
Error: "Template validation error: Template format error: Conditions can only be boolean operations on parameters and other conditions"Working JSON conditions block:"Conditions" : { "CreateBetaResources" : {"Fn::Or" : [ {"Fn::Equals" : [{"Ref" : "Environment"}, "beta"]}, {"Fn::Equals" : [{"Ref" : "Environment"}, "eubeta"]} ]}, "CreateStagingResources" : {"Fn::Equals" : [{"Ref" : "Environment"}, "staging"]}, "CreateProdResources" : { "Fn::Or": [ {"Fn::Equals" : [{"Ref" : "Environment"}, "prod"]}, {"Fn::Equals" : [{"Ref" : "Environment"}, "euprod"]} ] } },YAML block which isn't working:Conditions: CreateBetaResources: !Or [!Equals [!Ref "Environment", beta], !Equals [!Ref "Environment", eubeta]] CreateStagingResources: - !Equals [!Ref "Environment", staging] CreateProdResources: !Or [!Equals [!Ref "Environment", prod], !Equals [!Ref "Environment", euprod]]Why is this error happening? I've scoured the documentation on "Fn::Or" and conditionals... It seems as though the syntax is correct. I've also tried many, many other formats, but this is the one closest to the documentation example.
AWS CloudFormation YAML !Or function
Instead of s3-us-west-2.amazonaws.com/<my-bucket-name>, you should put <my-bucket-name>.
I am trying to upload a file from a Java class to AWS S3. I am using the exact code as given here. The only parts I changed are these:

```java
private static String bucketName = "s3-us-west-2.amazonaws.com/<my-bucket-name>";
private static String keyName = "*** Provide key ***";
private static String uploadFileName = "/home/...<localpath>.../test123";
```

I am not sure what to add in "Provide key". But even if I leave it this way, I get an error like this:

```
Error Message: The bucket is in this region: null. Please use this region to retry the request
(Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: *******)
HTTP Status Code: 301
AWS Error Code: PermanentRedirect
Error Type: Client
```
PermanentRedirect error while uploading to S3 bucket with aws-sdk-java
An ENI (Elastic Network Interface) is never detached when an instance is stopped. Every Amazon EC2 instance has a primary ENI on eth0. This ENI cannot be detached from the instance. It is also possible to create secondary ENIs and attach them to instances. These stay attached during a stop and start, but you can choose to detach one and then attach it to another instance.

I notice that your question is very similar to a sample question for the AWS Solutions Architect - Associate exam:

"Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
A. The Elastic IP will be dissociated from the instance
B. All data on instance-store devices will be lost
C. All data on EBS (Elastic Block Store) devices will be lost
D. The ENI (Elastic Network Interface) is detached
E. The underlying host for the instance is changed"

In this case, the question is referring to an Elastic IP address rather than an ENI. Elastic IP addresses remain attached to an instance during stop and start if the instance is in a VPC. Earlier-style instances launched under EC2-Classic, however, do have their Elastic IP address detached when stopped.
Does the ENI get detached when we stop and start an EC2 instance which is connected to a VPC in AWS?
Is the ENI detached from an Amazon EC2 instance when it is stopped and started?
In my experience there are two approaches you could take here.Create an AMI from an instance that has been fully provisioned by ansible. Then use this AMI in your launch configuration.The other option is to use a stock AMI and have ansible provision each new host that is launched by the autoscaling group using cloud-init.The second approach is lacking in many ways compared to the first approach in my opinion. It can take much longer to scale up when ansible needs to run every time. You also risk something going wrong during the provisioning, preventing the instance from joining the group, causing further delays. You also run the risk of there being drift between instances (depending on what you are having ansible do and if anything external changes between Autoscaling events).If you decide to create a fully provisioned AMI for your ASG you can do it manually from an instance you already have created. However if you expect to want to rebuild the image regularly you may want to look into a tool likepackerto help you create images in an automated way.
I have an Amazon EC2 instance which I configure with Ansible and it's working fine. Now I want to put it into an Auto Scaling group so that I can scale as I want. But my problem is that I don't have any launch configuration which sets up the instance; I do all that with Ansible. How can I configure Auto Scaling so that after a new instance is created it is configured by Ansible?
How to use Auto Scaling with Ansible and an already existing EC2 instance
I figured it out by myself. All I had to do was apply some SQL:

```sql
ALTER TABLE <table name> CONVERT TO CHARACTER SET utf8;
```

Sorry for bothering.
I'm using RDS on Elastic Beanstalk on AWS. As I noticed an issue with Japanese characters on RDS, I tried to change settings in the RDS parameter group like this [screenshot], and this [screenshot]. Even though I modified some settings, the configuration still didn't work. No matter how I import CSV data including Japanese characters, the table ends up like this [screenshot]. What should I do? Could you tell me how to apply UTF-8 to existing RDS (EB) tables? Thanks in advance.
How to apply utf-8 to RDS
update-deployment is used to update the metadata of an existing deployment. For example, to update the description of a deployment:

```
aws apigateway update-deployment \
    --rest-api-id <value> \
    --deployment-id <value> \
    --patch-operations 'op=replace,path=/description,value=<value>'
```

If you wanted to re-deploy an API (this is what happens when you click on "Deploy API" in the web console), you'd use the create-deployment command:

```
aws apigateway create-deployment \
    --rest-api-id <value> \
    --stage-name <value>
```
How do I update an API Gateway deployment using the CLI? I can find the update-deployment command, but I do not know what to put in for the values it needs in this skeleton (taken from the documentation):

```json
{
  "op": "add"|"remove"|"replace"|"move"|"copy"|"test",
  "path": "string",
  "value": "string",
  "from": "string"
}
```
Update apigateway deployment with cli
You can use AWS Data Pipeline. There are two basic templates, one for moving RDS tables to S3 and a second for importing data from S3 to DynamoDB. You can create your own pipeline using both templates. Regards
We have a couple of mySql tables in RDS that are huge (over 700 GB), that we'd like to migrate to a DynamoDB table. Can you suggest a strategy, or a direction to do this in a clean, parallelized way? Perhaps using EMR or the AWS Data Pipeline.
Need strategy advice for migrating large tables from RDS to DynamoDB
If you look in the API Gateway console and select the method in question, you should see a section titled Method Response on the right side. If you select that, you should see the various response codes, and you can add one or select an existing one and change the Content-Type associated with that response.
I'm hoping someone can help. I've got AWS Lambda returning some XML in context.succeed or context.fail. Everything is excellent apart from one small part: it echoes out the XML, but the header still has Content-Type: application/json, and the Twilio server I'm talking to looks at this and rejects the response even though the body is actually valid XML. Is there a way to override the header? Many thanks.
AWS API Gateway Change Content Type
Make sure the ARN configuration for your Auth and Unauth roles uses the full ARN.
I'm making an app that uploads photos to an S3 bucket using the AWS SDK with Amazon Cognito. When I run the function that does this, I get an error in the console that says the Identity Pool [the ID of my identity pool] can't be found. I've found a few solutions to this issue around the internet; however, none of them seem to work for me. Any ideas?
Amazon Cognito Identity Pool can not be found
If you're now loading a font from a different domain, most browsers will apply a Cross-Origin Resource Sharing limitation; that is to say, most browsers won't load a file from a different domain without a CORS policy. You can whitelist the font to be loaded by any domain by first having the webserver that CloudFront is serving from send the following response header:

```
Access-Control-Allow-Origin: *
```

Secondly, you need to go into your CloudFront configuration and whitelist the "Access-Control-Allow-Origin" header to be passed from your webserver to the end user.

More reading on CORS can be found here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
http://www.html5rocks.com/en/tutorials/cors/
I have a problem with my Rails app. I decided to move my assets to a CDN like CloudFront in AWS. Everything is better now; my assets are faster, but I have a problem: I'm using the font-awesome gem for some icons in the app, and since changing to CloudFront they don't load. My app is on Heroku with CloudFront for assets, and my configuration in the production env is:

```ruby
# config/environments/production.rb
config.action_controller.asset_host = "<YOUR DISTRIBUTION SUBDOMAIN>.cloudfront.net"
```

I hope for a little help with that because I can't find the answer. Regards!
CloudFront doesn't load my font-awesome rails 4
I would think you can do this with the following (a sketch of the Lambda is shown below):

- CloudWatch metric: record the CPU usage.
- CloudWatch alarm: alarm when the CPU metric goes above/below some threshold.
- SNS topic: send a notification when the CloudWatch alarm is triggered.
- Lambda function: invoked by SNS to stop/start the relevant EC2 instance.

See the Scaling ECS article, which is similar, and Invoking Lambda from SNS.
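A minimal Python/boto3 sketch of that Lambda, assuming the region and instance ID are placeholders and assuming a naming convention where one alarm fires on low CPU and another on high CPU:

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region
INSTANCE_ID = "i-0123456789abcdef0"                  # placeholder instance ID

def handler(event, context):
    # SNS delivers the CloudWatch alarm payload as a JSON string in the message body.
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    name = alarm.get("AlarmName", "")

    # Assumed naming convention: "cpu-low" stops the instance, "cpu-high" starts it.
    if "cpu-low" in name:
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    elif "cpu-high" in name:
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
```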
I have two servers (with my app on them) already running, created from an Ubuntu AMI. When using Auto Scaling, it starts new instances using an AMI. Can I use Auto Scaling to only stop an instance (not terminate it, so that I don't need an AMI when starting the server in future), and later start that instance (the old instance which I had stopped) whenever CPU goes above x%? If not Auto Scaling, I am OK with any other solution. I can stop the instance with CloudWatch, but how do I restart it?
Stop (not terminate) an EC2 Instance when CPU drops below certain level
This use case is almost the textbook example for AWS Lambda. If you look at the AWS Lambda image resize example, all you need to do is remove the code that tests for the image type and actually does the resize; it's designed to download, transform, then upload the object to a new S3 bucket (a trimmed-down sketch is below). Also, you may be able to do this even more easily (and cheaply) with S3 cross-region replication, but that requires the buckets to be in different regions (thanks @William-Gaul). So, it depends on your precise use case.
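Here is a minimal Python/boto3 sketch of such a copy-on-upload Lambda, assuming the destination bucket name is a placeholder and the function is subscribed to the client buckets' ObjectCreated events:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "my-processing-bucket"  # placeholder: your own bucket

def handler(event, context):
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Server-side copy straight into our bucket; nothing is downloaded locally.
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=f"{src_bucket}/{key}",  # prefix with the client bucket name
            CopySource={"Bucket": src_bucket, "Key": key},
        )
```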
We have multiple buckets that are used by our clients. A client uploads a single file (random filename) to their bucket, and we then visit that bucket and copy it to our own bucket for processing. Basically, this: https://stackoverflow.com/a/10418427/2868238. How could I automate this? I note Lambda has S3 object event support, so I wonder if I can use this somehow? Paul.
AWS S3/Lambda Copy object upon upload automagically?
I'm not sure what's happening with the deleting of the content, but you can try to use the graceful command to restart instead:

```
sudo apache2ctl graceful
```

This will gracefully reload its configuration! Or the reload command:

```
sudo service httpd reload
```
Much as the title says, I'm hosting a PHP application on an EC2 instance (Elastic Beanstalk) on Amazon Web Services, actually running WordPress connecting to an RDS instance. I've been needing to restart Apache for a number of reasons, mainly because I'm using the mod_pagespeed Apache module. Almost without fail when I do that, it deletes the contents of /var/www/html/, using this command:

```
sudo service httpd restart
```

I'm at a bit of a loss since I'm new to AWS, but this clearly isn't desired functionality. Is there another way I ought to go about restarting Apache? Can anyone explain why that's happening? Any advice welcomed; I feel I've got to grips well with most of the admin, but this is just a head scratcher for me!
Restarting httpd on AWS EC2 behaves erratically. Is it supposed to delete the contents of /var/www/html? [closed]
The HttpUtility.UrlEncode method encodes spaces as +, which is acceptable according to the standards. However, for some reason I don't understand, this causes problems with the signed URLs and content disposition. The other encoding of space as %20 works correctly. So after encoding, replace the + with %20. The working version is:

```csharp
var contentDisposition = HttpUtility.UrlEncode("attachment;filename=My File.txt");
contentDisposition = contentDisposition.Replace("+", "%20");
var key = "example.txt?response-content-disposition=" + contentDisposition;

return AmazonCloudFrontUrlSigner.GetCannedSignedURL(
    AmazonCloudFrontUrlSigner.Protocol.https,
    "myBucket",
    cloudFrontPrivateKey,
    key,
    cloudFrontAccessKeyId,
    expirationDateTime);
```
I have set up CloudFront signed URLs with an S3 origin correctly and am using the response-content-disposition query string parameter to specify the file download name. The signed URLs I generate using the .NET AWS SDK AmazonCloudFrontUrlSigner.GetCannedSignedURL method work correctly when the content disposition filename doesn't contain spaces. However, if the filename contains spaces, I get access denied. So, something like the code below will generate a URL that gives access denied:

```csharp
var contentDisposition = HttpUtility.UrlEncode("attachment;filename=My File.txt");
var key = "example.txt?response-content-disposition=" + contentDisposition;

return AmazonCloudFrontUrlSigner.GetCannedSignedURL(
    AmazonCloudFrontUrlSigner.Protocol.https,
    "myBucket",
    cloudFrontPrivateKey,
    key,
    cloudFrontAccessKeyId,
    expirationDateTime);
```

It clearly seems to have something to do with the URL encoding. I have read through all the information in the docs about Serving Private Content through CloudFront. I read the code of the AmazonCloudFrontUrlSigner class. I've also tried a number of combinations of UrlEncode, like not encoding, encoding only the filename portion, and even not encoding but replacing with the encoded version after the signed URL is generated. All of those either give access denied or an error that the signature doesn't match the URL.
Cloudfront Signed URLs not working with S3 content disposition filenames with spaces using .NET SDK
Unload does not delete or remove data from the original table. See the explicit truncate in theCOPY reload example
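If you want to convince yourself, here is a rough Python sketch (using psycopg2; the cluster endpoint, credentials, table, bucket and IAM role are all placeholders) that runs an UNLOAD and then checks that the source table still has all its rows:

import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="mydb", user="myuser", password="mypassword",
)
cur = conn.cursor()

# Export the table to S3 (placeholder bucket and IAM role)
cur.execute("""
    UNLOAD ('SELECT * FROM my_table')
    TO 's3://my-bucket/my_table_export_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
""")
conn.commit()

# The source table is untouched - this still returns the full row count
cur.execute("SELECT COUNT(*) FROM my_table")
print(cur.fetchone()[0])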
I want to copy table data from redshift to S3; but keep original data in redshift. I know there is UNLOAD command for that purpose. But I am not sure if it deletes/removes data from original table. Does somebody have solution?
Does UNLOAD command removes or deletes data from redshift?
Two common causes of connection failures to a new DB instance are:The DB instance was created using a security group that does not authorize connections from the device or Amazon EC2 instance where the MySQL application or utility is running. If the DB instance was created in a VPC, it must have a VPC security group that authorizes the connections. If the DB instance was created outside of a VPC, it must have a DB security group that authorizes the connections.The DB instance was created using the default port of 3306, and your company has firewall rules blocking connections to that port from devices in your company network. To fix this failure, recreate the instance with a different port.You can use SSL encryption on connections to an Amazon RDS MySQL DB instance. For information, see Using SSL with a MySQL DB Instance.I recommend to go through below document, it will help to fix your issue.Connecting to a DB Instance Running the MySQL Database Engine
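As an illustration of the first point, here is a minimal boto3 sketch (the security group id and client CIDR are placeholders) that opens the MySQL port on the VPC security group attached to the DB instance:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow inbound MySQL (3306) from the machine running MySQL Workbench
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # the DB instance's VPC security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "203.0.113.25/32",
                      "Description": "MySQL Workbench client"}],
    }],
)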
I've set up a DB Instance on AWS, and looking around all the guides I should now be able to go on MySQL Workbench and connect it succesfully, as I have a hostname, port, user ID and password.However, when I enter all the details I specified when creating the instance, I get the error:Failed to Connect to MySQL at with userthen below it says the same error with (10060) in brackets. I looked up this error but couldn't find any relevant solution.
How to connect to Amazon Web Service RDS on MySQL Workbench?
You need to either run the command usingsudoor run the command normally as a user who has privileges to write to/usr/local/lib.sudo npm install -g bower
When I use the command:npm install -g bowerI get this error:npm ERR! tar.unpack untar error /home/ec2-user/.npm/bower/1.3.12/package.tgz npm ERR! Linux 3.14.20-20.44.amzn1.x86_64 npm ERR! argv "node" "/usr/local/bin/npm" "install" "-g" "bower" "-F" npm ERR! node v0.10.34 npm ERR! npm v2.1.14 npm ERR! path /usr/local/lib/node_modules/bower npm ERR! code EACCES npm ERR! errno 3 npm ERR! Error: EACCES, mkdir '/usr/local/lib/node_modules/bower' npm ERR! { [Error: EACCES, mkdir '/usr/local/lib/node_modules/bower'] npm ERR! errno: 3, npm ERR! code: 'EACCES', npm ERR! path: '/usr/local/lib/node_modules/bower', npm ERR! fstream_type: 'Directory', npm ERR! fstream_path: '/usr/local/lib/node_modules/bower', npm ERR! fstream_class: 'DirWriter', npm ERR! fstream_stack: npm ERR! [ '/usr/local/lib/node_modules/npm/node_modules/fstream/lib/dir-writer.js:36:23', npm ERR! '/usr/local/lib/node_modules/npm/node_modules/mkdirp/index.js:46:53', npm ERR! 'Object.oncomplete (fs.js:108:15)' ] } npm ERR! npm ERR! Please try running this command again as root/Administrator. npm ERR! Please include the following file with any support request: npm ERR! /home/ec2-user/var/www/html/npm-debug.log
npm install -g bower on AWS Amazon ami getting error
For running against just the newly created servers, I use a temporary group name and do something like the following by using a second play in the same playbook:- hosts: localhost tasks: - name: run your ec2 create a server code here ... register: cass_ec2 - name: add host to inventory add_host: name={{ item.private_ip }} groups=newinstances with_items: cass_ec2.instances - hosts: newinstances tasks: - name: do some fun stuff on the new instances hereAlternatively if you have consistently tagged all your servers (and with multiple tags if you also have to differentiate between production and development; and you are also using the ec2.py as the dynamic inventory script; and you are running this against all the servers in a second playbook run), then you can easily do something like the following:- hosts: tag_Name_cassandra tasks: - name: run your cassandra specific tasks herePersonally I use a mode tag (tag_mode_production vs tag_mode_development) as well in the above and force Ansible to only run on servers of a specific type (in your case Name=cassandra) in a specific mode (development). This looks like the following:- hosts: tag_Name_cassandra:&tag_mode_developmentJust make sure you specify the tag name and value correctly - it is case sensitive...
This is probably obvious, but how do you execute an operation against a set of servers in Ansible (this is with the EC2 plugin)?I can create my instances:--- - hosts: 127.0.0.1 connection: local - name: Launch instances local_action: module: ec2 region: us-west-1 group: cassandra keypair: cassandra instance_type: t2.micro image: ami-4b6f650e count: 1 wait: yes register: cass_ec2And I can put the instances into a tag:- name: Add tag to instances local_action: ec2_tag resource={{ item.id }} region=us-west-1 state=present with_items: cass_ec2.instances args: tags: Name: cassandraNow, let's say I want to run an operation on each server:# This does not work - It runs the command on localhost - name: TEST - touch file file: path=/test.txt state=touch with_items: cass_ec2.instancesHow to run the command against the remote instances just created?
Ansible EC2 - Perform operation on set of instances
It's a problem of correctly escaping the quotes in fact.Reason is :\"inside a CloudFormation string is escaped as"(double-quote).For example,"hello \"me\""gives you :hello "me"In your line, what you really feed to bash is :echo " accessKeyId:XXXXX" >> /home/ubuntu/myfile.jsonConsidering bash use of quotes, you get the stringaccessKeyId:XXXXXinside your/home/ubuntu/myfile.jsonTo solve your problem, I would recommend using:{"Fn::Join": ["", ["echo '\"accessKeyId\":\"", {"Ref": "AccessKeyId"}, "\"' >> /home/ubuntu/myfile.json"] ] },which is escaped asecho '"accessKeyId":"XXXXX"' >> /home/ubuntu/myfile.json(hard to read : the whole string used by echo is inside single-quotes).I'm not able to try it now, but it should do the trick.
After much research and frustration, I'm not quite getting the output I'm hoping for.The desired output into a file would be for example"accessKeyId":"UIIUHO]SOMEKEY[SHPIUIUHIU"But what I'm getting isaccessKeyId:UIIUHO]SOMEKEY[SHPIUIUHIUBelow is the line in an AWS Cloudformation template{"Fn::Join": ["", ["echo \" accessKeyId:", {"Ref": "AccessKeyId"}, "\" >> /home/ubuntu/myfile.json"] ] },I've tried adding \" with in the echo statement but no quotes are output. Can someone show how to produce the desired output above?
AWS Cloudformation output double quotes in a file using Fn::Join
No, there is no method for identifying matching AMIs across different regions.Some background...AnAmazon Machine Image (AMI)contains a disk image that is used to boot an Amazon EC2 instance. It is created by using theCreate Imagecommand on an existing Amazon EC2 instance, and can also be created from an EBS Snapshot (Linux only).Each AMI receives a unique IDin the formami-1234abcd.AMIs exist in only one AWS region.When an AMI is copied between regions, it receives a new AMI ID. This means that the "same" image will appear with a different AMI ID in each region.When an AMI is copied between regions, it will retain its name and description, but this is not necessarily unique, so it cannot be used to definitively match AMIs between regions.AMI creators will often supply a list of matching AMI IDs across regions, for example:Amazon Linux AMI IDs
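If the copies were made with Copy AMI (which preserves the name), you can at least search each region for an image with the same name - a heuristic only, since names are not guaranteed unique. A small boto3 sketch, with the image name, owner and region list as assumptions:

import boto3

AMI_NAME = "my-golden-image-v1"   # hypothetical image name
REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    images = ec2.describe_images(
        Owners=["self"],                                   # or the publisher's account id
        Filters=[{"Name": "name", "Values": [AMI_NAME]}],
    )["Images"]
    for image in images:
        print(region, image["ImageId"], image["Name"])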
As the title says: while I create a CFN template, I always have to check the EC2 console for the AMI id in each region. These ids belong to the same spec image (ex: all of them are "ubuntu 12.04 64 bit"), but in various regions. Is there any quick method to look this up? Thanks a lot!
How to find the same spec AMI in different regions?
You can use a site like CloudWatch:http://www.cloudwatch.in/. It measures EC2, SimpleDB, SQS and SNS services latency from your browser.Edit:CloudWatch doesn't contain the new eu-central-1 location, however I submitted a PR to add it. I will update this answer when it gets accepted.Edit 2:PR has been accepted, the list on the site now contains Frankfurt.
How can I check which Amazon region has the lowest latency from my present location? Is there some script that pings servers on all of them several times and calculates the average latency?Preferably a solution that also compares latency to the newly launched Frankfurt (eu-central-1) region.
How can I know which AWS region has the lowest latency from my location?
Thanks for your sample code, Alex Chan. I needed this functionality too, so I decided to write more completeS3FileUploadFieldandS3ImageUploadFieldclasses, based on your code and various other snippets.You can find my code at:https://github.com/Jaza/flask-admin-s3-uploadAlso up on pypi, so you can install with:pip install flask-admin-s3-uploadI've documentated a basic usage example in the readme (can see it on the github project page). Hope this helps, for anyone else who needs S3 file uploads in flask-admin.
I am using Flask-Admin and am very happy with it. However, the sample in Flask-Admin only shows how to upload images to a static folder. Is it possible to upload them to S3 directly with Flask-Admin? Thanks.Regards Alex
Upload Image to Amazon S3 with Flask-admin
UpdateAWS has finally launchedNew Event Notifications for Amazon S3today, which indeed simply extend the long availablePUT Bucket notificationAPI with additionalevent typesfor object creation via theS3 APIs such as PUT, POST, and COPY:s3:ObjectCreated:*s3:ObjectCreated:Puts3:ObjectCreated:Posts3:ObjectCreated:Copys3:ObjectCreated:CompleteMultipartUploadInitial Answer[...] is there any way to send the notification directly to the SWF, without having a service consuming them and starting the workflow?Unfortunately there is no such way, you indeed need a mediating service - while thePUT Bucket notificationhas obviously been designed to allow for other types of events too,Amazon S3doesn't supportAmazon SNSnotifications for anything butEnabling RRS Lost Object Notificationsas of today:This implementation of the PUT operation uses thenotificationsubresource to enable notifications of specified events for a bucket.Currently, thes3:ReducedRedundancyLostObjectevent is the only event supported for notifications. Thes3:ReducedRedundancyLostObjectevent is triggered when Amazon S3 detects that it has lost all replicas of an object and can no longer service requests for that object.[emphasis mine]
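With the new event types, wiring the bucket to an SNS topic looks roughly like this in boto3 (bucket name and topic ARN are placeholders, and the topic's policy must allow S3 to publish to it); you would still need a small subscriber - e.g. an SQS consumer or a Lambda function - to call start_workflow_execution on SWF:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",                                         # placeholder
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:uploads",  # placeholder
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)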
I have a workflow which takes a file in an S3 bucket and does a lot of processing and further requests based on the file contents. Currently, clients have to trigger the workflow manually after uploading the file. This seems to be a pretty common use case for me, so is there any way to trigger the workflow as soon as the file is uploaded?I imagine there should be an SNS notification in between, but is there any way to send the notification directly to the SWF, without having a service consuming them and starting the workflow?
Triggering SWF workflow after uploading to S3
There is not a way to update the backend authentication via the management console, but you can use the command line interface to do this.The process is documented athttp://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-backend-auth.html, in the "Using the Command Line Interface" section.The link above has the necessary details, but conceptually the steps are:Obtain your backend server's public key.Useelb-create-lb-policyto create a newPublicKeyPolicyTypepolicy for that public key.Useelb-create-lb-policyto create a newBackendServerAuthenticationPolicyTypebased on the new public key policy you just created. (You can also include previously existing public key policies if your load balancer has instances that still have old certificates; simply add more--attributearguments for those public key policies. You can see existing policies for the load balancer with theelb-describe-lb-policiescommand.)Useelb-set-lb-policyto tell your load balancer to use your new backend server authentication policy for the desired port.
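For reference, roughly the same sequence using boto3 against a classic ELB (the load balancer name, policy names and key file are placeholders; depending on the key format you may need to strip the BEGIN/END lines from the PEM before passing it):

import boto3

elb = boto3.client("elb")                     # classic load balancers
LB_NAME = "my-load-balancer"                  # placeholder

public_key = open("backend-public-key.pem").read()

# New public key policy for the backend certificate
elb.create_load_balancer_policy(
    LoadBalancerName=LB_NAME,
    PolicyName="my-public-key-policy-2",
    PolicyTypeName="PublicKeyPolicyType",
    PolicyAttributes=[{"AttributeName": "PublicKey", "AttributeValue": public_key}],
)

# New backend authentication policy referencing the public key policy
elb.create_load_balancer_policy(
    LoadBalancerName=LB_NAME,
    PolicyName="my-backend-auth-policy-2",
    PolicyTypeName="BackendServerAuthenticationPolicyType",
    PolicyAttributes=[{"AttributeName": "PublicKeyPolicyName",
                       "AttributeValue": "my-public-key-policy-2"}],
)

# Attach it to the backend port
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName=LB_NAME,
    InstancePort=443,
    PolicyNames=["my-backend-auth-policy-2"],
)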
How do I update the certificate used for backend authentication in my AWS elastic load balancer?I can't find anything in the AWS console or docs that explains how to do it.
How do I update the certificate used for backend authentication in my AWS elastic load balancer?
It's usually deployed under /srv/www/#{application_shortname}, where application_shortname is the app's short name in OpsWorks.
Let's say we have a laravel app that is deployed from a github repository.What I can't seem to find any documentation for is where the applications source code is deployed to on the filesystem?We are using PHP5.5 features in our code so our PHP App Server is a custom layer based off an AMI.I need to setup an apache vhost that points to the application, but I can't do this if I don't know where it get's deployed to!
Where does aws opsworks deploy to by default?
I think theaws-cloudfront-signpackage is deprecated now. You can use this packagehttps://www.npmjs.com/package/aws-sdkHere is the link to know how you can use this:-https://medium.com/roam-and-wander/using-cloudfront-signed-urls-to-serve-private-s3-content-e7c63ee271db
In my app i need to create the signed url for cloudfront. I am using the javascript sdk for browser. I dont want to use node.js.I am not getting how to create the signed url. I didn't find any sample code for javascript in amazon website. I included this js file:<script src="https://sdk.amazonaws.com/js/aws-sdk-2.0.0-rc1.min.js"></script>Other than this do i need to include any js file?I am having all the parameters like, Key-id, cloudfront domain then pem file everything. IBut i dont know how to implement it. Can anyone help me by showing some samples.I searched a lot but i am not getting it. I got the sample code for creating signed url using node.js, but i dont want to use node.js.
Creating signed url for amazon cloudfront using javascript
You need to add it in the modelclass Designs < ActiveRecord::Base has_attached_file :photo, :s3_protocol => :httpsRef:Is it possible to configure Paperclip to produce HTTPS urls?
My rails 4 app is using an Amazon s3 bucket to store images. The configuration is pretty default with my production.rb file looking like thisconfig.paperclip_defaults = { :storage => :s3, :s3_credentials => { :bucket => ENV['S3_BUCKET_NAME'], :access_key_id => ENV['AWS_ACCESS_KEY_ID'], :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'] } }When the page loads an image, it loads it like this:http://s3.amazonaws.com/themoderntrunk/designs/photos/000/000/052/large_thumbnail/product12.jpeg?1389721666I wish for it to load with the prefix https:https://s3.amazonaws.com/themoderntrunk/designs/photos/000/000/052/large_thumbnail/product12.jpeg?1389721666without the SSL, my app is getting the warning in the consolehe page at 'https://www.themoderntrunk.com/assortments/4/designs/52-product-12' was loaded over HTTPS, but displayed insecure content from 'http://s3.amazonaws.com/themoderntrunk/designs/photos/000/000/049/grid/product9.jpg?1389721643': this content should also be loaded over HTTPS.Granted, in my production.rb file I haveconfig.forse_ssl = true. My app also has SSL certificate.
Rails s3 bucket SSL
In my case, using m1.small, I followedthis docto create a custom AMI.I think the reason it kept booting forever is the opsworks-agent files are still there. See step 4 underTo create a custom AMI from an AWS OpsWorks instance, you'd need to stop agent and delete it's files.The complete cycle improved from ~25 minutes down to ~10 minutes. By the ~11th minute, it's on LB health checks stage.Hope that helps.
I like AWS OpsWorks but one big drawback I am facing now is boot time: nodes are booting super slowly.In my case, for a t1.micro instance, it takes like 10 minutes before my cookbook can start running (although from EC2 console view, the instance should be ready after about 2 minutes: it can be accessed via SSH after this short period). You can also refer tothis topic.I tried using custom AMI but ran into another problem: the node kept booting forever. But that might be my fault while creating the AMI.Back to the original question, how can I improve this boot time of OpsWorks nodes?
How to improve OpsWorks node's boot time?
Like @chris said, there is no way to change the key associated with the instance. You will need to launch a new one with the new key assigned to it.BUT If SSH access is what you need, don't bother trying to change or update the key associated with the instance. It has been a while since I stopped assigning keys to instances over allowing OpsWorks manage user access in the permissions section.This gives you great flexibility because you don't need to share keys amongst users or start new instances every time you need to change or set a new SSH key. You can add or remove users any time, and you control who has SSH and/or Sudo access.To start either grant one of your users access to OpsWorks or import IAM users:After access has been granted, ask the user to add their own public key in the "My Settings" > Edit page:If you gave this user access to SSH all you have to do is wait for the recipes to finish running and the user will be able to connect to the instance like this:$ ssh -i ~/.ssh/[your-key-file] [user-name]@[instance-ip-address]Note that any time you make changes to the permissions or settings section, recipes will be run in your instances updating the user access.For more information on how to grant user permissions in OpsWorks see theAWS Documentation
I have an Amazon OpsWorks stack with EC2 instances. Now it has Default SSH key (.pem) which I have no access to. What I've tried:I've created a new one, saved it and didchmod 600.Tried to change KeyPair for an instance and tried tossh -v -i path/to/.pem ubuntu@hostafter restarting it:permission denied (public key)Tried to change KeyPair for the whole stack at theStack Settingspage after restarting the whole stack: still gettingpermission denied (public key)Tried to changeubuntutoec2-user. Still nothing!Noticed that keys changed atOpsWorks Homebut remained the same atEC2 Management Console. Strange.Am I missing anything? Doing wrong? Any help appreciated. Thanks
Change KeyPair for the whole Amazon OpsWorks stack
AWS have made it possible tocontrol access to payments and usage using IAM.When logged in as the root account, go toAccount Settingsin the Billing and Cost Management area, scroll down to "IAM User Access to Billing Information", click "Edit", and enable the option.With that done, the following policy will permit access to the payment and usage activity view:{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1423852703000", "Effect": "Allow", "Action": [ "aws-portal:ModifyBilling", "aws-portal:ModifyPaymentMethods", "aws-portal:ViewBilling", "aws-portal:ViewPaymentMethods", "aws-portal:ViewUsage" ], "Resource": [ "*" ] } ] }A reference to the available permissions can be foundhere
I am looking for a policy to let accountant to manage payment methods and observe usage activity only. Would it be possible to construct such policy?Thanks
AWS IAM policy for payment and usage activity view only (for accountant staff)
Okay, we were able to figure this out on our own. The problem with my example above is that I'm using a list instead of a set. The value of a multi-value attribute MUST be a set.For example, this works:Item(Table('test'), data={'id': '123', 'content': 'test', 'list': set([1,2,3,4])}).save()
After scouring the documentation and various tutorials, I cannot figure out how to set or update an attribute on a dynamo Item that is a multi-valued data type (number or string set). I'm using boto (boto.dynamodb2, to be specific -- not boto.dynamodb).Trying something like this (where 'id' is the hash key):Item(Table('test'), data={'id': '123', 'content': 'test', 'list': [1,2,3,4]}).save()Results in this error:TypeError: Unsupported type "<type 'list'>" for value "[1, 2, 3, 4]"I feel like this must be possible in boto.dynamodb2, but it's odd that I can't find any examples of people doing this. (Everyone is just setting number or string attributes, not number set or string set attributes.)Any insight on this topic and how I might get this to work with boto would be very much appreciated! I'm guessing I'm overlooking something simple. Thanks!
Multi-valued data in DynamoDB using boto
In my php script that PUTs files to S3 using the AWS SDK for PHP, I had to add in the meta data, as shown below, which did the trick:$response = $s3->create_object('bucketname', 'mountpoint/'.$filename, array( 'body' => $json_data, 'contentType' => 'application/json', 'acl' => AmazonS3::ACL_PUBLIC, 'meta' => array( 'mode' => '33188', // x-amz-meta-mode ) ));The mode "33188" defined the permissions "rw-r--r--" instead of "---------" in S3 bucket (but reflected only in the EC2 mounted folder), which was later inherited by the EC2 mounted drive.Hope this helps someone. Let me know!
Using S3FS and FUSE to mount a S3 bucket to an AWS EC2 instance, I encountered a problem whereby my S3 files are being updated, but the new files doesn't adopt the proper permission.The ACL rights that the new files had were "---------" instead of "rw-r--r--". I've ensured that the bucket is mounted properly by:sudo /usr/bin/s3fs -o allow_other -o default_acl="public-read" [bucketname] [mountpoint]and creating an automount in /etc/fstab:s3fs#[bucketname] [mountpoint] fuse defaults,noatime,allow_other,uid=1000,gid=1000,use_cache=/tmp,default_acl=public-read 0 0and password file in /etc/passwd-s3fs with the right permissions.My setup is Ubuntu 13.04, PHP5, AWS SDK.After 2 days of experimenting, I've found a solution (for php) in the provided answer below.
Mount S3 (s3fs) on EC2 with dynamic files - Persistent Public Permission
This should work for you:REGION=`curl -s http://169.254.169.254/latest/dynamic/instance-identity/document|grep region|awk -F\" '{print $4}'` echo $REGIONIt's from this thread:Find region from within an EC2 instance
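The same thing without grep/awk, parsing the identity document as JSON (Python 3 standard library only; this only works from on the instance itself, and assumes IMDSv1 is enabled - with IMDSv2-only instances you would first need to fetch a token):

import json
import urllib.request

doc = urllib.request.urlopen(
    "http://169.254.169.254/latest/dynamic/instance-identity/document",
    timeout=2,
).read()
print(json.loads(doc)["region"])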
Given these parameters:An Ubuntu instance running anywhere(any region and any availability zone in that region)Only the AWS PHP SDK2 installed on the instance (no other EC2 Command line tools, etc.)CURL and WGET availableWhat is the most elegant way for an instance script to explicitly determine what Region it is running in?* **Instance ID:** wget -q -O - http://169.254.169.254/latest/meta-data/instance-id * **Availability Zone:** wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zoneThemeta-data does notcurrently provide this information.We are aware that currently, the trailing character can be stripped off the availability zone in order to naively determine the Region, however, there isno guaranteethat Amazon will not change that in the future.The ultimate goal is that our cron jobs and other custom services can figure out who and where they are, so that they can interact with other services and instances appropriately.
Find My Instance Region in AWS
I have fixed this issue:You have to make sure that you already have created a bucket with the same name; in this case, the name of the bucket would be 'myBucket'.s3.createBucket({Bucket: 'myBucket'}, function() { var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};Once you created the bucket, go to properties and see what region it is using - add this into:aws.config.update({ accessKeyId: 'KEY', secretAccessKey: 'SECRET', region: 'eu-west-1' })Now it should work! Best wishes
Good afternoon,I'm trying to set up a connection to my aws product api, however I keep getting a 301 Permanent Redirect Error as follows:{ [PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.] message: 'The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.', code: 'PermanentRedirect', name: 'PermanentRedirect', statusCode: 301, retryable: false }The code I am using to connect to the API is as follows:var aws = require('aws-sdk'); //Setting up the AWS API aws.config.update({ accessKeyId: 'KEY', secretAccessKey: 'SECRET', region: 'eu-west-1' }) var s3 = new aws.S3(); s3.createBucket({Bucket: 'myBucket'}, function() { var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'}; s3.putObject(params, function(err, data) { if (err) console.log(err) else console.log("Successfully uploaded data to myBucket/myKey"); }); });If I try using different regions, like us-west-1 I just get the same error.What am I doing wrong?Thank you very much in advance!
Node.js 301 Permanent Redirect when connecting to AWS-SDK
Here's some working code I use to list all my instances across potentially multiple regions. Its doing a lot more than you need, but maybe you can pare it down to what you want.#!/usr/bin/python import boto import boto.ec2 import sys class ansi_color: red = '\033[31m' green = '\033[32m' reset = '\033[0m' grey = '\033[1;30m' def name(i): if 'Name' in i.tags: n = i.tags['Name'] else: n = '???' n = n.ljust(16)[:16] if i.state == 'running': n = ansi_color.green + n + ansi_color.reset else: n = ansi_color.red + n + ansi_color.reset return n def pub_dns( i ): return i.public_dns_name.rjust(43) def pri_dns( i ): return i.private_dns_name.rjust(43) def print_instance( i ): print ' ' + name(i) + '| ' + pub_dns(i) + ' ' + pri_dns(i) regions = sys.argv[1:] if len(regions)==0: regions=['us-east-1'] if len(regions)==1 and regions[0]=="all": rr = boto.ec2.regions() else: rr = [ boto.ec2.get_region(x) for x in regions ] for reg in rr: print "========" print reg.name print "========" conn = reg.connect() reservations = conn.get_all_instances() for r in reservations: # print ansi_color.grey + str(r) + ansi_color.reset for i in r.instances: print_instance(i)
I'm having some issues with the EC2 bit of Boto (Boto v2.8.0, Python v2.6.7).The first command returns a list of S3 Buckets - all good! The second command to get a list of EC2 instances blows up with a 403 with "Query-string authentication requires the Signature, Expires and AWSAccessKeyId parameters"s3_conn = S3Connection(AWSAccessKeyId, AWSSecretKey) print s3_conn.get_all_buckets() ec2_conn = EC2Connection(AWSAccessKeyId, AWSSecretKey) print ec2_conn.get_all_instances()Also, my credentials are all good (Full admin) - I tested them using the Ruby aws-sdk, both EC2 and S3 work fine.I also noticed that thehostattribute in the ec2_conn object iss3-eu-west-1.amazonaws.com, "s3"...? Surely thats wrong? I've tried retro fixing it to the correct endpoint but no luck.Any help would be great appreciate Thanks
How do I get Boto to return EC2 instances - S3 works fine
S3FS uses special 'hidden' zero byte files to represent directories, because S3 doesn't really support directories. If you try a mkdir on your mounted s3fs bucket then use the AWS file browser you should see this in action. If your S3 bucket contains a directory structure that was not created by S3FS then S3FS won't recognise that structure. S3FS only works well with buckets that are only ever manipulated using S3FS.After trying to use s3fs for a project I was working on I concluded that it was better to be exposed directly to the limitaions of S3, rather than using something which attempts to hide those limitations.
I've been trying to set up Amazon S3 as a backup service for my files. I'd like to use the service by mounting it as a drive on my ubuntu install, and thes3fs projectis supposed to make that possible. But I'm having some trouble with it. I believe I have successfully installed fuse and s3fs. When I mount a drive, I get no errors; however, when I then enter the directory and issue the 'ls' command, nothing happens. If I create a test file with a command like:touch file.testthe file shows up in the aws console. But I am unable to either see the files that are already present in the bucket or navigate into subdirectories using the 'cd' command. I've done a bit of sniffing around in the projects google forum, and I think I have discovered that s3fs cannot see any contents of an s3 bucketunlessthose files or directories have been created using s3fs. Has anyone else encountered this problem? Is this really the way this project has been designed? Is this a bug? Or is there a way around this problem?
When using s3fs, 'ls' command shows nothing
You can tryuse Guzzle\Http\EntityBody; $s3Client->getObject(array( 'Bucket' => $s3Bucket, 'Key' => $s3Path, 'command.response_body' => EntityBody::factory(fopen($saveFile, 'w+')) ));Also$result = $s3Client->getObject(array( 'Bucket' => $s3Bucket, 'Key' => $s3Path )); file_put_contents ($saveFile, (string) $result['Body']);
I'm relatively new to using AWS and am stuck on what I believe should be a basic task. I'm using the PHP SDK version 2 to retrieve files from one of my buckets to a temp directory on my server. According to thedocumentationI can usegetObjectto do this. Using the following code snippets I am able to retrieve the file but am having trouble saving the actual contents to the temp directory.#1$result = $s3->getObject(array( "Bucket" => $s3Bucket, "Key" => $s3Path, "ResponseContentType" => "image/jpeg", "SaveAs" => EntityBody::factory(fopen($saveFile, "wb")) ));#2$result = $s3->getObject(array( "Bucket" => $s3Bucket, "Key" => $s3Path, "ResponseContentType" => "image/jpeg", "SaveAs" => fopen($saveFile, "wb") ));Both of these request are successful in the sense that they return the object but I am still getting a tmp file of 0 bytes. Any insight into this is greatly appriciated.Thanks!
AWS S3 - Unable to save file using getObject() with PHP
Take a look atGoogle App Engine. One big advantage it provides is unlimited scaling and geo-diversity right out of the box. So you deploy your app and it magically grows and shrinks to accomodate database demand. Re: the database itself, you have two options:Google Cloud SQL, which is a MySQL database engine in the cloud, or theApp Engine Datastore, which is a scalable NoSQL database built into App Engine.If you require MySQL, e.g. because you've got existing code you need to port, then Cloud SQL would be your best bet, but if you have the flexibility to use a NoSQL database the Datastore is extremely simple and very powerful (like App Engine, it automatically scales from day one). You can also use a new feature calledGoogle Cloud Endpoints, to provide a scalable API from your iOS clients to your App Engine app.Another nice feature of the Google Cloud: if you need to do analysis on your data, you can use theApp Engine MapReduce API,Google BigQuery, orFusion Tables.
I'm trying to setup a database for an iOS app I'm building to be used amongst a lot of users. I've been looking into the services that Google Cloud & AWS offers, and I'm having difficulty figuring out both what services I would need to use exactly, and how much each would cost.As of now, I just want somewhere to host the data that will eventually be in MySQL databases that I'm going to use for the app.A quick 101 on how this stuff is supposed to work would be great! Be as explanatory as possible, because I'm totally new to all of this DB stuff.
Amazon Web Services/Google Cloud for iOS application [closed]
Generally, on a Hadoop cluster you can kill a particular task by issuing:hadoop job -kill-task [attempt_id]This will kill the given map task and re-submits it on an different node with a new id.To get theattemp_idnavigate on theJobtracker'sweb UIto the map task in question, click on it and note it's id (e.g: attempt_201210111830_0012_m_000000_0)
I have a job running using Hadoop 0.20 on 32 spot instances. It has been running for 9 hours with no errors. It has processed 3800 tasks during that time, but I have noticed that just two tasks appear to be stuck and have been running alone for a couple of hours (apparently responding because they don't time out). The tasks don't typically take more than 15 minutes. I don't want to lose all the work that's already been done, because it costs me a lot of money. I would really just like to kill those two tasks and have Hadoop either reassign them or just count them as failed. Until they stop, I cannot get the reduce results from the other 3798 maps!But I can't figure out how to do that. I have considered trying to figure out which instances are running the tasks and then terminate those instances, butI don't know how to figure out which instances are the culpritsI am afraid it will have unintended effects.How do I just kill individual map tasks?
How do I kill running map tasks on Amazon EMR?
The process will shut down when you logout if it's running in the foreground or if it tries to write to stdout and the terminal it's outputting to no longer exists. Try starting the server withnohup python startTornado.py &The nohup command redirects output to a file, and the & at the end runs the command in the background. Alternatively, you can use the screen utility which allows you to detach a terminal and reattach it in a different ssh session (see the screen man page for details).
I'm using SSH to remotely launch Tornado on Amazon Web Service. It works fine when I launch it by:python startTornado.pyHowever, after my SSH session times out or terminated, the Tornado server is also stopped immediately, so I can't access the webpage anymore. I did quite some search but couldn't find an answer on Google.How can I keep Tornado and the site running after my SSH session terminated?
Tornado stopped running on AWS immediately after I terminate my remote session
You can configure the framework bundle to do this:http://symfony.com/doc/2.0/reference/configuration/framework.html#trust-proxy-headersframework: trust_proxy_headers: true
I'm running a Symfony2 web application on AWS, and am using an Elastic Load Balancer.In a controller method, I need to do the following to get the IP of a user requesting a web page:$request->trustProxyData(); $clientIp = $request->getClientIp(True);Does this present any security risks? I'm not using the client IP for privilege escalation, I'm just logging it.Is there some way to forcetrustProxyData()always, or otherwise reconfigure$request->getClientIp()toDWIM? My app will always be behind a load balancer (except while I do development on my desktop).Related:http://fabien.potencier.org/article/51/create-your-own-framework-on-top-of-the-symfony2-components-part-2(but it doesn't say if there's some global config so I don't have to calltrustProxyData()everywhere).
symfony2 behind Amazon ELB: always trust proxy data?
Amazon provides a way to do this directly using the S3 api.You can use theprefixoption when calling listing S3 objects to only return objects that begin with the prefix. eg using the AWS SDK for PHP:// Instantiate the class $s3 = new AmazonS3(); $response = $s3->list_objects('my-bucket', array( 'prefix' => '000001-' )); // Success? var_dump($response->isOK()); var_dump(count($response->body->Contents))You might also find thedelimiteroption useful - you could use that to get a list of all the unique 6 digit hashes.
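The same idea in Python with boto3, in case that's easier to adapt (the bucket name is a placeholder; for more than 1000 keys use the list_objects_v2 paginator):

import boto3

s3 = boto3.client("s3")

# Every object whose key starts with "000001-"
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="000001-")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# The unique 6-digit hashes, using "-" as the delimiter
resp = s3.list_objects_v2(Bucket="my-bucket", Delimiter="-")
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])          # e.g. "000001-"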
Assume 200,000 images in a flat Amazon S3 bucket.The bucket looks something like this:000000-1.jpg 000000-2.jpg 000000-3.jpg 000000-4.jpg 000001-1.jpg 000001-2.jpg 000002-1.jpg ... ZZZZZZ-9.jpg ZZZZZZ-10.jpg(a 6 digit hash followed by a count, followed by the extension)If I need all files matching000001-*.jpg, what's the most efficient way to get that?In PHP I'd userglob($path,'{000001-*.jpg}',GLOB_BRACE)to get an array of matches, but I don't think that works remotely.I can get a list of all files in the bucket, then find matches in the array, but that seems like an expensive request.What do you recommend?
How can I efficiently get a list of matching Amazon S3 files?
UpdateAs alreadypointed outby frisky (+1), AWS has meanwhile releasedDynamoDB Local for Desktop Development- please seeDynamoDB Localfor details, in particular sectionDifferences Between DynamoDB Local and DynamoDB.As of recently, this initial offering is also fully integrated in theAWS Toolkit for Eclipseand theAWS Toolkit for Visual Studio, see the following introductory blog posts:DynamoDB Local Test Tool Integration for EclipseAmazon DynamoDB Local Integration with AWS Toolkit for Visual StudioOriginal AnswerYou neither need nor can install anything local - please see the first paragraph of theAmazon DynamoDBproduct page for details, e.g:Amazon DynamoDB is afully managed NoSQL database servicethat provides fast and predictable performance with seamless scalability. [...] customers can launch a new Amazon DynamoDB database table, scale up or down their request capacity for the table without downtime or performance degradation [...]. Amazon DynamoDB enables customers tooffload the administrative burdens of operating and scaling distributed databasesto AWS, so they don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.[emphasis mine]Please note that you will likely install one of the AWS SDKs (e.g.the AWS SDK for Javaor theAWS SDK for .NET) on your local development system though, if you are planning to work with DynamoDB, they offer various otherDeveloper Toolsas well.
Since I'm not familiar with cloud services yet, I must ask.If I'll use DynamoDB from AWS, would it needed to be installed on local? Or is everything handled on the server-side?
Can Amazon DynamoDB be used without local database?
Yeah, the config file format and the service constructors changed slightly in version 1.5. They mentioned this as a backwards-incompatible change in the release notes.http://aws.amazon.com/releasenotes/PHP/3719565440874916
I am trying to use the latest SDK for PHP (v. 1.5.0). I am trying to send an email with AmazonSES. I have successully sent emails with the python scripts, so I know that my crendentials and other settings are okay.I have copied the sample code however, it does not work. When calling AmazonSES, I get an error saying:Catchable fatal error: Argument 1 passed to AmazonSES::__construct() must be an array, string given, called in sendemail.php on line 31 and defined in sdk-1.5.0/services/ses.class.php on line 67This is the code:$AWS_KEY = "AKIEDIEDEIMIAXEOA"; $AWS_SECRET_KEY = "Te+EDEwjndjndededededededj"; require_once("../library/lib_aws/sdk-1.5.0/sdk.class.php"); $amazonSes = new AmazonSES($AWS_KEY, $AWS_SECRET_KEY); $response = $amazonSes->send_email( "[email protected]", array("ToAddresses" => "[email protected]"), array( "Subject.Data" => "test", "Body.Text.Data" => "body test", ) ); if (!$response->isOK()) { echo "error"; }I cannot find how to set up the credentials correctly to send an email.
How to set the credentials to send an email with AmazonSES using the PHP SDK from aws
I can't speak to IMDB's specific implementation, but I have implemented a similar solution on Amazon EC2 and S3 in the past. Here is an overview of my implementation:All master (full-sized) images stored on S3, but NOT publicly accessible.All image src urls point to an EC2 web server.Smaller (thumbnail) versions of images also stored on S3 with a naming convention that identifies their size AND aspect ration:"myimage1_s200.jpg" is a square version of "myimage1.jpg" that has been resized as a 200x200 square."myimage1_h100.jpg" has been resized to a maximum height of 100 (with a variable width that conforms to the original aspect ratio).When the EC2 server receives a request for a specific image size: it checks to see if that size already exists, and if so it returns the existing image to the requester.When the EC2 server receives a request for an image size that DOES NOT exist: it retrieves a copy of the next larger size version of the same image and resizes it and returns the new image to the requester, AND ALSO saves a copy to S3 for future use.Performance Notes:Pointing image src's directly to previously resized images on S3 is a lot faster if you know they exist!Resizing the next larger version of the image vs always going back to the original is a LOT faster under load!
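Not IMDb's actual code, obviously, but a minimal Python sketch of steps 4-6 using boto3 and Pillow (the bucket name, key naming convention and JPEG output are assumptions):

import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"   # placeholder

def get_resized(key: str, height: int) -> bytes:
    """Return a cached resized copy, creating it from the master image if needed."""
    resized_key = f"{key.rsplit('.', 1)[0]}_h{height}.jpg"
    try:
        return s3.get_object(Bucket=BUCKET, Key=resized_key)["Body"].read()
    except s3.exceptions.NoSuchKey:
        pass  # not cached yet - build it from the master image

    master = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(master))
    ratio = height / float(img.height)
    img = img.resize((int(img.width * ratio), height))

    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG")
    data = buf.getvalue()

    # Save the new size back to S3 for future requests, then return it
    s3.put_object(Bucket=BUCKET, Key=resized_key, Body=data, ContentType="image/jpeg")
    return data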
I've been researching CDNs and image thumbnail generation and I was impressed with how IMDb does its image manipulation. Here's an example of a thumbnail version:http://ia.media-imdb.com/images/M/MV5BMTc0MzU5ODQ5OF5BMl5BanBnXkFtZTYwODIwODk1._V1._SY98_CR1,0,67,98_.jpgAnd here's a tweaked version that plays with size and cropping:http://ia.media-imdb.com/images/M/MV5BMTc0MzU5ODQ5OF5BMl5BanBnXkFtZTYwODIwODk1._V1._SY400_CR10,40,213,314_.jpgIt seems pretty straight forward where everything from '.V1._...' on is used to determine how to manipulate the image. This is all done impressibly fast and I decided to find an existing solution that mimics this functionality.I was able to find plenty of solutions in image re-sizing and I did find the Google App Engine's page on Transforming Images in Java. However, I don't think Amazon's IMDb is using Google to serve its images and since all my images are on Amazon's S3, I don't think I can use that solution.After four hours of online searching, I decided to ask the intelligent crowd here.Further context: I am building a web application on Amazon's Elastic Beanstalk and I'm thinking of having a separate server (perhaps another Beanstalk) to handle the images...similar to what IMDb is doing.Thanks in advance for your insight.
How does IMDb do its on-the-fly image resize and cropping?
Yes, it does. I typed in "use amazon web services to store files" to Google andthe first link in the resultsled to theweb page you are looking for.
I want to use Amazon web services to store files, and I am new to Amazon web services. I want to know whether Amazon web services provides any API (e.g. web services) for us to upload, download, list files? Appreciate if anyone could provide some documents for a newbie.Another question is, if I upload video files to Amazon web services, does it provide video streaming capability?thanks in advance, George
integration with amazon web services [closed]
The AMI represents the launchable machine configuration - it does NOT actually contain any of the machine's data, just references to it. An AMI can get its disk image either from S3 or (in your case) an EBS snapshot.The EBS Volume is associated with arunninginstance. It's basically a read-write disk image. When you terminate the instance, the volume will automatically be destroyed (this may take a few minutes, note).The snapshot is a frozen image of the EBS volume at the point in time when you created the AMI. Snapshots can be associated with AMIs, but not all snapshots are part of an AMI - you can create them manually too.More information on EBS-backed AMIs can be found inthe user's guide.It is important to have a good grasp on these concepts, so I would recommend giving the entire users guide a good read-over before going any further.If you want to delete all data associated with an AMI, you will have to use theDescribeImageAttributeAPI call on the AMI's blockDeviceMapping attribute to find the snapshot ID; then delete the AMI and snapshot, in that order.
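To do that cleanup with boto3 (a newer API than the DescribeImageAttribute call mentioned above - describe_images also returns the block device mappings), something like this sketch works, with the AMI id as a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ami_id = "ami-1234abcd"   # the image you want to remove entirely

# Find the snapshot(s) backing the AMI
image = ec2.describe_images(ImageIds=[ami_id])["Images"][0]
snapshot_ids = [
    bdm["Ebs"]["SnapshotId"]
    for bdm in image.get("BlockDeviceMappings", [])
    if "Ebs" in bdm
]

# Deregister the AMI first, then delete its snapshots
ec2.deregister_image(ImageId=ami_id)
for snap_id in snapshot_ids:
    ec2.delete_snapshot(SnapshotId=snap_id)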
I usedCreateImageRequestto take a snapshot of a running EC2 machine. When I log into the EC2 console I see the following:AMI - An image that I can launchVolume - I believe that this is the disk image?Snapshot - Another entry related to the snapshot?Can anyone explain the difference in usage of each of these? For example, is there any way to create a 'snapshot' without also having an associated 'AMI', and in that case how do I launch an EBS-backed copy of this snapshot?Finally, is there a simple API to delete an AMI and all associated data (snapshot, volume and AMI). It turns out that our scripts only store the AMI identifier, and not the rest of the data, and so it seems that that's only enough information to just Deregister an image.
Snapshots on Amazon EC2
It depends on how long you need to run your instance. A small linux instance will cost 8.5 cents per hour. If you spend a week at Pycon and have your instance running the entire week, it would cost $14.28 for the week. You probably won't need it while you are asleep, so you can turn it off when you are done each day. If you only need it for an hour it will cost you 8.5 cents.Here's more details on the pricing if you need a bigger server or you need a windows server instead:http://aws.amazon.com/ec2/#pricing
I'm about to go to Pycon, and while I have my hosting at Webfaction one of the tutorials (JKM) asks for students to have AWS instances. I've been trying to figure out what some minimum charge examples might look like? I'll have a lamp server with Django and a requisite amount of storage but next to no traffic.Anyone have some guidance/advice? My Google searches and look here did not turn up much useful info.
As an experiment I want to work a bit with AWS. How much might I expect to pay?
In addition, there'sCloud42, but while all of these tools, along with Amazon's new official Java API interface are quite nice, none of them (except Rightscale, which is awesome, but very incompatible with what I'm doing, sadly) have any sort of functionality remotely close to properly managing an application launch on the cloud.I suspect thatNimbusandOpenNebulaare actually tools closer to what I was asking about - proper automated system management, rather than just access for manual machine management, however I have not had a proper chance to investigate either of these.For my purposes we developed our own in house tool using theTypicalibrary and several other tools, that allowed us to give machines abstract names and launch, configure, and issue commands to them via their names rather than instance id's or private dns's. Might be released open source, but that's not my decision unfortunately. I'll update this if it is.
I'm looking to manage a system (or preferably multiple systems) of machines on EC2, and at present the only way I can see doing that in a reasonable way is to extend theTypicalibrary and build a control panel that launches, configures, and checks in on machines for me.I don't expect there to be any prefabricated solutions to exactly my problem out there, but I'm wondering if there are any good tools for managing EC2 instances out there? Preferably in Java, but it'll more than likely be easier to learn a new language than to implement a seriously powerful control panel.And yes, I know about Elasticfox - it's a wonderful tool, but not nearly powerful enough for what I'm looking for.
What Are Good, Advanced Tools For Managing EC2? [closed]
Update:EC2 now supports "tags" for categorising instances.http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?Using_Tags.htmlI've always used security groups for categorising. I don't see anything wrong with using them!Groups not only allow categorising, they also allow different firewall rules. You can also have more than one group per instance, e.g. "production", "database"Reference:http://docs.amazonwebservices.com/AWSEC2/2008-12-01/DeveloperGuide/index.html?ApiReference-SOAP-RunInstances.html
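With tags, here is a boto3 sketch of both halves - tagging an instance with its role, and later finding every instance in that role (the instance id, tag key and value are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag an instance with its role
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],          # placeholder instance id
    Tags=[{"Key": "Role", "Value": "search"}],
)

# Later: find every instance carrying that role tag
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Role", "Values": ["search"]}]
)["Reservations"]
for r in reservations:
    for i in r["Instances"]:
        print(i["InstanceId"], i["State"]["Name"])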
Is there a way to distinguish between sets ofEC2instances?My use case is that I have a bunch of web tier machines and a bunch of search tier machines; currently the only way to track what each instance is doing is in a roll-your-own asset directory, like LDAP or a database.Ideally, I'd like to be able to determine theroleof a machine from the metadata available from the AWS APIs.Currently, the only approach I've come up with is to have different machine roles in different security groups (even if it's not strictly required). Is there a better way?
Categorising EC2 instances
The bug report for this issue ishereThe underlying cause is that the AWS cli shipped a breaking change in a minor version release. You can see thishereI'm assuming here you're using thepulumi-ekspackage in order to provision an EKS cluster greater thanv1.22. The EKS package uses a resource provider to configure some EKS resources like theaws-authconfig map, and this isn't the same transient kubeconfig you're referring to in~/.kube/configIn order to fix this, you need to do the following:Ensure youraws-cliversion is greater than1.24.0or2.7.0Ensure you've updated yourpulumi-ekspackage in your language SDK package manager to greater than0.40.0. This will mean also updated the provider in your existing stack.Ensure you have the version ofkubectlinstalled locally that matches your cluster version that has been provisioned
I get the following error message whenever I run a pulumi command. I verified and my kubeconfig file isapiVersion: v1I updatedclient.authentication.k8s.io/v1alpha1toclient.authentication.k8s.io/v1beta1and still have the issue, what could be the reason for this error message?Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
is there a way to solve " Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1 " with pulumi
If your app creates a connection pool of 100, that's the number of database connections it will try to open. It must be lower than your MySQL connection limit.Typically connection pools openallthe connections for the pool, so they are ready when a client calls the http API. The connections might normally be running no SQL queries, if there are not many clients using the API at a given moment. The database connections are nevertheless connected.Sort of like when yousshto a remote linux server but you just sit there at a shell prompt for a while before running any command. You're still connected.You asked if adb.t2.microinstance was not recommended for production. Yes, I would agree with that. It's tempting to use the smallest instance possible to save money, but adb.t2.microis too small for anything but light testing, in my opinion.In fact, I would not useanyt2instance for production, regardless of size. Thet2type uses "burstable" performance. This means it can provide only brief periods of good performance. Once the instance depletes its performance credits, they recharge slowly, and while they recharge, the performance of that instance is very low. This is okay for testing, but not for production, if you expect to provide consistent performance at any time.
I have anRDS instancehosting a mySQL database. Instance size isdb.t2.microI also have anExpressJSbackend connecting to the mySQLRDS instancevia a connection pool:Additionally i have a mobile app, the client, feeding off theExpressJSAPI.The issue i'm facing is, either via the mobile app or via Postman, there are times where i get a 'Too many connections' error and therefore several requests fail:On theRDS instance. On current activity i sometimes get 65 connections, showing it's reaching the limit. What i need clarity on is:When 200 mobile app instances connect to the API, to theRDS instance, does it register as 200 connections or 1 connection fromExpressJS?Is it normal to be reaching the RDS instance 65 connection limit?Is this just a matter of me usingdb.t2.microinstance size which is not recommended for prod? Will upgrading the instance size resolve this issue?Is there something i'm doing wrong with my requests?Thank you and your feedback is appreciated.
AWS: Too many connections
I think you have to take into account that other users may haveAllowin their policies, so the approach here should be to deny access to any users not being the user you want it to be. There is a detailed explanation in the AWS docs [1], but for the sake of brevity, I think the terraform code should look like the following:data "aws_iam_policy_document" "vulnerability-scans" { statement { sid = "AllExceptUser" effect = "Deny" principals { type = "AWS" identifiers = ["*"] } actions = [ "s3:PutObject", "s3:GetObject", "s3:ListBucket", ] resources = [ aws_s3_bucket.vulnerability-scans.arn, "${aws_s3_bucket.vulnerability-scans.arn}/*", ] condition { test = "StringNotLike" variable = "aws:userId" values = [ aws_iam_user.circleci.arn ] } } }Even though the reference URL says it is for an IAM role, the same applies for a user. TheStringNotLikecondition operator has more detailed explanation in [2].[1]https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/[2]https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_String
I have a bucket which I need to restrict to a specific user, I have written the following script but it still seems to allow all users to operate on the bucket.resource "aws_s3_bucket" "vulnerability-scans" { bucket = "vulnerability-scans" } resource "aws_s3_bucket_policy" "vulnerability-scans" { bucket = aws_s3_bucket.vulnerability-scans.id policy = data.aws_iam_policy_document.vulnerability-scans.json } data "aws_iam_policy_document" "vulnerability-scans" { statement { principals { type = "AWS" identifiers = [ aws_iam_user.circleci.arn, ] } actions = [ "s3:PutObject", "s3:GetObject", "s3:ListBucket", ] resources = [ aws_s3_bucket.vulnerability-scans.arn, "${aws_s3_bucket.vulnerability-scans.arn}/*", ] } }
Terraform AWS S3 - deny to all except specific user
After a lot of frustration and waiting I solved this by:Deleting the hosted zone associated with the domain.Creating a new hosted zone for the domain. Route 53 auto-generates an NS record with 4 pre-filled name server addressesGoing to Domains > Registered Domains > (domain) > Add or edit name servers. Deleting the four name servers there and replacing them with the four that Route 53 gave me in step 2 (without the periods at the end).Within a few min the domain started pointing to the IP I specified and was picked up by a bunch of DNS servers as measured byhttps://dnschecker.org/Hope this helps someone, someday
I registered a domain name on AWS Route 53, then deleted the generated hosted zone that they automatically generated for the domain and replaced it with this one:The values in the NS record match the name servers of the registered domainI haven't modified these settings for almost 72 hours, and the registered domain name still doesn't point to the IP I specified. Any idea why? It all looks correct to me.
AWS Route 53 domain name still not pointing to IP
Yes, you canspecify sensitive datato be automatically fetched and injected to your container.You do this usingsecretsparameter of your Task Definition:Amazon ECS enables you toinjectsensitive data into your containers by storing your sensitive data in eitherAWS Secrets Managersecrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.
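A rough boto3 sketch of a task definition using the secrets parameter (every name and ARN here is a placeholder, and the execution role must be allowed to read the secret). ECS injects the value as an environment variable, so the Node app just reads process.env.DB_PASSWORD without touching the AWS SDK:

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="my-node-api",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-api:latest",
        "memory": 512,
        "secrets": [{
            "name": "DB_PASSWORD",    # exposed to the container as an env var
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-password",
        }],
    }],
)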
We have a node application running in ECS and have local credentials in the .env file but we don't want to load credentials from the .env file due to security. Rather, we want those to be injected by AWS into the container environment. We don't want to use AWS SDK to fetch secrets in a node application. Is there any way to inject all secrets into the container environment?
Set AWS Secret Manager value in docker environment
Once is deployed with Amplify, go to the build page and look in theDeploytab. You will find a row that says[INFO]: - API Lambda@Edge: xxxxxxx-xxxxxxx. The ID you see will be needed to identify the logs insideAWS CloudWatch.The logs can be found going toCloudWatch->Log groupsand then you can find the API logs following this pattern:/aws/lambda/<region: this will probably be 'us-east-1'>.xxxxxxx-xxxxxxx
In the AWS Amplify dashboard I can't see a way to access my /api/* route logs after deploying a hello world NextJS application. Where would these be located?Steps:init extremely simple helloworld nextjs application with /api/hello.jsexport default (_, res) => res.send("hello world")Deploy to amplify/api/hello returns "hello world"Can't find the logs for this lambda function nor find it anywhere in AWS or the Amplify dashboard. Even after enabling "Amplify Studio" I can't see it listed under 'Functions' but obviously I can call the endpoint without enabling Amplify Studio at all.I can see a handler for /api/* in cloudfront distribution but can't find where the handler is.
Amplify - NextJS - Access lambda logs
You can find an implementation here:import { Construct } from 'constructs'; import { AwsCustomResource, AwsCustomResourcePolicy, AwsSdkCall, PhysicalResourceId } from 'aws-cdk-lib/custom-resources'; interface SSMParameterReaderProps { readonly parameterName: string; readonly region: string; } export class SSMParameterReader extends AwsCustomResource { constructor(scope: Construct, name: string, props: SSMParameterReaderProps) { const { parameterName, region } = props; super(scope, name, { onUpdate: { action: 'getParameter', service: 'SSM', parameters: { Name: parameterName, }, region, physicalResourceId: PhysicalResourceId.of(name), }, policy: AwsCustomResourcePolicy.fromSdkCalls({ resources: AwsCustomResourcePolicy.ANY_RESOURCE, }), }); } public getParameterValue(): string { return this.getResponseFieldReference('Parameter.Value').toString(); } }Source:https://github.com/Idea-Pool/aws-static-site/blob/main/lib/ssm-parameter-reader.ts(Based onCloudFormation Cross-Region Reference)
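For completeness, a small usage sketch of the construct above inside a consuming stack; the parameter name and region are examples, not values from the question:

// Sketch only: parameter name and region are examples.
import { Stack, StackProps, CfnOutput } from "aws-cdk-lib";
import { Construct } from "constructs";
import { SSMParameterReader } from "./ssm-parameter-reader";

export class ConsumerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Read a parameter that lives in us-east-1 even if this stack deploys to another region.
    const hostedZoneId = new SSMParameterReader(this, "HostedZoneIdReader", {
      parameterName: "/global/hosted-zone-id",
      region: "us-east-1",
    });

    new CfnOutput(this, "HostedZoneId", { value: hostedZoneId.getParameterValue() });
  }
}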
I am using CDK to deploy AWS resources but need to get some values from the Parameter Store in a different region. I can see this API in CDK's reference page to read a parameter: ssm.StringParameter.fromStringParameterAttributes. But it doesn't support passing a region. How can I make it work across regions?
How to read parameter store from a different region in CDK?
Typically, the data available in Action C will be dependent on what the result/output of Action B is. However, if you just care about the original input to the state machine execution, you can set the payload of Action C using the Context Object. // roughly "Action C": { "Type": "Task", "Resource": "arn:aws:states:::lambda:invoke", "Parameters": { "Payload.$": "$$.Execution.Input", "FunctionName": "<action c lambda>" }, Check out the AWS documentation for the Context Object.
Let's say I have this state machine in AWS Step Functions, and I started it with this input: { "item1": 1, "item2": 2, "item3": 3 } It's clear to me that Action A receives the input payload. But how can Action C access the state machine input to get the value of item3? Is it possible? Thanks!!
How to access input of state machine in any node at AWS Step Functions
There is no specific IP address shown to the user in the AWS Console, but you can find the hostname and FQDN of the DB under the Connectivity & security tab of the RDS database. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToInstance.html
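If you want to grab the endpoint programmatically instead of from the console, a rough sketch with the AWS SDK for JavaScript v3; the DB instance identifier and region are placeholders:

// Sketch only: "my-database" and the region are placeholders.
import { RDSClient, DescribeDBInstancesCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "us-east-1" });

async function printEndpoint(): Promise<void> {
  const { DBInstances } = await rds.send(new DescribeDBInstancesCommand({
    DBInstanceIdentifier: "my-database",
  }));
  const endpoint = DBInstances?.[0]?.Endpoint;
  // This hostname plus the port is what you pass to your MariaDB client; RDS does not expose a stable IP.
  console.log(`${endpoint?.Address}:${endpoint?.Port}`);
}

printEndpoint().catch(console.error);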
I recently opened a database instance on AWS using RDS. I can't find the IP or hostname of the database. Can somebody help? I'm using MariaDB version 10.5.13. If any more info is needed, just tell me. I'm new to this, so I'm not exactly sure what's needed.
How do I find hostname in AWS?
A Route 53 record gives you the option to choose the target load balancer it points to. To do so manually, once you are in the AWS Console: open Route 53, choose the corresponding hosted zone, select the target Route 53 record, and click Edit record. In the Route traffic section you have to choose: 1) Alias to Application and Classic Load Balancer; 2) the region where the load balancer exists; 3) in the third field, paste the load balancer endpoint. If you're looking to do this via code instead, check How can I create a Route 53 Record to an ALB? (AWS).
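For the "via code" route, a rough sketch with the AWS SDK for JavaScript v3; every concrete value below (your hosted zone ID, the record name, the load balancer's canonical hosted zone ID and its DNS name) is a placeholder you have to replace with your own:

// Sketch only: all IDs and names below are placeholders.
import { Route53Client, ChangeResourceRecordSetsCommand } from "@aws-sdk/client-route-53";

const route53 = new Route53Client({});

async function pointDomainAtLoadBalancer(): Promise<void> {
  await route53.send(new ChangeResourceRecordSetsCommand({
    HostedZoneId: "Z0123456789ABCDEFGH", // your Route 53 hosted zone
    ChangeBatch: {
      Changes: [{
        Action: "UPSERT",
        ResourceRecordSet: {
          Name: "api.example.com",
          Type: "A",
          AliasTarget: {
            // The load balancer's own canonical hosted zone ID (shown in the EC2 console), not your zone ID.
            HostedZoneId: "Z35SXDOTRQ7X7K",
            DNSName: "my-lb-1234567890.us-east-1.elb.amazonaws.com",
            EvaluateTargetHealth: false,
          },
        },
      }],
    },
  }));
}

pointDomainAtLoadBalancer().catch(console.error);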
I am a developer, not a cloud expert, but I have learnt that having knowledge in many areas is key to success. I have an AWS EKS cluster, in which I have a public load-balancer service (check it out; it's a simple NodeJS Express API). I also have a domain on Route 53 along with a pending ACM SSL certificate. I was wondering what I have to do to map my Route 53 domain to my load balancer, so I don't have to use the extremely long domain name AWS provides as a default for the load balancer. Or, to put it another way: how do I change the default domain name of my load balancer to a Route 53 domain?
How to map a Route 53 domain to a load-balancer?
I had the same issue and figured out why it was happening. When you enter your DSN in the Amazon Athena (Beta) connector, the next thing it asks for is your login (which has two options). Do not take the second option. Pick the first option, where all the settings are pulled from the DSN you configured when setting up your data source. If you already did it wrong once, it won't prompt you with the same dialogs again so you can fix it. Create a new Power BI file and repeat the steps.
I am trying to connect to AWS Athena from Power BI using the new Athena connector. The first page prompts for a DSN, which I supply (and which works when connecting through the old ODBC method), but when the new connector attempts to connect with this DSN the following error is thrown: "Unable to connect. We encountered an error while trying to connect. Details: 'We cannot convert the value null to type Record.'"
PowerBI Athena Beta Connector Details: "We cannot convert the value null to type Record."
Short answer: yes, but don't do it. There are other solutions. Why you shouldn't do it: you can use a Lambda in between API Gateway and your containers, but using a Lambda as 'middleware' is an antipattern; you will pay double by making your Lambda wait on your microservices' responses. Other solutions: if you want to handle authentication or check headers and cookies, you should use a Lambda authorizer (sketch below). For your use case you can also make use of an Application Load Balancer, which can route different paths to different target groups: https://aws.amazon.com/premiumsupport/knowledge-center/elb-achieve-path-based-routing-alb/ It might also make sense to have a library, shared by the different microservices, that does the early response or request checking. I'm not sure what your real goal and use case are, but if you elaborate on what you'd like to achieve, I might be able to help.
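For the authorizer route, a minimal sketch of a token-based Lambda authorizer in TypeScript; the token check is a placeholder you would replace with your real validation logic:

// Sketch only: the "allow-me" token comparison is a placeholder check.
import type { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  // Decide up front whether the request may reach the backing microservice.
  const allowed = event.authorizationToken === "allow-me";

  // API Gateway caches and enforces this IAM policy, so the Lambda never sits in the request path of allowed calls.
  return {
    principalId: "user",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [{
        Action: "execute-api:Invoke",
        Effect: allowed ? "Allow" : "Deny",
        Resource: event.methodArn,
      }],
    },
  };
};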
I would like to use AWS API Gateway as a single entry point to my backend, proxying (redirecting) requests to different microservices based on the URL prefix. However, before the proxying happens, it would be nice to have a Lambda that checks the request and decides whether to allow the proxying or to respond immediately; in other words, I would like to have an AWS Lambda act as middleware. Is it possible to do this?
AWS API Gateway proxy with AWS Lambda as middleware
No, there is no way to set one initially; your process is the one the official AWS docs also recommend: "When you create an account, AWS Organizations initially assigns a long (64 characters), complex, randomly generated password to the root user. You can't retrieve this initial password. To access the account as the root user for the first time, you must go through the process for password recovery. For more information, see Accessing a member account as the root user." It may, however, not be necessary to log in as root anyway, since "AWS Organizations automatically creates an AWS Identity and Access Management (IAM) role in the member account. This role enables IAM users in the management account who assume the role to exercise full administrative control over the member account." And generally, setting/configuring passwords in Terraform is risky because the password would show up in the state files and in the version control system you hopefully check your Terraform files into.
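If you go the IAM-role route instead of recovering the root password, a rough sketch of assuming that role with the AWS SDK for JavaScript v3; the account ID is a placeholder, and OrganizationAccountAccessRole is the default role name Organizations creates:

// Sketch only: the account ID is a placeholder.
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({});

async function getMemberAccountCredentials() {
  const { Credentials } = await sts.send(new AssumeRoleCommand({
    RoleArn: "arn:aws:iam::111122223333:role/OrganizationAccountAccessRole",
    RoleSessionName: "admin-session",
  }));
  // Temporary credentials with full admin access in the member account; no root password needed.
  return Credentials;
}

getMemberAccountCredentials().then(console.log).catch(console.error);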
I have a Terraform script that allows me to create new AWS accounts inside Organizations using the aws_organizations_account resource. I do not have any problem creating the new account, but I am wondering what the initial password is for logging in as the root user of the new account. Is there a way to set one? I currently need to click on "I forgot my password" if I want to log in as the root user with my email. main.tf resource "aws_organizations_account" "this" { name = "user1" email = "[email protected]" parent_id = module.organizations.sandbox_organizational_unit_id } After that I go to the AWS login page, log in as the root user with [email protected] and click on "I forgot my password" since I don't know the initial password.
Default password new AWS organization account
As mentioned in the AWS SDK v3 docs, only the HTTP API and the CLI get the Base64 data; other clients get a Uint8Array as the response. So we need some extra data conversion to achieve encryption and decryption using the SDK. const { KMSClient, EncryptCommand, DecryptCommand } = require('@aws-sdk/client-kms'); const client = new KMSClient({ region: AWS_REGION }); // Encrypt: CiphertextBlob comes back as a Uint8Array, so convert it to base64 via Buffer const input = { KeyId: kmsKey, Plaintext: Buffer.from(JSON.stringify(credentials)), }; const encryptCommand = new EncryptCommand(input); const encryptedBlob = await client.send(encryptCommand); const encryptedBase64data = Buffer.from(encryptedBlob.CiphertextBlob).toString('base64'); // Decrypt: convert the stored base64 back to bytes, then the Plaintext Uint8Array back to a string const decryptCommand = new DecryptCommand({ CiphertextBlob: Buffer.from(item.credentials, 'base64'), }); const decryptedBinaryData = await client.send(decryptCommand); const decryptedData = Buffer.from(decryptedBinaryData.Plaintext).toString('utf8');
I am using @aws-sdk/client-kms to encrypt the data. I was getting a base64 string as a response; now I am getting a Uint8Array. const encryptedBlob = await kms.encrypt({ KeyId: kmsKey, Plaintext: Buffer.from(JSON.stringify('data to encrypt')), }); "The encrypted plaintext. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded." (Mentioned in the AWS docs.) Is there any way to get base64 as the response in Node.js?
How to get aws kms encrypt response as base64 string in sdk v3. Getting Uint8Array as response
As you can see, your OS reports nvme1n1 as the device name (not /dev/sdd). So you could apply a user_data with the cloud-init instructions for your EC2 instance: resource "aws_instance" "your-instance" { .. user_data = file("user_data/ebs-mount.sh") .. } where user_data/ebs-mount.sh has the following content (assuming the EBS disk is formatted as xfs): #cloud-config hostname: your-instance runcmd: - sudo mkdir /custom -p - sudo echo '/dev/nvme1n1 /custom xfs defaults 0 0' >> /etc/fstab - sudo mount -a output : { all : '| tee -a /var/log/cloud-init-output.log' }
Could anyone advise on how I can auto-mount an EBS volume created using Terraform and make it available on /custom? resource "aws_instance" "ec201" { ... ebs_block_device { device_name = "/dev/sdd" volume_type = "gp2" volume_size = 10 delete_on_termination = true encrypted = true } ... Is it possible to auto-mount it? I've read these pages: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and "Automatically mount an EBS volume upon starting an Amazon EC2 Linux instance". When I run lsblk: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT ... nvme1n1 259:0 0 10G 0 disk nvme0n1 259:1 0 250G 0 disk └─nvme0n1p1 259:2 0 250G 0 part / I have a 10GB partition that is not mounted. Would it be possible to auto-mount it using Terraform?
How to mount a ebs_block_device using Terraform?
This AWS documentation guide will help you configure your CodeBuild project with your VPC. But I am sure you must have gone through it already. Please share the error as well. Link
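In case it helps, a rough sketch of what a VPC-attached CodeBuild project can look like in the AWS CDK (v2); the VPC ID, subnet selection and build commands are placeholders, not taken from your pipeline:

// Sketch only (AWS CDK v2): VPC ID, subnets and commands are examples.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as codebuild from "aws-cdk-lib/aws-codebuild";

export class BuildStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The VPC that Neptune lives in (Vpc.fromLookup requires the stack's env/account/region to be set).
    const vpc = ec2.Vpc.fromLookup(this, "Vpc", { vpcId: "vpc-0123456789abcdef0" });

    new codebuild.Project(this, "TestProject", {
      vpc,
      // Run the build in private subnets so it can reach the Neptune endpoint.
      subnetSelection: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      buildSpec: codebuild.BuildSpec.fromObject({
        version: "0.2",
        phases: { build: { commands: ["npm ci", "npm test"] } },
      }),
    });
  }
}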
How to enable VPC access for AWS CodeBuild/CodePipeline? I am working with the Neptune database, which requires VPC access. While building code inside AWS CodeBuild, my tests are failing because it's not able to access the Neptune database. How can I configure the pipeline to allow CodeBuild to access the VPC?
How to enable VPC access for AWS CodeBuild/Code Pipeline?
"create ec2 instance from snapshot"Your almost there. The option is called "Create image". So:Go to your snapshot.Right click and choose "Create image" (assume the volume is bootable and it works).Fill out the info required.Image (aka AMI) will be created based on your snapshot and the info you will provide.Launch an instance from the AMI.More details inCreate a Linux AMI from a snapshot.
Is it not possible to create an EC2 instance from a snapshot in AWS? I tried to create a volume, but then I was stuck starting an EC2 instance with that volume. Should that not be an easy process? I don't even know if this way is correct; I am just guessing. What steps are necessary to create an EC2 instance only from the snapshot?
Creating an EC2 instance from a snapshot in AWS?