Response | Instruction | Prompt |
---|---|---|
We could hit the EC2 metadata endpoint to determine whether or not we are running on an EC2 instance. This curl works only from within an EC2 instance:
curl http://169.254.169.254/latest/meta-data/
The library node-ec2-metadata exposes an easy-to-use method which calls this metadata service behind the scenes.
var metadata = require("node-ec2-metadata");
metadata.isEC2().then(function (onEC2) {
console.log("Running on EC2? " + onEC2);
});
A point to keep in mind, straight from the docs (since they hit the HEAD URL and time out after 500ms): The initial call may take up to 500ms on a non-EC2 host, but the
result is cached so subsequent calls provide a result immediately | Is there a way in NodeJs to detect whether it's currently running on an EC2 instance or whether it's being run locally? I have some functionality that changes based on where it's deployed. I can detect the IP address but I want something more dynamic, not tied to specific server configurations. | NodeJs - Detect whether app is on AWS or local |
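For comparison, the detection can also be done without the library: just an HTTP call against the metadata endpoint with a short timeout and a cached result. A minimal Python sketch of that idea, purely illustrative:

```python
# Probe the EC2 instance metadata endpoint; it is only reachable from inside EC2.
# The 0.5s timeout mirrors the 500ms behaviour described in the answer above.
import urllib.request

_cached = None

def is_ec2() -> bool:
    global _cached
    if _cached is None:
        try:
            urllib.request.urlopen("http://169.254.169.254/latest/meta-data/", timeout=0.5)
            _cached = True
        except Exception:
            _cached = False  # timeout or connection error => not on EC2
    return _cached
```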
According to the official Azure docs, you have three options. I can say that the VPN option will be one of the easiest ones, but you can have problems like limited throughput, unpredictable routing via the public internet, and the cost of the AWS and Azure data transfer fees. To understand better which option to use, you can check this flow chart:
Option 1: Connect Azure ExpressRoute and the other cloud provider's equivalent private connection. The customer manages routing.
Option 2: Connect ExpressRoute and the other cloud provider's equivalent private connection. A cloud exchange provider handles routing.
Option 3: Use Site-to-Site VPN over the internet. For more information, see Connect on-premises networks to Azure by using Site-to-Site VPN gateways.
Options 1 and 2 are the best options to avoid use of the public internet, if you require an SLA, if you want predictable throughput, or if you need to handle large data transfer volumes. Consider whether to use customer-managed routing or a cloud exchange provider if you haven't implemented ExpressRoute already. On the AWS side, you will be able to configure your VPC; to understand how to do this, check here. For more information about these three options, check here | I have an app service (Rest API) in Azure and I am planning on hosting another service that has to be integrated with the Azure app service. Could someone please let me know the preferred way(s) to make sure the communication is on a private secure channel? | Best way to establish a private secure connection between an Azure App Service and an AWS ECS service |
The average statistic is the Sum / Sample Count. Sample Count is just the number of CloudWatch data points for the metric in the period. So it will be the total number of errors, divided by the number of error metrics reported to CloudWatch. For example, if you were tracking over 10 minutes, and the metrics were reported once a minute, then Average would give you the average number of errors over those 10 minutes. None of this takes into account the total number of Lambda invocations during that period, just the number of errors. The average statistic gives you the average number of errors over a time period. You want the average number of errors over all invocations for a time period, so you'll have to use metric math in order to take into account 2 different metrics (Errors and Invocations). | I'm trying to add a CloudWatch alarm which is triggered if the error rate experienced by a lambda passes a certain percentage threshold. I've seen a few places that suggest taking the lambda error count and the lambda invocation count and using metric math to do error count / invocation count. That approach makes sense, but what's the difference between doing that manual calculation and using the average statistic for errors? | How do you set an alarm for the percentage of errors experienced by a lambda in AWS? |
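A minimal boto3 sketch of the metric-math approach described in the answer above, alarming on 100 * Errors / Invocations. The function name, threshold, and alarm name are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="my-function-error-rate",          # placeholder
    ComparisonOperator="GreaterThanThreshold",
    EvaluationPeriods=1,
    Threshold=5.0,                               # alarm above a 5% error rate
    TreatMissingData="notBreaching",
    Metrics=[
        {   # the expression combines the two raw metrics below
            "Id": "errorRate",
            "Expression": "100 * errors / invocations",
            "Label": "Error rate (%)",
            "ReturnData": True,
        },
        {
            "Id": "errors",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": "Errors",
                    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            "Id": "invocations",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": "Invocations",
                    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
    ],
)
```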
API Gateway REST API allows proxy integrations to other endpoints via two methods. HTTP_PROXY is for public API endpoints only. VPC_LINK allows us to integrate API Gateway with private endpoints exposed via an NLB (not an ALB). API Gateway HTTP API also supports both public and private integrations: HTTP URI for public endpoints; Private Resource for an NLB, ALB, or Cloud Map. | Well, I'm creating an AWS API Gateway and I can't understand when I should use integration type HTTP_PROXY or VPC_LINK; both ask me for a URL to proxy. I searched about it but can't find any concrete and simple example of when to use one or the other. | AWS ApiGateway VPC Link vs HTTP Proxy integration |
You should configure Named Profiles for use with the AWS CLI, one for each account. See here for more information. Once you do this, re-run amplify configure in each project and amplify should recognize that you have profiles available and ask whether it should use one for the given project. Select the correct profile for each project and you should no longer have to run configure going forward. | I am developing an AWS Amplify backend for a startup. I have two AWS accounts:
1- where the startup prod resides (prod env)
2- where I test features before making any changes to the prod environment (dev/test env)
On my local computer I have two Amplify apps set up:
1- prod-app linked to prod env
2- dev-app linked to dev env
The problem I am facing is that I have to use amplify configure each time I move from one account to another, which creates new roles every time. Is there any way I can tie the role and account to the Amplify apps so it automatically gets the required user without using the amplify configure command again & again? TIA | AWS Amplify configure command for different accounts |
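For reference, named profiles live in the shared AWS credentials/config files. A minimal sketch with placeholder profile names and dummy keys (the real values come from IAM users in each account):

```ini
# ~/.aws/credentials  (profile names and keys are placeholders)
[startup-prod]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[startup-dev]
aws_access_key_id = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
```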
This is a long-standing issue with task definitions, documented and discussed on GitHub Issues. So far, the issue is open, and the workaround reported is to manually remove the current version of the task definition from the TF state. In your case it would be:
# we can still get the task definition diff at this point, which we care about
terraform plan
# remove from state so that task definition is not destroyed, and we're able to rollback in the future if needed
terraform state rm aws_ecs_task_definition.tktest_terraform-td
# diff will show a brand new task definition created, but that's ok because we got the diff in step 1
terraform apply | I am creating an ECS task definition using terraform.
resource "aws_ecs_task_definition" "tktest_terraform-td" {
family = "nodejs-webapp"
container_definitions = "${templatefile("${path.module}/taskdefinition/service-td.tpl", { webapp_docker_image = "${var.webapp_docker_image_name}:${var.webapp_docker_image_tag}"})}"
}
Whenever there is a change to the task definition a new revision is created, but the problem is that the older revision gets deleted. Is it possible to create a new revision and at the same time preserve the older revision? | how to preserve an older aws ecs task definition revision when creating a new one using terraform |
As you noticed, DynamoDB indeed does not have an option to sort items "globally". In other words, there is no way to Scan the database in sorted partition-key order. You can only sort items inside one partition, sorted by the "sort key". When you have a small amount of data, you can indeed do what you said: have a single partition with everything in it. However, it's not clear how practical this approach becomes as your single partition grows - to gigabytes or terabytes - and how well DynamoDB can load-balance when you have just a single partition (I never saw any DynamoDB documentation which answers this question). So another option is not to have a single partition but rather a number of them. For example, consider that you want to sort items by date. Now instead of having a single partition, have a partition per month, i.e., the partition key is the month number. Now, if you want to sort everything within a month, you can do it directly, but if you want to get a sorted list for a full year, you need to Query twelve partitions, in order, getting a sorted list from each one and combining them into a sorted list for the full year (a sketch follows this entry). So-called time-series databases are often modeled this way. | I'd like to list records from my DDB table ordered by creation date.
My table has an attribute DateCreated. All examples I can find describe ordering within some partition.
But I want global ordering.
Am I supposed to create an artificial attribute which will have the same value across all records, just to use it as a partition key? E.g. add a new attribute GlobalPartition with value 1 to every record in the table, and create a GSI with partition key GlobalPartition and sort key DateCreated. Isn't there a better way? Thx! | How to sort DynamoDB table by a single column? |
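A minimal boto3 sketch of the partition-per-month pattern described in the answer above. The table name and key names are assumptions, and pagination (LastEvaluatedKey) is omitted for brevity:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Events")  # placeholder table name

def sorted_items_for_year(year: int):
    """Query the twelve month partitions in order and concatenate the results."""
    items = []
    for month in range(1, 13):
        response = table.query(
            KeyConditionExpression=Key("YearMonth").eq(f"{year}-{month:02d}"),
            ScanIndexForward=True,  # ascending by the sort key (e.g. DateCreated)
        )
        items.extend(response["Items"])  # each partition is already sorted
    return items
```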
If you want to pass credentials to Ansible modules, Ansible has a dedicated section on how to do it using environment variables or vars_file. You can also explicitly set them using the set command, e.g.:
aws configure set aws_access_key_id default_access_key
aws configure set aws_secret_access_key default_secret_key
aws configure set default.region us-west-2
You can also have your Ansible recipe create the config files ~/.aws/credentials and ~/.aws/config. Their format is shown here. | I am trying to find an Ansible module that would allow me to set up the Secret and Access Keys for a certain user on a target machine. The command line equivalent of this would be:
11:14:26 root@ov90-NAT ~ [33] {e=255}
# aws configure
AWS Access Key ID [None]: something
AWS Secret Access Key [None]: something
Default region name [None]: us-east-1
Default output format [None]: jsonI'm fairly new to both Ansible and AWS so any help would be appreciated! | Configuring AWS keys for a Linux user using Ansible |
For dynamic generation you can use AddPermission to add the necessary permissions.
function addPermission ({ lambdaArn, restApiId }) {
const { region, namespace } = parseArn(lambdaArn)
const params = {
Action: 'lambda:InvokeFunction',
FunctionName: lambdaArn,
Principal: 'events.amazonaws.com',
StatementId: `scheduleName`,
SourceArn: `RuleARN`
}
return lambda.addPermission(params).promise()
}
If you are using the serverless framework:
functions:
myFunction:
handler: index.handler
events:
- eventBridge:
schedule: rate(10 minutes)
input:
        key1: value1
The above definition simply creates the rule and adds your lambda as the target. It will take care of whatever permissions are needed as well. See: Setting up event pattern matching, EventBridge Use Cases and Examples, Schedule. | I am using the serverless framework and the AWS Node.js SDK for adding a scheduled cron-expression-based rule to the default event bus.
eventBridge.putRule(params, function (err, data) {...
After that I add a target to this rule.
const params = {
Rule: data.ruleName,
Targets: [
{
Arn: process.env.SCHEDULED_EVENT_LAMBDA_ARN, /* required */
Id: process.env.SCHEDULED_EVENT_LAMBDA_ID, /* required */
Input: JSON.stringify(someData)
},
],
};
eventBridge.putTargets(params, function (err, data) {...
Adding the target succeeds on the dynamically created scheduled cron rule on EventBridge, but when I navigate to the Lambda dashboard it does not seem like the trigger configuration is updated, and eventually the lambda function is not triggered. The AWS SDK documentation for the EventBridge putTargets call mentions: For AWS Lambda and Amazon SNS resources, EventBridge relies on resource-based policies. So if the resource policy is the issue (not confirmed), is there any configuration regarding the resource policy which I can set in the serverless.yml file for a specific function that allows the EventBridge service to add a trigger to the deployed target lambda function? | adding AWS Lambda as target using AWS SDK for event bridge rule |
"Or Kinesis Data Streams can directly write to lambda somehow?" Data Streams can't write directly to S3. Instead, Firehose can do this: delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic. What's more, Firehose allows you to buffer the records before writing them to S3. The writing can happen based on buffer size or time. In addition to that, you can process the records using a Lambda function before writing to S3. Thus, collectively, it seems that Firehose is more suited to your use case than Data Streams. | I have events that keep coming which I need to put to S3. I am trying to evaluate whether I must use Kinesis Stream or Firehose. I also want to wait for a few minutes before writing to S3 so that the object is fairly full. Based on my reading of Kinesis Data Streams, I have to create an analytics app which will then be used to invoke a lambda. I will then have to use the lambda to write to S3. Or can Kinesis Data Streams directly write to lambda somehow? I could not find anything indicating the same. Firehose is not charged by the hour (while a stream is). So is Firehose a better option for me? | Writing to S3 via Kinesis Stream or Firehose |
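A minimal boto3 sketch of the Firehose side of this: the producer just puts records, and the delivery stream (assumed to already exist with an S3 destination and buffering hints) handles the batching to S3. The stream name is a placeholder:

```python
import json
import boto3

firehose = boto3.client("firehose")

def send_event(event: dict) -> None:
    # Firehose buffers these records by size/time before writing objects to S3.
    firehose.put_record(
        DeliveryStreamName="my-events-to-s3",  # placeholder
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )
```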
There are several issues with your aws_cloudwatch_metric_alarm. alb_arn_suffix is invalid, thus the error. dimensions are also incorrect. namespace is sadly also wrong. The UnHealthyHostCount metric is part of the AWS/ApplicationELB namespace, which has only two sets of dimensions:
TargetGroup, LoadBalancer
TargetGroup, AvailabilityZone, LoadBalancer
Assuming that you would use the first set, aws_cloudwatch_metric_alarm would be something like the following:
resource "aws_cloudwatch_metric_alarm" "this" {
alarm_name = "alb-alarams"
alarm_description = "unhealthy"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = 1
threshold = 1
period = 60
unit = "Count"
namespace = "AWS/ApplicationELB"
metric_name = "UnHealthyHostCount"
statistic = "Sum"
alarm_actions = ["arn:aws:sns:eu-west-2:124531745575:alb-alerts"]
dimensions = {
TargetGroup = aws_lb_target_group.lb-tg.arn_suffix
LoadBalancer = aws_lb.lb.arn_suffix
}
}
You would have to substitute aws_lb_target_group and aws_lb for your values. | I am trying to create CloudWatch alarms for a load balancer using terraform with the below code. I am getting an error:
An argument named "alb_arn_suffix" is not expected here.
Here is the sample code which I am using.
resource "aws_cloudwatch_metric_alarm" "this" {
alarm_name = "alb-alarams"
alarm_description = "unhealthy"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = 1
threshold = 1
period = 60
unit = "Count"
namespace = "ALB"
metric_name = "UnHealthyHostCount"
statistic = "Sum"
alb_arn_suffix = ["arn:aws:elasticloadbalancing:eu-west-2:124531745575:loadbalancer/app/alb-
123/1cd382a00a565a8b"]
alarm_actions = ["arn:aws:sns:eu-west-2:124531745575:alb-alerts"]
dimensions = {
Name="ALB"
Value="test"
}Please advise. | CloudWatch alarams using terraform for load balancer |
Yes. AWS Cognito Identity Pools support Unauthenticated Identities: Amazon Cognito identity pools support both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by any supported identity provider. Unauthenticated identities typically belong to guest users. How to set them up is explained in the AWS docs. | Is it possible to use anonymous authentication with AWS Cognito? I would like to know if it is possible to do anonymous authentication and control S3 access rights through it. | anonymous authentication with AWS cognito |
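A minimal boto3 sketch of obtaining guest (unauthenticated) credentials from a Cognito identity pool, assuming the pool has unauthenticated identities enabled. The pool ID and region are placeholders:

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# No Logins map is passed, so Cognito hands back an unauthenticated (guest) identity.
identity = cognito.get_id(IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000")
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])["Credentials"]

guest_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
# guest_session can now call S3 with whatever the pool's unauthenticated role allows.
```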
Add a Choice state after step 1 to check if there is at least one record for the Map. We can't check the length of an array, so we use isPresent on the first element of the map input, $.inputForMap[0].
Step Definition
{
"StartAt":"Dummy Step 1 Output",
"States":{
"Dummy Step 1 Output":{
"Type":"Pass",
"Result":[
"iter 1",
"iter2"
],
"ResultPath":"$.inputForMap",
"Next":"does map has atleast one record?"
},
"does map has atleast one record?":{
"Type":"Choice",
"Choices":[
{
"Variable":"$.inputForMap[0]",
"IsPresent":true,
"Next":"loop on map"
}
],
"Default":"End of Step Function"
},
"End of Step Function":{
"Type":"Pass",
"End":true
},
"Step three":{
"Type":"Pass",
"Next":"End of Step Function"
},
"loop on map":{
"Type":"Map",
"Next":"Step three",
"Iterator":{
"StartAt":"Step 2 - Looping on map",
"States":{
"Step 2 - Looping on map":{
"Type":"Pass",
"End":true
}
}
},
"ItemsPath":"$.inputForMap",
"MaxConcurrency":1
}
}
}
(Screenshots in the original answer show the execution when the Map is not empty and when the Map is empty.) | I have a workflow with 3 steps:
Task - Upload N files. Produces an array of N job definitions to be used as input to the Step 2 map state.
Map - process each job. Due to the map state, this is executed N times.
Task - do some other thing.
What I would like is to only do Step 3 if any iterations occur in Step 2. The way this is designed, Step 1 usually produces no output, so Step 2 is basically skipped. I've noticed that in the scenario I've outlined, the output from Step 2 is just [], whereas normally it contains a whole lot of information about the iterations. Is it possible to perform this kind of workflow? | AWS Step Functions - Utilize empty Map state output when no iterations occur |
VPC peering connections do not support transitive routing; it violates the source/destination check. An instance will not receive any traffic if the destination is not within the VPC. So, a peered VPC without an IGW will not be able to access the internet through the peered VPC, because when traffic arrives in the VPC which has the IGW, the source is outside that VPC and the destination is not the local VPC (it is an outside network). Unsupported VPC configurations are listed here. We can do it by routing traffic from the private VPC to a proxy EC2 in the public VPC (by disabling the source/dest check on the EC2) which forwards the requests to the IGW. We can also use Transit Gateway; here is a blog. | I have two custom VPCs for the purpose of private & public access: VPC1 (private) & VPC2 (public). Each VPC has one subnet and one EC2 with proper inbound rules. I am able to update software on the public EC2 which is absolutely fine. Also, I am able to establish an SSH connection between those two EC2s after VPC peering. But my goal is to use the internet on the private EC2 via the public EC2. To achieve that I must add the NAT gateway of VPC2 to the route table of VPC1 (if I am not wrong). However, the NAT gateway is not visible in VPC2's route table. I can use a NAT gateway from a private subnet to a public subnet when both subnets are within a single VPC, but here I am struggling when they are in two different VPCs. Any advice please? | Connecting to internet through VPC peering |
From the Boto3 1.26.161 documentation: note that the docs for the Copy() API call mention that it allows ALLOWED_DOWNLOAD_ARGS (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS):
ALLOWED_DOWNLOAD_ARGS = ['ChecksumMode', 'VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'RequestPayer', 'ExpectedBucketOwner']
ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ChecksumAlgorithm', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'ExpectedBucketOwner', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'ObjectLockLegalHoldStatus', 'ObjectLockMode', 'ObjectLockRetainUntilDate', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId', 'SSEKMSEncryptionContext', 'Tagging', 'WebsiteRedirectLocation'] | ExtraArgs (dict) -- Extra arguments that may be passed to the client operation. I'm looking at the documentation https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.copy and I couldn't see any information on what possible values we can pass for this parameter. Any ideas? | possible ExtraArgs values in boto3 s3 client copy function |
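A minimal sketch of passing ExtraArgs to the managed copy() call, using values from the allowed-args lists above. Bucket names, keys, and the KMS key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.copy(
    CopySource={"Bucket": "source-bucket", "Key": "path/to/object"},
    Bucket="destination-bucket",
    Key="path/to/object",
    ExtraArgs={
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": "alias/my-key",     # placeholder KMS key alias
        "StorageClass": "STANDARD_IA",
    },
)
```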
There is no built-in mechanism to lock a DynamoDB table to safeguard it from concurrent overwriting. But there are design patterns which you can implement yourself or, depending on your programming language, find an existing implementation ready to be used, even provided by AWS. In the AWS docs you can find information on how to implement Optimistic locking: Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in Amazon DynamoDB. If you use this strategy, your database writes are protected from being overwritten by the writes of others, and vice versa. AWS also provides the Amazon DynamoDB Lock Client for Java: The DynamoDB Lock Client implements a protocol allowing similar applications to take advisory locks on any part of your problem domain, big or small. This protocol ensures your players "stay in possession of the ball" for a certain period of time. At the end of the day, it's up to you to design a solution "to safeguard the table" which meets your needs. | I am updating a DynamoDB table from a lambda function. In a high-throughput case, there could be multiple lambda instances running at the same time updating the same table. Is there a way to safeguard the table? Can I lock the table? | How can I avoid overwriting dynamodb from two lambdas? |
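A minimal boto3 sketch of the optimistic-locking pattern mentioned above, using a version attribute and a conditional update. Table and attribute names are assumptions:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Items")  # placeholder table name

def update_with_optimistic_lock(item_id: str, new_payload: str, expected_version: int) -> bool:
    try:
        table.update_item(
            Key={"id": item_id},
            UpdateExpression="SET payload = :p, version = :new",
            ConditionExpression="version = :expected",  # fails if another writer got there first
            ExpressionAttributeValues={
                ":p": new_payload,
                ":new": expected_version + 1,
                ":expected": expected_version,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # lost the race: re-read the item and retry
        raise
```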
Using an Amazon SQS FIFO queue means that you want to receive messages in order. It will also try to ensure ordering within a Message Group. This means that, if some messages for a given Message Group ID are currently being processed ("in flight"), no more messages for that Message Group will be provided, since an earlier message might be returned to the queue if not fully processed. This could result in messages being processed out of order. From Using the Amazon SQS message group ID - Amazon Simple Queue Service: To interleave multiple ordered message groups within a single FIFO queue, use message group ID values (for example, session data for multiple users). In this scenario, multiple consumers can process the queue, but the session data of each user is processed in a FIFO manner. When messages that belong to a particular message group ID are invisible, no other consumer can process messages with the same message group ID. Therefore, your choices are:
Don't use a FIFO queue, or
Use different Message Group IDs, or
Be happy with what it is doing because that is desired FIFO behaviour | I'm learning AWS SQS and I've sent 6 messages to a FIFO queue, with the same GroupId. But when I try to poll for messages, I can only receive 2 of them. (Why? I set MaxNumberOfMessages=10 using the boto3 API, but I can only receive 2. How can I receive all of the messages?)
(As shown in this picture, I have 5 messages available, but I can only receive 2 messages.)
I tried to delete one of the two received messages and poll again. The deleted one is gone, and I received a new message. But in total, it's still 2 messages. | AWS SQS FIFO can't receive all messages |
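A minimal boto3 sketch of the second option above (different Message Group IDs), so that several groups can be in flight at once. The queue URL and grouping key are placeholders:

```python
import uuid
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"  # placeholder

def publish(order_id: str, body: str) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageGroupId=order_id,                   # ordering is only enforced per group
        MessageDeduplicationId=str(uuid.uuid4()),  # or enable content-based deduplication
    )
```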
Correct. Amazon SNS normally uses a Publish/Subscribe model for messages. The one exception is the ability to send an SMS message to a specific recipient. If you wish to send an email to a single recipient, you will need to use your own SMTP server, or use Amazon Simple Email Service (Amazon SES); a short SES sketch follows this entry. | I am working on sending OTP messages for user login leveraging Amazon SNS. I am able to send a text message as suggested here. For the email notification as well I would like to use a similar approach. But it looks like for email notifications, a topic has to be created in SNS and a subscriber has to be created for each email id registered in the application. Is it not possible to send email to a mail-id dynamically, as done for text messages, without creating topics and subscribers? If not, please suggest a way to set the email id dynamically based on the user logged in.
Code for Text Messaging:
public static void main(String[] args) {
AmazonSNSClient snsClient = new AmazonSNSClient();
String message = "My SMS message";
String phoneNumber = "+1XXX5550100";
Map<String, MessageAttributeValue> smsAttributes =
new HashMap<String, MessageAttributeValue>();
//<set SMS attributes>
sendSMSMessage(snsClient, message, phoneNumber, smsAttributes);
}
public static void sendSMSMessage(AmazonSNSClient snsClient, String message,
String phoneNumber, Map<String, MessageAttributeValue> smsAttributes) {
PublishResult result = snsClient.publish(new PublishRequest()
.withMessage(message)
.withPhoneNumber(phoneNumber)
.withMessageAttributes(smsAttributes));
System.out.println(result); // Prints the message ID.
} | AWS SNS OTP emails |
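A minimal sketch of the SES alternative mentioned in the answer above, in Python for brevity (the Java SDK exposes the same SendEmail operation). Addresses are placeholders, and the sender address/domain must already be verified in SES:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

def send_otp_email(recipient: str, otp: str) -> None:
    ses.send_email(
        Source="no-reply@example.com",               # must be SES-verified
        Destination={"ToAddresses": [recipient]},    # no topic or subscription needed
        Message={
            "Subject": {"Data": "Your login code"},
            "Body": {"Text": {"Data": f"Your one-time passcode is {otp}"}},
        },
    )
```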
Aurora Serverless can't be accessed from the internet. From the docs: You must create your Aurora Serverless DB cluster in an Amazon Virtual Private Cloud (Amazon VPC). Aurora Serverless DB clusters are accessible only from an Amazon VPC and can't use a public IP address. Thus, you need to set up a VPN or some proxy (e.g. an SSH tunnel through a bastion host) to be able to connect to Aurora Serverless from outside of AWS. | I'm currently having issues setting up the AWS Explorer plugin in DataGrip to recognise the Aurora Serverless Clusters (MySQL). I have set up credentials from IAM in the credentials file, and can access other AWS services (if I select the dropdown "Schemas", for example, I can see the list of schemas in my org) but clicking the RDS dropdown shows "empty", and doesn't even show the list of database engines. I have tried connecting with Secrets Manager and using the correct secret for the DB cluster but no luck. When I try to add the database cluster as a data source, it just hangs on "Introspecting" and then the endpoint for that cluster. I found this issue on the aws-toolkit for JetBrains GitHub (https://github.com/aws/aws-toolkit-jetbrains/issues/2124) which mentions that it could be a driver problem. I have tried changing to the MySQL driver, and that hasn't seemed to fix it. DataGrip also seems to heavily encourage using the recommended Aurora MySQL driver. Is this a bug with DataGrip, or AWS Explorer, or am I missing something obvious? Do I need to enable SSL CAs to get AWS Explorer the correct permissions? Thanks!
EDIT: I have gone through the prerequisites listed in the AWS docs:
I have installed the AWS CLI and AWS SAM CLI
I have installed Docker (but I haven't set up any containers - I think this is only needed if I'm running localhost?)
I'm running Windows 10. | Can AWS Aurora Serverless Clusters be configured via AWS Explorer in DataGrip? |
The VolumeWriteIOPs and VolumeReadIOPs metrics are pretty misleading due to their names. "Ps" does not stand for "per second". The metric is the sum of IO operations in a five-minute interval.
You can calculate the per-second value by dividing it by 300. That leads to around 2 IO per second in your case. | I've created an RDS Aurora Serverless cluster with a maximum ACU of 1 and have noticed a high number of Volume Write IOPS, despite not creating a database or ever connecting to the cluster. I've looked through the general_log and noticed this statement, which is executed approximately every 2 seconds:
INSERT INTO mysql.rds_heartbeat2(id, value) values (1,1607638395773) ON DUPLICATE KEY UPDATE value = 1607638395773 ;
This would explain some of the Write IOPS but nothing close to the 550 per minute that the graph is showing. Can someone explain where these IOPS are coming from? | AWS Aurora Serverless unexplained write IOPS |
SecurityGroups can only be used for the default VPC. Since you are explicitly assigning VPCID to InstanceSecurityGroup, it will be considered non-default, resulting in a failed deployment. You must use SecurityGroupIds (not SecurityGroups) in your case, as your VPC will be considered non-default:
SecurityGroupIds:
  - !GetAtt 'InstanceSecurityGroup.GroupId' | I am trying to create an EC2 instance (new), Security group (new) and VPC (existing). Here is my CloudFormation template. When I run the template in a Stack, I get the error "Value () for parameter groupId is invalid. The value cannot be empty". How do I solve this?
Template:
Parameters:
VPCID:
Description: Name of an existing VPC
Type: AWS::EC2::VPC::Id
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: must be the name of an existing EC2 KeyPair.
InstanceType:
Description: EC2 instance type
Type: String
Default: t2.medium
AllowedValues:
- t2.medium
- t2.large
AccessLocation:
Description: The IP address range that can be used to access to the EC2 instances
Type: String
Resources:
EC2Instance:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref 'InstanceType'
SecurityGroups:
- !Ref 'InstanceSecurityGroup'
KeyName: !Ref 'KeyName'
ImageId: !Ref 'ImageId'
InstanceSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPCID
GroupDescription: Enable SSH
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: !Ref 'AccessLocation' | How to use existing VPC in AWS CloudFormation template for new SecurityGroup |
The get_caller_identity() function returns a dict containing:
{
'UserId': 'AIDAEXAMPLEHERE',
'Account': '123456789012',
'Arn': 'arn:aws:iam::123456789012:user/james'
}
So, to use this:
import boto3
sts = boto3.client('sts')
response = sts.get_caller_identity()
print('User ID:', response['UserId'])
Or you can use response.get('UserId') to get the user ID. The key to the user ID in the response dictionary is always the literal UserId. It doesn't vary (you cannot call response.get('james'), for example). You cannot retrieve the identity of an arbitrary IAM principal using sts.get_caller_identity(). It only gives you the identity associated with the credentials that you implicitly used when making the call. | I am trying to get the UserId after the creation of a user, using "account_id = boto3.client('sts').get_caller_identity().get('some_other_user')", but as output I get None. What could be the reason? I am very new to boto and python so it might be something very small.
import boto3
import sys
import json
iam = boto3.resource('iam') #resource representing IAM
group = iam.Group('group1') # Name of group
created_user = iam.create_user(
UserName='some_other_user'
)
account_id = boto3.client('sts').get_caller_identity().get('some_other_user')
print(account_id)
create_group_response = iam.create_group(GroupName = 'group1')
response = group.add_user(
UserName='some_other_user' #name of user
)
group = iam.Group('group1')
response = group.attach_policy(
PolicyArn='arn:aws:iam::196687784:policy/boto-test'
) | Get the UserID for an IAM user programmatically using Boto3 |
I think doing this in place is going to be extremely challenging. What you could do is boot a second Prometheus pod, backed by an encrypted PVC, and configure the first Prometheus to remote-write to the second instance. If you set up the constraints on your cluster nodes correctly via taints and tolerations, you can ensure both Prometheus pods run on the same node. You will then be able to ssh in to the EKS node, find the two PVC volumes as local filesystem mounts, and cp -R from the source unencrypted volume to the target encrypted volume. This should allow you to shift the data with no loss. While on the subject of Prometheus - take a look at VictoriaMetrics - it is a near-100% compatible drop-in for Prometheus which uses less memory and is much more IO and CPU efficient. These are major wins if you need Prometheus in an EKS environment. | I have a Prometheus server deployment running inside an EKS cluster. The EBS volume attached to the Prometheus deployment is unencrypted. I want to encrypt the volume attached to the Prometheus server deployment. I don't want to suffer data loss, or at most minimal data loss. The challenge foreseen is with the process of creating the encrypted volume and attaching it to the Prometheus deployment, since the time taken for that process would be too long for maybe 600GB of data.
Can anyone provide any suggestion, it would be great if someone could provide some sort of help. | Encrypt EBS volume with PVC without data loss inside Kubernetes |
We were in the same boat... couldn't delete the functions for days. How we fixed it:
Create a new distribution
Associate the lambda with the new distribution
Remove the association you just made
Delete the lambda versions
Delete the lambda
There might be some time in between these steps, but that finally got us unstuck! | Even though the CloudFront distribution which was associated with a number of Lambda functions deployed @Edge was already destroyed a couple of days ago, I still can't delete my lambda: it keeps referring me to the "documentation for Deleting Lambda@Edge Functions and Replicas", which says only one thing: you should wait for a couple of hours (not days). Any suggestions what else could be preventing the lambda from being deleted? P.S. I also double-checked that ALL versions of the lambda do not have an association with any CloudFront distributions | Can't delete Lambda@Edge even though previously associated CloudFront distribution was already destroyed |
Once you get into the instance using eb ssh, you can become the root user simply by executing:
sudo su - | Using SSH for an Elastic Beanstalk instance has become as easy as using eb ssh.
But now I need root access to an environment file and I just can't figure out how I can access the instance as root? | SSH to Elastic Beanstalk instance as ROOT |
The DynamoDB console only allows GSIs using types of string, binary, or number. So you could use strings ("t" or "f"), numbers (1 or 0) or binary (also 1 or 0) to represent a boolean value if you'd like. It sounds like you're trying to build a sparse index (e.g. only certain items are in the index). Keep in mind that you can do this by the mere existence of the attribute that makes up the GSI. For example, you could include the ignored attribute on items you want to project into the index and remove the ignored attribute from items you do not want in the index. | It's just a doubt that I can't resolve from the internet. I have a table like this:
| id | infos | ignored |
| 1 | abc | true |
| 2 | def | false |
| 3 | ghi | false |
I see I can't create a DynamoDB GSI on boolean columns. Is that right? I want to create a GSI on this ignored column. | DynamoDB GSI using boolean like a hash key |
"couldn't find any service that provides a certificate for an IP." This is because you need a domain to obtain a valid public certificate. You can't register an SSL cert for an IP. But if you already have your own domain www.xyz.com, you can get a certificate for one of its subdomains, e.g. api.xyz.com. However, ACM certs can't be used on instances. Thus, you need to get a valid public SSL cert from a third party. A popular choice is https://letsencrypt.org/ with certbot, which provides free SSL certificates. By the way, Stack Overflow uses Let's Encrypt as its SSL cert provider, so it's a widely used and trusted SSL provider. | I'm creating a simple website. The frontend is stored in S3, and hosted by CloudFront. I managed to add a trusted SSL certificate to my frontend domain (www.xyz.com) using AWS Certificate Manager. The backend is running on an EC2 instance. I added a self-signed certificate to it. I'm able to hit the APIs using Postman but the requests from the frontend are failing because of the self-signed certificate. I checked AWS Certificate Manager again to see if it could provide me with a cert for my backend server, but it requires a domain. My server is running on an IP and port, and I couldn't find any service that provides a certificate for an IP. I don't want to spend extra money to get a domain for my backend. So how do I get a trusted SSL certificate for a backend server running on something like 10.12.12.10:9000? | Trusted SSL Certificate for Backend Server |
A NAT Gateway is for outgoing traffic only.
If you have to access the private EC2 instance, you need a bastion host in a public subnet in the same VPC, or a VPN connection, or AWS Systems Manager. | When you set up an EC2 instance in a private subnet to access the internet through a NAT gateway (with all the necessary routing and association through the route table), how do you go about SSH'ing into the private EC2? For example, an EC2 in the NAT Gateway's public subnet making a connection through the public EC2 to the private EC2. | NAT Gateways - how do you go about SSH'ing into the private EC2? |
Make sure you're not blocking the ports you're using (from public networks). You can go to Amazon EMR, then Block public access, and add the ports you want to access from a public network in Exceptions (or just disable the Block public access option). | When we try to launch AWS EMR in the Mumbai region, it gets terminated in 5-6 seconds with the following validation error: "Terminated with errors The EC2 Security Groups [sg-XXXXXXXXXX] contain one or more ingress rules to ports other than [22] which allow public access." These are the default security groups created for AWS EMR in the Mumbai region. How do we overcome this? | AWS EMR terminated with validation error - security group error |
Make sure that your Lambda function's execution role has sufficient permissions to write logs to CloudWatch, and that the log group resource in the IAM policy includes your function's name. In the IAM console, review and edit the IAM policy for the execution role to make sure that:
The write actions CreateLogGroup and CreateLogStream are allowed. You should attach these policies to the IAM role of the Lambda function. Note: If you don't need custom permissions for your function, you can attach the managed policy AWSLambdaBasicExecutionRole, which allows Lambda to write logs to CloudWatch.
The AWS Region specified in the Amazon Resource Name (ARN) is the same as your Lambda function's Region.
The log-group resource includes your Lambda function name. For example, if your function is named myLambdaFunction, the log-group is /aws/lambda/myLambdaFunction.
Here is an example of the permissions in JSON format:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "arn:aws:logs:region:accountId:*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
" arn:aws:logs:region:accountId:log-group:/aws/lambda/functionName:*"
]
}
]
} | I'm getting this error message when trying to see the log file in AWS CloudWatch for my AWS Lambda function:
An error occurred while describing log streams.
The specified log group does not exist.
Log group does not exist
The specific log group: /aws/lambda/xxxxx does not exist in this account or region.
By the way, I'm using the Singapore region. | AWS CloudWatch - Log group does not exist |
I have almost exactly the same use case you have. I have written and released a simple library that can do what you want, creating a presigned URL to connect to AWS WebSocket API Gateway secured by IAM: https://github.com/MohammedNoureldin/aws_url_signer
Basically you will get your signed URL just like this:
String getSignedWebSocketUrl(
{String apiId,
String region,
String stage,
String accessKey,
String secretKey,
String sessionToken}) | We are building a mobile app using Flutter that connects to WebSocket (AWS).
The user will SignUp / SignIn to the app using AWS Amplify Auth. After authentication is successful the app will establish a connection to WebSocket on AWS.
In order to make our connection to WebSocket secure, we want to use AWS Signer v4 to sign the URL. But we couldn't find any support on AWS Signer v4 for Flutter.
Kindly provide us help so that we can use Signer v4 with Flutter, something like a plugin etc. Thank you | AWS Signer support for Flutter |
EB email notifications are managed by SNS. Thus, to add extra emails or notification subscribers, you can add them using the SNS console. If you create your email notification, a topic will be automatically created in the SNS console with the name ElasticBeanstalkNotifications-Environment-<your-env>. Once you open up the topic you will have the option to Create subscription, where you can add new emails, SQS queues, HTTP endpoints, Lambda functions and more. | How do I set up notifications for more than one email in Elastic Beanstalk? I've tried and got an error. One email works fine. | multiple email addresses as notification subscribers to aws elastic beanstalk |
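A minimal boto3 sketch of adding an extra email subscriber to the topic Elastic Beanstalk created; the same thing can be done by hand in the SNS console. The topic ARN and address are placeholders:

```python
import boto3

sns = boto3.client("sns")

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:ElasticBeanstalkNotifications-Environment-my-env",
    Protocol="email",
    Endpoint="second.person@example.com",  # the recipient must confirm the subscription email
)
```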
AWS provides static IPs via NAT. The static IP is part of the AWS network, and traffic to that IP will be routed to the private IP of your instance within the AWS network. | I have created an AWS Lightsail instance and attached a public IP address, so currently the machine has 2 IPs, public and private. The command ifconfig shows only the private IP, and the public one is nowhere to be seen. Hence I am unable to bind my golang-based application to the public IP address. Am I missing anything here? I have spent 1 hour chasing this and could not find any article that relates to this; need help. | AWS Lightsail - Static IP not visible in ifconfig |
When you add replication to your bucket, the objects that existed before will not be copied to the other bucket. Replication will also not replicate objects created with server-side encryption using customer-provided (SSE-C) encryption keys. For more detail you should read this. So in this case, you can either use AWS S3 sync or the AWS CLI's cp command (which will be slower), or use Snowball Edge (which you can't do as per the description):
aws s3 cp --recursive s3://<source-bucket> s3://<destination-bucket>
aws s3 sync s3://<source-bucket> s3://<destination-bucket>
AWS S3 sync is good for small objects/buckets, but as you mentioned you have petabytes of data, so I will offer two solutions:
S3 Batch Operations: You can use Amazon S3 batch operations to copy multiple objects with a single request.
S3DistCp: The S3DistCp operation on Amazon EMR can perform parallel copying of large volumes of objects across Amazon S3 buckets.
Once you have copied your data to another S3 bucket you can enable replication, which will replicate all new objects.
Notes: These solutions can be expensive, so make sure you read about the cost of using these operations. | I have ~1.5PB of data in S3 us-west-1. I want to copy this to the us-east-2 region. Should I use cross-region replication or S3 sync? And what are the pros and cons of the two options? I researched a few AWS threads and found that they describe each one in great detail (e.g. https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-transfer-between-buckets/ and https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-migrate-region/), without explaining the difference between the two. Please note that our security policies don't allow Snowball Edge. Can someone please help me? | S3 Sync vs. Cross-region Replication |
You will need to create a quick script and use the AWS CLI for this. The script will first list all the buckets you have in the account:
aws s3 ls
Then save that list and loop over the buckets using this command, which will output the policy as a JSON file:
aws s3api get-bucket-policy --bucket mybucket --query Policy --output text > policy.json
You can then modify the policy.json file as needed. Finally, you can apply this modified policy back to the S3 bucket by running:
aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json
Source (a boto3 sketch of such a loop follows this entry) | I have 400+ buckets in my AWS account, some of which can be accessed by users in the user groups dev-user-group & prod-user-group. A few S3 buckets' policies are something like this:
"aws:arn": [
"arn:aws:sts::123XXXXX43:assumed-role/prod-user-group/*"
]Now, we would like to change it to the following"aws:arn": [
"arn:aws:sts::123XXXXX43:assumed-role/dev-eid/*"
"arn:aws:sts::123XXXXX43:assumed-role/dev-p-eid/*"
"arn:aws:sts::123XXXXX43:assumed-role/prod-eid/*"
"arn:aws:sts::123XXXXX43:assumed-role/prod-p-eid/*"
]Few buckets have only any one of accesses & few don't have any access.
We would like to automate the process for updating the bucket policies using a script such that the script need to check ifdev-user-group&prod-user-groupis defined in the bucket policies. If so, it should remove them & add new policies.I hope I conveyed better. Kindly give me suggestions on this. | List S3 buckets by bucket policies |
There is no "expired event"...So you'd get the actual delete event at some point within 48 hours of the expiration..You can tell the delete was done by AWS due to a TTL expiring by looking forRecords[<index>].userIdentity.type
"Service"
Records[<index>].userIdentity.principalId
"dynamodb.amazonaws.com" | We want to use AWS Dynamodb Streams to manage a subscription renewal service as outlined in the documentation herehttps://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-streams.html.AWS also states that the TTL actual deletion can take up to 48 hours.https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.htmlIf we set a TTL on a record in Dynamodb for 30 minutes would we get the expired event after 30 minutes or would it be 30 minutes plus up to 48 hours for the actual deletion event? | Does AWS Dynamodb TTL Stream fire separate expired versus deleted events? |
Yes! Now you can, with Logs Insights :)
First... you need to have the new UI, or otherwise go to the "Logs Insights" service... haha
CloudWatch -> CloudWatch Logs -> Log groups -> [your service logs]
With the new UI you can see this button (or go to Logs Insights via the AWS console search). Now you can see a box for queries (it's like SQL) and the time range in which you will search.
Now, in your case, you need this query (tell me if you need to filter anything else):
fields @message
| sort @timestamp desc
| filter @message like /4{1}[0-9]{1}[0-9]{1}/I see your logs and you have spaces between your status code and I think this is the bestfields @message
| sort @timestamp desc
| filter @message like / 4{1}[0-9]{1}[0-9]{1} /
And that's all. Now run the query and you will see only logs that contain [4xx] status codes.
I hope that solves your problem.
NOTE: if you go directly from the console search to Logs Insights, you need to select the service logs that you want to scan with the query, in the combo box at the top of the query box. | I have several crawlers that crawl multiple sites and store the contents in a database. The logs from the program are stored in CloudWatch Logs. If the crawlers successfully pull back content it looks similar to the below:
HTTP GET: 200 - https://www.thecheyennepost.com/news/national/r
HTTP GET: 200 - https://www.thecheyennepost.com/news/f-e-warren-hous
The issue I'm dealing with is identifying when 400 errors pop up. Below is an example:
HTTP GET: 429 - https://www.livingstonparishnews.com/search/?l=25&sort=
HTTP GET: 429 - https://www.livingstonparishnews.com/search/?l=25&sort=rele
HTTP GET: 429 - https://www.ktbs.com/search/?l=25&s=start_time&sd=desc&f=
I tried using status_code=4* but that didn't do anything.
I just want to be able to filter any and all 400 errors. Any help that can be provided would be greatly appreciated. | AWS CloudWatch Logs filter pattern issues |
The only event type supported by CloudWatch Events (CWE) for CloudWatch Logs (CWL) is: AWS API Call via CloudTrail. Therefore, you can catch the events of interest once you have enabled a CloudTrail (CT) trail. Once enabled, API events will be available in CWE. Then, you would have to create a CWE rule which captures the CreateLogGroup API call. The rule would trigger your lambda function. An example CWE rule could be:
{
"source": [
"aws.logs"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"logs.amazonaws.com"
],
"eventName": [
"CreateLogGroup"
]
}
} | How can I trigger a lambda when a log group is created in CloudWatch? I am thinking the easiest way to do this is to create a CloudWatch rule to send the CloudTrail event to a lambda. Is that reasonable to do? If yes, how can I filter out other events and only trigger the lambda when a log group is created? | How can I trigger a lambda when a log group is created in cloudwatch? |
The user interface changed overnight. Now you have to use the Deploy button: | everyone!
I can't see the save button but only see the test button in my Amazon Lambda function.
I don't know what the reason for it is.
I searched on Google, but can't find an answer.
The Save button was shown before yesterday, but today I can't see it.
I hope anyone can help me.
Thanks. | I can't see the save button but see the test button only in amazon lambda function |
You can refer to https://github.com/aws/aws-cdk/issues/4067, at the last post. You can define EIP allocations and then assign them to the NAT Gateway during CDK deployment. Of course, you must manually create the EIP first. | After struggling with this for several hours, here is my question. I am using CDK to create a VPC in the most simple form currently:
let vpc = new Vpc(this, "myVpc", {maxAzs: 1});
This gets me a public subnet and a private one with all the gateways (internet and NAT). My NAT Gateway got a public EIP from the AWS pool. Of course, when I destroy the stack and re-create it, I will get a new EIP from AWS, but THIS I don't want. What I want is: create an Elastic IP outside of my CDK project (manually via CLI or AWS Console) and attach it to my NAT GW, so that even after destroying the stack, I can re-attach my (external) EIP to the "new" NAT GW. So there must be a way to not have the AWS::EC2::NatGateway created automatically by the VPC but manually, with the proper EIP association, and then attach it to the VPC / public subnet. Pretty much the same way I can explicitly define subnets and associate them with the VPC instead of relying on CDK construct magic. | associate custom Elastic IP to NAT Gateway with AWS CDK |
I had to append a / at the end of the folder name. Otherwise it gave an invalid key error.
As a result the S3 object key, or S3 folder (in my case it was an S3 folder), became <folder-name>/
Note the trailing / | [Container] 2020/09/03 09:27:34 Waiting for agent ping
[Container] 2020/09/03 09:27:36 Waiting for DOWNLOAD_SOURCE
NoSuchKey: The specified key does not exist.
status code: 404, request id: , host id: = for primary source
Source provider: Amazon S3
Bucket: <bucket_name>
S3 object key: <folder_name> | NoSuchKey: The specified key does not exist AWS S3 Codebuild |
When a Lambda resides in the AWS network it is able to use the internet to connect to these services; however, once it joins your VPC, outbound internet traffic is also routed through your VPC. As there is presumably no outbound internet connectivity, the Lambda is unable to reach the internet. If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet access or a public IP address. For your Lambda to be able to communicate with other AWS services when it resides within a VPC, one of the following must be in place. The first option is that you create either a NAT gateway or a NAT instance, and then add this to the route table of the subnet in which your Lambda resides. To be clear, this subnet should be a private subnet only, as utilizing a NAT for a 0.0.0.0/0 record will stop inbound traffic to instances which have a public IP address and share the same subnet. The second option is that you utilize VPC endpoints for the services; by doing this, any traffic that would have previously traversed the public internet will instead use a private connection directly to the AWS service itself. Please note that not every AWS service is covered by this yet. | My AWS Lambda function code works fine when I run it outside of an Amazon Virtual Private Cloud (Amazon VPC). However, when I configure my function to connect to a VPC, I get function timeout errors. How do I fix these?
def get_db_connection_config():
# Create a Secrets Manager client.
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
# In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
# See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
# We rethrow the exception by default.
try:
logger.info("Retrieving MySQL database configuration...")
get_secret_value_response = client.get_secret_value(
SecretId=secret_name
)
except ClientError as error:
logger.error(error)
sys.exit()
else:
# Decrypts secret using the associated KMS CMK.
# Depending on whether the secret is a string or binary, one of these fields will be populated.
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
return json.loads(secret)
else:
return base64.b64decode(get_secret_value_response['SecretBinary']) | Trying to connect to Boto3 Client from AWS Lambda and Receiving Timeout |
Here is the solution I ended up using. I'm using a single lambda with a trigger for each file. For each trigger, the lambda adds the name of the file that triggered it to a file stored on S3. It then checks if that file has all the required file names to continue. If it does, it kicks off the step function and clears out the list. I am capturing the name of the file that triggers the lambda from the event object that is passed in by default via lambda_function(event, context). So on a given day, the same lambda gets triggered multiple times, recording the filename each time until it has collected them all. | I have a step function that runs a series of lambda functions. I would like to set up a lambda that will trigger this each day when ALL of the 3 input files have been updated. I know you can trigger off of S3 events, but how can I have the requirement that all three have to have been updated? That is, I don't want the trigger to fire on any of the three events ('or' condition), I want it to trigger when all three have been updated ('and' condition). How would I approach setting up such a trigger event? Thanks | Check for multiple s3 events to trigger lambda when all are met? |
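A hedged sketch of that pattern: each S3 event appends the triggering key to a small state file in S3, and once all required files have been seen the Step Function starts and the list resets. Bucket, key, file names, and the state machine ARN are placeholders, and concurrent invocations could race on the state file:

```python
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
sfn = boto3.client("stepfunctions")

STATE_BUCKET, STATE_KEY = "my-state-bucket", "seen-files.json"   # placeholders
REQUIRED = {"input_a.csv", "input_b.csv", "input_c.csv"}         # placeholders
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:daily-run"

def handler(event, context):
    try:
        body = s3.get_object(Bucket=STATE_BUCKET, Key=STATE_KEY)["Body"].read()
        seen = set(json.loads(body))
    except ClientError:
        seen = set()  # no state file yet

    for record in event["Records"]:
        seen.add(record["s3"]["object"]["key"])

    if REQUIRED.issubset(seen):
        sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN)
        seen = set()  # clear the list for the next day

    s3.put_object(Bucket=STATE_BUCKET, Key=STATE_KEY, Body=json.dumps(sorted(seen)))
```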
One reason for using aws:SourceAccount is the mitigation of the Confused Deputy Problem. Specifically, in the context of S3, it is used so that S3 is not considered the confused deputy. | In the documentation for Resource-Based Policies for Lambda, it mentions that it's best practice to include the source-account in case, for example, you specified a source-arn which referred to an S3 bucket which does not have the account id in the arn, so if you were unlucky and somebody deleted your bucket, and another account created a bucket with the same name, they could indirectly access your Lambda function. But then you also have the notion of a Principal, as in one of the examples they have:
"Principal":{"AWS":"arn:aws:iam::210987654321:root"}
What is the difference between Principal & source-account? Do you use the Principal when you want to refine the permissions down to a particular role or user within an account? And if this isn't your situation and you only want to grant access to your Lambda from an entire account, you would use source-account? | Difference between principal and source-account in Lambda Resource-Based Policy |
The error states that credentials are missing, so it is an authentication issue. You can try setting the accessKeyId and secretAccessKey or the credentials fields directly on the SSM constructor. So simply keep your code as is, just make the following change:
// From
const ssm = new AWS.SSM({ region: 'eu-west-1' });
// To
const ssm = new AWS.SSM({
region: 'eu-west-1',
accessKeyId: 'your-access-key',
secretAccessKey: 'your-secret-key'
});
// Or To
const ssm = new AWS.SSM({
region: 'eu-west-1',
credentials: {
accessKeyId: 'your-access-key',
secretAccessKey: 'your-secret-key'
}
}); | I'm getting the following error when running my react app locally which make an api request to AWS via the AWS-SDK:CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1I've tried:I've triedexporting my aws credentials directlyI already have my aws credentials set up in ~/.aws/credentials and I use the CLI everyday with no issueI've tried copying the ~/.aws directory to my project rootI've tried using dotenv and a config fileas suggested in these repliesThis is how I'm making the request:import AWS from 'aws-sdk';
import { useState, useEffect } from 'react';
const ssm = new AWS.SSM({ region: 'eu-west-1' });
export const useFetchParams = (initialValue) => {
const [result, setResult] = useState(initialValue);
useEffect(() => {
const params = {
Path: '/',
MaxResults: '2',
Recursive: true,
WithDecryption: true
};
ssm.getParametersByPath(params, function (err, data) {
if (err) console.log(err, err.stack);
else setResult(data);
});
}, []);
return result;
};
export default useFetchParams;
Any help would be massively appreciated. Thanks. | AWS creds error when making calls from local react app "Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1" |
I had the same error. I got rid of it by providing my AWS credentials for programmatic access (AWS Access Key ID, AWS Secret Access Key):
$ aws configure
The next time I used eksctl it just didn't try to authenticate on its own and the command passed. | Unable to create an AWS EKS cluster with eksctl from a Windows 10 PC. Here is the command which I'm executing:
eksctl create cluster --name revit --version 1.17 --region ap-southeast-2 --fargate
Version of eksctl: 0.25.0
AWS CLI Version: aws-cli/2.0.38 Python/3.7.7 Windows/10 exe/AMD64
Error on executing the create cluster command:
2020-08-08T19:05:35+10:00 [ℹ] eksctl version 0.25.0
2020-08-08T19:05:35+10:00 [ℹ] using region ap-southeast-2
2020-08-08T19:05:35+10:00 [!] retryable error (RequestError: send request failed
caused by: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.) from ec2metadata/GetToken - will retry after delay of 54.121635ms
2020-08-08T19:05:35+10:00 [!] retryable error (RequestError: send request failed
caused by: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.) from ec2metadata/GetToken - will retry after delay of 86.006168ms | Unable to create AWS EKS cluster with eksctl |
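For reference, a minimal sketch of the flow the answer describes (the key values below are placeholders, not real credentials):
# Configure static credentials so eksctl stops probing the EC2 instance
# metadata endpoint (169.254.169.254), which does not exist on a local PC.
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX
aws configure set aws_secret_access_key wJalrXUtnFEMIxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws configure set region ap-southeast-2
# Re-run the cluster creation once credentials are in place.
eksctl create cluster --name revit --version 1.17 --region ap-southeast-2 --fargate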
You are missing several components. Most importantly, there is no link between your aws_apigatewayv2_route and aws_apigatewayv2_integration. The link is established using the target argument.
Similarly, there is no link between aws_apigatewayv2_stage and aws_apigatewayv2_deployment.
You can have a look at the following version of the code:
resource "aws_apigatewayv2_deployment" "example" {
api_id = aws_apigatewayv2_api._.id
description = "Example deployment"
lifecycle {
create_before_destroy = true
}
depends_on = [
aws_apigatewayv2_route.apigateway_route
]
}
resource "aws_apigatewayv2_api" "_" {
name = "example"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_route" "apigateway_route" {
api_id = aws_apigatewayv2_api._.id
route_key = "GET /sitemap.xml"
target = "integrations/${aws_apigatewayv2_integration.apigateway_intergration.id}"
}
resource "aws_apigatewayv2_integration" "apigateway_intergration" {
api_id = aws_apigatewayv2_api._.id
integration_type = "HTTP_PROXY"
connection_type = "INTERNET"
description = "Gateway intergration for EC2"
integration_method = "ANY"
integration_uri = "https://www.google.com"
passthrough_behavior = "WHEN_NO_MATCH"
}
resource "aws_apigatewayv2_stage" "apigateway_stage" {
api_id = aws_apigatewayv2_api._.id
name = "example-stage"
deployment_id = aws_apigatewayv2_deployment.example.id
}
The above code correctly creates the integration:
I am trying to use Terraform to deploy API Gateway that routes traffic into my domain, google.com as an example.
I use aws_apigatewayv2_api for creating the HTTP_PROXY type for integrations. I went through the docs but still cannot find the way to attach an integration to a route GET /sitemap.xml. How do I deal with this?
resource "aws_api_gateway_deployment" "_" {
rest_api_id = aws_api_gateway_rest_api._.id
stage_name = ""
lifecycle {
create_before_destroy = true
}
}
resource "aws_apigatewayv2_api" "_" {
name = "example"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_route" "apigateway_route" {
api_id = aws_apigatewayv2_api._.id
route_key = "GET /sitemap.xml"
}
resource "aws_apigatewayv2_integration" "apigateway_intergration" {
api_id = aws_apigatewayv2_api._.id
integration_type = "HTTP_PROXY"
connection_type = "INTERNET"
description = "Gateway intergration for EC2"
integration_method = "ANY"
integration_uri = "https://www.google.com"
passthrough_behavior = "WHEN_NO_MATCH"
}
# resource "aws_apigatewayv2_deployment" "apigateway_deployment" {
# api_id = aws_apigatewayv2_route.apigateway_route.api_id
# description = "Example deployment"
# lifecycle {
# create_before_destroy = true
# }
# }
resource "aws_apigatewayv2_stage" "apigateway_stage" {
api_id = aws_apigatewayv2_api._.id
name = "example-stage"
} | Terraform cannot attach created integrations into routes for API Gateway |
According to this post and the VTL user guide page, the way concatenation is done is just by "putting items together". From the VTL guide:
A common question that developers ask is: How do I do String concatenation? Is there any analogue to the '+' operator in Java?
To do concatenation of references in VTL, you just have to 'put them together'. The context of where you want to put them together does matter, so we will illustrate with some examples.
In the regular 'schmoo' of a template (when you are mixing it in with regular content):
#set( $size = "Big" )
#set( $name = "Ben" )
The clock is $size$name.
So I guess that's the only way.
I want to construct a query string in the API Gateway mapping template.
I have something like this:
#foreach($entry in $entries)
#set($count = $foreach.count)
#set($entriesQueryString = "$!{entriesQueryString}Id=${count}&"
#end
The idea is to append a new string as long as there are entries provided in the input. Is my code valid?
Any other ways to do append? | append for string variable in Apache Velocity Template Language |
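As a sketch of how the "just put them together" advice applies to the loop in the question (variable names are taken from the question; initialising the variable and the trailing & handling are assumptions):
#set($entriesQueryString = "")
#foreach($entry in $entries)
  #set($count = $foreach.count)
  ## concatenation in VTL is just juxtaposition inside the string
  #set($entriesQueryString = "$!{entriesQueryString}Id=${count}&")
#end
## $entriesQueryString now holds e.g. "Id=1&Id=2&Id=3&"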
From the docs: The iterator object (setting in the example above) has two attributes:
key is the map key or list element index for the current element. If the for_each expression produces a set value then key is identical to value and should not be used.
value is the value of the current element.
Based on that, in the content block you should use lb.value["arn"], as per the example. Thus, the following could be tried:
content {
endpoint_id = lb.value["arn"]
weight = 100
}
I am trying to do this, basically:
module "california" {
source = "./themodule"
# ...
}
module "oregon" {
source = "./themodule"
# ...
}
resource "aws_globalaccelerator_endpoint_group" "world" {
# ...
dynamic "endpoint_configuration" {
for_each = [
module.california.lb,
module.oregon.lb
]
iterator = lb
content {
endpoint_id = lb.arn
weight = 100
}
}
}
# themodule/main.tf
resource "aws_lb" "lb" {
# ...
}
output "lb" {
value = aws_lb.lb
}
I am outputting lb from a submodule in Terraform, and trying to use that in the parent module in a for_each array, with a custom iterator name. It is giving me this error: This object does not have an attribute named "arn". But it DOES have that attribute, it's an aws_lb. What am I doing wrong in the usage of this for_each and module setup, and how do I fix it? Thank you very much!
https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/
If I change it to this it seems to work:
resource "aws_globalaccelerator_endpoint_group" "world" {
listener_arn = aws_globalaccelerator_listener.world.id
endpoint_configuration {
endpoint_id = module.california.lb.arn
weight = 100
}
} | How to use for_each with iterator in dynamic block in Terraform? |
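Putting the answer's fix back into the resource from the question, a sketch of the corrected dynamic block could look like this (module and resource names are the ones from the question):
resource "aws_globalaccelerator_endpoint_group" "world" {
  # ...
  dynamic "endpoint_configuration" {
    for_each = [
      module.california.lb,
      module.oregon.lb
    ]
    iterator = lb
    content {
      # the iterator exposes .key and .value; the aws_lb object is in .value
      endpoint_id = lb.value["arn"]
      weight      = 100
    }
  }
}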
If you want to connect to a host, the packets must always carry the source IP address the traffic originated from in their headers, so it is not possible to hide a return IP address from the destination.
If your instance has a public IP address in a subnet with an internet gateway, then outbound traffic will be using the public IPv4 address of the instance.
If the instance can be made private, then assuming it is communicating with a destination on the public internet, it will use either a NAT Gateway or NAT instance for outbound communication. The destination will see the source as the EIP of the NAT when it connects (not the instance).
Other approaches that could be taken are:
- Forward all traffic in the route table for route 0.0.0.0/0 (and ::/0 if IPv6) to another EC2 host or to an on-premise resource via VPN or Direct Connect. This would require the connecting appliance to support forwarding internet traffic.
- Forward traffic to a proxy using network configuration on your host. You will be entirely responsible for setting this up and managing it.
I've launched an aws linux ec2 instance and I'm running a node server over it.
I'm requesting a resource from the server as follows:
const data = await request('https://www.example.com/data');
Is it possible to hide the ip of my aws ec2 instance from example.com? Please help me..!
Is it possible to hide ip of aws ec2 instance while requesting resources from server?
You are including the file App.css but there is only app.css. Fix the case of the first letter.
Some operating systems (or better said, some file systems) are case-sensitive. You have to write the name of the file correctly.
If you are using a versioning system (e.g. git), make sure you have renamed the file in the versioning system. On a case-insensitive file system a change in case won't be detected.
I am trying to deploy my react app to aws-amplify but I keep getting this error in the build process in aws. When I run the app in production with "yarn run build" it runs fine locally, but when I push my changes to aws I get the following error:
"Failed to compile.
2020-07-26T15:51:52.356Z [INFO]: ./src/index.js
Cannot find file './App' in './src'.":
(more detailed aws error image attached)
My folder structure is as follows:
- public
- favicon.ico
- index.html
- logo192.png
- logo512.png
- manifest.json
- robots.txt
- src
- assets
- components
- Contact.js
- Home.js
- Navigation.js
- NotFound.js
- Project.js
- Projects.js
- App.css
- App.js
- App.test.js
- contact.css
- home.css
- index.css
- index.js
- logo.svg
- projects.css
- serviceWorker.js
- setupTest.js
- .gitignore
- package.json
- README.md
- yarn.lock
(screenshot of my index.js and the folder structure)
Why am I getting this Error: "Cannot find file './App' in './src'.":?
Well, to answer your questions:
Yes, you can distinguish the service metrics by using labels. Just use something like this in the configMap of prometheus:
static_configs:
- targets:
- "<yourfirstservicename>.<namespace>.svc.cluster.local:<yourservice1portnumber>"
labels:
instance: 'service1'
- targets:
- "<yourservice2name>.<namespace>.svc.cluster.local:<yourservice2port>"
labels:
instance: 'service2'
Yes, you have to do that port-forward, but if you are planning to use Grafana for visualization then newer Grafana versions provide built-in query functionality.
I hope this will help!
I am running my services on EKS clusters. In order to collect the application metrics [API response times, status and number of calls], I came across Prometheus. These are the steps that I think need to be done:
- Cluster role, Service account and role binding: this will allow my prometheus service to talk to the cluster nodes, pods and services [defined in the resources section].
- Configmap: this allows the scraping process and defines different roles.
- Service and ingress: to establish the endpoints [e.g.: 9090] and route the traffic from the internet.
I came across prometheus using helm, which describes how we can make use of helm's predefined prometheus charts in order to get the raw metrics from kubernetes.
I followed the steps:
kubectl create namespace prometheus
helm install prometheus stable/prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
kubectl get pods -n prometheus
I can see the pods running with that namespace. Now, I have two questions:
1. I am having multiple services (for example, service A and service B) running on the cluster. So, how can I distinguish the metrics on Prometheus?
2. Do I need to run kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090 every time to see the results? I see targetPort is defined as 9090, so why do I need to run the command? Can I just use values.yaml instead?
Prometheus : Distinguish Application metrics
You can use an outputs.tf file to define the outputs of a terraform module. Your output will have the variable name, such as the content below.
output "vpc_id" {
value = "${aws_vpc.default.id}"
}
These can then be referenced within your prd/instances.tf by referencing the module name combined with the output name you defined in your file.
For example, if you have a module named vpc which uses this module, you could then use the output similar to below.
module "vpc" {
......
}
resource "aws_security_group" "my_sg" {
vpc_id = module.vpc.vpc_id
}
Is there a way of using output values of a module that is located in another folder? Imagine the following environment:
tm-project/
├── lambda
│ └── vpctm-manager.js
├── networking
│ ├── init.tf
│ ├── terraform.tfvars
│ ├── variables.tf
│ └── vpc-tst.tf
├── prd
│ ├── init.tf
│ ├── instances.tf
│ ├── terraform.tfvars
│ └── variables.tf
└── security
└── init.tfI want to create EC2 instances and place them in a subnet that is declared innetworkingfolder. So, I was wondering if by any chance I could access the outputs of the module I used innetworking/vpc-tst.tfas the inputs of myprd/instances.tf.Thanks in advances. | Using outputs from other tf files in terraform |
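A minimal sketch of how the answer's advice could look from prd/instances.tf; the relative module path and the subnet_id output name are assumptions, not taken from the question:
# prd/instances.tf (sketch)
module "networking" {
  source = "../networking"
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"   # placeholder
  instance_type = "t3.micro"
  # consume an output declared in networking, e.g. output "subnet_id"
  subnet_id     = module.networking.subnet_id
}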
Using the @aws_subscribe directive, you can declare the mutations a subscription is subscribed to. For example:
type Mutation {
createPost(): Post
editPost(): Post
deletePost(): Post
}
type Subscription {
updatedPost(): Post
@aws_subscribe(mutations: ["createPost","editPost","deletePost"])
}
I am reading the realtime data documentation but do not understand what the purpose of @aws_subscribe is. Could somebody explain this in simple English? Examples of how subscriptions work with/without the annotation will help a lot.
What is the purpose of @aws_subscribe annotation?
You would need some sort of program to call the Amazon S3 API to retrieve the object. For example, a PowerShell script (using AWS Tools for Windows PowerShell) or a Python script that uses the AWS SDK.
You could alternatively generate an Amazon S3 pre-signed URL, which would allow a private object to be downloaded from Amazon S3 via a normal HTTPS call (eg curl). This can be done easily using the AWS SDK for Python, or you could code it yourself without using libraries (it's a bit more complex).
In all examples above, you would need to provide the script/program with a set of IAM Credentials for authenticating with AWS.
Is it possible to download a file from AWS s3 without AWS cli? In my production server I would need to download a config file which is in an S3 bucket.
I was thinking of having Amazon Systems Manager run a script that would download the config (YAML files) from S3. But we do not want to install AWS cli on the production machines. How can I go about this?
How to download a file using from s3 private bucket without AWS cli
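A minimal sketch of the pre-signed URL approach mentioned above, using boto3 (bucket and key names are placeholders):
import boto3

s3 = boto3.client("s3")
# Generate a time-limited HTTPS URL for a private object.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-config-bucket", "Key": "config/app.yaml"},
    ExpiresIn=3600,  # seconds
)
print(url)  # this URL can then be fetched with curl/wget on the production host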
This is because of recent changes by AWS regarding S3.
When using virtual hosted–style buckets with SSL, the SSL wild-card
certificate only matches buckets that do not contain dots ("."). To
work around this, use HTTP or write your own certificate verification
logic. For more information, see Amazon S3 Path Deprecation Plan.amazon-s3-path-deprecation-plan-the-rest-of-the-storyCreate a bucket without a dot or use the path style URL or you checkVirtualHostingCustomURLs.S3 support two types of URL to access Object.Virtual hosted style accesshttps://bucket-name.s3.Region.amazonaws.com/key namePath-Style Requestshttps://s3.Region.amazonaws.com/bucket-name/key nameImportantBuckets created after September 30, 2020, will support onlyvirtual
hosted-style requests. Path-style requests will continue to be
supported for buckets created on or before this date. For more
information, see Amazon S3 Path Deprecation Plan – The Rest of the
Story.
S3 VirtualHosting
I am building a small REST API service to store and retrieve photos. For that, I am using S3 as follows:
public String upload(InputStream uploadedInputStream,
Map<String, String> metadata, String group, String filename) {
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(amazonS3)
.build();
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentType(metadata.get(Configuration.CONTENT_TYPE_METADATA_KEY));
// TODO: 26/06/20 Add content-type to metadata
String filepath = group + "/" + filename;
s3transferManager.upload(new PutObjectRequest(
configuration.getProperty("aws.s3.bucket"),
filepath,
uploadedInputStream,
objectMetadata)).waitForUploadResult();
return amazonS3.getUrl(configuration.getProperty("aws.s3.bucket"), filepath).toString();
}urlreturned by the function looks likehttps://photos.tarkshala.com.s3.ap-south-1.amazonaws.com/default-group/1593911534320%230. When accessed it shows up like thisWhen I open it using the object url(https://s3.ap-south-1.amazonaws.com/photos.tarkshala.com/default-group/1593911534320%230) given in AWS S3 console it shows up fine.WhygetUrlmethod not returning the second url or is there a way to get second method/api that does it? | S3 object url is not secure(ssl) when opened in browser |
It is a DNS issue.
Please try using Google's DNS server 8.8.8.8 and check whether the host name then resolves to a working host.
I am trying to reach my database server from pgadmin but I keep getting this error:
Unable to connect to server:
could not translate host name "dbname.xxxxx.eu-west-3.rds.amazonaws.com" to address: Unknown hostI followed these exact instructionshttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.htmlI could not find any solution to this specific problem | Connect to Postgresql server on AWS RDS from PGADMIN |
SQS is built to be a single producer to consumer for its queues so the intended functionality is happening.However, there is a solution available for this exact scenario but it will require you to update your architecture.The solution is to use afanout architecture.You would instead publish to an SNS topic, which has your SQS queue subscribed to it. Then create additional SQS queues for parallel channels (1 per each unique Lambda).Add each Lambda function as consumer of its own SQS queue, each with their own processing. | In my application we are using a SQS to queue messages to be processed by another module. SQS doesn't send notification that a message has come and I don't want to make my application to go to check on it every "X times". So I'm trying to use a lambda trigger to make a http request to my module and make it pool messages from SQS when a message got there.The problem is SQS deletes the sent messages if there is no error on the lambda function (as far I know). Forcing an error just to keep the messages on the pool can't be right. So I need a way to keep messages on the SQS after the lambda was triggered.Maybe I should move the code that process the message to the lambda function, but I'm looking for ways to keep it there.Anyone could give some guidance?Thanks in advance | How keep messages on SQS after triggering lambda |
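A rough sketch of the fan-out wiring described above using the AWS CLI (topic and queue names, the region and account id are placeholders; each additional consumer gets its own queue subscribed the same way):
# Create the topic the application publishes to.
aws sns create-topic --name orders-events

# Create one queue per consumer and subscribe it to the topic.
aws sqs create-queue --queue-name orders-consumer-1
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:111122223333:orders-events \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:111122223333:orders-consumer-1
# (the queue also needs an access policy allowing SNS to send messages to it)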
Unfortunately, at this time there is no way to connect to private AWS resources; there are 2 types of Origin:
- S3 - A public S3 bucket, with communication hardened through the usage of an Origin Access Identity.
- Custom Domain - Forward to a publicly resolvable and connectable domain name. This is the option you would need to use.
Just because your load balancer is public, you can still enhance your security to reduce the threat of an unknown source accessing your load balancer.
You could add a custom header to your requests containing a secret. Then, if you use an application load balancer, attach a WAF with a default rule to block all requests. Finally, add an allow rule to the WAF to allow requests where the header has the value of your secret.
I want to connect Cloudfront to an internal load balancer which is connected to my application. Inbound traffic comes from a third party application so I cannot only use the internal load balancer.
The process would be:third party app <-> cloudfront <-> internal load balacner <-> my applicationHowever, I am not sure if Cloudfront can access the load balancer in my VPC.
Any ideas how that would that be setup? | Can Cloudfront access resources in a VPC? |
Sadly, you can't mount S3 objects as a filesystem to your instance, nor access them directly. They have to be downloaded first.
However, you can use third-party tools which make an S3 bucket appear to you and your application as a filesystem. One popular tool for that is s3fs-fuse, which: allows Linux and macOS to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, allowing use of other tools like AWS CLI.
For that to work you would have to set up the user_data of your instance to do it automatically during the instance launch.
I have an archive that contains some binary on S3 which I need to put on EC2 during the provisioning.
At the moment, I am downloading the archive to the machine (host provisioned) and uploading it to the machine which I need to provision.
Or how can I get a link from aws_s3_bucket_object? Or is there a way to mount an s3 object as a file into an ec2 instance with terraform?
data "aws_s3_bucket_object" "release" {
bucket = data.aws_s3_bucket.artifacts.id
key = "release.tgz"
}
resource "aws_instance" "engine" {
ami = data.aws_ami.server.id
instance_type = var.aws_instance_type
...
} | Terraform: how to access S3 bucket object from EC2 instance? |
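A minimal user_data sketch for the s3fs approach mentioned in the answer (the bucket name, mount point and use of an instance role are assumptions):
#!/bin/bash
# Install s3fs-fuse (package name varies by distro) and mount the bucket.
yum install -y s3fs-fuse
mkdir -p /mnt/artifacts
# -o iam_role=auto picks up credentials from the instance profile.
s3fs my-artifact-bucket /mnt/artifacts -o iam_role=auto -o allow_other
# The archive is then available as a normal file, e.g. /mnt/artifacts/release.tgz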
As suggested by @Mathias in the comments, I updated the script to include Import-Module -Name AWSPowerShell and it worked like a charm.
I am using New-EC2Tag to create/update the EC2 tags through an Azure Pipeline. I am using the "AWS Tools for Windows PowerShell Script" task and below is the code:
$Tag = New-Object Amazon.EC2.Model.Tag
$Tag.Key = "DesiredInstanceState"
$Tag.Value = "Stopped"
New-EC2Tag -Resource $instanceName -Tag $Tag
When the task runs I get the below error:
2020-06-16T18:40:57.4642775Z ##[error]New-Object : Cannot find type
[Amazon.EC2.Model.Tag]: verify that the assembly containing this type
is loaded. At
C:\1_work_temp\d2378116-3224-4a4a-b92c-61744a291aac.ps1:2 char:8
+ $Tag = New-Object Amazon.EC2.Model.Tag
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidType: (:) [New-Object], PSArgumentException
+ FullyQualifiedErrorId : TypeNotFound,Microsoft.PowerShell.Commands.NewObjectCommand
2020-06-16T18:40:57.4655375Z ##[error]PowerShell exited with code '1'.
I installed the module on both the build server and the server for which I am trying to create the tag, still the same.
Any input would be greatly appreciated. Thanks!
New-Object : Cannot find type [Amazon.EC2.Model.Tag]
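For completeness, a sketch of the fixed script with the import the answer refers to (the instance id is a placeholder):
# Load the AWS Tools module so the Amazon.EC2.Model.Tag type is available.
Import-Module -Name AWSPowerShell

$Tag = New-Object Amazon.EC2.Model.Tag
$Tag.Key = "DesiredInstanceState"
$Tag.Value = "Stopped"
New-EC2Tag -Resource "i-0123456789abcdef0" -Tag $Tag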
Rather than trying to specifically add an authentication layer take a look at adding a VPC endpoint to the VPC(s) that you want to be able to access your S3 bucket.Once you have this in place (and added to the route tables) then you can update the bucket policy for your S3 bucket to add a condition to Deny all traffic not from the source VPC endpoint (aws:sourceVpce).The advantage to this approach is that you will not need to make any changes to the servers themselves.More documentation availablehere. | We store RPMs required for our deployment in a S3 bucket, we are hosting a yum repo on the bucket to make it easier for updating RPMS.Currently, our bucket is accessible publicly over the S3 endpoint (s3.amazonaws.com) and open to the world for access as we currently can’t pull down yum packages from a private S3 repo.We need to harden the security of the Repo bucket to enable authentication based access to S3 over s3.amazonaws.com endpoint. Any suggestion towards it ? Thanks !`{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow Access From QA, dev",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::XXXXXXX:root"
]
},
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::test-repo",
"arn:aws:s3:::test-repo/*"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"X.X.X.X/32"
]
},
"StringNotEquals": {
"aws:sourceVpce": "vpce-xxxxxxx"
}
}
}
]
}
` | Hardening S3 bucket |
Unfortunately to use the SDK you will need to use the entire directory (and all of its individual dependencies).Whilst you could prune individual directories and files you would then be responsible for maintaining this, including new features which may require additional classesBest practice forpulling in the SDKis to use the composer dependency manager.If you were looking for a lighter version you would need to look for someone else's implementation or look at implementing your own library to interact with theAWS S3 API endpoints. | I would like to know is there a simple way to reduce the AWS PHP SDK to use only S3 ? I tried to delete certain directory but there are so many it will take an incredible time, and I have many errors depending on the files I delete (21,6Mo - 2 368 elements) ?! Is it possible to know the architecture of the basic files necessary to use only S3 with the SDK PHP please?I found old posts on this subject but the file structure has changed and they are no longer current.The complete SDK is very heavy with a lot of files that I don't need to keep my sources with an optimization in reasonable size.Thanks for your help | Is there a simple way to reduce the AWS PHP SDK to use only S3? |
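The Composer route the answer recommends is a one-liner; pulling the SDK in this way (instead of vendoring the zip) at least keeps the footprint managed for you:
# Install the official SDK via Composer; autoloading then only loads
# the classes your code actually uses at runtime.
composer require aws/aws-sdk-php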
You have the sh step.
steps {
script {
username = sh (script: "aws secretsmanager get-secret-value --region us-east-2 --secret-id myID | jq -r .SecretString | jq -r .username", returnStdout: true)
password = sh (script: "aws secretsmanager get-secret-value --region us-east-2 --secret-id myID | jq -r .SecretString | jq -r .password", returnStdout: true)
}
}
I have a Jenkins Pipeline that runs Cypress Tests on a Docker Container.
The tests need a username and password to login to the web application. I have saved the username and password in AWS Secrets Manager. I can do that when I execute a shell command as a build stepUSERNAME=$(aws secretsmanager get-secret-value --region us-east-2 --secret-id myID | jq -r .SecretString | jq -r .username)
PASSWORD=$(aws secretsmanager get-secret-value --region us-east-2 --secret-id myID | jq -r .SecretString | jq -r .password)
docker run -e NO_COLOR=1 -v "$PWD":/workdir -w /workdir --entrypoint=cypress 1.dkr.ecr.us-east-2.amazonaws.com/cypress/included:3.8.3 run --env username="$USERNAME",password="$PASSWORD"However, I want to create a Jenkins Pipeline job and do this from JenkinsFile.
How can I read the username and password from AWS Secrets Manager in the Jenkinsfile? | How to use a username and password stored in AWS Secrets Manager in my Jenkins job? |
You could use CloudTrail and CloudWatch Events to enable this workflow.By default S3 API calls are not logged so you'd want to enable that following the instructionshere.Then enable a CloudWatch event rule for the Simple Storage Service where the "GetObject" operation occurs.Have this event invoke a Lambda function that will remove the object.More information availablehere. | I have a usecase where I want to put data into an S3 bucket, for it to read later, by another account. I only want the other account to be able to read the file in S3, and once they have read it, I will then delete the file myself.I have been reading the S3 documentation, and cannot see they cover this usecase: of sending a notification when a file in an S3 bucket is read ?Can anyone help, or suggest an alternative workflow ? I have been looking at AWS SNS and was wondering if that would be a better solution ? | AWS S3 is there a notification on GetObject? |
+50Use SSM but don't include AWS SDK in your Lambda function.Lambda documentationsays that the AWS SDK is included in the Lambda runtime.To test this, I created a new Node.js 12 Lambda function from scratch in the Lambda console & replaced its existing code with this:const AWS = require('aws-sdk');
const SSM = new AWS.SSM();
exports.handler = async() => {
return {
statusCode: 200,
body: await SSM.getParameter({ Name: 'my-param' }).promise(),
};
};This works!Downloading the deployment package of this function from the Lambda console showed that it's just 276 bytes in size. I then deployed this to Lambda@Edge &that worked too! | I'm building Lambda function for CloudFront which checks if request has cookies, if not then forwards to login page. I need to customize response headerlocationbased on environment - for each env that will be different.Initially I tried with environment variables but I got an error during deployment:InvalidLambdaFunctionAssociation: The function cannot have environment variablesSo I switched to useaws-sdkwith SSMssm.getParameterbut after zipping lambda archive withaws-sdkand one more depedency it's around 13 MB. The limit for Lambda@Edge functions is 1 MB.I'm wondering would be the best way to approach that. Generate file with environment variables on each Lambda build and require it inindex.js? | How to approach use of environment variables for Lambda@Edge |
I ended up not using SageMaker for this, but for anybody else having similar problems, I solved this by opening the file using s3fs and writing it to atempfile.NamedTemporaryFile. This gave me a file path that I could pass into eithertorchaudio.loadorlibrosa.core.load. This was also important because I wanted the extra resampling functionality oflibrosa.core.load, but it doesn't accept file-like objects for loading mp3s. | I stored like 300 GB of audio data (mp3/wav mostly) on Amazon S3 and am trying to access it in a SageMaker notebook instance to do some data transformations. I'm trying to use either torchaudio or librosa to load a file as a waveform. torchaudio expects the file path as the input, librosa can either use a file path or file-like object. I tried using s3fs to get the url to the file but torchaudio doesn't recognize it as a file. And apparently SageMaker has problems installing librosa so I can't use that. What should I do? | Trouble opening audio files stored on S3 in SageMaker |
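A sketch of what that workaround can look like (the bucket/key names and the librosa sample rate are assumptions):
import tempfile
import s3fs
import librosa

fs = s3fs.S3FileSystem()
# Copy the S3 object into a named temp file so loaders that want a path work.
with fs.open("my-audio-bucket/clips/example.mp3", "rb") as remote:
    with tempfile.NamedTemporaryFile(suffix=".mp3") as tmp:
        tmp.write(remote.read())
        tmp.flush()
        waveform, sr = librosa.load(tmp.name, sr=16000)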
I got the issue. It was because of the user who created the PR will not be able to approve the PR even though he has permissions. Secondly, the user who will be approving the PR should be given respective approval permissions i.e.Action: UpdatePullRequestStatusin IAM policy. Only then the user will be able to see the approval button. | I'm new to AWS CodeCommit. I'm trying to figure out how to approve a Pull Request on AWS CodeCommit. I know how to create and manage Approval Rules. I know how to approve a PR using CLI. But I couldn't figure out how a user can login to AWS console and Approve a PR. I searched the internet but couldn't the answer. No AWS docs available on this.Can someone help me out here? | How to Approve a Pull Request on AWS CodeCommit? |
I've found the--format shortparameter onaws logs tailis useful. It produces:$ aws logs tail --follow /aws/lambda/do_something_neato --format short
2021-04-30T07:01:40 START RequestId: 8...1 Version: $LATEST
2021-04-30T07:01:40 info: Do something v1.0.0
2021-04-30T07:01:40 info: Doing some things(5 total).
2021-04-30T07:01:40 info: Doing thing 1...
2021-04-30T07:01:40 info: Doing thing 2...That would otherwise be:2021-04-30T07:01:40.683000+00:00 2021/04/30/[$LATEST]ce094d1cc6014a0da8c3df300aae4f36 START RequestId: 8...1 Version: $LATEST
2021-04-30T07:01:40.683000+00:00 2021/04/30/[$LATEST]ce094d1cc6014a0da8c3df300aae4f36 info: Do something v1.0.0
2021-04-30T07:01:40.683000+00:00 2021/04/30/[$LATEST]ce094d1cc6014a0da8c3df300aae4f36 info: Doing some things(5 total).
2021-04-30T07:01:40.683000+00:00 2021/04/30/[$LATEST]ce094d1cc6014a0da8c3df300aae4f36 info: Doing thing 1...
2021-04-30T07:01:40.683000+00:00 2021/04/30/[$LATEST]ce094d1cc6014a0da8c3df300aae4f36 info: Doing thing 2...I initially went down the path of putting together an awk script to format things the way I wanted, only to find that piping the output ofaws logs taildoesn't reliably work. But the above accomplishes what I want. | I'd like to stream Cloudwatch logs from a specific group and log stream.This command does a good job at streaming a group (including all the corresponding streams):aws logs tail /aws/batch/job --follow --since 1dI tried piping the result to grep and also specifying the--filter-patternwith the prefix of the desired stream but it simply returns nothing. | AWS logs tail for specific stream name |
You can use terraform local-execprovisioner.resource "null_resource" "kubectl" {
depends_on = <CLUSTER_IS_READY>
provisioner "local-exec" {
command = "aws eks --region us-west-2 update-kubeconfig --name clustername"
}
} | I am deploying AWS Elastic Kubernetes Cluster on AWS Cloud. While deploying the cluster from my local machine I am facing a small error, even we can't say exactly it is an error.So when I am deploying eks cluster using terraform charts from my local machine, it's deploying all the infra requirement on AWS, but when it has to deploy the cluster it is tying to deploy throughkubectl, but kubectl is not configured with the newly created cluster, then the terraform throwing an error.I easily solve this error by binding kubectl with newly created cluster with the below command, but I don't want to do it manually, is there any way in then that I can configure kubectl with the same.Command -aws eks --region us-west-2 update-kubeconfig --name clusternameFYI - I am using AWS CLI. | Terraform AWS EKS kubectl configuration |
AWS CLI doesn't support presigned PUT URL yet. You can easily generate one using Python Boto3 though. The documentation ishere. If you want a presigned PUT, you just need to letClientMethodparam beput_object. | I've been trying to make use of the presign URL to put a file into my private S3 bucket.
But I kept receiving this error message<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>The presign URL was generated using this commandaws s3 presign s3://<bucket-name>/<file-name> --expires-in 300But when I do a curl PUT request, the error occurredcurl -T "<file-name>" "<Generated presign url>"Did some research saw people talking about adding the 'Content_Type' in the header when requesting for the presign URL, but the aws cli doesn't have that flag to include.Is it possible to do a put request through aws cli?? | Possible to send a PUT request to aws s3 presign url? |
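A minimal boto3 sketch of the presigned PUT described in the answer (bucket and key are placeholders), followed by the matching curl upload:
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=300,
)
print(url)
# then upload with: curl -X PUT -T report.pdf "<printed url>"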
Your question is very well described. Thanks for the little graph you drew to help clarify the overall architecture. After reading your question, here are the things that I want to point out.The link to the CloudFront data transfer price is very outdated. That blog post was written by Jeff Barr in 2010. The latest CloudFront pricing page is linkedhere.The data transfer from CloudFront out to the origin S3 is not free. This is listed in "Regional Data Transfer Out to Origin (per GB)" section. In your region, it's $0.02 per GB. Same thing applies to the data from CloudFront to ALB.You said "within the same region, there should be no charge between an EC2 and an RDS DB Instance". This is not accurate. Only the data transfer between RDS and EC2 Instancesin the same Availability Zoneis free. [ref]Also be aware that S3 has request and object retrieval fees. It will still apply in your architecture.In addition, here is a nice graph made by the folks inlastweekinawswhich visually listed all the AWS data transfer costs.Source:https://www.lastweekinaws.com/blog/understanding-data-transfer-in-aws/ | I'm trying to calculate the price of network data transfer in and out from an AWS WP website.Everything is behind Cloudfront. EC2/RDS returns dynamic resources and few statics, S3 returns only static resources. The Application Loadbalancer is there just for autoscaling purpose.Even if everything seems simple the experience taught that the devil is in the detail.
So, at the end of my little journey (reading blogs and docs) I would like to share the result of my search and understand what the community thinks of.Here is the architecture, all created within the same region/availability zone (let's say Europe/Ireland):At time of writing, the network data transfer charge is:thetraffic out from Cloudfront(first 10 TB $0.15/GB per month, etc.)thetraffic in and out from the Application load balancer(processed bytes: 1 GB per hour for EC2 instance costs ~7.00$/GB)For the rest, within the same region is free of charge and Cloudfront does not charge the incoming data.For example: within the same region, there should be no charge between an EC2 and an RDS DB Instance.Do anyone knows if I'm missing something? There are subtle costs that I have to oversee? | AWS wordpress - calculating network data transfer charge |
The fix for this was to addmodule "vpc" {
enable_dns_support = true
enable_dns_hostnames = true
}In the module block within the vpc module to allow the DNS hostnames to be resolved within my VPC | I have 2 services within ECS Fargate running.I have set up service discovery with a private dns namespace as all my services are within a private subnet.When I try and hit my config container from another I am getting the following error.http://config.qcap-prod:50050/config: Get
"http://config.qcap-prod:50050/config": dial tcp: lookup
config.qcap-prod on 10.0.0.2:53: no such hostBelow is my Terraformresource "aws_service_discovery_service" "config" {
name = "config"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.qcap_prod_sd.id
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 1
}
}Is there another step I need to do to allow me to hit my container from another within ECS using Fargate?My terraform code for my namespace is:resource "aws_service_discovery_private_dns_namespace" "qcap_prod_sd" {
name = "qcap.prod"
description = "Qcap prod service discovery"
vpc = module.vpc.vpc_id
} | Service discovery using ECS Fargate |
Edit: It works after 48 hours with prefixsigned_. Just that AWS takes a bit more time is all. | I want to delete the previous versions of files in an S3 bucket that are not in a folder but directly uploaded in a bucket and also with a specific prefix.
Eg. Some S3 keys are like:signed_2020_04_15.pdfsigned_2020_04_17.pdfunsigned_2020_04_15.pdfunsigned_2020_04_17.pdfinfo/signed_2020_04_16.pdfinfo/unsigned_2020_04_16.pdfSo I want my lifecycle to delete only the previous versions of the files starting withsigned_butnotthe ones in the folderinfo. That means in the above list onlysigned_2020_04_15.pdfandsigned_2020_04_17.pdfmust be deleted.How do I put my prefix? I tried prefix assigned_and waited for the lifecycle policy to run but it doesn't work. But in another bucket, the prefix was likefolder/and it works.So, do lifecycle policies work only for the files that are in a folder and not the ones that are uploaded directly? | AWS S3 Lifecycle Expiration Prefix Rule |
There's a workaround for this problem by setting the S3 bucket policy to include"s3:ListBucket"on the bucket resource itself. For example,{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity INSERT-CLOUDFRONT-OIA-ID"
},
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::EXAMPLE-BUCKET/*",
"arn:aws:s3:::EXAMPLE-BUCKET"
]
}
]
}The difference is that error pages will then have a 404 HTTP status code (from S3) instead of 403 (from S3), which meansCloudFront will actually respect the cache TTL of the custom error pageso it won't hit origin (S3) again.After this change I was seeing 10ms response times on the non-root/routes on CloudFront.Important side effectof this change is that people could in theory view your bucket's directory listings with theListObjects APIe.g.http://EXAMPLE-BUCKET.s3.amazonaws.com/?delimiter=/, so it is important to useOAI (Origin Access Identity)to protect the S3 bucket to be only accessible from CloudFront (where the ListObjects URL will not work). | I have a single page application hosted on S3 and served with Cloudfront. Everything works fine, but I am trying to improve the performance of the first load of the application by caching all files on Cloudfront. Right now all files are served very quickly except one: the page HTML. There is only one HTML file (/index.html), that is served every time the file cannot be found on the origin (S3) by using a custom error page on Cloudfront. This file is served, for example, on the root of my domain.I setup the Error Caching Minimum TTL of the custom error page to cache the response for 1 whole day (86400 seconds), as the image shows.Cloudfront customer error page settingsThis cache configuration, however, seems to have no effect. Everytime the URL is not present on the origin (and S3 returns the status 403), the response is correct, but Cloudfront indicates a Miss on the x-cache header and takes around 500ms to respond. If the file is requested by the path "/index.html" on my domain, Cloudfront indicates a Hit on the x-cache header and responds in 20ms.The index.html file has a cache control header set for a max age equal to the Cloudfront error caching minimum TTL.Am I missing something or Cloudfront custom error pages are just slow? | Are AWS Cloudfront custom error pages slow? |
At the end of the day, all these approaches achieve the same goal - run some user-defined actions upon the instance initialization.Launch ConfigurationandLaunch Templateallow you to specify the configuration of your instance once and then reuse it in multiple places. With or without CloudFormation.Launch Configurationis specific to the AutoScaling group. If you need to spin up an instance, that is not in the autoscaling group, useLaunch Templateto achieve the same result.Now, in both cases above you can use eitherBash script in UserDataorAWS::CloudFormation::Init.Bash in UserDatais just that - Bash script. If you are familiar with it and feel confident that you can achieve what you need in just Bash - go for it.AWS::CloudFormation::Initis a higher-level abstract, simplifying bunch of things, such as file creation, permissions etc. Nothing you can't do with just bash, but surely making it easier and more maintainable.One thing to keep in mind - bash+userdata approach would work as is on all cloud providers, not limited to AWS. Google, Azure. -they would let you run the same scripts with maybe minor modifications. AWS::CloudFormation::Init is AWS - specific. | In a cloudformation template, which are the differences between defining the initializazion script into the Userdata section of a LaunchConfiguration resource, or by using AWS::CloudFormation::Init metadata?
In which cases should we prefer one over the other?
Let's suppose I have to setup the EC2 instaces, based on this LaunchConfiguration, installing tomcat and defining some config file, and maybe copying some packages from an S3 bucket. It's better doing it via a Userdata bash script or via an AWS::CloudFormation::Init section?Thanks. | LaunchConfiguration Userdata vs AWS::CloudFormation::Init |
You can retrieve this information by querying theECS Task Metadata Endpoint, exposed to your container via theECS_CONTAINER_METADATA_URIenvironment variable. Here is an example response, taken from the documentation linked above:{
"DockerId": "43481a6ce4842eec8fe72fc28500c6b52edcc0917f105b83379f88cac1ff3946",
"Name": "nginx-curl",
"DockerName": "ecs-nginx-5-nginx-curl-ccccb9f49db0dfe0d901",
"Image": "nrdlngr/nginx-curl",
"ImageID": "sha256:2e00ae64383cfc865ba0a2ba37f61b50a120d2d9378559dcd458dc0de47bc165",
"Labels": {
"com.amazonaws.ecs.cluster": "default",
"com.amazonaws.ecs.container-name": "nginx-curl",
"com.amazonaws.ecs.task-arn": "arn:aws:ecs:us-east-2:012345678910:task/9781c248-0edd-4cdb-9a93-f63cb662a5d3",
"com.amazonaws.ecs.task-definition-family": "nginx",
"com.amazonaws.ecs.task-definition-version": "5"
},
"DesiredStatus": "RUNNING",
"KnownStatus": "RUNNING",
"Limits": {
"CPU": 512,
"Memory": 512
},
"CreatedAt": "2018-02-01T20:55:10.554941919Z",
"StartedAt": "2018-02-01T20:55:11.064236631Z",
"Type": "NORMAL",
"Networks": [
{
"NetworkMode": "awsvpc",
"IPv4Addresses": [
"10.0.2.106"
]
}
]
} | for example i am spinning four containers in aws ecs fargate. Is it possible to know container name or container ID | Is it possible to fetch container ID or Name of ecs fargate? |
I have come across this problem. But the problem I came across is that I wanted to control the number of requests so that the number of API calls will not exceed the relevant quotas.I solved it by using theRateLimiterclass from Guava library, which is an open-source common libraries for java by Google, which can control the rate of process happening.I was able to limit the number of API calls to 3 times per second and then I got the issue solved.
The import is →com.google.common.util.concurrent.RateLimiter;Try it outhttps://www.baeldung.com/guava-rate-limiterI hope it helps | I want to gain some insights into the performance of my application that connects to several services in AWS e.g.IAMandS3. One metric interesting to me isrequests-per-minute, I have checked around for possible approaches, AWS Metrics is limited to enterprise customers as stated in thisAWS document. Another approach is generatingJava SDK metrics, viaCloudWatch. I have enabled this by adding the command below to the system property-Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.propertiesI see some metrics in theCloudWatchdashboard, however, there is norequest-per-second. I'd like to find out if someone has experience with this or maybe I am missing something. | How to measure request for minute for AWS Java SDK |
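A small sketch of the Guava-based throttling described above (the 3-permits-per-second figure is the one mentioned in the answer; the wrapped call is a placeholder):
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledCaller {
    // Allow at most 3 API calls per second through this caller.
    private static final RateLimiter LIMITER = RateLimiter.create(3.0);

    public void callAws(Runnable awsCall) {
        LIMITER.acquire();   // blocks until a permit is available
        awsCall.run();       // e.g. an S3 or IAM SDK request
    }
}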
You will likely be able to retrieve most of the IP addresses from multiple services by calling the AWS EC2 ENI API:https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-interfaces.htmlThis will gather all IP addresses in for supported services and output them for you.aws ec2 describe-network-interfaces --query "NetworkInterfaces[*][].PrivateIpAddresses[*][].{Private: PrivateIpAddress, Public: Association.PublicIp}" | Using the AWS CLI, I'd like to retrieve a list ofallIP addresses, whether EIP or statically assigned etc.I've been using describe-instances and describe-addresses but want to know if there is an easier way to get all public IP addresses?aws ec2 describe-addresses --public-ips --region eu-west-1 --query 'Addresses[*].PublicIp'
aws ec2 describe-instances --region eu-west-1I've searched through the AWS documentation, but haven't found anything that encompasses everything. | Return list of _all_ ip addresses using AWS CLI |
Update: This is actually not the case. I was failing to properly wait for the call to adjust the timeout to finish. As such the lambda was closing before that request was completing. The timeout I'm setting on the message within the lambda is being respected. I'm then throwing an error to prevent the message from being deleted. | I'm implementing my own webhooks service which will send out events to subscribed webhooks.Overview of architecture:events are pushed onto an SQS queuea lambda function is triggered by SQS messages (event source mapping)for each event, I make outgoing http requests to subscribed webhooksnon-2xx responses must be retried with exponential backoff (in such event, I change the message visibility on the received message)since lambdas that are invoked by SQS will automatically delete the message upon completion I throw an error at the end of the function to prevent the automatic deleteAs far as I can tell, the call to change the message visibility is succeeding. I'm wondering if there's something else baked into lambdas that are invoked by SQS. Upon failure from the lambda, is it internally changing the message visibility again? Or do lambdas that are invoked by SQS not respect message visibility changes (this really doesn't make any sense to me). Curious if anyone has any insight into this problem. I was quite surprised to find out that lambda automatically deletes messages upon success since it makes my particular use-case a little clunky feeling - throwing an error to fail the lambda function to prevent the message from being deleted.Thanks in advance! | AWS Lambda Function invoked by SQS trigger is not respecting the visibility timeout I'm manually setting within the function |
Generally speaking, your DNS should have an Apex (A) record pointing to something. If there's nothing yet, and although it is 100% not best practice, then yes, 1.1.1.1 will work (or anything, really).Once you add your A record, head over to Amazon Certificate Manager to create your ACM certificate for your domain. Make sure your ACM certificate covers your subdomain, and verify it using DNS method. Verification takes about 5 minutes and once your certificate is verified, you'll be able to head over to the Cognito console to set up your custom domain using the certificate you just created. | I am trying to set cognito up with a custom domain.
I have a registered domain name, hosted zone with route53. let's say mydomain.com.
I also created certs for mydomain.com, *.mydomain.com in us-east-1 (N.Virginia) as document instructs.
When I tried my domain, cognito gave me an error saying that I must have an A record. I tried creating an Alias A record. But I don't have an actual Target. I just was to use something like auth.mydomain.com for logging in.
Since I couldn't make sense of an alias record I created a regular A record and set the target to a dummy ip 1.1.1.1,
Since I read that the target isn't really relevant for cognito.
At first it didn't work. But I thought that it's dns proportion thing and I tested it the next day and was able to add the domain to cognito.My questions are:Did I do right? Is it ok to set the A record to a dummy ip as long as my domain doesn't actually point to anything?
Is it possible to remove it after the association with cognito?Why did it only work after a day? Is this DNA caching/propogation time?
Would that be the case with alias record? Or since alias is AWS aware it would be instant?Thanks! | AWS cognito custom domain A record without actual targer |
The other account would need to have granted access to the account. The role in the other account would need a trust relationship similar to this (often it has conditions added to it as well):{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AccountId_A>:root"
},
"Action": "sts:AssumeRole"
}
]
}This example assumes that is the account you are granting the IAM permission in. | My understanding is,Service control policyandresource based policiesare mainly used to allow/deny cross account access to resources.From the policy evaluation procedure explainedhere, I learned that IAM permission policy(managed or inline) is used to grant/deny permissions toPrincipalwithin an AWS account.{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::*:role/Somerole",
"Effect": "Allow"
}
]
}But above is the IAM permission policy, written to grant permissions toPrincipalin the source account, to have access(sts::AssumeRole) to other account resources(Somerole).Can IAM permission policy be defined to allowPrincipalin source AWS account get permissions(sts:AssumeRole) to access resources(Somerole) that are present in other accounts(*:role)? In our casePrincipalis anIAM rolein the source AWS account. | Can IAM permission policy used to allow access to cross account resource? |
We had this issue, but had problems figuring out where / how to submit this without a support plan. Eventually we received the following instructions from Amazon:They stated the request could be submitted at the following URL:https://aws.amazon.com/premiumsupport/knowledge-center/ec2-port-25-throttle/And to do the following:Sign in with your AWS account root user credentials, and then open the Request to Remove Email Sending Limitations form.In the Use Case Description field, provide a description of your use case.(Optional) Provide the AWS-owned Elastic IP addresses that you use to send outbound email, as well as any reverse DNS records that AWS needs to associate with the Elastic IP addresses. AWS will use this information to help reduce the chance that email sent from the Elastic IP addresses is marked as spam.Choose Submit. | Closed.This question does not meetStack Overflow guidelines. It is not currently accepting answers.This question does not appear to be abouta specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic onanother Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.Closed3 years ago.Improve this questionI have a Linux box on AWS LightSail. My application (NOT WordPress) needs to relay emails out to a smarthost (email security system) on smtp port 25. However, when I tested this port with telnet, I found that this port is blocked. I know that for example for EC2 instance you can submit a request per use case to lift this restriction, or with WordPress you have plugins or other options. However, cannot find any solution for LightSail Linux instances. I do not want to use AWS SES. | How to open port 25 on AWS LightSail Linux instance [closed] |
Typically, an Amazon RDS instance is running on one server in one subnet.However, when launching the database, you are asked to provide aSubnet Group, which identifies which subnets the databasecouldlaunch in. These are typically private subnets within the VPC.If you are using aMulti-AZdatabase, then it will usetwo subnets-- one for the Master (running) database and one for the secondary (standby) database.It is also possible to create Read Replicas that could be in a different subnet to the Master database.Bottom line:You are probably viewing the list of subnets in the Subnet Group that itcanuse, but it is likely to only be in one subnet at the moment. | I haven't changed my vpc/subnet settings since making an aws account, and I've recently found my rds instance is apparently in 3 subnets (subnet is listed as default with 3 subnet names underneath), one of which also has my application server. Is it necessary to have my rds in all 3 subnets? I want to move it to a separate subnet away from the application server and make it private - if that's the case is there anything in particular I will need to do? | why is rds in 3 subnets in aws |
$aws configure
AWS Access Key ID [****************LT6U]:
AWS Secret Access Key [****************iGrm]:
Default region name [ap-south-1]:
Default output format [json]:
specify: Default region name [ap-south-1]:Mine isap-south-1. Yours may be different. | botocore.exceptions.NoRegionError: You must specify a region.
During handling of the above exception, another exception occurred:
-----------------------------------------------------------------------------I've already configured my AWS region in[~/.aws/config]but this problem arises again and again.How can I solve this issue? | botocore.exceptions.NoRegionError: You must specify a region |
You can't point to cloudfront from your application load balanacer instead you can create behaviours or behavior groups in cloudfront to point to your load balancer.Just likeDefault (*) -> s3/xyz -> application load balancer | Hi I would like to use AWS application load balancer and create target group which should point default to my CloudFront distribution and based on the rule it will point to other apps. I could not find the resource to do it. Anyway have done such things.Our landing page is pointing to the CloudFront distribution(+AWS S3) and we wanted to have with /xyz it should point to our ec2 instance. | Is it possible to use AWS application load balance with CloudFront distribution and EC2 as target group |
Each of the AWS Amplify domains that you reference refer to a branch of your app eg master or feature. Use the full domain name egmaster.xxxxxxxx.amplifyapp.comas the target of your CNAME record for the branch you want to expose on your custom domain.All of the standard DNS propagation warnings say allow 24 to 48 hours but in practice it's usually much much quicker so don't worry about waiting for two days too much.I can see your DNS TTL is set for 1 hour. This value is how long the DNS system will cache your DNS records. Which means you can make a change and it would take up to an hour for those records to be updated throughout the internet. You could drop that to 5 minutes or less if you want to do trial and error testing or make quick switches to a different branch. | I am trying to connect my Amplify app to a GoDaddy website and the AWS instructions are not clear on how to do this.Followingthese instructionsI created a CNAME record to point to my Amplify app.(Image from the documentation)I have a "master.xxxxxxxx.amplifyapp.com" and a "feature.xxxxxxxx.amplifyapp.com", am I supposed to use one of these or just the "xxxxxxxx.amplifyamp.com"?It seems from the docs that these records take up to 2 days to update and I do not want to waste 4 days attempting this by trial and error.EditFollowing @Rodrigo M's answer I used the 'master.xxxxxxxx.amplifyapp.com' route for the CNAME record but when I go to the page all I see is the error:This page isn’t working xxxxx.domain.com redirected you too many times.And then when I look in the Network tab I see that the page did a bunch of 302 redirects where the name and the initiator were "Index.html".Does anyone have any ideas of what is going wrong? | AWS Amplify Connecting to GoDaddy - Documentation Unclear - Redirects Too Many Times |
The credits will only come back if you reduce your CPU load. You can enableT2 Unlimitedto avoid the limitation, but please note that extra costs will likely apply.If you are frequently running out of Credits, you should consider using a larger instance type (egt2.small,t2.medium) or a different instance family. T2/T3 instances are good for workloads that occasionally burst, but is not ideal for sustained workloads.See:CPU Credits and Baseline Performance for Burstable Performance Instances | CPU Credit balance on my AWS EC2 server dropped to zero and now my system is very slow.
What can I do to fix this?
I use a t2.micro EC2 instance. For how long will it be zero? To the end of the month, or forever? | How to increase the AWS CPU credit balance after it dropped to zero?
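If you decide to enable T2/T3 Unlimited as the answer above suggests, a hedged boto3 sketch could look like this (the instance ID is a placeholder; the same switch is available in the console):

import boto3

ec2 = boto3.client("ec2")

# Turn on Unlimited mode for a burstable instance (placeholder instance ID).
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
    ]
)

# Check the current setting afterwards.
resp = ec2.describe_instance_credit_specifications(
    InstanceIds=["i-0123456789abcdef0"]
)
print(resp["InstanceCreditSpecifications"])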
Finally, I have found a solution to my problem. We cannot scale the background jobs in the way I wanted; it required me to look at the solution from a completely different angle. The ideal solution to my problem is to generate SQS messages (with a payload describing the tenant id, the job that needs to be executed, and any additional parameters) corresponding to the number of tenants on a set interval and queue them. For example, if I have 100 tenants and I want to run "Job 1" every hour, the main application will generate 100 SQS messages and queue them in a particular SQS queue every hour. It will do the same for all 15 different jobs I have per tenant. On the other end, a scalable AWS Lambda function listening to the SQS queue will pick up the payload and execute the intended task based on the data carried by the payload. But unfortunately, my expertise lies in PHP/Laravel technology, which is still not in the AWS Lambda stack, hence I figured out a workaround as follows. I built a Docker image with my PHP/Laravel application and placed it in Amazon ECS (EC2 Container Service). I still have the AWS Lambda function in place, but this time it acts as a trigger for my Docker containers: the Lambda picks an SQS message, processes the payload, and spawns a Docker container on ECS based on my Docker image. I got some of the ideas from the following article to arrive at this solution: https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ | I am building a multi-tenant web application using Laravel/PHP that will be hosted on AWS as SaaS in the end. I have around 15-20 different background jobs that need scheduling for each tenant, and the jobs need to be fired every 5 minutes as well. Thus the number of jobs which need to be fired for 100 tenants would be around 2000. I am left with 2 challenges in achieving this:
1. Is there a cloud solution that distributes and manages the load of the scheduled jobs automatically?
2. If one is out there, how can we create those 15+ scheduled jobs on the fly? Is there an API available?
Looking for your assistance. | Creating scheduled jobs in a Multi-Tenant application
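A minimal sketch of the fan-out step described in the answer above, assuming boto3 and a placeholder queue URL (the helper name and tenant list are illustrative only):

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/tenant-jobs"  # placeholder

def enqueue_job_for_all_tenants(job_name, tenant_ids):
    # SQS batches are limited to 10 entries, so chunk the tenant list.
    for start in range(0, len(tenant_ids), 10):
        chunk = tenant_ids[start:start + 10]
        entries = [
            {
                "Id": str(i),
                "MessageBody": json.dumps({"tenant_id": t, "job": job_name}),
            }
            for i, t in enumerate(chunk)
        ]
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)

# e.g. called every hour by a scheduler for "Job 1"
enqueue_job_for_all_tenants("job-1", [f"tenant-{n}" for n in range(1, 101)])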
Thanks for the question. Detecting presence is not currently supported out of the box, but you can likely build similar features yourself depending on the use case. For example, a resolver on a subscription field is invoked every time a new device tries to open a subscription. You can use this resolver field to update some data source to tell the rest of your system that some user is currently subscribed. If using something like DynamoDB, you can use a TTL field to have records automatically removed after a certain amount of time and then require a user to "ping" every N minutes to specify that they are still online. You could also have your application call a mutation when it first starts to register the user as online, then have the application call another mutation when the app closes to register it as offline. You could combine this with TTLs to prevent stale records in situations where the app crashes or something prevents the call to register as offline. Thanks for the suggestion and hope this helps in the meantime. | I am working on an app that relies heavily on detecting when users go offline and go back online. I wanted to do this with AWS AppSync, but I can't seem to find a way to do this in the documentation. Is there a way to do it in AppSync? | Detect Presence in AWS AppSync
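A rough sketch of the heartbeat/TTL idea from the answer above, assuming boto3 and a hypothetical DynamoDB table named "presence" with TTL enabled on the expiresAt attribute:

import time
import boto3

dynamodb = boto3.resource("dynamodb")
presence_table = dynamodb.Table("presence")  # placeholder table name

def record_heartbeat(user_id, ttl_seconds=300):
    # Each ping refreshes the record; DynamoDB expires it if the client goes quiet.
    presence_table.put_item(
        Item={
            "userId": user_id,
            "expiresAt": int(time.time()) + ttl_seconds,  # TTL attribute
        }
    )

def is_online(user_id):
    item = presence_table.get_item(Key={"userId": user_id}).get("Item")
    # TTL deletion is not instant, so also compare against the current time.
    return bool(item) and item["expiresAt"] > int(time.time())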
While I couldn't find a way to do this in the Athena console, I found it in CloudTrail when searching for the StartQueryExecution event. | In Athena I can view the query history for a Workgroup. I can also get the query execution details. Is there a way to discover who (IAM user/role) executed the query? | Discover which IAM user/role executed an Athena query
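A small boto3 sketch of that CloudTrail lookup (the result size is arbitrary, and the fields read from each event are the usual CloudTrail output):

import json
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "StartQueryExecution"}
    ],
    MaxResults=50,
)

for e in events["Events"]:
    # CloudTrailEvent is a JSON string containing the full event record.
    detail = json.loads(e["CloudTrailEvent"])
    caller_arn = detail.get("userIdentity", {}).get("arn")
    print(e.get("Username"), caller_arn, e["EventTime"])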
You can either edit the name directly in the console or attach a Name tag to your security group. Using the AWS CLI:
aws ec2 create-tags --resources <sg_id> --tags Key=Name,Value=Test-Sg | Creating an EC2 security group through the console allows you to set a "group name" and it automatically provides a "group id". However, the "name" is always blank, unless the security group was generated automatically by Elastic Beanstalk or another resource. Is there any way to set this name in the console, and otherwise how is it done in the CLI? | How to set "Name" of security group (AWS EC2)
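For completeness, a boto3 equivalent of the CLI command above (the security group ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["sg-0123456789abcdef0"],  # placeholder security group ID
    Tags=[{"Key": "Name", "Value": "Test-Sg"}],
)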
According to the documentation, the guarantee is that it won't let the same message be submitted to the queue within a 5-minute window. Message equality is defined by either a body hash or an id you pass when submitting. "Unlike standard queues, FIFO queues don't introduce duplicate messages. FIFO queues help you avoid sending duplicates to a queue. If you retry the SendMessage action within the 5-minute deduplication interval, Amazon SQS doesn't introduce any duplicates into the queue." So it sounds to me like they have a central hash table of hashes/ids, they check against it for every new message, and automatically remove hashes/ids older than 5 minutes. You can use Redis with TTL to implement that pretty easily. I couldn't find any information about the scalability of this, but it sounds like it's more expensive to scale than normal queues, judging by the added TPS (transactions per second) limits. | SQS FIFO guarantees that it will process a message from a queue exactly once. I am wondering what the core logic behind it is. How does it process a message exactly once from the queue in a distributed environment when thousands of transactions are happening? I just wanted to know the basic architecture and design principles, so I think someone who has worked on such an architecture could really give some insights. | How does SQS FIFO ensure exactly once processing?
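A small sketch, assuming boto3 and a placeholder FIFO queue URL, of the two send-side parameters that drive the deduplication described above:

import hashlib
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

body = '{"orderId": 42, "status": "paid"}'

sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=body,
    MessageGroupId="order-42",  # required for FIFO queues; controls ordering
    # Retries with the same deduplication id within the 5-minute window are
    # silently dropped by SQS instead of creating duplicates. Alternatively,
    # enable content-based deduplication on the queue and omit this field.
    MessageDeduplicationId=hashlib.sha256(body.encode()).hexdigest(),
)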
Even though Lambda is not designed to host UI applications, it doesn't mean that it's impossible to do so. I had success running a Node.js Express server in Lambda where the Express endpoints returned HTML/CSS. When it comes to server-side sessions, the session could be saved in a database, since Lambda is stateless. | Is there a way to run a dynamic UI-based application on AWS Lambda? If not, why doesn't AWS Lambda fit this use case? Since HTTP traffic is stateless and sessions can be maintained in a backing datastore, what restricts Lambda from hosting a dynamic web application? | Running a UI application on AWS Lambda
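As a language-agnostic illustration of the same idea (a sketch in Python rather than the Node.js/Express setup mentioned in the answer), a Lambda handler behind an API Gateway proxy integration can simply return HTML:

def handler(event, context):
    # Render a trivial page; a real app would pull session state from a datastore.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    html = f"<html><body><h1>Hello, {name}!</h1></body></html>"
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": html,
    }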
A Parquet file consists of two parts [1]:
Data
Metadata
When you try reading this file through Athena, it will attempt to read the metadata first and then the actual data. In your case you are compressing the whole Parquet file using gzip, and when Athena tries to read this file it fails to understand it, as the metadata is hidden by the compression. So the ideal way of compressing a Parquet file is while writing/creating the Parquet file itself. You need to specify the compression codec while generating the file using parquetjs. | I'm trying to build skills on Amazon Athena.
I have already succeeded in querying data in JSON and Apache Parquet format with Athena.
What I'm trying to do now is add compression (gzip) to it. My JSON data:
{
"id": 1,
"prenom": "Firstname",
"nom": "Lastname",
"age": 23
}
Then, I transform the JSON into Apache Parquet format with an npm module: https://www.npmjs.com/package/parquetjs
And finally, I compress the Parquet file I get in GZIP format and put it in my S3 bucket: test-athena-personnes. My Athena table:
CREATE EXTERNAL TABLE IF NOT EXISTS personnes (
id INT,
nom STRING,
prenom STRING,
age INT
)
STORED AS PARQUET
LOCATION 's3://test-athena-personnes/'
tblproperties ("parquet.compress"="GZIP");
Then, to test it, I launch a very simple request: Select * from personnes;
I get the error message: HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://test-athena-personnes/personne1.parquet.gz (offset=0, length=257): Not valid Parquet file: s3://test-athena-personnes/personne1.parquet.gz expected magic number: [80, 65, 82, 49] got: [-75, 1, 0, 0]
Is there anything I didn't understand or that I'm doing wrong? I can query Apache Parquet files without gzip compression but not with it. Thank you in advance. | Amazon AWS Athena HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split / Not valid Parquet file, parquet files compressed to gzip with Athena
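To illustrate the "compress while writing" point from the answer, here is a sketch using Python/pyarrow instead of parquetjs (an assumption on my part, since the question uses a Node.js library). The gzip codec lives inside the Parquet file, so the object keeps its .parquet name and is not gzipped again:

import boto3
import pyarrow as pa
import pyarrow.parquet as pq

# Same record as the JSON example above.
table = pa.Table.from_pydict(
    {"id": [1], "prenom": ["Firstname"], "nom": ["Lastname"], "age": [23]}
)
pq.write_table(table, "personne1.parquet", compression="gzip")  # internal compression

# Upload as-is: do NOT gzip the file again or add a .gz extension.
boto3.client("s3").upload_file(
    "personne1.parquet", "test-athena-personnes", "personne1.parquet"
)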
It definitely is! I'm no good at Python, but the steps are rather simple:
1. Configure your API Gateway as a proxy integration, so that your Lambda function can set the return code (see Set up a Proxy Integration with a Proxy Resource and Serverless.com - Lambda Proxy Integration).
2. Upload your file to your S3 bucket.
3. Make sure the file is accessible by the user. A bucket can be public, or with restricted access, through either ACLs (see Access Control List (ACL) Overview) or IAM.
4. Return the proper response, with the proper response code. I'm not sure, but it'd probably be similar to the payload below:
{
statusCode: 302,
headers: {
'Location': 'Your URL',
}
} | I want to trigger a download when a POST request is made to my API Gateway + Lambda setup. I've read that this is done by converting the file to base64. But is it possible to write the file to an S3 bucket and trigger a redirect?
import boto3
s3 = boto3.client('s3')
def main(event, context):
    s3.upload_file(PATH, BUCKET_NAME, FILE_NAME)
    # Make file accessible via URL and return redirect | Is it possible to make a 302 redirect request from AWS Lambda to an S3 bucket for triggering a download?
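A fuller sketch of the handler above, assuming a Lambda proxy integration and placeholder values for PATH, BUCKET_NAME and FILE_NAME; instead of making the bucket public, it redirects the caller to a short-lived presigned URL:

import boto3

s3 = boto3.client("s3")
PATH, BUCKET_NAME, FILE_NAME = "/tmp/report.csv", "my-bucket", "report.csv"  # placeholders

def main(event, context):
    s3.upload_file(PATH, BUCKET_NAME, FILE_NAME)
    # Generate a temporary download link so the bucket can stay private.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET_NAME, "Key": FILE_NAME},
        ExpiresIn=300,  # link valid for 5 minutes
    )
    return {"statusCode": 302, "headers": {"Location": url}}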
You need to assign a role to the Lambda function so it can read from Secrets Manager (AWS role). The following IAM policy allows read access to all resources that you create in AWS Secrets Manager. This policy applies to resources that you have created already and all resources that you create in the future.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": ["*"]
}
]
}
You can find a more specific example below: iam-policy-examples-asm-secrets | I created a Secrets Manager key (non-rotational) with the plain text option, encrypted. When I tried to get the value in a Lambda function, I got a "permission denied" error.
Could you please help me figure out how to resolve this issue? | AWS Lambda function to use Secrets Manager
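Once the policy above is attached to the Lambda's execution role, reading the secret is a small boto3 call; this is a sketch with a placeholder secret name:

import boto3

def handler(event, context):
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId="my-app/prod/db-credentials")  # placeholder
    secret = resp.get("SecretString") or resp["SecretBinary"]
    # ... use the secret; avoid logging its value
    return {"statusCode": 200}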
Use STSAssumeRole to achieve this:
@Value("${my.aws.assumeRoleARN:}")
private String assumeRoleARN;

@Bean
@Primary
public AWSCredentialsProvider awsCredentialsProvider() {
    log.info("Assuming role {}", assumeRoleARN);
    if (StringUtils.isNotEmpty(assumeRoleARN)) {
        AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
                .withClientConfiguration(clientConfiguration())
                .withCredentials(awsCredentialsProvider)
                .build();
        return new STSAssumeRoleSessionCredentialsProvider
                .Builder(assumeRoleARN, "role")
                .withStsClient(stsClient)
                .build();
    }
    return awsCredentialsProvider;
}

@Bean
@ConfigurationProperties(prefix = "aws.configuration")
public ClientConfiguration clientConfiguration() {
    return new ClientConfiguration();
}

@Bean
@Primary
public AmazonS3 amazonS3() {
    return AmazonS3ClientBuilder.standard()
            .withCredentials(awsCredentialsProvider())
            .withClientConfiguration(clientConfiguration())
            .build();
}
} | How to configure a Spring Boot app to use an IAM role? Is the code below enough? Or am I totally wrong?
@Bean
public AmazonS3 amazonS3Client() {
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSCredentialsProviderChain(InstanceProfileCredentialsProvider.getInstance(), new ProfileCredentialsProvider()))
            .build();
} | How can I configure a Spring app to use an IAM role (running inside AWS ECS) on AWS and credentials on the dev env?
I think I only had that issue when I was using the custom authorizer with the type token. The query string information will only be present on an authorizer with the type request.
functions:
  create:
    handler: posts.create
    events:
      - http:
          path: posts/create
          method: post
          authorizer:
            arn: xxx:xxx:Lambda-Name
            resultTtlInSeconds: 0
            identitySource: method.request.header.Authorization, context.identity.sourceIp
            identityValidationExpression: someRegex
            type: request
https://serverless.com/framework/docs/providers/aws/events/apigateway/
Note that changing the type from token to request will also change the way the cache key of the policies is built.
Also, more information here: https://aws.amazon.com/blogs/compute/using-enhanced-request-authorizers-in-amazon-api-gateway/ | I have created a Lambda authorizer method (token based) with a custom VPC and integrated it with another Lambda for API Gateway authorization. When the authorization succeeds and the request reaches the destination Lambda, the path parameters and query parameters in the event are coming through as null.
In the serverless.yml file, the authorizer function:
authorizer:
  handler: authorizerHandler.verifyUser
  vpc: ${customvpc}
In the serverless.yml file, the normal Lambda:
user:
  handler: user.router
  vpc: ${customvpc}
  integration: lambda
  events:
    - http:
        path: api/v1/user/{id}
        cors: true
When the user is authorized, I am returning the object as:
{
"principalId": "yyyyyyyy", // The principal user identification associated with the token sent by the client.
"policyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "execute-api:Invoke",
"Effect": "Allow|Deny",
"Resource": "arn:aws:execute-api:{regionId}:{accountId}:{apiId}/{stage}/{httpVerb}/[{resource}/[{child-resources}]]"
}
]
}
}
But when I tried to use the id in event.pathParameter it is returning null; the same goes for queryStringParameters.
Can anyone help? Thanks in advance :-) | Path parameters, body coming as null while using authorizer in AWS Lambda
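A rough Python sketch (the original setup is Node/serverless, so this is purely illustrative) of a REQUEST-type authorizer handler; with type request, the incoming event carries pathParameters and queryStringParameters, which a TOKEN authorizer never receives:

def authorizer_handler(event, context):
    # Available only for REQUEST-type authorizers.
    user_id = (event.get("pathParameters") or {}).get("id")
    token = (event.get("headers") or {}).get("Authorization", "")

    # Placeholder decision logic: allow only when both a token and an id exist.
    effect = "Allow" if token and user_id else "Deny"
    return {
        "principalId": user_id or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }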
It counts from the date and time you created the CloudWatch event. If your rate expression is rate(1 day) and you created the event at 22:00:00 UTC, then it will run at that time the next day. A rate expression for 2 days will fire two days later at the same time. Similarly, a rate expression of 5 minutes will fire every five minutes beginning at the time the CloudWatch event was created. To verify this, you can create a Lambda function that uploads an object to an S3 bucket, create a CloudWatch event for it, making note of the time you created it, and observe that when the event fires, the modification date of the object is that many days/minutes/hours from the time you created the event. | I have scheduled a fixed rate of 2 days for an event rule. Where can I find the next run time? | Next run time for cloudwatch rate expression
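A small boto3 sketch of the verification idea in the answer above (the rule name is a placeholder); note that EventBridge/CloudWatch Events does not expose a "next run time" field, so the expectation is simply the creation time plus one period:

from datetime import datetime, timezone
import boto3

events = boto3.client("events")
events.put_rule(Name="every-2-days", ScheduleExpression="rate(2 days)")

created_at = datetime.now(timezone.utc)
print("Rule created at", created_at, "- first fire expected roughly 2 days later.")

# The API only reports the schedule expression, not the next fire time.
print(events.describe_rule(Name="every-2-days")["ScheduleExpression"])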