Response | Instruction | Prompt
---|---|---
You can do this with a Rack middleware:

class RescueFromNoDB < Struct.new(:app)
def call(env)
app.call(env)
rescue Mysql::Error => e
if e.message =~ /Can't connect to/
[500, {"Content-Type" => "text/plain"}, ["Can't get to the DB server right now."]]
else
raise
end
end
end

Obviously you can customize the error message, and the e.message =~ /Can't connect to/ bit may just be paranoia; almost all other SQL errors should be caught inside ActionController::Dispatcher. | With the launch of Amazon's Relational Database Service today and their 'enforced' maintenance windows, I wondered if anyone has any solutions for handling a missing database connection in Rails. Ideally I'd like to be able to automatically present a maintenance page to visitors if the database connection disappears (i.e. Amazon are doing their maintenance) - has anyone ever done anything like this? Cheers
Arfon | Automatically handle missing database connection in ActiveRecord? |
In the case of Python, as this is I/O bound, multiple threads will make use of the CPU, but it will probably use up only one core. If you have multiple cores, you might want to consider the new multiprocessing module. Even then you may want to have each process use multiple threads. You would have to do some tweaking of the number of processes and threads. If you do use multiple threads, this is a good candidate for the Queue class (see the sketch after this row). | What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (In the order of 200,000 - million files.) At the moment I am using boto to generate signed URLs, and using PyCURL to get the files one by one. Would some type of concurrency help? PyCurl.CurlMulti object? I am open to all suggestions. Thanks! | Downloading a Large Number of Files from S3 |
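A minimal sketch of the thread-plus-Queue approach described above, using only the Python standard library; the list of signed URLs, the worker count, and the output file naming are assumptions for illustration, not part of the original answer.

import threading
import urllib.request
from queue import Queue

NUM_WORKERS = 20  # tune this; assumption for illustration

def worker(q: Queue) -> None:
    while True:
        item = q.get()
        if item is None:  # sentinel: no more work
            q.task_done()
            break
        url, dest = item
        # download one object via its pre-signed URL
        urllib.request.urlretrieve(url, dest)
        q.task_done()

def download_all(signed_urls):
    q = Queue()
    threads = [threading.Thread(target=worker, args=(q,)) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for i, url in enumerate(signed_urls):
        q.put((url, f"file_{i}"))
    for _ in threads:
        q.put(None)  # one sentinel per worker
    q.join()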
There's a C# library for working with SDB. If you want to roll your own, the API, WSDL and other documentation can be found at http://aws.amazon.com/simpledb/#resources. It's a pretty straightforward API that rides over HTTP. The hardest part is writing the signing code. There are plenty of implementations in other languages. As for using it for session state, there's a huge speed difference between using SimpleDB from EC2 and anywhere else on the internet. If you're hosting your app on EC2, it'll be fine; otherwise, it'll be brutally slow. | If not, are there any fundamental limitations of the service that prevent one from being built? | Has anyone tried building an ASP.NET Session State Provider for Amazon SimpleDB? |
The chosen algorithm would be under Traffic configuration in the AWS Console UI for the Target group. It can be one of:
- Round robin (this is the default option)
- Least outstanding requests
- Weighted random

Now, for the primary part of your question: if you want to route multiple people from the same team to the same target, you can use application-based stickiness as you mentioned. To use it, you need to generate a cookie in your application. When you are setting up the stickiness on the target group, set:
- Stickiness type - Application-based cookie
- App cookie name - the name of the cookie that you will generate in your application (for example team_session)

On the AWS ALB part, that's it. In your application, you should now generate this cookie and make sure that the value of the cookie is the same for all members of the team, so incorporate that in the logic of generating this cookie for users (see the sketch after this row). | In my application users create “teams” (aka workspaces). I’d like to configure AWS ALB sticky sessions to route requests from the same team to the same EC2 instance so that in-memory team-level caches are more effective. It’s not a requirement that all requests go to the same EC2 instance, but it would mean there are fewer cache misses. The team ID is present in either the URL or an HTTP header depending on the request. It’s unclear to me how to accomplish this from the AWS ALB sticky session documentation. In the section titled “Application-based stickiness” the documentation says: "Application-based stickiness gives you the flexibility to set your own criteria for client-target stickiness. When you enable application-based stickiness, the load balancer routes the first request to a target within the target group based on the chosen algorithm." Which sounds like what I want? Though the docs don’t detail how to configure the “chosen algorithm” for that initial routing. How would you accomplish routing multiple users of the same team to the same EC2 instance with AWS ALB? Is it possible? | AWS ALB sticky sessions for all accounts in the same workspace |
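A minimal sketch of generating the team-scoped cookie in the application, assuming a Flask app and that the team ID is available on the route; the cookie name team_session comes from the answer, while the route, handler body, and max_age are placeholders.

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/teams/<team_id>/dashboard")
def dashboard(team_id):
    # Placeholder body; the real handler would render the team dashboard.
    resp = make_response(f"dashboard for team {team_id}")
    # Use the same cookie value for every member of the team so that ALB
    # application-based stickiness routes the whole team to one target.
    resp.set_cookie("team_session", team_id, max_age=3600)
    return resp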
I had the same issue with versions 12.12 and 12.13. I solved it by downgrading to version 12.11. I suspect the issue is caused by this PR: https://github.com/cypress-io/cypress/pull/26573/files | I'm running a controlled test in Cypress and after some tests, this error is randomly occurring in the Electron browser: "There was an error reconnecting to the Chrome DevTools protocol. Please restart the browser." In Google Chrome it is running without errors, but in Electron it is breaking. I need to run in Electron because of settings in AWS. | Cypress - There was an error reconnecting to the Chrome DevTools protocol |
As explained in the AWS docs, the Service principal for EventBridge Scheduler should be scheduler.amazonaws.com, not events.amazonaws.com (the corrected trust policy is shown after this row). | I am trying to configure auto stop/start of RDS following this link: https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/ but when I configure the schedule in Amazon EventBridge I get the error: The execution role you provide must allow AWS EventBridge Scheduler to
assume the role. I created a role with this policy: {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"arn:aws:lambda:xxxxxxxxxxxxxx:function:stoprds:*",
"arn:aws:lambda:xxxxxxxxxxxxxx:function:stoprds"
]
}
]
} and added this in the trust relationships: {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
} but it doesn't work | The execution role you provide must allow AWS EventBridge Scheduler to assume the role |
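For reference, applying the fix from the answer, the trust policy above would become the following (only the Service principal changes):

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "scheduler.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}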
Your condition should include the name of the partition key attribute, not its value. For example: attribute_not_exists(pk). Also, see Uniqueness for composite primary keys for an explanation of why you only need to indicate the partition key attribute name, not both the partition key and the sort key attribute names. So the following, while not harmful, is unnecessary: attribute_not_exists(pk) AND attribute_not_exists(sk) | I want to do a conditional putItem call into DynamoDB, i.e. don't insert an entry in DynamoDB if the primary key (partition key + sort key) already exists. My schema's keys look like this: PartitionKey: abc:def
SortKey: abc:123

To do a conditional put I do something like this:

private static final String PK_DOES_NOT_EXIST_EXPR = "attribute_not_exists(%s)";
final String condition = String.format(PK_DOES_NOT_EXIST_EXPR,
record.getPKey() + record.getSortKey());
final PutItemEnhancedRequest putItemEnhancedRequest = PutItemEnhancedRequest
.builder(Record.class)
.conditionExpression(
Expression.builder()
.expression(condition)
.build()
)
.item(newRecord)
.build();

However I run into the following error:

Exception in thread "main" software.amazon.awssdk.services.dynamodb.model.DynamoDbException: Invalid ConditionExpression: Syntax error; token: ":123", near: "abc:123)" (Service: DynamoDb, Status Code: 400)

I am assuming this is because of the ':' present in my condition, because the same expression without ':' in the key succeeds. Is there a way to fix this? | DynamoDb Invalid ConditionExpression due to : present in expression |
You can use the upload method if putObject doesn't work; it has supported promises since 2.6.12 (https://github.com/aws/aws-sdk-js/blob/master/CHANGELOG.md#2612), so something like this should work:

const response = await s3.upload(uploadParams).promise();
console.log('end s3 put object: ', response);

You can also find a complete example in the original documentation here (it's with a callback, but the concept is the same): https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javascript/example_code/s3/s3_upload.js | import AWS from 'aws-sdk';
const s3 = new AWS.S3();
export async function putObjectToS3(
data: any,
nameOfFile: string,
contentType: string
) {
const params = {
Bucket: process.env.S3_BUCKET || '',
ContentType: contentType,
Key: `${nameOfFile}`,
Body: data
};
console.log('start s3 put object');
const response = await s3.putObject(params).promise();
console.log('end s3 put object: ', response);
return response;
}
await putObjectToS3(artistsCsv, `artists/artists.csv`, 'text/csv');

We have the following function that uploads a file to AWS S3. This is run in an AWS Lambda function. While our artistsCsv file is successfully uploaded (we see the CSV file in S3), the function never returns... We see the log 'start s3 put object' in our logging, but we never see 'end s3 put object: ', response. Is there something wrong with the way we have built this function that is causing it to never return the response? We are using aws-sdk version "^2.1147.0". Could this simply be an issue with AWS? (Seems unlikely though.) Perhaps something with the .promise() and async/await that we are missing? How can we even troubleshoot this? | s3.putObject() uploading file to S3 but then never returning a response? |
The short answer is that groups cannot be used as a principal in a resource policy, and the bucket policy is a type of resource policy [1]: "You cannot identify a user group as a principal in a policy (such as a resource-based policy) because groups relate to permissions, not authentication, and principals are authenticated IAM entities." [1] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#Principal_specifying | I have an IAM policy which I have created, and it keeps complaining that the policy document should not specify a principal. I'm really unsure of what is wrong with my policy. This policy will be attached to my S3 bucket, which specifies that only a certain group is allowed to do the following actions: GetObject and ListBucket.

Error: MalformedPolicyDocument: Policy document should not specify a principal

My IAM policy is as follows:

data "aws_iam_policy_document" "s3_admin_access" {
statement {
sid = "AllowGroupAAccess"
effect = "Allow"
actions = [
"s3:GetObject",
"s3:ListBucket"
]
resources = local.s3_etl_bucket_array
principals {
type = "AWS"
identifiers = [aws_iam_group.iam_group_team["admin-team"].arn]
}
}
statement {
sid = "DenyAllOtherUsers"
effect = "Deny"
actions = [
"s3:*"
]
resources = local.s3_etl_bucket_array
principals {
type = "AWS"
identifiers = ["*"]
}
condition {
test = "StringNotEquals"
variable = "aws:PrincipalArn"
values = [aws_iam_group.iam_group_team["admin-team"].arn]
}
}
}
resource "aws_iam_policy" "s3_admin_access" {
name = "${local.csi}-s3_admin_access"
path = "/"
policy = data.aws_iam_policy_document.s3_admin_access.json
} | Policy document should not specify a principal - terraform aws_iam_policy_document |
At present, Neptune is a single-tenant database service. This means that a single Neptune cluster can only host a single logical database. If you're looking to use a single cluster to host data for multiple contexts/users, you would need to do this within the application and use different aspects of the data model to denote these different contexts. For example, if you have a Person node label in your graph, you could use separate prefixes to denote which Person nodes relate to different users: User1.Person, User2.Person, ..., UserX.Person. Similarly for edges and property keys (a rough sketch follows this row). | In MySQL we can create multiple databases and then create different tables in those databases, e.g.

mysql> create database demo;
mysql> use demo;
mysql> create table test_demo (id int);

This allows us to create multiple tables under different databases, which provides virtual segregation. I am looking for similar functionality in Amazon Neptune. Is it possible to create different databases in Amazon Neptune and then build graphs in those databases that are independent from each other? If it is possible, then how? Note: I don't want to create a separate cluster for each graph, hence the above question. | Having multiple databases in amazon neptune |
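A rough sketch of the label-prefix idea from the answer, using gremlinpython; the Neptune endpoint, the tenant prefix, and the property values are placeholders, and this is only one way to model per-context separation.

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder Neptune endpoint
conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

tenant = "User1"  # the logical "database" for this context

# Write and read only vertices whose label carries this tenant's prefix
g.addV(f"{tenant}.Person").property("name", "alice").iterate()
people = g.V().hasLabel(f"{tenant}.Person").valueMap().toList()
print(people)

conn.close()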
Change the code to:

self.security_group_ = SecurityGroup_.SecurityGroup(
self.scope_object,
id_=self.id,
name=self.name,
vpc_id=self.vpc_id,
ingress=[SecurityGroup_.SecurityGroupIngress(from_port=3306, to_port=3306, protocol="tcp", security_groups=["test-sg"])])

Ingress takes a list of SecurityGroupIngress objects. | Problem: Unable to create security group rules in AWS using CDKTF

Code:

import cdktf_cdktf_provider_aws.security_group as SecurityGroup_
self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}])

Error:

29: "ingress": [
30: {
31: "cidr_blocks": null,
32: "description": "smartstack_dependency",
33: "from_port": null,
34: "ipv6_cidr_blocks": null,
35: "prefix_list_ids": null,
36: "protocol": "tcp",
37: "security_groups": null,
38: "self": null,
39: "to_port": null
40: }
41: ],
The argument "ingress.0.to_port" is required, but no definition was found.Tried the following code-import cdktf_cdktf_provider_aws.security_group as SecurityGroup_
self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}]) | How to parse ingress object in cdktf security group? |
The CDK (and CloudFormation) don't support running a single ECS task on deployments like that. There is an answer here that appears to use an EventBridge event to trigger the ECS task. Alternatively, you could run it as a non-essential container in your main ECS task, so that it starts up and runs every time your ECS task starts, and being marked as non-essential the container can exit without ECS trying to redeploy the task. However, you might need to look into some sort of distributed locking mechanism if you are running multiple instances of your task and your DB migration tools don't handle locking automatically. I've had to solve this exact problem on multiple projects, and I've stopped trying to run the DB migration in ECS altogether. I'm using AWS CodePipeline now to deploy application updates, and spawning a CodeBuild task inside the VPC which runs the DB migration as part of the deployment. | I can't find a solution on how to deploy a stand-alone task. It is a database migration script that I have to execute during the deployment. I created an image and task definition, and from the UI I can do it, or it can be done from the command line: aws ecs run-task --launch-type FARGATE --cluster MyECSCluster --task-definition app-migrations:1 --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx,subnet-yyyy],securityGroups=[sg-xxxxxxxxxxx]}". But I want to use CDK for this (my whole stack is in CDK). In CDK I found EcsRunTask in the Step Functions package but I don't know how to use it. As far as I understand, it is dedicated to handling flows with Lambdas and I'm not sure if it is the correct approach for me. Maybe someone has a code snippet with an example. I use TypeScript but it could be in any language. If not a snippet, maybe some suggestions on how to deal with this. | How to deploy ECS standalone task that run only once via CDK |
An example I'm using:

const DataTable = new dynamodb.Table(this, 'Example', {
tableName: 'Example',
partitionKey: {
name: 'id',
type: dynamodb.AttributeType.STRING
},
sortKey: {
name: 'name',
type: dynamodb.AttributeType.STRING
},
pointInTimeRecovery: true,
billingMode: dynamodb.BillingMode.PAY_PER_REQUEST
});
// Backup rules
// https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_backup-readme.html
const backupVault = new backup.BackupVault(this, "ExampleBackupVault", { backupVaultName: "ExampleBackupVault" })
const plan = new backup.BackupPlan(this, "ExampleBackupPlan")
plan.addRule(backup.BackupPlanRule.weekly(backupVault))
plan.addRule(backup.BackupPlanRule.monthly5Year(backupVault))
plan.addSelection("ExampleBackupSelection", {
resources: [backup.BackupResource.fromDynamoDbTable(DataTable)]
}) | I want to create a DynamoDB table and a backup using the AWS TypeScript CDK. Creating a DynamoDB table using CDK is pretty straightforward, but implementing the backup is not easy. Could anyone help me implement a backup using CDK? I tried to solve this problem, but there are not enough references on the internet. I would appreciate it if anyone could provide a full example of this scenario. Thanks in advance. I tried using this https://aws-cdk.com/aws-backup/, but it was not really helpful. | AWS DynamoDB Backup using CDK |
There is a bug in the metric.
Anyway, in the case of AWS Aurora, the writes are done only in memory and to the storage cluster. The write latency on the storage cluster, using SSD, is about 1 ms per write. It is still concerning that the EBS write latency on the DB instance (in our case) is high, but the database storage (logs, pages) is not on the instance's disk, volume, or EBS - it lives only on disks in the storage cluster. So it is a bit less concerning and has less impact on INSERT or COMMIT latency. | In the AWS RDS Aurora Monitoring section we notice that, even though most of the time there is no database activity (according to Monitoring and Performance Insights), for the past weeks: Aurora Write IOPS was high at all times; Aurora Write Latency was high at all times (multiple 100s of ms to seconds). Why could this be?
What could cause the Write IOPS saturation?
There is no Database activity that we can see. | Why is AWS Aurora Write IOS high at all times? |
No, that's not possible. The CloudWatch metric EstimatedCharges in the AWS/Billing namespace doesn't provide a tag dimension (only a ServiceName dimension). AWS Cost Explorer doesn't use CloudWatch metrics - a different AWS API is used, which is not implemented in Grafana. | Is there a way to get the costs of AWS resources by tags? It is possible using linked accounts, but I'm trying to figure out if we can filter costs by tags. For linked accounts the query is: dimension_values(us-east-1, AWS/Billing, EstimatedCharges, LinkedAccount, {"Currency": "USD"}) But I'm not sure what the query is for tags. This is for variable/templating. This is how a normal graph dashboard filtering looks like. | Grafana - Get AWS Cost usage by tags |
Unfortunately, it seems like the stack name is NOT part of the SAM templates. This is done via the command arguments to deploy the stack. From the same link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-creating-stack.html

The following example creates the myteststack stack in an Amazon S3 bucket:

PROMPT> aws cloudformation create-stack \
--stack-name myteststack \
--template-body file:///home/testuser/mytemplate.json \
--parameters ParameterKey=Parm1,ParameterValue=test1 ParameterKey=Parm2,ParameterValue=test2

So when creating the stack, the --stack-name argument is how this is set (a SAM-specific example follows this row). The reason I was confused is that I didn't realize where that command was being issued. | This page describes how to set a stack name in some AWS console GUI: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-parameters.html How do I set these values in the SAM Template .yml files? I'm specifically doing this on a stack that is only a Lambda layer, if that matters. I can see that there is some way to do this via the CLI as described here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-creating-stack.html aws cloudformation create-stack --stack-name myteststack --template-url "ssm-doc://arn:aws:ssm:us-east-1:123456789012:document/documentName" Is it even possible to set the name in the template? | How do I set an AWS Stack name (for a Lambda Layer) in a SAM Template? |
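For SAM specifically, the stack name is likewise supplied on the command line rather than in the template; with the --guided option the chosen name is persisted to samconfig.toml for later deploys. The stack name below is a placeholder.

# first run: prompts for settings and writes them (including stack_name) to samconfig.toml
sam deploy --guided --stack-name my-lambda-layer-stack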
Obviously this is not a good workaround - but it worked for me. I went into the file GlueKernel.py in the directory \site-packages\aws_glue_interactive_sessions_kernel\glue_pyspark and hard-coded the 2nd line of this function to set the version to "3.0". I'm on Windows.

def set_glue_version(self, glue_version):
glue_version = str("3.0")
if glue_version not in VALID_GLUE_VERSIONS:
raise Exception(f"Valid Glue versions are {VALID_GLUE_VERSIONS}")
self.glue_version = glue_version | Using interactive Glue Sessions in a Jupyter Notebook was working correctly with the aws-glue-sessions package version 0.32 installed. After upgrading with pip3 install --upgrade jupyter boto3 aws-glue-sessions to version 0.35, the kernel would not start. It gave an error message in GlueKernel.py, line 443, in set_glue_version: Exception: Valid Glue versions are {'3.0', '2.0'} and the kernel won't start. Reverting to version 0.32 resolves the issue. I tried installing 0.35, 0.34, 0.33 and get the error, which makes me think it's something I'm doing wrong or don't understand and not something in the product. Is there anything additional I need to do to upgrade the version of aws-glue-sessions? | set_glue_version exception after upgrading aws-glue-sessions |
I think there are two small issues here:
- you're using the high-level service resource interface, so you don't need to explicitly tell DynamoDB what the attribute types are. They are inferred through automatic marshaling. So you can simply use "key": "value" rather than "key": {"S": "value"} for the string keys
- when deleting an item you need to provide the full primary key, including both partition key and sort key

So, for example, if your partition and sort keys are named pk and sk:

'DeleteRequest': {
'Key': {
'pk': pk,
'sk': sk
}
} | This issue has been raised before, but so far I couldn't find a solution that worked in boto3. A GSI is set on 'solutionId', with the partition key being 'emp_id'. Basically, I just want to delete all records in the table without deleting the table itself. What am I missing here? https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.batch_write_item

table_name = "solutions"
dynamodb_client = boto3.client('dynamodb')
dynamodb_resource = boto3.resource('dynamodb')
table = dynamodb_resource.Table(table_name)
data = table.scan()
delete_list = []
for item in data['Items']:
delete_list.append({
'DeleteRequest': {
'Key': {
'solutionId': {"S": f'{item["solutionId"]}'}
}
}
}
)
def list_spliter(list, size):
return (list[pos:pos + size] for pos in range(0, len(list), size))
for batch in list_spliter(delete_list, 25):
dynamodb_resource.batch_write_item(RequestItems={
f'{table_name}': batch
}
) | DynamoDB - boto3 - batch_write_item: The provided key element does not match the schema |
spark.sql('use my_database')
df = spark.sql('show tables in my_database')
for t in df.collect():
print('table {}'.format(t.tableName))
display(spark.sql('describe table extended {}'.format(t.tableName)).where("col_name='Type' and data_type='MANAGED'"))
#use if condition to filter out the Managed data_type and collect the database and table names
#loop over all databases using "show databases" in an outer loop | I need to identify and list all managed tables in a Databricks AWS workspace. I can see that manually in the table details, but I need to do this for several thousand tables in different databases, and I cannot find a way to automate it.
The only way I found to tell programmatically if a table is managed or external is with the DESCRIBE TABLE EXTENDED command, but that returns it as a value on a column, and cannot be used with SELECT or WHERE to filter, even if I try running it as a subquery.
What is the easiest way to filter the managed tables? | See managed tables in Databricks AWS |
If you change the policy, it won't have any impact on existing users: their state will remain the same and they will be able to continue logging in with their existing password, even if it doesn't meet the new policy. The reason for this, as you mentioned, is that the passwords are stored in a hashed way, making it impossible to know which particular criteria they met. Storing any additional metadata about the password (length, cases, number and symbol usage) would strongly undermine its security. New users will be impacted when signing up; they need to meet the new policy. As for existing users, if you want to make sure the passwords match the new policy, you'll need to do it by calling the AdminResetUserPassword action to invalidate the existing password (or AdminSetUserPassword with a non-permanent new password, if you want to set one) and require a password reset at next login (a short boto3 sketch follows this row). | New security guidelines in the organisation require changing our password policy from 8 characters to 12 (and requiring uppercase, lowercase and special characters). Our users are currently managed in a user pool on AWS Cognito.
The policy change on first look seems straightforward, since you can do it in the UI.
But what does this mean for existing users? Will their state change to FORCE_CHANGE_PASSWORD? Will they automatically get an email to reset their password? Or will they just be denied access the next time they try to log in? Or is the policy going to be applied only to new users? If that is not the case, I suppose this has to be handled by the developer, changing the status of ALL users to FORCE_CHANGE_PASSWORD via a script. Since passwords are hashed, there's no way to tell which current users already have passwords that comply with the new policy. | Updating Cognito user pool password policy |
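A minimal boto3 sketch of the AdminResetUserPassword approach mentioned in the answer; the user pool ID and the way you obtain each username are placeholders.

import boto3

cognito = boto3.client("cognito-idp")
USER_POOL_ID = "us-east-1_example"  # placeholder

def force_reset(username: str) -> None:
    # Invalidates the current password and requires a reset at next login
    cognito.admin_reset_user_password(
        UserPoolId=USER_POOL_ID,
        Username=username,
    )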
One of my Fargate services had mysteriously stopped and I was seeing this error as well. In my case, I had deleted the log group which the service was using, and this caused the launching of the task to fail on subsequent retries. You say that your log group still exists though, so I'm not sure if it's the same problem. Anyway, my FIX using the AWS Console (a CLI equivalent follows this row):
- Navigate to the task that failed to launch
- View in CloudWatch
- Get the name of the CloudWatch log group that the task is expecting to write to
- Create a new CloudWatch log group matching this name

Now the service has a log group to write to and your tasks should successfully start again. | I am trying to set up an ECS Fargate container but it throws me this error: "ResourceInitializationError: failed to validate logger args: create stream has been retried 1 times: failed to create Cloudwatch log stream: ResourceNotFoundException: The specified log group does not exist. : exit status 1" I've already checked and the log group exists and it has the same name in the task definition. I've checked the ecsTaskExecutionRole policy (it has the CloudWatchLogsFullAccess policy). I also thought it could be internet access, but I checked the VPC, subnet, and route table and everything seems OK (I don't know how to check if the container really has internet access). | Error with ECS container: ResourceInitializationError : failed to create Cloudwatch log stream |
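If you prefer the CLI to the console steps above, creating the missing log group is a single command; the log group name here is a placeholder taken from the task definition's awslogs configuration.

aws logs create-log-group --log-group-name /ecs/my-task-family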
You have to let the index.html page handle direct paths such as example.com/path in your CloudFront distribution as well; you can do this by adding a custom error response in CloudFront. Click the ID of your newly created distribution to reach its settings page, then click on the Error Pages tab. Select Create Custom Error Response. Select Yes for a custom error response, set /index.html for the response page path and 200: OK for the response code. This custom error page in the CloudFront distribution is analogous to the Error Document on the S3 bucket (and will work on IE, too). When done, click Create. | I have a React SPA app that is on AWS S3 and I'm using CloudFront. I was getting 404 errors if I refreshed or attempted to directly load any URL other than the root. I have read in other answers that I need to set up a custom error response on CloudFront to redirect to index.html and return 200 OK.
I have done that and I am no longer getting the error message but now I just get shown a white screen. From what I have read this fix seems to work with everyone that has tried it. Does anyone know what I might be doing wrong or how I can fix it? | React SPA using react router on S3 with Cloudfront. Can't refresh or direct load link |
"Failed to connect to <> port 80: Timed out" - this message indicates that curl can't connect to the ALB, not that the ALB can't reach the Lambda. Your ACL and route table look good, so I'd suggest checking the security group of the ALB. It must allow traffic on port 80 from at least your IP (or from a specific CIDR, depending on your requirements). | I have an Application Load Balancer with an HTTP listener that should be invoking a Lambda. However, there is no response when I make a request to the ALB's endpoint (This site can’t be reached). There are no logs in the Lambda's CloudWatch from the requests I'm making, so it seems it doesn't get invoked. I also enabled access logs for the ALB; however, the bucket only contains one file (AWSLogs/ELBAccessLogTestFile) that was created when logging was enabled. Additionally, I enabled health checks on the target group, and it's showing that the target Lambda is healthy. I can see the health check requests in the Lambda's CloudWatch. The ACL allows all traffic. There are 3 subnets associated with the ALB; they all use the same route table, which does link to an Internet Gateway. So to me it looks like everything that's mentioned in the AWS troubleshooting for ALB is fine. Other relevant settings: | AWS application load balancer not forwarding requests |
Is it with a new AWS account? If it is, I imagine you're hitting a constraint for new accounts. A quick message to support to let them know will probably fix it. | Whenever I try to increase the memory of my Lambda function above 3008 MB, I get the error: 'MemorySize' value failed to satisfy constraint: Member must have value less than or equal to 3008. Although it says I can set the memory between 128 MB and 10240 MB, and I am in a supported region for setting the memory above 3008 MB (us-east-1 - AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda functions), it's still giving me the error. I'm honestly stuck because I keep getting this error: Error: Runtime exited with error: signal: killed, which requires more memory, but I can't set it higher than 3008 MB. This is a screenshot of the error I'm getting: | 'MemorySize' value failed to satisfy constraint: Member must have value less than or equal to 3008 |
By following the instructions on AWS Amplify's multiple frontends workflow page, you could get this to work. According to the docs: "If you’re building a mobile and web app in separate repositories, the recommended workflow is to keep the backend definition (the amplify folder) in only one of the repositories and pull the metadata (the aws-exports.js or amplifyconfiguration.json file) in the second repository to connect to the same backend." Therefore, you can keep the backend code in one of the repositories with a fake/demo frontend and then have the real/current frontend in a different repository with only aws-exports.js or amplifyconfiguration.json (the pull command is shown after this row). | I would like to develop a React app using Amplify and two dev teams. Can one team work on the frontend exclusively, without giving them access to the backend code (the amplify folder)? The backend team can have access to both the backend and frontend code. If possible, how would I set it up? | In AWS Amplify, is it possible to have two separate teams (repos) working on the frontend and backend exclusively? |
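In the frontend-only repository, pulling just the backend metadata is typically done with the Amplify CLI; the app ID and environment name below are placeholders.

amplify pull --appId d1a2b3c4example --envName dev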
From the link you provided, the 24-hour value is fixed and cannot be changed with configuration: "The verification code or link is valid for 24 hours." One way to work around it would be to set up a Post confirmation Lambda trigger that checks the time between user creation and confirmation and, if greater than 10 minutes, deletes the user (or performs any other preferred operation), e.g.:

const AWS = require('aws-sdk');
// expiry time of 10 minutes, in ms
const CODE_EXPIRY = 10 * 60 * 1000;
exports.handler = async (event) => {
var cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });
// get the user from the pool, ids are in the lambda event
let params = {
UserPoolId: event.userPoolId,
Username: event.userName
};
let user = await cognitoIdentityServiceProvider.adminGetUser(params).promise();
let currentDate = +new Date();
let createDate = +new Date(user.UserCreateDate);
if ((currentDate - createDate) > CODE_EXPIRY) {
// timeout exceeded for confirmation, revert user creation
await cognitoIdentityServiceProvider.adminDeleteUser(params).promise();
// this makes the confirmation return a failure message
throw 'Confirmation code expired';
}
return event;
};When the custom timeout is expired, it appears the confirmation has failed, as both Hosted UI and API will fail with message "PostConfirmation failed with error Confirmation code expired." | Currently when I am creating a user I am sending one verification 6digit code on user added email, which expires after 24 hours. I had gone through the AWS Cognito Email verification document but didn't get anything the modify the expiry time of Email verification code. Can anyone please let me know how can I change the timing from 24hours to 10minsThis is thelinkwhich i had gone through | change the expiry time set to the verification code sent through Email using AWS cognito |
The basic differences are:
- AWS Inspector - analyzes instances and ECR Docker images from the inside (e.g. malware, viruses) in terms of security. On the instances, a special Inspector agent is required to be running.
- AWS Config - can be applied to any resource, does not require any agent on the instances (thus does not check them from the inside), and you can write your own, fully custom security checks. | I am looking for the basic differences between AWS Inspector & AWS Config Rules | What is the basic difference between AWS Inspector & AWS Config Rules |
You have to set up S3 Event Notifications for your bucket that will trigger your custom Lambda function. Once the object is uploaded to the bucket, your function is going to get invoked and receive all the associated data about the event, such as object name, key, bucket, etc. Then your Lambda may use that information to construct custom messages (the entire object URI) that will then be published to SNS or Kinesis (see the handler sketch after this row). | I am new to AWS. I wanted to get the S3 object URI/path when an object creation event is generated. The object can be created anywhere in the bucket; there can be multiple subfolders dynamically created in the S3 bucket based on the date. So I want to know exactly where the object is created. Is there any way to do so? It seems like most of the message structure examples show only the object name and bucket name, not the entire object URI. I'm planning to send this message to SNS or Kinesis streams with EventBridge. | How to send S3 object path or URI when an object created event is generated |
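A minimal sketch of such a Lambda handler: it reads the bucket and key from each S3 event record, builds an s3:// URI, and publishes it to SNS. The topic ARN is a placeholder, and the message shape is an assumption.

import json
import urllib.parse
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:object-created"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        uri = f"s3://{bucket}/{key}"
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"uri": uri}))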
"I cannot find any report or anything showing me this, which seems unbelievable to me" - Lambda exists to let you write functions without thinking about the infrastructure that it's deployed on. It seems completely reasonable to me that it doesn't give you visibility into its public IP. It may not have one. AWS has the concept of an elastic network interface. This is an entity in the AWS software-defined network that is independent of both the physical hardware running your workload and any potential public IP addresses. For example, in EC2 an ENI is associated with an instance even when it's stopped, and even though it may run on different physical hardware and get a different public IP when it's next started (I've linked to the EC2 docs because that's the best description that I know of, but the same idea applies to Lambda, ECS, and anything else on the AWS network). If you absolutely need to know what address a particular non-VPC Lambda invocation is using, then I think your only option is to call one of the "what's my IP" APIs (see the sketch after this row). However, there is no guarantee that you'll ever see the same IP address associated with one of your Lambdas in the future. As people have noted in the comments, the best solution is to run your Lambdas in a private subnet in your VPC, with a NAT and Elastic IP to guarantee that they always appear to be using the same public IP. | We're using Lambda to submit API requests to various endpoints. Lately we have been getting 403 Forbidden replies from the API endpoint(s) we're using, but it's only happening randomly. When it pops up it seems to happen for a couple of days and then stops for a while, but happens again later. In order to troubleshoot this, the API provider(s) are asking me what IP address / domain we are sending requests from so that they can check their firewall. I cannot find any report or anything showing me this, which seems unbelievable to me. I do see other threads about setting up a VPC with a private subnet, which would then use a static IP for all Lambda requests. We can do that, but is there really no report or log that would show me a list of all the requests we've made and the IP/domain they came from in the current setup? Any information on this would be greatly appreciated. Thanks! | How to see which IP address / domain our AWS Lambda requests are being sent from..? |
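A small sketch of the "what's my IP" check from inside the function, using only the standard library; checkip.amazonaws.com is one such service, and logging the result per invocation is an assumption about how you'd use it.

import urllib.request

def handler(event, context):
    # Ask an external service which public IP this invocation egresses from
    with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
        public_ip = resp.read().decode().strip()
    print(f"outbound public IP for this invocation: {public_ip}")
    return {"public_ip": public_ip}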
There are two ways to accomplish getting multiple keys mapped to different slots:
- You can use curly braces to ensure they all end up on the same slot: {UniqueID}10 and {UniqueID}11 will be on the same slot since only the name inside the braces is hashed.
- Instead of using MGET, use a pipeline, being sure to set transaction to False in Python:

pipe = client.pipeline(transaction=False)
if len(sys.argv) > 1:
    load_file = sys.argv[1]
else:
    load_file = 'pop_users.csv'

with open(load_file, newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    row_count = 0
    for row in reader:
        pipe.hset("user:%s" % (row['username']), mapping=row)
        row_count += 1
        if row_count % 500 == 0:
            pipe.execute()
    pipe.execute() | I am trying to get multiple values from Redis using Python. Using the method redisClient.keys("UniqueID*") to get a list of keys that match a pattern such as "UniqueID10", "UniqueID11", "UniqueID13", and then passing the list of keys to the method redisClient.mget([list of keys]), I am getting the error: mget - all keys must map to the same key slot. Below is the code snippet (Python library):

import redis
rc = redis.RedisCluster(host,port)
all_keys_list = rc.get_client().keys("UniqueID*")
all_values = rc.get_client().mget(all_keys_list)

Error: mget - all keys must map to the same key slot

Can this be solved from Python using any other method or concurrency? Do I have to use slot hashing while putting keys? And is it possible that entries for the same kind of key do not land in the same slot due to memory constraints of the slot, so that I will get this issue again? | Redis [ Exception : mget - all keys must map to the same key slot ] |
Sadly, you can't controlretry policiesas explained in thedocs:With the exception of HTTP/S, youcan't change Amazon SNS-defined delivery policies. | My use case :
From the spring-boot application, I am publishing a payload to AWS SNS, this SNS is triggering the Lambda function.If the lambda function fails, is there any configuration available on AWS lambda where we can specify the number of retries and the duration after which each retry happens? | Understanding AWS lambda retry mechanism |
Try to configure the security group:
EC2 -> Security Groups -> choose your security group -> edit -> check source
(0.0.0.0/0 allows everyone to connect to the database) | I'm getting this error while performing the steps from the video: https://www.youtube.com/watch?v=XDMgXZUfa10&t=897s The error: | Unable to connect to PostgreSQL database on Amazon RDS from pgAdmin4: internal server error: port 5432 failed: timeout expired |
"Would this query do DynamoDB query operations or DynamoDB scan operations under the hood?" It will be doing multiple DynamoDB queries, as your WHERE clause condition is filtering on a DynamoDB partition key. This is confirmed by the documentation: "To ensure that a SELECT statement does not result in a full table scan, the WHERE clause condition must specify a partition key. Use the equality or IN operator." | We need to query a large (2 TB+) DynamoDB table to get multiple items based on their partition keys. We are planning to use PartiQL as it supports the IN operator, as such: SELECT * FROM table_test where pk IN ('1234','1112'); Would this query do DynamoDB query operations or DynamoDB scan operations under the hood? We would like to avoid table scans due to them being more expensive. | Does the IN PartiQL operator query or scan DynamoDB tables? |
AWS API Gateway is more suited to the client-credentials OAuth authentication flow for point-to-point connectivity. It doesn't provide many features for rate limiting based on users. You can use a Lambda authorizer with DynamoDB to store user limits and the current value and provide rate limiting per user (see the sketch after this row).
There is no feature provided by AWS API Gateway for user-based limiting. | Currently I have a serverless API using Lambda and API Gateway. The next feature I want to build is user authentication using AWS Cognito, and then apply rate limiting to each user. How would I go about doing this? Can API Gateway communicate with Cognito? I read in the AWS docs: "Per-client throttling limits are applied to clients that use API keys associated with your usage policy as client identifier." However, as far as I understand, this is referring to rate limiting per x-api-key, which is used to invoke the Lambda. I don't really want to have to create a new one of these keys for every user, as there is a hard limit of 10,000 issued at one time. I would much rather use Cognito user pool keys. I know an alternative approach would be to build a custom authorizer which would write user IDs to an in-memory database such as Redis or ElastiCache; this would then be queried on every request to calculate the last time that user made a request. However, I don't really like this approach, as it won't be as scalable as the serverless API and may pose a bottleneck for the entire API. How is everyone implementing rate limiting like this? Have I missed something fundamental? Does Amazon have an out-of-the-box solution I can use? | How do you implement rate limiting on a serverless lambda application? |
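A rough sketch of the Lambda-authorizer-plus-DynamoDB idea from the answer: each request atomically increments a per-user counter for the current minute and is denied once a limit is exceeded. The table name and key schema (user_id partition key, window sort key), the limit, and the x-user-id header used to identify the caller are all assumptions for illustration; a real authorizer would validate the Cognito JWT and use its 'sub' claim.

import time
import boto3

table = boto3.resource("dynamodb").Table("rate-limits")  # placeholder table
LIMIT_PER_MINUTE = 100  # assumption

def is_allowed(user_id: str) -> bool:
    window = int(time.time() // 60)  # current minute bucket
    resp = table.update_item(
        Key={"user_id": user_id, "window": window},
        UpdateExpression="ADD request_count :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return resp["Attributes"]["request_count"] <= LIMIT_PER_MINUTE

def handler(event, context):
    # Placeholder: real code would validate the Cognito JWT from the
    # Authorization header and take the user ID from its 'sub' claim.
    user_id = event["headers"]["x-user-id"]
    effect = "Allow" if is_allowed(user_id) else "Deny"
    return {
        "principalId": user_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{"Action": "execute-api:Invoke", "Effect": effect, "Resource": event["methodArn"]}],
        },
    }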
Check if the hostname type setting in the subnet and launch template configuration is using "resource name", and if that is the case then switch to using "IP name" instead. I think this is caused by some weird pattern matching going on in the AWS EKS control plane (as of v1.22) where it will not issue a certificate for a node if that node's hostname doesn't match its requirements. You can test this quickly by adding another node group to your cluster with the nodes' hostnames set to IP name. | Both the logs and exec commands are throwing a TLS error:

$ kubectl logs <POD-NAME>
Error from server: Get "https://<NODE-PRIVATE-IP>:10250/containerLogs/<NAMESPACE>/<POD-NAME>/<DEPLOYMENT-NAME>": remote error: tls: internal error$ kubectl exec -it <POD-NAME> -- sh
Error from server: error dialing backend: remote error: tls: internal error | kubectl exec/logs on EKS returns "remote error: tls: internal error" |
From docs aboutRequestCountPerTarget:The average number ofrequests receivedby each target in a target group.andsends its metrics in60-secondintervals.And it representsaveragenumber of requests in 1 minute intervals. I think you shouldread upon how metrics work in AWS. | When using a scaling policy in AWS fargate service, i want to scale using the "request count per target" metric.But i am having difficulty understanding how this is determined. Is there a time period associated with the request count?eg: requests per target per minuteOr are these concurrent requests? If it is concurrent requests, would concurrent requests be determined as requests which have been sent but not responded to? | AWS AutoScaling - how does the "Request count per target" measurement works? |
Seems like ecs fargate task have different stages and inDeprovisioningstage it deletes all the networking related stuff where network interface is also deleted.Also I was viewing this task inStoppedstage that's why i was getting the error. | While running ecs fargate task aws automatically stopped my task with errorThere was an error while describing network interfaces.
The networkInterface ID 'eni-0c21gdfgerg' does not existMy task was running for more than a day but now it suddenly stopped.I checked that eni- and that eni is not existing.How can I troubleshoot it? | aws: Networking issue |
You can only do that usingcustom resourceor amacrothat you would have to develop yourself in the form of lambda functions. | My templaste is (for one emeil):Parameters:
MailAlarmsSNS:
Type: String
Default:[email protected]MessagesInErrorTopic:
Type: AWS::SNS::Topic
Properties:
TopicName: foo
DisplayName: This topic is used to send an email
Subscription:
- Endpoint: !Ref MailAlarmsSNS
Protocol: emailI want use a dynamic list input (comma separated)? | How to use email dynamic list to SNS Topic Subscription in AWS CloudFormation? |
I don't speak on behalf of AWS, but from my digging this is what I found.FromAWS Terms of Use, youmustindicateIntendedUseasStorageif you are going to store the data (emphasis mine).82.3 You may not:e. Store or cache Location Data,except(i) for geocoding and reverse-geocoding results (other than as prohibited in Section 82.4.a)when you indicate the result will be stored in the API parameteror (ii) to comply with legal, regulatory, or reasonable internal record-keeping requirements.Reading the other terms for this service, you can feel thatthey are very concernedabout customers using their location services as a means to offer their own location services. So from this perspective, it makes sense that they require you to declare yourintentof storing the data. Keeping this in mind, if you think about the parameter nameIntendedUse, it sounds a lot like a legal declaration rather than a technical configuration.Furthermore, thepricingfor API withstored resultsis 8x more expensive.Addresses geocoded $0.50 per 1,000Addresses geocoded (stored results) $4.00 per 1,000So, to answer your question if this parameter has a purpose at all, it seems it does have a significant purpose legally and financially. Certainly not what I expected, but all evidences I found point to this conclusion.And again disclaimer: I don't speak on behalf of AWS. | I'm implementing an address suggestions solution using theAWS Locationservice.On thePlace Indexresource, there is anIntendedUseproperty that takes eitherSingleUse(default) orStorage(See theCloudFormation definition).From the description in the CloudFormation doco above, it sounds like if I intend to store or cache results, I should useStorage. Since I intend to eventually store results of Place Index functions I chose storage with the Esri data provider.However, once I did this and called theSearchPlaceIndexForSuggestionsfunction I received a validation error:{
"errorType": "ValidationException",
"errorMessage": "PlaceIndex [redacted] cannot be used for SearchPlaceIndexForSuggestions because it has IntendedUse Storage",
...
}Following this, I don't really understand the purpose of this property or if it has any practical effect. | What is the effect of IntendedUse in AWS Location Place Index resources? |
You have to modify your .platform as shown in the docs. For example, you could have the following .platform/nginx/conf.d/myconfig.conf with content: client_max_body_size 20M; | When I post a form with an image taken from my phone I receive the error "413 request entity too large". I realize that an image included in the form taken by the phone camera is too large, and the server rejects the request... but how can I fix this issue? I'm using the Java Spring framework and a MySQL database, all of this handled with Amazon AWS services. | How to solve 413 request entity too large |
Well, as you found out in "volume.kubernetes.io/selected-node never cleared for non-existent nodes on PVC without PVs #100485" - this is a known issue, with no available fix yet. Until the issue is fixed, as a workaround, you need to remove the volume.kubernetes.io/selected-node annotation manually. | I'm experiencing some issues when scaling the EC2 nodes of my k8s cluster up and down. It might happen that sometimes I have new nodes, and old ones are terminated. The k8s version is 1.22
Warning FailedMount 33s (x13 over 27m) kubelet....I am checking that pv exists, pvs exists as well. However on pvc I see annotationvolume.kubernetes.io/selected-nodeand its value refers to the node that already not exist.When I am editing the pvc and deleting this annotation, everything continue to work.
Another thing that It happens not always, I don't understand why.I tried to search information, found some couple of linkshttps://github.com/kubernetes/kubernetes/issues/100485andhttps://github.com/kubernetes/kubernetes/issues/89953however I am not sure that I properly understand this.Could you please helm me out with this. | Kubernetes pods are stuck after scale up AWS. Multi-Attach error for volume |
The --fail flag will cause cdk diff to exit with exit code 1 in case of a diff. Add conditional logic to handle the exit code cases: cdk diff --fail && echo "no diffs found" || echo "diffs found" | I am using CDK to deploy a CloudFormation stack to AWS. It has a cdk diff command to tell me what changed in this deployment. If there is nothing changed, it just shows "There were no differences" for each stack included in the CDK project. I have a requirement to run a different command based on whether the CDK requires a change. How can I know whether it requires a change from a script? I have checked that the cdk diff return code is 0 for both change and no change. What is the right way to know whether the change-set will change anything? | How to use `cdk diff` to programmatically check whether a stack needs an update? |
"Locally, I've been able to make it work by setting the environment variable PYTHONPATH to usr/local/airflow. Is it the best way? If not, how can I make it work on MWAA?" - When deploying Airflow to an MWAA environment, you don't explicitly set the PYTHONPATH environment variable. "I try to use functions from ./dags/utils/secrets by importing them like: from dags.utils.secrets import get_secret" - Adjust the Python import statement relative to the MWAA environment's DAGs folder. For example, if the DAGs folder is s3://<bucket>/dags, then the import statement would be:

from utils.secrets import get_secret

Example DAGs folder:

s3://<bucket>/dags/__init__.py
s3://<bucket>/dags/my_dag/__init__.py
s3://<bucket>/dags/my_dag/dag.py
s3://<bucket>/dags/utils/__init__.py
s3://<bucket>/dags/utils/file.py
s3://<bucket>/dags/utils/secrets.py
s3://<bucket>/dags/utils/date.py | I'm trying to use local module inside a dag on MWAA.The folder structure looks like :.
├── __init__.py
├── dags
│ ├── __init__.py
│ └── my_dag
│ ├── __init__.py
│ └── dag.py
│ └── utils
│ ├── __init__.py
│ └── file.py
│ └── secrets.py
│ └── date.pyI try to use functions from./dags/utils/secretsby importing them like :from dags.utils.secrets import get_secretLocally, I've been able to make it works by setting environment variable PYTHONPATH tousr/local/airflowIs it the best way ? If not how can I make it works on MWAA ?Thank you, | Set PYTHONPATH in MWAA |
AWS Compute optimizer is working as it should because:-According to faqhttps://aws.amazon.com/compute-optimizer/faqs/#AWS_Lambda_function_recommendationsCompute Optimizer helps you optimize two categories of Lambda functions. The first category includes Lambda functions that may be over-provisioned in memory sizes. You may consider downsizing the memory sizes of these functions to save costs. The second category includes compute-intensive Lambda functions that may benefit from additional CPU power. You may consider increasing their memory sizes to trigger an equivalent increase in CPU available to these functions and reduce execution time. For functions that do not fall under any of these categories, Compute Optimizer does not deliver recommendations for them.For functions that do not fall under any of these categories, Compute Optimizer does not deliver recommendations for them. | I have opted for theAWS Compute Optimizerin order to get recommendations on how to save costs in our infrastructure. As expected, I get recommendations for EC2 instances, Auto Scaling groups and EBS volumes.However, it fails to show the same forLambda functions, as can be seen from the below screenshot, in spite of active Lambda usage in the account.Haven't been able to understand what seems to be missing. Is there a way I can fix this? | AWS Compute Optimizer - Lambda Data Unavailable |
It would be against the least privilege rule. The permissions in a single role should be just enough for a given task to be completed. Since a role can assume another role, and that role can assume yet another role, and so on, the cumulative permissions after a chain of assumptions would be against the least privilege rule. | When you assume a role (user, application or service), you give up your
original permissions and take the permissions assigned to the role. Why can't new permissions from the assumed role be added to the existing ones? Is this to avoid potential security issues when existing and new policies are mixed up? | In AWS, why am I giving up existing permissions when assuming a role |
You can utilize S3 byte-range fetching, which allows fetching small parts of a file in S3. This capability then allows us to fetch large objects by dividing the file download into multiple parts, which brings the following advantages:
- Part download failure does not require full re-downloading of the file.
- Download pause/resume capability.
- Download progress tracking.
- Retry of parts that failed or were interrupted by network issues.
- Sniffing headers located in the first few bytes of the file if we just need to get metadata from the files.

You can split the file download by your size of choice (I propose 1-4 MB at a time) and download the parts chunk by chunk; when each of the get-object promises completes, you can trace how many have completed (see the sketch after this row). A good start is by looking at the AWS documentation. | I'm using node.js with the aws-sdk (for S3). When I am downloading a huge file from S3, how can I regularly retrieve the progress of the download so that the front-end can show a progress bar? Currently I am using getObject (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property). The code to download the file works. Here's a snippet of my code...

return await new Promise((resolve, reject) => {
this.s3.getObject(params, (error, data) => {
if (error) {
reject(error);
} else {
resolve(data.Body);
}
});

I'm just not sure how to hook into the progress as it's downloading. Thanks in advance for any insight! | Retrieving the progress of getObject (aws-sdk) |
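The question is about the Node.js SDK, but as a language-neutral illustration of the byte-range idea above, here is a short Python/boto3 sketch that downloads an object in ranged chunks and reports progress after each part; the bucket, key, chunk size, and progress callback are placeholders.

import boto3

s3 = boto3.client("s3")
CHUNK = 4 * 1024 * 1024  # 4 MB parts, as suggested above

def download_with_progress(bucket, key, dest_path, on_progress=print):
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    downloaded = 0
    with open(dest_path, "wb") as f:
        for start in range(0, size, CHUNK):
            end = min(start + CHUNK, size) - 1
            part = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
            f.write(part["Body"].read())
            downloaded = end + 1
            on_progress(downloaded / size)  # e.g. feed a progress bar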
I think it is because of the format that the !If condition expects. Following the documentation from AWS, the format is: !If [condition_name, value_if_true, value_if_false] - and if you check your template, you have four elements, not three. Also, the pseudo parameter is (with double colons): AWS::NoValue. So, a possible solution to add the two accounts that you need when the condition is true could be to add a new condition that combines Condition1 and Condition2 that you already have, using the !And function, like this:

Conditions:
Condition1: your condition
Condition2: your condition
ConditionCombined: !And [!Condition Condition1, !Condition Condition2]
Resources:
Role1:
Type: AWS::IAM::Role
Properties:
RoleName: 'ABCRole'
AssumeRolePolicyDocument:
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
AWS:
- 1234567890 # Some AWS account
- !If
- ConditionCombined
- arn:aws:iam::11111111111111:role/ABCDE_Role # first role
- !Ref AWS::NoValue
- !If
- ConditionCombined
- arn:aws:iam::22222222222222:role/ABCDE_Role # second role (different account number)
- !Ref AWS::NoValue | I have the following template defining a IAM policy which is not working:RoleName: 'ABCRole'
AssumeRolePolicyDocument:
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
AWS:
- 1234567890 # Some AWS account
- !If
- !And
- !Condition Condition1
- !Condition Condition2
- arn:aws:iam::11111111111111:role/ABCDE_Role # first role
- arn:aws:iam::22222222222222:role/ABCDE_Role # second role (different account number)
- !Ref AWS:NoValueI am trying to achieve that: when bothCondition1andCondition2are true, I will be able to attacharn:aws:iam::11111111111111:role/ABCDE_Roleandarn:aws:iam::22222222222222:role/ABCDE_Roleas two additional principals. Otherwise, do nothing -- having1234567890as the only principal.Please note that,arn:aws:iam::11111111111111:role/ABCDE_Roleandarn:aws:iam::22222222222222:role/ABCDE_Roleare only different from the aws account, so maybe I could use!Subto replace the the account number? Somewhat like:for account in [11111111111111, 22222222222222]:
!Sub arn:aws:iam::${account}:role/ABCDE_RoleHow should I modify my template above? Thank you in advance! | CFn: Use !If to add multiple Principals to a statement |
I know you planned to do a custom Lambda, but check whether WAF already fulfills your use case. For example, the rate-limit rules described here let you define a per-IP rate over a 5-minute window: https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-rules-creating.html If you are not doing anything else, a custom Lambda function may not be needed. EDIT: If you want to go down the path of CloudWatch alarms, I think you can define a metric filter to create a CloudWatch metric, and then create the alarm based on that metric (a minimal boto3 sketch follows this Q&A entry): https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html | My problem: I would like to blacklist IPs which are accessing my public AWS API Gateway endpoint more than 5 times an hour. My proposed solution: requests are logged to CloudWatch; requests are counted and grouped by IP; an alarm monitors IPs and sends a message to an SNS topic in case the threshold is met; Lambda is triggered by the message and blacklists the IP. I am able to log and count the IPs by using the Insights query below:
fields ip
| stats count() as ipCount by ip
| filter ispresent(ip)
| sort ipCount descWhat I am struggling to accomplish is getting an CloudWatch Alarm based on this query.I have searched a lot but no success. Any ideas on how to create such a metric / alert? | Create an alarm based on a CloudWatch insight query |
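A minimal boto3 sketch of the metric-filter-plus-alarm route mentioned in the answer above. The log group name, filter pattern, namespace, and SNS topic ARN are all placeholder assumptions, and a Logs Insights query itself cannot drive an alarm directly, so the filter pattern has to approximate what the query matches:
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1. Turn matching log events into a custom metric
logs.put_metric_filter(
    logGroupName="/aws/api-gateway/access-logs",   # placeholder
    filterName="request-count",
    filterPattern="",                              # empty pattern counts every event; adjust to your log format
    metricTransformations=[{
        "metricName": "RequestCount",
        "metricNamespace": "Custom/ApiGateway",
        "metricValue": "1",
    }],
)

# 2. Alarm when the metric crosses the threshold over an hour
cloudwatch.put_metric_alarm(
    AlarmName="too-many-requests",
    Namespace="Custom/ApiGateway",
    MetricName="RequestCount",
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:blacklist-topic"],   # placeholder
)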
The bucket is retained due to its RemovalPolicy. From the CDK documentation on RemovalPolicy: "The removal policy controls what happens to the resource if it stops being managed by CloudFormation."

- DESTROY: This is the default removal policy.
- RETAIN: This uses the 'Retain' DeletionPolicy, which will cause the resource to be retained in the account, but orphaned from the stack.
- SNAPSHOT: This retention policy deletes the resource, but saves a snapshot of its data before deleting, so that it can be re-created later.

Regarding your question on which resources will be retained: many stateful resources in the AWS Construct Library accept a removalPolicy as a property, typically defaulting it to RETAIN. Typically, this includes resources like S3 buckets, database resources, etc. From the AWS CDK documentation for S3 Buckets: removalPolicy? Type: RemovalPolicy (optional, default: the bucket will be orphaned). The overview page also has more details, and a short CDK Python sketch follows this Q&A entry. | For example, my CDK project has an S3 bucket, IAM role, and Lambda function.
$ cdk bootstrap
$ cdk deployThis creates an S3 bucket, IAM role, and Lambda function.$ cdk destroyIt removes the IAM role and Lambda function but the S3 bucket is retained.Of course, the S3 bucket is empty.Is this the correct behavior? if so, which resources will be retained other than S3 buckets? | S3 bucket is not removed by CDK destroy |
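A minimal CDK sketch of the fix implied by the answer above, written in Python with CDK v2-style imports assumed (construct IDs are placeholders); it explicitly opts the bucket out of the default RETAIN behaviour so cdk destroy removes it:
from aws_cdk import RemovalPolicy, Stack, aws_s3 as s3
from constructs import Construct

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        s3.Bucket(
            self, "Bucket",
            removal_policy=RemovalPolicy.DESTROY,   # delete the bucket on `cdk destroy`
            auto_delete_objects=True,               # empty it first so the delete can succeed
        )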
As of 24 March 2022, Lambda supports configurable ephemeral storage of up to 10 GB (a small boto3 sketch follows this entry). Reference: https://aws.amazon.com/blogs/aws/aws-lambda-now-supports-up-to-10-gb-ephemeral-storage/ | AWS Lambda is limited to storing 512 MB of ephemeral data in /tmp. For a particular use case I need to process more than this - up to several GB in a few hundred files. I could mount an EFS drive, but that then requires mucking about with VPC and NAT Gateway, which I am trying to avoid. I am using various executables (via layers) on these files, so I can't just load the files into memory and process them. Is there a way of setting up a ramdisk in Lambda (I understand that I would have to provision and pay for a large amount of memory)? I have tried executing mount -t tmpfs -o size=2G myramdisk /tmp/ramdisk but receive the error mount: command not found | AWS Lambda - mounting RAM disk
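A small boto3 sketch of enabling the larger /tmp mentioned in the answer above (the function name is a placeholder; the size is in MB, 10240 being the maximum):
import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",          # placeholder
    EphemeralStorage={"Size": 10240},    # 10 GB of /tmp
)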
This is because the new 1.139 release upgraded the schema version to 16.0.0, whereas 2.5.0 is still on 15.0.0. 16.0.0 in CDKv2 will be included in a future release. In the meantime, install the 1.139 version of the CLI, it will work.A general way to solve this would be to upgrade your constructs to v2 to never have this mismatch.GitHub issue.UPDATE:The latest CDK CLI 2.9.0 supports schema version 16.0.0. | we executed the same workflow an hour apart. The initial run was successful and then we received the following error in the subsequent execution:This CDK CLI is not compatible with the CDK library used by your
application. Please upgrade the CLI to the latest version. (Cloud
assembly schema version mismatch: Maximum schema version supported is
15.0.0, but found 16.0.0)This error occurs in the cdk synth stage. As far as I can tell, we are installing aws-cdk@latest (2.5.0) and our requirements.txt is installing a number of packages. When I compared the dependencies between the two runs I found the following:Successful build:Collecting aws-cdk.cloud-assembly-schema==1.138.2Downloading aws_cdk.cloud_assembly_schema-1.138.2-py3-none-any.whl (150 kB)Failed build:Collecting aws-cdk.cloud-assembly-schema==1.139.0Downloading aws_cdk.cloud_assembly_schema-1.139.0-py3-none-any.whl (153 kB)I'm assuming the "latest" version was picked up? However, how can I track this type of information? I have tried a number of searches include aws-cdk versions, aws-cdk 1.139.0 release date, etc... Perhaps, I don't understand the package versioning?Any feedback is appreciated. Thank you! | Github action failed on aws-cdk dependency |
There's no issue for Vector agent to access the token, but the token will now expire within an hour by default; compare to previous where it has no expiry. When the token has past the validity time, the agent application needs to reload the token from the mounted token volume (previously was a secret volume). The change is needed in the agent application to support this paradigm, not on K8s. | How can Imountservice account token,
we are using a chart which doesn't support it and after a hour the chart is failing.https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume?I understand that from 1.22.x its by default behavior of k8sitsBoundServiceAccountTokenVolumein the following linkhttps://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/Im referring tomanually mounting the service account token.Im talking about vectordev which doesnt support thehttps://vector.dev/docs/setup/installation/platforms/kubernetes/updateaccording to this post this is the way to do it on k8s 1.22.x
please provide an example since im not sure how to make it workhttps://github.com/vectordotdev/vector/issues/8616#issuecomment-1010281331 | k8s mount service account token |
You can use the Target's Input or InputTransformer attribute to send information to the target (SNS/SQS in your scenario). You can pass a static JSON message, or modify the input message depending on the event data. Note: the AWS EventBridge console has these fields, so you can test them without writing code. You won't see the target input information in the sample event details, but if you go to the SQS console and poll for available messages, you can confirm that the messages passed to SQS include the JSON string you defined on the EventBridge side. SQS sample message: | I want to schedule events via EventBridge, so that
Event Bridge will send the events to SNS and subscribe with SQS, then in my springboot application i will listen to SQS ..but the problem here is, i cannot find a way to provide details in this event.i want to send something like this:{
"version": "0",
"id": "89d1a02d-5ec7-412e-82f5-13505f849b41",
"detail-type": "Scheduled Event",
"source": "aws.events",
"time": "2016-12-30T18:44:49Z",
"detail": {"use-case-name": "Update all customers"}
}is there any possibility i can put details in there?i try to configure like thisbut the event is still does not have any information in details{
"version": "0",
"id": "7e62a5fa-2f75-d89d-e212-40dad2b9ae43",
"detail-type": "Scheduled Event",
"source": "aws.events",
"resources": [
"..."
],
"detail": {}
} | Schedule events via EventBridge with details |
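A minimal boto3 sketch of the static-Input approach from the answer above (the rule name and target ARN are placeholders):
import boto3, json

events = boto3.client("events")
events.put_targets(
    Rule="my-schedule-rule",                                    # placeholder
    Targets=[{
        "Id": "sns-target",
        "Arn": "arn:aws:sns:eu-west-1:123456789012:my-topic",   # placeholder
        # Static JSON delivered to the target instead of the bare scheduled event
        "Input": json.dumps({"use-case-name": "Update all customers"}),
    }],
)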
Are you doing it from Amplify Studio? I faced a similar problem for my React.js project while doing it from Amplify Studio, but using the Amplify CLI from the root folder of the project, I could deploy it. Pro tip: in case you have copied the project to a new location/machine, run amplify configure and amplify init again to ensure there is no "team-provider-info.json does not exist" error. | I am trying to deploy my data model for a React Native app on AWS Amplify. After creating my model and importing my custom auth from the Cognito user pool I created earlier, I keep getting the error "Parameters: [unauthRoleName] must have values." on deployment. How do I solve this? | Parameters: [unauthRoleName] must have values
Currently there is no way to do so. They did not provide any syntax for overwriting. A workaround is usingParameterswith an extra depth:{
"StartAt": "Task1",
"States": {
"Task1": {
"Type": "Task",
...
"Parameters": {
"executionId.$": "$$.Execution.Id",
"input.$": "$"
},
...
}
}
}and get output:{
"input": {
"name": "A",
"address": "B"
},
"executionId": "arn:aws:states:us-east-1:xxxx:execution:xxx-us-east-1:121b6750-5182-18eb-fd02-3b72c3e2f644"
}Referenceshttps://states-language.net/spec.html | StepFunction's input is like:{
"name": "A",
"address": "B"
}How to add key/dynamic_value (like"executionId": "$$.Execution.Id") to root path:{
"name": "A",
"address": "B",
"executionId": "arn:aws:states:us-east-1:xxxx:execution:xxx-us-east-1:121b6750-5182-18eb-fd02-3b72c3e2f644"
} | How to add a new key/value(dynamic) to the input (root path) of a StepFunction? |
You can add temp user as follows:export AWS_ACCESS_KEY_ID=<your AWS_ACCESS_KEY_ID >
export AWS_SECRET_ACCESS_KEY=<your AWS_SECRET_ACCESS_KEY>
export AWS_REGION=<your AWS_REGION>When you set these values, you will be able to see similar like these:{
"Account": "2*********4",
"UserId": "A*****************V",
"Arn": "arn:aws:iam::275*******04:user/s3ba*****ser"
}Once you are done, do the rest :unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_REGION | How can I configure a temporary AWS-CLI user if I already have a default user in the.aws/path ??? if I could create a temp user, I could test my task without interfering default user !! | configure temp aws cli user still existing a default cli user |
It is not possible to filter messages that come from CloudWatch directly. I had the same issue recently. In order to filter messages in the SNS topic, they must have appropriate Message Attributes. What you can do is this:

- Create a Lambda function (give it permission to send messages to the SNS topic).
- Point your CloudWatch alarms to send the alarm to the Lambda function.
- In your Lambda function, write a parser that recognizes which project the message is meant for.
- From the Lambda function, publish a message to the SNS topic and add a Message Attribute that you can use in SNS for filtering (see the sketch after this entry).

| We are using an SNS topic that is shared across the enterprise for different projects and it has to be that way, but with everyone using that SNS topic in CloudWatch alarms, we get email notifications for all the alarms, which we don't want; we want to receive notifications for just our alarms. The solution could be to add a filter on the subscription, but the message coming from a CloudWatch alarm doesn't have any message attributes on which we can put the filter. Can anyone please suggest a solution to the problem, or let me know if there is a way to add custom message attributes based on which we can filter? | Filter messages published by cloudwatch alarms on an SNS topic to receive email notifications
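A minimal Python sketch of the republish step from the answer above, assuming the alarm reaches the function via an SNS subscription; the topic ARN and the alarm-name-to-project mapping are placeholder assumptions:
import boto3, json

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:shared-topic"   # placeholder

def handler(event, context):
    # CloudWatch alarm payloads arrive wrapped in an SNS record
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    project = "my-project" if alarm["AlarmName"].startswith("myproj-") else "other"

    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps(alarm),
        MessageAttributes={
            "project": {"DataType": "String", "StringValue": project},
        },
    )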
Tables in Athena store their data in an external source, which in AWS is S3. When you look at the DDL from SHOW CREATE TABLE there is a LOCATION which points to the S3 bucket. If the LOCATION is different, that is probably the reason you see no rows when you execute a select on this table.
CREATE EXTERNAL TABLE `test_table`(
...
)
ROW FORMAT ...
STORED AS INPUTFORMAT ...
OUTPUTFORMAT ...
LOCATION s3://bucketname/folder/
If the location is correct, it could be that you have to run the MSCK REPAIR TABLE command to update the metadata in the catalog after you add Hive-compatible partitions. From the doc: "Use the MSCK REPAIR TABLE command to update the metadata in the catalog after you add Hive compatible partitions. The MSCK REPAIR TABLE command scans a file system such as Amazon S3 for Hive compatible partitions that were added to the file system after the table was created." Make sure to check the Troubleshooting section as well. One thing that I was missing once was the glue:BatchCreatePartition permission on my IAM role. | I have an AWS Athena table that has records which are returned by a standard "SELECT * FROM ..." query. I do a SHOW CREATE TABLE on this and use it to create a new table, test2. The same select query on test2 always returns empty rows. Why is this happening? | AWS athena table empty result
Based on the error you get, it seems that you are missing some IAM permissions. I would start by addingAWSElasticBeanstalkManagedUpdatesCustomerRolePolicyManaged policy to your user.This policy is probably more permissive than what you actually need, but it would be difficult to pinpoint exactly, which permissions are necessary. | I tried to run the DescribeConfigurationSettings API method for the ElasticBeanstalk as follow:AWSElasticBeanstalk ebs = AWSElasticBeanstalkClientBuilder.standard().withRegion(Regions.EU_CENTRAL_1).withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
for(ApplicationDescription ad : ebs.describeApplications().getApplications()){
System.out.println(ad);
for(EnvironmentDescription ed : ebs.describeEnvironments(new DescribeEnvironmentsRequest().withApplicationName(ad.getApplicationName())).getEnvironments()) {
System.out.println(ebs.describeConfigurationSettings(new DescribeConfigurationSettingsRequest().withApplicationName(ad.getApplicationName()).withEnvironmentName(ed.getEnvironmentName())).getConfigurationSettings());
}
}However, I got the exception of Access Denied with the following message:Exception in thread "main"
com.amazonaws.services.elasticbeanstalk.model.AWSElasticBeanstalkException:
Access Denied: S3Bucket=elasticbeanstalk-env-resources-eu-central-1,
S3Key=eb_patching_resources/instance_patch_extension.linux (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
NB44V0RXQG2WHH4T; Proxy: null) (Service: AWSElasticBeanstalk; Status
Code: 400; Error Code: InvalidParameterValue; Request ID:
b058aa54-fc9c-4879-9502-5cb5818bc64a; Proxy: null)How can I resolve this issue? | Access Denied for ElasticBeanstalk DescribeConfigurationSettings API method |
From this post: https://aws.amazon.com/blogs/mt/aws-partners-determine-aws-support-plans-in-organization/ it seems that it cannot be done directly (there is no API call that returns the Support plan), but you can use the describe-severity-levels API and, based on the response, determine which Support plan you have:

- If an AWS account has an Enterprise support plan, the highest severity levels returned are critical and urgent.
- If an account has a Business support plan, the highest severity level returned is urgent.
- For the Developer support plan, the severity levels returned are low and normal.
- If a premium AWS Support plan is not currently enabled, the following error is returned: "An error occurred (SubscriptionRequiredException) when calling the DescribeSeverityLevels operation: AWS Premium Support Subscription is required to use this service."

| I am looking to programmatically list my current Support Plan that is active in AWS (Basic, Business, Enterprise On-Ramp, Enterprise). I cannot find this anywhere in AWS's AWSPowerShell help or AWS CLI help. Is it possible to find this value programmatically using AWS CLI or AWSPowerShell?
Requested call and output would be similar to:C:\> Get-CurrentPremiumSupportPlanOutput:"Business"Reference:Similar StackOverflow question, but my question is only about showing/listing/describing the
current value, not changing it:Can the AWS Support Plan be changed via CLI/API?AWS Support Plans. There are only 4 to choose from -https://aws.amazon.com/premiumsupport/plans/AWS PowerShell help (general) -https://docs.aws.amazon.com/powershell/AWS CLI help (general) -https://docs.aws.amazon.com/cli/index.html | Programmatically list Current AWS Support Plan? (AWSPowerShell or AWS CLI 1/2) |
Add your domain and certificate by updating your "AWS solutions" CDK app. CDK apps are designed to be modified and redeployed. The Distribution construct accepts certificate?: ICertificate and domainNames?: string[] as props to the constructor.
Instances also expose an addBehavior(pathPattern, origin, behaviorOptions?) method, which seems handy.
If the app is in production, be mindful that updates sometimes result in resource replacement or interruption.
The CloudFormation docs note the update behaviour for each service property. In the happy case you will see "Update requires: No interruption". Run the cdk diff command to preview the changes
CloudFormation will make to your resources. What about cloudfront.Distribution.fromDistributionAttributes? Many CDK classes have static from... methods
domainName: 'd111111abcdef8.cloudfront.net',
distributionId: '012345ABCDEF',
});Let's say I have the alias domain name and certificate ARN ready to use.const domainName = 'mysite.example.com';
const certificateArn = 'arn:aws:acm:us-east-1: 123456789012:certificate/abcdefgh-1234-5678-9012-abcdefghujkl';Where do I go from here? | How to add domain alias to existing CloudFront distribution using AWS CDK |
So - it turns out some subnet combinations work, some don't. I believe it's a bug in MWAA. | In the past two days, we can't create a new working MWAA environment. We started with Terraform - after apply, the environment is indicated as "Available" in the console, but when I click on the "Open UI" link, the UI never comes up. Then we manually created a couple environments, but with the same outcome. For us, MWAA as a service is practically down.Here is what we are seeing when we click on "Open Airflow UI":This page isn’t workingzxxcvbnm-6666-4516-935b-bb9701f525e5-vpce.c20.us-west-2.airflow.amazonaws.com
didn’t send any data.ERR_EMPTY_RESPONSEAny insight/tip is appreciated! | Can't create a new working MWAA environment |
1. Unix epoch to timestamp
To convert a Unix epoch time to a timestamp in Timestream you can use the Timestream function from_milliseconds: from_milliseconds(unixtime_in_milliseconds). In your example:
SELECT
*
FROM "data-api-timestream-test"."table_test"
WHERE
time = from_milliseconds(1637339664248)2. Timestamp to unix epochFor, the other way around - converting a timestamp to milliseconds since unix epoch origin - you can use functionto_milliseconds:to_milliseconds(CURRENT_TIME)Full example:SELECT
time,
to_milliseconds(time) AS unixtime,
from_milliseconds(to_milliseconds(time)) AS unixtime_to_time
FROM (
SELECT
CURRENT_TIMESTAMP AS time
) | I want to execute this query:select * FROM "data-api-timestream-test"."table_test" where time = 1637339664248I get the error:line 1:71: '=' cannot be applied to timestamp, bigintI also triedselect * FROM "data-api-timestream-test"."table_test" where time = cast(1637339664248 as timestamp)I get the error:line 1:73: Cannot cast bigint to timestamp | How to convert a bigint in AWS timestream DB to timestamp? |
There should be aspace:env:
variables:
CRYPTOGRAPHY_DONT_BUILD_RUST: "1" | I am getting the YAML error inbuildspec.yamlfile. The error is:[Container] 2021/11/09 06:18:34 Waiting for agent ping
[Container] 2021/11/09 06:18:35 Waiting for DOWNLOAD_SOURCE
[Container] 2021/11/09 06:18:40 Phase is DOWNLOAD_SOURCE
[Container] 2021/11/09 06:18:40 CODEBUILD_SRC_DIR=/codebuild/output/src909937249/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/nftytest
[Container] 2021/11/09 06:18:40 YAML location is /codebuild/readonly/buildspec.yml
[Container] 2021/11/09 06:18:42 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/11/09 06:18:42 Phase context status code: YAML_FILE_ERROR Message: Expected Variables to be of map type: found string instead at line 5, check indentation or content around the line numMy Buildspec looks like the followingversion: 0.2
env:
variables:
CRYPTOGRAPHY_DONT_BUILD_RUST:"1"
phases:
install:
commands:
- yum install python3 -y
- python3 -m venv venv
- source venv/bin/activate
- curl https://sh.rustup.rs -sSf | sh -s -- -y
- pip3 install -r requirements/local.txtIt gives onCRYPTOGRAPHY_DONT_BUILD_RUST:"1"line. | AWS Codebuild: YAML_FILE_ERROR Message: Expected Variables to be of map type: |
You can do something like this with the GenerateSequence transform in Beam. It would be something like this:
pipeline.apply(GenerateSequence.from(0).withRate(1, standardMinutes(1)))
        .apply(ParDo.of(new ListAllFilesInFtpFn(serverAddress)))
        .apply(ParDo.of(new DownloadFilesFromFtpFn(serverAddress)));
Does this make sense? | I just have a few questions on achieving the $subject. I have an FTP location and I want to use a Beam pipeline to read these files and do some processing. I basically want to read the file list from the FTP location every one minute and do the processing. Do you have any thoughts on this? I have already written the pipeline for the processing part, just struggling with reading the FTP location every one minute. Any help would be appreciated. | Reading files from a SFTP location using Apache Beam
You can do this usingIf:Parameters:
environment:
Type: String
Default: dev
AllowedValues:
- dev
- prd
Conditions:
isDev: !Equals [ !Ref environment, dev]
Resources:
StandAlonePolicy:
Type: AWS::IAM::Policy
Properties:
PolicyName: "s3-policy"
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Resource: "*"
Action:
- "s3:Get*"
- !If
- isDev
- Sid: new-statement-for-dev-only
Effect: Allow
Resource: "*"
Action:
- "s3:Put*"
- !Ref "AWS::NoValue" | I am creating some IAM roles, policies via cloudformation but I would like to add policies based on the condition I have, say if it is dev then i would like to add certain policy statement. any suggestions ?Parameters:
environment:
Type: String
Default: dev
AllowedValues:
- dev
- prd
Condition:
isDev: !Equals [ !Ref environment, dev]
Resources:
StandAlonePolicy:
Type: AWS::IAM::Policy
Properties:
#How to add a condition - isDev
PolicyName: "s3-policy"
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Resource: "*"
Action:
- "s3:Get*" | how to add a condition when writing a aws policy via cloudformation? |
Obtaining Amazon SES SMTP credentials requires the below IAM permissions per the docs: your IAM policy must allow you to perform the following IAM actions: iam:ListUsers, iam:CreateUser, iam:CreateAccessKey, and iam:PutUserPolicy. What happens behind the GUI is:

- An IAM user name is either inputted (and validated using iam:ListUsers) or created (using iam:CreateUser).
- An inline policy is added to the user's permissions (using iam:PutUserPolicy) to grant it access to perform ses:SendRawEmail: "Statement":[{"Effect":"Allow","Action":"ses:SendRawEmail","Resource":"*"}]
- SMTP credentials are then generated for the above user (using iam:CreateAccessKey).

You essentially need to do the above using the @aws-cdk/aws-iam module, not the @aws-cdk/aws-ses module (as that's for actually using SES); a rough CDK sketch follows this entry. For extra confirmation, here's the AWS console mentioning the above: | There is a manual on how to obtain SMTP credentials using the GUI: Obtaining Amazon SES SMTP credentials using the Amazon SES console. Is there a way to achieve this using the AWS CDK? So far, I've tried using the aws-ses package with zero luck. I don't expect you to write the code for me, just point me in the right direction. Describing a workflow will do just fine, thanks. | How can I generate AWS SES SMTP credentials using the CDK?
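A rough CDK sketch of those three steps, written in Python (construct IDs are placeholders). Note that the access key's secret still has to be converted into an SMTP password using the algorithm described in the SES docs; a raw IAM secret key does not work as an SMTP password directly:
from aws_cdk import CfnOutput, Stack, aws_iam as iam
from constructs import Construct

class SesSmtpStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # 1. Create the IAM user
        user = iam.User(self, "SesSmtpUser")

        # 2. Allow it to send email via SES
        user.add_to_policy(iam.PolicyStatement(
            actions=["ses:SendRawEmail"],
            resources=["*"],
        ))

        # 3. Generate an access key for the user
        key = iam.CfnAccessKey(self, "SesSmtpUserKey", user_name=user.user_name)

        CfnOutput(self, "SmtpUsername", value=key.ref)
        # This secret must still be converted to an SMTP password (see the SES docs);
        # for real use, prefer storing it in Secrets Manager rather than a stack output.
        CfnOutput(self, "IamSecretKey", value=key.attr_secret_access_key)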
Use bucket.upload_fileobj(destination_file, fileName, ExtraArgs={'ContentType': 'application/gzip'}). See "AWS Content Type Settings in S3 Using Boto3". | I am trying to convert files to gzip files and then upload them to S3. When I check in S3, the files are there but they don't have a type specified. How can I specify the content type?
for i in testList:
with contextlib.ExitStack() as stack:
source_file = stack.enter_context(open(i , mode="rb"))
destination_file = io.BytesIO()
destination_file_gz = stack.enter_context(gzip.GzipFile(fileobj=destination_file, mode='wb'))
while True:
chunk = source_file.read(1024)
if not chunk:
break
destination_file_gz.write(chunk)
destination_file_gz.close()
destination_file.seek(0)
bucket.upload_fileobj(destination_file, fileName, ContentType='application/gzip')If I add ContentType as an argument to the last line, I get an error:"errorMessage": "bucket_upload_fileobj() got an unexpected keyword argument 'ContentType'", | how to specify ContentType for S3 files? |
The documentation includes an example of how toInsert JSON Format Data into a DynamoDB Table:// using Amazon.DynamoDBv2;
// using Amazon.DynamoDBv2.DocumentModel;
var client = new AmazonDynamoDBClient();
var table = Table.LoadTable(client, "AnimalsInventory");
var jsonText = "{\"Id\":6,\"Type\":\"Bird\",\"Name\":\"Tweety\"}";
var item = Document.FromJson(jsonText);
table.PutItem(item);More broadly, per the samedocumentation:The AWS SDK for .NET supports JSON data when working with Amazon DynamoDB. This enables you to more easily get JSON-formatted data from, and insert JSON documents into, DynamoDB tables. | Looking atthisexample in the AWS DynamoDB documentation, I see thatPutItemRequest.Itemis aDictionary. I'm trying to insert a complex JSON object into DynamoDB that can have anywhere between 200 to 300 attributes within a hierarchy of nested objects and arrays.Do I really have to convert that JSON object to aDictionarybefore I can insert it into DynamoDB? If so, is there a way to do this that doesn't involve hardcoding and/or manually converting it one attribute at a time?Apologies if this question is a little vague. I'm really just looking for pointers on how to proceed from here. | New to DynamoDB. Is there a more convenient way to add/put items? |
this will work:AWSTemplateFormatVersion: 2010-09-09
Parameters:
Name:
Type: String
myuserparameter:
Type: String
mypasswordparameter:
Type: String
Resources:
SecretsManager:
Type: AWS::SecretsManager::Secret
Properties:
Name: !Ref Name
SecretString: !Sub '{"username": "${myuserparameter}","password": "${mypasswordparameter}"}' | Is there any way to reference parameters in SecretString field in Secrets Manager via CloudFormation?The way I made the script, the !Ref parameter is a text and not a reference to the parameter.AWSTemplateFormatVersion: 2010-09-09
Parameters:
Name:
Type: String
myuserparameter:
Type: String
mypasswordparameter:
Type: String
Resources:
SecretsManager:
Type: AWS::SecretsManager::Secret
Properties:
Name: !Ref Name
SecretString: '{"username":"!Ref myuserparameter,"password":"Ref mypasswordparameter"}' | Reference Secrets Manager Parameters to Secret String |
the metric points at a single message? Metrics don't point at individual messages. They measure the age of messages over a sliding window period, e.g. 5 minutes. Within this interval you can have multiple messages: "Each statistic represents an aggregation of the metrics data collected for a specified period of time. Periods are defined in numbers of seconds, and valid values for period are 1, 5, 10, 30, or any multiple of 60." | I'm creating dashboards for SQS and would like to display the age of the oldest message in the queue. SQS has the metric ApproximateAgeOfOldestMessage and the documentation states: "ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times". This metric exposes: Average, Minimum, and Maximum. But in this case wouldn't Average, Minimum and Maximum be equivalent if the metric points at a single message? | Difference between SQS ApproximateAgeOfOldestMessage Average and Maximum?
Each drive is mounted as a different device.Using a Windows analogy, C: would be theAmazon EBS boot disk, while D: would beInstance Store. You choose the device-type by choosing which drive/mount point you want to use.In the days before Amazon EBS, the EC2 instances would boot from Instance Store. This meant it was not possible to 'Stop' the instance, since it would lose the boot disk. These days, instances boot from EBS volumes. However, any data kept on Instance Store will be lost when the instance is Stopped. It is great for temporary storage, caches or where the data is available elsewhere (eg can be reloaded from S3). | I understand the differences between instance storage and EBS.However, if I used anm5d.4xlargeinstance which is backed by nvme instance store and I also attach some EBS to it, which type of storage is used by default? Is there a process to determining which storage type gets used first? | Instance Store and EBS - Which is used by default? |
It is now available withidentitystore_user:https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/identitystore_user(in case it helps other like me who arrived here before finding the ressource) | Please let me know Is it possible to create an SSO user using Terraform in AWS. I could see that doc for the creation of AWS SSO permission sets and assignment to AWS SSO entities and AWS Accounts. but could not found any doc for creating an SSO user using terraform. | Is it possible to create an SSO user using terraform - AWS |
You can take advantage of clusters.I deployed an express app to elastic beanstalk using cluster. It uses all the cores. First, application load balancer on AWS side distributes the load to the instances. Inside instance NodeJS Cluster API distributes requests to the workers based on round-robin by default.AWS beanstalk nodejs multicoreIts a good use case if you can't find an ideal instance for your needs.After a few months, I noticed an issue with the logs. Since the thread count increased inside instance, the log count is multiplied by thread count. I learned that Journal service inside Amazon Linux instances has a burst limit of 1000 logs per 30 seconds. Apart from that, it works smoothly. | Hi I have a silly question, but couldn't find any answers.NodeJS runs on a single thread - if I deploy my express API to elastic beanstalk, does it make any sense to use instance types with multiple vcpus? Does the nodejs environment for elastic beanstalk employ nodejs clustering?If my app is a straightforward express API, won't it just start one process that will end up utilizing just one cpu? If yes, I feel like its better to rely on single vcpu-instances and have the ASG do the work instead of clustering? | Nodejs API on elastic beanstalk - will it use multiple vCpus? |
The most reliable way to create an lxml layer is using Docker, as explained in the AWS blog. Specifically, the verified steps are (executed on Linux, but Windows should also work as long as you have Docker):

1. Create an empty folder, e.g. mylayer.
2. Go to the folder and create a requirements.txt file with the content: lxml
3. Run the following docker command (it will create a layer for python3.8): docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"
4. Archive the layer as a zip: zip -9 -r mylayer.zip python
5. Create a Lambda layer based on mylayer.zip in the AWS Console. Don't forget to specify the Compatible runtime as python3.8.
6. Add the layer created in step 5 to your function.

I tested the layer using your code:
from lxml import etree
def lambda_handler(event, context):
root = etree.Element("root")
root.append( etree.Element("child1") )
print(etree.tostring(root, pretty_print=True))It workscorrectly:b'<root>\n <child1/>\n</root>\n' | I'm trying to import thelxmllibrary in Python to execute an AWS Lambda function but I'm getting the following error:[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'lxml'. To solve this, I followed the recommendation fromthisSO answer and used precompiled binaries from the followingrepo.I used thelxml_amazon_binaries.zipfile from that repo, which has this structure:lxml_amazon_binaries
├── lxml
└── usrI uploaded the entirezipfile to an AWS Lambda layer, created a new Lambda function, and tested with a simplefrom lxml import etree, which led to the above error.Am I uploading/using these binaries correctly?I'm not sure what caused the error. Using different Python runtimes didn't help. | How to import lxml from precompiled binary on AWS Lambda? |
As per the Beanstalk documentation, your source bundle must meet the following requirements:

- Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
- Not exceed 512 MB
- Not include a parent folder or top-level directory (subdirectories are fine)

You can reduce the size of your source bundle by adding the unnecessary files to your .gitignore file.
You can estimate the folder sizes using the command du -shc * in the root directory of your application. | I use the EB CLI to deploy a Django application. It suddenly started showing the error "InvalidParameterValueError - Source bundle is empty or exceeds maximum allowed size: 524288000" while deploying my app using the eb deploy command. It shows the error for both my production and staging environments. My source bundle size should be below the limit. What is the cause and how do I fix it? | InvalidParameterValueError - Source bundle is empty or exceeds maximum allowed size: 524288000
The solution is to use backticks around the column name.Example:.select_fields(["journalId", "`json.rowId`"]) | In AWS S3 I have json docs that I read-in with AWS Glue'screate_dynamic_frame.from_options("s3" ...)and the DynamicFrame.printSchema() shows me this, which matches the schema of the documents:root
|-- updatedAt: string
|-- json: struct
| |-- rowId: intThen Iunnest()orrelationalize()(have tried both) the DynamicFrame to a new dyF and then.printSchema()shows me this, which seems correctly unnested:root
|-- updatedAt: string
|-- json.rowId: intThe problem is that I can't seem to use the nested fields.dyF.select_fields(["updatedAt"])will work and give me a dyF with the "updatedAt" field.ButdyF.select_fields(["json.rowId"])gives me an empty dyF.What am I doing wrong? | AWS Glue - Can't select fields after unnest or relationalize |
This post helped me.I useAutoKeyto auto-type my email address. It seems that the AWS login form has now been rigged with some sort of detection if this field is filled too quickly that flags you as a bot.I mean, some sort of feedback would be nice "sorry, you've still got that whiff ofbotabout you, we'll just need to do a few more of these if you don't mind..." but no, let's just dump the user into an endless loop questioning their sanity.Anyway, typing out my email instead got me in. | This issue has started coming up for me. Signing in using 2FA and solving the Captcha just sends me back to the login form again when trying to access my AWS dashboard. Doesn't matter which browser I use. Originally came up a few years ago:AWS Amazon - Sign in Loop Stuck | AWS Amazon - Sign in Loop Stuck again |
Please have a look at theAccess Analyzer quotasBased on the error message you hit the quota of 100,000 AWS CloudTrail log files processed per policy generation.You can reduce the period of the policy or reduce the number of regions selected. | While generating a policy in IAM for a specific role using feature "Generate policy based on CloudTrail events", I get error "Policy generation failed. CloudTrail log files processed per policy generation limit exceeded. Please fix before trying again."And if generated for few days, policy does not include DynamoDB and SQS policies used by the role | How to generate policy based on CloudTrail events and resolve errors |
Turns out there were 3 things at play here:There was a service quota on my account of 5 public IP addresses, and each container was getting its own IP address so it could communicate with the S3 bucket. I made one of the subnets a private subnet and put all my containers in that subnet. I then set up a NAT gateway in a public subnet and routed all my traffic through the gateway. (More details athttps://aws.amazon.com/premiumsupport/knowledge-center/nat-gateway-vpc-private-subnet/)As Marcin pointed out, Fargate does scale slowly. I switched to using EC2, which scaled much more quickly but still stopped scaling at around 30 container instances.There was a service quota on my account called "EC2 Instances / Instance Limit (All Standard (A, C, D, H, I, M, R, T, Z) instances)" which was set to 32. I reached out to AWS, and they raised the limit, so I am now able to run over 100 jobs at once. | I'm just getting started with AWS. I have a (rather complicated) Python script which reads in some data from an S3 bucket, does some computation, and then exports some results to the same S3 bucket. I've packaged everything in a Docker container, and I'm trying to run it in parallel (say, 50 instances at a time) using AWS Batch.I've set up a compute environment with the following parameters:Type: MANAGEDProvisioning model: FARGATEMaximum vCPUs: 256I then set up a job queue using that compute environment.Next, I set up a job definition using my Docker image with the following parameters:vCpus: 1Memory: 6144Finally, I submitted a bunch of jobs using that job definition with slightly different commands and sent them to my queue.As I submitted the first few jobs, I saw the status of the first 2 jobs go from RUNNABLE to STARTING to RUNNING. However, the rest of them just sat there in the RUNNABLE state until the first 2 were finished.Does anyone have any idea what the bottleneck might be to running more than 2 or 3 jobs at a time? I'm aware that there are some account limitations, but I'm not sure which one might be the bottleneck. | How can I get AWS Batch to run more than 2 or 3 jobs at a time? |
KMS
With KMS you have AWS managed keys and customer managed keys (CMK). To allow another account to use your key, it needs to be a CMK, because you need to allow the other account in your key policy. https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html You can't change the policy of an AWS managed key, which means you can't allow other accounts to use an AWS managed key. So you can't share your encrypted AMI with another account when it is encrypted with an AWS managed key.
AMI
An AMI can't be transferred, but you can share it with another account. When it is encrypted you need to share the key as well. See the documentation below. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html As your current AMI is encrypted with an AWS managed key, what you can do is create a new AMI without encryption, or encrypted with a CMK, and share that. See the documentation below. https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-ec2-instance/ | I'd like to share the image across accounts, but the image is encrypted with an AWS managed key and I was wondering how I can transfer this image to another account. I gather an image encrypted with custom keys is transferable; is it the same with an image encrypted with an AWS key? | Sharing an AWS EC2 image encrypted with an AWS managed key across the accounts
Based on the comments: ES does not use port 9200. Only port 80 for HTTP and port 443 for HTTPS are supported. From the docs: "Amazon ES only accepts connections over port 80 (HTTP) or 443 (HTTPS)." Also, spring-data-elasticsearch expects only the domain, so the https prefix should not be used. Removing https and using port 443 resolved the issue: uris: vpc-website-qa-xxxxxxxxxxxx.ap-south-1.es.amazonaws.com:443 | I have a Spring Boot image deployed using AWS Fargate and the Elasticsearch cluster using AWS Elasticsearch Service.
Both are under same VPC and subnet. Below is the access policy of Elasticsearch:{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:ap-south-1:8655488xxxxx:domain/website-qa/*"
}
]
}Security groups:Fargate:sg-test033f776d5fbed5c0000Elasticsearch:sg-test0e5a570cbfc389e8555Subnet:Fargate:subnet-test025f49153cf245a2d11,subnet-test01f19783c005010f122,subnet-test076dfbba51d92d49033Elasticsearch:ap-south-1a: subnet-test025f49153cf245a2d11Under the security group of elasticsearch, I have allowed the security group of Fargate for port443and9200.And below is from application.yml file:spring:
elasticsearch:
rest:
connection-timeout: 5000 #milliseconds
read-timeout: 5000 #milliseconds
uris: https://vpc-website-qa-xxxxxxxxxxxx.ap-south-1.es.amazonaws.com:9200So spring boot tries to make a connection to Elasticsearch but getjava.net.UnknownHostException https://vpc-website-qa-xxxxxxxxxxxx.ap-south-1.es.amazonaws.com:9200Tried with port443also but didn't work. Why host is not resolved at Fargate cluster? What am I missing here? | Unable to connect AWS Elasticsearch from Fargate. Getting java.net.UnknownHostException |
Yes, different triggers will use the same containers since the execution environment is the same for different triggers, the only difference is the event that is passed to your Lambda.You can verify this by executing your Lambda with two types of triggers (i.e. API Gateway and simply the Test function on the Lambda Console) and looking at the CloudWatch logs. Each Lambda container creates its own Log Stream inside of your Lambda's Log Group. You should see both event logs going to the same Log Stream which means the 2nd event is successfully using the warm container created by the first event. | Here's what I know, or think I know.In AWS Lambda, the first time you call a function is commonly called a "cold start" -- this is akin to starting up your program for the first time.If you make a second function invocation relatively quickly after your first, this cold start won't happen again. This is colloquially known as a "warm start"If a function is idle for long enough, the execution environment goes away, and the next request will need to cold start again.It's also possible to have a single AWS Lambda function with multiple triggers. Here's an example of a single function that's handling both API Gateway requests and SQS messages.My question: Will AWS Lambda reuse (warm start) an execution environment when different event triggers come in? Or will each event trigger have it's own cold start? Or is this behavior that's not guaranteed by Lambda? | AWS Lambda Functions: Will Different Triggers Reuse an Exection Enviornment? |
In CloudFormation you createAWS::EC2::Instance. To have thelatest AMI of Amazon Linux 2, you can usedynamic references.The basic example of them:Parameters:
LatestAmiId:
Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
Resources:
Instance:
Type: 'AWS::EC2::Instance'
Properties:
ImageId: !Ref LatestAmiId
InstanceType: t2.micro | Is it possible to use the AWS CloudFormation stack with Amazon Linux 2? Currently, I only found Amazon docs that are pointing to Amazon Linux AMI. Unfortunately, AMI would stop being supported in 2023 (and is marked as deprecated already). | (AWS) CloudFormation stack with Amazon Linux 2 |
Yes, changing the storage class incurs costs, regardless of whether it's done manually or via a lifecycle rule.If you do it via the console, it will create a deep archive copy but will retain the existing one as a previous version (if you have versioning enabled), so you'll start being charged for storage both (until you delete the original version).If you do it via a lifecycle rule, it will transition (not copy) the files, so you'll only pay for storage for the new storage class.In both cases, you'll have to pay for LIST ($0.005 per 1000 objects inSTANDARDclass) and COPY/PUT ($0.05 per 1000 objects going toDEEP_ARCHIVEclass) actions.Since data is being moved within the same bucket (and therefore within the same region), there will be no data transfer fees.The only exception to this pricing is the "intelligent tiering" class, which automatically shifts objects between storage classes based on frequency of access and does not charge for shifting classes.No additional tiering fees apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class. | I have some files in my AWS S3 bucket which i would like to put in Glacier Deep Archive from Standard Storage. After selecting the files and changing the storage class, it gives the following message.Since the message says that it will make a copy of the files, my question is that will I be charged extra for moving my existing files to another storage class?Thanks."This action creates a copy of the object with updated settings and a new last-modified date. You can change the storage class without making a new copy of the object using a lifecycle rule.Objects copied with customer-provided encryption keys (SSE-C) will fail to be copied using the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the Amazon S3 REST API." | AWS storage class change |
Whoops, found the problem!cdk.Fnis unnecessary (the CDK should resolve the dynamic "token" anyway in string formatting), but as noted inthe StringParameter docthesimple_nameparameter is required, when using a tokenizedparameter_name.input_bucket_ssm_param = ssm.StringParameter(
self,
"MyCoolResourceSSMParam",
string_value=my_cool_resource.arn,
description="...",
parameter_name=f"/Projects/{project_name_param.value_as_string}/MyCoolResource",
simple_name=False, # < Need this too!
) | I have a CDK app with a CloudFormation Stack Parameter something like:project_name_param = cdk.CfnParameter(
self,
"ProjectName",
default="MyCoolProject",
)Since multiple instances of this stack can be deployed, I'd like to create anSSM Parameterwith name based on this project name, to keep things organized.In plain CloudFormation, this could be achieved achieved by e.g:MyCoolResourceArnParam:
Type: 'AWS::SSM::Parameter'
Properties:
Description: ARN of this project's MyCoolResource
Name: !Sub '/Projects/${ProjectName}/MyCoolResource'
Type: String
Value: !GetAtt MyCoolResourceArn...But I'm struggling to figure out how I'd use theproject_id_paramobject in CDK to achieve the same thing. For e.g. have tried and failed with various combinations similar to:input_bucket_ssm_param = ssm.StringParameter(
self,
"MyCoolResourceSSMParam",
string_value=my_cool_resource.arn,
description="...",
parameter_name=cdk.Fn.sub(
f"/Projects/{project_name_param.value_as_string}/MyCoolResource"
),
)Probably I'm missing something basic as still pretty new to using CFn parameters in CDK - can anybody enlighten me on how it's supposed to work? | Name an SSM parameter from a stack parameter within an SSM parameter name in AWS CDK |
AWS Amplify currently doesn't have support for Flutter web apps. There's an open feature request on the amplify-flutter GitHub repo in case you'd like to keep track of this. | Hi, I'm working on a personal Flutter web app project and I was playing around with AWS Amplify. I followed the instructions posted on the Flutter web dev page, but when I tried to deploy the default Flutter web app onto AWS Amplify I got this (see image), so I was wondering if there is a way to deploy my Flutter web app onto AWS Amplify or another AWS service. | How can I host my Flutter web app onto AWS Amplify
There is not. From the docs: "Unlike automated backups, manual snapshots aren't subject to the backup retention period. Snapshots don't expire." It means that you would have to develop your own custom solution for that, for example a Lambda function which is invoked periodically and checks for old snapshots to remove them (a boto3 sketch follows this entry). | Is there a way to set the lifecycle for a manual snapshot in AWS RDS? For the automated ones, there is a time that can be set, but I cannot find anything for the manual snapshots. Thanks for the help. | AWS RDS manual snapshot lifecycle
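A sketch of the custom Lambda suggested in the answer above, in Python with boto3 (the 90-day retention window is an arbitrary placeholder; pagination is omitted for brevity):
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")

def handler(event, context):
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)   # placeholder retention
    for snap in rds.describe_db_snapshots(SnapshotType="manual")["DBSnapshots"]:
        created = snap.get("SnapshotCreateTime")
        if created and created < cutoff and snap["Status"] == "available":
            rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])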
You keep mentioning Lambda as an entire service, so if that is what you mean, then AWS operates a regional health page by service:https://status.aws.amazon.com/You can also use the Health APIhttps://docs.aws.amazon.com/health/latest/ug/monitoring-logging-health-events.htmlto return a status of 'healthy' unless it finds a entry for Lambda (or whichever) that indicates unhealthy.If you are looking instead to deploy a Lambda function that says 'I am alive and can access specific resources I need', then perhaps you should develop a simple function to deploy in/healthcheckthat has the same permissions as the real function and does some small actions like check and record a dummy value in DynamoDB to make sure it can access it/ read it/ modify it/ delete it or whatever else it is supposed to do there. It could also return some simple stats on the dynamodb table that are recorded in cloudwatch to indicate the health of the table to you in a more simple manner than searching in the console
(https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/metrics-dimensions.html) | I'm new to serverless and AWS so I'm unsure how to have a health check endpoint/healthcheckfor my actual processing Lambda or if it's even needed at all. I want to be able to check the health even without access to AWS account and without calling the actual Lambda. I'm using just a simple workflow of API Gateway > Lambda > DynamoDB. As far as I understand, it is possible for the service to be down in all 3 stages.I know of Route 53 but I don't think it fits what I want because it calls the endpoint repeatedly and I think access to AWS account is needed as well.It is possible to have/healthcheckLambda to just return that the endpoint is up and if service is down, then there would be nothing returned but this does not seem like the correct approach since the endpoint can never return down.Maybe AWS health API to report public health events would work but it seems like it works in the reverse manner - report when there's an issue instead of having an endpoint to check myself. Is that the recommended method to check health for serverless? | Health check endpoint for AWS |
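A minimal sketch of the /healthcheck Lambda described in the answer above (Python, API Gateway proxy integration assumed; the table name is a placeholder, and it only reads the table status rather than writing a dummy item):
import json, boto3

dynamodb = boto3.client("dynamodb")
TABLE = "my-table"                      # placeholder

def handler(event, context):
    try:
        status = dynamodb.describe_table(TableName=TABLE)["Table"]["TableStatus"]
        return {"statusCode": 200, "body": json.dumps({"dynamodb": status})}
    except Exception as exc:            # any failure counts as unhealthy
        return {"statusCode": 503, "body": json.dumps({"error": str(exc)})}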
One can now enable execute command via CDK:declare const cluster: ecs.Cluster;
const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
cluster,
memoryLimitMiB: 1024,
desiredCount: 1,
cpu: 512,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
},
enableExecuteCommand: true
});Source:https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs_patterns-readme.html#ecs-exec | Currently have configured AWS Fargate service withApplicationLoadBalancedFargateServicevia AWS CDK(Python), would like to enable ExecuteCommand on the fargate containers to get access over them.But currently unable to find a method to enable Exec on this fargate service.Any help on this would be much appreciated. | Enable ExecuteCommand on AWS Fargate via cdk in a ApplicationLoadBalancedFargateService |
What you're looking for is the FormData object
Here's an example using axiosfor (let i = 0; i < newFiles.length; i++) {
let file = newFiles[i]
let formData = new FormData()
formData.append("file", file)
formData.append("otherProperties", otherProperties)
await axios.post(url, formData, {
headers: {
'content-type': 'multipart/form-data',
Authorization: `Bearer ${token}`,
}
}).then(res => {
console.log(res.data)
successfulUploads.push(res.data)
}).catch(err => {
console.log(err)
})
} | I've been working on a project involving thereact-dropzonepackage. I successfully built a container me to add files but now I can't figure out how I can upload these files to my Amazon S3 bucket. When I add files, it creates these "file" objects but all it contains is information like the name, size, path, etc. Doesn't seem like it contains the actual file itself. Even the file path isn't the full file path, its just the name of the file. The documentation doesn't have any information on what you can do after dragging a file to the browser. I don't believe this entire package that has 1.3 million NPM downloads per week is all for display purposes. I'm still new to the world of web-dev so there's probably something obvious I don't understand. Any advice? | How can I upload files to Amazon S3 using react-dropzone? |
I believe you can use EMR steps to do this. Here is a somewhat relevant description of how to use them: "What is the correct syntax for running a bash script as a step in EMR?". Update: you cannot use EMR steps for this, since steps only run on the master node. | Bootstrap actions run before Amazon EMR installs the applications that
you specify when you create the cluster and before cluster nodes begin
processing data. If you add nodes to a running cluster, bootstrap
actions also run on those nodes in the same way. You can create custom
bootstrap actions and specify them when you create your cluster.https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.htmli need to patch the application (presto) after it is installed on all nodes. a few possible solutions arepasswordless ssh, but for some security concern we disabled it.in the bootstrap schedule a cron job and check if the application is installed then act upon it.usessm. but never really tried yet.any idea?[Update]
what actually has been done in our case is scheduling a background scripts (the&) in the bootstrap scripts which won't block bootstrap. inside the job, it will periodically check if the package is installed or not, if it is installed (e.g.rpm -q presto), then patch it. | Run script on all nodes after application installed on EMR |
i finally figured it out, to fix this problem i had to add thisssl: true,
extra: {
trustServerCertificate: true,
Encrypt: true,
IntegratedSecurity: false,
}you add this right after integrated security: true, as shown in my question above and it should work out | I am trying to Connect to my AWS RDS SQL Server DB using Nestjs i followed the documentation of nestjs and also some tutorials, but i keep getting the error when nestjs tries to connect to the Database.
This is a fraction of my code where i establish the connection.@Module({
imports: [
ConfigModule.forRoot({
envFilePath:'.env',
isGlobal: true
}),
TypeOrmModule.forRoot({
type: 'mssql',
host: 'xxx.xxxxxx.xxxxxx.rds.amazonaws.com',
port: 1433,
username: 'root',
password: 'root',
database: 'DB',
autoLoadEntities: true,
synchronize: true,
extra: {
trustServerCertificate: false,
Encrypt: true,
IntegratedSecurity: true,
}
}),
],
controllers: [AppController, EstadoProyController],
providers: [AppService],
})
export class AppModule {}And the error im getting is:I would also like to add that i am able to log in directly via my SQL Server Management Studio using the Server name, user and password, but via NestJS i can't | Error Connecting to RDS AWS DB from nestjs Unable to get local issuer Certificate |
You are correct -- theprint()messages will be available in CloudWatch Logs.It is possible that a long-running function might show logs before it has completed (I haven't tried that), but AWS Lambda functions only run for a maximum of 15 minutes and most complete in under one second. It is not expected that you would need to view logswhilea function is running. | I am new to AWS. I have just developed a lambda function(Python) which print messages while executing. However I am not sure where I can watch the log printed out while the function is executing.I found CloudWatch log in the function, but it seems that log is only available after function completed.Hope you can help,many thanks | AWS Lambda: is there a way that I can watch live log printed by a function while it is executing |
Yes, you can create route entries dynamically, because the route block acts as "Attributes as Blocks". So you can do the following (example):
variable "routes" {
default = [
{
cidr_block = "0.0.0.0/0"
gateway_id = "igw-0377483faa64bf010"
},
{
cidr_block = "172.31.0.0/20"
instance_id = "i-043fc97db72ad1b59"
}
]
}
# need to provide default values (null) for all possibilities
# in route
locals {
routes_helper = [
for route in var.routes: merge({
carrier_gateway_id = null
destination_prefix_list_id = null
egress_only_gateway_id = null
ipv6_cidr_block = null
local_gateway_id = null
nat_gateway_id = null
network_interface_id = null
transit_gateway_id = null
vpc_endpoint_id = null
instance_id = null
gateway_id = null
vpc_peering_connection_id = null
}, route)
]
}
resource "aws_route_table" "example" {
vpc_id = aws_vpc.example.id
# route can be attribute, instead of blocks
route = local.routes_helper
tags = {
Name = "example"
}
}
The docs do not recommend using this in general, but I think route is a good example where it is acceptable. | I am trying to create a Terraform module for aws_route_table creation. Here is an example of this resource definition:
resource "aws_route_table" "example" {
vpc_id = aws_vpc.example.id
route {
cidr_block = "10.0.1.0/24"
gateway_id = aws_internet_gateway.example.id
}
route {
ipv6_cidr_block = "::/0"
egress_only_gateway_id = aws_egress_only_internet_gateway.example.id
}
tags = {
Name = "example"
}
}
I am trying to make it more dynamic by using dynamic blocks. The problem is that I always have to define the keys in the content block:
resource "aws_route_table" "example" {
...
dynamic "route" {
for_each = var.route
content {
cidr_block = route.value.cidr_block
gateway_id = route.value.gateway_id
}
}
...
}
So in this case I will need to write two dynamic blocks: one for the content with cidr_block and gateway_id, and one for the content with ipv6_cidr_block and egress_only_gateway_id. Is there any way to do this without defining the keys explicitly? Something like this:
dynamic "route" {
for_each = var.route
content {
var.route.map
}
} | Terraform dynamic block with dynamic content |
Aurora Serverless can only be accessed from within a VPC. It has no public IP address. From the docs: "You can't give an Aurora Serverless v1 DB cluster a public IP address. You can access an Aurora Serverless v1 DB cluster only from within a VPC." This means you either have to connect to it from an EC2 instance running in the same VPC, or set up an SSH tunnel or VPN connection between your local computer and the Aurora cluster. How to set up an SSH tunnel is explained here and here. Alternatively, use the Data API to interact with your database from outside the VPC. | I am trying to connect to my AWS Aurora database with pgAdmin 4 and it throws this error. I have tried all the previous solutions provided by Stack Overflow answers, like adding an inbound rule for my IP and updating pg_hba.conf, but it is still not working for me. Thank you in advance. Error facing with pgAdmin. | Unable to connect to server: timeout expired AWS aurora rds |
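If you go the Data API route mentioned above, a minimal boto3 sketch looks like this (the cluster ARN, secret ARN, region, and database name are placeholders, and the Data API has to be enabled on the cluster):
# Sketch: querying Aurora Serverless via the Data API, no VPC access needed.
import boto3

client = boto3.client("rds-data", region_name="us-east-1")

response = client.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",   # placeholder
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret",  # placeholder
    database="mydb",
    sql="SELECT now()",
)
print(response["records"])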
Adding the response I got from AWS support: they don't charge when a request to DynamoDB gets throttled. | Whenever DynamoDB throws a ProvisionedThroughputExceededException, does DynamoDB still charge us for the request sent to it? Update: we use both schemes, provisioned + auto scaling and on-demand, for our DDB tables. I am trying to understand whether DDB will still count WCU/RCU as consumed for throttled requests that resulted in ProvisionedThroughputExceededException. DynamoDB, in their definition of WCU and RCU, states that every API call to DDB is considered a write/read request (https://aws.amazon.com/dynamodb/pricing/on-demand/). Does that mean that even failed API calls (Internal Error 500, ProvisionedThroughputExceededException) will be charged? | Does dynamoDB charge for throttled requests? |
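For context, this is what a throttled request looks like from the client side with boto3 -- the call raises ProvisionedThroughputExceededException and, per the support answer above, the throttled attempt itself is not billed (table name and item are placeholders):
# Sketch: retrying a write that was throttled by DynamoDB.
import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

def put_with_backoff(item, attempts=5):
    for attempt in range(attempts):
        try:
            return table.put_item(Item=item)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep(2 ** attempt * 0.1)  # simple exponential backoff
    raise RuntimeError("still throttled after retries")

put_with_backoff({"pk": "user#1", "sk": "profile"})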
I found an answer in this post by henhal on the Serverless forums. Basically, you have to create a new resource of type AWS::Lambda::Permission:
resources:
Resources:
InvokeGenerateReportLambda:
Type: AWS::Lambda::Permission
Properties:
Action: lambda:invokeFunction
FunctionName: ${env:LAMBDA_FUNCTION_ARN}
Principal: events.amazonaws.com
SourceArn: ${env:RULE_ARN} # can include wildcards | I am trying to define, in the serverless YAML file, a resource-based policy that allows **any rule from EventBridge** to invoke the function; this is because in my application EventBridge rules are generated dynamically. The AWS console does not allow creating a Lambda permission for EventBridge with a wildcard. The following was my attempt, but it did not generate any resource policy when deployed:
provider:
resourcePolicy: ${self:custom.resourcePolicies.test}
... other things
custom:
resourcePolicies:
test:
- Effect: Allow
Principal: "*"
Action: lambda:InvokeFunction
... other things
Guidance appreciated. | Serverless Lambda Resource Based Policy - All Principles |
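As a side note, the same resource-based permission can be created or inspected outside the Serverless framework with boto3 (the function name and source ARN below are placeholders; the ARN can include wildcards):
# Sketch: granting EventBridge permission to invoke a Lambda function, then reading back the policy.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="my-function",                      # placeholder
    StatementId="allow-any-eventbridge-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/*",  # placeholder, wildcard allowed
)

print(lambda_client.get_policy(FunctionName="my-function")["Policy"])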
The second option is always better: create a separate Lambda function for each piece of functionality. Lambda latency depends on how many calls come in from API Gateway; if you route multiple endpoints to a single Lambda, it becomes a bottleneck and adds latency. Plus, Lambda is charged per invocation, and 1 million requests are free -- if you use one Lambda for everything, you are going to hit this limit early. My recommendation is to use a different Lambda function for each piece of functionality -- that is the beauty of microservices. Keep it simple and lightweight. | When I deploy a Serverless Framework codebase to AWS, I am curious about which method is better.
For now, there are two options:
- use Nest.js or Express.js, so I deploy one function to Lambda and that function handles all API endpoints;
- deploy a number of functions, so each of them represents a single API endpoint.
Regarding scalability, which option is the better approach? | Concerning with AWS Scalability with Serverless framework |
You'll want to use ec2.InterfaceVpcEndpoint, which creates a new VPC endpoint and lets you pass in security group IDs. Borrowing from here, it might look like this:
ec2.InterfaceVpcEndpoint(
self,
"VPCe - Redshift",
service=ec2.InterfaceVpcEndpointService("redshift.amazonaws.com"),
private_dns_enabled=True,
vpc=self.vpc,
security_groups=[securityGroup],
) | I have an existing VPC endpoint in my AWS account. When I deploy my CDK stack I need to somehow add a security group to that VPC endpoint so that my server can talk to a Redshift cluster on another network. I define my security group like this:
const securityGroup = new ec2.SecurityGroup(this, "SecurityGroup", {
vpc,
allowAllOutbound: true,
});
How can I add that security group to the VPC endpoint? I know the endpoint ID but somehow can't figure out how to do this. I have tried to look up the VPC endpoint by ID and played around with security groups. | How to add security group to VPC Endpoint in CDK (AWS) |
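For completeness, a slightly fuller sketch of the answer's approach in CDK Python (stack, VPC ID, and construct names are illustrative, and the endpoint service string is taken from the answer above); note that this creates a new interface endpoint with the security group attached rather than modifying the existing endpoint:
# Sketch: interface VPC endpoint with a security group attached (CDK v2, Python).
from aws_cdk import Stack, aws_ec2 as ec2
from constructs import Construct

class EndpointStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Assumes the stack has an env configured so the VPC can be looked up by ID.
        vpc = ec2.Vpc.from_lookup(self, "Vpc", vpc_id="vpc-0123456789abcdef0")  # placeholder

        security_group = ec2.SecurityGroup(self, "EndpointSG", vpc=vpc, allow_all_outbound=True)

        ec2.InterfaceVpcEndpoint(
            self,
            "RedshiftEndpoint",
            vpc=vpc,
            service=ec2.InterfaceVpcEndpointService("redshift.amazonaws.com"),
            security_groups=[security_group],
            private_dns_enabled=True,
        )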