Response | Instruction | Prompt
---|---|---|
If you are not happy with a DB, as suggested in the comments, then you can use AWS SSM Parameter Store, which will answer your query "way to modify environment variable in AWS Lambda?". You can consume your environment variable from the Parameter Store, and it will also keep the state across different Lambda invocations. A variable outside the handler may work as suggested by @Jan, but what if you update the Lambda function? So, for example, the flow will be:

if store-parameter == true:
    # do the job; after the job is done,
    # update the store-parameter value
    store-parameter = false
else:
    # work with the value

Once you generate a secret in Secrets Manager, AWS will pop up complete sample code in different languages; just copy that sample and paste it into your Lambda, but you should assign the corresponding permission to the Lambda. Also, explore the handy npm package aws-param-store. BTW, an application should not update its ENV, but to deal with your use case you can follow the flow above. You can check this article too, which covers from scratch how to set and consume a secret in Lambda. | I have written a lambda function which does some processing. There is one environment variable which is set by default. Is there a way I can change it after every run? | Is there a way to modify environment variable in AWS Lambda? |
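A minimal boto3 sketch of the Parameter Store flag flow described in the answer above; the parameter name and values are hypothetical placeholders:

```python
import boto3

ssm = boto3.client("ssm")
PARAM_NAME = "/my-app/do-the-job"  # hypothetical parameter name


def handler(event, context):
    # Read the flag that survives across invocations and redeployments.
    flag = ssm.get_parameter(Name=PARAM_NAME)["Parameter"]["Value"]
    if flag == "true":
        # ... do the job ...
        # Flip the flag so the next invocation skips the job.
        ssm.put_parameter(Name=PARAM_NAME, Value="false",
                          Type="String", Overwrite=True)
    else:
        # ... work with the current value ...
        pass
```

The Lambda execution role would need ssm:GetParameter and ssm:PutParameter on that parameter for this to work.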
The problem you are encountering has nothing to do with the ALB; it is just a pass-through load balancer that listens on one port and forwards the requests to the target group as you configure. Your requests are blocked by the browser due to the mixed content. As you understood, you need to serve the contents using the same protocol, either HTTPS or HTTP. There are two possibilities that I can think of: your application code/configuration is mixing up the content, or your proxy server (Apache) is switching the protocol for your endpoints. | After introducing ALB (Application Load Balancer on AWS) in front of one EC2 instance, the Chrome browser shows a Mixed Content error. (I edited the content of the error a little for security reasons.)

Mixed Content: The page at 'https://www.sample.com/talk' was loaded over HTTPS, but requested an insecure EventSource endpoint 'http://www.sample.com/api/getData?param1=123&param2=456'. This request has been blocked; the content must be served over HTTPS.

Pattern1 has no errors. Pattern2 shows the above error. I don't know where the problem is in my technology stack.

Pattern1: ALB(443) => EC2(443)
Pattern2: ALB(443) => EC2(80)

My tech stack:
ALB
Apache 2.4
Laravel 5.7
React 16.9

I tried the following solutions but the error still happened:
1. Trust Proxy of Laravel.
2. The HTTP request to the server from React is written as a relative path (/api/getData?param1=123&param2=456), so I replaced the code with an absolute path specifying the protocol and domain name (https://www.sample.com/api~).
3. In the Apache web server, rewrite http to https (I think this is a bad idea).

Is this a general problem? If you have any hints, please help me. I'm sorry for my poor English. | After introducing ALB, Mixed Content Error happened
Convert Glue's DynamicFrame into Spark's DataFrame and use the foreach function to iterate over the rows:

def f(row):
    print(row.name)
    ...

datasource0.toDF().foreach(f) | I'm quite new to AWS Glue and still trying to figure things out; I've tried googling the following but can't find an answer... Does anyone know how to iterate over a DynamicFrame in an AWS Glue job script? For example, I'm trying to do the following:

datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "...",
    table_name = "...",
    transformation_ctx = "datasource0")

for r in datasource0:
    print(r)

But receive the following error:

'DynamicFrame' object is not iterable
Traceback (most recent call last):
TypeError: 'DynamicFrame' object is not iterable | Iterate over AWS Glue DynamicFrame
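Note that with foreach the function runs on the Spark executors, so any print output lands in the executor logs rather than the driver log. If the dataset is small enough, a driver-side loop is an alternative; a hedged sketch, assuming datasource0 is the DynamicFrame from the question:

```python
# Collect to the driver (only safe for small result sets) and iterate locally.
for row in datasource0.toDF().collect():
    print(row)

# Or iterate partition-wise on the executors without collecting everything:
def handle_partition(rows):
    for row in rows:
        pass  # process each Row here

datasource0.toDF().foreachPartition(handle_partition)
```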
You currently cannot do this in CloudWatch, but there is a workaround using Lambda. This feature has been requested from AWS and they are working on implementing it. | I want to create a rule in CloudWatch that listens for the events when ECS fails to place a task. I see examples in the AWS documentation about when a task fails, or when the state of a container instance changes, but this is not what I want. I want specifically to listen for when ECS emits the event "failed to place task". I know it will have to be some sort of event pattern that matches it, but I am not sure about the specifics of the event pattern. This example matches a task state change, which is not what I want, but I think it is similar:

{
"source": [
"aws.ecs"
],
"detail-type": [
"ECS Task State Change"
],
"detail": {
"lastStatus": [
"STOPPED"
],
"stoppedReason": [
"Essential container in task exited"
],
"containers": {
"exitCode": ["1", "2", "3", and so on...]
}
}
}I would like to be able to match the event "failed to place task". | Listen for ECS failed to place task in Cloudwatch |
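As a rough illustration of the Lambda workaround mentioned in the answer, one option is a scheduled Lambda that inspects the ECS service event log for placement failures; the cluster and service names below are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")


def handler(event, context):
    # DescribeServices returns the most recent service events, which include
    # messages like "... was unable to place a task because no container
    # instance met all of its requirements."
    resp = ecs.describe_services(cluster="my-cluster", services=["my-service"])
    for service in resp["services"]:
        for evt in service.get("events", []):
            if "unable to place a task" in evt["message"]:
                print("Placement failure:", evt["message"])
                # e.g. publish to SNS here
```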
I asked about this on the AWS Slack, and it's not possible to use resource policies; it would also add a lot of networking complexity: https://awsdevelopers.slack.com/archives/C6LDW0BC3/p1570618074008500. From an AWS dev in that thread: "hey there - when Lambda is VPC enabled, its subject to all routing rules of your VPC and Subnet. To hit any public resource, you will need a NAT GW, routing rules, and SG setting to allow communication. Resource polices will not work." | I have a public API in API Gateway using the Websockets protocol. I'm storing its connection IDs in a datastore inside my VPC, and I'm trying to write a Lambda to read those connection IDs and then send data to each of them, using await apigwManagementApi.postToConnection({ ConnectionId: connectionId, Data: postData }).promise();. This times out: the Lambda is unable to send messages to the API Gateway. So I tried adding a Gateway to execute-api: aws ec2 create-vpc-endpoint --vpc-id vpc-xyz --vpc-endpoint-type Interface --service-name com.amazonaws.eu-west-1.execute-api --subnet-ids subnet-xyz --security-group-id sg-xyz. Now I get ForbiddenException: Forbidden thrown by my calls to apigwManagementApi. I've tried looking at the docs for the execute-api Gateway, but the doc for Interfaces (https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) points to https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html and leads to creating private APIs. I don't want this; I need my API to be public. I think I might be able to use a resource policy as usual (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-examples.html), but this is a websocket API, so these instructions don't work as websocket APIs don't have a resource policies option. | AWS: Can I give a Lambda function inside a VPC access to a public Websockets API Gateway?
Well, there is a bug in Terraform > 0.12. terraform plan and terraform apply will say that it is going to add tags.email; however, it ignores tags.email when terraform apply has run. I tested using terraform state rm --target=resource-name, then did an import, and then did terraform state show resource-name; the tags.email was not imported (it was ignored)! More details: https://github.com/hashicorp/terraform-plugin-sdk/issues/167

lifecycle {
ignore_changes = [
read_capacity,
write_capacity,
tags.email
]
} | How can I ignore a certain tag defined in the locals variable?
For example: I would want to ignore the email tag for this DynamoDB table resource.

Local definition:

locals {
global_tags = {
email = "xxx.com"
owner = "xxx"
}
common_tags = {
Name = "live"
}
}
lifecycle {
ignore_changes = [
read_capacity,
write_capacity,
local.global_tags.email
]
}
tags = merge(local.global_tags,local.common_tags,var.received_nexgen_events_tags)
}

Details:
Terraform v0.12.0
+ provider.aws v2.30.0

I tried this but got an error:

Error: Unsupported attribute
on ../../../../tf_module_dynamodb/events.tf line 22, in resource "aws_dynamodb_table" "events":
22: local.global_tags.email
This object has no argument, nested block, or exported attribute named "local".

2. I also tried it like this and got "a static variable reference is required". What is a static variable reference?

lifecycle {
ignore_changes = [
read_capacity,
write_capacity,
local.global_tags["xxx.com"]
]
}
error :
22: local.global_tags["xxx.com"]
A static variable reference is required. | Ignore certain tags from terraform locals |
I know this Elastic Beanstalk stuff is not documented well, but since I did the AWS DevOps certification some time ago, which covered this, I remember some points:

1. You should bind your HTTP server to 0.0.0.0. I see you already did that.
2. Your app is not running with root privileges on your EB instance. Usually what they want you to do, probably for security reasons, is to proxy your connection through the nginx proxy which comes pre-configured on your instance. They pass the PORT environment variable to your node.js app, and you should use it to listen for upstream traffic from the proxy. [1]
3. For SSL termination on your nginx proxy to work, you must then configure SSL on the proxy accordingly, as already pointed out correctly by vikyol. [2]
4. The connection between the proxy and your app will then be unencrypted. This should not be an issue since it does not leave the machine in between.

Some more thoughts:
- I would prefer SSL termination on the load balancer for performance reasons if you have some $$ at some point.
- SSL certificate management is usually much more comfortable via ACM and ELB.

References
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/nodejs-platform-proxy.html
[2] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-nodejs.html | I'm creating a simple website on AWS Elastic Beanstalk using node js. I'm trying to add an SSL certificate to the EC2 instance but it keeps saying "Error: listen EACCES: permission denied 0.0.0.0:443". What did I miss?

EC2 Security Group:

Inbound Rules:
HTTP TCP 80 0.0.0.0/0
HTTP TCP 80 ::/0
HTTPS TCP 443 0.0.0.0/0
HTTPS TCP 443 ::/0

Outbound Rules:
All traffic All All 0.0.0.0/0

Node JS:

var ipaddress = "0.0.0.0";
var port = 443;
var options = {
key: sslKey,
cert: sslCert,
ca: [sslCa]
}
server = https.createServer(options, handleRequest);
server.listen(port, ipaddress, function () {
console.log("Server listening on port "+port);
}); | Why is https being blocked on AWS Elastic Beanstalk? |
The @Aaron_H comment and answer were useful for me, but the response mapping template provided in the answer didn't work for me.
I managed to get a working response mapping template for my case, which is similar to the case in question. In the images below you will find info for the query message(id: ID) { ... } (one message and the associated user will be returned): the SQL request to the user table; the SQL request to the message table; the SQL JOIN request for message id=1; the GraphQL schema; the request and response templates; and the AWS AppSync query. https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.message.js
Next example, for the query messages: https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.messages.js | I am using Amazon RDS with AppSync. I've created a resolver that joins two tables to get a one-to-one association between them and returns columns from both tables. What I would like to do is to be able to nest some columns under a key in the resulting parsed JSON object evaluated using $util.rds.toJSONObject().

Here's the schema:

type Parent {
col1: String
col2: String
child: Child
}
type Child {
col3: String
col4: String
}

Here's the resolver:

{
"version": "2018-05-29",
"statements": [
"SELECT parent.*, child.col3 AS `child.col3`, child.col4 AS `child.col4` FROM parent LEFT JOIN child ON parent.col1 = child.col3"
]
}

I tried naming the resulting columns with dot-syntax, but $util.rds.toJSONObject() doesn't put col3 and col4 under the child key. The reason it should is that otherwise Apollo won't be able to cache and parse the entity. Note: dot-syntax is not documented anywhere; usually, some ORMs use the dot-syntax technique to convert SQL rows to proper nested JSON objects. | AWS AppSync RDS: $util.rds.toJSONObject() Nested Objects
You're attempting to run the Go source code file. You need to run the binary:

# Build the binary for your module
GOOS=linux go build main.go
# Package the binary; note we're packaging "main", not "main.go" here:
zip function.zip main
# And upload this package, "function.zip", to Lambda

For more details, including directions for running through this process on other platforms, see the AWS Lambda deployment documentation.

Also, you'll need to set the executable bit in the zipfile. There are a bunch of ways to do this; if you want to do it on Windows, you'll need to run a Python script like this:

import zipfile
import time

def make_info(filename):
    # Build a ZipInfo entry whose external attributes carry the Unix
    # executable bit (0755) so Lambda can execute the packaged binary.
    info = zipfile.ZipInfo(filename)
    info.date_time = time.localtime()
    info.external_attr = 0x81ed0000
    info.create_system = 3
    return info

zip_source = zipfile.ZipFile("source_file.zip")
zip_file = zipfile.ZipFile("dest_file.zip", "w", zipfile.ZIP_DEFLATED)
for cur in zip_source.infolist():
    zip_file.writestr(make_info(cur.filename), zip_source.open(cur.filename).read(), zipfile.ZIP_DEFLATED)
zip_file.close()

This will take a source_file.zip and repackage it as dest_file.zip with the same contents, but with the executable bit set for all of the files. | I've created an AWS Lambda function that I'm calling from a webhook via an API Gateway. Below is the code I've built with go build -o main.go, since I've been reading that you have to specify the extension.

package main
import (
"context"
"fmt"
"github.com/aws/aws-lambda-go/lambda"
)
func HandleRequest(ctx context.Context) (string, error) {
return fmt.Sprintf("Hello!"), nil
}
func main() {
lambda.Start(HandleRequest)
}

The issue is that even though I have public permissions on my uploaded S3 function .zip, as well as role permissions, I'm still getting a permissions error:

{
"errorMessage": "fork/exec /var/task/main: permission denied",
"errorType": "PathError"
} | Permissions denied when trying to invoke Go AWS Lambda function |
You need to build a CodePipeline which will have:

- Source: It could be GitHub; code will be checked out from here.
- CodeBuild: It will build your artifact and upload it to the S3 bucket so that it can be used while deploying.
- CodeDeploy: It will fetch the code from the S3 bucket and do the deployment on the Deployment Group created by you.

Create a buildspec.yml for CodeBuild and put it in the root. Similarly, create an appspec.yml for CodeDeploy and put it in the root.

Sample buildspec.yml:

version: 0.2
phases:
  pre_build:
    commands:
      - echo "creating <path-to-folder> folder"
      - mkdir -p ./<path-to-folder>/
  build:
    commands:
      - echo "Copying the file"
      - cp index.html ./<path-to-folder>/
artifacts:
  files:
    - <path-to-folder>/**/*
    - appspec.yml

buildspec.yml will create a folder, copy your index.html into it, and put it on S3.

Sample appspec.yml:

version: 0.0
os: linux
files:
  - source: <path-to-folder>/index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: <location-of-script-you-want-to-run>
      timeout: 300
      runas: root
    - location: <location-of-script-you-want-to-run>
      timeout: 300
      runas: root
  ApplicationStop:
    - location: <location-of-script-you-want-to-run>
      timeout: 300
      runas: root

appspec.yml will download the artifact from S3, copy the file from your folder to /var/www/html/, and you can provide other scripts to start or stop the service. | I'm trying to build a simple hello world on AWS CodeBuild but I can't get the buildspec.yml working... I just want to put a simple html with some css in a folder. That's it. This is the repo that I'm trying to build from. If you look inside, the .yml has the following:

version: 0.2
run-as: ec2-user
phases:
install:
run-as: ec2-user
runtime-versions:
nodejs: 10
artifacts:
files:
- /index.html
name: artifact-name
- source: /
destination: /var/www/html
# base-directory: /var/www/html/

This and this are the docs for the .yml, but I don't understand what to write; it's not Java, not Python, just an html. EDIT: I forgot to put the error:

YAML_FILE_ERROR: mapping values are not allowed in this context at line 14

EDIT2: This is how I have the buildspec.yml, and this is how I have the env (for the CodeDeploy and pipeline I'm using my own EC2 instance, is that a problem?). (Screenshots of the buildspec.yml and the Environment settings were attached here.)

FINAL EDIT: The problem was the image! Change it to Ubuntu version 1.0. | Codebuild - build project simple hello world
If you are using a custom CMK, you have to update the key policy and assign permissions explicitly. For EBS encryption, a principal usually requires the following permissions: kms:CreateGrant, kms:Encrypt, kms:Decrypt, kms:ReEncrypt*, kms:GenerateDataKey*, kms:DescribeKey. The best way to troubleshoot key permission issues is to check the CloudTrail event history. Filter the events by event source (Event source: kms.amazonaws.com) and check if there is any "access denied" error. You can see which action is denied there and adjust the key policy accordingly. The "User name" field in the event gives you a hint for determining the ARN of the principal to use in the policy. In your case, it is very likely that one of the service-linked roles requires permissions to access the KMS key. There is a good explanation of key permissions here for the auto-scaling service-linked role. | I recently enabled default EBS encryption as mentioned here: https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/. Afterwards, when attempting to launch a Beanstalk instance, I get a generic 'ClientError' and the instance immediately terminates. If I disable default encryption it works fine. Does anyone know what changes are required to get Beanstalk to work with a customer managed encryption key? I suspected it was a permissions issue, so I temporarily gave the Beanstalk roles full admin access, but that did not solve the issue. Is there something else I am missing? I saw this related question, but it was before default EBS encryption was released and I was hoping to avoid having to copy and encrypt the AMI manually... | How to use Beanstalk with default EBS encryption enabled?
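If the blocker turns out to be the Auto Scaling service-linked role (common with Beanstalk, since instances are launched through an Auto Scaling group), one hedged way to give it access to the CMK is to add a grant with boto3; the key ARN and account ID below are placeholders:

```python
import boto3

kms = boto3.client("kms")

# Hypothetical ARNs; substitute your CMK and account.
key_id = "arn:aws:kms:us-east-1:111122223333:key/your-key-id"
asg_role = ("arn:aws:iam::111122223333:role/aws-service-role/"
            "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling")

kms.create_grant(
    KeyId=key_id,
    GranteePrincipal=asg_role,
    Operations=[
        "Encrypt", "Decrypt", "ReEncryptFrom", "ReEncryptTo",
        "GenerateDataKey", "GenerateDataKeyWithoutPlaintext",
        "DescribeKey", "CreateGrant",
    ],
)
```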
Based on the documentation of AWS::ElasticLoadBalancingV2::LoadBalancer, the expected value of SubnetMappings is a list of SubnetMapping, but you are passing two separate list items. You should change it to the following:

LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    IpAddressType: ipv4
    Name: network-loadbalancer
    Scheme: internet-facing
    SubnetMappings:
      - AllocationId: !Ref LoadBalancerElasticIP
        SubnetId: !Ref Subnet1a | In my CloudFormation template I created an Elastic IP and a Network Load Balancer.
There is no problem during creation:

Subnet1a:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId:
      Ref: VPC
    AvailabilityZone: 'eu-west-1a'
    CidrBlock:
      Ref: 042SubnetCidr
    MapPublicIpOnLaunch: true
LoadBalancerElasticIP:
  Type: AWS::EC2::EIP
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    IpAddressType: ipv4
    Name: network-loadbalancer
    Scheme: internet-facing
    Subnets:
      - Ref: Subnet1a
    Type: network

Listeners and Target Groups are created separately and are also working with the LB without the Elastic IP. Now I'm trying to assign this Elastic IP to the load balancer by changing the Subnets property to SubnetMappings, but it's giving me the error: "LoadBalancer CREATE_FAILED Property SubnetId cannot be empty."

LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    IpAddressType: ipv4
    Name: network-loadbalancer
    Scheme: internet-facing
    SubnetMappings:
      - AllocationId: !Ref LoadBalancerElasticIP
      - SubnetId: !Ref Subnet1a
    Type: network

I've been trying different solutions for a while and can't see what's wrong. Any ideas? Should I create a network interface, assign the EIP to the interface, and then the interface to the load balancer? | AWS network Load Balancer CloudFormation IP
In my case I use this method. In the same .tf file I created the log group resource with retention:

resource "aws_cloudwatch_log_group" "loggroup" {
  name              = "/aws/codebuild/test"
  retention_in_days = 30
}

And then I pass the variable to my CodeBuild project:

logs_config {
  cloudwatch_logs {
    group_name = aws_cloudwatch_log_group.loggroup.name
  }
}

And then I apply my Terraform code. | CodeBuild Logs Config: CloudWatch supports time-based log retention. For CodeBuild logs, can we achieve log retention which says something like "Retain the latest 3 successful build logs" + "Retain the latest 3 failed build logs"? | CodeBuild Log Retention Config
The AWS Free Tier for Amazon S3 includes:

- 5GB of standard storage (normally $0.023 per GB)
- 20,000 GET requests (normally $0.0004 per 1,000 requests)
- 2,000 PUT requests (normally $0.005 per 1,000 requests)

In total, it is worth up to 13.3 cents every month! So, don't be too worried about your current level of usage, but do keep an eye on charges so you don't get too many surprises. You can always Create a Billing Alarm to Monitor Your Estimated AWS Charges. The AWS Free Tier is provided to explore AWS services. It is not intended for production usage. | I have been using the Amazon S3 service to store some files. I have uploaded 4 videos and they are public. I'm using a third-party video player for those videos (JW Player). As a new user on the AWS Free Tier, my free PUT, POST and LIST requests are almost used up from the 2,000 allowed requests, and for four videos that seems ridiculous. Am I missing something? Shouldn't one upload be one PUT request? I don't understand how I've hit that limit already. | Amazon S3 Requests Usage seems high
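A hedged sketch of the billing alarm suggestion above, assuming billing alerts are enabled on the account and an SNS topic already exists (billing metrics only live in us-east-1; the topic ARN and threshold are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-5-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                    # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
)
```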
The problem is that your local computer does not know myserver. So you have several options:

1.) You can edit your local /etc/hosts and map the public IP to myserver, but then you need to do it on every computer that should access myserver.
2.) If you own a domain, you can point myserver.mydomain.com to the public IP of your server in your DNS configuration.
3.) You can also set myserver in .ssh/config:

Host myserver
    Hostname ec2-xxx-xxx-xxx-xxx.eu-central-1.compute.amazonaws.com
    IdentityFile /Users/TNowak/.ssh/id_rsa | I'm trying to log in from my local machine to an AWS Ubuntu server, but I'm unable to log in to the server using a hostname: ssh -i key.pem ubuntu@myserver. I'm getting the error below:

ssh: Could not resolve hostname myserver: Name or service not known

I'm able to log in with the public IP of this server without any issue, for example: ssh -i key.pem [email protected]

I have changed the hostname with the commands below:
1) sudo vim /etc/hosts
2) 127.0.0.1 localhost myserver
3) sudo hostnamectl set-hostname myserver

I used the link below to change the hostname: https://aws.amazon.com/premiumsupport/knowledge-center/linux-static-hostname/

How can I log in to the AWS Ubuntu server with a hostname instead of the IP address? Please help me. | Ubuntu ssh: Could not resolve hostname myserver: Name or service not known
You can add tags to all resources that are created using Serverless.
In serverless.yml, add the following under the provider section:

provider:
  name: aws
  runtime: {your runtime}
  region: {your region}
  stackTags: ${file(config/tags.yml):tags}
  tags: ${file(config/tags.yml):tags}

Note:
1. tags - adds tag information to all functions.
2. stackTags - adds tag information to all other resources generated by the CloudFormation template. | I am adding tags using Serverless, and my service is also using other resources, e.g. Kinesis. Is there any way to add tags to Kinesis through Serverless? | Is there any way to tag all resources through serverless?
This Python 3 code will list all of your own account's AMIs that are not Windows:

import boto3

ec2_client = boto3.client('ec2', region_name='us-east-2')
images = ec2_client.describe_images(Owners=['self'])

for image in images['Images']:
    if 'Platform' not in image:
        print(image['ImageId']) | I am trying to find all non-Windows images: aws ec2 describe-images --region us-east-2 --image-ids ami-** --filters "Name=platform, Values=windows". The above gives me all Windows platform IDs. Is there a way to do a "not" inside this CLI? I tried Values!= and <>, and searched through Stack Overflow but did not find anything. | How to filter not equal to in AWS CLI
If you are using the Elastic Load Balancer service on AWS, then it is not possible to route based upon CPU utilization. From How Elastic Load Balancing Works - Elastic Load Balancing:

With Application Load Balancers, the load balancer node that receives the request evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action using the round robin routing algorithm. Routing is performed independently for each target group, even when a target is registered with multiple target groups.

With Network Load Balancers, the load balancer node that receives the connection selects a target from the target group for the default rule using a flow hash algorithm, based on the protocol, source IP address, source port, destination IP address, destination port, and TCP sequence number. The TCP connections from a client have different source ports and sequence numbers, and can be routed to different targets. Each individual TCP connection is routed to a single target for the life of the connection.

With Classic Load Balancers, the load balancer node that receives the request selects a registered instance using the round robin routing algorithm for TCP listeners and the least outstanding requests routing algorithm for HTTP and HTTPS listeners. | How to set up a load balancer between 2 instances based on CPU utilisation? If my first instance has more than 50% utilisation, the second should take the load. | Load balancer based on CPU utilization
I've seen this happen when the underlying data is being secured by row-level security but the row-level-security table does not contain an entry for the current user's UserName. If this sounds like your case, ensure your RLS is properly hooked up and ensure that your username is correct. When you're embedding, it's important to note that the user name can be prefixed with the embedding role (e.g. my_embedding_role/some_user rather than just some_user). | I'm embedding a QuickSight dashboard on a web page and when it loads, I get a message saying: "There is an issue with your data set rules. Contact your data set owner for assistance. Error code: DatasetRulesUserDenied". I can't find any information about this message. Has anyone run into this problem? | Error message from embedded QuickSight dashboard - what is the meaning of DatasetRulesUserDenied?
Rekognition does not allow you to provide multiple face images for a single face ID. But you can upload multiple faces of the same person and give them the same ExternalImageId. That's the ID that you typically use to correlate a face match to your person database. When your app later presents a face for matching, Rekognition will return zero or more face matches. | I can manage to add an image to an AWS Rekognition Collection using the IndexFacesRequest. However, to improve accuracy I would like to add more images of the same user. How do I let the request know it is the same user? | AWS Rekognition: Add extra faces of same person
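A minimal boto3 sketch of the ExternalImageId approach described in the answer above; the collection, bucket, and object keys are hypothetical:

```python
import boto3

rekognition = boto3.client("rekognition")

# Index several photos of the same person under one ExternalImageId.
for key in ["people/alice-1.jpg", "people/alice-2.jpg", "people/alice-3.jpg"]:
    rekognition.index_faces(
        CollectionId="my-collection",
        Image={"S3Object": {"Bucket": "my-face-bucket", "Name": key}},
        ExternalImageId="alice",
    )

# Later, match a new photo and read the ExternalImageId off the matches.
resp = rekognition.search_faces_by_image(
    CollectionId="my-collection",
    Image={"S3Object": {"Bucket": "my-face-bucket", "Name": "incoming/query.jpg"}},
)
for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```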
Let me share a piece of the code. The key to understanding it is that when you send a BatchGetItem request to DynamoDB, you specify a map of table names and keys for each table, so the response you get is a map of table names and matched items.

placeIDs := []string{"london_123", "sanfran_15", "moscow_9"}

type Place struct {
    ID          string `json:"id"`
    Name        string `json:"name"`
    Description string `json:"description"`
}

mapOfAttrKeys := []map[string]*dynamodb.AttributeValue{}
for _, place := range placeIDs {
    mapOfAttrKeys = append(mapOfAttrKeys, map[string]*dynamodb.AttributeValue{
        "id": &dynamodb.AttributeValue{
            S: aws.String(place),
        },
        "attr": &dynamodb.AttributeValue{
            S: aws.String("place"),
        },
    })
}

input := &dynamodb.BatchGetItemInput{
    RequestItems: map[string]*dynamodb.KeysAndAttributes{
        tableName: &dynamodb.KeysAndAttributes{
            Keys: mapOfAttrKeys,
        },
    },
}

batch, err := db.BatchGetItem(input)
if err != nil {
    panic(fmt.Errorf("batch load of places failed, err: %w", err))
}

for _, table := range batch.Responses {
    for _, item := range table {
        var place Place
        err = dynamodbattribute.UnmarshalMap(item, &place)
        if err != nil {
            panic(fmt.Errorf("failed to unmarshall place from dynamodb response, err: %w", err))
        }
        places = append(places, place)
    }
} | I am using the Go SDK and using the DynamoDB BatchGetItem API. I saw this code example - https://github.com/aws/aws-sdk-go/blob/master/service/dynamodb/examples_test.go. Is there any other code example which shows unmarshalling of the response from the BatchGetItem API? | AWS DynamoDB : Unmarshalling BatchGetItem response
You can also use an SQS delay queue and check after 5 seconds whether the disconnect is still true. That is way cheaper than using Step Functions. This is also the official solution:

Handling client disconnections
The best practice is to always have a wait state implemented for lifecycle events, including Last Will and Testament (LWT) messages. When a disconnect message is received, your code should wait a period of time and verify a device is still offline before taking action. One way to do this is by using SQS Delay Queues. When a client receives a LWT or a lifecycle event, you can enqueue a message (for example, for 5 seconds). When that message becomes available and is processed (by Lambda or another service), you can first check if the device is still offline before taking further action.https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html#connect-disconnect | I need to get IoT devices status reliable.Now, I have Lambda connected toSELECT * FROM '$aws/events/presence/#'events on IoT.But I can't get reliable device status in the case when a connected device was disconnected and connected back within ~ 40 seconds. The result of this scenario - events in the order:
1. Connected - shortly after device was connected again
2. Disconnected - after ~ 40 seconds.It looks like the messagedisconnectedis not discarded when device is connected back and emitted after connection timeout in any case.I've found a workaround - request device connectivity fromAWS_ThingsIoT index. In fact, I also receive previous connectivity state, but it has timestamp field. Then, I just compare the current event.timestamp with the timestamp from index and if it higher that 30 seconds - I discarddisconnectedevent silently. But this approach is not reliable, because I am still able get wrong behavior when switching device faster - with 5 seconds interval. This is not acceptable for my project.Is it possible to use IoT events to solve my problem? I wouldn't like to go in devices index polling.. | AWS IoT thing connectivity status not reliable |
According to Special Information for Amazon SNS Policies, sns:Unsubscribe is not listed as a valid SNS policy action. Try using client.unsubscribe(SubscriptionArn='string') instead, as per the Boto3 documentation. | This one's a head scratcher:

sns_policy = {
"Version":"2012-10-17",
"Statement":[{
"Effect" : "Allow",
"Principal" : { "AWS": "*" },
"Action" : ["sns:Publish", "sns:ListSubscriptionsByTopic", "sns:Unsubscribe"],
"Resource" : "arn:aws:sns:us-west-2:234234234:test",
"Condition" : {
"ArnEquals" : {
"aws:SourceArn" : "arn:aws:lambda:us-west-2:234234234:function:*"
}
}
}]
}
sns.set_topic_attributes(TopicArn = "arn:aws:sns:us-west-2:234234234:test",
                         AttributeName = "Policy",
                         AttributeValue = json.dumps(sns_policy)
                         )

Adding the third Action array item, sns:Unsubscribe, results in "Invalid parameter: Policy statement action out of service scope! (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter;" and removing sns:Unsubscribe works fine. Why is this not being allowed by AWS? I need my lambda function to be able to subscribe and unsubscribe Queues to the SNS::test topic. | Adding additional sns:Unsubscribe to SNS access policy results in InvalidParameter error
Since this is kind of an open-ended question and you mentioned Lambdas, I would suggest checking out the Serverless framework. They have a couple of template applications in various languages/frameworks. Serverless makes it really easy to spin up Lambdas configured to an API Gateway, and you can start with the default proxy+ resource. You can also define DynamoDB tables to be auto-created/destroyed when you deploy/destroy your serverless application. When you successfully deploy using the command 'serverless deploy', it will output the URL to access your API Gateway, which will trigger your Lambda seamlessly. Then, once you have a basic "hello-world" type API hosted on AWS, you can just follow the docs for how to set up the DynamoDB library/SDK for your given framework/language. Let me know if you have any questions! PS: I would also, later on, recommend using the API Gateway Authorizer against your Cognito User Pool, since you already have auth on the Flutter app; then all you have to do is pass through the token. The Authorizer can also be easily set up via the Serverless framework! Then your API will be authenticated at the Gateway level, leaving AWS to do all the hard work :) | I am trying to create an app that uses AWS services. I already use the Cognito plugin for Flutter but can't get it to work with DynamoDB. Should I use a Lambda function and point to it, or is it possible to get data from a table directly from Flutter? If that's the case, which URL should I use? I am new to AWS services and don't know if it is possible to access a DynamoDB table with a URL, or whether I should just use a Lambda function. | DynamoDB + Flutter
If you want to list the users in your Cognito User Pool, you need to allow cognito-idp:ListUsers. You can restrict this action to a specific user pool like this:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "cognito-idp:ListUsers",
"Resource": "arn:aws:cognito-idp:<region>:<account>:userpool/<userpoolid>"
}
]
}

Have a look at Actions, Resources, and Condition Keys for Amazon Cognito User Pools. | I have an AWS Lambda function that needs to be able to run this code:

var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
cognitoidentityserviceprovider.listUsers(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
.... other useful code ....
});

In other words, it should be able to listUsers. When setting the role in IAM for that, what kind of policy do I need? | Permissions for an AWS Lambda function to list users
The issue was that the extension was probably added while a schema was active. Thank you @Antti Haapala for the link to the same question: https://dba.stackexchange.com/questions/135093/in-rds-digest-function-is-undefined-after-creating-pgcrypto-extension. I did the following with no schema selected:

DROP EXTENSION pgcrypto;
Query executed OK, 0 rows affected. (0.031 s)
CREATE EXTENSION pgcrypto;
Query executed OK, 0 rows affected. (0.046 s)
SELECT gen_salt('bf');
gen_salt
$2a$06$kyj11fcRtpwxrqgCfZEIaO

And everything works fine now. | I have a fresh AWS RDS Postgres (v11) instance. I've installed the pgcrypto extension, and it doesn't allow me to do that again:

CREATE EXTENSION pgcrypto;
Error in query (7): ERROR: extension "pgcrypto" already exists

But I can't use the extension functions:

select gen_salt('bf');
Error in query (7): ERROR: function gen_salt(unknown) does not exist
LINE 1: select gen_salt('bf')
HINT: No function matches the given name and argument types. You might need to add explicit type casts.

What am I doing wrong? | AWS RDS Postgres Crypto functions don't work even with the pgcrypto extension enabled
There are two more key differences that are not mentioned. You can see a full comparison between the two index types in the official documentation.

If you use an LSI, you can have a maximum of 10 GB of data per partition key value (table plus all LSIs). For some use cases, this is a deal breaker. Before you use an LSI, make sure this isn't the case for you.

LSIs allow you to perform strongly consistent queries. This is the only real benefit of using an LSI.

The AWS general guidelines for indexes say: "In general, you should use global secondary indexes rather than local secondary indexes. The exception is when you need strong consistency in your query results, which a local secondary index can provide but a global secondary index cannot (global secondary index queries only support eventual consistency)."

You may also find this SO answer a helpful discussion about why you should prefer a GSI over an LSI. | I have been reading the Amazon DynamoDB documentation to compare Global Secondary Index (GSI) and Local Secondary Index (LSI). I am still unclear whether, in the below use case, it matters what I use. I am familiar with things like "LSI ought to use the same partition key", etc. Here is the use case:

- I already know the sort key for my index.
- My partition key is the same in both cases.
- I want to project ALL the attributes from the original table onto my index.
- I know already, prior to creating the table, what index I need for my use case.

In the above use case, there is absolutely no difference apart from a minor latency gain in LSI vs GSI, because an LSI might end up in the same shard. I want to understand the pros vs cons in my use case. Here are some questions that I am trying to find the answer to and I have not encountered a blog that is explicit about these:

1. Use GSI only because the partition key is different?
2. Use GSI even if the partition key is the same, but I did not know during table creation that I would need such an index?
3. Are there any other major reasons where one is superior to the other (barring basic stuff like the count limit of 5 vs 20 and all)? | DynamoDB Local Secondary Index vs Global Secondary Index
You can sign up users with the adminCreateUser API call. They will receive an email with a temporary password. This approach is configurable. See: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_AdminCreateUser.html | I want to make a simple registration flow for my app: the user signs up with only an email -> a verification/registration link is sent to that email -> the user registers (sets their password) through that link. I've googled around but haven't found any way to do this with AWS Cognito. It looks like Cognito forces users to sign up with at least an email AND a password to get the confirmation link. | AWS Cognito sign up without password to get email confirmation link
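A hedged boto3 sketch of the adminCreateUser flow suggested above; the pool ID and email address are placeholders, and the user receives an invitation email with a temporary password:

```python
import boto3

cognito = boto3.client("cognito-idp")

cognito.admin_create_user(
    UserPoolId="us-east-1_EXAMPLE",       # hypothetical pool ID
    Username="new.user@example.com",
    UserAttributes=[
        {"Name": "email", "Value": "new.user@example.com"},
        {"Name": "email_verified", "Value": "true"},
    ],
    DesiredDeliveryMediums=["EMAIL"],     # deliver the invitation by email
)
```

The user then sets a permanent password on first sign-in, which gives roughly the "email first, password later" flow the question asks about.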
Go to IAM -> Roles and delete the following roles:
AWSDeepRacerServiceRole
AWSDeepRacerSageMakerAccessRole
AWSDeepRacerRoboMakerAccessRole
AWSDeepRacerLambdaAccessRole
AWSDeepRacerCloudFormationAccessRole

Then try resetting Account Resources again. | I want to give the AWS DeepRacer competition a try, but it's not properly setting up my "Account resources" and I have no idea why. This is what it's telling me (these are the red errors):

Error in IAM role creation
Please try again after deleting the following roles: AWSDeepRacerServiceRole, AWSDeepRacerSageMakerAccessRole, AWSDeepRacerRoboMakerAccessRole, AWSDeepRacerLambdaAccessRole, AWSDeepRacerCloudFormationAccessRole.

There is an issue with your IAM roles
Unable to create all IAM roles

I have tried resetting the resources as it's telling me to do, but it still doesn't work afterwards. When I go to my IAM roles there are none of the roles described above. I have checked my account and everything else seems to be working fine. I checked and I can also manually create S3 buckets and IAM roles. It's not giving me clear instructions on what's wrong or what I should do besides the ones in the image above, so I'm not sure how to proceed! | AWS DeepRacer fails to set up account resources, and I don't know why
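If the console route is awkward, the same cleanup can be scripted; a hedged boto3 sketch (roles can only be deleted after their managed and inline policies are removed):

```python
import boto3

iam = boto3.client("iam")

roles = [
    "AWSDeepRacerServiceRole",
    "AWSDeepRacerSageMakerAccessRole",
    "AWSDeepRacerRoboMakerAccessRole",
    "AWSDeepRacerLambdaAccessRole",
    "AWSDeepRacerCloudFormationAccessRole",
]

for role in roles:
    # Detach managed policies first; delete_role fails while any are attached.
    for policy in iam.list_attached_role_policies(RoleName=role)["AttachedPolicies"]:
        iam.detach_role_policy(RoleName=role, PolicyArn=policy["PolicyArn"])
    # Remove inline policies as well.
    for name in iam.list_role_policies(RoleName=role)["PolicyNames"]:
        iam.delete_role_policy(RoleName=role, PolicyName=name)
    iam.delete_role(RoleName=role)
```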
1. You can use both HTTP and HTTPS listeners.
2. Yes, you can achieve that with the ALB. You can add a rule to it that says that any request coming to port 80 will be redirected to port 443 on a permanent basis. Check out rules for ALB.
3. If you make a request from your instances to Facebook, it depends on Facebook whether your communication will be encrypted, because in that case you are the client. However, if you set up some webhook, Facebook is now the client, and to communicate with you it will be given your load balancer's DNS name. And due to point 2 in this list, Facebook will be forced to use TLS.

I'm not sure I fully understood your question number three, but here's something you may also find useful. ALB has features that allow you to authenticate users with Cognito. It explicitly says that your EC2 instances can be abstracted away from any authentication, also if it makes use of Facebook ID or Google ID or whatever. Never tried it though. | This is the first time that I am using a load balancer... I have spent quite a bit of time going through documentation and I am still quite confused. I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer. I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2). Questions:

1. Should the communication between backend servers and myALB be through HTTP or HTTPS?
2. I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
3. I want to use external ID login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.

I would appreciate any general advice. | Using Application Load Balancer with HTTPS
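A hedged boto3 sketch of the port-80-to-443 redirect mentioned in point 2 of the answer; the load balancer ARN is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Add an HTTP:80 listener whose only job is a permanent redirect to HTTPS:443,
# keeping the original host, path, and query string.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/myALB/abc123",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)
```

The same redirect can also be added in the console as a rule on an existing HTTP listener.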
I don't see an issue storing your tokens client side. The user can copy-paste the token from the request headers anytime. The token is not a secret, and it can't be tampered with because it's digitally signed. For example, below are the headers of a request.
The JWT token is stored in Authorization and can be decoded at https://jwt.io/, but it cannot be modified:

Host: aa.aa.aa
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: application/json, text/javascript, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Content-Type: application/x-www-form-urlencoded
Authorization: Bearer: token234567890-eddedede
X-Requested-With: XMLHttpRequest
Connection: keep-alive

In addition, it's best practice to expire your tokens and renew them at certain intervals. | I'm writing a serverless app with AWS (Lambda, API Gateway, Cognito, etc.) and I find myself wondering how best to secure my stack. I've read that for applications using a server, EC2 or otherwise, best practice is to keep users' ID tokens stored on the backend. This makes sense, since a node process would provide me a long-term solution for hanging onto and reusing ID tokens. A serverless app, on the other hand, does not provide this luxury. I've considered just keeping it on the front end, since after all, JWT tokens provided by Cognito are signed and should therefore be tamper proof, but this seems a bit unsettling from my end. I'd much prefer a system where users have no direct access to their own tokens. I've also thought about just requesting a new token for every request sent to Lambda, but this too seems like a far from perfect solution. Is there some kind of accepted best practice surrounding serverless authentication and authorization? Am I on the right track just storing my tokens client side while the user has the app open? | Access token and ID token storage for serverless app
Yes, the read replica will be automatically updated after the migration is committed on the primary server. It is just using Postgres' built-in replication features. | I created a read replica from a Postgres DB in AWS RDS. Now, if I run a migration on the main DB, will the read replica automatically migrate? | Does a migration in a DB automatically get reflected in the read replica?
The benefits I can think of are:

- a simple, declarative DSL.
- features like logging provided by cfn-init.
- cfn-hup can be used to detect changes in resource metadata when you run update-stack.

If you like a simple, declarative DSL for your configuration that is similar to Puppet & Chef, then use AWS::CloudFormation::Init. And if you are finding it cumbersome, it could be that UserData is a better fit. Too many configSets could be an indication that you're using the wrong tool (or ordering things that don't really need to be ordered!). Also, be aware that AWS::CloudFormation::Init is old, and it predates CloudFormation's support for YAML templates. Prior to YAML support, putting scripts in UserData was difficult, because each line of your shell script needed to be encoded in a JSON array. This made it difficult to read and easy to make mistakes. The use of AWS::CloudFormation::Init, in my opinion, made more sense given only those two choices. These days, my preference is to keep shell scripts outside CloudFormation, unit test them externally, then feed them in as a base64 encoded string as a parameter. (Beware the 4096 character limit on parameters, however!) Of course, it also depends on the complexity of your configuration. You wouldn't want to do too much in a shell script in UserData, as it would quickly become unmaintainable. | I've been using CloudFormation templates for a while now and I keep asking myself the following question: what are the benefits of using AWS::CloudFormation::Init over adding those statements directly into the UserData block? So far I've found the AWS::CloudFormation::Init way more verbose, especially when you need several configSets to assure some kind of ordering on your statements. Also, certain AMIs don't support running that init block out-of-the-box and need extra scripts (cfn-init), which adds even more verbosity. | Benefits of using AWS::CloudFormation::Init
I was using an Amazon Linux 2 image, so I have to mount the volume using this command:

sudo mount -o nouuid /dev/xvdf1 /mnt/tempvol

And now it works. | I attached a volume to an EC2 instance, and now when I'm trying to mount it I'm getting this error:

sudo mount /dev/xvdf /mnt/tmp
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error.

What's the problem? | Unable to mount a volume on an EC2 instance
There are several options, depending on price and the effort needed:

1. The simplest but somewhat more expensive solution is to use EFS + NFS Persistent Volumes. However, EFS has serious throughput limitations; read here for details.
2. You can create a pod with an NFS server inside and again mount NFS Persistent Volumes into pods. See an example here. This requires more manual work and is not completely highly available: if the NFS-server pod fails, you will observe some (hopefully) short downtime before it gets recreated.
3. For an HA configuration you can provision GlusterFS on Kubernetes. This requires the most effort but allows for great flexibility and speed.
4. Although mounting S3 into pods is somehow possible using awful crutches, this solution has numerous drawbacks and overall is not production grade. For testing purposes you can do that. | I have the following: 2 pod replicas, load balanced. Each replica has 2 containers sharing a network. What I am looking for is a shared volume... I am looking for a solution where the 2 pods and each of the containers in the pods can share a directory with read+write access. So if one container from pod 1 writes to it, containers from pod 2 will be able to access the new data. Is this achievable with persistent volumes and PVCs? If so, what do I need, and what are pointers to more details around which FS would work best, static vs dynamic, and storage class? Can the volume be an S3 bucket? Thank you! | Kubernetes AWS shared persistent volume
I think you should be cloning the Github repo in SageMaker instance and not importing the files from S3. I was able to reproduce the Bitcoin Trading Bot notebook from SageMaker by cloning it. You can follow the below stepsCloning Github Repo to SageMaker NotebookOpen JupyterLab from the AWS SageMaker console.From the JupyterLab Launcher, open the Terminal.Change directory to SageMakercd ~/SageMakerClone the BitCoin Trading Botgit repogit clone https://github.com/llSourcell/Bitcoin_Trading_Bot.git
cd Bitcoin_Trading_BotNow you can open the notebookBitcoin LSTM Prediction.ipynband select the Tensorflow Kernel to run the notebook.Adding files from local machine to SageMaker NotebookTo add files from your local machine to SageMaker Notebook instance, you can usefile uploadfunctionality in JupyterLabAdding files from S3 to SageMaker NotebookTo add files from S3 to SageMaker Notebook instance, use AWS CLI or Python SDK to upload/download files.For example, to downloadlstm.pyfile from S3 to SageMaker using AWS CLIaws s3 cp s3://mybucket/bot/src/lstm.py .Usingboto3APIimport boto3
s3 = boto3.resource('s3')
s3.meta.client.download_file('mybucket', 'bot/src/lstm.py', './lstm.py') | I have a file I want to import into a Sagemaker Jupyter notebook python 3 instance for use. The exact code would be 'import lstm.' I can store the file in s3 (which would probably be ideal) or locally, whichever you prefer. I have been searching the internet for a while and have been unable to find a solution to this. I am actually just trying to run/understand this code from Suraj Raval's youtube channel:https://github.com/llSourcell/Bitcoin_Trading_Bot. The 'import lstm' line is failing when I run, and I am trying to figure out how to make this work.I have tried:
from s3://... import lstm. failed
I have tried some boto3 methods and wasn't able to get it to work.import time
import threading
import lstm, etl, json. ##this line
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
configs = json.loads(open('configs.json').read())
tstart = time.time()I would just like to be able to import the lstm file and all the others into a Jupyter notebook instance. | Importing a file into jupyterlabs from s3 |
In Amazon Cognito, the User Pool ID is considered to be a sensitive piece of information, and it is used only in Admin API calls. The SignUp API call is not AWS SigV4 signed, and it is meant to run on the browser side instead of the server side. From the App Client ID, Cognito implicitly understands which User Pool you are referring to in your code. Hence, you can use the code in the documentation, and users will get added to your User Pool without the User Pool ID being a parameter in the API call. | In the aws-sdk Cognito documentation there is a function listed called signUp() that, quote, "Registers the user in the specified user pool and creates a user name, password, and user attributes." However, there is no parameter for a user pool ID. How exactly does one specify the user pool they want to add to? At first glance I thought maybe it was just missing in the documentation, but I tried adding UserPoolId as a property in the parameters object, to which it responded with an error about an unexpected field. There is also no function parameter to accept the pool ID. My only other guess was that maybe the CognitoIdentityServiceProvider object accepted it in its constructor, but that also does not appear to be the case. I am aware that the API also provides the function AdminCreateUser() to add users, but I don't want to use it if there's a better way. Documentation here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#signUp-property. Any ideas? | CognitoIdentityServiceProvider.signUp() doesn't accept user pool id?
You can do it like this:

const AWS = require('aws-sdk');
// config.json
{"accessKeyId": <YOUR_ACCESS_KEY_ID>, "secretAccessKey": <YOUR_SECRET_ACCESS_KEY>, "region": "us-east-1" }
AWS.config.loadFromPath('./config.json');

You can also do it like this:

var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
"accessKeyId": <YOUR_ACCESS_KEY_ID>,
"secretAccessKey": <YOUR_SECRET_ACCESS_KEY>
}); | I am building an environment which lets users run their nodejs code. It is pretty much like what CodePen or runit does. If users need to run AWS SDK code in the environment, I don't know how to handle their credentials and configs. I know the AWS nodejs SDK has a method config() which I can pass all the configuration into. But usually developers' AWS credentials and config are saved in the ~/.aws/credentials and ~/.aws/config files. If I ask users to upload these files into the environment, how can I convert them into a parameter that can be read by the AWS SDK? Is there an easy way to do this, or do I have to manually parse these files? | How to pass aws credential and config to aws sdk in nodejs programmatically?
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_lower_case_table_names

"It is prohibited to start the server with a lower_case_table_names setting that is different from the setting used when the server was initialized. The restriction is necessary because collations used by various data dictionary table fields are based on the setting defined when the server is initialized, and restarting the server with a different setting would introduce inconsistencies with respect to how identifiers are ordered and compared."

This is a question for AWS regarding support for this option. It depends on how they initialize RDS instances. I'm guessing that they clone an image of a pre-initialized InnoDB tablespace instead of initializing a new tablespace. | I'm trying to create a new MySQL v8.0.11 RDS DB instance with "lower_case_table_names=1". The creation of the database is stuck, and in the logs I can see the following error: "Different lower_case_table_names settings for server ('1') and data dictionary ('0')." Has anyone gone through this? Please help. | Can not create a new RDS MySQL DB instance with "lower_case_table_names=1"
As it turns out, this was not a problem with security groups. It was just coincidental that it worked at the time when I changed the security groups. It seems the containers aren't starting fast enough to accept connections from the ALB when it starts the health checks. What helped:

- changing healthCheckGracePeriod to two minutes
- tweaking the health check parameters for the target group: interval, unhealthyThreshold, healthyThreshold

Also, in my application logs it looks like the service gets two health check requests at once. By default the unhealthy threshold is set to 2, so maybe the service was marked unhealthy after only one health check. | I have an ECS Fargate cluster with an ALB to route the traffic to. The Docker containers are listening on port 9000. My containers are accessible over the ALB DNS name via HTTPS. That works. But they keep getting stopped/deregistered from the target group and restarted, only to be in an unhealthy state immediately after they are registered in the target group. The ALB has only one listener, on 443. The security groups are set up so that sg-alb allows outbound traffic on port 9000 to sg-fargate, and sg-fargate allows all inbound traffic on port 9000 from sg-alb. The target group is also set up to use port 9000. I'm not sure what the problem is, or how to debug it. Everything is set up with CDK. Not sure if that's relevant. | AWS Application Load Balancer health checks fail
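A hedged boto3 sketch of the two tweaks listed in the answer above; the cluster, service, and target group ARN are placeholders:

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Give new tasks two minutes before ALB health checks can mark them unhealthy.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    healthCheckGracePeriodSeconds=120,
)

# Loosen the target group health check parameters.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/abc123",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=5,
)
```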
Your configuration is like this:

@Bean
public MessageProducerSupport sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(amazonSqs, SQS_QUEUE_NAME);
    adapter.setOutputChannel(inboundChannel());
    adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
    adapter.setVisibilityTimeout(RETRY_NOTIFICATION_AFTER);
    return adapter;
}

Where the inboundChannel is like this:

@Bean
public QueueChannel inboundChannel() {
    return new QueueChannel();
}

So, this is a queue, and therefore async: the message from that queue is processed on a separate thread by the TaskScheduler, which polls this kind of channel according to your PollerMetadata configuration. In this case any errors in the consumer are thrown on that thread as well and don't reach the SqsMessageDrivenChannelAdapter for the expected error handling. This is technically fully different from your @SqsListener experience, which is really called directly on the container thread, and therefore its error handling is applied. You need to either revise your logic for how you would like to handle errors on that separate thread, or just don't use a QueueChannel right after SqsMessageDrivenChannelAdapter and let it throw and handle errors in the underlying SQS listener container, as in the case of @SqsListener. | I'm working on integrating Spring Integration with an AWS SQS queue. I have an issue when my method annotated with @ServiceActivator throws an exception: it seems that in such cases the message is removed from the queue anyway. I've configured MessageDeletionPolicy to ON_SUCCESS in SqsMessageDrivenChannelAdapter. Here is my channel/adapter configuration: https://github.com/sdusza1/spring-integration-sqs/blob/master/src/main/java/com/example/demo/ChannelConfig.java. I've tried doing the same using the @SqsListener annotation and messages are not deleted, as expected. I've created a mini Spring Boot app here to demonstrate this issue: https://github.com/sdusza1/spring-integration-sqs. Please help :) | Spring Integration + SQS - retry on exception doesn't work
I'd question your architecture. If you are running into problems with how AWS has designed a service (i.e. lambda 250mb max size) its likely you are using the service in a way it wasn't intended.An anti-pattern I often see is people stuffing all their code into one function. Similar to how you'd deploy all your code to a single server. This is not really the use case for AWS lambda.Does your function do one thing? If not, refactor it out into different functions doing different things. This may help remove dependencies when you split into multiple functions.Another thing you can look at is can you code the function in a different language (another reason to keep functions small). I once had a lambda function in python that went over 250mb. When I looked at solving the same problem with node.js, my function size dropped to 20mb. | When I want to launch some code serverless, I use AWS Lambda. However, this time my deployment package is greater than 250MB.So I can't deploy it on a Lambda...I want to know what are the alternatives in this case? | Alternative to AWS lambda when deployment package is greater than 250MB? |
Traditionally in Big Data processing ("Data Lakes"), information related to a single table is stored in a directory rather than a single file. So, appending information to a table is as simple as adding another file to the directory. All files within the directory will need to have the same schema (such as CSV columns, or JSON data). The directory of files can then be used with tools such as Spark, Hive and Presto on Hadoop, Amazon Athena, and Amazon Redshift Spectrum. A benefit of this method is that the above systems can process multiple files in parallel rather than being restricted to processing a single file in a single-threaded way. It is also common to compress the files using technologies like gzip. This lowers storage requirements and makes it faster to read data from disk. Adding additional files is easy (just add another csv.gz file) rather than having to unzip, append and re-zip a file. Bottom line: it would be advisable to re-think your requirement for "one great big CSV file". | I'm trying to build a very large CSV file on S3. I want to build this file on S3. I want to append rows to this file in batches. The number of rows could be anywhere between 10k and 1M. The size of each batch could be < 5 MB (so multi-part upload is not feasible). What would be the right way of accomplishing something like this? | Write 1 million rows of CSV into S3 by batches
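A small illustration of the "directory of part files" idea from the answer above, using boto3 and gzip; the bucket name and prefix are made up for the example:
import gzip
import io
import uuid

import boto3

s3 = boto3.client("s3")

def append_batch(rows, bucket="my-data-lake", prefix="big_table/"):
    # Each batch becomes its own gzipped CSV "part file" under the table's prefix.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write("\n".join(",".join(map(str, r)) for r in rows).encode("utf-8"))
    key = f"{prefix}part-{uuid.uuid4()}.csv.gz"
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
    return key
Query engines such as Athena or Redshift Spectrum can then be pointed at the prefix and will read all part files as one table.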
From Access Control Policies - Amazon Athena: "To run queries in Athena, you must have the appropriate permissions for: the Athena actions; the Amazon S3 locations where the underlying data that you are going to query in Athena is stored; ..." So, it seems that the IAM User who is executing the Athena query requires access to the Amazon S3 location. This could be done by adding a Bucket Policy to the S3 bucket in the other account that permits the IAM User access to the bucket. To explain better: Account-A with IAM-User-A and AWS Athena; Account-B with Bucket-B that has a Bucket Policy granting access to IAM-User-A. | Can I create a database and table in the Athena service within my account to access S3 data in another account? I went over the below link and I assume as per this documentation both Amazon Athena and the S3 bucket have to be in the same account and access is provided to the user in another account. https://console.aws.amazon.com/athena/home?force&region=us-east-1#query | Amazon Athena Cross Account Access
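For illustration, a bucket policy along the lines of the answer above could be attached to Bucket-B from Account-B; the account ID, user name and bucket name below are placeholders:
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAthenaUserFromAccountA",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/IAM-User-A"},  # placeholder account/user
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::bucket-b",      # placeholder bucket
            "arn:aws:s3:::bucket-b/*",
        ],
    }],
}

# Apply the policy from Account-B so IAM-User-A in Account-A can read the data Athena queries.
boto3.client("s3").put_bucket_policy(Bucket="bucket-b", Policy=json.dumps(policy))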
Just put every action into one single stage. AWS CodePipeline only runs one execution in a stage at a time, so a single stage effectively serializes the pipeline. | My pipeline has a few "static" resources (a few CloudFormation stacks). If the pipeline is running on several source changes in parallel, it leads to errors.
Is there an option to queue AWS CodePipeline executions, maybe lock the pipeline to only one execution at a time? | AWS CodePipeline queue
Athena uses the Glue catalog to store all the information about databases and tables. Athena itself is just the execution engine. When you run a query in Athena it starts by parsing the SQL, then asking Glue about the tables that are included in the query, what columns they have, and where their data is located. It uses this information to validate the query (do all the columns mentioned in the query exist, for example), and then it uses the data location(s) to plan the execution of the query. You can read all about how Athena and Glue work together in the Integration with AWS Glue document. | I am using the code listed here to query data using Athena: https://gist.github.com/schledererj/b2e2a800998d61af2bbdd1cd50e08b76 This needs the below policy to work:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BroadAccess",
"Action": [
"glue:GetTable",
"glue:GetPartitions"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Why is permission required for Glue resources for this to work? | Fetching data from Athena and glue permissions
If you can visit your s3 resources in your lambda function, then basically do this to check the rows:
def lambda_handler(event, context):
import boto3 as bt3
s3 = bt3.client('s3')
csv1_data = s3.get_object(Bucket='the_s3_bucket', Key='1.csv')
csv2_data = s3.get_object(Bucket='the_s3_bucket', Key='2.csv')
contents_1 = csv1_data['Body'].read()
contents_2 = csv2_data['Body'].read()
rows1 = contents_1.split()
rows2 = contents_2.split()
return len(rows1), len(rows2)It should work directly, if not, please let me know. BTW, hard codingthe bucket and file nameinto the function like what I did in the sample is not a good idea at all.Regards. | good afternoon. I am hoping that someone can help me with this issue.I have multiple CSV files that are sitting in an s3 folder. I would like to use python without the Pandas, and the csv package (because aws lambda has very limited packages available, and there is a size restriction) and loop through the files sitting in the s3 bucket, and read the csv dimensions (length of rows, and length of columns)For example my s3 folder contains two csv files (1.csv, and 2 .csv)
my code will run through the specified s3 folder, and put the count of rows, and columns in 1 csv, and 2 csv, and puts the result in a new csv file. I greatly appreciate your help! I can do this using the Pandas package (thank god for Pandas, but aws lambda has restrictions that limits me on what I can use)AWS lambda uses python 3.7 | AWS Lambda: read csv file dimensions from an s3 bucket with Python without using Pandas or CSV package |
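Building on the answer above, a hedged sketch that loops over every .csv key under a prefix instead of hard coding the file names; the bucket and prefix values are placeholders:
import boto3

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    results = {}
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="the_s3_bucket", Prefix="my/folder/"):
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith(".csv"):
                continue
            body = s3.get_object(Bucket="the_s3_bucket", Key=obj["Key"])["Body"].read()
            rows = body.decode("utf-8").splitlines()
            cols = rows[0].split(",") if rows else []
            # Record (row count, column count) per file; write this dict to a results CSV if needed.
            results[obj["Key"]] = {"rows": len(rows), "columns": len(cols)}
    return results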
Just use the profile_name parameter when creating the session object.
session = boto3.Session(profile_name='dev')
# Any clients created from this session will use credentials
# from the [dev] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html | When using the AWS CLI it references the credentials and config files located in the ~/.aws directory. And you use the --profile flag to indicate which account you want. Such as:
aws ec2 describe-instances --profile=company-lab
aws ec2 describe-instances --profile=company-nonprod
etc. But I am new to scripting in Python 3 and boto3 and want to do the same thing there. How can I switch between AWS accounts using Python? | Switching AWS Accounts in Python
You are looking at 2 AWS services here. In the AWS CLI, run-instances refers to creating an EC2 server. create-instance is used in AWS OpsWorks to create an instance in an OpsWorks stack. In OpsWorks there are stacks and layers: a stack is a collection of layers, and a layer represents a stack component, such as a load balancer or a set of application servers. stack-id refers to the stack's ID to identify the desired OpsWorks stack, and layer-id refers to a particular layer in the given stack. I'll add the CLI documentation below since you didn't find it. EC2 - https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html OpsWorks - https://docs.aws.amazon.com/cli/latest/reference/opsworks/create-instance.html | I know run-instances is to create the EC2 instance in the AWS CLI, but what is create-instance? And also, what are stack-id and layer-ids? I googled it but didn't find any answer. | What is the difference between Run-instances and create-instances in aws-cli?
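To make the distinction concrete, here is roughly how the two calls look with boto3; every ID below is a placeholder:
import boto3

# EC2: launches a plain EC2 instance.
boto3.client("ec2").run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# OpsWorks: creates an instance inside an existing OpsWorks stack/layer.
boto3.client("opsworks").create_instance(
    StackId="11111111-2222-3333-4444-555555555555",     # placeholder stack ID
    LayerIds=["66666666-7777-8888-9999-000000000000"],  # placeholder layer ID
    InstanceType="t3.micro",
)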
In general, you no longer need to worry about hot partitions in DynamoDB, especially if the partition keys which are being requested the most remain relatively constant.More Info:https://aws.amazon.com/blogs/database/how-amazon-dynamodb-adaptive-capacity-accommodates-uneven-data-access-patterns-or-why-what-you-know-about-dynamodb-might-be-outdated/ | I’m looking at adding row-level permissions to a DynamoDB table usingdynamodb:LeadingKeysto restrict access per Provider ID. Currently I only have one provider ID, but I know I will have more. However they providers will vary in size with those sizes being very unbalanced.If I use Provider ID as my partition key, it seems to me like my DB will end up with very hot partitions for the large providers and mostly unused ones for the smaller providers. Prior to adding the row-level access control I was using deviceId as the partition key since it is a more random name, so partitions well, but now I think I have to move that to the sort key.Current partitioning that works well:HASHKEY: DeviceIdWith permissions I think I need to go to:HASHKEY: ProviderID (only a handful of them)
RangeKey: DeviceIdAny suggestions as to a better way to set this up? | How do I avoid hot partitions when using DyanmoDB row level access control? |
I was assuming that testing from an EC2 instance would verify that there was no routing or firewall or DNS issue. This was a bad assumption, as it turns out that an API gateway does not necessarily live in the same network or have the same access as an EC2 in the same region. Thanks to help from @Michael - sqlbot I was able to determine that this was in fact a network access issue, but it was not one that my DevOps team was able to resolve due to the API gateway not being in the right network.Instead, the solution turned out to be that I had to write a small lambda function (fronted by an API gateway resource with lambda proxy integration), similar to how I have written other lambdas for the RESTful APIs in our application. From the lambda I have more flexibility in accessing internal resources, including the ability to configure VPCs, so I was able to use standard HTTP client APIs in the lambda to proxy the call to the back-end resource. | I am trying to use an AWS API gateway to configure simple http proxy, following the example from this page:https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-http.htmlThe issue I'm running into is that it seems to work if my endpoint URL is another AWS API gateway, but I can't get it to work for any other URL.I'm creating a proxy resource with resource path /{proxy+} and enabling API gateway CORS, then creating ANY method as HTTP Proxy and content handling passthrough (just like the petshop example in the above mentioned example). If I set my endpoint to be another AWS API gateway, it works.However, if I set my endpoint to be a non-AWS URL I get back a 500 response and I see in my API gateway Cloudwatch log:Execution failed due to configuration error: Invalid endpoint addressMy endpoint is on my internal company network, but as a test I also tried proxying to an Internet address and this failed with the same error. (I should note that in both cases, I am trying to proxy to an https address, not just http.)In order to rule out a network routing or firewall issue I logged into an AWS EC2 instance in our same region and tested access to the endpoint URL via curl, and this was successful.Has anyone successfully used API gateway simple https proxy to anything other than another AWS API gateway? | AWS API Gateway simple http proxy Invalid endpoint address |
You'll want to use the send_raw_email() method instead, since SMTP priority is an SES-supported custom header field, though not a boto3 send_email() argument. You can read more about the SMTP priority field in the "SMTP sending a priority email" Stack Overflow answer. | I'm sending emails using an AWS Lambda function that calls the SES service via boto3. I managed to get everything working, however I would like to add 'important' priority on the email. Reading the boto3 API docs, it does not state setting priority. Has anyone done this for SES please? Below is an example of the call to boto3:
import boto3
ses = boto3.client('ses')
email_response = ses.send_email(
Destination={
'BccAddresses': [
],
'CcAddresses': [
],
'ToAddresses': [
email_address
],
},
Message={
'Body': {
'Html': {
'Charset': 'UTF-8',
'Data': html_output,
},
},
'Subject': {
'Charset': 'UTF-8',
'Data': 'My msg'
},
},
Source=SENDER
) | How to set Importance priority when sending email via SES using boto3 |
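A hedged sketch of the send_raw_email approach from the answer above, marking the message as high priority via standard mail headers; the sender and recipient addresses are placeholders:
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import boto3

ses = boto3.client("ses")

msg = MIMEMultipart("alternative")
msg["Subject"] = "My msg"
msg["From"] = "sender@example.com"        # placeholder, must be SES-verified
msg["To"] = "recipient@example.com"       # placeholder
# Priority-related headers; most mail clients honour one of these.
msg["X-Priority"] = "1"
msg["Importance"] = "High"
msg.attach(MIMEText("<p>Hello</p>", "html", "utf-8"))

ses.send_raw_email(
    Source=msg["From"],
    Destinations=[msg["To"]],
    RawMessage={"Data": msg.as_string()},
)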
<project_name>, <build-status> and <current-phase> need to be passed as separate values; you cannot use them for string interpolation. [doc] You will need to modify your Lambda input format and construct your message inside the Lambda function:
{
"channel":"#XYZ",
"project_name": <project_name>,
"current-phase": <current-phase>,
"build-status": <build-status>
} | Goal: I want to trigger notification to slack on any phase change in codebuild.
I have a lambda that does for me and it expects a request as follows:{
"channel":"#XYZ",
"message":"TESTING <project_name> from <build-status> to <current-phase>"
}
So I try to create an event from CloudWatch Events and trigger my Lambda. I try to use an Input Transformer, in which the placeholders are values of the input path from CloudWatch:
"project_name": "$.detail.project-name",
"current-phase": "$.detail.current-phase",
"build-status": "$.detail.build-status",
}
But on adding this
I get the error:
There was an error while saving rule input_transformer_test. Details:
InputTemplate for target Id64936775145825 contains placeholder within
quotes..
What am I doing wrong here? | How to create JSON from AWS cloudwatch Input Transformer
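For reference, a sketch of how an input transformer along the lines of the answer above could be attached to the CloudWatch Events rule with boto3; the rule name, target id and Lambda ARN are placeholders, and the placeholder names here use underscores:
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="codebuild-phase-change",   # placeholder rule name
    Targets=[{
        "Id": "notify-slack-lambda",  # placeholder target id
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify-slack",  # placeholder ARN
        "InputTransformer": {
            "InputPathsMap": {
                "project_name": "$.detail.project-name",
                "current_phase": "$.detail.current-phase",
                "build_status": "$.detail.build-status",
            },
            # Placeholders are passed as separate JSON values, not interpolated inside a quoted string.
            "InputTemplate": '{"channel": "#XYZ", "project_name": <project_name>, '
                             '"current-phase": <current_phase>, "build-status": <build_status>}',
        },
    }],
)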
The following code prints the names and IDs of the active EMR clusters:
import boto3
client = boto3.client("emr")
response = client.list_clusters(
ClusterStates=[
'STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING', 'TERMINATING'
]
)
for cluster in response['Clusters']:
print(cluster['Name'])
print(cluster['Id']) | I'm trying to list all active clusters on EMR using boto3, but my code doesn't seem to be working; it just returns null. I'm trying to do this using boto3:
1) List all active EMR clusters
aws emr list-clusters --active
2) List only cluster IDs and names of the active ones
Cluster names:
aws emr list-clusters --active --query "Clusters[*].{Name:Name}" --output text
Cluster IDs:
aws emr list-clusters --active --query "Clusters[*].{ClusterId:Id}" --output text
But I'm blocked in the starting stage of using boto3:
import boto3
client = boto3.client("emr")
response = client.list_clusters(
ClusterStates=[
'STARTING',
],
)
print responseAny suggestions how can i convert those CLI commands to boto3Thanks | List all "Active" EMR cluster using Boto3 |
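If there are more clusters than a single API page returns, the same call can be wrapped in a paginator; a small sketch based on the answer's code:
import boto3

client = boto3.client("emr")
paginator = client.get_paginator("list_clusters")
for page in paginator.paginate(
    ClusterStates=["STARTING", "BOOTSTRAPPING", "RUNNING", "WAITING", "TERMINATING"]
):
    for cluster in page["Clusters"]:
        # Each page holds a batch of clusters; print name and id for every active one.
        print(cluster["Name"], cluster["Id"])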
Here you got an error, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records, and in the err block you set res.status, so first that block is executed and a response is sent to the client; after this, when it tries to execute res.json, this error occurs. All you need is to use another condition for data:
s3Client.upload(params, (err, data) => {
if (err) {
res.status(500).json({error:"Error -> " + err});
} else if(data){
res.json({message: 'File uploaded successfully! -> keyname = ' + params.Key,file_name: params.Key});
}
});
With this code, you can handle your error without getting any ERR_HTTP_HEADERS_SENT error. | I'm running out of ideas on how to solve this problem. I get Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client when I run my Node.js code on an EC2 instance (production), but it works perfectly on my localhost. What I'm actually doing is uploading an image to an S3 bucket using a Node.js API: https://grokonez.com/aws/node-js-restapis-upload-file-to-amazon-s3-using-express-multer-aws-sdk
var stream = require('stream');
const s3 = require('../config/s3.config.js');
exports.doUpload = (req, res) => {
const s3Client = s3.s3Client;
const params = s3.uploadParams;
params.Key = Date.now() +'_'+req.file.originalname
params.Body = req.file.buffer;
s3Client.upload(params, (err, data) => {
if (err) {
res.status(500).json({error:"Error -> " + err});
}
res.json({message: 'File uploaded successfully! -> keyname = ' + params.Key,file_name: params.Key});
});
}this is my code in controller | Cannot set headers after they are sent to the client in production |
Finally I solved by myself.Looks like it was Glue/AWS specific issue, not spark or python.After several trials, I got an error message that says "ListObject" operation has failed when starting Spark(pyspark) REPL.ListObject is obviously the name of boto3's API call to access contents on S3.So I checked its IAM role which had AWSGlueConsoleFullAccess with some S3Access included in it already, attached AmazonS3FullAccess policy to it, and the error disappeared.Also, I made another glue-development-endpoint cluster and also there was no error on the new cluster either, even without S3FullAccess.Maybe every time I wake up Spark on a glue cluster, the cluster automatically tries to fetch some update from some designated S3 bucket, and sometimes it got in trouble when the cluster was built just before some update release. | I've learned Spark in Scala but I'm very new to pySpark and AWS Glue,so I followed this official tutorial by AWS.https://docs.aws.amazon.com/ja_jp/glue/latest/dg/aws-glue-programming-python-samples-legislators.htmlI successfully created development endpoint,connected to pyspark REPL via ssh and typed in these commands:import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
glueContext = GlueContext(SparkContext.getOrCreate())
But on the last line, I got:
>>> glueContext = GlueContext(SparkContext.getOrCreate())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/share/aws/glue/etl/python/PyGlue.zip/awsglue/context.py", line 44, in __init__
File "/usr/share/aws/glue/etl/python/PyGlue.zip/awsglue/context.py", line 64, in _get_glue_scala_context
TypeError: 'JavaPackage' object is not callable
I also tried importing py4j manually, but it just didn't work. How can I fix this? Any help will be appreciated. | TypeError: 'JavaPackage' object is not callable on PySpark, AWS Glue
As I mentioned in the question, I have created an SQS queue and subscribed it to the SNS topic. Then I can check if the event was published.
private String subscriptionArn;
private String queueUrl;
@BeforeEach
public void createAndRegisterQueue() {
queueUrl = sqs.createQueue("mytest-" + UUID.randomUUID()).getQueueUrl();
subscriptionArn = Topics.subscribeQueue(sns, sqs, TOPIC_ARN, queueUrl);
}
@AfterEach
public void deleteAndUnregisterQueue() {
sns.unsubscribe(subscriptionArn);
sqs.deleteQueue(queueUrl);
}
@Test
public void testEventPublish() throws Exception {
// request processing
Response response = httpClient.execute(new HttpRequest(ENDPOINT));
assertThat("Response must be successful.", response.statusCode(), is(200));
// wait for processing to be completed
Thread.sleep(5000);
// check results
Optional<String> published = sqs.receiveMessage(queueUrl).getMessages()
.stream()
.map(m -> new JSONObject(m.getBody()))
.filter(m -> m.getString("TopicArn").equals(TOPIC_ARN))
.map(m -> new JSONObject(m.getString("Message")))
// ... filter and map the expected result
.findAny();
assertThat("Must be published.", published.isPresent(), is(true));
}
If there is not an easier solution without creating additional resources (a queue), this works fine. | I want to test a live component which, as a result of execution, sends a message to an SNS topic. Is there a way to create an "inline" client subscription with the Java SDK? Something like this (pseudocode):
@Test
public void testProcessingResult() throws Exception {
final Box<Object> resultBox = new Box();
snsClient.subscribe(new SubscribeRequest(topicArn,
msg -> resultBox.setValue(extractResult(msg))
));
...
httpClient.post(endpoint, params); // send the request
Thread.sleep(2000); // wait for eventual processing
assertEquals(expected, resultBox.getValue());
}
One way to achieve this could be to create an Amazon SQS queue, register the test client to it, and then get the result via polling. Is there an easier way? | Amazon SNS Inline Java Subscription for Testing
Pre Token Generation is currently not available in the UserPool LambdaConfig and hence not supported by CloudFormation (which the Serverless framework uses). At the moment it can only be configured via the console or the AWS CLI. | I have a PreTokenGenerator function which adds an additional claim to the id token. In my serverless.yml I have the following definition.
functions:
issueAuthToken:
handler: src/handlers/cognitoPreToken.handler
events:
- cognitoUserPool:
pool: ${self:provider.stage}-user-pool
trigger: PreTokenGeneration
This runs and deploys, however it does not wire up the user pool trigger in the user pool (see below). How can I get this trigger set up? The documentation seems to be pretty lacking when it comes to Cognito triggers. | Serverless framework Cognito Userpool Pre Token Generator
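As a workaround, the trigger can be wired up after deployment; a hedged boto3 sketch, where the pool ID and Lambda ARN are placeholders. Note that update_user_pool overwrites settings you omit, so copy the existing pool configuration first:
import boto3

idp = boto3.client("cognito-idp")

idp.update_user_pool(
    UserPoolId="us-east-1_XXXXXXXXX",  # placeholder user pool id
    LambdaConfig={
        # placeholder function ARN for the PreTokenGenerator handler
        "PreTokenGeneration": "arn:aws:lambda:us-east-1:123456789012:function:issueAuthToken",
    },
)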
There are 2 scenarios. Clients pay for their own account: create a cross-account role in each of your customers' accounts that gives your account access to do things in their account. Take a look at this tutorial: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html#tutorial_cross-account-with-roles-3. You will be able to use the cross-account role to gain access to their account from your account by switching to their account from the console. Take a look at the steps here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html You pay for all the clients: in this case you can use AWS Organizations in your account and add your customers' accounts to it. You will also need to create a cross-account role like in scenario 1 so that you have access to do things in their account. This will allow you to have a single consolidated bill for all the accounts while you still get the itemized billing details of each account. Take a look at the tutorial here: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html | I have a software business and different unrelated customers. I manage their servers and other services on their own AWS accounts. Each has its own. I'd like to simplify the management by having a root AWS account for my company, and link different accounts to it with different payment methods. In most cases, clients use their own payment method. What is the best way to achieve this? | Managing clients in AWS
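Once a cross-account role exists in a customer account, it can also be used programmatically; a minimal boto3 sketch, where the account ID and role name are placeholders:
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/CustomerAdminRole",  # placeholder customer account/role
    RoleSessionName="managing-customer-account",
)["Credentials"]

# Clients built from these temporary credentials act inside the customer's account.
customer_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(customer_ec2.describe_instances()["Reservations"])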
"Need to style the login/register controls. How to do it?" If you're using the latest Amplify modules (April 2020) you can customize the look and feel using CSS:
:root {
--amplify-primary-color: #008000;
--amplify-primary-tint: #0000FF;
--amplify-primary-shade: #008000;
}
and you can customize the text:
<AmplifySignIn headerText="My Custom Sign In Header" slot="sign-in" />
<AmplifySignUp headerText="My Customer Sign Up Header" slot="sign-up" />Docs:https://aws-amplify.github.io/docs/js/ui-components#customizationTutorial from scratch:https://aws.amazon.com/blogs/mobile/amplify-framework-announces-new-rearchitected-ui-component-and-modular-javascript-libraries/Migrating to the latest UI packages:https://aws-amplify.github.io/docs/js/ui-components#migration-guide | In the tutorial -https://medium.com/@nickwang_58849/i-got-the-following-error-after-following-the-steps-82757cfaf9f0, it usesamplify-authenticatorof npm package@aws-amplify/authto login and register.auth.component.htmlhas just one line,<amplify-authenticator></amplify-authenticator>auth.component.tsimport { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-auth',
templateUrl: './auth.component.html',
styleUrls: ['./auth.component.css']
})
export class AuthComponent implements OnInit {
constructor() { }
ngOnInit() {
}
}
However, I will need to have a customized login (I will need to add some required attributes when registering, and need to style the login/register controls). How do I do it? Is there any example where the user writes the login UI and just calls the authentication (by Amplify?) to log in to Cognito? | Customize AWS Amplify cognito login/register component?
I think you don't need to copy the Postgres jar to the slaves, as the driver program and cluster manager take care of everything. I've created a dataframe from a Postgres external source in the following way.
Download the Postgres driver jar:
cd $HOME && wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
Create the dataframe:
attribute = {'url' : 'jdbc:postgresql://{host}:{port}/{db}?user={user}&password={password}' \
.format(host=<host>, port=<port>, db=<db>, user=<user>, password=<password>),
'database' : <db>,
'dbtable' : <select * from table>}
df=spark.read.format('jdbc').options(**attribute).load()Submit to spark job:Add the the downloaded jar to driver class path while submitting the spark job.--properties spark.driver.extraClassPath=$HOME/postgresql-42.2.5.jar,spark.jars.packages=org.postgresql:postgresql:42.2.5 | I have existing EMR cluster running and wish to create DF from Postgresql DB source.To do this, it seems you need to modify the spark-defaults.conf with the updatedspark.driver.extraClassPathand point to the relevant PostgreSQL JAR that has been already downloaded on master & slave nodes,oryou can add these as arguments to a spark-submit job.Since I want to use existing Jupyter notebook to wrangle the data, and not really looking to relaunch cluster, what is the most efficient way to resolve this?I tried the following:Create new directory (/usr/lib/postgresql/ on master and slaves and copied PostgreSQL jar to it. (postgresql-9.41207.jre6.jar)Edited spark-default.conf to include wildcard locationspark.driver.extraClassPath :/usr/lib/postgresql/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/$Tried to create dataframe in Jupyter cell using the following code:SQL_CONN = "jdbc:postgresql://some_postgresql_db:5432/dbname?user=user&password=password"
spark.read.jdbc(SQL_CONN, table="someTable", properties={"driver":'com.postgresql.jdbc.Driver'})I get a Java error as per below:Py4JJavaError: An error occurred while calling o396.jdbc.
: java.lang.ClassNotFoundException: com.postgresql.jdbc.DriverHelp appreciated. | Using Postgresql JDBC source with Apache Spark on EMR |
Answering my own question after figuring out the problem.It turns out that the real problem was associated with the way st1 HDD drive works rather than kafka or GC.st1 HDD volume type is optimized for workloads involving large, sequential I/O, and performs very bad with small random IOs. You can read more about ithere.
Although It should have worked fine for just Kafka, but we were writing Kafka application logs to the same HDD, which was adding a lot to the READ/WRITE IOs and subsequently depleting our burst credits very fast during peak time. Our cluster worked fine as long as we had burst credits available and the performance reduced after the credits depleted.There are several solutions to this problem :First remove any external apps adding IO load to the st1 drive as its not meant for those kinds of small random IOs.Increase the number of such st1 parallel drives divide the load.This is easy to do with Kafka as it allows us to keep data in different directories in different drives. But only new topics will be divided as the partitions are assigned to directories when the topic is created.Use gp2 SSD drives as they kind of manage both kinds of loads very well. But these are expensive.Use larger st1 drives fit for your use case as the throughput and burst credits are dependent on the size of the disk.READ HEREThisarticle helped me a lot to figure out the problem.Thanks. | One of our Kafka brokers had a very high load average (about 8 on average) in an 8 core machine. Although this should be okay but our cluster still seems to be facing problems and producers were failing to flush messages at the usual pace.Upon further investigation, I found that my java process was waiting too much for IO, almost 99.99% of the time and as of now, I believe this is a problem.Mind that this happened even when the load was relatively low (around 100-150 Kbps), I have seen it perform perfectly even with 2 Mbps of data input into the cluster.I am not sure if this problem is because of Kafka, I am assuming it is not because all other brokers worked fine during this time and our data is perfectly divided among the 5 brokers.Please assist me in finding the root cause of the problem. Where should I look to find the problem? Are there any other tools that can help me debug this problem?We are using 1 TB mounted EBS Volume on an m5.2x large machine.Please feel free to ask any questions.GC Logs Snapshot | Why is IO 99.99 % even though the Disk Read And write seems to be very small |
Yes, You can use Route53 along with CloudFront for the best results with Alias records (When you purchase your domain with AWS only if you purchased it from outside AWS then you can directly configured/add your CloudFront details there as in this case adding Route53 will increase the number of ip visits.Read More here).CloudFront will distribute your content over 100+ edge location which will decrease your response time with low latency and save your cost as well. It will deliver the content from the nearest location.Route53 will manage your DNS things.CloudFront is more than enough for the delivery of content from the nearest edge location. It will also help you to copy data to multiple edge locations as well.It's like Content Delivery Network(CloudFront) + DNS(Route53).Read this for good understanding.When you create a web distribution, you specify where CloudFront sends requests for the files that it distributes to edge locations. CloudFront supports using Amazon S3 buckets and HTTP servers (for example, web servers) as origins.Route53 is a DNS service and is an origin for data. The term Origin is a term for where the original data resides before it is cached in the CDN (CloudFront). | Can we use CloudFront with Geolocation policy or does CloudFront internally have this feature and can be used alone to satisfy? Or Route53 is a correct option while having the requirement to serve requests from the nearest geo-location for a global website to improve the customer experience.Also, I am not clear whether we can use both CloudFront with Route53 together or not?
Thanks. | AWS Cloudfront with Geolocation policy vs Route53 |
You can use the formatlist function here to format a list. It uses the string format syntax, taking n lists and returning a single list. So in your case you probably want something like:
locals {
azs = [
"a",
"b",
"c",
]
}
output "azs" {
value = "${formatlist("us-east-1%s", local.azs)}"
} | I have a next list:azs = ["us-east-1a", "us-east-1b", "us-east-1c"]And I am using it during subnets creation. In names of subnets I would like to use short names likea, b, cso I need a list["a", "b", "c"]. Obviously I need to generate it dynamically (in locals block for example) whenazswill be set manually.How to create such list with Terraform? | Modify simple list/array in Terraform |
I had this same problem this week and I found a solution for anyone who revisits this question later. Take a look at the example below:response = table.update_item(
Key={'email': email },
UpdateExpression="ADD emails :i",
ExpressionAttributeValues={":i": set([email])},
ReturnValues="UPDATED_NEW"
)This worked for me to either create the string set or append to an existing string set. | I am trying to create an update that either adds the email to the string set if the string set exists or creates a string set with the email if it does not exist.I took some code from this answer:Append to or create StringSet if it doesn't existbut I cant seem to make it work.I end up with the error"errorMessage": "An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: MAP"
}response = table.update_item(
Key={'email':email},
UpdateExpression='ADD emails :i',
ExpressionAttributeValues={
':i': {SS': [email]},
},
ReturnValues="UPDATED_NEW"
)How can I make an update expression that creates a stringset if none exists or adds an item to it if it does? | AWS DynamoDB create update expression - Add new stringset if none exists |
I have been using scripts from several years ago, seems like the aws config has changed since, and I had to review my/etc/awslogs/awslogs.confEspecially, the default state file had changed. The new one beingstate_file = /var/lib/awslogs/agent-state(under /lib/). Previously this file was in a different folder, and therefore did not exist in Amazon Linux 2, hence generating the crash | I am using Awslogs on Amazon Linux 2, but my awslogs agent does not seem to start successfully. I am usingthis documentationWhen I look at the service journalsystemctl -l status awslogsd● awslogsd.service - awslogs daemon
Loaded: loaded (/usr/lib/systemd/system/awslogsd.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2018-12-14 15:04:44 UTC; 1s ago
Process: 32407 ExecStart=/usr/sbin/awslogsd (code=exited, status=255)
Main PID: 32407 (code=exited, status=255)
Dec 14 15:04:44 ip-172-31-47-115.eu-central-1.compute.internal systemd[1]: awslogsd.service: main process exited, code=exited, status=255/n/a
Dec 14 15:04:44 ip-172-31-47-115.eu-central-1.compute.internal systemd[1]: Unit awslogsd.service entered failed state.
Dec 14 15:04:44 ip-172-31-47-115.eu-central-1.compute.internal systemd[1]: awslogsd.service failed.When looking at /var/log/awslogs.log I have2018-12-14 15:02:04,640 - cwlogs.push - INFO - 31514 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to use gzip encoding.
2018-12-14 15:02:04,640 - cwlogs.push - INFO - 31514 - MainThread - Missing or invalid value for queue_size config. Defaulting to use 10
2018-12-14 15:02:04,640 - cwlogs.push - INFO - 31514 - MainThread - Using default logging configuration.
unable to open database filelooping infinitelyAny help ? | Awslogs awslogsd - unable to open database file |
No there is no way to do that when using the HEADER option, because Redshift does not have case sensitive column names. All identifiers (table names, column names etc.) are always stored inlower casein the Redshift metadata.You can optionally set a parameter so that column names are all returned as upper casein the results of a SELECT statement.https://docs.aws.amazon.com/redshift/latest/dg/r_names.htmlASCII letters in standard and delimited identifiers are case-insensitive and are folded to lowercase in the database. In query results, column names are returned as lowercase by default. To return column names in uppercase, set the describe_field_name_in_uppercase configuration parameter to true. | Previously, unload command did not create header row. This functionality is now available with "HEADER" option. However, it does not preserve the case of the headers.The following statement creates a file with header "my column header 1"...UNLOAD ('SELECT col1 "My Column Header 1", col2 "My Column Header 2" FROM mytable;')
TO 's3://mybucket/filename.csv.'
CREDENTIALS 'aws_iam_role=mycredentials'
DELIMITER ','
HEADER
ALLOWOVERWRITE
ADDQUOTES
PARALLEL OFF;Is there a way to preserve case in column headings? | Redshift Unload with case-sensitive headers |
I struggled with the same problem using Pycharm in Intellij.Boto3 couldn't locate the credentials file and I didn't have a default profile set.> os.environ["AWS_SHARED_CREDENTIALS_FILE"]
None
> os.environ["AWS_DEFAULT_PROFILE"]
NoneSolution, I set the variables explicitly> import boto3
> import os
> os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "<full-path>/.aws/credentials"
> os.environ["AWS_DEFAULT_PROFILE"] = "<profile-name>"
> boto3.client('sts').get_caller_identity().get('Account')
1234567890 | I am developing AWS DynamoDb tables in Pycharm. For this I have created a virtual environment with Python 3.6 and installed required libraries like boto3. I have also set my AWS credentials using AWS CLI tool in ~/.aws/credentials file.Problem is when I simply run the code, it works like a charm and is able to read the credentials file. However, when I select to run the code in "Python console", I get the error that credentials have expired. It appears to me that somehow "Python console" is unable to access the ~/.aws/credentials file and is looking somewhere else for credentials. Or boto3 is not accessing the credentials file from ~/.aws/credentials when I select code to run in python console.Can someone guide me as how to set up credentials in Python console so that I can run the code interactively.Thanks, | PyCharm: Why "Python Console" is not accessing ~\.aws\credentials file? How to set it within "Python Console" |
You are right. This is the current behavior of Amazon Cognito Tokens. If you do global signout than youraccessTokenandRefreshTokenwill be expired.But your IdToken will be still valid till 1 hour.If you call the Global SignOut again, Than you will see the message thataccess token is expiredI hope this helps! | I have set up an API Gateway authenticated using AWS Cognito. Once the user signs in, I use the following script to verify their credentials:const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
const params = {
AuthFlow: 'ADMIN_NO_SRP_AUTH',
ClientId: APP_CLIENT_ID,
UserPoolId: USER_POOL_ID,
AuthParameters: {
'USERNAME': username,
'PASSWORD': password,
},
};
return cognitoidentityserviceprovider.adminInitiateAuth(params)
.promise();And this will return a JSON like so:{
"ChallengeParameters": {},
"AuthenticationResult": {
"AccessToken": "....",
"ExpiresIn": 3600,
"TokenType": "Bearer",
"RefreshToken": "....",
"IdToken": "...."
}
}On the client side, I will take note of theIdTokenand include it as a header with a name mentioned in the API Gateway's Authorizer.Now, I'm trying to create a lambda function to sign the user out. So far, I've got this:const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
const params = {
UserPoolId: USER_POOL_ID,
Username: username,
};
return cognitoidentityserviceprovider.adminUserGlobalSignOut(params)
.promise();When I send a request to call this code, even though everything works just fine (no error is thrown), but theIdTokenis still valid and I can still call authenticated requests with it. My question is, what is the proper way of signing out a user and why this is not working? | Cannot sign out the user from AWS Cognito |
The Serverless framework tooling uses AWS CloudFormation for provisioning resources in the AWS cloud. Have you checked theAWS CloudFormation web console? | I used serverless toolkitserverlessto deploy an application and all works fine.After I logged in ASW console and I was looking for a dashboard or something where I can found and manage the deployed application.The question is: Where I can find inside the AWS console the application deployed with serverless toolkit? | Where I can find the application deployed with AWS serverless toolkit |
Firstly, you need to remove default S3 encryption on your bucket.Then, you can use this AWS CLI command that will copy all objects and remove encryption:aws s3 cp s3://BUCKET_NAME/ s3://BUCKET_NAME/ --recursive | I am looking ats3apiand trying to remove encryption on all my S3 objects. Looks like there is no easy way to remove from CLI. From the console I can do select few (multiple) files but it is tedious.Suggestions please.. thank you. | Remove encryption from all s3 objects using CLI |
Default MethodThrottling(like Account Level Throttling) is the total number of requests per second acrosseveryonehitting your API.Client-level limits are enforced withUsage Plans, based on api-keys.For more detailed information about API Gateway throttling checkout:https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.htmlhttps://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html | For a stage belonging to an API in AWS API Gateway I have the option to limit Default Method Throttling. Does this limit thetotal number of requestsper second, or the number of requestsfrom a particular clientper second? | Is API Gateway Default Method Throttling per all requests or per client? |
Here you go, the below worked for me.hal config provider docker-registry account add my-ecr-registry \
--address https://< Your ECR Endpoint> \
--username AWS \
--password-command "aws --region us-east-1 ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'" | AboutHalyard,I want to use--password-commandoption to refresh Amazon ECR Authorization Token. If you have experience of using this option, please teach me.Thank you! | Halyard - How to use --password-command option? |
Technically speaking, uploading a video is just like uploading any binary file (e.g. Image, audio file ... etc). However, because those video files can get too big quickly so I'd suggest you utilize one or more of the following:Multipart upload: Highly recommended if the file is above 100MB. It will help you reach higher throughput, the ability to resume interrupted uploads and pausing and resuming uploads. Read morehere.S3 Transfer Acceleration: Using a CDN users will be uploading to a geographically closer location which will also speed things up for them. Read morehere.Some helpful libraries to checkout:EvaporateJS,react-s3-uploader-multipart | How do I upload video files into S3 bucket using React JS?I am currently developing a React JS application and I have to upload video files into S3 bucket. I searched a lot but I can only find out"Image"uploading part. But I would like to know how to upload video files into S3 bucket. | How to upload video files into an S3 bucket using React |
It is not supported. And agree that the error message is not good.The reason, I think is just that they want to promote another practice:Fanout S3 Event Notifications to Multiple Endpoints | AWS Compute Blog | I have a S3 bucket, to store objects.On the object creation event, I wish to send a specific category of objects to two lambdas, in parallel:my_email_lambdamy_logging_lambdaI set the rule as follows:Rule 1:
Prefix: /my/folder
Suffix:
Send to: lambda
Lambda: my_email_lambda
Rule 2:
Prefix: /my/folder
Suffix:
Send to: lambda
Lambda: my_logging_lambdaWhen I try to do this, I get an error:Configuration is ambiguously defined. Cannot have overlapping suffixes
in two rules if the prefixes are overlapping for the same event type.Why is this ambiguous? I want to send the events to two separate lambdas. If this were amoveoperation, then we could consider this setup to be ambiguous. This is an event notification operation, though. This is not ambiguous. If the operation is unsupported, the error message should state this instead. | Why can I send AWS S3 bucket events to only one AWS lambda? |
I'm assuming that "RDS hostname" is your RDS endpoint?You can add to your EC2 Userdata, like the code below. I'm not very used to linux, so not sure if this would be the way to set your environment variable, but you get the idea.Resources:
Rds:
Type: 'AWS::RDS::DBInstance'
Properties:
...
Ec2:
Type: 'AWS::EC2::Instance'
Properties:
...
UserData: !Base64
'Fn::Sub':
- |-
<script>
export DB_CONNECTION="${RdsEndpoint}"
</script>
- { RdsEndpoint: !GetAtt Rds.Endpoint.Address }UpdateIn this particular case, you need to use the long syntax ofFn::Sub, since your reference needs to use theFn::GetAtt. If the information you wanted was retrieved by a simpleFn::Ref, you could use the short syntax:UserData: !Base64
'Fn::Sub':
<script>
export DB_CONNECTION="${Rds}" # <-- this will get the DBInstanceIdentifier
</script>Update 2: as pointed out by Josef, you can still use the short syntax, regardless if the source is !Ref or !GetAtt. So this is valid:UserData: !Base64
'Fn::Sub': |-
<script>
export DB_CONNECTION="${Rds.Endpoint.Address}"
</script> | I have a CloudFormation template that creates both RDS and EC2 under the same stack. My problem is, how do I get the RDS hostname into one of my environment variables inside my EC2, without having to install AWS cli and adding credentials? | How do I get the hostname of RDS instance into an Environment variable in EC2? |
Whenusing an SQS Queue as a Lambda event source, a component of the Lambda service actually polls the queue and passes the message payload to the function invocation, in an arrayevent.Records, which will contain one or more messages from the queue. The messages are temporarily invisible in the queue (they are "in flight").You don't need to interact directly with SQS in this application.You process the messages and exit the Lambda function successfully and all the messages just given to you are automatically deleted from the queue by the Lambda poller.If an exception is thrown, all the messages you were just handed are set back to being visible in the queue. | I have the following code in lambda to receive SQS messages:
When I inject a message into SQS, the lambda triggers, but saysdata.Messagesis null.function receiveMessages(callback)
{
var params = {
QueueUrl: TASK_QUEUE_URL,
MaxNumberOfMessages: 2,
WaitTimeSeconds: 1,
AttributeNames: ["All"]
};
SQS.receiveMessage(params, function(err, data)
{
if (err)
{
console.error(err, err.stack);
callback(err);
}
else if (data.Messages == null)
{
console.log("null message", data);
callback(null,null);
}
else
{
callback(null, data.Messages);
}
});
}It is not obvious what I might be doing wrong. I tried both a fifo and a non-fifo queue | SQS ReceiveMessage succeeds but gets a null message |
Assume_role_policy don't accept the aws policy json files.So the above code is not working.For detailed explanation of assume_role_policy in aws_iam_role, see thisthread.Update the code as shown below and execute.variable policy_arn{
default = "arn:aws:iam::aws:policy/service-role/AWSLambdaRole"
}
resource "aws_iam_role" "edb_role" {
name = "edb_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": ["ec2.amazonaws.com" ]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "test-attach" {
role = "${aws_iam_role.edb_role.name}"
policy_arn = "${var.policy_arn}"
}
output "role" {
value = "${aws_iam_role.edb_role.name}"
}Here, we are using the AWSLambdaRole Policy present in Policies section of IAM.Add multiple policies to a role usingaws_iam_role_policy_attachUse the default policies provided by aws as show above. Else to create a new policy, see the docshere | I would like to create a aws_iam_role with terraform but after runningterraform applyI get the following error message:aws_iam_role.role: Error Updating IAM Role (edb_eb_role) Assume Role Policy: MalformedPolicyDocument: Has prohibited field ResourceThat is my policy:resource "aws_iam_role" "role" {
name = "edb_eb_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
},
{
"Action": [
"logs:*"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"*"
]
}
]
}
EOF
}What did I wrong? I also tried to do it only with Principals but then I get the message that "Principals" is also not prohibited? | How to create roles in terraform |
The correct value to pass isCOGNITOclient.update_user_pool_client(
UserPoolId=USER_POOL_ID,
ClientId=user_pool_client_id,
SupportedIdentityProviders=[
'COGNITO'
]
)I only discovered this by reviewing source code of someone else CloudFormation Custom resourcehttps://github.com/rosberglinhares/CloudFormationCognitoCustomResources/blob/master/SampleInfrastructure.template.yaml#L105I can not find the correct soluion to this from offical AWS Docs/Boto3 docs. If anyone knows where the possible values forSupportedIdentityProvidersare documented please comment. | There is a setting I want to change via Python SDK reguarding AWS Cognito. I can change the setting in the AWS Web Console via "Cognito -> User Pools -> App Client Settings -> Cognito User Pool" (See image)Here is my codeclient = boto3.client('cognito-idp')
client.update_user_pool_client(
UserPoolId=USER_POOL_ID,
ClientId=user_pool_client_id,
SupportedIdentityProviders=[
'CognitoUserPool'
]
)The error I am receiving isAn error occurred (InvalidParameterException) when calling the
UpdateUserPoolClient operation: The provider CognitoUserPool
does not exist for User Pool xxxxxxIt is unclear what string values I should pass forSupportedIdentityProviders. The only hint I have seen is fromhttps://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-idp-settings.html--supported-identity-providers '["MySAMLIdP", "LoginWithAmazon"]'I am not even 100% sure if theSupportedIdentityProvidersrelates to the setting I am trying to change, but can't find any clarification in the docs. | Change AWS Cognitio "Enabled Identity Providers" via Python SDK |
The error messageUnable to launch provisioned product because: No launch paths found for resourceisn't super helpful. It can mean any of the following:The product doesn't existThe provisioning artifact doesn't existThe product exists but it's in a failed stateYou don't have access to the productYou don't have access to the product's portfolioThe product isn't associated with a portfolioThe launch path does not existSince the error message is not helpful, it doesn't tell you which of these are to blame.To see how unhelpful the error message is, try this for fun:% aws servicecatalog provision-product --provisioned-product-name no --product-id nope --provisioning-artifact-id nopity-nope
An error occurred (ResourceNotFoundException) when calling the ProvisionProduct operation: No launch paths found for resource: nopeSome pointers to getting it to work:Associate the product to a portfolio.Associate a principal that is or includes you to the portfolio.Make sure the product is properly created by not usingDisableTemplateValidation. When you create the product, you'll get an error if the template has an error.Try describing the provisioning artifact to make sure it exists.Try describing the product. If you can describe the product, it exists, and you have access. You should see a launch path as part of the product description. If you can describe the product but it doesn't have a launch path, I suspect the template is bad. | I'm using Javascript SDK of AWS to access Service Catalog in my Lambda function.https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ServiceCatalog.html#provisionProduct-propertyI have successfully created portfolio and product and attached the product to this portfolio. When I try to provision the product it throws the error "No launch path is found". To get launch path list I hit the listLaunchPath API and it returns empty array with message "No launch path found for this product"I have explored AWS Docs in detail but did not find any way to set launch path.
Can anybody guide me how to create and get a launch path for a product in AWS service Catalog? | How to add or get a launch path to a product in AWS Service Catalog using Javascript sdk |
I found out that this issue resolved after a few hours by itself. | I am trying to upload an image to my Amazon S3 bucket. But I keep getting this CORS error, even though I have set the CORS configuration correctly.
This is my CORS configuration:<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>http://localhost:3000</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>I would appreciate the help. | response for preflight is invalid (redirect) for aws s3 |
As you've seen S3 is optimised towards getting an object that you already know the path of, rather than listing an querying files. In fact the listObjects API is not massively stable during iteration and you're likely to miss files in large sets if they're added before you started the query.Depending on the number of buckets you have, a way round this would be to use lambda triggers on S3 events:S3 automatically raises s3:ObjectCreated event and invokes lambdaLambda sets "LastUpdate" attribute for that bucket's entry in DynamoDbEvery 20 minutes (or so) you query/scan the Dynamo table to see when the latest update is.Another solution would be to enable CloudWatch monioring on the bucket:https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.htmlYou could then sum thePutRequestsandPostRequestsmetrics over the last two hours (you can fetch cloudwatch metrics this programmatically using boto3) to get an indication of updates (although, your count is only likely to be accurate if files are written once and never edited). | I need to create a monitoring tool, that checks buckets (with 1000+ files each) for new objects, created in last two hours, and if the objects were not created, sends a message.
My first idea was to create a lambda function, that runs every 20 minutes. So I've created python3 + boto3 code:import boto3
from datetime import datetime,timedelta
import pytz
import sys
s3 = boto3.resource('s3')
sns = boto3.client('sns')
buckets = ['bucket1', 'bucket2', 'bucket3']
check_fail = []
def check_bucket(event, context):
time_now_UTC = datetime.utcnow().replace(tzinfo=pytz.UTC)
delta_hours = time_now_UTC - timedelta(hours=2)
for bucket_name in buckets:
bucket = s3.Bucket(bucket_name)
for key in bucket.objects.all():
if key.last_modified >= delta_hours:
print("There are new files in the bucket %s" %bucket)
break
else:
check_fail.append(bucket)
if len(check_fail) >= 1:
sns.publish(
TopicArn='arn:aws:sns:us-east-1:xxxxxxxxxxxxxx:xxxxxx',
Message="The following buckets didn't receive new files for longer than 2 hours: %s" %check_fail,
Subject='AWS Notification Message' )
else:
print("All buckets have new files")This approach is not working, due to the high number of objects inside every bucket. Checking by "key.last_modified" is taking too long.Does anyone have an idea on how I can achieve this?Thank you! | Check S3 bucket for new files in last two hours |
In a nutshell:map_valuesis your friend.Let's suppose your template is in the file template.json. Then the following script will perform the specified transformation:#!/bin/bash
# As far as this example is concerned,
# there is no need to export any variables
AWS_DEFAULT_REGION=us-east-2
AWS_ACCOUNT_ID=123456789012
ARN_PREFIX="arn:aws:sns:${AWS_DEFAULT_REGION}:${AWS_ACCOUNT_ID}:"
jq --arg prefix "$ARN_PREFIX" '
.snstopic |= map_values($prefix + .)
' template.jsonExampletemplate.json{
"snstopic": {
"topic-project1": "team-project1-dev",
"topci-project2": "team-project2-dev"
}
}Output:{
"snstopic": {
"topic-project1": "arn:aws:sns:us-east-2:123456789012:team-project1-dev",
"topci-project2": "arn:aws:sns:us-east-2:123456789012:team-project2-dev"
}
} | I can work out how to use jq to replace a value from a variable,$ jq -n --arg name bar '{"name":$name}'
{
"name": "bar"
}But I am not sure how to replace multiple values.{
...
"snstopic": {
"topic-project1": "team-project1-dev",
"topci-project2": "team-project2-dev",
... (different json files have different number of sns topics)
},
...
}I set these environment variables:$ export AWS_DEFAULT_REGION=us-east-2
$ export AWS_ACCOUNT_ID=123456789012
$ export ARN_PREFIX="arn:aws:sns:${AWS_DEFAULT_REGION}:${AWS_ACCOUNT_ID}:"I want to get output as below{
...
"snstopic": {
"topic-project1": "arn:aws:sns:us-east-2:123456789012:team-project1-dev",
"topci-project2": "arn:aws:sns:us-east-2:123456789012:team-project2-dev",
... (different json files have different number of sns topics
},
...
}How to add it in all matched keys in.snstopic? | replace values with variables - jq |
The CNAME should point to the CloudFront endpoint (*.cloudfront.net) rather than the API Gateway endpoint (*.execute-api.[region].amazonaws.com).The CloudFront endpoint can be found by going to API Gateway -> Custom Domain Names. A CloudFront domain should be listed under "Target Domain Name". | I'm trying to build a serverless app with AWS. My API is working fine, but my custom domain is not. I'm receiving a 403 forbidden answer. This is how it's configured my custom domain:And then I'm using the Target URL provided by this Custom Domain in Route 53 as CNAME. How can I fix this? | Receving 403 forbidden from Custom Domain in AWS Api Gateway |
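The target domain can also be looked up programmatically; a hedged boto3 sketch, where the custom domain name is a placeholder:
import boto3

apigw = boto3.client("apigateway")
resp = apigw.get_domain_name(domainName="api.example.com")  # placeholder custom domain
# Point the Route 53 CNAME/alias at this CloudFront distribution domain, not the execute-api URL.
print(resp.get("distributionDomainName") or resp.get("regionalDomainName"))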
The AWS SDK is already promisified. If you want to use the 8.10 runtime and a try/catch block, then simply use the following snippet:
async readData()
{
    const params =
    {
        FunctionName: "MyFunctionName",
        InvocationType: "RequestResponse",
    };
    try
    {
        const lambdaInvokeResp = await lambda.invoke(params).promise();
        // if succeeded
        // handle your response here
        // example
        const lambdaRespParsed = JSON.parse(lambdaInvokeResp.Payload);
        const myData = JSON.parse(lambdaRespParsed.body);
        return myData;
    }
    catch (ex) // if failed
    {
        console.error(ex);
    }
} | I'm attempting to call an AWS Lambda function from another Lambda function using the invoke method with a RequestResponse invocation type, and to retrieve a value returned by that Lambda. When I call lambda.invoke using await, the callback still appears to be called asynchronously. I'd like the values I need to be available on the next line of code, hence the synchronous requirement. However, in the logs for the code below I see the "Data out of Callback" entry occur prior to the "Data in Callback" entry, with a 0 value out of the callback and a correct value in the callback. If anyone could help me understand how to accomplish this I would greatly appreciate it! Here's the code:
async readData() {
  let myData = [];
  const params = {
    FunctionName: "MyFunctionName",
    InvocationType: "RequestResponse",
  };
  await lambda.invoke(params, (error, data) => {
    if (error) {
      console.log("Got a lambda invoke error");
      console.error(error);
    } else {
      let response = JSON.parse(data.Payload);
      myData = JSON.parse(response.body);
      console.log("Data in Callback: " + myData.length);
    }
  });
  console.log("Data out of Callback: " + myData.length);
}
Thanks,
Chris | Get Value Back from AWS lambda.invoke synchronously
One way is to check the status of the file system using the DescribeFileSystems API. In the response, look at the LifeCycleState; if it is 'available', fire the CreateMountTarget call. You can keep calling DescribeFileSystems in a loop, with a few seconds' delay, until the LifeCycleState is 'available' (a short sketch of that loop follows this question). | I am writing a script that will create an EFS file system with a name from input. I am using the AWS SDK for PHP Version 3.
I am able to create the file system using the createFileSystem command. This new file system is not usable until it has a mount target created. If I run the CreateMountTarget command after the createFileSystem command, then I receive an error that the file system's life cycle state is not in the 'available' state.
I have tried using createFileSystemAsync to create a promise and calling the wait function on that promise to force the script to run synchronously. However, the promise is always fulfilled while the file system is still in the 'creating' life cycle state.
Is there a way to force the script to wait for the file system to be in the available state using the AWS SDK? | AWS EFS - Script to create mount target after creating the file system
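To make the EFS answer above concrete, here is a minimal sketch of the poll-then-mount pattern. It uses Python's boto3 for brevity, but the PHP SDK exposes the same DescribeFileSystems / CreateMountTarget operations; the subnet and security group IDs are placeholders:

import time
import boto3

efs = boto3.client('efs')

def create_mount_target_when_ready(file_system_id, subnet_id, security_group_id):
    # Poll until the file system leaves the 'creating' state
    while True:
        fs = efs.describe_file_systems(FileSystemId=file_system_id)['FileSystems'][0]
        if fs['LifeCycleState'] == 'available':
            break
        time.sleep(5)

    # Now the mount target can be created without the lifecycle-state error
    return efs.create_mount_target(
        FileSystemId=file_system_id,
        SubnetId=subnet_id,
        SecurityGroups=[security_group_id])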
You can find the full answer in this source.
The short version: the kubelet's pod capacity is set by default because each instance type has a maximum number of IP addresses per network interface, and the VPC CNI assigns every pod its own IP (the standard formula is sketched after this question). | I have configured EKS on AWS with 4 nodes. When deploying my application, I've noticed that some pods cannot be set up because of insufficient resources (getting the error 0/4 nodes are available: 4 Insufficient pods.) When looking into the k8s dashboard, I've noticed that only 10% of memory is used (see picture). I've used this guide in order to set things up.
How can I increase this limit and make my nodes run at full capacity? | AWS EKS Nodes memory / CPU limits are low (10%) [closed]
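For reference, with the default AWS VPC CNI the per-node pod cap follows max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2, since every pod gets a VPC IP address. A quick back-of-the-envelope check (the m5.large figures below are an illustrative example of the published per-instance-type limits):

def max_pods(enis, ips_per_eni):
    # Each ENI's primary IP isn't handed to pods; the +2 covers host-networking pods
    # such as aws-node and kube-proxy.
    return enis * (ips_per_eni - 1) + 2

# e.g. an m5.large supports 3 ENIs with 10 IPv4 addresses each
print(max_pods(3, 10))  # 29 pods per node

So a node can look nearly idle on memory and CPU yet still refuse new pods once it runs out of assignable IPs; larger instance types or more nodes raise the limit.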
When I first asked this question, AWS had the ability to record metrics with the Swift SDK, but not view them in the Pinpoint API, which is absurd, because then you can only record metrics. What's the point? I asked in the AWS forums, and a couple of months later, they responded something along the lines of "Please wait - coming soon." This feature is now available, whereas before it simply wasn't.
Go to Pinpoint, your project, then click the Analytics drop-down menu, then click Events. You can see that you can sort by metric. If you look at my outdated screenshot above, you'll see that this was not an option. | It is clear from the documentation that I can add custom metrics for a custom event. How do I view these metrics in the Pinpoint console? From the Pinpoint console, it is obvious how to view attributes. I can go to Analytics > Events, select my custom event, and narrow down the events to whatever attributes I desire. I am asking about how to view metrics. To be clear, these differ by being continuous values, whereas attributes are discrete. The documentation says that I can do this. See below how I can filter by attributes manually (attribute is circled).
See the docs on custom events here: https://docs.aws.amazon.com/pinpoint/latest/developerguide/integrate-events.html
Similarly, creating a funnel only allows filtering for attributes. How can I filter for metrics?
Thank you for your time! | AWS Pinpoint: How to view custom metrics
Have you tried annotations like in this example?
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80 | I'm running my workloads on the AWS EKS service in the cloud. I can see that there is no default Ingress Controller available (as there is for GKE), so we have to pick a 3rd-party one.
I decided to go with Traefik. After following the documentation and other resources (like this), I feel that using Traefik as the Ingress Controller does not create a LoadBalancer in the cloud automatically. We have to go through it manually to set everything up.
How can I use Traefik to work as the Kubernetes Ingress the same way other Ingress Controllers work (i.e. Nginx etc.) that create a LoadBalancer, register services, etc.? Any working example would be appreciated. | Traefik Ingress Controller for Kubernetes (AWS EKS)
Simply redirect the output of the command into a file using the ">" operator. The file does not have to exist beforehand (and in fact will be overwritten if it does exist).
aws s3 sync . s3://mybucket > log.txt
If you wish to append to the given file then use the ">>" operator.
aws s3 sync . s3://mybucket >> existingLogFile.txt
To test this command, you can use the --dryrun argument to the sync command:
aws s3 sync . s3://mybucket --dryrun > log.txt | When I run AWS S3 SYNC "local drive" "S3bucket", I see a bunch of logs getting generated on my AWS CLI console. Is there a way to direct these logs to an output/log file for future reference?
I am trying to schedule a SQL job which executes the PowerShell script that syncs backups from a local drive to an S3 bucket. Backups are getting synced to the bucket successfully. However, I am trying to figure out a way to direct the sync progress to an output file. Help appreciated. Thanks! | How to redirect AWS S3 sync output to a file?
This may be due to the permissions on the user. I had a similar issue, but with .NET: I could add the tags but then I could not view them.
I later found that to add tags the user must have the s3:PutObjectTagging permission, but to view the added tags the user must also have the s3:GetObjectTagging permission.
Basically you want to confirm that you have both of these permissions for the user. Hope this helps.
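One way to confirm whether the tags were actually written (and that the caller is allowed to read them) is to call GetObjectTagging on the uploaded key. A small sketch using Python's boto3 against the bucket/key from the question below (the JavaScript SDK's getObjectTagging call behaves the same way):

import boto3

s3 = boto3.client('s3')
response = s3.get_object_tagging(Bucket='examplebucket', Key='HappyFace.jpg')

# Expect something like [{'Key': 'key1', 'Value': 'value1'}, {'Key': 'key2', 'Value': 'value2'}]
print(response['TagSet'])

If this call fails with AccessDenied while the upload itself succeeded, the missing s3:GetObjectTagging permission is the likely culprit; an empty TagSet would instead point at the Tagging parameter not being applied.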
| I am trying to add Tags while uploading to Amazon S3 with the putObject method. As per the documentation I have created Tagging as a String type. My file got uploaded to Amazon S3, but I am unable to see the object-level Tags of the file object with the supplied tag data.
Following code sample as per documentation:
var params = {
  Body: <Binary String>,
  Bucket: "examplebucket",
  Key: "HappyFace.jpg",
  Tagging: "key1=value1&key2=value2"
};
s3.putObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
}); | Amazon s3 putObject Tagging is not working
The event from CodePipeline does not contain the CodeBuild logs, so you can't pass these through to your email without something in the middle.
A solution could be to have your CloudWatch event target a Lambda function which looks up the logs via the CodeBuild / CloudWatch Logs API. It can then generate the email including the logs and send the notification via SNS.
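A minimal sketch of such a middle-man Lambda. This assumes it is triggered by a CodeBuild "Build State Change" CloudWatch event (whose detail carries the build ID, unlike the pipeline-level event), and the SNS topic ARN is a placeholder:

import boto3

codebuild = boto3.client('codebuild')
logs = boto3.client('logs')
sns = boto3.client('sns')

TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:build-notifications'  # placeholder

def handler(event, context):
    status = event['detail']['build-status']
    # The event carries the build ARN; batch_get_builds wants the "project:uuid" id
    build_id = event['detail']['build-id'].split('/')[-1]

    build = codebuild.batch_get_builds(ids=[build_id])['builds'][0]
    log_info = build.get('logs', {})

    tail = ''
    if log_info.get('groupName') and log_info.get('streamName'):
        log_events = logs.get_log_events(
            logGroupName=log_info['groupName'],
            logStreamName=log_info['streamName'],
            limit=50,
            startFromHead=False)
        tail = ''.join(e['message'] for e in log_events['events'])

    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject='CodeBuild %s: %s' % (build['projectName'], status),
        Message='Build %s finished with status %s.\n\nLast log lines:\n%s' % (build_id, status, tail))

The SNS topic then delivers the message, including the log tail, to the subscribed email address.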
| I am trying to add some notifications to my Pipeline in AWS.
I have a build stage where I use AWS CodeBuild, and I want to receive an email whenever the build fails. I have followed the tutorial that Amazon offers and it works fine to notify me about the failure, but I can't seem to find how to add the logs to the email.
I have created the following CloudWatch Event Rule, which monitors the execution of the entire Pipeline.
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Pipeline Execution State Change"
],
"detail": {
"state": [
"FAILED",
"SUCCEEDED",
"CANCELED"
],
"pipeline": [
"Pipeline_Trial"
]
}
}
Can anyone help me figure out how to add the logs to this rule? | AWS CodePipeline Notifications
Depending on the EB stack, activating TLS does not automatically mean that it is being activated on the backend as well. You have two options:
Terminate at the load balancer
User Agent <-- HTTPS --> Elastic Load Balancer <-- Plain HTTP --> Backend
Just change the "instance protocol" setting from HTTPS to HTTP. This terminates the TLS connection on the load balancer and talks unencrypted HTTP with the backend instance. If that fits your security demands, that would be the easiest and quickest solution, because you don't need to adjust your application.
Terminate at the backend instance
User Agent <-- HTTPS --> Elastic Load Balancer <-- HTTPS --> Backend
In this case, you have to provide an HTTPS listener within your application stack. The stack is partially provided by AWS and depends on your deployment platform, so I would like to forward you to the official docs as they contain the best practices: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html | I'm trying to set up HTTPS on my EB instance, which is running a Django app. It currently works with HTTP, but with HTTPS it times out. I've been through every step I thought I needed to:
Created a self-signed certificate with the name being the domain (myapp123.vdfb.eu-central-1.elasticbeanstalk.com) and uploaded it to the Certificate Manager
Set up port 443 on the Load Balancer:
Added a rule on the security group attached to the EC2 instance:
Also added a rule on the security group attached to the Load Balancer:
Also added these lines to my settings file in the Django app:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = TrueAfter all of this, it still keeps not resolving when I try to access it withhttps://....
What am I missing? | Elastic Beanstalk instance not responding with https |
The allocation is for each container.
Containers are not used by more than one invocation at any given time -- that is, containers are reused, but not concurrently (and not by more than one version of one function).
If your code is leaking memory, this means subsequent but non-concurrent invocations spaced relatively close together in time will be observed as using more and more memory, because they are running in the same container... but this would never happen in the scenario you described, because the second invocation at T+50 would never share the container with the 100-second process started at T+0. | In AWS Lambda, is the RAM allocated for a Lambda for one instance of that Lambda or for all running instances of that Lambda? Until now I believed it's for each instance.
Let's consider a Lambda 'testlambda' and I am configuring it to have a 5-minute timeout and 3008 MB (the current max) RAM, with the "Use unreserved account concurrency" option selected.
At T one instance of 'testlambda' starts running; assume that it is going to run for 100 seconds and use 100 MB of RAM while it is running (for the whole 100 seconds). If one more instance of 'testlambda' starts at T+50s, how much RAM will be available for the second instance: 3008 MB or 2908 MB?
I used to believe that the second instance would also have 3008 MB. But after seeing the recent execution logs of my Lambda, I am inclined to say that the second instance will have 2908 MB. | Lambda instance ram allocation
I ended up with the following function (with the help of someone on the Alexa Slack):
function countDown(numSeconds, breakTime) {
  return Array.apply(null, {length: numSeconds})
    .map((n, i) => { return `<say-as interpret-as="cardinal">${numSeconds-i}</say-as>` })
    .join(`<break time="${breakTime ? breakTime : 0.85}s" />`) + `<break time="${breakTime ? breakTime : 0.85}s" />`;
} | I want to be able to have Alexa (audibly) count down 15 seconds in my skill. I know I can just use <break time="15s" /> in SSML, but that isn't audible. I also know I can just do:
15<break time="1s" />
14<break time="1s" />
or, better yet (to account for the time it takes to say the number):
15<break time="0.85s" />
14<break time="0.85s" />
But that's going to be a ton of repeated code if I do this many times over. So I'm probably going to write a function that takes in a number and a number of seconds, and produces an SSML countdown in that interval. Before I do that, however, I was wondering if there's a proper, built-in way of doing this? Or if someone has a function they've already built for this? Thanks!!! | How to make Alexa countdown in seconds
As far as enabling mod_ssl goes, you'll have to download the module and load it, as it doesn't come preinstalled on Amazon Linux. I add mod24_ssl in a config file under .ebextensions:
packages:
  yum:
    mod24_ssl: []
This should install mod_ssl.so under /etc/httpd/modules/, and IIRC there should be an existing file /etc/httpd/conf.modules.d/00-ssl.conf that will run LoadModule ssl_module modules/mod_ssl.so
Not sure if mod_ssl is the root cause of your issue, but this will load mod_ssl at least. I actually had different issues with HTTP 408s before (not from production traffic, but apparently from unused connections the load balancer keeps open) and it resolved itself by updating the Apache server, based on advice from here: https://forums.aws.amazon.com/thread.jspa?messageID=307846 | I deployed a web app built with Laravel on Amazon's Elastic Beanstalk. After setup, I tried accessing the page but I got an HTTP 408 error. I set up the load balancer to listen on ports 80 and 443, and there is also a certificate attached to port 443.
I accessed the log and found this: mod_ssl does not seem to be enabled. I have tried searching for solutions but have yet to find anything similar.
Any help will do. Thanks | mod_ssl does not seem to be enabled ElasticBeanStalk
There are a few reasons to use multiple shards in a Kinesis stream.
The primary one is throughput. There are limits on how much data you can write to (or read from) a shard, as well as how many write operations you can perform per second. If your stream has a higher incoming rate, you have no choice but to use more shards.
Another use case is what you pointed out: partitioning events based on some parameter, maybe because you want to use different consumers, or maybe because you deem some events to have higher priority than others (a short partition-key example follows this question).
Having multiple producers is not a reason to have multiple shards. Race conditions do not happen. Just be aware of your total incoming throughput. | I have checked that, while pushing records, if we have 2 shards (say shard1 and shard2) and two different producer Lambdas, we can use the partition key attribute to put to different shards. I have a few questions:
If multiple publishers, say two Lambdas, are pushing to a Kinesis stream with a single shard, will it cause any race condition? Is it possible for two different sources to push to a single shard?
Which one is recommended: different shards for each producer, or a single shard for multiple producers? | Amazon kinesis multiple publishers
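To illustrate the partitioning point from the answer above, here is a small boto3 sketch (the stream name and the choice of partition key are placeholders). Records that share a partition key always hash to the same shard, and several producers can safely call PutRecord on the same stream concurrently:

import json
import boto3

kinesis = boto3.client('kinesis')

def publish(event):
    kinesis.put_record(
        StreamName='my-stream',
        Data=json.dumps(event).encode('utf-8'),
        # e.g. the producing Lambda's name, or any attribute you want to group by
        PartitionKey=event['source'])

With a single shard every partition key maps to that one shard, which is still perfectly safe; it just caps the total throughput.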
ec2:describeInstances
ec2:describeTags
ec2:describeVolumes
See: Permissions Required to Use the CloudWatch Console
Or just attach the AWS managed policy CloudWatchReadOnlyAccess – Grants read-only access to CloudWatch. | I'm using a CloudWatch dashboard to monitor EC2 instances. The Name tag for an EC2 instance is shown beside the instance id in the charts when I log in as the root user.
However, when I log in as a user with only list and read permissions for CloudWatch and EC2, the charts' legends appear without the EC2 instance Name tag. It's very frustrating to switch back and forth between the EC2 management console and the CloudWatch dashboard, mapping instance ids to EC2 Name tags.
I suspect that I need to add more permissions to the user, but cannot figure out exactly what permissions are needed. | AWS CloudWatch Dashboard: how to show EC2 instance name
Even though RDS hosts the mysql database, you still need the appropriate packages to talk to the database, such as php-mysql. In addition, the mysql package is just the client, whereas mysql-server would actually install the server service, which you don't need when using RDS. You can safely install mysql and php-mysql and likely achieve what you are looking for. | We have AWS Amazon Linux EC2 instances that connect to separate AWS RDS instances running MySQL.
We want to run the mysql command on the RDS instances to process a number of large SQL files that contain thousands of SQL statements. However, we need to be able to run these commands from a PHP application installed on the EC2 instances.
Is this possible and how can it be done?
For testing, we installed MySQL on the same machine as the PHP application, and were able to successfully run the mysql command to query both the localhost MySQL instance as well as the remote MySQL instances on the AWS RDS instances.
However, we're not sure how to do this when MySQL isn't installed and the mysql command isn't available on the EC2 instances.
Any advice is greatly appreciated. Thank you. | How do I run the mysql command from the command line of an AWS RDS instance that is separate from the EC2 instance I am on?
When you run code in Lambda, the handler has the following syntax:
def handler_name(event, context):
    # paste your code here
    return some_value
The function name must match the handler configured for your Lambda. With the default Handler value of lambda_function.lambda_handler, the file lambda_function.py must define a function named lambda_handler. In your case, try the following:
import boto3

def lambda_handler(event, context):
    ec2_client = boto3.client('ec2')
    ec2_client.create_tags(Resources=['i-01d90bb1c3a45708b'], Tags=[{'Key': 'Testing', 'Value': 'TestingBySwamy'}])
Refer: Lambda Function Handler (Python) - AWS Lambda | Below is my code and the error I am getting. I need help.
import boto3
ec2_client=boto3.client('ec2')
ec2_client.create_tags(Resources=['i-01d90bb1c3a45708b'], Tags=[{'Key':'Testing', 'Value':'TestingBySwamy'}])
Response:
{
  "errorMessage": "Handler 'lambda_handler' missing on module 'lambda_function'"
}
Handler 'lambda_handler' missing on module 'lambda_function': module 'lambda_function' has no attribute 'lambda_handler' | "Handler 'lambda_handler' missing on module 'lambda_function'"