Response | Instruction | Prompt
---|---|---|
You can use the sequence function, which supports dates and timestamps:

sequence(
  current_date,                      -- some start date
  current_date + interval '3' day,   -- some end date
  interval '1' day)                  -- step

And then use unnest, which will flatten the generated array:

select t.date
from (select sequence(current_date, current_date + interval '3' day, interval '1' day) dates),
unnest(dates) as t(date);

Output:

date
2022-09-26
2022-09-27
2022-09-28
2022-09-29
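Adapted to the range asked about in the question, a minimal sketch assuming the 2022 calendar year (the column alias is arbitrary):

select t.date
from (select sequence(date '2022-01-01', date '2022-12-31', interval '1' day) dates),
unnest(dates) as t(date);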
|
I need to generate a list of dates between two dates (a start date and an end date), in my case from 1/1/2022 to the end of the year, on AWS Athena without creating a table; I need only a query/view.
The output should be:

|date|
|--|
|1/1/2022 |
|2/1/2022 |
|3/1/2022 |

etc., up to a specific date.
|
How to generate a list of dates on AWS Athena
|
Solution: the name in the pip install command appears to be different:

pip install awsiotsdk

And now import awsiot works. Is it normal for these two commands to use different names? I used pipreqs . and pip install -r requirements.txt initially, which makes the same mistake and assumes pip install awsiot is what it needs to do.
|
I've looked up many versions of this problem, and I don't think I'm falling for any obvious pitfalls, even using a virtual environment. Starting to wonder if there's something weird with this particular package.

python3 -m venv venv
source venv/bin/activate
pip install awsiot
ls venv/lib/python3.10/site-packages/ | grep awsiot
>>> awsiot-0.1.3.dist-info
python -c "import sys; print(sys.path)"
>>> ['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/home/lexfridman/venv/lib/python3.10/site-packages']

It's installed, and it's in a directory that sys.path knows about, in a fresh venv. Yet:

python -c "import awsiot"
>>> Traceback (most recent call last):
>>> File "<string>", line 1, in <module>
>>> ModuleNotFoundError: No module named 'awsiot'

The same process yields the same result on a different Linux machine. What could be causing this? Can anyone recreate it?
|
Python "No module named 'awsiot'" despite being installed
|
Access Keys used by this IAM user were compromised a month ago (accidentally uploaded to GitHub), which resulted in AWS putting the account in "quarantine". Included in this "quarantine" was an explicit deny on creating new IAM roles, which you need to remove yourself.
|
Sorry, I'm a bit of a klutz at AWS. I have an IAM user that belongs to a group with the 'AdministratorAccess' policy attached to it. I further verified that this policy includes full access to IAM. However, when I am logged in under that IAM user and try to create a role for my Redshift cluster so that it can load S3 data, I get the following error:

User: arn:aws:iam::xxxxxxx is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::xxxxxxx:role/redshift-s3-reader with an explicit deny in an identity-based policy

Can someone help me out? I don't see why there would be an explicit deny clause with an all-access policy like AdministratorAccess.
|
explicit deny on AdministratorAccess. AWS
|
Note that the Network Interface and corresponding IP are on the individual Task(s) running in the ECS service, not the Service itself. There's no way to preserve the network interface/IP of the individual tasks within the service. Those IP addresses are subject to change any time new ECS Tasks are started, which may be due to an update to the service, or due to auto-scaling events, or due to ECS replacing a failed task.

If you have something outside the VPC that needs to connect to the ECS service via a static IP, then you need to place a Network Load Balancer in front of the ECS service, and assign an Elastic IP to the load balancer. All incoming requests would then be sent to the Elastic IP.

If you have something outside the VPC that the ECS service is connecting to, and it is restricting those connections by IP address, then you need to place the ECS service in private subnets, with routes to a NAT Gateway, and assign an Elastic IP to the NAT Gateway. All outbound requests would then appear to be coming from the Elastic IP assigned to the NAT Gateway.
|
Using Jenkins, I am configuring CI/CD. We push the Docker image to AWS ECR, and then update the AWS ECS service with the AWS CLI command below. The network interface and IP change when updating. Is there a way to keep them fixed?

aws ecs update-service --cluster ${CLUSTER_NAME} --service ${SERVICE_NAME} --force-new-deployment
|
Is there a way to assign a static IP when restarting the aws ecs service?
|
The issue was resolved by opening a ticket on Support Center and they fixed the problem.
|
When trying to create a new account in my Organization, I get the following message. I have a total of 3 accounts under my Organization, including the Management Account.

$ aws organizations list-accounts | jq '.Accounts | length'
3

The Organizations limits & quotas documentation tells me the default limit is 10: "10 — The default maximum number of accounts allowed in an organization." I also have no account invitations that would take space in the count, so what would be the reason for this message?

Update: Service Quotas is not counting the number of accounts; the Utilization field says "Not available". If I request the quota be set to 10, for example, it says that it must be greater than the current value. If my limit is 10, but I have only 3 accounts created, then why is it blocking creation on the Organizations blade?

Update 2: To add to my CLI evidence, I only have 3 accounts created.
|
AWS Organizations "You have exceeded the allowed number of AWS accounts"
|
You could try adding the path of that particular object in a WHERE condition while querying:

SELECT * FROM default.my_db
WHERE "$path" = 's3://bucket-name/path/filename.csv.gz'
|
I would like to set the location value in my Athena SQL create table statement to a single CSV file, as I do not want to query every file in the path. I can set and successfully query an s3 directory (object) path and all files in that path, but not a single file. Is setting a single file as the location supported?

Successfully queries CSV files in path:

LOCATION 's3://my_bucket/path/'

Returns zero results:

LOCATION 's3://my_bucket/path/filename.csv.gz'

Create table statement:

CREATE EXTERNAL TABLE IF NOT EXISTS `default`.`my_db` (
`name` string,
`occupation` string,
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim' = ','
) LOCATION 's3://bucket-name/path/filename.csv.gz'
TBLPROPERTIES ('has_encrypted_data'='false');

I have read this Q&A and this, but they don't seem to address the same issue. Thank you.
|
Amazon Athena set location to single csv file
|
Hard refresh your browser page using CTRL + F5 or CTRL + Shift + R.
|
I've been testing some Lambda functions and finally managed to get the data to push to DynamoDB, or at least the logs show the billed duration, and this only occurs after I've pushed data to the table; it doesn't happen before I test the function. Basically, I'm just testing a small function to push a UserID and Name to a DynamoDB table. I populate the params as seen below.

var UserID = toAdd['UserID']; var Name = toAdd['Name'];
var params = { Item: { 'UserID':UserID, 'Name':Name }, TableName: 'bookings2D' };

When I console log my params I'm seeing this:

dynamo.putItem(params, dynamoResultCallback);

And as you can see, the request is at the very least being triggered. However, when I navigate to my DB table and perform a table scan, I receive this error. This only occurs AFTER I run the Lambda function; if I delete and recreate the table it no longer appears. It seems like it's just something small format-wise I may not be grasping. Any help is much appreciated; any questions, feel free to ask :) Thanks
|
Cannot convert undefined or null to object - DynamoDB
|
Try the resource in this format:

arn:${Partition}:ecr:${Region}:${Account}:repository/${Repository-name}

See https://docs.aws.amazon.com/AmazonECR/latest/userguide/security_iam_service-with-iam.html
|
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPushPull",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account_id>:user/root"
},
"Action": [
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": [
"xxx.dkr.ecr.us-west-2.amazonaws.com/yyy"
]
}
]
}

The command I try to use is:

aws ecr set-repository-policy --repository-name yyy --policy-text file://ecr-policy.json

If I do ls on my Linux machine, I can see this ecr-policy.json in the same folder where I run the command. I want to grant access to myself. I am always getting the error:

An error occurred (InvalidParameterException) when calling the SetRepositoryPolicy operation: Invalid parameter at 'PolicyText' failed to satisfy constraint: 'Invalid repository policy provided'

I checked my AWS ARN and it ends with root.
|
Unable to Create Policy for AWS ECR
|
I found an OK solution for giving the resources that can't be duplicated different names, by adding the stage name to their properties like this:

In the Stack:

const bookingsTable = new dynamodb.Table(this, 'BookingsTable', {
tableName: `${stageTag}-BookingsTable`,
partitionKey: {
name: 'bookingId',
type: dynamodb.AttributeType.STRING
},
billingMode: dynamodb.BillingMode.PAY_PER_REQUEST
});

In the Stage:

const tables = new TablesStack(this, `TablesStack`, {
stageTag: this.stageName,
env: {
account: account,
region: region
}
});

Thereby I create two tables, one named PreProd-BookingsTable and the other Prod-BookingsTable, so I don't need to destroy the already deployed stacks.
|
I'm working on setting up a CI/CD pipeline with CDK in TypeScript. I have a very modular stack structure, so I have a Stage with 3 stacks: LambdasStack, EndpointsStack and TablesStack. As the names suggest, I have all my Lambdas in LambdasStack, and so on. For the pipeline I want the following flow:

Build
Deploy Stacks for PreProd
Integration Test
Destroy Stacks of PreProd
Manual approval before Prod
Deploy Stacks for Prod

The PreProd stacks have to be destroyed before the deployment of the Prod stacks because of the unique names of the tables within the TablesStack. And that's what I'm struggling with. My code to destroy them is:

const deletePreProdStacks= new ShellStep('Delete deployed Stacks', {
commands: [
'npm install',
'cdk destroy -f --all'
]
});

With 'cdk destroy -f --all' the stacks of the stage are not found, so they can't be deleted. How can I solve this problem? Giving the tables autogenerated names can't be the right solution, can it? Or is there an option to overwrite the PreProd stacks with the Prod stacks? I only have access to one AWS account, even though I read that having the Testing/PreProd stage and the Prod stage in different accounts is recommended. Maybe someone has a similar best-practice reference for me? Thanks in advance :)

Edit1: tag update
Edit2: added situation about deployment of PreProd & Prod in same account
|
cdk CI/CD Pipeline - destroy duplicate stacks
|
Your best option is to avoid this requirement. Cross-stack resource modification makes your IaC hard to reason about and introduces deploy-time side effects. And it makes the CDK best-practice gods cross.

If you absolutely, positively must, you can use a Custom Resource to change a resource defined in another stack. The AwsCustomResource construct (which uses a Lambda behind the scenes) can execute arbitrary SDK commands during the deployment lifecycle.

Notes:
- Class methods like from_string_parameter_name are the idiomatic way to import read-only versions of existing resources. These won't help in your case.
- Parsing a CDK-generated template, as in the OP, is an anti-pattern and anyway won't solve your problem.
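A minimal Python sketch of that AwsCustomResource approach, meant to sit inside the second stack's __init__ (CDK v2 assumed; the construct id, physical resource id and the "parma-1"/"NEW VALUE" names are taken from or modelled on the question, not from the original answer):

from aws_cdk import custom_resources as cr

update_param = cr.AwsCustomResource(
    self, "UpdateParam",
    on_update=cr.AwsSdkCall(                      # also used on create when on_create is omitted
        service="SSM",
        action="putParameter",
        parameters={"Name": "parma-1", "Value": "NEW VALUE", "Overwrite": True},
        physical_resource_id=cr.PhysicalResourceId.of("parma-1-update"),
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
    ),
)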
|
I would like to update an SSM Parameter using AWS CDK. My use case: in the first stack I am creating the SSM parameter; in the second stack I want to update (change) it. One solution that I came across was using a Lambda, and I would like to avoid it. Is there a way of updating an existing parameter via CDK, maybe something along the lines of cfn_param.set_value?

First stack:

class ParamSetupStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
ssm.StringParameter( self,
f'PIPELINE-PARAM-1',
parameter_name="parma-1",
string_value="SOME STRING VALUE",
description="Desctiption of ...."
    )

Second stack:

class UpdateParamStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
template = cfn_inc.CfnInclude(self, "Template",
template_file="PATH/TO/ParamSetupStack.json",
preserve_logical_ids=False)
cfn_param = template.get_resource("PIPELINE-PARAM-1")
cfn_param.set_value("NEW VALUE")
|
How to update existing SSM Parameter with AWS CDK
|
You can filter tag values based on wildcards (either * or ? for multi- or single-character match respectively) or by listing them as comma-separated values. So if I have three instances with a tag "t1" and different possible values across several instances, I can select them all with the following command:

aws ec2 describe-instances --filters Name=tag:t1,Values=*

You can also select subsets by using the wildcards or comma-separated lists. Some examples are below, and the AWS reference page is located here for further information.

Match all values for the t1 tag that start with "myval" (note the --query parameter is used in this example to just select the instance id, so we can quickly compare the results without scrolling all the JSON output):

$ aws ec2 describe-instances --filters Name=tag:t1,Values=myval* --query "Reservations[*].Instances[*].InstanceId" --output text

Match a comma separated list of values:

$ aws ec2 describe-instances --filters Name=tag:t1,Values=t1val1,t1val2

Match a single character wildcard:

$ aws ec2 describe-instances --filters Name=tag:t1,Values=t1val?
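Since the question asks about doing this through the SDK rather than the CLI, here is a boto3 sketch of the same filter shape (the asker uses the Java SDK, where the Filter objects take the same Name/Values form; the tag name and value pattern below are just examples):

import boto3

ec2 = boto3.client("ec2")

# Any instance whose "t1" tag value starts with "myval"
resp = ec2.describe_instances(
    Filters=[{"Name": "tag:t1", "Values": ["myval*"]}]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in resp["Reservations"]
    for instance in reservation["Instances"]
]
print(instance_ids)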
|
The AWS CLI provides a way of filtering by tag-key and tag-value in the following ways:

aws ec2 describe-instances --filters Name=tag-key,Values=my_tag
aws ec2 describe-instances --filters Name=tag-value,Values=my_tag_value
aws ec2 describe-instances --filters Name=tag:my_tag,Values=my_tag_value

but when the tag contains multiple values, e.g. my_tag:my_value1,my_value2,my_value3, it doesn't work.
When using any of the previous commands, it returns only those instances whose tag has exactly this value. All other cases, where the tag contains the value along with other values, are ignored. How do I achieve filtering like 'tag contains this value' instead of 'tag is exactly this value'?

P.S. I'll be using this approach in a Java application using the aws-sdk, so I'm interested in an aws-sdk API solution; I definitely don't want to filter a set of instances on my server side.
|
AWS CLI How to use tag filtering for the tag which contains multiple values when using ec2 describe-instances?
|
There can be multiple Security Groups on a resource. When evaluating Security Groups, access is permitted if any security group rule permits access. If no Security Group rule permits access, then access is Denied.

There is only one Network Access Control List (NACL) on a subnet. When evaluating a NACL, the rules are evaluated in order. There is a default rule that is evaluated last, which determines whether the default is Allow or Deny.

I agree with you that the lecturer's statement appears inaccurate.
|
I am currently working my way through theAWS Certified Solutions Architect - Associate (SAA-C02)Linkedin Learning course and I came across something confusing regarding security groups.
During the lecture, the lecturer says that when using security groups:We evaluate all rules before deciding whether to allow trafficAs opposed to how NACLs work, where you stop processing once a rule matches.But at the end of the lecture, the summary says the following:It is important to get the order of rules correct or the desired permissions will not be accomplishedI don't understand this. If all rules are evaluated, then why would their order matter? Furthermore, security groups only support allow rules. There is no case of one rule allowing traffic and another one denying it.
|
How are security group rules evaluated?
|
You can try to validate the filename through your backend API before returning the presigned PUT URL. A less secure option, but one that can still be useful, is to validate the file content in the frontend client.
|
Currently working on a project that involves what is essentially an online gallery stored on AWS S3. The current process in question is that the frontend webpage sends a request for an API service hosted on EC2, which then returns with a presigned URL. The service will be using Go, with aws-sdk-go-v2.The URL is time limited, set as a PUT object type. Unfortunately I haven't figured out how to limit the other aspects of the file that will be uploaded yet. Ideally, the URL should be limited in what it can accept, IE images only.My searchs around have come up with mixed results, saying both it's possible, not possible, or just outright not mentioned for what I'm doing.There's plenty of answers regarding setting a POST/PUT policy, but I they're either for the V1 SDK, or just outright a different language.This answer here even has a few libraries that does it, but I don't want to resort to using that yet since it requires access keys to be placed in the code, or somewhere in the environment (Trying to reap the benefits of EC2's IAM role automating the process).Anyone know if this is still a thing on V2 of the SDK? I kinda want to keep it on V2 of the SDK just for the sake of consistency.EDIT: Almost forgot. I saw that it was possible for S3 to follow a link upon upload completion as well. Is this still possible? Would be great as a verification that something was uploaded so that it could be logged.
|
AWS S3 Presigned URLs Policies
|
HTTP/HTTPS runs on top of TCP. If you check the Open Systems Interconnection (OSI) model, HTTP/HTTPS are at the top application layer, whereas TCP is at the transport layer.

An ALB supports only the application layer (HTTP/HTTPS in this case), while an NLB works at the transport layer (TCP/UDP). Thus an NLB can load balance anything carried over TCP as well; this includes HTTP, SSH, FTP and so on.

There is no protocol conversion between TCP and HTTP, as they work on different layers, so everything happens transparently.
|
If we need static IP address in AWS for Load balancer then we have to go for Network Loadbalancer forwarding requests to Application Loadbalancer.Now since ALB only supports HTTP and HTTPS protocols
And
NLB only supports TCP protocolHow does this communication actually work?The client like browser will send the request in HTTP or HTTPS.
How does this communication happens ?
|
How does NLB -> ALB actually work ? ALB allows only HTTP, HTTPS, WebSockets and NLB supports only TCP, TLS, UDP
|
cloud is only supported in Terraform 1.1.0 and newer, not in any older version. From the docs:

Because the cloud block is not supported by older versions of Terraform, you must use 1.1.0 or higher in order to follow this tutorial.

You have to upgrade your TF 0.14 to the newest version.
|
I'm following this documentation on migrating local state to Terraform Cloud: https://learn.hashicorp.com/tutorials/terraform/cloud-migrate

It is fairly straightforward; I just need to copy this code:

terraform {
required_version = ">= 1.1.0"
required_providers {
random = {
source = "hashicorp/random"
version = "3.0.1"
}
}
cloud {
organization = "<ORG_NAME>"
workspaces {
name = "Example-Workspace"
}
}
}

The problem is that my code below already has the same structure as the snippet above:

terraform {
required_version = ">= 0.14.9"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.27"
}
}
cloud {
organization = "ORG"
workspaces {
name = "ORG_WORKSPACE"
}
}
}

But it is returning an error: Blocks of type "cloud" are not expected here.

Some notes:
- I put the code in a file called providers.tf.
- I have written some other code and did a terraform apply, which produced a tfstate.
- I logged in to Terraform Cloud using terraform login with my credentials.
- When I try to terraform init, the error occurs.

Any help would be much appreciated. Thank you!
|
Blocks of type "cloud" are not expected here for integrating with Terraform Cloud
|
One option is to use array_agg and process the resulting array of arrays via flatten and array_distinct:

-- sample data
WITH dataset (id, userids) AS (
WITH dataset (id, userids) AS (
VALUES (1, array [ 1, 2, 3 ]),
(1, array [ 3, 4, 5 ])
)
--query
SELECT array_distinct(flatten(array_agg(userids)))
FROM dataset
GROUP BY id

Output:

_col0
[1, 2, 3, 4, 5]
|
I am trying to combine arrays of unique userids into one single array of unique userids. AWS Athena does not have the set_union function, so I cannot use set_union(userids). And reduce_agg seems not to allow arrays:

reduce_agg(userids, ARRAY[], (a, b) -> array_union(a, b), (a, b) -> array_union(a, b))

Is there any other trick I can use to combine arrays into one array (distinct items)?
|
AWS athena (presto SQL): How to take the (set-like) union of arrays in a group by statement
|
I experienced this error even after following the AWS guide on how to compile Lambda functions with extra dependencies. After getting stuck for hours, it turned out to be a difference in CPU architecture between my personal laptop and the Lambda function's runtime environment. My personal laptop is an Asus TUF A15, which uses an x64 AMD Ryzen 7 4800H CPU. However, my Lambda function's runtime was Python 3.8 on x86_64, which is an Intel CPU. The cryptography library packages I was downloading and packaging on my AMD CPU are not compatible with the Intel CPU. Most other libraries work fine, though, but apparently not the cryptography library.

Solution: I launched a temporary m5.large EC2 instance running an x86_64 AMI for Amazon Linux 2022 (it can be a T2, I don't think it matters as long as it's x86_64), then followed the same steps specified in this doc, and my function executed successfully without any issues.
|
I am using pysftp to connect to a sFTP site from a python function. This is working fine from my local that is running asfile_track.py. But when I deploy that on AWS lambda it is failing with –{
"error Message": "Unable to import module 'lambda function': cannot import name 'asn1' from 'cryptography.hazmat.bindings._rust' (unknown location)",
"error Type": "Runtime.ImportModuleError",
"requestId": "0235edb8-25a3-4570-a1ea-2a2696a7dd04",
"stack Trace": []
}Please help me out!
|
AWS Lambda: cannot import name 'asn1'
|
The Certificates should be the ARN of a certificate from ACM (AWS::CertificateManager::Certificate), not your ListenerCertificate.
|
I have a cloudformation template that is trying to create an application load balancer listener and it also attempts to create a listener certificate. The issue is both resources reference each other. I get a circular dependency error when validating the yaml configuration...#APPLICATION LOAD BALANCER LISTENER
ApplicationLoadBalancerListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
AlpnPolicy:
- String
Certificates:
- !Ref ListenerCertificate
DefaultActions:
- Action
LoadBalancerArn:
Ref: ApplicationLoadBalancer
Port: 443
Protocol: HTTPS
SslPolicy: ELBSecurityPolicy-2016-08
#APPLICATION LOAD BALANCER LISTENER SSL LINK
ListenerCertificate:
Type: AWS::ElasticLoadBalancingV2::ListenerCertificate
Properties:
Certificates:
- !Ref SSLCertificate
ListenerArn:
Ref: ApplicationLoadBalancerListener
|
CloudFormation Elastic load balancer listener circular dependency with listener certificate
|
You can store a shell script in this directory:

/var/lib/cloud/scripts/per-boot/

It will be automatically run after every boot. (This is done by cloud-init, which also runs User Data scripts.)
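A minimal sketch of such a script (the filename and the command are just examples; the file must be executable, e.g. chmod +x, for cloud-init to run it):

#!/bin/bash
# /var/lib/cloud/scripts/per-boot/my-task.sh
# Runs after every boot of the instance.
echo "booted at $(date)" >> /var/log/per-boot.log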
|
I had a project where I need to run a command at EC2 reboot. I found only information about User Data but that works only atfirst launchwhich is not exactly what I needed. I need a command to run everytime I connect to the machine.
|
How to run a command at EC2 Instance RE-boot?
|
It doesn't seem like they give "Location" in the response anymore.

{
'$metadata': {
httpStatusCode: 200,
requestId: '',
extendedRequestId: '',
cfId: ,
attempts: 1,
totalRetryDelay: 0
},
ETag: '',
ServerSideEncryption: ''
}

This is the response object. Use this to get the Location:

`https://${BUCKETNAME}.s3.${REGION}.amazonaws.com/${KEY}`

Be sure "ACL" is "public-read".
|
I am trying to upload an image file to S3 but get this error:

ERROR: MethodNotAllowed: The specified method is not allowed against this resource.

My code uses the @aws-sdk/client-s3 package to upload with this code:

const s3 = new S3({
region: 'us-east-1',
credentials: {
accessKeyId: config.accessKeyId,
secretAccessKey: config.secretAccessKey,
}
});
exports.uploadFile = async options => {
options.internalPath = options.internalPath || (`${config.s3.internalPath + options.moduleName}/`);
options.ACL = options.ACL || 'public-read';
logger.info(`Uploading [${options.path}]`);
const params = {
Bucket: config.s3.bucket,
Body: fs.createReadStream(options.path),
Key: options.internalPath + options.fileName,
ACL: options.ACL
};
try {
const s3Response = await s3.completeMultipartUpload(params);
if (s3Response) {
logger.info(`Done uploading, uploaded to: ${s3Response.Location}`);
return { url: s3Response.Location };
}
} catch (err) {
logger.error(err, 'unable to upload:');
throw err;
}
};

I am not sure what this error means, and once the file is uploaded I need to get its location in S3. Thanks for any help.
|
uplode image to amazon s3 using @aws-sdk/client-s3 ang get its location
|
AWS blocks outbound traffic on port 25 by default for EC2 instances and Lambda functions (source: AWS support page).

You can place a request to remove the restriction on port 25 for your EC2 instance using this link: https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request. You have to be logged in to your AWS account to be able to access this link.
|
I'm trying to test an email validation service on an AWS EC2 instance, where my program would query the SMTP server (Mail Transfer Agent on port 25). For testing purposes, I replicated the program using a Telnet connection, which works fine on my local machine:

telnet gmail-smtp-in.l.google.com 25
Trying 66.102.1.27...
Connected to gmail-smtp-in.l.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP a20si12977453wrg.559 - gsmtp
HELO gmail.com
250 mx.google.com at your service
MAIL FROM:<[email protected]>
250 2.1.0 OK a20si12977453wrg.559 - gsmtp
RCPT TO:<[email protected]>
550-5.1.1 The email account that you tried to reach does not exist. Please try
550-5.1.1 double-checking the recipient's email address for typos or
550-5.1.1 unnecessary spaces. Learn more at
550 5.1.1 https://support.google.com/mail/?p=NoSuchUser a20si12977453wrg.559 - gsmtp

Telnet, however, doesn't work on EC2, as in the example below:

telnet gmail-smtp-in.l.google.com 25
Trying 74.125.133.26...
telnet: connect to address 74.125.133.26: Connection timed out
Trying 2a00:1450:400c:c08::1a...
telnet: connect to address 2a00:1450:400c:c08::1a: Network is unreachable

EC2 is running a Linux instance and allows all outbound connections. My guess here is that AWS doesn't let you connect to an SMTP server on port 25 to prevent spam, but I haven't seen confirmation of that. Any suggestions how I could fix this? If AWS is too rigid, are there any alternative AWS-like services where I could migrate my project? Thank you!
|
Cannot telnet via AWS EC2 to SMTP(MTA) server on port 25
|
Yes, by using transformations.

As for creating one replication task per table: if your replication tasks are full-load-and-cdc, then definitely no, because each replication task will end up with the redo logs being transferred to your replication instance, consuming memory, CPU power, and network bandwidth.
|
I'm using AWS DMS to CDC from MySQL on-premise database to AWS S3.Is it feasible to transfer only a few columns from the source table to target? I have a table with more than 50 columns and only need 10.If I want to transfer 5 tables from the source, is it best practice to create 1 replication task for each table, or put all 5 in one?
|
AWS DMS replicate only selected columns
|
When calling ReceiveMessage(), you can specify a list of AttributeNames that you would like returned.

One of these attributes is ApproximateReceiveCount, which returns "the number of times a message has been received across all queues but not deleted".

It is an 'approximate' count due to the highly parallel nature of SQS -- it is possible that the count is slightly off if a message was processed around the same time as this request.
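A small boto3 sketch of reading that attribute and deleting a message once it has been received too many times, which is what the question asks for (the queue URL and the threshold of 3 are made up):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateReceiveCount"],
    MaxNumberOfMessages=1,
)

for msg in resp.get("Messages", []):
    receive_count = int(msg["Attributes"]["ApproximateReceiveCount"])
    if receive_count > 3:
        # Give up on this message instead of letting it loop forever
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    else:
        pass  # process the message here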
|
I have a use case where I need to know, in my code, how many times an SQS message has been read. For example, we read a message from SQS and, for some reason/exception, we can't process it. The message then becomes available in the queue again after the visibility timeout, which creates an endless loop. Is there a way to know how many times a particular SQS message has been read and returned to the queue?

I am aware this can be handled via a dead-letter queue; since that requires more effort, I am checking whether there is any other option. I don't want to retry the message if it fails more than x times; I want to delete it. Is that possible in SQS?
|
is it possible to know how many times sqs messsage has been read
|
This is really stupid. I tried to check IPs for my endpoint with host *.cvpn-endpoint-XXXX.prod.clientvpn.[region].amazonaws.com and host cvpn-endpoint-02aa72c3aa8d442d6.prod.clientvpn.eu-west-1.amazonaws.com, and both failed. As described in this response, you need to add a random subdomain. By adding this in the .ovpn file (on the remote parameter), it works!
|
I'm trying to create an AWS Client VPN endpoint. I followed this AWS tutorial and I always get a timeout error like this: DNS resolution error: 30 times. I'm not sure what to do; I saw some videos on this topic and it seems I did everything correctly. Does anyone know how to debug this (or what could be the cause)?
|
AWS VPN Client Endpoint DNS resolution timeout with openVPN
|
For standard use cases you do not have to actively manage success/failure communication between Lambda and SQS. If the Lambda returns without error within the timeout period, SQS will know the message was successfully processed. If the function returns an error, then SQS will retry a configurable number of times and finally direct still-failing messages to a Dead Letter Queue (if configured).

Docs: "Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can't be processed (consumed) successfully."

Important: Add your DLQ to the SQS queue, not the Lambda. Lambda DLQs are a way to handle errors for async (event-driven) invocation.
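A boto3 sketch of attaching a dead-letter queue to the source queue via its RedrivePolicy, which is the "add the DLQ to the SQS queue" part above (the queue URL, DLQ ARN and maxReceiveCount are placeholders):

import boto3
import json

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/source-queue",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:source-queue-dlq",
            "maxReceiveCount": "5",  # after 5 failed receives the message moves to the DLQ
        })
    },
)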
|
I have a Lambda function with SQS as its trigger. When the Lambda executes, regardless of whether an error happened or not, it puts the job back in the queue and creates a loop. Should I return something in the Lambda function to let SQS know that I got the message (did the job)? How should I ack the message? As far as I know, we don't have ack and nack in SQS. Is there any option in the SQS configuration to only retry N times if a job fails?
|
AWS lambda with SQS trigger keeps retrying and putting job back in the queue
|
No, files from /tmp are not automatically deleted.

The AWS Lambda FAQs state: "To improve performance, AWS Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy. To learn more about how Lambda reuses function instances, visit our documentation. Your code should not assume that this will always happen."

As per the above doc and experience, you may find an empty or "pre-used" /tmp directory depending on whether AWS Lambda has reused a previous Lambda environment for your current request. This may or may not be suitable depending on the use case, and there are no guarantees, so if you need to ensure a clean /tmp directory on every function invocation, clear the /tmp directory yourself.

Is there a flush() sort of function? No, AWS does not (and shouldn't) offer a way to do this programmatically via their SDK, as this is related to file I/O. How to delete all files inside the /tmp directory will depend on the Lambda function runtime. For Python, try:

from subprocess import call
...
call('rm -rf /tmp/*', shell=True)
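A pure-Python alternative that avoids shelling out (a sketch; it assumes everything under /tmp belongs to your function and is safe to delete):

import glob
import os
import shutil

def clear_tmp():
    # Remove every file and directory directly under /tmp
    for path in glob.glob("/tmp/*"):
        if os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
        else:
            os.remove(path)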
|
Does AWS clear the /tmp directory automatically? Is there a flush() sort of function? If not, how can I remove/delete all the files from the /tmp folder?

I have an AWS Lambda function where I download a file to my /tmp folder. I unzip the zipped file and gzip all individual files. All this happens within the /tmp directory and before I upload the gzipped files to S3 again. Afterwards, I no longer need the files in my /tmp folder and would like to clear the directory. If I open /tmp on my local macOS machine, I don't see any related files at all, so I am not sure how to check whether they are successfully deleted or not.
|
Are files from /tmp automatically deleted by AWS Lambda?
|
Q1:
The policies are different, because of the extra condition that is imposed on account XYZ in the CDK code, which isn't imposed in the manually created policy. If that's not what you want/need, you will have to change it.Q2:
If you want to achieve the exact same policy, you can use theattachToPolicyfunction on theRoleto add the second statement separately, without the extra condition of theexternalIds.
|
I want to create the following trust relationship for an IAM role using CDK code:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<ABC>:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "<ID>"
}
}
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<XYZ>:root"
},
"Action": "sts:AssumeRole"
}
]
}

The above policy was created directly using the AWS console, but when I create it through CDK code I get something like:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<XYZ>:root",
"arn:aws:iam::<ABC>:root"
]
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "<ID>"
}
}
}
]
}

I am using the following CDK code to achieve this:

const account1 = new AccountPrincipal('<XYZ>');
account1.withConditions({StringEquals : { "sts:ExternalId": "<ID>"}});
const account2 = new AccountPrincipal('<ABC>');
const role1 = new Role(this, 'role1', {
roleName: "role1",
description: "some description",
assumedBy: new CompositePrincipal(account1, account2),
externalIds: ['<ID>'],
});

Q1: Will these two policies have different effects?
Q2: How can I achieve the first policy from CDK?
|
AWS : Trust relationship in CDK
|
I had the exact same problem and was searching for a stack somewhere in the Management Console. After a long time of reinstalling, updating etc., I was fooled again by the Console. I checked for the stacks via the AWS CLI:

aws cloudformation list-stacks --region eu-west-1 --profile account-id_AWSAdministratorAccess

I found a stack hiding from the Console:

"StackSummaries": [
{
"StackId": "arn:aws:cloudformation:eu-west-1:account-id:stack/CDKToolkit/some-uuid",
"StackName": "CDKToolkit",
"TemplateDescription": "This stack includes resources needed to deploy AWS CDK apps into this environment",
"CreationTime": "2021-12-13T15:16:34.541000+00:00",
"LastUpdatedTime": "2021-12-13T15:16:40.397000+00:00",
"DeletionTime": "2021-12-13T15:23:40.728000+00:00",
"StackStatus": "DELETE_IN_PROGRESS",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
},I could delete it via:aws cloudformation delete-stack --stack-name CDKToolkit --region eu-west-1 --profile account-id_AWSAdministratorAccessAfter that I could Bootstrap again
|
I am trying to use the cdk bootstrap command.>$env:CDK_NEW_BOOTSTRAP=1
>npx cdk bootstrap --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess aws://1234.../us-east-1I get the output:CDK_NEW_BOOTSTRAP set, using new-style bootstrapping.
Bootstrapping environment aws://1234.../us-east-1...I just hangs at this point. On one attempt I came back more than an hour later and it was still stuck. An s3 bucket with the prefix cdk did show up but has no files.I've attempted to run it a few times but it's always the same.What can cause it to get stuck this way?UpdateBased on the comment from vt102 I have obtained some error out from the command.>$env:CDK_NEW_BOOTSTRAP=1
>npx cdk bootstrap --cloudformation-execution-policies --verbose --debug arn:aws:iam::aws:policy/AdministratorAccess aws://1234.../us-east-1The output is now:Waiting for stack CDKToolkit to finish creating or updating...
Stack CDKToolkit has an ongoing operation in progress and is not stable (REVIEW_IN_PROGRESS (User Initiated))That second line about the unstable stack repeats every couple of seconds.I went into the AWS Console and looked under CloudFormation -> Stacks but there are no stacks listed. I attempted to change the status filter but nothing.How can I find and delete this unstable stack and start again?I recall when I first tried the cdk command I made syntax error in the account number and region. It got stuck and I killed it. That's probably when it got into this invalid state.That
|
What causes cdk bootstrap to hang?
|
Regarding CloudFront: cache size is virtually unlimited and you pay for it with transfer fees. The fact that you set a "retention time of 1 year" does not mean your files will stay in the cache for that long. If AWS deems your files infrequently used, it will purge them well before they are 1 year old. From the docs:

If a file in an edge location isn't frequently requested, CloudFront might evict the file—remove the file before its expiration date—to make room for files that have been requested more recently.
|
I am trying to understand the pricing of CDN in AWS, GCP and Azure.One thing I am not able to figure out is, if there is any limit to total amount of cache which can be stored in a single distribution, and is there any added cost as the total volume of cache keeps on increasing.To give my prospective:-Usecase 1) Using CDN for delivering 1_000_000 files of size 100KB in a location daily, with retention time of1day.Total Volume of data present in cache (1year later):1_00_000 * 100KB * 1 = 100GBTotal Bandwidth consumed (1year later):1_000_000 * 100KB * 365 = 3650GBUsercase 2) Using CDN for delivering 100 files of size 100MB in a location daily, with retention time of1years.Total Volume of data present in cache (1year later):100 * 100MB * 365 = 3650GBTotal Bandwidth consumed (1year later):100 * 100MB * 365 = 3650GB(Note:Let's assume in the use case above all the files counted are always unique.)So speaking of cloudfront, it will mainly charging me for the bandwidth, which is same for both the use cases. However in Usecase 2 the resources spent for storing the cache is a lot higher.My question is am I missing something in the pricing, or CDN providers don't care about storage costs?
|
Do CDN such as cloudfront have any limitation to maximum volume of cache that can be stored in a distribution?
|
The error means that the CPU architecture of the AMI you're using is 64-bit (x86), but you selected a 64-bit (Arm) instance type to convert to. You can search for arm64 instance types in the Console -> EC2 -> Instance types.
|
I am trying to convert t3a.medium to t4g.medium instance getting below error.'t4g.medium' is not a valid instance type for instance 'i-xxxxxxxxxxxxxxxx' of architecture 'x86_64'Is there any way to find the right instance type? I did some research but not found anything related, no reply from the AWS support form.I want to convert all my t3a and t3 type instances to the t4g type instance, any clue to fix this problem will be really helpful.
|
Converting t3a.medium to t4g.medium EC2 Instance
|
Adding ::/0 and 0.0.0.0/0 route to igw in route tables fixed the issue.
|
I needed an ipv6 address for my ec2 instance so I reconfigured my vpc and subnet to provide ipv6 address. I got a new ipv6 address assigned to my ec2 instance. But now ping6 to any ipv6 address like 2a03:2880:f11c:8183:face:b00c::25de from ec2 instance is not working. Moreover wget command is also not able to connect (wgethttps://archive.apache.org/dist/kafka/2.6.0/kafka_2.12-2.6.0.tgz). Pings to ipv4 addresses from ec2 instance are working fine. What could be the problem?
|
EC2 ping not working after moving to IPv6
|
I managed to make it work by using the .scan() method from the aws-sdk.

const attributeName = "values";
const attributeValue = "string1";
docClient.scan({
TableName: "Table",
ExpressionAttributeValues: {
":attribute": attributeValue,
},
  FilterExpression: `contains(${attributeName}, :attribute)`,
});
|
With the following (TableName: "Table"):[
{
name: "name1",
values: ["string1", "string2"]
},
{
name: "name2",
values: ["string1", "string2", "string3"]
}
]

My partition key would be name, without any sort key. I am trying to query all the items with the same value field. The following is what I have tried:

docClient.query({
TableName: "Table",
KeyConditionExpression: "name = :name",
FilterExpression: "contains(values, :value)",
ExpressionAttributeValues: {
":name": "certain_name",
":value": "string1",
},
});

Suppose I want to query all the items with the value field of "string1". However, AWS DynamoDB requires the partition key, which is unique for all my items. Are there any ways to query all the items with the same value field, without bothering about the partition key? Or would a better approach be to just get all the items from DynamoDB and query with my own methods? Thank you everyone!
|
AWS DynamoDB querying with values in an array
|
In the aws_iam_role documentation page there is no example that shows how to load a policy from a JSON file, but this works for me:

resource "aws_iam_role" "my_role" {
name = "my_role"
assume_role_policy = file("${path.module}/my/path/my_policy.json")
}
|
I'd like to provision an AWS IAM Role and its Assume Role Policy using Terraform. I already have the policy declared as a JSON file. From the documentation I understand that the resource aws_iam_role is what provisions an IAM Role, but as I read in the note:

The assume_role_policy is very similar to but slightly different than a standard IAM policy and cannot use an aws_iam_policy resource. However, it can use an aws_iam_policy_document data source. See the example above of how this works.

Which means I'm tied to the IAM policy declaration according to the aws_iam_policy_document syntax (which itself requires me to manually convert it into another format), but I don't see how I can import the policy from a JSON file instead to create the IAM Role I need. The reason behind this is that the policy is quite broad and I'd like it to be in a separate JSON file. Can anyone advise on how to declare an IAM Role with a policy declared in a JSON file?
|
How to declare an AWS IAM Assume Role Policy in Terraform from a JSON file?
|
It's about CDK constructs. You should know that there are 3 levels of constructs: L1 (low-level), L2 (regular) and L3 (high-level patterns).

InterfaceVpcEndpoint is the L2 construct used to create interface VPC endpoints. The VpcEndpoint class is a support class which is the parent of InterfaceVpcEndpoint. You can think of VpcEndpoint as being somewhere between L2 and L1. You shouldn't use it directly, or even can't, as it's used to encapsulate common functionality between interface VPC endpoints and gateway VPC endpoints.

So to create an interface endpoint, use InterfaceVpcEndpoint. Similarly, to create a gateway endpoint, use GatewayVpcEndpoint. Both are L2 constructs.
|
AWS CDK provides an InterfaceVpcEndpoint and a VpcEndpoint. What is the difference between these two constructs?
|
InterfaceVpcEndpoint vs VpcEndpoint in AWS CDK
|
When we describe an EC2 instance, we get an IamInstanceProfile key which has an Arn and an Id. The Arn has the instance profile name attached to it:

'Arn': 'arn:aws:iam::1234567890:instance-profile/instanceprofileOrRolename'

This name can be used for further operations like getting the role description or listing the policies attached to the role. Thanks
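A boto3 sketch of that flow (the instance id is a placeholder): it pulls the profile name out of the ARN and then looks up the role(s) the profile contains.

import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]

profile_arn = instance["IamInstanceProfile"]["Arn"]
profile_name = profile_arn.split("/")[-1]  # name is the last part of the ARN

# The instance profile contains the role(s) attached to the instance
profile = iam.get_instance_profile(InstanceProfileName=profile_name)
for role in profile["InstanceProfile"]["Roles"]:
    print(role["RoleName"])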
|
I am trying to list down all the EC2 instances with its IAM role attached using boto3 in python3. But I don't find any method to get the IAM role attached to existing EC2 instance. is there any method in boto3 to do that ?When I describe an Instance, It has a key name IamInstanceProfile. That contains instance profile id and arn of the iam instance profile. I don't find name of IAM instance profile or any other info about IAM roles attached to it. I tried to use instance profile id to describe instance profile, But it seems to describe an instance profile, we need name of instance profile (not the id).Can someone help on this ? I might be missing something.Thanks
|
is there a way to get name of IAM role attached to an EC2 instance with boto3?
|
Instead of importing the database instance, try importing the database instance's security group.

ISecurityGroup databaseSecurityGroup = SecurityGroup.FromSecurityGroupId(scope, "ImportedDatabaseSecurityGroup", securityGroupId, new SecurityGroupImportOptions());
var fargateServiceSecurityGroup = new SecurityGroup(this, "FargateServiceSecurityGroup", new SecurityGroupProps());
databaseSecurityGroup.Connections.AllowFrom(fargateServiceSecurityGroup, Port.AllTcp(), "Allow from fargate security group");
|
I'm currently trying to identify an existing MySQL instance and I want to allow my ECS deployment to be able to connect to it. The progress so far is the following:

const rdsPrimaryDatabase = rds.DatabaseInstance.fromDatabaseInstanceAttributes(this, 'ApplicationReadWrite', {
instanceEndpointAddress: "application_database_ewqqqrqw.eu-west-1.rds.amazonaws.com", port: 3305, securityGroups: [],
instanceIdentifier: 'application_database'
});
const securityGroup = new ec2.SecurityGroup(this, 'ApplicationEcsSecurityGroup', {
vpc: vpc,
allowAllOutbound: true,
securityGroupName: 'ApplicationEcsSecurityGroup',
})
securityGroup.connections.allowTo(rdsPrimaryDatabase, 3306, 'Primary Database')The above is currently resulting in the following error, related to the last line:Argument of type 'IDatabaseInstance' is not assignable to parameter of type 'IConnectable'.
The types of 'connections.defaultPort' are incompatible between these types.

The error is quite understandable, but I'm unsure how to overcome this, and I'm not quite sure that I'm doing it the right way. Any help is appreciated.
|
Allow connections from ECS to an existing RDS database?
|
You can use dynamic blocks. The condition depends on what exactly your condition is (var.state is not shown, so I don't know what it is), but in general you can do:

data "aws_ami" "my_ami" {
filter {
name = "name"
values = ["my_ami_name"]
}
dynamic "filter" {
for_each = var.state ? [1] : []
content {
name = "state"
values = [var.state]
}
}
}
|
Given the data source definition:

data "aws_ami" "my_ami" {
filter {
name = "name"
values = ["my_ami_name"]
}
}

How does one add a second filter only if a condition is true? Example pseudo-code of what I want:

data "aws_ami" "my_ami" {
filter {
name = "name"
values = ["my_ami_name"]
}
var.state ? filter {
name = "state"
values = [var.state]
} : pass
}

The second filter would only be used if the state variable has content. Note that I don't want to use an 'N/A' value to always use the second filter, regardless of whether it's needed or not.
|
Terraform: How to add a filter to a data source conditionally
|
Found the solution on AWS re:Invent 2019: Scalable serverless event-driven applications using Amazon SQS & Lambda (API304)https://youtu.be/2rikdPIFc_Q?t=1010
|
I need to do run a ML model in aws sagemaker in high volume.The recommended flow will beuser -> web server -> SQS -> lambda -> sagemakerWhat I want to compare isuser -> web server -> Async lambda -> sagemakerWhat I want to know if I can do an async call with just lambda why will I use SQS
|
SQS + Lambda vs Async Lambda
|
Going by the WCU calculation guide here, it looks like BatchWriteItem and PutItem both follow the same rounding rule for item size and will consume the same WCUs.

"For PutItem, UpdateItem, and DeleteItem operations, DynamoDB rounds the item size up to the next 1 KB. For example, if you put or delete an item of 1.6 KB, DynamoDB rounds the item size up to 2 KB."

"BatchWriteItem—Writes up to 25 items to one or more tables. DynamoDB processes each item in the batch as an individual PutItem or DeleteItem request (updates are not supported). So DynamoDB first rounds up the size of each item to the next 1 KB boundary, and then calculates the total size. The result is not necessarily the same as the total size of all the items. For example, if BatchWriteItem writes a 500-byte item and a 3.5 KB item, DynamoDB calculates the size as 5
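A quick sketch of that rounding rule as code, reproducing the 500-byte + 3.5 KB example from the quoted docs:

import math

def wcu_for_items(item_sizes_bytes):
    # Each item is rounded up to the next 1 KB before summing,
    # for PutItem and BatchWriteItem alike (standard writes assumed).
    return sum(math.ceil(size / 1024) for size in item_sizes_bytes)

print(wcu_for_items([500, int(3.5 * 1024)]))  # 5 (1 KB + 4 KB), not 4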
|
I have about 200 records that I need to write frequently to DynamoDB and I'm trying to see if the BatchWriteItem saves any overhead in terms of WCU versus iterating PutItem 200 times. Other than the number of network requests sent, does BatchWriteItem lower the amount of WCU used?
|
DynamoDB: Does BatchWriteItem use less Write Compute Units than PutItem for a high number of records?
|
You can go into your Cognito user pool settings. Under Message customizations -> Verification type, select Link instead of Code.
|
I need to verify the user with email verification by clicking a verification link like the firebase default template instead of entering a verification code on signup.Here's my codeexport const signUp = async (username, password, details) => {
try {
const {user} = await Auth.signUp({
username,
password,
attributes: {
...details,
},
});
console.log(user);
} catch (error) {
console.log('error signing up:', error);
}
};

I am also not able to find this setting in the admin panel. Thanks in advance. 🙂
|
Use verification link instead of verification code AWS Amplify
|
What could be the origin of that problem? You are correct, this is AWS/console's fault. Specifically, it provides the wrong permissions in the Lambda's resource-based permissions for the default route to work. To fix that you have to edit the permissions.

Specifically, go to your function's Resource-based policy (this is different from the execution role). You should find one Policy statement there which you have to edit. Then change the Source ARN from something like:

arn:aws:execute-api:ffffff:xxxx:api-id/*/*/function-name

to

arn:aws:execute-api:ffffff:xxxx:api-id/*/*
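The same fix can also be applied from code rather than the console; a boto3 sketch that replaces the statement with one whose Source ARN covers the default route (the function name, statement id, region, account id and API id are all placeholders; use the values shown in your own resource-based policy):

import boto3

lam = boto3.client("lambda")

lam.remove_permission(FunctionName="test", StatementId="apigateway-invoke")
lam.add_permission(
    FunctionName="test",
    StatementId="apigateway-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    # No trailing /function-name, so the $default route is allowed too
    SourceArn="arn:aws:execute-api:ap-northeast-1:123456789012:spy3z1jvu8/*/*",
)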
|
There are tons of similar questions both on this site and on the web, which leads me to believe there is something really wrong with AWS' documentation due to this causing grief to so many people.So, I decided to post the most basic example step by step.First, we create a new function:It has default "everything", I don't touch a single line of code.(the red error message is AWS not playing nice with Firefox)The default code passes the test:Now I add a trigger:This gives me the link for the trigger:I can go to the API endpoint:https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com/default/testAnd it works:Now, the problems will start. I open the API gateway that was created:and try the default link:https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.comAnd...Most of the people having similar questions seem to be having an issue with the gateway expecting some json content, etc, but here is an untouched AWS sample and the gateway link doesn't work.The troubleshooting steps say to add logging and troubleshoot it that way, but there is nothing of interest in the logs.What could be the origin of that problem?
|
"internal server error" with API gateway and lambda on AWS
|
AWS Control Tower needs trusted access to be disabled for both CloudTrail and Config. To disable this you need to log in to the Organization management account and go to AWS Organizations > Services > Disable Config/CloudTrail.

Trusted access enabled at an Organization level enables these services to inject service roles in all member accounts where they need to change something. Disabling this for CloudTrail would result in the Organization trail not working anymore; however, the master trail would still be intact. All shadow trails in member accounts would be disabled. AWS still allows you to search/filter/download CloudTrail management events in each of the member accounts for the last 90 days, just that they wouldn't be transferred to a central S3 bucket for storage.
|
I'm trying to create an AWS Control Tower landing zone for my AWS organization, and am getting a message sayingYou must unsubscribe your organization from AWS CloudTrail so that AWS Control Tower can proceed. During the setup process, AWS Control Tower creates a new trail in the audit account that's part of your landing zone.How do I do this? Does this mean stopping all CloudTrail trails from sending logs, or is there an organization-wide setting to disable?
|
How do I unsubscribe my AWS organization from CloudTrail?
|
You can use:

git diff --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_WEBHOOK_PREV_COMMIT

where $CODEBUILD_WEBHOOK_PREV_COMMIT is the commit id of the previous commit and $CODEBUILD_RESOLVED_SOURCE_VERSION is the commit id of the current one.

Inside a build phase you can check for the change with:

- |
  if [ "$(git diff --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_WEBHOOK_PREV_COMMIT | grep -e <file_path>)" != "" ]; then
    #your code;
  fi
|
I'm trying to set up an AWS CodeBuild project to run tests to validate PRs and commits on a GitHub repository.Because of the nature of the repo (a monorepo combining several ML models):I need to restrict down to only run tests associated with files changed in the PR/commit to keep time+cost under control, butThe tests will typically require reference to other un-changed files in the repo: So can't justonlypull changed files through to the build container.How can a running CodeBuild build triggered by a GitHub PR (as per the docshere) 'see' which files are changed by the PR to selectively execute tests?
|
How can an AWS CodeBuild job see which files have changed?
|
Keep in mind that a Lambda function has one handler (=> 1 invocation = 1 method called). You can achieve the 1 route <-> 1 method mapping by doing one of the following:

1. Have a single Lambda function triggered by your 3 API Gateway routes. You can then add a simple router to your function which parses event['path'] and calls the appropriate method:

def lambda_handler(event, context):
path = event['path']
if path == '/webhook/start':
return start_delivery(event, context)
elif path == '/webhook/status':
return update_status(event, context)
elif path == '/webhook/end':
return end_status(event, context)
else:
return { "statusCode": 404, "body": "NotFound" }Create 1 Lambda function by route:webhook/start triggers the StartDelivery Lambda function with start_delivery as handlerwebhook/status triggers the UpdateDelivery Lambda function with update_delivery as handlerwebhook/end triggers the EndDelivery Lambda function with end_delivery as handlerYou can use Infrastructure as Code (Cloudformation) to easily manage these functions (SAM:https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html)
|
I have 3 webhooks that calls my API Gateway which calls my Lambda Function.url/webhook/....I want each webhook to call its own python methodstartDelivery --> def start_delivery(event, context):
UpdateStatus--> def update_status(event, context):
EndDelivery--> def end_delivery(event, context):I understand most likely one method will be executed via "url/webhook" which calls the appropriate python method.def Process_taskto call one of the threeWhat is the ideal way to set up this structure?Creating different urls for Webhooks and API Gateway captures it and somehow calls the handler?url/webhook/start
url/webhook/status
url/webhook/endSending a different query string for each webhook? and in the lamba parse the query string and call the right python method?
|
AWS Lambda AP Gateway Handling DIfferent Routes
|
In Amazon DocumentDB, modifying the tls parameter requires a reboot for the change to take effect. Thus, it is possible to modify the parameter, still have a pending change on the cluster, and still be able to connect without TLS. It is recommended to reboot all the instances in the cluster for the pending change to take hold, and then try connecting with TLS again.
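A boto3 sketch of rebooting every instance in the cluster (the cluster identifier is taken from the question's endpoint and may differ in your account):

import boto3

docdb = boto3.client("docdb")
cluster_id = "docdb-2021-03-29-09-23-57"

# Reboot each instance so the pending tls parameter change is applied
instances = docdb.describe_db_instances(
    Filters=[{"Name": "db-cluster-id", "Values": [cluster_id]}]
)["DBInstances"]

for instance in instances:
    docdb.reboot_db_instance(DBInstanceIdentifier=instance["DBInstanceIdentifier"])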
|
I am using the following commands, as AWS suggests, to download the rds-combined-ca-bundle.pem file and to connect to the cluster:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

mongo --ssl --host docdb-2021-03-29-09-23-57.cluster-cqwdgjnpay32.ap-south-1.docdb.amazonaws.com:27017 --sslCAFile rds-combined-ca-bundle.pem --username docudbadmin --password *****

I am getting the following:

Error: couldn't connect to server docdb-2021-03-29-09-23-57.cluster-cqwdgjnpay32.ap-south-1.docdb.amazonaws.com:27017, connection attempt failed: HostUnreachable: Connection reset by peer :
connect@src/mongo/shell/mongo.js:353:17
@(connect):2:6
exception: connect failed

But, without enabling the tls and tls_monitor parameters in the cluster parameter group, I'm able to connect from EC2 through the mongo shell.
|
Cannot connect to aws documentdb ssl enabled cluster from mongo shell in ec2 which is in same vpc as of cluster
|
Imported resources won't actually be a part of your new stack (i.e. they won't be resources in the generated CloudFormation). So if you are only concerned with those resources, you don't need to worry.

If you want to make sure something in the stack is not deleted when the stack is deleted, you can call applyRemovalPolicy(RemovalPolicy.RETAIN) on the resource.
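For example, in the Python CDK it could look roughly like this (a minimal sketch using CDK v1-style imports; the bucket here is just a placeholder resource):

from aws_cdk import core
import aws_cdk.aws_s3 as s3

class MyStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Hypothetical resource created by this stack
        bucket = s3.Bucket(self, 'ImportantBucket')
        # Keep the bucket around even if the stack (or this resource) is deleted
        bucket.apply_removal_policy(core.RemovalPolicy.RETAIN)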
|
Oftentimes one must import existing resources into a stack when working with aws-cdk. When we "destroy" the stack, we take it for granted that the existing resources we imported are not deleted along with everything else.

Is it possible to explicitly not destroy a resource during the destroy process?
|
Do not delete existing resources when destroying a stack in AWS-CDK
|
You might consider moving from Amazon EC2 to Amazon Lightsail.

Lightsail has pricing plans that include volumes of data transfer traffic, and it is designed for people who just want to launch a small number of virtual computers (eg WordPress instances) rather than configure a whole cloud infrastructure.

See: Amazon Lightsail Pricing | Virtual Private Server (VPS) | AWS
|
I created a t3.micro EC2 instance on AWS billed at an hourly rate of $0.0065/hr. It's got 2 vCPUs and 1 GiB of memory. I managed to run a 128-tick CS:GO server on it, but the data transfer out charges are killing it. The estimated cost of this server per month is around $43, considering I only play 5 scrims (5v5 competitives) per day, and data transfer out alone costs me $38 in this case. However, some individuals are offering me a server for as low as $10 per month. What am I doing wrong? How do they do it?
|
Does anyone know how to reduce data transfer out charges on AWS?
|
I was able to solve this problem by disabling logging on the CloudFront distribution and then enabling it again.
|
I set up a CloudFront distribution one year ago with S3 logging enabled on it, linked to an S3 bucket named "cloudfront-s3". Now I went back to check and saw that the logs are not being sent to that bucket at all. I cannot seem to find the problem or cause of WHY this is happening. Any help would be appreciated.
|
Cloudfront Distribution S3 logging not working
|
The CDK uses CloudFormation under the hood, which manages the remote state of the infrastructure in a similar way to a Terraform state file.
You get the benefit of AWS taking care of state management for you (for free) without the risks of doing it yourself and messing up your state file.

The drawback is that if there is drift between the state CloudFormation thinks resources are in and their actual state, things get tricky.
|
Terraform has remote state via well-documented plugins, e.g. terraform.backend.s3 (https://www.terraform.io/docs/language/settings/backends/s3.html). Can AWS CDK provide remote state for the stacks?
I can't find it in the documentation: https://docs.aws.amazon.com/cdk/latest/guide/awscdk.pdf. I ask about aws cdk because I found poor documentation about aws cdktf.
I found that the CDK generates a lot of CloudFormation JSON files as well as uses them. Do they contain state?
|
Can aws cdk provide remote state?
|
Deleting a user won't affect anything that was created by that user.

There is, of course, the possibility that the user acted maliciously, crafting some sort of conditional based on their username or access key ID (although this isn't easy; CloudFormation, for example, doesn't provide the invoking user to the template).

A more likely problem is if they acted incompetently, storing their access key / secret key in some configuration file. This will cause the applications to fail whether you delete the user or disable their access keys.

Regardless, it's worth searching your codebase and deployments for both their username and their access keys.

If you don't want to delete the user outright, then be sure to disable their access keys (as @Marcin said), along with their ability to log in via the Console.
|
One of our teammates left and I was wondering what would happen if I delete his AWS IAM user. Will I get an issue with the resources he built (Fargate tasks, CloudWatch rules, etc.)? Thank you in advance for your answers.
|
Impact on deleting AWS User
|
This is currently not possible, and there is a GitHub issue open asking for this specifically. If you could make your voice heard there, that's where we are consolidating this feedback. Thank you.
|
I have created an AWS VPC, subnet and security group and want to deploy my Docker containers to these premade resources as a Fargate ECS service. However, I don't know how to tell the service to use a premade subnet (it looks like it randomly picks a subnet from an 'allowed' list of subnets, which is currently all subnets I have in my VPC). The below file correctly deploys to the desired VPC, cluster and security group, just not the subnet:

version: '2'
x-aws-cluster: "Test Cluster"
x-aws-vpc: "vpc-02dffc2a8782579d4"
x-aws-security-group: "sg-02511658ffc184884"
services:
nginx:
image: nginx:1.19
networks:
- Backend-Access
networks:
Backend-Access:
external:
name: sg-02511658ffc184884
ipam:
driver: default
config:
- subnet: subnet-0aeef680f1f9e5cda # this has no effect
      #- subnet: 172.31.4.0/24 # also does not place the service in this subnet

I am running it using docker compose up -d (running it without -d gives a "cluster does not exist" error).
|
How to make docker compose use existing AWS subnet
|
AWS recently announced S3 Event Notifications with Amazon EventBridge. Consequently, you can enable EventBridge notifications on your bucket and then have one (or more) Lambda function(s) triggered by those events.

Example implementation using AWS SAM:

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'S3 EventBridge Example'
Parameters:
BucketName:
Type: String
Description: 'Name of the bucket to be created'
Resources:
S3Bucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Ref BucketName
NotificationConfiguration:
EventBridgeConfiguration:
EventBridgeEnabled: true
S3EventProcessor:
Type: AWS::Serverless::Function
Properties:
FunctionName: S3EventListener
Architectures:
- arm64
Runtime: nodejs14.x
Handler: index.handler
InlineCode: |
exports.handler = (event, context) => {
console.log('event:', JSON.stringify(event));
}
Events:
S3EventBridgeRule:
Type: EventBridgeRule
Properties:
Pattern:
source:
- aws.s3
detail:
bucket:
name:
- !Ref BucketName
|
I have 4 AWS Lambdas that should read an S3 bucket when some file is created (S3 event), but in CloudFormation I can only use 1 Lambda ARN, see inside AWS::S3::Bucket LambdaConfiguration. How can I trigger more than 1 Lambda in the bucket's Lambda configuration?
|
AWS::S3::Bucket LambdaConfiguration in multiple AWS Lambdas
|
5k API keys in a single secret is going to be very unwieldy. Assuming a 40-byte token, you're looking at 2 MB of data - SSM has a max data length for a value of 4096 bytes, unless I'm mistaken.

To me it would make more sense to generate a key with KMS and use that key to encrypt customer API keys before writing them to a DynamoDB table (or even RDS if you so desire). When you need to use a customer API key, fetch it from DynamoDB, decrypt it with the KMS key, and then make use of it.

If you want automatic key rotation, SSM could be used to encrypt the key you use to encrypt the client API tokens. Your token decryption key would remain usable while the wrapping SSM entry would be re-encrypted with a key rotation set by policy.

Finally, as Software Engineer suggested above, there is Vault.
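A rough sketch of that KMS + DynamoDB approach in Python (the table name, key schema and KMS alias below are assumptions you would replace with your own):

import boto3

kms = boto3.client('kms')
table = boto3.resource('dynamodb').Table('customer-api-keys')  # hypothetical table, partition key: customer_id

def store_customer_key(customer_id, api_key):
    # Encrypt the customer's API key with a KMS key before persisting it
    encrypted = kms.encrypt(
        KeyId='alias/customer-api-keys',          # hypothetical key alias
        Plaintext=api_key.encode('utf-8'),
    )['CiphertextBlob']
    table.put_item(Item={'customer_id': customer_id, 'api_key': encrypted})

def fetch_customer_key(customer_id):
    item = table.get_item(Key={'customer_id': customer_id})['Item']
    # boto3 wraps DynamoDB binary attributes; .value is the raw ciphertext bytes
    ciphertext = item['api_key'].value
    # Decrypt just before use; KMS resolves the key from the ciphertext metadata
    return kms.decrypt(CiphertextBlob=ciphertext)['Plaintext'].decode('utf-8')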
|
I'm implementing a service that requires me to call my customers' API using their API keys. My customers will provide me with their API keys in their accounts. When I'm calling my customers' API, I have to retrieve their API key before making the call. Since these are my customers' API keys and I want them to be kept safe, I'm considering keeping all of them in AWS Secrets Manager. I have roughly 5,000 users (still growing) and I plan to store all their keys in a single secret in Secrets Manager. My application makes a few million calls to my customers' APIs a month, and it needs to retrieve the keys at high frequency and concurrency.

However, I'm not sure if this is the kind of use case Secrets Manager is meant for, because their docs sound to me like it was meant just for keeping secret information for the application, and not for customer data like a database would hold. At the same time, storing encrypted keys in the database and having to decrypt them with a KMS key sounds like I may end up with roughly the same cost.

Is Secrets Manager meant for such a use case, storing customers' sensitive information such as API keys? If not, what should I consider in my case?
|
Should I use Secrets Manager for storing customers' API keys?
|
aws s3 cp could trigger a s3:ObjectCreated:Copy if both your src and dst are S3 buckets.

aws s3 sync will:

- run an aws s3 cp when the Comparator determines that the file needs to be uploaded or downloaded. This will trigger a s3:ObjectCreated:Put or s3:ObjectCreated:Copy or s3:ObjectCreated:CompleteMultipartUpload depending on the file size, src and dst.
- run an aws s3 rm when the Comparator determines that the file needs to be removed from the S3 bucket. This will
  - trigger a s3:ObjectRemoved:DeleteMarkerCreated if the status of the S3 bucket versioning is Enabled or Suspended.
  - trigger a s3:ObjectRemoved:Delete if the status of the S3 bucket versioning is Disabled.

Let me know if you have any further questions :)
|
I have Lambda functions that run off of S3 events. I'm using the AWS CLI to move items into S3. I'm not sure what triggers when you perform a 'sync' and a file is actually added by the sync.

I think that s3 cp triggers a "put" event (ObjectCreatedByPut), and if the file is large enough it triggers a "multipart upload" event (ObjectCreatedByCompleteMultipartUpload). I don't believe it triggers a "copy" event, even though cp is in the command.

I don't think s3 sync triggers either of these, but I'm not 100% sure. I've tried reading through their docs but I'm not finding specific answers. I'm trying to pick up each event with a specific Lambda function, so I'm just having trouble with what the sync triggers, if it triggers anything at all. Thanks!
|
What event does AWS s3 CLI "cp" and "sync" trigger?
|
To get instance IDs from describe_instances you have to iterate over Reservations, and then over Instances. Thus, your code could be:

import boto3
import json
from collections import defaultdict
region = 'us-east-1'
def lambda_handler(event, context):
client = boto3.client('ec2')
running_instances = client.describe_instances(
Filters=[
{
'Name': 'tag:orgid',
'Values': [
'demoxx',
]
},
],
)
instance_ids = []
for reservation in running_instances['Reservations']:
for instance in reservation['Instances']:
instance_ids.append(instance['InstanceId'])
return instance_ids
|
I am running the following script in a Lambda function to describe EC2 instances using tags. But in the response I want only the instance ID, whereas it returns a lot of info. Please guide me, or suggest any other way to find out the EC2 instance ID using tags. Thanks. The code is:

import boto3
import json
from collections import defaultdict
region = 'us-east-1'
def lambda_handler(event, context):
client = boto3.client('ec2')
running_instances = client.describe_instances(
Filters=[
{
'Name': 'tag:orgid',
'Values': [
'demoxx',
]
},
],
)
return json.loads(json.dumps(running_instances, default=str))
|
Return only instance ID, in Lambda using describe instance
|
You either specify it twice or use dynamic blocks. An example with dynamic blocks is:

variable "to_tag" {
default = ["instance", "volume"]
}
resource "aws_launch_template" "foo" {
name = "foo"
image_id = data.aws_ami.server.id
instance_type = "t2.micro"
dynamic "tag_specifications" {
for_each = toset(var.to_tag)
content {
resource_type = tag_specifications.key
tags = {
Name = "test"
}
}
}
}

or simply specify it twice:

resource "aws_launch_template" "foo" {
name = "foo"
image_id = data.aws_ami.server.id
instance_type = "t2.micro"
tag_specifications {
resource_type = "instance"
tags = {
Name = "test"
}
}
tag_specifications {
resource_type = "volume"
tags = {
Name = "test"
}
}
}
|
Based on the documentation example given below, I see the tag set for the instance resource type. But if I want the same tag to be applied to multiple resources, how would I set it up? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template

tag_specifications {
resource_type = "instance"
tags = {
Name = "test"
}
}
|
Setup launch template with tag_secifications with multiple resources
|
gzip.open expects a filename or an already opened file object, but you are passing it the downloaded data directly. Try using gzip.decompress instead:

filedata = fileobj['Body'].read()
uncompressed = gzip.decompress(filedata)
|
Hey, I'm trying to read a gzip file from an S3 bucket, and here's my try:

s3client = boto3.client(
's3',
region_name='us-east-1'
)
bucketname = 'wind-obj'
file_to_read = '20190101_0000.gz'
fileobj = s3client.get_object(
Bucket=bucketname,
Key=file_to_read
)
filedata = fileobj['Body'].read()

And now to open the gzip file I'm doing:

gzip.open(filedata,'rb')

but it's throwing me an error:

ValueError: embedded null byte

So I'm trying to decode it first:

contents = filedata.decode('utf-8')

which is throwing another error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

I have tried decoding it using ISO-8859-1, and then it does get decoded, but while opening the gzip file it again gives the same error. Is there any other way I can pull the data from S3, like using a URL or something?
|
Read gzip file from s3 bucket
|
Sadly you can't do this automatically using plain CloudFormation just by having SomeRule1,SomeRule2, because ExcludedRules is not a simple list of strings. It is a list of objects, in the form of:

ExcludedRules:
- Name: SomeRule1
  - Name: SomeRule2

Generation of such a list of objects would require some looping mechanism, which is not supported in CloudFormation. You have to explicitly list all these rules, one by one.

But if you really must automate such a process, you could develop a CloudFormation macro which would give you the ability to loop and construct such structures. Custom resources can also be used to automate such operations.

Both the macro and the custom resource would require you to develop a special Lambda function which would perform the looping based on your SomeRule1,SomeRule2 and construct valid ExcludedRules.
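To give an idea of the looping, the macro's Lambda handler could look roughly like this. This is only a sketch of a snippet-level (Fn::Transform) macro; the parameter name ExcludedRuleNames is made up for the example:

def handler(event, context):
    # 'SomeRule1,SomeRule2' arriving as a macro parameter (assumption: passed via the macro's Params)
    rule_names = event['params']['ExcludedRuleNames'].split(',')

    # Build the list-of-objects shape that ExcludedRules expects
    excluded_rules = [{'Name': name.strip()} for name in rule_names]

    return {
        'requestId': event['requestId'],
        'status': 'success',
        'fragment': excluded_rules,
    }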
|
How do I split a string and use the values for a property? For example, say I have the following string: SomeRule1,SomeRule2. I want to use this string to populate the excludedRules property of AWS::WAFv2::WebACL ManagedRuleGroupStatement. excludedRules is a list of ExcludedRule objects that each contain a single Name property. How can I use the split string values for the Name property?
|
cloudformation - Is it possible to split a string and assign to property in a list?
|
No, you don't need to create a crawler to run a Glue job.

A crawler can read multiple data sources and keep the Glue Catalog up to date. For example, when you have partitioned data in S3, as new partitions (folders) are created, you can schedule a crawler job to read those new S3 partitions and update the metadata in the Glue Catalog/tables. Once the Glue Catalog is updated with metadata, you can easily read the actual data (behind these Glue Catalog tables) using Glue ETL, Athena or other processes.

In your case, you directly want to read S3 files and write them back to S3 in a Glue job, so you don't need a crawler or the Glue Catalog.
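If you do want to go the boto3 route for a quick test, a sketch could look like this (the job name, role, and S3 paths are placeholders you would replace):

import boto3

glue = boto3.client('glue')

# Register the job pointing at your PySpark script in S3 (no crawler or catalog involved)
glue.create_job(
    Name='test-preprocess-job',                              # hypothetical job name
    Role='MyGlueServiceRole',                                # hypothetical IAM role
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://my-bucket/scripts/preprocess.py',
        'PythonVersion': '3',
    },
    GlueVersion='2.0',
)

# Kick it off, passing arguments your script can read via getResolvedOptions
glue.start_job_run(
    JobName='test-preprocess-job',
    Arguments={'--input_path': 's3://my-bucket/input/file.txt',
               '--output_path': 's3://my-bucket/output/'},
)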
|
I'm learning Glue with PySpark by following this page: https://aws-dojo.com/ws8/labs/configure-crawler/. My question is: are a crawler and creating a database in Lake Formation required for creating a Glue job?

I have some issues with my AWS role and I'm not authorised to create resources in Lake Formation, so I'm wondering if I can skip them and only create a Glue job to test my script. For example, I only want to test my PySpark script on one single input .txt file that I store in S3: do I still need a crawler? Can I just use boto3 to create a Glue job to test the script, do some preprocessing, and write the data back to S3?
|
Is crawler required for creating an AWS glue job?
|
Firstly, you should consider whether running the crons on these instances is suitable. If you're trying to keep this highly available and it is directly interacted with by customers, what will the impact of the crons' performance be?

Perhaps consider using a separate autoscaling group or instance with a total of 1 instance to run these crons? You could launch the instance or update the autoscaling group just before the cron needs to run, and then automate the shutdown after it has completed.

Otherwise you would need to consider using a locking mechanism for your script. With this, your script writes a lock to confirm that it is in progress; at the beginning of the script run it checks whether there is any script lock already in progress. To further reduce the chance of a collision between multiple servers, consider adding jitter (random seconds of sleep) to the start of your script. A sketch of this idea is shown after the list below.

Suitable technologies for writing a lock are:

- DynamoDB using strongly consistent reads.
- EFS for a Linux application, or FSx for a Windows application.
- S3 using strong consistency.
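As an illustration of the DynamoDB option, a minimal Python sketch could look like this (the table name and key name are assumptions; the table would need a string partition key called lock_id):

import random
import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('cron-locks')  # hypothetical table name

def acquire_lock(lock_id):
    # Jitter so both instances don't race at exactly the same moment
    time.sleep(random.uniform(0, 5))
    try:
        # Conditional write: only succeeds if no other instance holds this lock
        table.put_item(
            Item={'lock_id': lock_id, 'acquired_at': int(time.time())},
            ConditionExpression='attribute_not_exists(lock_id)',
        )
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return False  # another instance is already running this cron
        raise

if acquire_lock('nightly-report-2021-01-01'):
    print('lock acquired, running job')  # the actual cron work goes here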
|
I have scheduled 2 cron jobs for my application. My application server is in an autoscaling group, and I kept a minimum of 2 instances for high availability. Everything is working fine, but the cron jobs run multiple times because of the 2 instances in the autoscaling group. I cannot limit the instance count to 1 because my application is already in the production environment and I prefer to have HA. How should I limit the cron job to execute on a single instance? Or should I use other services like AWS Lambda or AWS Elastic Beanstalk?
|
How to run cron job only on single instance in AWS AutoScaling?
|
If you are doing this in early 2021, check what version of NodeJS you are using. In 15.6 to 15.8, a bug was introduced that broke the way .zip files are built, which causes this error. The CDK bug is #12536 and the upstream NodeJS bug is #37027.

Reverting back to 15.5 or earlier should fix the problem, and it looks like NodeJS 15.9 may have fixed the issue too, but I haven't confirmed this.
|
I am trying to upload a static webpage onto S3 utilizing the AWS CDK with the S3 and S3 Deployment modules. The issue is that the deployment goes well until I get an error that states that the uploaded file must be a non-empty zip. The documentation indicates that I should be able to use a directory, but I've tried it with a zip as well and the same error persists. Not sure how to proceed.

import * as CDK from "@aws-cdk/core";
import * as S3 from "@aws-cdk/aws-s3";
import * as S3Deployment from "@aws-cdk/aws-s3-deployment";
const path = "../website.zip";
export class WebsiteStack extends CDK.Stack {
constructor(app: CDK.App, id: string, props?: CDK.StackProps) {
super(app, id, props);
const bucket = new S3.Bucket(this, "Files", {
websiteIndexDocument: "index.html",
publicReadAccess: true,
});
new S3Deployment.BucketDeployment(this, "Deployment", {
sources: [S3Deployment.Source.asset(path)],
destinationBucket: bucket,
});
new CDK.CfnOutput(this, "BucketDomain", {
value: bucket.bucketWebsiteDomainName,
});}
}
|
AWS CDK S3 Deployment Error - Uploaded file must be a non-empty zip
|
By default it's not possible. Unlike regular Kinesis, which has a max retention of 7 days, the stream behind DynamoDB has a max retention of 24 hours; messages are discarded once they exceed the max retry attempts and are deleted after 24 hours.

So we need to build an exception-handling process. One such method (sketched below):

- Create an SQS queue with a higher MessageRetentionPeriod (max 14 days) and set a RedrivePolicy maxReceiveCount for the number of times to retry.
- Set up an on-failure destination on the Lambda pointing to that SQS queue.
- The same Lambda can be slightly modified to read either from the stream or from SQS, or a different Lambda can be used to read from SQS.
- Throw the error back from the Lambda when it fails to write to DocumentDB. This sends the record back to the stream/SQS.

This way we can keep records for up to 14 days. We can also add a DLQ on the SQS queue pointing to another SQS queue, which can send leftover messages after 14 days to the DLQ, with a destination of persistent storage.
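As a sketch of wiring the on-failure destination onto the stream's event source mapping with boto3 (the UUID and queue ARN are placeholders, and note that for stream sources the destination receives metadata about the failed batch rather than the full records):

import boto3

lambda_client = boto3.client('lambda')

# Attach an on-failure destination and bounded retries to the DynamoDB stream mapping
lambda_client.update_event_source_mapping(
    UUID='event-source-mapping-uuid',                        # placeholder
    MaximumRetryAttempts=5,
    MaximumRecordAgeInSeconds=21600,                         # give up after 6 hours (example value)
    BisectBatchOnFunctionError=True,
    DestinationConfig={
        'OnFailure': {'Destination': 'arn:aws:sqs:us-east-1:111111111111:docdb-dlq'}
    },
)

The Lambda itself should raise the error when the DocumentDB write fails, so the batch is retried and eventually routed to the destination.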
|
I have a DynamoDB stream and a Lambda trigger for a table. The Lambda trigger basically syncs the DynamoDB table to DocumentDB. What if DocumentDB is down for more than 24 hours? How can I put all the activity (put, delete, update) that happened in DynamoDB back into the stream so that the Lambda trigger can access the records and sync the data to DocumentDB? I see that a DynamoDB stream keeps records for a maximum of 24 hours.
|
AWS DynamoDB streams: Can I keep records for longer than 24 hours?
|
Based on the documentation, consumers of Kinesis read the entire data from the stream.
Thus consumers are responsible for committing their position and picking up from where they left off.
https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html
|
I am searching the documentation of Kinesis Data Streams but I can't find a clear statement like "Kinesis guarantees at-least-once delivery".

From the producer side I expect that a sent message gets propagated to more than one node (something like Kafka's ack=all).
From the consumer side I am expecting something equivalent to Kafka's commit offset on successful processing by the consumer, or something like Google Cloud Pub/Sub's message acknowledgement.

Is there a message submission guarantee for Kinesis?
Is there a processing guarantee for Kinesis (mark a message as read only if it was processed and acknowledged as processed)?
|
Does Kinesis guarantee delivery?
|
Not 100% sure if this will solve all your issues, but as a first step at least you probably want to enable 'Raw Message Delivery': https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html

This way, the messages that SNS puts in the queue won't have any additional properties added, and thus should match messages that are put in directly.
|
I have an SNS topic t1 that is subscribed to by an SQS queue q1. I have a NodeJS process p1 that publishes to t1 and a process p2 that consumes from q1. I also have a process p3 that writes to q1 directly. Suppose data is populated when p2 reads from the queue. Then, while the following snippet works with p1 as the writer to the queue, I get a JSON parse error with p3.

for (var i = 0; i < data.Messages.length; i++) {
var message = data.Messages[i];
let messageBody = JSON.parse(message.Body)
    let payload = JSON.parse(messageBody.Message)

The only way I could get processing of data from p3 working is to not have JSON.parse(messageBody.Message) and process messageBody directly. Thus, it seems that the message structure as received in a queue is different depending on whether the writer was an SNS topic subscription or an SQS writer. Can you please advise if I can have a single NodeJS application that can process data from SQS regardless of what wrote into that queue?
|
Message structure in AWS SQS and SNS
|
I think the only way this would be possible would be to get the messages using ReceiveMessage and then count the number of messages in code. I don't think you can "inspect" a message group as such. I believe you can only pull 10 at a time as well, so there is that to consider.

GetQueueAttributes will give you details on the queue, not the messages or groups within it.
|
I'm trying to determine if it's possible to use the AmazonSQSClient to get the number of messages in an AWS FIFO queue group (messages with a specific MessageGroupId).

I have already looked at some docs: https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SQS/TSQSClient.html

The docs hint that I might be able to do what I want with the GetQueueAttributes method... although it isn't clear. I want to do this within a Lambda function. Is this possible?
|
AmazonSQSClient get number of messages in FIFO queue group
|
Yes, you can create a CloudFormation stack from existing resources. These are the steps to do it:

1. Open the AWS CloudFormation console.
2. On the Stacks page, choose Create stack, and then choose With existing resources (import resources).
3. Read the Import overview page for a list of things you're required to provide during this operation.

For more detailed help, this AWS User Guide and blog will help you achieve it easily:
Refer Existing Resources into a CloudFormation Stack
Creating a stack from existing resources
|
What are the best ways to refer to existing resources (Lambda, IAM role, S3 bucket, etc.) when these were created manually from the console page? Is it using:

- Import existing resources
- Copying Parameters
- Custom Resources
- or any other options?

Thanks in advance
|
CloudFormation : How to refer to existing AWS Resources
|
It appears what you want here is the ARN of the S3 bucket, which is provided by exported resource attributes. Specifically, you probably want the arn resource attribute. Updating your policy like:

144: "${aws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket.arn}",
146: "${aws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket.arn}/*",will provide you with the String that you need by accessing thearnattribute. The currently written policy is accessingaws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket, which is a Map (possibly Object) of every argument and attribute for that resource, and will not interpolate within the string of your policy.
|
I'm trying to use Terraform to create a model on SageMaker by following this page. I can't assign a full-access policy to the SageMaker role due to permission constraints, so I created a role and attached a policy with part of the permissions. When I ran terraform plan, it gave me this:

Error: Invalid template interpolation value
.............................
141: "ecr:GetRepositoryPolicy"
142: ],
143: "Resource": [
144: "arn:aws:s3:::${aws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket}",
145: "arn:aws:s3:::${local.binaries_bucket_name}",
146: "arn:aws:s3:::${aws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket}/*",
147: "arn:aws:s3:::${local.binaries_bucket_name}/*",
148: "arn:aws:ecr:us-east-1:*:repository/*",
149.....................
157: }
158: ]
159: }
160: POLICY
|----------------
| aws_s3_bucket.xx_xxxxxxxxxx_xxx_bucket is object with 25 attributes
Cannot include the given value in a string template: string required.

I'm new to this, and just wondering if this is complaining that the bucket name is too long, or something else? What should I do to fix this? I'm a bit confused. Many thanks. (PS: Terraform version v0.13.4, provider registry.terraform.io/hashicorp/aws v3.20.0)
|
Terraform - Cannot include the given value in a string template: string required
|
The currently used value:

value = ["${aws_subnet.private.*.id}"]

produces a list of lists. For example:

[
[
"subnet-0f5b759e80ffcf305",
"subnet-0500c8c2a40e5b381",
],
]

If you want to keep using this in that form, then later, when you use element, you have to do the following:

subnet_id = element(module.network.private_subnets[0], 3)

Alternatively, redefine private_subnets to be:

value = aws_subnet.private.*.id
|
I am running into an issue while trying to upgrade from Terraform 11 to Terraform 12. I was previously using the following syntax to retrieve the 3rd element from a list of IDs from a module. The module output is like so:

# Subnets
output "private_subnets" {
description = "List of IDs of private subnets"
value = ["${aws_subnet.private.*.id}"]
}

Previously, this worked with Terraform 11:

subnet_id = "${element(module.network.private_subnets,3)}"

I thought that I could use the index of 2 to get the same results, but I get the following error:

Error: Incorrect attribute value type
on terraformfile.tf line 65, in resource "aws_instance" "myinstance":
65: subnet_id = module.network.private_subnets[2]
|----------------
| module.network.private_subnets[2] is tuple with 3 elements

Any help with this would be greatly appreciated.
|
Error: Incorrect attribute value type module.network.private_subnets[0] is tuple with 3 elements
|
If the bucket name is hard-coded like the example you pasted above, you can always externalize it to the CDK context file. As you've seen, when you access the bucket name from the Bucket construct, it creates a reference to it; that way, if you need it in another resource, CloudFormation will depend on the value from the Bucket resource by using its Ref/GetAtt capabilities, and it is guaranteed that the bucket actually exists before it is used downstream.

If you don't care about that and just want the actual bucket name in the CDK app code, then put the value in the cdk context JSON file and use node.try_get_context to retrieve it wherever you need it.
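For example (a Python CDK sketch; the context key bucket_name is just an assumed name you would define yourself in cdk.json):

# cdk.json (excerpt):  "context": { "bucket_name": "my-bucket-name" }

from aws_cdk import core
import aws_cdk.aws_s3 as s3

class WebsiteStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Plain string straight from the context file -- no Token involved
        bucket_name = self.node.try_get_context('bucket_name')
        bucket = s3.Bucket(self, 'my-bucket', bucket_name=bucket_name)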
|
I've created an S3 bucket for hosting my website. For that I've used the below code from the AWS CDK for Python docs:

self.bucket = s3.Bucket(
self,
"my-bucket-name",
bucket_name="my-bucket-name",
removal_policy=core.RemovalPolicy.DESTROY,
website_index_document="index.html",
public_read_access=True
)

For a reason, I want to send this bucket object as an argument to another object and get the bucket name from the argument. So, I've tried:

self.bucket.bucket_name
self.bucket.bucket_arn

Nothing seems to work; instead the object returns ${Token[TOKEN.189]}. Could anyone guide me through this?
|
How to get bucket name from Bucket object in AWS CDK for python
|
With Lambda you can write files in the /tmp folder, but not in other locations. It sounds like you are including the SQLite DB in the zip you are uploading to Lambda, and if so, you won't be able to change it because of the folder it will be in.

You could first copy the SearchResultData.db file to /tmp if it doesn't already exist there. Then you can connect to the DB that is in the /tmp directory and write to it. But if you are expecting that database to be shared by other Lambda invocations, don't count on that. If you need a persistent database, you should create one outside of Lambda and connect to it. That said, this article says "Each execution environment provides 512 MB of disk space in the /tmp
directory. The directory content remains when the execution
environment is frozen, providing a transient cache that can be used
for multiple invocations. You can add extra code to check if the cache
has the data that you stored. For more information on deployment size
limits, see AWS Lambda quotas."Check out this for ideas:http://faculty.washington.edu/wlloyd/courses/tcss562/tutorials/TCSS562_f2019_tutorial_6.pdfhttps://aws.amazon.com/blogs/aws/new-a-shared-file-system-for-your-lambda-functions/
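A minimal sketch of the copy-to-/tmp idea, assuming the .db file is packaged next to the handler (the table columns are placeholders):

import os
import shutil
import sqlite3

DB_SOURCE = os.path.join(os.path.dirname(__file__), 'SearchResultData.db')  # read-only copy inside the package
DB_TMP = '/tmp/SearchResultData.db'                                          # writable location

def lambda_handler(event, context):
    # Copy the packaged database into /tmp once per execution environment
    if not os.path.exists(DB_TMP):
        shutil.copyfile(DB_SOURCE, DB_TMP)

    conn = sqlite3.connect(DB_TMP)
    cur = conn.cursor()
    cur.execute('DROP TABLE IF EXISTS AuData1')
    cur.execute('CREATE TABLE AuData1 (id INTEGER PRIMARY KEY)')  # real columns go here
    conn.commit()
    conn.close()

Keep in mind that changes made in /tmp are not persisted across execution environments.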
|
I am uploading a .zip file to AWS Lambda which contains a .py file. After uploading I am getting an error like this:

attempt to write a readonly database
[ERROR] OperationalError: table "AuData1" already exists
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 15, in lambda_handler
    cur.execute('''
END RequestId: f5539447-a6d8-47ed-b415-5e2971923357

The code for the .py file is:

def lambda_handler(event,context):
try:
conn = sqlite3.connect('SearchResultData.db')
cur = conn.cursor()
cur.execute('DROP TABLE IF EXISTS AuData1')
except Exception as e:
print(e)
cur.execute('''
CREATE TABLE "AuData1" (
    .......

So basically the SQLite database present in the folder contains this table 'AuData1', but I want to drop it and create a new one. However, the database is not writable, as it says "readonly database". Please help if any solution is available. Thank you in advance.
|
AWS Lambda with python sqlite3
|
Your website is showing the "Nginx server installed" page, which means you have correctly pointed your GoDaddy domain to your AWS EC2 IP, and you don't need to do anything more with GoDaddy.

So now let's look at Nginx on your AWS EC2 instance. Currently it is showing the "Nginx server installed" page, which is the default index.html file located in the /usr/share/nginx/html folder. This gets added automatically when you install the Nginx server; a successful Nginx installation shows this default page.

To show your own website page, you need to add 2 things in the server block of the nginx.conf file:

1) assign your domain name to the `server_name` directive.
2) assign the folder/path of your website directory (the root directory containing your index file) to `root`, like below:

server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
root /usr/share/nginx/html/example/;
...
}
|
I deployed my Django website on an EC2 Ubuntu instance. I have associated an Elastic IP with my EC2 instance, and it works fine: when I hit the IP it shows my website. I configured Route 53 and mapped it to my GoDaddy domain name. Now, when I use my domain name, it shows the Nginx default page instead of my website. However, with the IP it works fine.
|
Route53 domain name not working (godaddy domain)
|
You can use regexp_extract():

select regexp_extract(city, '[^ ,]+$')

This returns the trailing run of characters in city that contains no spaces or commas -- the last element you are looking for.
|
I have non-normalised fields in a NoSQL JSON extract queried from Athena. I would like to get the last value of my field. Field examples:

Raleigh, NC, USA
Frankfurt, Germany

Ideally I would like something like this to select the last element:

SPLIT_PART(city, ',', last_element) AS country

I don't know if I'm using the right function to perform this. Bonus: how do I select a field with a name like value.from without raising a SQL error? :)
|
Athena SPLIT_PART last element
|
AWS won't create certificates unless it can validate that you own the domain for which you want the certificate. It has two ways to validate that you own the domain: through DNS or through email.

If using email validation to create a certificate for my.domain.com, an email will be sent to admin, administrator, hostmaster, postmaster, and webmaster, all @my.domain.com, or you can also specify a super-domain so it will send to domain.com instead.

If using DNS validation, AWS will generate a specific record to add as a subdomain of my.domain.com to prove you own it. If using a Route53 HostedZone for this domain, you can specify the zone and it is all seamless. If not, you can still do DNS validation, but you have to manually (or with a custom resource) create the record that AWS needs, or the stack will never finish updating.

const domainName = StringParameter.valueForStringParameter(this, `/musical/${branch}/PaymentDomain`)
const cert = new Certificate(this, 'Certificate', {
domainName: domainName,
validation: CertificateValidation.fromDns(hostedZone), // Seamless
validation: CertificateValidation.fromDns(), // You need to create DNS validation record manually
validation: CertificateValidation.fromEmail() // Click link from email
})

Pick one of those validation methods and include it when creating the certificate.
|
I'm requesting a domain certificate from an AWS CDK stack.
For the domain name, I use a value from AWS Parameter Store like:const domainName = StringParameter.valueForStringParameter(this, `/musical/${branch}/PaymentDomain`)
const cert = new Certificate(this, 'Certificate', {
domainName: domainName,
})

When I try to run cdk synth or cdk deploy I get an error:

When using Tokens for domain names, 'validationDomains' needs to be supplied

I tried to add validationDomains to the certificate request, but not only is this property deprecated, I also don't know how to specify the validation domain. It has a TypeScript type that I don't understand:

CertificateProps.validationDomains?: {
[domainName: string]: string;
}

Since validationDomains is deprecated, the docs say I should use validation, but when I try to use validation, the docs say that should be used for the validation method (DNS or email). How do I create a correct certificate request when I use a token for a domain name?
|
How to specify a validation domain for AWS CDK when using tokens for domain names
|
I was looking into this myself, and from what I understand it is currently not possible to directly create this rule, but I think it should be doable with a different approach.

Instead of requiring a custom rule that disables merging (which doesn't exist today), you could make it so that the PR requires review from a specific IAM user. With that, you could probably use a fixed "build" user and fire an automatic approval request for the PR once the build finishes successfully. This will in turn "approve" that rule in the PR and allow it to be merged after the build succeeds.

Since approval can be done via the CLI, I'm sure it should also be possible via the API. For example, you could use this API to automatically mark any given PR as approved by the calling user, then ensure the service that is calling it is the same user registered in the "build" approval rule template.

Besides the HTTP WebApi, there are also other ways to call into these CodeCommit actions, like the AWS SDK library (C# example: https://www.nuget.org/packages/AWSSDK.CodeCommit/).
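For example, with boto3 the approval call could look roughly like this (a sketch assuming the code runs under the IAM user referenced by your approval rule; the function name is made up):

import boto3

codecommit = boto3.client('codecommit')

def approve_pull_request(pull_request_id):
    # The approval must reference the PR's latest revision
    pr = codecommit.get_pull_request(pullRequestId=pull_request_id)['pullRequest']
    codecommit.update_pull_request_approval_state(
        pullRequestId=pull_request_id,
        revisionId=pr['revisionId'],
        approvalState='APPROVE',
    )

This could be called from the Lambda or build step that runs when CodeBuild reports success.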
|
I'm using an AWS Lambda function to kick off a build in AWS CodeBuild when a Pull Request is created or updated in AWS CodeCommit, which is working well.

However, I'd like to be able to prevent the merging of that Pull Request into the master branch of the repository until the latest build for that PR has completed successfully.

Does anyone know if there's a way that can be done in AWS? E.g. so that the Merge button is disabled or not available, like when not enough approvers have been obtained?
|
AWS CodeCommit prevent merge until successful build
|
CodePipeline (CP) does not have a built-in mechanism for rollbacks. Thus in your case I see three options (the first is sketched below):

- If the target S3 bucket is versioned, you can roll back "manually" by deleting the latest version of each object. This way you effectively move back to the previously deployed version of your application.
- You can roll back in Bitbucket the same way you would revert the last PR or commit. The change in Bitbucket should trigger your CP to do a new deployment, but of the old version from the git repository.
- The third option could involve your CodeBuild doing a backup of the currently deployed files in the bucket while building the new version of the app. This way each run of CP would also create a backup of the existing version in another bucket. The rollback would then be as simple as copying files from one bucket to the other.
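A rough sketch of the first option with boto3, assuming versioning is enabled on the bucket (deleting a specific version permanently removes it, so use with care):

import boto3

s3 = boto3.client('s3')

def rollback_latest_versions(bucket):
    # Deleting the newest version of every object exposes the previous deployment again
    paginator = s3.get_paginator('list_object_versions')
    for page in paginator.paginate(Bucket=bucket):
        for version in page.get('Versions', []):
            if version['IsLatest']:
                s3.delete_object(Bucket=bucket,
                                 Key=version['Key'],
                                 VersionId=version['VersionId'])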
|
I have a CodePipeline in AWS which performs CI/CD for my React app and deploys it to an S3 bucket. Now I am curious how I can achieve a rollback in this flow. My current pipeline flow is: Bitbucket (repo) -> CodeBuild (to build the app for static hosting) -> deploy action (with action provider S3). In case anything goes wrong, how can I achieve a rollback in this CI/CD pipeline?
|
Rollback integration with aws code pipeline which is deploying a React app on S3
|
What was the need for SAM, if CF was already doing great things as IaC (Infrastructure as Code) for AWS Cloud?

It simplifies development which involves Lambda and API Gateway - a very popular combination. Doing the same things in pure CFN would require extra steps (e.g. manual setup of integration methods), which many don't want, or don't need, to know how to do. Also, SAM has a custom command line tool, which helps you run and test your Lambda + API Gateway locally and provides a number of test events not available through CFN, and it hides complexities associated with deployments of your functions through CodeDeploy. You can't do this easily with just CFN.

Why would somebody prefer SAM instead of CF?

The ability to easily test things locally, combined with the streamlined integration with CodeDeploy, is very useful. So it's good for people who want to put more focus on writing code for their applications, rather than spending much time on setting up everything from scratch, which is more of a DevOps job.

And finally, can I use the SAM resource syntax in CF or vice-versa, for instance, can I declare a Lambda using the following syntax in a normal CF template or vice-versa?

SAM templates can contain CFN resources, but not the other way around. The Resources section in SAM:

This section is similar to the Resources section of AWS CloudFormation templates. In AWS SAM templates, this section can contain AWS SAM resources in addition to AWS CloudFormation resources.
|
Recently, I have started learning AWS CloudFormation (CF) and the AWS Serverless Application Model (SAM). I found that there are differences when it comes to syntax in their template files. For instance, to create a Lambda resource in SAM, we would declare something like this:

Resources:
HelloLambda:
    Type: AWS::Serverless::Function

Whereas in CF, we declare it like this:

Resources:
HelloLambda:
    Type: AWS::Lambda::Function

Not just that: there are a few attributes/properties for Lambda that differ in SAM compared to CF. I'm still not able to get my head around it and am confused. I have a few queries and would really appreciate it if you could clear my doubts:

1. What was the need for SAM, if CF was already doing great things as IaC (Infrastructure as Code) for AWS Cloud?
2. Why would somebody prefer SAM instead of CF?
3. And finally, can I use the SAM resource syntax in CF or vice-versa? For instance, can I declare a Lambda using the following syntax in a normal CF template, or vice-versa:

Resources:
HelloLambda:
    Type: AWS::Serverless::Function

Cheers,
|
Can I use the AWS Cloud Formation resource syntax into SAM template or vice-versa?
|
At an initial glance, without getting deep into both, it can cause confusion, but I'll try to break them down below.

AWS Billing and Cost Management provides a summarised view of spending, i.e. what you have spent so far this month and the predicted end-of-month bill. This is quite static and gives you a high-level overview of spending. In addition, you can configure your billing details from here. All of these features are free to use, with no charge for accessing the interface.

AWS Cost Explorer, on the other hand, is a paid service ($0.01 per query). Using Cost Explorer you can dig down into the finer details of expenditure, such as at a region, service, usage-type or even tag-based level. Using this you can identify costs by targeting your query to be specific enough to pinpoint these charges. Additionally, you can make use of hourly billing data to get the most accurate, up-to-date picture.
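If it helps, the kind of breakdown Cost Explorer gives you can also be pulled programmatically via its API, for example (the dates below are placeholders, and each API call is billed):

import boto3

ce = boto3.client('ce')

# Break one month's spend down by service
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2021-02-01', 'End': '2021-03-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],
)

for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])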
|
Hi StackOverflow community, I need your help in understanding the difference between AWS Billing and Cost Management and AWS Cost Explorer. I am not getting the difference. Thank you very much in advance for your help. Regards
|
What is the difference between AWS Billing and Cost Management and AWS cost explorer?
|
resource_already_exists_exception is the new name of this error. It used to be index_already_exists_exception and was renamed in version 6.0, as you can see in PR #21494. That change was made to avoid having a different exception for each resource type (index, alias, etc.). So what you get is perfectly OK, given that the rides_order_266 index already exists.
|
I'm mapping the index rides_order_266.
Elasticsearch is throwing the exception resource_already_exists_exception. After reading the exception message, it looks like the index rides_order_266 already exists, but if that is the case, shouldn't Elasticsearch throw the exception index_already_exists_exception? I am confused about whether I am right or wrong. Can someone explain the exception message?

Elasticsearch version: 6.4.2

[resource_already_exists_exception] index [rides_order_266/aGTcXrUrTAOV12qxEHl9tQ] already exists, with { index_uuid=\"aGTcXrUrTAOV12qxEHl9tQ\" & index=\"rides_order_266\" }","path":"/rides_order_266","query":{},"body":"{\"settings\":{\"index\":{\"mapping.total_fields.limit\":70000,\"number_of_shards\":1,\"number_of_replicas\":0,\"refresh_interval\":\"1s\"}}
|
Elasticsearch throwing resource_already_exists_exception
|
I think your problem is that the configuration you created is set to a Redshift connection. It expects network communication that is different from a MySQL connection. Can you try to create a MySQL connection instead?
|
I have the TablePlus app, and I created an Elastic Beanstalk environment, deployed my project and connected to the database, and everything is good and cool! I need to connect to the database (MySQL) to import some data into the AWS database, so I do these steps: open a new workspace in TablePlus, take the endpoint and username of the database, plus the password and the name of the database, like so: press the Test button, and after waiting some time I got this error. I also changed the port to 5432 and got the same first error. I changed the port to 3306 and got this error. Where is the problem?
|
Connect to RDS eb2 by tableplus?
|
With PySpark on EMR, EMR_CLUSTER_ID and EMR_STEP_ID are available as environment variables (confirmed on emr-5.30.1). They can be used in code as follows:

import os
emr_cluster_id = os.environ.get('EMR_CLUSTER_ID')
emr_step_id = os.environ.get('EMR_STEP_ID')

I can't test it, but the following similar code should work in Scala:

val emr_cluster_id = sys.env.get("EMR_CLUSTER_ID")
val emr_step_id = sys.env.get("EMR_STEP_ID")

Since sys.env is simply a Map[String, String], its get method returns an Option[String], which doesn't fail if these environment variables don't exist. If you want to raise an exception instead, you can use sys.env("EMR_x_ID").

The EMR_CLUSTER_ID and EMR_STEP_ID variables are visible in the Spark History Server UI under the Environment tab, along with other variables that may be of interest.
|
Scenario:I am running the Spark Scala job in AWS EMR. Now my job dumps some metadata unique to that application. Now for dumping I am writing at location "s3://bucket/key/<APPLICATION_ID>" Where ApplicationId isval APPLICATION_ID: String = getSparkSession.sparkContext.getConf.getAppIdNow basically is there a way to write at s3 location something like "s3://bucket/key/<emr_cluster_id>_<emr_step_id>".
How can i get the cluster id and step id from inside the spark Scala application.Writing in this way will help me debug and help me in reaching the cluster based and debug the logs.Is there any way other than reading the "/mnt/var/lib/info/job-flow.json" ?PS: I am new to spark, scala and emr . Apologies in advance if this is an obvious query.
|
How to get AWS EMR cluster id and step id from inside the spark application step submitted
|
This one is a bit tricky due to the JSON. Also, I would use templatefile instead of template_file, as you can pass lists into it.

variable "emails_addresses" {
default = ["[email protected]", "[email protected]"]
}
variable "sns_arn" {
default = "arn:aws:sns:us-east-1:xxxxxx:xxxx"
}
variable "protocol" {
default = "email"
}
output "test" {
value = templatefile("./email-sns-stack.json.tpl", {
emails_addresses = var.emails_addresses,
sns_arn = var.sns_arn,
protocol = var.protocol
})
}

where email-sns-stack.json.tpl is:

{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": ${jsonencode(
{for email_address in emails_addresses:
split("@",email_address)[0] => {
Type = "AWS::SNS::Subscription"
Properties = {
"Endpoint" = email_address
"Protocol" = protocol
"TopicArn" = sns_arn
}
}})}
}

The output, after pretty JSON formatting for readability:

{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"sample-1": {
"Properties": {
"Endpoint": "[email protected]",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-east-1:xxxxxx:xxxx"
},
"Type": "AWS::SNS::Subscription"
},
"sample-2": {
"Properties": {
"Endpoint": "[email protected]",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-east-1:xxxxxx:xxxx"
},
"Type": "AWS::SNS::Subscription"
}
}
}
|
I want to create a CFT using the Terraform template_file by looping over a list variable (email_addresses).
Below are the variables and the template I am trying to generate. Variables:
emails_addresses = ["[email protected]", "[email protected]"]
sns_arn = "arn:aws:sns:us-east-1:xxxxxx:xxxx"
protocol = "email"Expecting template:{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"sample-1": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Endpoint": "[email protected]",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-east-1:xxxx:xxxxx"
}
},
"sample-2": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Endpoint": "[email protected]",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-east-1:xxx:xxxx"
}
}
}
}

The resource name in the CFT can be some random string, but it should stay the same per email across multiple plans/applies.
|
How to create a template using terraform loop
|
This web page shows how to implement multiple chat rooms with AWS API Gateway by storing the room ID together with the connection ID in DynamoDB. Apologies that it is written in Japanese, but I guess the code itself explains enough. A rough sketch of the idea is below.
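The sketch below assumes a DynamoDB table keyed by roomId/connectionId and a body containing roomId and message; these names are illustrative, not part of any AWS-defined schema:

import json
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('chat-connections')  # hypothetical table: roomId (PK), connectionId (SK)

def send_message_handler(event, context):
    body = json.loads(event['body'])
    room_id = body['roomId']

    # Callback endpoint for this WebSocket API
    domain = event['requestContext']['domainName']
    stage = event['requestContext']['stage']
    gateway = boto3.client('apigatewaymanagementapi',
                           endpoint_url=f'https://{domain}/{stage}')

    # Fan the message out only to connections registered for this room
    connections = table.query(
        KeyConditionExpression=Key('roomId').eq(room_id)
    )['Items']
    for item in connections:
        gateway.post_to_connection(ConnectionId=item['connectionId'],
                                   Data=body['message'].encode('utf-8'))
    return {'statusCode': 200}

The $connect handler would insert the roomId/connectionId pair (e.g. from a query-string parameter), and the $disconnect handler would delete it.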
|
https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/

I understood how this can be accomplished with a single chat 'room': all of the messages would go through the 'sendMessage' route key in the API Gateway. How would this be done if I had multiple chat 'rooms' with different sets of users?
|
How to implement multiple chat rooms with AWS API Gateway websockets?
|
Using reduce, it's possible to accomplish this behavior:

from functools import reduce
from boto3.dynamodb.conditions import Key, And
FilterExpression=reduce(And, ([Key(k).eq(v) for k, v in criteria.items()]))

Hope this works for you!
|
I'm looking for a way to create a scan request in DynamoDB with multiple FilterExpression conditions "ANDed" together. For example, we could scan a "fruit" database using these criteria:

criteria = {
'fruit': 'apple',
'color': 'green',
'taste': 'sweet'
}

I understand these could be concatenated into a string like so:

FilterExpression = ' AND '.join([f"{k}=:{k}" for k, v in criteria.items()])
ExpressionAttributeValues = {f":{k}": {'S': v} for k, v in criteria.items()}

However, this does not seem like the most elegant / Pythonic approach.
|
Dynamodb and Boto3, Chain Multiple Conditions in Scan
|
All Amazon S3 buckets are private by default. Content is not accessible unless access is permitted with a bucket policy or an object ACL. Buckets are not publicly readable by default.
|
How would you create a private S3 bucket from the AWS CLI? My command is:

aws s3api create-bucket --bucket my-bucket --region eu-west-2 --create-bucket-configuration LocationConstraint=eu-west-2 --acl private

But on bucket creation, public read appears to be enabled. What I expect to see under Access is "Bucket and objects not public", not "Objects can be public".
|
AWS s3 private bucket creation
|
With Cognito, there will always be a Username field that is considered separate from the rest of the attributes. There are also phone_number and email attributes, which are considered separate from the username. So if you want a user to sign up via email and/or phone number, you will submit the email/phone number as the username, AND you'll also submit the email and/or phone number in their respective fields if you want to use them for verification or account-recovery options.
|
I'm new to AWS and had a question about adding a user through the Amplify portal. I have the settings for email or phone number, but when I try to create a user, it requires a username. I would rather have a user input an email in place of a username, but I'm not exactly sure of the dynamics of AWS Amplify.

AWS Cognito Settings:
AWS Cognito Create User (Online Portal):
|
AWS Amplify + Cognito: Create account without username/Use email instead of username?
|
Yes, you can do this with the help of aws_s3_bucket_objects. Specifically, first you call it with the object key of interest:

data "aws_s3_bucket_objects" "my_object" {
bucket = my_bucket_name
prefix = "path/to/file.txt"
}

If the object exists, the keys attribute will have 1 element. Subsequently, you can conditionally execute aws_s3_bucket_object as follows:

data "aws_s3_bucket_object" "deployed_builds_s3" {
count = length(data.aws_s3_bucket_objects.my_object.keys)
  bucket = data.aws_s3_bucket_objects.my_object.bucket
  key    = data.aws_s3_bucket_objects.my_object.keys[0]
}

The above will execute only if the number of keys found is greater than 0. This is enabled through the count meta-argument.
|
I would like to safely load data from an s3_bucket_object, meaning that if the S3 object doesn't exist, a default value is provided instead. Is there a way to do that? If I specify a non-existing key, I get a failure:

data "aws_s3_bucket_object" "deployed_builds_s3" {
bucket = my_bucket_name
key = "path/to/file.txt"
}

Error:

Error: Failed getting S3 object: NotFound: Not Found

I know it's possible to do this with local files, for example:

locals {
file_content = fileexists("file.txt") ? file("file.txt") : ""
}

Is there something similar for S3 objects?
|
Fail safe reading s3_bucket_object with terraform when object doesn't exist
|
I stumbled upon this question in the Amplify GitHub issues about the AWSJSON scalar having a similar issue. The issue seems to be related to conflict resolution in the API.

I don't remember setting that up, but I tried amplify update api, chose my GraphQL API and the advanced options, declined any conflict resolution, and ran amplify push.

Now values are saving as expected. When I remove an item from an array it's removed from the backend, and vice versa.
|
Preface: I'm new to GraphQL and Amplify.

I have a user object that contains two arrays, one called preferred_genres and another called preferred_characters. I have a profile page where the user can edit their profile and add items to these arrays.

The update mutation provided by AWS Amplify seems to only add elements to the array, never remove them. If in my UI I remove an item from the list and then submit the update mutation, the item is not removed from my backend. If I add an item to the array, the item is added on the backend, but I also end up with duplicate values for those already in the backend.

What I'm trying to do is overwrite the array on the backend with the data I'm submitting from the front end. Am I missing something obvious? Is there a more appropriate way of achieving this in GraphQL?

Schema:

type User @model {
id: ID!
...
preferred_characters: [Character]
preferred_genres: [Genre]
...
}
|
How to avoid creating duplicate items in an array when posting an AWS Amplify GraphQL mutation
|
There was a load balancer attached to the EC2 instance. I logged on to the AWS console, manually removed the load balancer, and ran terraform destroy again. It was destroyed successfully.
|
export TF_WARN_OUTPUT_ERRORS=1
terraform destroy

Error: Error applying plan:

2 error(s) occurred:

module.dev_vpc.aws_internet_gateway.eks_vpc_ig_gw (destroy): 1 error(s) occurred:
aws_internet_gateway.eks_vpc_ig_gw: Error waiting for internet gateway (0980f3434343410c209) to detach: timeout while waiting for state to become 'detached' (last state: 'detaching', timeout: 15m0s)

module.dev_vpc.aws_subnet.production_public_subnets[1] (destroy): 1 error(s) occurred:
aws_subnet.production_public_subnets.1: error deleting subnet (subnet-04ad0a3a0171c861c): timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)
|
Error waiting for internet gateway (igw-0980f3434343410c209) to detach: timeout while waiting for state to become 'detached'
|
You should add the below to your code when you reference a rule group.
The OverrideAction field is needed even if you don't want to override anything; you specify "None" as per the docs:

OverrideAction:
None: {}
|
I wrote this CloudFormation template and it keeps giving me this error:

Error reason: A reference in your rule statement is not valid., field: RULE, parameter: Statement (Service: Wafv2, Status Code: 400, Request ID: 8f88058f-556e-4fec-baf2-9a84d0353bbe, Extended Request ID: null)

Has anyone seen this error before? Thank you.

{
"Name": "Rule",
"Priority": 1,
"Action": {
"Block": {}
},
"VisibilityConfig": {
"SampledRequestsEnabled": true,
"CloudWatchMetricsEnabled": true,
"MetricName": "customrule"
},
"Statement": {
"RuleGroupReferenceStatement": {
"Arn": { "Fn::GetAtt" : [ "TestRuleGroup", "Arn" ]
}
}
}
}
|
wafv2 webacl cloudformation gets error when I tried to attach the rulegroup I created
|
When your Lambda receives messages from MyQueue, the messages go into invisibility mode, where they are not visible to others who also read the same queue.

Normally, when your function successfully processes the message, the Lambda service will automatically remove the message from the queue. However, if this does not happen, the message remains invisible for the remainder of the visibility timeout. Then, when it becomes visible again, the Lambda service may re-try processing the same message. Once the re-tries have been exhausted, the message goes to the DLQ.

More about this is here:

If a message fails to be processed multiple times, Amazon SQS can send it to a dead-letter queue. When your function returns an error, Lambda leaves it in the queue.
|
When using SQS with a dead-letter queue and Lambda, why do messages remain "in flight" for 5 minutes after the Lambda fails with a runtime exception?

I've created 3 resources:

MyQueue (configured to send undeliverable messages to MyQueueDLQ; default visibility timeout: 30 sec)
MyQueueDLQ
Lambda (retry attempts set to 0, timeout 30 seconds)

I for some reason expect (perhaps because of a lack of understanding) that upon my Lambda failing, the dead-letter queue would receive the message shortly after the failure, instead of minutes after.

How exactly can I ensure the dead-letter queue gets the message in the fastest way possible, so that anything responding to the dead-letter queue messages doesn't wait minutes unnecessarily?

Note: I am intentionally throwing a runtime exception in the Lambda to test this so that I understand how this all works. My goal is to ensure that messages go to the dead-letter queue as fast as possible. Is 5 minutes the best I can do?

Update 1: I've set the timeout on the Lambda to 5 seconds and the timeout on the queue to 25 seconds; now it takes about 1 minute and 40 seconds for the message to arrive on the DLQ, which still doesn't match my expectations. Shouldn't the message arrive on the DLQ in 25 seconds?

Update 2: Today I discovered a little info icon on the AWS Explorer SQS queue in the bottom window. This may very well describe what I am seeing.
|
When using SQS w/ Dead letter and Lambda , why do messages remain "in flight" for 5 min after lambda fails w/ runtime exception?
|
MediaInfo supports AWS natively, without having to download the file in a first step. MediaInfo downloads into RAM only what it needs for the analysis, and issues the seek requests itself when needed.

The URL style is https://AWSKey:[email protected]

Using pre-signed URLs is also possible, but the 20.03 version is buggy; you need to use MediaInfo snapshots.

Jérôme, developer of MediaInfo.
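If you go the pre-signed URL route, a small boto3 sketch for generating one to hand to MediaInfo might look like this (bucket and key are hypothetical):

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; the resulting URL can be passed straight to
# MediaInfo so the object never has to be downloaded to disk first.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-video-bucket", "Key": "videos/input.mp4"},
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)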
|
I have a video stored in an AWS S3 bucket, and I want to get the metadata of the video (like framerate, resolution, etc.) inside an AWS Lambda that uses the Node.js runtime. It would be better if this could be done in memory instead of downloading the whole video into the Lambda's temp storage.
|
Getting video metadata of a video stored in s3 bucket using aws lambda node js
|
You shouldn't be using Environment for that. Instead, there is a dedicated section called Secrets. Using this section you can pass your secrets to the containers. For example:

Secrets:
  - Name: DB_HOST
    ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
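The same Environment/Secrets split exists in the ECS API itself, which can be a handy way to sanity-check the shape. A rough boto3 sketch (family, image, and role ARN are hypothetical; the execution role must be allowed to read the referenced SSM parameters):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-service",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "my-service",
            "image": "111111111111.dkr.ecr.us-east-2.amazonaws.com/my-service:latest",
            "memory": 512,
            # Plain values stay in "environment"; SSM/Secrets Manager
            # references go in "secrets" with valueFrom.
            "environment": [{"name": "DB_PORT", "value": "5432"}],
            "secrets": [
                {
                    "name": "DB_HOST",
                    "valueFrom": "arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST",
                },
                {
                    "name": "DB_PASSWORD",
                    "valueFrom": "arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_PASSWORD",
                },
            ],
        }
    ],
)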
|
I'm writing CloudFormation code to build an ECS cluster, and I need to fetch some values from the AWS Parameter Store. I can't find any example code for this. It looks like 'ValueFrom' isn't supported in CloudFormation; can anyone confirm?

This is what I'm trying to use:

ContainerDefinitions:
  - Name: !Ref ServiceName
    Image: !Ref Image
    PortMappings:
      - ContainerPort: !Ref ContainerPort
    Environment:
      - Name: DB_HOST
        Value: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
      - Name: DB_PASSWORD
        Value: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_PASSWORD
      - Name: DB_PORT
        Value: 5432

In the above case, the CloudFormation template executes without error, but it treats DB_HOST and DB_PASSWORD as plain text and doesn't take them from the Parameter Store (check the highlighted screenshot). So it only works for DB_PORT, and doesn't work for DB_HOST and DB_PASSWORD until I manually change 'value' (highlighted in the screenshot) to 'valueFrom', as in the picture below. Basically, I'd like to use the 'valueFrom' option through CloudFormation.

I also tried:

Environment:
  - Name: DB_HOST
    ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST

But it's not supported by CloudFormation and throws an error.
|
How to inject value from AWS parameter store through CloudFormation in ECS ContainerDefinitions
|
Based on the comments, I can add a little bit more info.

The official CodeBuild Docker images are listed here. The two newest ones are:

aws/codebuild/amazonlinux2-x86_64-standard:3.0 for Amazon Linux 2
aws/codebuild/standard:4.0 for Ubuntu 18.04

Both of these images are also open sourced (links above), so we can inspect their Dockerfile files. In both of them, awscli is installed in a similar way:

pip3 install --no-cache-dir --upgrade setuptools wheel aws-sam-cli awscli boto3 pipenv virtualenv

As we can see, this installs awscli v1. The instructions for installing awscli v2 are different and do not involve pip.
|
We are building our project and we have to use AWS CLI v2 to deploy it. The runtime version that we use is this one:

phases:
  install:
    runtime-versions:
      nodejs: 12.x

Is there an official AWS CodeBuild Node.js image that has AWS CLI v2 installed, or do we need to create our own? Is there an elegant way to upgrade to v2 for the above runtime?

This seems to work, but it might not be very stable in the future:

# uninstall awscli version 1
- pip3 uninstall -y awscli
# install awscli version 2
- curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- unzip awscliv2.zip
- ./aws/install
|
AWS CodeBuild nodejs image with aws cli v2 installed
|
You can use sort_by to get the latest snapshot:

aws ec2 describe-snapshots --query "sort_by(Snapshots, &StartTime)[-1].{SnapshotId:SnapshotId,StartTime:StartTime}"

Output:

{
"SnapshotId": "snap-123456",
"StartTime": "2020-07-07T13:57:05.982Z"
}

Or, if you are just looking for snapshots owned by you:

MY_ACCOUNT_ID=1234567
aws ec2 describe-snapshots --filter "Name=owner-id,Values=$MY_ACCOUNT_ID" --query "sort_by(Snapshots, &StartTime)[-1].{SnapshotId:SnapshotId,StartTime:StartTime}"

Update: As the above query does not contain instance information, you can get it by doing a reverse query: find the snapshot first, then find the instance ID using the attached volume ID.

VOLUME_ID=$(aws ec2 describe-snapshots --filter "Name=owner-id,Values=$MY_ACCOUNT_ID" --query "sort_by(Snapshots, &StartTime)[-1].VolumeId" --output text)
aws ec2 describe-volumes --filter "Name=volume-id,Values=$VOLUME_ID" --query 'Volumes[?Attachments != `null`].Attachments[].InstanceId'
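If the single CLI query gets unwieldy, a rough boto3 equivalent of the two steps above (pagination and error handling omitted; assumes the snapshot's volume still exists) could be:

import boto3

ec2 = boto3.client("ec2")

# Newest snapshot owned by this account (same idea as sort_by(..., &StartTime)[-1]).
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
latest = max(snapshots, key=lambda s: s["StartTime"])
print(latest["SnapshotId"], latest["StartTime"], latest.get("Tags"))

# Reverse lookup: snapshot -> volume -> instance. This only works while the
# volume still exists and is attached to an instance.
volumes = ec2.describe_volumes(VolumeIds=[latest["VolumeId"]])["Volumes"]
for volume in volumes:
    for attachment in volume.get("Attachments", []):
        print(attachment["InstanceId"])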
|
I need to get 5 columns reported from the AWS CLI: the last snapshot taken for an instance, the date it was taken, the tag used (if any), the Name tag of the instance, and the instance ID.

The below will list ALL snapshots and the time taken, and a 'null' name gets reported:

aws ec2 describe-snapshots --query 'Snapshots[*].{ID:SnapshotId,Time:StartTime,Name:Tags[?Key==`Name`]|[0].Value}'

This will give me the description of the snapshot, the snapshot ID and the date:

aws ec2 describe-snapshots --owner self --output json | jq '.Snapshots[] | select(.StartTime < "'$(date --date='-1 month' '+%Y-%m-%d')'") | [.Description, .StartTime, .SnapshotId]'

So basically I have something that gives me the snapshot data, will query on date and tell me what time it was taken, but I'm missing the full requirement all in one. I guess the main stumbling block for me is how to report only on the last snapshot that was taken for an instance. Can anyone please help?
|
aws cli query to find the last snapshot taken, the date it was taken, the tag used if any, the name tag of the instance and the instance i.d
|
Short answer is NO.

Two things to note about Step Function state input/output and state InputPath/OutputPath:

Input processing doesn't support default property values. All the properties have to come either from the previous state or as input to the step function invocation. (Input and Output processing)
JsonPath can be used only to reference a property in InputPath or OutputPath. If a property is not present in the input or result, an error is thrown. (Reference Paths)

Suggestion

My suggestion is to set the inputSize property to zero or a negative number, either in the previous state or during the step function invocation (don't forget to pass it along the various intermediate states). Then you can have a choice state where you take different routes in your workflow depending on the value of the inputSize property.

Another way could be to have a mandatory boolean property which tells whether a particular property is present in the input or result. This can be helpful when we want to check for nullness of a property and take different routes in the workflow.
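As an illustration of the first suggestion, a minimal boto3 sketch that always supplies inputSize at invocation time (the state machine ARN and the sentinel value 0 are assumptions):

import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine ARN. Because inputSize is always present,
# "$.inputSize" in the Parameters block never fails to resolve; a Choice
# state can then branch on the sentinel value 0.
sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111111111111:stateMachine:MyStateMachine",
    input=json.dumps({"inputSize": 0}),
)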
|
Is there a way to make an AWS Step Function parameter optional, or to accept an expression that says: if the value is passed, pick it, else default it to a certain value?

Example: let's say my Parameters are defined as follows:

"Parameters": {
"comment": "Selecting what I care about.",
"MyDetails": {
"size.$": "$.inputSize"
}
},

If I don't pass inputSize, the step function fails. Is there a way to make this an optional parameter, or to have an expression like inputSize || 10 where 10 would be picked if nothing is passed?
|
Make AWS Step function Parameter Optional
|
C# code cannot be edited in the Lambda console. Use the .NET Core CLI to create and deploy your Lambda function. The steps to do so can be found in my blog post here. Here's a summary:

Install .NET Core from here.
Install the Lambda templates: dotnet new --install Amazon.Lambda.Templates
Create the Lambda function: dotnet new lambda.EmptyFunction --name MyFunction
Install the .NET Core Global Tool: dotnet tool install -g Amazon.Lambda.Tools
Deploy the function: dotnet lambda deploy-function MyFunction --profile <AWS CLI profile>
Invoke the function: dotnet lambda invoke-function MyFunction --payload "Hello World" --profile <AWS CLI profile>
|
When I try to select the runtime for my AWS Lambda function, it shows:

The code editor does not support the .NET Core 3.1 (C#/PowerShell) runtime

Has anyone faced issues like this? Please help me out! Thanks in advance.
|
AWS Lambda: The code editor does not support the .NET Core 3.1 (C#/PowerShell) runtime
|
You are using named profiles; by default, the default profile is used. Just like you configured default, you can (and need to) configure your profile:

aws configure --profile my_profile

It will prompt you to fill in the access key, secret access key, AWS Region, and output format, as stated here:

You can create additional configurations that you can refer to with a name by specifying the --profile option and assigning a name. The following example creates a profile named produser. You can specify credentials from a completely different account and Region than the other profiles.
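The same per-profile resolution applies when scripting; a small boto3 sketch to check what region a named profile actually resolves to (profile name taken from the question):

import boto3

# The named profile resolves its own region from ~/.aws/config; if the
# [profile my_profile] section has no region entry, this prints None.
session = boto3.Session(profile_name="my_profile")
print(session.region_name)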
|
I have the default region set in the ~/.aws/config file:

[default]
region=us-west-2

However, when I try the describe-instances command for a specific profile, it fails with the following message:

$ aws ec2 describe-instances --profile my_profile
You must specify a region. You can also configure your region by running "aws configure".

Shouldn't it use the default region configured in the ~/.aws/config file? What am I missing here?

Output of aws configure list:

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ********************   shared-credentials-file
secret_key     ********************   shared-credentials-file
    region                us-west-2      config-file    ~/.aws/config

Output of aws configure list --profile my_profile:

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile               my_profile           manual    --profile
access_key     ********************   shared-credentials-file
secret_key     ********************   shared-credentials-file
    region                <not set>             None    None

p.s. New to AWS; pardon me if this is a very basic question.
|
aws cli not honouring default region configuration
|
You can use Node Affinity:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: <label-key>
            operator: NotIn
            values:
            - SS
|
I have a cluster with 2 nodes, and one of the nodes is labelled "SS" for use with a node selector. I have three services: one service should be deployed on the labelled node (which is happening properly), and the other two services should be deployed on the other node. How do I deploy the remaining two services to the node that has not been labelled? I don't want to use a node selector for the other two services either.
|
deployment not in nodeSelector kubernetes
|
Based on the comments, the solution was to use the following command in UserData:

net user Administrator "new_password"

The command, as explained in the docs, can be used to change the admin password. This works because UserData executes under the administrator account (ref):

User data scripts are executed from the local administrator account when a random password is generated.
|
I am deploying a CloudFormation template which launches an EC2 instance from the Windows_Server-2019-English-Full-Base-2020.05.13 AMI.

By default, the Windows Server image has an Administrator user. To connect to the instance via RDP, I have to navigate to the console, click on Connect, and then get the generated random password from the console.

Is there a way I can set the RDP password to a custom value? I would like to do this from the CloudFormation template, in the UserData section.
|
How to set the administrator password for a Windows Server machine on AWS from CloudFormation?
|
Does AWS provide any bucket policy so that I can share my bucket objects with the IAM user for a limited time frame?

Yes. Check the DateGreaterThan and DateLessThan conditions and the aws:CurrentTime condition key. Here's an example, using the policy in your question as a base:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DelegateS3Access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345678910:user/testuser"
},
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::sourcebucket/*",
"arn:aws:s3:::sourcebucket"
],
"Condition": {
"DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
"DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
}
}
]
}

Here are some useful links:

AWS Global Condition Context Keys: aws:CurrentTime
AWS: Allows Access Within Specific Dates
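If you want to apply the policy programmatically rather than through the console, a minimal boto3 sketch (reusing the bucket, user, and dates from above) might be:

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::12345678910:user/testuser"},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": ["arn:aws:s3:::sourcebucket/*", "arn:aws:s3:::sourcebucket"],
            "Condition": {
                "DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
                "DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"},
            },
        }
    ],
}

# put_bucket_policy replaces any existing policy on the bucket with this one.
s3.put_bucket_policy(Bucket="sourcebucket", Policy=json.dumps(policy))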
|
I want to share my bucket (sourcebucket) with an IAM user (testuser) for a limited time window. Does AWS provide any bucket policy so that I can share my bucket objects with the IAM user for a limited time frame?

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DelegateS3Access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345678910:user/testuser"
},
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::sourcebucket/*",
"arn:aws:s3:::sourcebucket"
]
}
]
}
|
Does S3 provide any bucket policy to share the objects to IAM user for a limited time?
|