Columns: Response (string, 8–2k characters), Instruction (string, 18–2k characters), Prompt (string, 14–160 characters)
If you have an input variable declared like this:

    variable "description" {
      type = string
    }

...then you can return its value as an output value like this, in the same module where you declared it:

    output "description" {
      value = var.description
    }
I have a custom Terraform module which creates an AWS EC2 instance, so it relies on the aws provider. This custom module is used as a base to describe the instance I want to create, but I also need some other information that will be reused later. For example, I want to define a description for the VM as an input variable, but I don't need it at all to create the VM with the aws provider. I just want this input variable to be passed straight through as an output so it can be reused later once Terraform has done its job.

What I have as an input variable:

    variable "description" {
      type        = string
      description = "Description of the instance"
    }

What I want as an output variable:

    output "description" {
      value = module.ec2_instance.description
    }

What my main module is doing:

    module "ec2_instance" {
      source            = "./modules/aws_ec2"
      ami_id            = var.ami_id
      instance_name     = var.hostname
      disk_size         = var.disk_size
      create_disk       = var.create_disk
      availability_zone = var.availability_zone
      disk_type         = var.disk_type
      // I don't need the description variable for the module to work, and I don't
      // want to do anything with it here; I need it later as an output.
    }

I feel stupid because I searched the web for an answer and can't find anything on how to do this. Can you help? Thanks.

EDIT: Added example of code
Terraform - How to output input variable?
If the first service tries to access the second service by the second service's public IP, then the traffic will go out to the Internet and back, which will destroy the network traffic's association with the origin security group. To keep the traffic inside the VPC, and to make sure the security group rules apply as intended, the first service needs to connect to the second service via the second service's private IP. If you are using a load balancer for the second service, then it needs to be an internal load balancer, not an external load balancer.
I have two Fargate tasks running in two different clusters. The first one is running on port 3000 and can receive requests from anyone; the second one is running on port 8080 and should be accessed only by the first one. Both are in the same security group and VPC. I created an inbound rule to allow public access to the first one, then I tried to create another inbound rule to allow access to the second one through security group ingress. But when the first service tries to access the second, I get a timeout error. When I allow public access to the second service, the communication works properly, but I cannot leave it that way forever. Each service has a load balancer configured, but I've already tried (unsuccessfully) to access the service by its task's public IP. Does anyone have any idea what I am doing wrong? The inbound rules for the security group can be checked in this image.
How can I enable communication between two tasks running on different AWS ECS clusters?
Actually the solution here was quite simple, although not obvious to non-Lambda experts. As described in the question, the first step was to build the package library:

    pip install --target ../package/python -r requirements.txt

However, when building the Lambda using

    sam build -u

the same requirements.txt file is used, and the required dependencies were again being installed, this time as part of the app. So all I had to do was remove the requirements that I wish packaged in a separate layer and rebuild. It does mean that I have to maintain 2x requirements.txt, but that is entirely manageable. I've opened an issue and hopefully AWS will update their documentation.
I have a Python Lambda and since I started using AWS X-Ray the package size has ballooned from 445KB to 9.5MB. To address this and speed up deployments of my code, I have packaged my requirements separately and added a layer to my template. The documentation suggests that this approach should work: "Packaging dependencies in a layer reduces the size of the deployment package that you upload when you modify your code."

    pip install --target ../package/python -r requirements.txt

    Resources:
      ...
      ProxyFunction:
        Type: AWS::Serverless::Function
        Properties:
          Architectures:
            - x86_64
          CodeUri: proxy/
          Handler: app.lambda_handler
          Layers:
            - !Ref ProxyFunctionLibraries
          Role: !GetAtt ProxyFunctionRole.Arn
          Runtime: python3.8
          Tracing: Active

      ProxyFunctionLibraries:
        Type: AWS::Serverless::LayerVersion
        Properties:
          LayerName: proxy-function-lib
          Description: Dependencies for the ProxyFunction.
          ContentUri: package/.
          CompatibleRuntimes:
            - python3.8

However, this doesn't seem to have prevented the Lambda from still packaging everything in the top layer, and every time I deploy the package is still 9.5MB. The new layer for some reason is 11MB in size, but that is only deployed when a change is made. How can I reduce the size of the Lambda function package?
Lambda function package still large despite using a layer for dependencies
Pass serde settings to a Table (@aws-cdk/aws-glue-alpha) using the dataFormat prop (of type DataFormat):

    // TableProps
    {
      dataFormat: glue.DataFormat.PARQUET
    }

For finer-grained control, use the L1 CfnTable (aws-cdk-lib) construct, whose API matches the CloudFormation AWS::Glue::Table resource:

    // CfnTableProps
    tableInput: {
      // ...
      storageDescriptor: {
        inputFormat: 'org.apache.hadoop.mapred.TextInputFormat',
        outputFormat: 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat',
        serdeInfo: {
          serializationLibrary: 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe',
          parameters: { 'serialization.format': 1 },
        },
      },
    },
I am using CDK to create a Glue table like this:

    const someTable = new Glue.Table(scope, "some-table", {
      tableName: "some-table",
      columns: [
        {
          name: "value",
          type: Glue.Schema.DOUBLE,
        },
        {
          name: "user_id",
          type: Glue.Schema.STRING,
        },
      ],
      partitionKeys: [
        {
          name: "region_id",
          type: Glue.Schema.BIG_INT,
        },
      ],
      database: glueDb,
      dataFormat: Glue.DataFormat.PARQUET,
      bucket: props.bucket,
    });

It looks like this is creating my Glue table as expected, but it's also doing some things behind the scenes, like setting up a Serde serialization lib (org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe). For my use case, I also have to specify some Serde parameters in the table configuration, but I can't find how to do it in the CDK documentation (https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_aws-glue.Table.html), even though it looks like something you can configure in the console under "Edit Table". Has anyone run into this and have any suggestions about how to update this? Thanks!
How to add SerDe parameters in CDK?
The health check path in your TG should be a URL path, not the actual location on the EB instance. You can try with just /index.php. This assumes that your application is actually working and the only issue is the health checks.

Comment (Dave Michaels): The application is working. If I navigate to the load balancer DNS address and /index.php, I get a working result. But the health checks are still failing.
Comment (Marcin): The HC specifies a 200 response code. Maybe your app returns a different code. You have to verify that as well.
Comment (Dave Michaels): Actually, the instance did come back as healthy. I guess it just took some time.
I am trying to deploy a PHP application through AWS CodeDeploy and am currently stuck on the AllowTraffic step in CodeDeploy. The application is on an EC2 instance behind an ALB, and in the ALB I am getting failing health checks. I have the PHP application code sitting in the following directory on the EC2 instance: /var/www/html/src. If I curl the private IP of the EC2 followed by the directory where the code sits, I get a 404 Not Found error. Even though the index.php file is in that directory, I am unable to curl it. Currently I have security groups set up so that the ALB security group allows any traffic over HTTP only, and all traffic from the ALB security group is allowed to reach the EC2 instance. I am able to curl the root of the instance and see Apache's default page. If I adjust the health check settings on the ALB target group, I get a 403 error when setting the health check to /, and a 404 error when specifying the path to the directory that has the PHP application code. Any advice on how I can get the instance to a healthy state for the ALB would be appreciated. (The TG health check settings are shown in the linked image.) The Application Load Balancer security group allows traffic on port 80, and the EC2 instance security group allows traffic from the Application Load Balancer security group. The PHP application should be accessible on port 80, where Apache is running. The Application Load Balancer has only one listener, set up for port 80, that forwards traffic to the target group.
PHP application behind application load balancer failing health check
The following scripts work perfectly. It might help someone.

    - task: AWSShellScript@1
      displayName: 'Build'
      inputs:
        awsCredentials: AwsServiceConnection
        regionName: $(Region)
        scriptType: 'inline'
        inlineScript: |
          sam build --debug \
            --template-file template.yaml

    - task: AWSShellScript@1
      displayName: 'Package'
      inputs:
        awsCredentials: AwsServiceConnection
        regionName: $(Region)
        scriptType: 'inline'
        inlineScript: |
          sam package --resolve-s3 --output-template-file packaged.yaml
          # --resolve-s3 can be replaced with a bucket name: --s3-bucket <bucketname>

    - task: AWSShellScript@1
      displayName: 'Deploy Infrastructure'
      inputs:
        awsCredentials: AwsServiceConnection
        regionName: $(Region)
        scriptType: "inline"
        inlineScript: |
          sam deploy \
            --template-file packaged.yaml \
            --no-confirm-changeset \
            --no-fail-on-empty-changeset \
            --capabilities CAPABILITY_IAM \
            --stack-name test-dev-stack \
            --resolve-s3 \
            --s3-prefix test-dev-stack
          # --resolve-s3 can be replaced with a bucket name: --s3-bucket <bucketname>
I used the AWS SAM Lambda function (hello_world) template to create a project. I can build, invoke locally, and deploy from my machine using the sam command. Now the code is in an Azure repo. I used the "Python package" template to set up the pipeline and deploy, but I do not know how to run the SAM build, package, and deploy commands from it. I searched Google but did not find good resources or videos. Please point me to some good resources, videos, etc.
How to build, package and deploy a Python AWS SAM Lambda function from an Azure DevOps CI/CD pipeline to AWS
Thanks to @flanamacca for the answer. The problem was the format of the key, because I wasn't using the DynamoDB JSON format:

    "Key": {
        "id": { "S": "36" }
    },
I am trying to get TransactWriteItemsCommand to work using the new AWS SDK V3 for NodeJS. Unfortunately I can't find an example and the docs are not really well documented yet. This is my params object:

    {
      "TransactItems": [
        {
          "Update": {
            "TableName": "boxes",
            "Key": { "id": "36" },
            "UpdateExpression": "set isOpen = :isOpen",
            "ExpressionAttributeValues": {
              ":isOpen": { "BOOL": true }
            }
          }
        },
        {
          "Update": {
            "TableName": "boxes",
            "Key": { "id": "33" },
            "UpdateExpression": "set isOpen = :isOpen",
            "ExpressionAttributeValues": {
              ":isOpen": { "BOOL": true }
            }
          }
        }
      ]
    }

What am I doing wrong? Any help would be appreciated!
AWS SDK v3 TransactWriteItemsCommand TypeError: Cannot read property '0' of undefined
Fixing this in CloudFormation takes two steps:

1. Cut the CNAME resource from your CloudFormation stack and then apply it. This tells CloudFormation to delete that resource from the stack's managed resources.
2. Paste the CNAME resource back into the same CloudFormation stack and apply it again. This forces CloudFormation to both add it to its resource list and provision the Route53 CNAME.
This has happened to me a couple of times now. A mistake caused the deletion of an entry in Route53, a CNAME in this case, from a CloudFormation stack (driven by CDK). How do I cause the re-creation of this record? Re-deploying the stack doesn't seem to do it, as CloudFormation considers it deployed. I think this is essentially what's called drift?
How to recover when a Route53 record is missing from my CloudFormation stack?
I swapped over to using Jest, which does support module mappings. In the package.json:

    ...
    "scripts": {
      "test": "jest"
    },
    "jest": {
      "moduleNameMapper": {
        "^/opt/nodejs/(.*)$": "<rootDir>/layers/common/$1"
      }
    }
    ...
I have a SAM application with a bunch of Lambda functions and layers, using Mocha/Chai to run unit tests on the individual functions. The issue is that I am also using layers for shared local modules. The SAM project structure is like this:

    functions/
      function-one/
        app.js
        package.json
      function-two/
        app.js
        package.json
    layers/
      layer-one/
        moduleA.js
        moduleB.js
        package.json
      layer-two/
        moduleC.js
        package.json

According to AWS, once the function and layers are deployed, to require a local layer from a function you use this path:

    const moduleA = require('/opt/nodejs/moduleA');

However, that won't resolve to anything when running locally as a unit test. Any idea how to resolve the paths to the layer modules when running unit tests? I could set an ENV var and then set a base path for the layers based on that, but I was wondering if there was a more elegant solution I was missing. Is there any way to alias the paths when running Mocha? Another option is to use sam invoke, but that has massive overheads and is more integration testing.
Unit testing AWS Lambda that uses Layers - Node JS app
Confirmed with AWS Support - no, this is NOT possible currently. There is a feature request for this but no ETA or schedule for its release.

"Currently, it is not possible to implement oauth2 authorization code grant flow without using hosted UI for authentication. This is because there is no public API to retrieve the authorization code from Cognito and it has to be passed back to Hosted UI after successful authentication. There is currently a Feature Request to have the ability to use authorization code grant flow without using the hosted UI."
Is there any way to configure and utilize the Amazon Cognito Identity SDK for JavaScript (https://www.npmjs.com/package/amazon-cognito-identity-js) to use the Authorization Code Grant flow? It seems like it only supports Implicit Grant, indicating that you should not generate a client secret when creating an app client, and the user's credentials are exchanged directly for JWTs with an API call. Using the Amazon Cognito Hosted UI options, the redirect after successful authentication with user credentials includes the authorization code, which can be posted to a backend server/API that interacts with the token endpoint to exchange the authorization code for JWTs. Instead of getting the user's JWTs directly from Cognito using this library/SDK, is it possible for it to just mimic the Hosted UI flow and return an authorization code?
Does the Amazon Cognito Identity SDK for JavaScript support the Authorization Code Grant flow?
The pg_upgrade_internal log file will usually contain details on any failures/errors. You can take a look at these logs using the command line:

    aws rds describe-db-log-files --db-instance-identifier my-db-instance

or via the console or the RDS API. For more information take a look at these links: "Upgrading the PostgreSQL DB engine for Amazon RDS" and "Viewing and listing database log files".

Comment (Florencia Yáñez Gutiérrez): Thank you so much! I found it on the console, but I want to know how to do it via the command line. The error is: "Your installation contains the 'unknown' data type in user tables. This data type is no longer allowed in tables, so this cluster cannot currently be upgraded. You can remove the problem tables and restart the upgrade. A list of the problem columns is in the file: tables_using_unknown.txt." But I don't know how to access that file. I would really appreciate it if you could tell me how.
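To answer the follow-up about retrieving that log file from the command line rather than the console, here is a minimal sketch using boto3. The instance identifier and the log file name are placeholders; the exact LogFileName (the one containing the pg_upgrade output) has to be taken from the listing call first.

    import boto3

    rds = boto3.client("rds")
    instance_id = "my-db-instance"  # placeholder

    # List available log files and pick the ones related to the upgrade.
    logs = rds.describe_db_log_files(DBInstanceIdentifier=instance_id)
    for f in logs["DescribeDBLogFiles"]:
        print(f["LogFileName"])

    # Download a portion of one file (paginate with Marker/AdditionalDataPending for large files).
    chunk = rds.download_db_log_file_portion(
        DBInstanceIdentifier=instance_id,
        LogFileName="error/pg_upgrade_internal.log",  # assumed name; use one from the listing above
        Marker="0",
    )
    print(chunk["LogFileData"])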
I'm trying to upgrade an RDS database cluster engine from Aurora PostgreSQL 9.6.19 before its end of life. I made a copy and tried to upgrade it to 9.6.21 and 10.16, but every time the same problem happens: "Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully." The status of the database is Available, so maybe it refers to something else, but I don't know what or how to fix it, and I've tried looking for answers to no avail. Has anyone fixed this?
How can I learn more about AWS's RDS Aurora PostgreSQL 9.6.19 upgrade failure?
"EventBridge over HTTPS to an instance in EC2 running httpd server. The instance only has its private IP."

You can't do this. HTTPS requires a valid public domain with a valid public SSL certificate. This in turn requires your instance to be accessible from the internet. The instance itself can be private only, but in that case you have to front it with an internet-facing ALB, which will handle HTTPS for you.
My goal is to forward messages from EventBridge over HTTPS to an instance in EC2 running an httpd server. The instance only has its private IP. It turns out that EventBridge's API Destination, with its targets and connections, works beautifully with external IPs, but no communication is happening to the private IP. As part of the experiment, security is set to accept all https/http connections from 0.0.0.0/0. I am seriously considering EventBridge -> Lambda function with VPC binding -> EC2 private IP. But I have that nagging feeling that I may be missing something with the API Destination - some network magic? An endpoint? Any advice is welcome!
AWS EventBridge API Destination can't connect to EC2 private IP
I think you can just do something like:

    'OPTIONS': {
        'sslmode': 'verify-full',
        'sslrootcert': 'global-bundle.pem'
    },
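To put that in context, a minimal sketch of a full DATABASES entry is below. The host, credentials, database name and the path to the downloaded global-bundle.pem are placeholders; Django's postgresql backend passes these OPTIONS straight through to psycopg2/libpq, so no separate client cert or key is needed for RDS.

    # settings.py - sketch; host, credentials and cert path are placeholders
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "myuser",
            "PASSWORD": "mypassword",
            "HOST": "mydb.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com",
            "PORT": "5432",
            "OPTIONS": {
                # verify-full checks the CA chain and that the hostname matches the certificate.
                "sslmode": "verify-full",
                "sslrootcert": "/path/to/global-bundle.pem",
            },
        }
    }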
I'm digging into Django and thought it would be a nice exercise to connect to an AWS RDS database over SSL, but I can't quite figure out how to provide the SSL cert from AWS to the database config. I've downloaded the global-bundle.pem from https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html. Now, in the Django source code, it seems you can pass these parameters to DATABASES:

    DATABASES = {
        'default': {
            ...
            'OPTIONS': {
                'sslmode': 'verify-ca',
                'sslrootcert': 'root.crt',
                'sslcert': 'client.crt',
                'sslkey': 'client.key',
            },
        }
    }

My question is, how do I convert/pass the certificate from AWS?
Connect Django Postgres to AWS RDS over SSL
Just to add some clarity on this: you need to add the AWSLakeFormationDataAdmin policy to the IAM role that you are using to run your Glue job. Also, on the Lake Formation side, you need to make sure that the above principal (the IAM role) has data lake permissions to access the Glue metadata tables of the data catalog.
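If you prefer to grant those data lake permissions programmatically rather than through the Lake Formation console, a sketch using boto3 is below. The role ARN, database and table names are placeholders, and the permission set is an assumption; what purge_table actually needs depends on your setup, so tighten or broaden it accordingly.

    import boto3

    lf = boto3.client("lakeformation")

    # Placeholders: adjust to your account, role, database and table.
    role_arn = "arn:aws:iam::012345678901:role/XYZ"

    lf.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": role_arn},
        Resource={
            "Table": {
                "DatabaseName": "database",
                "Name": "table",
            }
        },
        # Assumed minimal set for reading and purging table data.
        Permissions=["SELECT", "DELETE", "DESCRIBE"],
    )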
I am trying to use the glueContext.purge_table function in my AWS Glue job. Whenever the job is executed it throws the following error:

    An error occurred while calling o82.purgeTable. : java.lang.RuntimeException: class com.amazonaws.services.gluejobexecutor.model.AccessDeniedException:
    User: arn:aws:sts::012345678:assumed-role/XYZ/GlueJobRunnerSession is not authorized to perform: lakeformation:GetDataAccess on resource:
    arn:aws:glue:us-east-1:MICHIGAN_DEFAULT_CATALOG_ID_RANDOMIZED:table/database/table
    (Service: AWSLakeFormation; Status Code: 400; Error Code: AccessDeniedException; Request ID: 25829fe6-2a10-430a-b050-023c13bcc8ce; Proxy: null)
    (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: AccessDeniedException; Request ID: ed60ddfa-8263-486a-b9f6-1dd57cbfd9bd; Proxy: null)

The following policies have been attached to the role (see image). Any help would be highly appreciated.
GlueJobRunnerSession is not authorized to perform: lakeformation:GetDataAccess on resource
There are many ways to connect to private resources in a VPC from outside of AWS. The most common one for testing and development purposes is through an SSH tunnel, as explained in the AWS docs: "How can I use an SSH tunnel through AWS Systems Manager to access my private VPC resources?" The other one, more suited to production deployments, is a VPN between your home/work network and your VPC.
AWS won't let you connect to ElastiCache instances from outside of their network, even if you set up security groups to allow traffic from your IP address. All connections must originate from within their network. Given this constraint, how can I test an application that relies heavily on ElastiCache locally without creating a local instance?
How to test AWS ElastiCache instance locally?
It's always tricky to use SQS with Lambda when a concurrency limit is configured, because, in short, it is not going to work as expected; instead you will get some throttled records, since the Lambda can't process messages beyond the concurrency limit. You can check this article, which explains why and offers a workaround: https://zaccharles.medium.com/lambda-concurrency-limits-and-sqs-triggers-dont-mix-well-sometimes-eb23d90122e0. Also check this AWS documentation for further information about this subject: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-queueconfig
I have a daily scheduled task that triggers around 10k Lambda invocations for the 10k records that I need to maintain. I'm using SQS to queue all those messages and I want to spread the execution over a couple of hours, so I set the reserved concurrency to only 3 concurrent invocations. But still, when that scheduled task hits, the concurrent invocations of that Lambda function go over 3. Any advice on how to do this? When I check the Lambda configuration it shows that reserved concurrency is 3, but the concurrent-invocations monitoring shows way over 3.
Reserved concurrency on AWS Lambda does not prevent the Lambda from scaling further?
"s2svpn would be great but my question is can a lambda function HTTP request route through that connection?"

Sure. Lambdas can have a VPC subnet attached. It's a matter of configuring the subnet routing table / VPN configuration to route the traffic to the carrier through the VPN endpoint.

"Also, can an AWS Lambda function maintain a static IP?"

No - it depends. A VPC-attached Lambda will create an ENI (network interface) in the subnet with an internal (not fixed) subnet IP address, but the traffic can be routed through a fixed NAT or a VPN gateway. That's the reason I asked which IP address needs to be fixed, and at what level. The VPN has a fixed IP address. If the carrier enforces whitelisting of the VPN address, Lambda clients should work. If a fixed IP on the internal network is required, then you will need a fixed network interface (e.g. using EC2).
I'm using an SMS sending service provided by a local mobile carrier. The carrier requires clients to connect to their datacentre over a VPN in order to reach their endpoints. The VPN tunnel must always be kept open (i.e. not on demand). Currently, I'm using a micro EC2 instance that acts as middleware between my main production server (also an EC2 instance) and the carrier endpoint:

    Production Server --> My SMS Server --over VPN--> Carrier SMS Server

Is there a way to replace my middleware server with an AWS Lambda function that sends HTTP requests to the carrier over an always-on VPN tunnel? Also, can an AWS Lambda function maintain a static IP? The carrier has to place my IP in their whitelist before I can use their service.
Can AWS Lambda function call an endpoint over a VPN?
The parent-id is the identifier of the resource you are creating the new resource under; for a "top-level" resource that parent is the API's root resource (path "/"). You can find it by calling:

    aws apigateway get-resources --rest-api-id {your api id} --region {region}
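For reference, a small boto3 sketch of the same flow is below; the REST API id and path part are placeholders. It looks up the root resource ("/") and uses its id as the parent-id for a new top-level resource.

    import boto3

    apigw = boto3.client("apigateway")
    rest_api_id = "abc123"  # placeholder

    # Find the root resource; for a "top-level" resource this is the parent.
    resources = apigw.get_resources(restApiId=rest_api_id)
    root_id = next(r["id"] for r in resources["items"] if r["path"] == "/")

    # Create /widgets directly under the root.
    apigw.create_resource(
        restApiId=rest_api_id,
        parentId=root_id,
        pathPart="widgets",
    )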
The documentation has me really confused:

    create-resource
      --rest-api-id <value>
      --parent-id <value>
      --path-part <value>
      [--cli-input-json | --cli-input-yaml]
      [--generate-cli-skeleton <value>]

    --parent-id (string) [Required] The parent resource's identifier.

The create-resource command takes a required --parent-id parameter, but I have no clue what that's even referring to. If this is a "top-level" resource, there isn't a parent id?
How do I create a resource on my api gateway through the aws CLI if there is no parent resource?
eksctl only gives you the option to choose between nodeGroups and managedNodeGroups (docs: https://eksctl.io/usage/container-runtime/#managed-nodes) but does not describe the difference. The following document gives you the information you need - it describes the different features of EKS managed node groups, self-managed nodes and AWS Fargate: https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html. Choose the one that matches your purpose; if I were you, I would choose a managed nodegroup.
I use eksctl to create an EKS cluster on AWS. After creating a YAML configuration file defining the EKS cluster following the docs, I run the command eksctl create cluster -f k8s-dev/k8s-dev.yaml to execute the cluster creation, and the log shows lines like these:

    2021-12-15 16:23:55 [ℹ]  will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
    2021-12-15 16:23:55 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)

What is the difference between a nodegroup and a managed nodegroup? I have read the official AWS docs about managed nodegroups, but I am still not clear on exactly when to choose a nodegroup versus a managed nodegroup. What would you use when you need to create an EKS cluster?
AWS comparison between nodegroup and managed nodegroup
You have to use the step functions plugin. Using it, you first do sls deploy, which will deploy the state machine for you. Then, if you don't change the state machine diagram, you can simply do sls deploy -f functionName to deploy the individual functions.

Comment (AmehPls): Does this mean that as long as my state machine diagram changes, I will have to do a sls deploy to redeploy the entire service (which sadly takes 15 min on my hardware)? Is there a workaround to only re-deploy the step function without re-deploying the entire service?
Comment (Himanshu Pant): If you make changes to the state machine then it has to be a redeployment, to the best of my knowledge. Just to tag along, we've got a pull request open on the step functions plugin to do this.
Comment (Cristián Vargas Acevedo): Does not work for version 3.34.0.
I'm rather new to AWS Lambda and Step Functions, and the Serverless Framework in general. From my understanding, we can deploy an entire service via CloudFormation using the command serverless deploy. However, to save time, we can update only the specific Lambda functions we have changed by using serverless deploy function -f myFunction. Is there an equivalent of this for Step Functions, so that I don't have to redeploy the entire service whenever I only make a change to the Step Function? I have already tried serverless deploy function -f myStepFunction, but I simply get a Serverless error saying it does not exist in the service.
How do I deploy an individual Step Function without re-deploying the entire stack?
I noticed you have a fresh install - fresh installs do not have software listening over HTTP by default. If there is no application listening on a port, incoming packets to that port will simply be rejected by the computer's operating system. Ports can be "closed" through the use of a firewall, which you have disabled, so the ports are open but unresponsive, which makes them appear closed.

If the port is enabled in the firewall from the terminal using

    sudo apt-get install ufw
    sudo ufw allow ssh
    sudo ufw allow https
    sudo ufw allow http
    sudo reboot

and enabled in the AWS console as a rule, the port is open and just not responsive, so it is seen as closed. By installing nginx (or anything else that binds to port 80), external requests to that port will connect successfully, and the port will therefore be recognized as open. The reason SSH is recognized as open is that 1. it has firewall transparency, and 2. it is always listening (unlike port 80!).

Before installing nginx, even though the ports are allowed through the firewall: (port test screenshot)

    sudo apt-get install nginx
    sudo ufw allow 'Nginx HTTP'
    sudo systemctl status nginx

(more nginx info in the linked guide)

After: (port test screenshot). A simple port tester tool is linked here.
I want to open port 80 to allow HTTP connections on my EC2 server. But when I enter "telnet xx.xx.xx.xx 80" in a terminal, the following is displayed:

    Trying xx.xx.xx.xx...
    telnet: Unable to connect to remote host: Connection timed out

In AWS I've opened port 80 by defining an inbound rule on the security group (only one security group is defined for this EC2 server). I'm using the public IPv4 address to make the telnet connection.
How to open port 80 on AWS EC2
Normally, you would use an EC2 instance role with permissions to access Secrets Manager. This way there is no need to hard-code any access or secret keys in your application, nor store them on the instance.

Comment (Davis Ward): Thanks, what should I do when I am coding/testing locally?
Comment (Marcin): The AWS SDK automatically gets credentials from the EC2 role, so you don't really need to do anything special.
Comment (chrylis): Locally, you put them in ~/.aws/credentials (or environment variables if you need them customized for different applications).
I am trying to make my application secure, so instead of storing all of my AWS IAM credentials for different service users, I started to use AWS Secrets Manager. The part that confuses me is that in order to connect to Secrets Manager and retrieve the secrets for my other IAM service connections, I need an access key and secret key. Storing these in application.properties in the application on an EC2 instance seems like working backwards, since if someone gets access to these two keys, they can get access to all of the secrets, and then Secrets Manager isn't really providing any value. How can I create a connection to Secrets Manager without storing the keys in my code at all? Thanks in advance.
Where to store the accessKey and secretKey for AWS Secrets Manager in Spring Boot
Not all characters are supported. From the docs:

"Path segments can only contain alphanumeric characters, hyphens, periods, commas, colons, and curly braces. Path parameters must be separate path segments."
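Since the practical workaround is to percent-encode those characters before the request reaches API Gateway, here is a small Python sketch of what that looks like; the URL is a made-up example.

    from urllib.parse import quote

    # Characters API Gateway rejects in raw path segments.
    raw_segment = "a|b<c>d^e"

    # quote() percent-encodes everything outside the "safe" set:
    # '|' becomes %7C, '<' %3C, '>' %3E, '^' %5E.
    encoded_segment = quote(raw_segment, safe="")

    url = f"https://api.example.com/items/{encoded_segment}"  # hypothetical endpoint
    print(url)  # https://api.example.com/items/a%7Cb%3Cc%3Ed%5Ee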
How can I get requests with URLs containing any of the special characters <>^| to make it to my Lambda behind API Gateway, and not be blocked at API Gateway?

Problem: We have a Lambda triggered by API Gateway, and the Lambda responds normally to most requests. But a URL with certain special characters (<>^|) returns a 400 bad_request without ever being delivered to the Lambda. When the URL-encoded alternative (i.e. %7C in place of |) is used in the URL instead, the Lambda responds without issues. At first, our team presumed that the requests containing these special characters, specifically the vertical bar (|), were being blocked by AWS WAF. But the WAF graph doesn't register that the requests ever hit, leading us to think this could be an API Gateway issue.

What's been tried: ran the same curl request 10 times against a deployed server. Nine of those times used a special character in the URL endpoint; the tenth used the encoded equivalent. The encoded attempt went through and the Lambda responded without issue. All nine other attempts failed and did not show up in the AWS console or the WAF graph.

Outside of informing our users to always encode these special characters in the URL as a 'fix', we would super appreciate any insights into this API Gateway issue. Thanks.
URLs containing special characters (`<>^|`) blocked by API Gateway, never make it to Lambda
The first thing to validate is that your AWS instance has the port you want to connect to open - in this case 3306. If that does not work, you can connect through an SSH tunnel instead, remembering that port 22 has to be open.

Configure the SSH option:
- Host: the one that EC2 gives you, e.g. ec2-127-0-0-1.compute-1.amazonaws.com
- Port: 22
- User: ubuntu or root (or the one indicated in your AWS configuration)
- Authentication Method: public key
- Private key: your .pem

In General:
- Connection Name: whatever you want
- Host: localhost or 127.0.0.1
- Port: 3306
- User: the database user
- Pass: the database user's password
When I try to use Navicat to connect to MySQL on AWS, it always rejects the connection. I'm sure my IP address is correct and the port is correct, as I checked on EC2 several times. The username and password are also correct, because I can log in to MySQL on EC2. I don't know what is wrong here. Any help would be really appreciated.
How to connect Navicat to MySQL on an AWS EC2 instance?
It turns out that the status of a feature group right after creation is not yet final; before you can ingest any rows you simply need to wait until the feature group's offline store is active:

    from time import sleep

    status = None
    while status != 'Active':
        try:
            # The offline store reports 'Active' once the feature group is ready for ingestion.
            status = feature_group.describe()['OfflineStoreStatus']['Status']
        except Exception:
            pass
        print('Offline store status: {}'.format(status))
        sleep(15)
I am trying to ingest some rows into a Feature Store on AWS using:

    feature_group.ingest(data_frame=df, max_workers=8, wait=True)

but I am getting the following error:

    Failed to ingest row 1: An error occurred (ValidationError) when calling the PutRecord operation:
    Validation Error: FeatureGroup [feature-group] is not in ACTIVE state.
How to get an AWS Feature Store feature group into the ACTIVE state?
To replicate, the database migration must use a TCP/IP connection to the source. You should check that port and run the database migration again.
I got the error below while trying to migrate a database from MySQL Enterprise version 8.0.23-commercial. Even though I granted REPLICATION CLIENT and REPLICATION SLAVE to the migration user and turned on the binlog, I still get this error:

    [SOURCE_CAPTURE ]E: Error 1045 (Access denied for user 'migration'@'IP' (using password: YES)) connecting to MySQL server 'IP' [1020414] (mysql_endpoint_capture.c:297)
    2021-09-28T17:46:20 [SOURCE_CAPTURE ]E: Errors in MySQL server binary logging configuration. Follow all prerequisites for 'MySQL as a source in DMS' from https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html or 'MySQL as a target in DMS' from https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html [1020414] (mysql_endpoint_imp.c:778)

A full-load-only migration via this endpoint succeeds; the error only appears with full load + ongoing replication.
Error 1045 access denied from DMS source database
Not to my knowledge. The only EXCEPT is the normal SELECT set-operator functionality to subtract one relation from another.

Comment (goose): Thanks Max - please also see John Rotenstein's comment on the question; he's confirmed this to be the case.
In BigQuery I can write:

    SELECT * EXCEPT (col1, col2, ...) ...

Is there an equivalent for Redshift? I don't think there is, but I wanted to see if anyone had any bright ideas. Incidentally, I find this to be very useful in BigQuery when writing multiple subqueries, each flowing into the next. I can include/exclude columns at the relevant part of the query without having it break something later on, which is very useful when developing a complex query.
Is there any way to do SELECT * EXCEPT (col1, col2, ...) ... in Redshift?
You can use an EvaluateExpression task to do that:

    new tasks.EvaluateExpression(stack, name, {
      expression: `['--arg_1', $$.Execution.StartTime]`,
      resultPath: '$.emrArgs'
    });

And then you can adapt your task creation like the following:

    const emrTask = new EmrAddStep(stack, name, {
      name: name,
      jar: jar,
      args: sfn.JsonPath.listAt('$.emrArgs'),
      clusterId: clusterId,
    });

Could not find a way to do it in a single task though.
I'm trying to set up an EMR step from the CDK (TypeScript), using a variable from the state context object as a parameter, but I can't get it to work. Here's what I tried:

    const emrTask = new EmrAddStep(stack, name, {
      name: name,
      jar: jar,
      args: [
        '--arg_1',
        '$$.Execution.StartTime',
      ],
      clusterId: clusterId,
    });

During the state run, $$.Execution.StartTime does not get replaced by the actual value. I also tried this:

    const emrTask = new EmrAddStep(stack, name, {
      name: name,
      jar: jar,
      args: [
        '--arg_1',
        JsonPath.stringAt('$$.Execution.StartTime'),
      ],
      clusterId: clusterId,
    });

But I get this error:

    Error: Cannot use JsonPath fields in an array, they must be used in objects
Using JsonPath Step Functions variable inside an array using CDK
When you have sticky sessions enabled in your target group, the ALB will use cookies to associate future HTTP requests with the same target in the target group. An HTTP user agent, such as a browser (in your case Postman), will store the cookie set by the ALB and submit it with future HTTP requests to the ALB, which leads the ALB to forward the call to the same target in the target group. See: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html
I have a load balancer set up with an associated target group. This target group has two targets, both healthy. When I try to reach the load balancer, all the traffic is routed to the same target every single time. If I deregister the target that was receiving all the traffic, then the traffic goes to the other target. Even though the default algorithm is round robin, traffic is routed to the same target every single time. I tried changing the algorithm and other parameters, still no luck. Can anyone let me know why this is happening and how to fix it?
Aws Target group: Same target is hit every single time
I did a little research on this topic, and the official EKS docs don't say anything about avoiding this approach. In summary, AWS recommends the following about subnet/VPC networking:

- Make sure about the size of your subnets (if you have insufficient IP addresses available, your pods will not get an IP address).
- Prefer private subnets for your worker nodes and public subnets for load balancers.

Reference: https://aws.github.io/aws-eks-best-practices/reliability/docs/networkmanagement/#recommendations_1

By the way, for better security you can implement network policies and encryption in transit (load balancers, adding a service mesh); please read this doc for more details: https://aws.github.io/aws-eks-best-practices/security/docs/network/#network-security
I am setting up two EKS clusters in one VPC. Is it possible to share the subnets among these two clusters? Is there any problem with that approach? I was thinking of creating three private subnets that could be shared between these two EKS clusters.
Sharing of subnets across multiple EKS clusters
You can use S3 as storage for your uploaded files, but you would have to update your code to use the S3 SDK for uploading files to S3. There is nothing wrong with storing images on EC2 as well, as long as you do not run into millions of them. So your current setup is fine.
Recently I ran into a situation while deploying my app to Heroku: I learned that Heroku has ephemeral storage, meaning that after every update to my API, all my uploads would be deleted. So I investigated a little more into how I could keep my files, and I finally got an answer: use AWS S3 storage. Given that I had to use AWS and initially didn't want to create an account (out of laziness), I decided to just create one and, instead of Heroku, use an EC2 instance. But now that I have configured my nginx reverse proxy, SSL, etc., a question came to mind: should I use AWS S3 storage, or should I use the available space on the EC2 instance and just increase it when I need to?

My configuration is as follows: I have an API built using Node.js and Express; the app runs under pm2, and I configured a reverse proxy so that every time a user goes to a subdomain pointing to the EC2 instance, the reverse proxy listens on ports 80 and 443 and redirects the traffic to 127.0.0.1:5000. Now, the thing is that the user has to upload a profile picture and some other files, and they are uploaded at the same level as my project:

    -- Project Folder
       -- routes
       -- model
       -- upload   // I have subfolders here, and uploads are stored here.
       -- index.js

So, should I leave my project as it is, or should I change the way my uploads work?
Where should I upload my app's files while using AWS?
Sadly, changing LaunchType requires replacement of the service, so you will have downtime. The only way around this is to do a blue/green type of deployment, where you deploy the new service on Fargate and redirect traffic through Route 53 from the old service to the new one. Similarly, changes to ServiceName require replacement.
I have an app currently running in ECS and am attempting to upgrade it to use the Fargate launch type. After updating my CloudFormation template and attempting to update the stack, I get an error that the service already exists:

    Resource handler returned message: "Resource of type 'AWS::ECS::Service' with identifier 'redacted-app-name' already exists." (RequestToken: 50118296-f55c-11eb-a6e3-b31cdb2b43da, HandlerErrorCode: AlreadyExists)

I assume that by adding either the LaunchType or NetworkConfiguration keys to my service, ECS thinks this is a different service. Any ideas on how to best move forward without having to delete the ECS service or the CloudFormation stack? I am looking for a solution with minimal downtime. Thanks!
How to Update AWS::ECS::Service to Fargate launch type
The solution was very simple: since I was not providing the ACCESS_KEY and SECRET_KEY, AWS was not letting me upload the image to S3. I added both the access key and secret key when getting the S3 client from boto3:

    s3_client = boto3.client(
        's3',
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    )

Good documentation on this is in the boto3 credentials documentation.
When I tried uploading an image to S3 using boto3 in Python, I constantly got errors. The error says:

    An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

My code for uploading the image is:

    def upload_file(file_name, bucket, object_name=None):
        """Upload a file to an S3 bucket

        :param file_name: File to upload
        :param bucket: Bucket to upload to
        :param object_name: S3 object name. If not specified then file_name is used
        :return: True if file was uploaded, else False
        """
        # If S3 object_name was not specified, use file_name
        if object_name is None:
            object_name = file_name

        # Upload the file
        s3_client = boto3.client('s3')
        try:
            response = s3_client.upload_file(file_name, bucket, object_name,
                                             ExtraArgs={'ACL': 'public-read'})
            print(response)
        except Exception as e:
            print(e)
            return False
        return True
boto3 giving Access Denied error while uploading file through python
Here's the solution:

1. Build a .whl file for the package by running python setup.py bdist_wheel within its parent directory.
2. Add the relative path to this .whl file to the pip requirements file you use (requirements.txt for instance):

    req0==1.0.9
    req1==5.5.0
    ../<relative path to local package>/dist/<package name>-<version>-<details>.whl  # generated .whl file's name

3. serverless-python-requirements will automagically pack this dependency into the deployed archive when doing sls deploy. How cool is that, huh!
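For completeness, a minimal setup.py that makes step 1 work might look like the sketch below; the package name, version and layout are placeholders and should match your shared library.

    # setup.py - minimal sketch; name, version and package layout are placeholders
    from setuptools import setup, find_packages

    setup(
        name="my_shared_lib",
        version="0.1.0",
        packages=find_packages(exclude=["tests"]),
        install_requires=[],  # add the library's own dependencies here
    )

Running python setup.py bdist_wheel then produces something like dist/my_shared_lib-0.1.0-py3-none-any.whl, which is the path you reference from requirements.txt.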
I am trying to package a local Python package¹ and use it within an AWS Lambda deployed via the Serverless Framework. I already use the serverless-python-requirements plugin to add pip dependencies to the deployed package. How can I proceed? Should I create a package and zip it, or generate a .whl file and use pip? And then, how do I deploy it?

¹: I cannot just add it to the "normal codebase" because I want to share it with other bricks (Glue jobs, for example).
Build and use local package for AWS Lambda using serverless framework
Use EFS. This is a network file system which can be simultaneously mounted on an EC2 instance as well as within Lambda functions. As the process is quite lengthy, and the question didn't mention which OS is being used (EFS is not supported on Windows) or which Lambda runtime is desired, it's impractical to document a full example here; however, there are some useful guides to get started. There is a blog on using EFS within Lambda functions; the Lambda function would need access to read the object from the S3 bucket and store it on the EFS volume. An example of mounting EFS on an EC2 instance running Amazon Linux also exists.

Comment (Bhuvanesh): Thanks for the reply. I will go through the shared details. When I was searching for solutions, I came across EFS and learned that it is not supported on Windows, which is why I added EC2 (Windows) in bold in the question.
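If the EC2 side can mount EFS (i.e. it is Linux, which is a real constraint given the Windows requirement in the question), the Lambda side of the flow might look like this sketch: an S3-triggered function that copies the new object onto the mounted file system. The mount path /mnt/shared is an assumption; the function also needs VPC access and an EFS access point configured.

    import os
    import urllib.parse
    import boto3

    s3 = boto3.client("s3")
    EFS_MOUNT = "/mnt/shared"  # assumed Lambda EFS mount path

    def lambda_handler(event, context):
        # Triggered by an S3 "ObjectCreated" event.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            destination = os.path.join(EFS_MOUNT, os.path.basename(key))
            s3.download_file(bucket, key, destination)
            print(f"Copied s3://{bucket}/{key} to {destination}")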
I want a solution to share files between AWS Lambda and EC2 (Windows). How can Lambda place a file inside the EC2 file system after it is notified by the S3 event? In the same way, if Lambda wants to access the EC2 file system, that should also be possible. For example: any file created in S3 triggers a notification to Lambda, the file gets copied to an EC2 drive in some path, and then the application inside the server processes it, and so on. Please let me know any possible way to achieve this. Thanks in advance.
Share Files between Aws lambda and EC2 instance
That's it, I solved the problem: I just had to change endpoint_url to "https://s3.us-east-2.wasabisys.com" (instead of us-east-2, insert the region of your bucket). Thanks!

Comment (Thalinda Bandara): Struggled with this for a whole morning - thank you very much. There should definitely be another exception type to let the user know that the region is incorrect.
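Putting it together, the working client construction looks roughly like the sketch below; the region in the endpoint (us-east-2 here) and the masked credentials are placeholders that must match where the bucket actually lives. As an aside, put_object stores the value of Body literally, so pass a file handle (or use upload_file) if you want the file's contents rather than its path string.

    import boto3

    s3 = boto3.client(
        "s3",
        # Region-specific Wasabi endpoint; replace us-east-2 with your bucket's region.
        endpoint_url="https://s3.us-east-2.wasabisys.com",
        aws_access_key_id="********R2PN",              # placeholder
        aws_secret_access_key="*************zDKnnWS",  # placeholder
    )

    # Upload the file's contents, not the path string.
    with open(r"C:\Users\Asus\Desktop\Programming\rofls_with_node\tracks.txt", "rb") as f:
        s3.put_object(Body=f, Bucket="last-fm9", Key="tracks.txt")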
I want to upload files to cloud storage in Wasabi, but I can't. This error comes out: "An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records." I checked the key several times, everything is correct. The strange thing is that before that I tried to create a new bucket and everything worked, but I can't upload the files.

    import boto3

    s3 = boto3.client('s3',
                      endpoint_url='https://s3.wasabisys.com',
                      aws_access_key_id="********R2PN",
                      aws_secret_access_key="*************zDKnnWS")

    file_path = r"C:\Users\Asus\Desktop\Programming\rofls_with_node\tracks.txt"
    bucket_name = "last-fm9"
    key_name = "tracks.txt"

    s3.put_object(Body=file_path, Bucket=bucket_name, Key=key_name)
The AWS Access Key Id you provided does not exist in our records. AWS
I managed to encrypt using the RSAES_OAEP_SHA_256 algorithm in plain Java and decrypt using the KMS SDK. My solution below:

    public static byte[] encrypt(String plainText, String publicKey) throws GeneralSecurityException {
        AlgorithmParameters parameters = AlgorithmParameters.getInstance("OAEP", new BouncyCastleProvider());
        AlgorithmParameterSpec specification = new OAEPParameterSpec("SHA-256", "MGF1",
                MGF1ParameterSpec.SHA256, PSource.PSpecified.DEFAULT);
        parameters.init(specification);
        Cipher cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding", new BouncyCastleProvider());
        cipher.init(Cipher.ENCRYPT_MODE, getPublicKey(publicKey), parameters);
        return cipher.doFinal(plainText.getBytes());
    }
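The same idea works with any standard crypto library, since all that matters is matching KMS's RSAES_OAEP_SHA_256 parameters (OAEP with SHA-256 and MGF1-SHA-256). As a language-agnostic illustration, here is a sketch in Python using the cryptography package; it assumes the DER-encoded public key has been exported from KMS (GetPublicKey) beforehand, and decryption still happens through the KMS Decrypt API on the AWS side.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def encrypt_for_kms(der_public_key: bytes, plaintext: bytes) -> bytes:
        # der_public_key is the DER (SubjectPublicKeyInfo) blob returned by KMS GetPublicKey.
        public_key = serialization.load_der_public_key(der_public_key)
        # Parameters must match RSAES_OAEP_SHA_256: OAEP padding, SHA-256 hash, MGF1 with SHA-256.
        return public_key.encrypt(
            plaintext,
            padding.OAEP(
                mgf=padding.MGF1(algorithm=hashes.SHA256()),
                algorithm=hashes.SHA256(),
                label=None,
            ),
        )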
I have generated a public/private key pair through the KMS CMK SDK and retrieved the public key. I am looking for a way to encrypt data with this public key without using the KMS SDK or anything related to Amazon; I would then decrypt by using the KMS API again. The problem is that the client does not wish to integrate with any AWS-related software. Another important point is that I do not want to store my private keys locally, nor access them at all; I am using the AWS CMK keyId in order to perform encryptions and decryptions. The algorithm used to generate the pair is RSAES_OAEP_SHA_256, and the key specification is RSA_4096. I am working with Java and am looking for a solution using the Java security packages. Any help would be much appreciated; I will amend my question in case more details are needed.
Encrypt with a AWS KMS Public Key without using an AWS SDK or CLI tool
It's not possible to have two Docker images in the same repo with the same SHA256 hash. The Docker repository is saving space by detecting that they are the same image, so it is simply adding the tags to the image that already exists in the repo. This is working as intended.

Comment (Jay Shah): That makes sense, but I have a scenario where I want to use the already uploaded image as a base, make changes, and store the result as a new image (with a different tag) without impacting the existing image. Is there any way to achieve this?
Comment (Mark B): If you actually make a new image, with some changes from the existing base image, and then push that to the repository, it will show up as a new image, not simply a tag on the old image. You are seeing the current behavior because you are not pushing any changes at all; you are simply pushing the same image again that already exists in the repository.
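If the goal is simply to have the tag "781" point at the existing image without pulling it through Docker, the image manifest can be re-put under a new tag. A sketch using boto3 (which the question asked about) is below; the repository name and tags are placeholders. Note that the result is still a single image carrying two tags, for the reason explained above.

    import boto3

    ecr = boto3.client("ecr")
    repo = "my-repo"  # placeholder repository name

    # Fetch the manifest of the existing image by its current tag.
    response = ecr.batch_get_image(
        repositoryName=repo,
        imageIds=[{"imageTag": "780"}],
    )
    manifest = response["images"][0]["imageManifest"]

    # Push the same manifest under the new tag; no layers are copied.
    ecr.put_image(
        repositoryName=repo,
        imageManifest=manifest,
        imageTag="781",
    )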
I have an existing image inside an ECR repo with the tag "780" and I wanted to make a copy of it inside the same repo with the tag "781". I tried executing the commands below, which I found here, but that just assigns a new tag to the same image when the same repo is used:

    docker login REPO
    docker pull REPO/IMAGE:TAG
    docker tag REPO/IMAGE:TAG REPO/IMAGE:NEWTAG
    docker push REPO/IMAGE:NEWTAG

Is there an API or utility (preferably in Python), or any other way, with which this can be achieved?
How to duplicate/clone an image inside same AWS ECR repository with different tag?
Not implemented in the AWS Toolkit plugin yet. See https://github.com/aws/aws-toolkit-jetbrains/issues/1883
I am trying to use the plugin AWS Toolkit (IntelliJ) with localstack, but I do not see any option or configuration file to include localstack endpoint/configuration. Is it possible?
Can AWS Toolkit in IntellIJ be used with localstack?
Under mouse settings in Windows 10, there's an option there called "scroll inactive windows when I hover over them". Make sure that is enabled to allow mouse scroll inside the virtual desktop.
I am using an AWS WorkSpace with Linux for some work, and using the Windows client to connect to it. Everything seems to be working fine except that I cannot use mouse scroll in the workspace (not working in Firefox, the terminal, or any window). Scroll works fine on the machine where the client is running. Left-click and right-click both work fine. I tried to find an answer on the AWS forums and SO but couldn't find anything related to this. If you search Google, there is one similar thread, but it's related to a mouse with some extra buttons. Scrolling seems like basic functionality that should be provided. Any help would be appreciated. Thanks.
How to make mouse scroll work in AWS Workspace
You can create a workflow by using AWS Step functions and that is able to perform ETL operations on the data that you are describing. (In cases where a given data set is too large that will timeout Lambda functions, then look at using Glue. However, given your use case and the data that you describe, I doubt that is the case here and Lambda will work).You can use Lambda functions to perform the data operations and the AWS SDK to invoke AWS Service operations to meet your business requirements.As an example of how to use Lambda and AWS Step functions to perform this use case, see this AWS tutorial, that shows a similar use case that reads an excel document that is located in an Amazon S3 bucket, extracts the data and puts the data into an Amazon DynamoDB table.This AWS tutorial is implemented by using the AWS SDK for Java ; however, you can write the Lambda functions in any of the supported programming languages. This will certainly point you in the right direction.Creating an ETL workflow by using AWS Step Functions and the AWS SDK for JavaShareFollowansweredJul 6, 2021 at 12:05smac2020smac202010.7k44 gold badges2424 silver badges4242 bronze badges2Thank you for the help. Any advantange of using Step Functions over AWS Glue?–ERRJul 6, 2021 at 17:34Both are valid. But if the data is not going to break Lambda - using Lambda is an option–smac2020Jul 6, 2021 at 18:04Add a comment|
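To make the Lambda part of that workflow concrete, here is a rough Python sketch (field names are taken from the three formats in the question; everything else, including how the records arrive in the state input, is an assumption) that normalises a record from any of the school files into one common shape before it is written to the target database:

def normalise_record(record):
    """Map the differing school schemas onto one common shape (illustrative only)."""
    first = record.get("firstname") or record.get("first_name") or record.get("name", "")
    last = record.get("lastname") or record.get("last_name") or ""
    return {
        "first_name": first,
        "last_name": last,
        "subject": record.get("topic") or record.get("subject"),
        "grade": record.get("mark") or record.get("grade") or None,
    }

def lambda_handler(event, context):
    # 'records' is assumed to be provided in the Step Functions state input
    return [normalise_record(r) for r in event.get("records", [])]

A Step Functions state machine would then chain one such Lambda per source file (or one generic Lambda fed different inputs) and a final state that loads the normalised rows into RDS.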
Hi, I have 3 different files (2 x CSV and 1 x JSON) with student transcript grades from different schools. The first school's CSV has the following structure:
firstname | lastname | topic | mark
Mark | Johnson | Math | A+
John | Fisher | Art | B-
The second school has a CSV file with the structure below:
name | topic | mark
Peter | Music | A+
Mary | Art | B-
Finally, the 3rd school is a JSON file with the structure below:
[ { "firstname": "Peter", "lastname": "McCkaulay", "subject": "Mathematics", "grade": 49 }, { "first_name": "Mary", "last_name": "Jane", "subject": "Physics", "grade": "" }, { "first_name": "Joseph", "last_name": "Brighton", "subject": "Soc. Studies", "grade": 89 } ]
Can anyone please give me some recommendations on how to build an efficient ETL process on AWS that will allow me to process the data from the 3 different schools and load it into an AWS RDS database (PostgreSQL, MySQL, etc.) so I can run some analysis over the data? I know I could achieve this by loading the 3 files into S3, then creating a Lambda to load the data into DynamoDB and then loading that into RDS. Is that the best option, though? Any help is appreciated.
AWS - ETL - JSON / CSV files to RDS
I heard once that Amazon Prime Video uses CloudFront and they have videos that get cached in CloudFront, but I have never seen any actual figures published about the size of the cache.My assumption is that CloudFront caches everything, but older content falls off the cache when they run out of space. So, if your videos keep getting watched, they'll stay in the cache.CloudFront uses a Regional model, so if a cache at the edge does not have content (eg Manchester, England) it will go to the cache in the nearest Region (London). If that cache is missing the content, it will go back to the source. So, this means that several edges can benefit from a nearby regional cache, which is more likely to have content since it would receive more 'hits' (and I assume it would also have a larger cache).If you want to measure how well CloudFront is caching, you can determine whether something was served from the cache by looking for X-Cache: Hit from cloudfront or X-Cache: Miss from cloudfront in the page headers. CloudWatch can also provide a Cache Hit Rate, which gives the proportion of requests that were served from CloudFront edge caches instead of going to origin servers for content.ShareFollowansweredJul 6, 2021 at 10:11John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges498498 bronze badgesAdd a comment|
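If you want to script that header check, a small sketch using only the Python standard library (the CloudFront URL below is a placeholder) could be:

import urllib.request

# Hypothetical CloudFront-served object; a HEAD request avoids downloading the video
req = urllib.request.Request(
    "https://d111111abcdef8.cloudfront.net/videos/sample.mp4", method="HEAD"
)
with urllib.request.urlopen(req) as resp:
    x_cache = resp.headers.get("X-Cache", "")
    print(x_cache)  # e.g. "Hit from cloudfront" or "Miss from cloudfront"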
What is the maximum cache space allowed per distribution in the AWS CloudFront CDN at each POP? For example: I run a 4K video sharing website and each video is approximately 2 GB. 100K users from the same city come in one day to watch videos (from a 10K video collection) on the website. So CloudFront needs to serve 10,000 different videos to those 100K users, and each video is 2 GB, which is 20,000 GB (20 TB) in total. Does CloudFront store all of that 20 TB of content in the cache at that specific POP?
Total cache space limit per CloudFront distribution per POP?
You can't. It is not even defined in AWS CLI/API (https://docs.aws.amazon.com/Route53/latest/APIReference/API_Operations_Amazon_Route_53.html).However, you are kind of protected because deletion of a domain in Route 53 requires confirmation as AWS states:Important: When we receive a request to delete a domain, ICANN requires us to get confirmation from the current registrant contact. We will send an email from[email protected]or[email protected]to the registrant contactI would not give much importance to the scan result of that tool, since what would actually keep your domain safe against unwanted deletion, renew or updates is securing your AWS account, for instance, setting 2FA (two factor authentication) for your root user. If your access to AWS is not for your personal account (like your own website or experiments) then it is strongly recommended that you avoid login in with the root user for common tasks, and instead create IAM Roles based on policies so each (group of) user has a specific task.Note that only clientTransferProhibited (Transfer Lock) is enabled in Route 53 because it refers to an operation that can be (maliciously) initiated externally and not only within Route 53.ShareFolloweditedJun 29, 2021 at 23:27answeredJun 29, 2021 at 4:45Guillermo Garcia MaynezGuillermo Garcia Maynez8611 silver badge33 bronze badges1This is essentially the same answer I got from AWS support: not supported in AWS route 53–qUEnbcArJun 30, 2021 at 17:52Add a comment|
An external vulnerability scanner flagged a domain I manage through AWS Route 53 as not having the clientDeleteProhibited, clientRenewProhibited, and clientUpdateProhibited EPP status codes set. I confirmed this via whois:
Good whois entry for a compliant domain:
# whois.registrar.amazon.com
# ...
Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
Domain Status: clientRenewProhibited https://icann.org/epp#clientRenewProhibited
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
Bad entry for the non-compliant domain:
# whois.registrar.amazon.com
# ...
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
How can I configure AWS Route 53 to enable these status codes?
How to set clientDeleteProhibited, clientRenewProhibited, and clientUpdateProhibited EPP status codes in AWS Route 53?
You have 2 options: 1. Use a regular EC2 instance, like this example - Android-Emulator-on-AWS-EC2. I tried it and it works well, but you don't have any easy way to get the emulator with a GUI; you can work only with ADB. 2. Use bare metal instances like c5.metal (the cost is $4 per hour); this is a good guide on how to do it: Deploying Android Emulators on AWS EC2. In this link he explains the difference between the instance types.ShareFollowansweredJul 13, 2021 at 12:36EliyaEliya13988 bronze badgesAdd a comment|
Well, I have already gone through several blogs about running an Android emulator on Amazon EC2, but none of them work, and they are generally 3 to 5 years old. I would like to know: is there any alternative, workaround, or trick in 2021 to run Android Studio and the emulator on an Amazon EC2 instance (Ubuntu or Windows Server) without using any third party like Bluestacks, and for free? Thanks
How to run android emulator on amazon ec2?
When accessing an Amazon RDS database from the Internet, the database needs to be configured for Publicly Accessible = Yes. This will assign a Public IP address to the database instance. The DNS Name of the instance will also resolve to the public IP address. For good security on publicly-accessible databases, ensure that the Security Group only permits access from your personal IP address.ShareFollowansweredJun 22, 2021 at 22:58John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges498498 bronze badgesAdd a comment|
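If you prefer to flip that setting from code rather than the console, a hedged boto3 sketch (the instance identifier is a placeholder) would be:

import boto3

rds = boto3.client("rds")

# Make the instance publicly accessible (placeholder identifier);
# the security group must still allow your IP on port 5432.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",
    PubliclyAccessible=True,
    ApplyImmediately=True,
)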
I am trying to connect to my RDS database from my computer with a python script using psycopg2. python code:import psycopg2 from db_credentials import * import logging def get_psql_conn(): conn = psycopg2.connect(dbname=DB_NAME, user=DB_USER, password=DB_PASS, host=DB_HOST) logging.info("connected to DB!") return connI get the following error:psycopg2.OperationalError: could not connect to server: Operation timed out Is the server running on host ********* and accepting TCP/IP connections on port 5432?My security groups assigned to the RDS database:SG 1:SG 2:Now i tried to make a security group which allows my computer IP to access the DB.SG 3:I can connect to the DB from my ec2 instances, running the same python script as above. This seemingly has to do with the 2nd security group, as when i remove it, i can no longer connect from my ec2 instances either. It then throws the same error i get when trying to connect from my computer.I have little understanding of RDS or security groups, i just followed internet tutorials, but seemingly couldnt make much sense out of it.Any help is greatly appreciated! Thanks
Connect to AWS RDS database via psycopg2
You can implement addEventListener in use effect like this.import {AppState} from 'react-native'; useEffect(() => { setTimeout(() => { AppState.addEventListener("change", _handleAppStateChange); }, 2000); return () => { AppState.removeEventListener("change", _handleAppStateChange); }; }, []);Here you define your _handleAppStateChangeconst _handleAppStateChange = (nextAppState) => { if ( appState.current.match(/inactive|background/) && nextAppState === "active" ) { console.log("App has come to the foreground!"); //clearInterval when your app has come back to the foreground BackgroundTimer.clearInterval(interval) }else{ //app goes to background console.log('app goes to background') //tell the server that your app is still online when your app detect that it goes to background interval = BackgroundTimer.setInterval(()=>{ },100) appState.current = nextAppState; console.log("AppState", appState.current); } }ShareFollowansweredJun 19, 2021 at 22:13Awais IbrarAwais Ibrar60555 silver badges1515 bronze badgesAdd a comment|
There are instances in my React Native app where a user may navigate away from the app, but then come back. While running it in Expo, if the user is away too long, they will lose connection to the server. How can I prevent/correct this? We are using a Websocket
How to prevent connection loss in React Native?
No. It is not possible to 'train' Amazon Textract.The available actions are limited to analysing a document and detecting text.See:Actions - Amazon TextractShareFollowansweredJun 19, 2021 at 8:23John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges498498 bronze badges2Thanks You Sir. Is there any other options in AWS where In can create my own Textract and train it.–Syed Kounain Abbas RizviJun 19, 2021 at 9:19Amazon SageMakeris a machine learning service where you can train models, but it's a very complex service and probably isn't what you are seeking.–John RotensteinJun 19, 2021 at 10:37Add a comment|
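For reference, the kind of call that is available looks like the boto3 sketch below (the bucket and key are placeholders); you can choose which feature types to extract, but there is no parameter for supplying your own training data:

import boto3

textract = boto3.client("textract")

# Analyse a single-page document stored in S3 (placeholder bucket/key)
resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "form.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)
for block in resp["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block.get("Text", ""))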
So far my Textract tests are very impressive for handwriting, but I see that it sometimes fails to recognise some forms and some values. Is it possible to train it? If I'm scanning the same type of form/document, it would be very useful to amend the results and teach it where the boundaries of some form elements lie, and some key-value associations as well. It will be a real deal breaker for the kind of service I'm trying to design. Thanks in advance.
How to customise AWS Textract?
I'm not sure if this will work for Athena, as it's based on a very old version of Presto/Trino.In recent versions of Trino (formerly known as PrestoSQL), you can do this:Cast thetimestamp with time zonetotimestampto remove the timezone part.Then, usewith_timezoneto reinterpret the resultingtimestampinUS/Eastern.Finally, useAT TIME ZONEto change the time zone of the resultingtimestamp with time zonewhile preserving the instant.Take a look at the example below:trino:tiny> WITH t(ts) AS (VALUES TIMESTAMP '2021-06-09 19:00:36.000000 UTC') -> SELECT with_timezone(cast(ts as timestamp(6)), 'US/Eastern') AT TIME ZONE 'America/Los_Angeles' -> FROM t; _col0 ------------------------------------------------ 2021-06-09 16:00:36.000000 America/Los_Angeles (1 row)ShareFollowansweredJun 11, 2021 at 19:11Martin TraversoMartin Traverso5,0211717 silver badges2525 bronze badgesAdd a comment|
I need to change a UTC timestamp to 'US/Eastern' timestamp without changing the date and time - essentially update only the timezone information and later convert that to a different timezone.For example (what I need):'2021-06-09 19:00:36.000000' UTC --> '2021-06-09 19:00:36.000000' US/EasternThen I need to convert that to 'America/New_York'.'2021-06-09 19:00:36.000000' US/Eastern --> '2021-06-09 16:00:36.000000' America/Los AngelesWhen I try the query below, it's not giving me the correct results, since it is converting from UTC to America/Los Angeles. When it should be US/Eastern to America/Los Angeles.SELECT id , date_utc , CAST(date_utc AT TIME ZONE 'America/Los Angeles') AS date_la FROM call_records
Prestosql/Amazon Athena: Time Zone Change
I have used before the API Gateway Resource Policy:https://www.serverless.com/framework/docs/providers/aws/events/apigateway/#resource-policyFor the lambda function association directly you can take a look at that thread:https://github.com/serverless/serverless/issues/4926ShareFollowansweredJun 6, 2021 at 23:14Richard LeeRichard Lee2,16522 gold badges2626 silver badges3333 bronze badges2How can I specify it to aaliasof my lambda function?–Joey Yi ZhaoJun 6, 2021 at 23:20There is a plugin for thatserverless-aws-alias. This article helped me on that:medium.com/swlh/serverless-alias-insight-e1db93a69562–Richard LeeJun 7, 2021 at 14:21Add a comment|
I am using serverless.yml to deploy Lambdas to AWS and I'd like to know how to configure the resource-based policy for my Lambda. I deploy a customised alias for my Lambda and need to grant invoke:lambda in the resource-based policy. So when you open Lambda -> Configuration -> Permissions, the policy should appear as below. When I use the role configuration in serverless.yml, it only changes the permissions for my Lambda execution role. How can I modify the resource-based policy for my Lambda?
How can I provide resource-based policy in my lambda via serverles.yml?
Based on the comments.If you terminate SSL on the load balancer (LB), SSL-related information is not carried over to your targets. To ensure full SSL-forwarding to your targets, you have to useTCP listener. This way your targets will be responsible for handling SSL, and subsequently will be able to custom process it.ShareFollowansweredMay 31, 2021 at 10:05MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badgesAdd a comment|
I have an AWS classic load balancer. Here are my listeners: The AWS classic load balancer is doing TLS termination and redirecting the traffic to port 30925 of my nodes. The process listening on port 30925 is an Istio gateway, which afterwards routes traffic based on the SNI of the request. However, the AWS classic load balancer doesn't seem to keep the SNI of the request after TLS termination. Is there any documentation regarding the behavior of the load balancer in that situation? I found a couple of links talking about SNI (here for example), but they only talk about the load balancer itself handling the routing based on the SNI.
Does the AWS classic load balancer keep the SNI after TLS termination?
Spring Boot apps run nicely on Elastic Beanstalk. However, you do need to set some variables. For example, have you set server-port variable to 5000?And as you stated, to successfully use a Service Client, you can set environment variables for your creds. Here is an end to end walkthrough that shows how to successfully put a Spring BOOT app that invokes several AWS Services on Elastic Beanstalk.Creating your first AWS Java web applicationPS - your log file mentions a ZIP file. Be sure to create the JAR properly as discussed in the above example.ShareFolloweditedMay 27, 2021 at 13:26answeredMay 27, 2021 at 13:01smac2020smac202010.7k44 gold badges2424 silver badges4242 bronze badges23Thank you very much for this link. It finally starts, in my pom.xml I removed the following:<configuration> <executable>true</executable> </configuration>–Markus G.May 27, 2021 at 13:17I am glad it helped you. We have quite a few use cases for Java V2 here that includes many AWS services working together.github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/…–smac2020May 27, 2021 at 13:18Add a comment|
I finally reached the point where my Elastic Beanstalk instance/environment got launched (Java Corretto 11 platform). Now it fails starting up the provided .jar file. In the eb-engine.log file, I am not able to find any more detail about the error than this:
2021/05/27 11:36:25.889735 [INFO] Executing instruction: StageJavaApplication
2021/05/27 11:36:25.889871 [ERROR] An error occurred during execution of command [app-deploy] - [StageJavaApplication]. Stop running the command. Error: staging java app failed due to invalid zip file
The jar file is a Spring Boot application built with mvn -B package. Locally the whole thing starts, but then crashes because the expected environment variables are not set (expected behaviour). But it seems AWS is not even starting the application. Any suggestions on this?
Elastic Beanstalk fails creating
You could use AWS EventBridge for this task; it uses the same underlying API as CloudWatch Events but with some relevant architectural changes to better implement an event-driven architecture. Here's the official documentation on how to implement a schedule rule; you're looking to use an ECS target. AWS Batch serves a different purpose than the one in your use case, as per their official documentation: AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. What you're trying to do is quite simple; I recommend you keep it simple and don't try to overcomplicate it.ShareFollowansweredMay 25, 2021 at 12:52YayotrónYayotrón1,79911 gold badge1616 silver badges2828 bronze badgesAdd a comment|
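As a rough sketch of what such a schedule rule with a Fargate task target might look like in boto3 (all ARNs, subnet IDs, and names below are placeholders, not values from the question):

import boto3

events = boto3.client("events")

events.put_rule(
    Name="nightly-job1",
    ScheduleExpression="cron(0 2 * * ? *)",  # 02:00 UTC every day
    State="ENABLED",
)

events.put_targets(
    Rule="nightly-job1",
    Targets=[{
        "Id": "job1-fargate-task",
        "Arn": "arn:aws:ecs:eu-west-1:123456789012:cluster/my-cluster",        # placeholder
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",             # placeholder
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789012:task-definition/job1:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],                   # placeholder
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }],
)

Job1 itself can then invoke the Job2 tasks with ecs.run_task, or the whole chain can be modelled as a Step Functions state machine if you need retries and visibility.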
In my application, I need to run a Fargate job (Job1) which loops through a particular task and invokes multiple tasks of another Fargate job (Job2). So I want to know what the possible ways are to run this whole operation as a scheduled task. I tried to create an ECS cluster with 2 containers and schedule both Job1 and Job2 using CloudWatch Events. But I was wondering, what is the use of AWS Batch? Is it an alternative to CloudWatch Events? Please share your thoughts.
How to run a scheduled job in AWS?
The thing is, Elastic Beanstalk removes the escape \ character from the environment value. So your private key, for example -----BEGIN PRIVATE KEY-----\nXIaEvQIBKDAN..., becomes -----BEGIN PRIVATE KEY-----nXIaEvQIBKDAN... and is invalid. In case anyone needs an example, this is what I did. As suggested in the comment above, I replaced all \n with @ and put the key in the EB environment's Environment properties. When my app runs, it will get the key like this: -----BEGIN PRIVATE [email protected], and before using it, just replace @ back to \n with process.env.PRIVATE_KEY.replace(/\@/g, '\n')ShareFollowansweredMay 31, 2022 at 14:52Jirachai UraijareeJirachai Uraijaree18611 silver badge66 bronze badgesAdd a comment|
I am moving my Firebase-authenticated Node.js application from Heroku to AWS Elastic Beanstalk. The private key is not parsing correctly; it is having trouble understanding the newline characters from the environment variables. For reference, I have already tried the solution found in this Stack Overflow post: Node.js - Firebase Service Account Private Key won't parse
privateKey: process.env.FIREBASE_PRIVATE_KEY.replace(/\\n/g, '\n')
Unfortunately, while this works on Heroku, the same is not true for AWS Elastic Beanstalk. After logging the private key in the terminal I notice it is replacing the \n with simply 'n' and making no newline character at all. Newline characters are not allowed while setting environment variables in the EB software configuration; any new lines are replaced with just a single space. I am not sure if there is another way short of keeping the key directly in the code, which I would like to avoid.
Parsing the firebase private key on AWS Elastic beanstalk
If you haven't wrote youruser-datawith your ecs cluster name, then your EC2 instances will not register with the cluster. You have to explicitly register them with the cluster using user-data:#!/bin/bash echo "ECS_CLUSTER=MyClusterName" >> /etc/ecs/ecs.configas explained inBootstrapping container instances with Amazon EC2 user data.ShareFollowansweredMay 12, 2021 at 0:38MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badgesAdd a comment|
I want to deploy a dockerized python app using ECS, I followed this tutorial but ran into problems:https://www.youtube.com/watch?v=zs3tyVgiBQQThe cluster is created and an ECS instance appears in the EC2 menu:https://ibb.co/GR1SMDDBut no ECS instances appear in the ECS menu:https://ibb.co/QjwypHtI created a task and defined a container, similar to what's shown at that point in the video:https://youtu.be/zs3tyVgiBQQ?t=619When I'm about to run a new task, in the cluster menu, most fields are already filled with what I wrote for the task above, but when I 'run task', this happens:https://ibb.co/BPvG6rhUnable to run taskNo Container Instances were found in your cluster.I've been looking for solutions for a few days now, but I'm new to AWS, and I'm running out of ideas, so any help on how to solve this, step-by-step, is very appreciated
AWS EC2 ECS No Container Instances were found in your cluster
Your event is invalid in CloudWatch Events (CWE) rule, but should be fine for CWE replacement, i.e.AWS EventBridge(EB). Thus I would recommend using EB for that event.EB is basically a new version of CWE, so you can do same thing.ShareFollowansweredMay 11, 2021 at 9:37MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badgesAdd a comment|
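A sketch of creating the equivalent rule against EventBridge with boto3 (the bucket and prefix are the ones from the question; the rule name is made up, and the rule still needs a target added afterwards):

import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject", "CompleteMultipartUpload"],
        "requestParameters": {
            "bucketName": ["my-data-bucket"],
            "key": [{"prefix": "my-data-prefix/"}],
        },
    },
}

events.put_rule(
    Name="s3-prefix-uploads",          # made-up rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
    EventBusName="default",
)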
I wish to create an AWS CloudWatch Event rule for S3 create events, in a specific bucket and prefix. Since CloudWatch does not support wildcards, I am instead trying to give the prefix explicitly as in the following example:Does AWS CloudWatch Events Rule supports any wildcards in S3 bucket/key names(for obfuscation I provide the bucket and prefix names here asmy-data-bucketandmy-data-prefix/)My JSON rule:{ "source": [ "aws.s3" ], "detail-type": [ "AWS API Call via CloudTrail" ], "detail": { "eventSource": [ "s3.amazonaws.com" ], "eventName": [ "PutObject", "CompleteMultipartUpload" ], "requestParameters": { "bucketName": [ "my-data-bucket" ], "key": [{ "prefix": "my-data-prefix/" }] } } }Yields the error:Event pattern contains invalid element (can only be Strings enclosed in quotes, numbers, and the unquoted keywords true, false, and null)This might be that the curly braces are not allowed for thekeyrequest parameter, but how can you create a rule to listen to just a specific prefix, without being allowed wildcards? (and preferably without having to customize the CloudTrail trail as inCloudwatch event rule listen to s3 bucket with specific keypath)
Create AWS CloudWatch Event rule for S3 prefix
The support for64bit Amazon Linux 2018.03 v2.9.17 running PHP 7.2finished onMay 2, 2021. The onlycurrentversions of EB for PHP are based onAmazon Linux 2(AL2).Since AL1 is largely different then AL1 (what you have now), the only way to upgrade is to performmigrationfrom AL1 to AL2 as explained in:Migrating your Elastic Beanstalk Linux application to Amazon Linux 2ShareFolloweditedMay 9, 2021 at 10:12answeredMay 9, 2021 at 9:47MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges11Hi @Marcin, I managed to upgrade my environment thanks! But I encounter a new issue now - for which I created another question:stackoverflow.com/questions/67582584/…–RockyStrongoMay 18, 2021 at 8:30Add a comment|
When I log in to Elastic Beanstalk, I see that my platform is deprecated: I guess it is because of the PHP 7.2 version that is now unsupported. But when I click on change, I don't see any higher version. In addition, I can't find where in Elastic Beanstalk I can change the PHP version of my app.
Update deprecated php platform on Elastic Beanstalk
you should use restricted data token for buyer info and detailed address info, instead of "/orders/v0/orders/{orderId}/address" endpointShareFollowansweredFeb 12, 2022 at 14:45Mustafa AltunokMustafa Altunok2122 bronze badgesAdd a comment|
In the old MWS "getOrders" API call, the response contained a field for each order named "shippingAddress" which I could use to create my shipping labels. In the new SP-API getOrders there is no such field. Instead, you can get the shipping address via "/orders/v0/orders/{orderId}/address", but you have to call this API for each order (>100) and the rate limit is set at 1 per second, so for 100 orders I will have to wait over 1.5 minutes to get all the addresses. Is there a possibility to get all orders with their shipping addresses? Or to increase the request limit?
Amazon SP-API get shipping addresses for all orders
Look into generating additional claims in youpre-token-generation handlerBasically you can create an attribute that includes organization role mappinge.g.{ // ... "custom:orgmapping": "OrgA:User,OrgB:Admin" }then transform them in your pre-token-generation handler into "pseudo" groups that don't actually exist in the pool.ShareFolloweditedApr 27, 2021 at 18:13answeredApr 27, 2021 at 18:03Andrew GillisAndrew Gillis3,41022 gold badges1515 silver badges1717 bronze badges3Thanks! I'll try this and I'll update it here–dfrancaApr 29, 2021 at 20:36@dfranca Did this work for you? What did your final solution look like? I have a very similar case and would love to know how you tackled this.–EtepMar 31, 2022 at 20:57@Etep Yes, I have added a edit_groups on my schema that uses the combination of OrgA:ROLE, then I have created a custom pre token generation handler to add the claim with the user Org:Role combination–dfrancaApr 2, 2022 at 12:07Add a comment|
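A minimal Python sketch of such a pre token generation trigger, assuming the org/role mapping is already stored on the user in a custom attribute like the answer's "custom:orgmapping" (the attribute name and the upper-cased group format are assumptions, not a fixed convention):

def lambda_handler(event, context):
    # Hypothetical custom attribute holding e.g. "OrgA:User,OrgB:Admin"
    mapping = event["request"]["userAttributes"].get("custom:orgmapping", "")

    pseudo_groups = []
    for pair in mapping.split(","):
        if ":" in pair:
            org, role = pair.split(":", 1)
            pseudo_groups.append(f"{org}__{role.upper()}")   # e.g. "OrgA__ADMIN"

    # Override the cognito:groups claim with the pseudo-groups;
    # they do not need to exist as real groups in the user pool.
    event["response"]["claimsOverrideDetails"] = {
        "groupOverrideDetails": {"groupsToOverride": pseudo_groups}
    }
    return event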
I have an AWS Amplify application that has a structure with multi-organizations:Organization A -> Content of Organization A Organization B -> Content of Organization BLet's say we have the user Alice, Alice belongs to both organizations, however, she has different roles in each one, on organization A Alice is an administrator and has more privileges (i.e: can delete content or modify other's content), while on Organization B she is a regular user.For this reason I cannot simply set regular groups on Amplify (Cognito), because some users, like Alice, can belong to different groups on different organizations.One solution that I thought was having a group for each combination of organization and role. i.e:OrganizationA__ADMIN,OrganizationB__USER, etc So I could restrict the access on the schema using a group auth directive on theContentmodel:{allow: group, groupsField: "group", operations: [update]},The content would have agroupfield with a value:OrganizationA__ADMINThen I could add the user to the group using theAdmin Queries APIHowever, it doesn't seem to be possible to add a user to a group dynamically, I'd have to manually create each group every time a new organization is created, which pretty much kills my idea.Any other idea on how I can achieve the result I'm aiming for? I know that I can add the restriction on code, but this is less safe, and I'd rather to have this constraint on the database layer.
AWS Amplify (AppSync + Cognito) Authorization using dynamic groups per organitzation/tenant
There is athird party github repowith public layers, including pandas. You don't have to do anything to use, except adding the layer arn to your function. The arndepends on your region, so you have to choose your region. For example, forus-east-1the pandas layer for python 3.8 is:arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pandas:31ShareFollowansweredApr 27, 2021 at 8:05MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges21Thank you so much. This solution saves a lot of time and really easy to implement.–era s'qApr 27, 2021 at 8:17@eras'q No problem. Glad it helped.–MarcinApr 27, 2021 at 8:19Add a comment|
I'm trying to upload a deployment package to my AWS Lambda function following the article https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e. My final zip file is as follows: https://drive.google.com/file/d/1NLjvf_-Ks50E8z53DJezHtx7-ZRmwwBM/view but when I run my Lambda function I get the error Unable to import module 'lambda_function': No module named 'importlib_metadata'. My handler is named lambda_function.lambda_handler, which is the file name and the function to run. I also tried uploading these zip files as layers, excluding lambda_function.py, and get: What am I doing wrong? EDIT: I tried using zip/lambda_function.lambda_handler as my handler and am still getting Unable to import module 'zip/lambda_function': No module named 'zip/lambda_function'
How to upload pandas, sqlalchemy package in lambda to avoid error "Unable to import module 'lambda_function': No module named 'importlib_metadata'"?
For some services AWS provides such equivalent, but there is no general way for that. You could try a third party toolConsole Recorder for AWS:Records actions made in the AWS Management Console andoutputs the equivalent CLI/SDK commandsand CloudFormation/Terraform templates.ShareFollowansweredApr 25, 2021 at 9:06MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badgesAdd a comment|
How can I find the CLI equivalent of any console operation in AWS?Is there any way to generate that?
How to generate AWS CLI equivalent of any console operation
With API GW you will get rate limiting, throttling and if you want to authenticate and authorize requests based on OAUTH or any other auth model that can be done with API GW.ShareFollowansweredAug 5, 2021 at 2:15BharatBharat2122 bronze badgesAdd a comment|
I am trying to understand the use of API Gateway along with an AWS ALB (Ingress Controller) for an EKS cluster. Let's say there are 10 microservices in the AWS EKS cluster running on 10 pods. The EKS cluster is in a private VPC. I can create a Kubernetes Ingress which will create an ALB and provide rule-based routing. The ALB will be in the public VPC and, I believe, AWS will allocate a public IP to the ALB. I can configure the ALB behind Route53 for access using the domain name. My understanding is that ALB supports multiple features including host- or path-based routing, TLS (Transport Layer Security) termination, WebSockets, HTTP/2, AWS WAF (Web Application Firewall) integration, integrated access logs, and health checks. So, security-wise there should not be any challenge. Am I wrong? Please refer to the link for the above-mentioned solution architecture. Is there any specific use case where I need to use AWS API Gateway in front of the AWS ALB in the above-mentioned architecture? What additional benefits does AWS API Gateway have along with the AWS ALB? Should I put the AWS ALB in the private VPC if I decide to use AWS API Gateway in front of it?
AWS API Gateway infront of AWS ALB (Ingress Controller) for EKS
We are using MWAA 2.0.2 and managed to use Airflow's Rest-API through MWAA CLI, basically following the instructions and sample codes of theApache Airflow CLI command reference. You'll notice that not all Rest-API calls are supported, but many of them are (even when you have a requirements.txt in place).Also have a look atAWS sample codes on GitHub.ShareFollowansweredOct 14, 2021 at 14:19dovregubbendovregubben38422 silver badges1818 bronze badgesAdd a comment|
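For completeness, the usual pattern for going through the MWAA CLI endpoint from Python looks roughly like this (the environment name is a placeholder, and which Airflow CLI commands work depends on your Airflow version and, as noted above, on your requirements.txt):

import base64
import json
import urllib.request

import boto3

mwaa = boto3.client("mwaa")
token = mwaa.create_cli_token(Name="my-mwaa-environment")  # placeholder env name

req = urllib.request.Request(
    url=f"https://{token['WebServerHostname']}/aws_mwaa/cli",
    data="dags list".encode("utf-8"),          # the Airflow CLI command to run
    headers={
        "Authorization": f"Bearer {token['CliToken']}",
        "Content-Type": "text/plain",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# stdout/stderr come back base64-encoded
print(base64.b64decode(body["stdout"]).decode("utf-8"))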
I have Airflow running in AWS MWAA. I would like to access the REST API, and there are 2 ways to do this, but neither seems to work for me. 1. Overriding api.auth_backend. This used to work, but now AWS MWAA won't allow you to add this option; it is considered blocklisted and not allowed. api.auth_backend = airflow.api.auth.backend.default 2. Using the MWAA CLI (Python). This doesn't work if any of the DAGs use packages that are in the requirements.txt file. a. As an example, I have "paramiko" in requirements.txt because I have a task that uses SSHOperator. The MWAA CLI fails with "no module paramiko". b. Also noted here, https://docs.aws.amazon.com/mwaa/latest/userguide/access-airflow-ui.html: "Any command that parses a DAG (such as list_dags, backfill) will fail if the DAG uses plugins that depend on packages that are installed through requirements.txt."
Accessing Airflow REST API in AWS Managed Workflows?
Found the solution. The documentation statesdocker-compose.ymlis the only file needed but it still needs to be zipped before being uploaded to Elastic Beanstalk environment.ShareFollowansweredApr 20, 2021 at 7:57Tumbleweed91Tumbleweed915133 bronze badges12Hi, testing this now, but I'm confused by the conflicting documentation: "If you use only a docker-compose.yml file to deploy your application, you don't need to create a .zip file." Further, you also used to be able to just upload a Dockerrun.Aws.json directly, without zipping, so I was expecting the same behavior.docs.aws.amazon.com/elasticbeanstalk/latest/dg/…–Joshua WolffJun 1, 2022 at 19:49Add a comment|
I have been trying to make Elastic Beanstalk work with the Docker AMI2 image and docker-compose.yml. The documentation says it should work out of the box with a docker-compose.yml file. I use ECR as the Docker registry and have updated the Elastic Beanstalk role to be able to pull images from ECR. https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/single-container-docker-configuration.html "Create a docker-compose.yml file to deploy a Docker image from a hosted repository to Elastic Beanstalk. No other files are required if all your deployments are sourced from images in public repositories. (If your deployment must source an image from a private repository, you need to include additional configuration files for authentication. For more information, see Using images from a private repository.) For more information about the docker-compose.yml file, see Compose file reference on the Docker website." However, I keep getting the following message when spinning up the environment: Instance deployment: You must specify a Docker image in either 'Dockerfile' or 'Dockerrun.aws.json' in your source bundle. The deployment failed. According to the documentation, Dockerrun.aws.json should only be required for the old AMI. Has anyone come across a similar issue?
AWS Elastic Beanstalk with AMI2 and docker-compose.yml
According to this document:https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.htmlImportantAmazon S3 website endpoints do not support HTTPS. For information about using HTTPS with an Amazon S3 bucket, see the following:How do I use CloudFront to serve HTTPS requests for my Amazon S3 bucket?Requiring HTTPS for communication between viewers and CloudFrontShareFollowansweredApr 15, 2021 at 10:40Sharuzzaman Ahmat RaslanSharuzzaman Ahmat Raslan1,57722 gold badges2424 silver badges3636 bronze badges1I understand why I have this problem, but I do now know how to fix it–KeyB0rysApr 15, 2021 at 10:41Add a comment|
I am trying to set up a redirect from one domain to another. This is what I have: Route53 (basic.domain.com) -> S3 bucket (with redirection to example.domain.com) -> example.domain.com and this scenario works fine: http://basic.domain.com redirects to example.domain.com but here I get a timeout: https://basic.domain.com redirects to example.domain.com I should probably put CloudFront between Route53 and the S3 bucket, but I am looking for a real redirect: when I type http://basic.domain.com I want to see example.domain.com in the browser address bar.
AWS S3 bucket redirection with https
ThattotalRetryDelayvalue would be most useful to you if your nodejs program were not sending multiple concurrent requests to the API. It tells you how long to wait before you sendonemore request, not 10 or 50 more.The solution to your problem might be to put your requests into some sort of internal queue and send them one at a time with a short delay between them.Or, if you know how many concurrent requests you send, you could try multiplyingtotalRetryDelayby that number and delay that much.ShareFollowansweredApr 6, 2021 at 13:30O. JonesO. Jones106k1717 gold badges123123 silver badges176176 bronze badgesAdd a comment|
I am using the @aws-sdk/client-iam SDK from AWS for JavaScript, in a Node-based server. We are using GetGroupCommand. If we call the above command aggressively, the AWS SDK throws a Throttling error, with a field error?.$metadata?.totalRetryDelay which tells us after how many milliseconds we should retry the request. Based on this trial-and-error approach we have modified the calls to sleep for a certain amount of time, but when there are too many calls they all retry after the sleep, flooding the AWS server again and causing it to throw the Throttling error once more. I couldn't find any guide/reference for AWS JS IAM SDK 3 explaining under what conditions it may throw a Throttling error. There is middleware https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_middleware_retry.html which I guess is something we can use, but I'm not sure how. A sample example of this, or best practices for throttling with AWS SDK JS 3, are not mentioned in the GitHub repo or the SDK guide. Can you show me how to handle this Throttling issue in AWS SDK 3 for JS? None of the following have any helpful information about throttling: SDK Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html Developer guide: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html Code examples: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/sdk-code-samples.html
How to handle throttling with JavaScript AWS IAM SDK 3?
How can you distinguish if a load balancer terminates TLS in itself or in the EC2sIf the Load Balancer terminates TLS, then it will haveHTTPS Listenerswith the cert associated with it.Is it different when its a Network Load Balancer? compared to a "regular" ELB.Network Load Balancers are Level 4 load balancers and work at protocol level, handling TCP/UDP connections and till recently, did not have TLS offload. That haschanged nowand the same principle holds - if there's a TLS listener, then it's mostly to be handling the TLS terminationShareFolloweditedFeb 10, 2022 at 8:27RichVel7,65066 gold badges3434 silver badges4949 bronze badgesansweredApr 4, 2021 at 9:23Sathyajith BhatSathyajith Bhat21.6k2222 gold badges9898 silver badges137137 bronze badgesAdd a comment|
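One quick way to check where TLS terminates is to look at the listener protocols. A boto3 sketch along those lines for ALBs and NLBs (the load balancer ARN is a placeholder; Classic ELBs expose the same information through the older 'elb' client's describe_load_balancers call):

import boto3

elbv2 = boto3.client("elbv2")

resp = elbv2.describe_listeners(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-lb/abc123"  # placeholder
)
for listener in resp["Listeners"]:
    # HTTPS (ALB) or TLS (NLB) listeners terminate TLS at the load balancer;
    # a plain TCP listener on 443 passes the encrypted traffic through to the EC2 targets.
    print(listener["Port"], listener["Protocol"], listener.get("Certificates", []))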
The setup: Application deployed in EC2 instances that are load balanced by an ELB, with Autoscaling Group.The requirement: secure data encryption in transit in adherence to TLS protocol between the clients and EC2 instances.The question:How can you distinguish if a load balancer terminates TLS in itself or in the EC2 instances? I am preparing for the AWS Architect Associate exam and I have encountered this problem multiple times. It seems that whether it terminates TLS in itself or in EC2 instances, it uses port 443. If I have a set of multiple choice answers of possible ELB configurations, which one should I choose if I want TLS to be terminated at EC2 instance?Is it different when it's a Network Load Balancer compared to a "regular" ELB?
AWS ELB - SSL/TLS termination confusion
Yourlambda_function.pyis inside folder calledlambda_function. Justmoveyourlambda_functiontoRWS-POC, or modify handler into:lambda_function/lambda_function.lambda_handlerShareFollowansweredApr 1, 2021 at 11:06MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges11just realised I was zipping the driectory rather than the contents of the directory! thanks !–Dais.SApr 1, 2021 at 11:14Add a comment|
I am trying to upload a zip file to AWS lambda but keep getting the error "Unable to import module 'lambda_function': No module named 'lambda_function'"I've started very basic by creating a zip file named "lambda_function.zip" with one file inside "lambda_function.py". At a later stage I will need to include packages in the zip file, but for now it's a very simple function named lambda_handler only using json.Once uploaded this is the file structure and the error message received after testing:code and error message screenshotIf I move lamda_function.py into the root folder "RWS-POC" then it works, but later on when I need to upload a larger zip file this won't be an option as editing via the interface is disabled.I can also confirm that the handler is set to lambda_function.lambda_handler and the python file is named "lambda_function" and the function named "lambda_handler"lambda_function.lambda_handler settings screenshotI'm sure I'm doing something very basic wrong, so any help would be very much appreciated.Thanks!
AWS Lambda error message "Unable to import module 'lambda_function': No module named 'lambda_function'",
It turns out the error was unrelated to the actual parsing of the file. I dug through the logs and realized that my ECR authentication token had supposedly expired. This was strange since I was using the same ECR authentication for other Elastic Beanstalk environments without issue. The solution was to generate a new authentication token for ECR, upload a new config file to S3, and point the Dockerrun authentication bucket and key fields to the new file.If you run into a similar error, look further back in youreb-enginelogs for other errors that may be the root of the problem.ShareFollowansweredMar 28, 2021 at 20:38Taylor McleanTaylor Mclean16799 bronze badgesAdd a comment|
I'm banging my head against a wall trying to figure out the source of the following error I get when trying to deploy this Dockerrun file to EB:Error: parse Dockerrun.aws.json file failed with error json: invalid use of ,string struct tag, trying to unmarshal unquoted value into intHere is the file in question:{ "AWSEBDockerrunVersion": "1", "Authentication": { "Bucket": "mybucket", "Key": "myconfig.json" }, "Image": { "Name": "1234567890.dkr.ecr.us-east-2.amazonaws.com/myimage:tag", "Update": "true" }, "Ports": [ { "ContainerPort": "3001", "HostPort": "80" } ] }I've read over the documentation here:https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/single-container-docker-configuration.htmlI can't seem to find any issues with the file. I know that AWS has validators for CloudFormation templates, does something similar exist for Dockerrun files? How would one go about troubleshooting this error?
How can I troubleshoot Dockerrun parsing errors?
If you google "SaaS Multi-tenant Storage Isolation" You'll find a number of resources on this topic. The short answer is that you can achieve what you are trying to do (pool isolation enforced by IAM policies) with DynamoDB. But with RDS you have to rely on database users and PostgeSQL RLS features.If you look at the IAM documentation for DyanmoDB and RDS condition keys, you'll see that with DynamoDB you can filter actions based on things like "dynamodb:LeadingKeys" or "dynamodb:Attributes". Where as RDS you do not have filters that get as granular as individual indicies or attributes.https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-policy-keyshttps://docs.aws.amazon.com/service-authorization/latest/reference/list_amazondynamodb.html#amazondynamodb-policy-keysHere's some addtional material on the topic worth looking at:https://www.youtube.com/watch?v=fuDZq-EspNA(includes examples at ~35mins)https://d1.awsstatic.com/whitepapers/Multi_Tenant_SaaS_Storage_Strategies.pdfhttps://aws.amazon.com/blogs/database/multi-tenant-data-isolation-with-postgresql-row-level-security/ShareFolloweditedNov 14, 2021 at 16:21answeredNov 14, 2021 at 16:07ahfxahfx35822 silver badges99 bronze badgesAdd a comment|
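To make the DynamoDB side of that pool isolation concrete, here is a hedged sketch of the kind of IAM policy involved (the table name, the tenant tag name, and the wildcard-suffix condition are illustrative choices based on the linked material, not a drop-in policy):

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/TenantData",  # placeholder
        "Condition": {
            # Only allow items whose partition key starts with the caller's tenant id
            "ForAllValues:StringLike": {
                "dynamodb:LeadingKeys": ["${aws:PrincipalTag/tenant_id}*"]
            }
        },
    }],
}

iam.create_policy(PolicyName="tenant-row-isolation", PolicyDocument=json.dumps(policy))

There is no equivalent per-row condition key for RDS, which is why the PostgreSQL RLS approach you describe is the usual answer there.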
I have an app that will have many users at various levels of privileges for what they can see/select. The users form a hierarchy: Level1 -> Level2 -> Level3, etc. Level 1 users may have many Level 2 users under them, and similarly Level 2 users may have many Level 3 users. Each user can see anything that belongs to them or to anyone directly below them. For example: a Level 2 user can see all Level 3 users' information that rolls up to her, but cannot see any Level 3 users that roll up to other Level 2 users... You get the idea. All user ids are unique. I am thinking of implementing row level security with policies that restrict based on the userid. This is working at the database level by implementing row level security and policies. However, I would like to know how this can be achieved with IAM roles and IAM-based authentication. I read the documentation. It states that while I can create IAM roles, assign them privileges at the DB level, and have individual users assume these IAM roles to access the database using auth tokens, it does not state how I can track each of the users at the database level to ensure that they can only see their own data. Any insights appreciated. S
AWS RDS Postgres/Iam Authentication/ and Row Level Security - All In One. Is this possible?
I realized that SDK v2 for Go was missing that functionality and I opened issue on Github.https://github.com/aws/aws-sdk-go-v2/issues/1169ShareFollowansweredMar 16, 2021 at 8:43szymonszymon91011 gold badge77 silver badges1313 bronze badgesAdd a comment|
I'm developing a Go app with AWS SDK v2 for Go. I want to connect to my RDS DB through RDS Proxy using an IAM role for auth. I've found examples in the SDK docs showing how to do it with SDK v1; however, with SDK v1 I have a problem with assuming the correct IAM role inside my AWS EKS pod (AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables). There was an open issue regarding that (https://github.com/aws/aws-sdk-go/issues/3101#issuecomment-604739840), however with the code below I still can't make it work with SDK v1 (AWS assumes the worker node IAM role instead of the pod role from the env vars):
sess, sessErr := session.NewSessionWithOptions(
    session.Options{
        Config: aws.Config{
            Region: aws.String(os.Getenv("DB_REGION")),
        },
        SharedConfigState: session.SharedConfigEnable,
    },
)
...
client := rds.New(sess)
pass, errToken := rdsutils.BuildAuthToken(host, os.Getenv("AWS_REGION"), user, client.Config.Credentials)
Instead, I decided to try SDK v2, but I discovered that both rdsutils and BuildAuthToken were removed from the SDK on 25 Sep 2020 (https://github.com/aws/aws-sdk-go-v2/commit/eecb706f5d1e3ca44aafca5c042ea275f4050764#diff-457ec6738454cb66ee5a04f7b14c84ecf31f37cb2f42f428cc28dc099970f8cd). Now I'm lost. With SDK v1 I'm not able to properly assume the IAM role, but with SDK v2 I don't even see any option to retrieve a token for RDS at all. Does anybody have any experience with deploying a Go app on AWS EKS which connects to RDS Proxy using an IAM role?
Golang AWS SDK and RDS Proxy IAM Auth
Below code should work in Python 3.7 runtime. Of course, you can improve the code but, it will give you what you are looking for.reqcontxt = event.get("requestContext") httpprtcl = reqcontxt.get("http") methodname = httpprtcl.get("method") print('### http method name ###' + str(methodname))Thanks.HirenShareFollowansweredFeb 24, 2021 at 21:50dossanidossani1,92233 gold badges1717 silver badges2727 bronze badges0Add a comment|
I am having difficulty getting the HTTP method used in a call to AWS Lambda via API Gateway. I created a REST API in API Gateway which makes a call to a Lambda function. In the Lambda function I want to have two functions, one for POST requests and one for GET requests, but I am unable to get the method from the event. In other threads the answers are usually for JavaScript or Java only. I run the following curl command from my terminal: curl "https://myurl/endpoint" I also tried to send a GET request via Advanced REST Client. Here's what I'm trying to do:
def lambda_handler(event, context):
    method = event['httpMethod']
    if method == "GET":
        return get_function()
    if method == "POST":
        return post_function()
Running the above code results in a KeyError. I have tried this as well: method = event['requestContext']['http']['method'] I tried printing out the event itself like this: method = event. All I get from this is {}, both in the response and in CloudWatch. How can I read the HTTP method in a request?
AWS lambda get http method with Python
AFAIK there is no AWS Organizations integration for the IAM service actiongenerate-credential-report. You can look up all integrations in the docs:AWS services that you can use with AWS Organizations[1]. It looks like there is an integration forservice last accessed dataandIAM access analyzer.That is, for the time being, you can probably just iterate over all your accounts and callgenerate-credential-report[2]. There is a python tool on GitHub that simplifies this strategy. [3]I guess you can adjust this tool to serve your needs.[1]https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html[2]https://docs.aws.amazon.com/cli/latest/reference/iam/generate-credential-report.html[3]https://github.com/lloesche/aws-user-reportShareFollowansweredMay 2, 2021 at 23:07Martin LöperMartin Löper6,53911 gold badge1818 silver badges4242 bronze badgesAdd a comment|
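A rough sketch of that per-account iteration with boto3 (the role name is a placeholder and must exist and be assumable in every member account):

import time
import boto3

org = boto3.client("organizations")
sts = boto3.client("sts")

accounts = org.get_paginator("list_accounts").paginate().build_full_result()["Accounts"]
for account in accounts:
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account['Id']}:role/OrganizationAccountAccessRole",  # placeholder role
        RoleSessionName="credential-report",
    )["Credentials"]

    iam = boto3.client(
        "iam",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Ask IAM to build the report, then poll until it is ready
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)

    report_csv = iam.get_credential_report()["Content"].decode("utf-8")
    print(account["Id"], report_csv.splitlines()[0])  # header row as a sanity check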
I am looking to generate an AWS credential report for all the accounts under an organization. Is there any way to generate a consolidated report across accounts? I know we can generate a credential report for a single account in an organization as per the AWS documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html In the same way, can we generate a credential report for all the accounts in an organization with a single root user? Or, if we set up cross-account access, will that help us get all the credentials from the assumed-role accounts?
How to generate AWS credential report for all accounts in an organization
I've just confirmed that the limit applies to the verification emails as well. After signing up 50 users, the following message is received after user signup:An error occurred (LimitExceededException) when calling the AdminCreateUser operation (reached max retries: 2): Exceeded daily email limit for the operation or the account. If a higher limit is required, please configure your user pool to use your own Amazon SES configuration for sending email.Similarly if the signup occurs via the Hosted UI, except it only mentionsAn error was encountered with the requested page..Worth mentioning that the Sign up still occurs, ie, the user is still created in the User Pool but no verification email is sent. Also, password recovery emails cannot be sent after this limit is reached, as the limit is shared and isper account, so applicable across all user pools in same account.ShareFollowansweredNov 8, 2022 at 20:10ammendoncaammendonca60611 gold badge55 silver badges1010 bronze badgesAdd a comment|
In AWS Cognito, the service notes that one should use Amazon SES for user pools due to the daily email limit of Cognito, as seen here. The quotas documentation shows that the maximum number of emails sent per day is 50. In the 'Configuring Email or Phone Verification' docs, it states that there is no charge for sending verification codes to email addresses. This documentation does not explicitly bring up Cognito email quotas. I cannot find a clear answer as to whether or not verification code emails apply to the quota. I'm trying to avoid a situation in which >50 users try to sign up in a day but cannot receive their verification email. Can anyone clarify this? Thanks.
Do verification emails apply to the AWS Cognito daily email quota?
When IAM authentication is enabled, requests to the HTTP endpoint must be signed using SigV4. You can use a tool likeawscurlto do this.Here is an example from the Amazon Neptune documentation that I have modified slightly to have it point to the /status endpoint.Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables correctly (and also AWS_SECURITY_TOKEN if you are using temporary credential). You can also pass these as parameters toawscurl. Then use a command such as (change the region to be your region).awscurl -X GET --service neptune-db --region us-west-2 "$SYSTEM_ENDPOINT/status"You can get temporary credentials usingstsvia the AWS CLI tools as follows:aws sts get-session-tokenIf you are running on an EC2 instance you can get the tokens from the metadata service so long as the EC2 instance has a role attached that has access to Neptune. More details here:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.htmlShareFolloweditedFeb 13, 2021 at 18:23answeredFeb 13, 2021 at 17:52Kelvin LawrenceKelvin Lawrence15.3k22 gold badges1818 silver badges3939 bronze badgesAdd a comment|
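If you would rather sign the request from Python instead of awscurl, a sketch using botocore's SigV4 signer (the cluster endpoint and region are placeholders):

import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholder Neptune endpoint and region
endpoint = "https://my-neptune-cluster.cluster-abc123.us-west-2.neptune.amazonaws.com:8182/status"
region = "us-west-2"

session = boto3.Session()
request = AWSRequest(method="GET", url=endpoint, data=None)
# Sign the request for the neptune-db service; temporary credentials also get
# their security token added as a header by the signer.
SigV4Auth(session.get_credentials(), "neptune-db", region).add_auth(request)

req = urllib.request.Request(endpoint, headers=dict(request.headers), method="GET")
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))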
I am trying to connect to an AWS Neptune DB after enabling IAM DB authorisation, and the connection fails with the error below. {"code":"AccessDeniedException","requestId":"68bbc87a-cbf6-31d3-5829-91f32062239f","detailedMessage":"Missing Authentication Token"} However, it works fine when IAM DB authorisation is disabled. I have created a policy (using the link https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-policy.html) to connect to the Neptune DB and attached the policy to the IAM role that is attached to the EC2 instance. I am able to telnet to the Neptune DB endpoint on port 8182. Can someone please help?
unable to connect to AWS Neptune DB after enabling IAM DB authorisation
As your domain name is resolving to the IP address the IP address will still need to allow ingress into the IP.For this reason the change will need to happen within the host.Depending on the web server technology you are using (such as Apache or Nginx) the first host file that loads is served if no other host configuration is matched.If you add the secondary vhost for your domain ensuring you explicitly reference the domain, then in the default host rather than serving your application return a 403 instead this will prevent bypassing your domain name.More information is available in the following links:Apache VHOST configurationNginx configurationShareFollowansweredFeb 7, 2021 at 10:56Chris WilliamsChris Williams33.5k44 gold badges3434 silver badges7171 bronze badgesAdd a comment|
I have a WordPress website hosted on AWS Lightsail. I added an A record to my DNS pointing my custom domain to the static IP of the Lightsail instance. But I don't want the website to be accessible via the static IP, only via my custom domain. How can I block access to the static IP? Thanks!
AWS Lightsail - disable direct access to static ip
I'm wondering as well, but haven't seen any information in this regard.aws-data-wrangler is currently adding support for Lake Formation governed tables. Looking at one of the larger PRshttps://github.com/awslabs/aws-data-wrangler/pull/560/files, some observations:Governed tables are managed through the lakeformation apiAn "LF Query Engine" is mentionedhttps://github.com/awslabs/aws-data-wrangler/pull/560/files#diff-71cf0e59c4ff5180dca21273da3998c16dcad442519db75af27482e5420f8dc0R61"Execute PartiQL query on AWS Glue Table" is mentionedhttps://github.com/awslabs/aws-data-wrangler/pull/560/files#diff-71cf0e59c4ff5180dca21273da3998c16dcad442519db75af27482e5420f8dc0R127ShareFollowansweredMar 12, 2021 at 21:29Martin SuchanekMartin Suchanek3,01666 gold badges3131 silver badges3131 bronze badges0Add a comment|
Lake Formation announced a preview for ACID and RLS features. In the near future, the next step towards a Lakehouse architecture would be possible on EMR + Lake Formation without an extra management layer like Databricks. What data format/technology is used by Lake Formation's Governed Tables? Would it be Hudi? If not Hudi, how does the new format/technology compare to Hudi?
Lake Formation Governed Table underlying format/technology
To quote the IAM documentation onHow IAM Differs for AWS GovCloud (US):You cannot create a role to delegate access between an AWS GovCloud (US) account and a standard AWS account.Also, note that IAM credentials are protected as ITAR-regulated data.ShareFollowansweredJan 27, 2021 at 17:48jarmodjarmod75k1616 gold badges124124 silver badges128128 bronze badges3thank you for your prompt reply. Does this mean the S3 bucket created inside a commercial account cannot have a bucket policy allowing access from an IAM user/role located inside the govcloud?–TinaJan 27, 2021 at 18:001Commercial regions know nothing about GovCloud credentials and vice-versa, to the best of my knowledge. They're two different identity partitions.–jarmodJan 27, 2021 at 21:001@jarmod is correct; the 3 internet-connected AWS partitions (aws, aws-us-gov, and aws-cn) are all completely separate AWS stacks, with nothing shared except the client tools (which have hard-coded lookup tables to hit the partition-specific service endpoints for a given region).–user187557Oct 21, 2021 at 17:19Add a comment|
I have an S3 bucket inside a commercial AWS account. My EC2 instances are inside a GovCloud account. I tried to create an IAM role inside the GovCloud account, picked the option for "Another AWS Account", and put down the commercial AWS account number, but it doesn't let me set up this trust. It keeps throwing this error: Invalid principal in policy: "AWS":"[the commercial account number]". If I try to create the same IAM role inside the commercial account and have it trust the GovCloud account, it gives me the same error. I even tried to create a new S3 bucket inside the commercial account and add a bucket policy that allows access from the GovCloud account, but it complains about the principal being invalid:"Principal": { "AWS": "arn:aws-us-gov:iam::[govcloud account ID]:user/root" },If I try to set up the above trust between two GovCloud accounts or two commercial accounts it works fine. I was hoping someone could help me please. Thank you in advance
Accessing a commercial s3 bucket from a govcloud ec2 instance
A simple approach for backup is to use git bundle: that produces one file per repository (with all its history and branches).Once you have those files locally, you can then save them to S3: one file per repo is easier/quicker to save/sync than pushing lots of small files (a rough sketch follows below).ShareFollowansweredJan 20, 2021 at 0:19VonCVonC1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges31This differs from a third-party backup service likegithub.com/marketplace/cloudback–VonCJan 20, 2021 at 0:201. github.com/marketplace/cloudback seems to support only github repos. do you know any similar service that works with bitbucket? 2. regarding git bundle, it won't help if someone does agit reset --hard *****and thengit push --force–helpperJan 20, 2021 at 9:59@helpper 1. No. I was just mentioning cloudback as a third-party service example, but it is not free anyway, so it would (I suppose) not be a good solution in your case. 2. I usegit bundlemore as a way to take a snapshot of an existing repository, so it does not matter what kind of push was done to said repository.–VonCJan 20, 2021 at 10:01Add a comment|
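A rough, untested sketch of that approach in Python, assuming local clones already exist on disk; the bucket name and directory are invented placeholders, and scheduling (cron, CodeBuild, etc.) is left out:

import subprocess
from pathlib import Path

import boto3

BUCKET = "my-git-backups"          # hypothetical bucket name
REPOS_DIR = Path("/srv/mirrors")   # hypothetical directory containing local clones

s3 = boto3.client("s3")

for repo in REPOS_DIR.iterdir():
    # Skip anything that doesn't look like a normal or bare git repository
    if not (repo / ".git").exists() and not (repo / "HEAD").exists():
        continue
    bundle_path = f"/tmp/{repo.name}.bundle"
    # --all captures every branch and tag, with full history, in a single file
    subprocess.run(["git", "-C", str(repo), "bundle", "create", bundle_path, "--all"], check=True)
    s3.upload_file(bundle_path, BUCKET, f"daily/{repo.name}.bundle")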
I am using Bitbucket Cloud as my git host and I have around 200 repos. I wish to back up all my repos (including all branches) to an AWS S3 bucket daily.We are several people working on the same account (100+ repos); although no one has permission to delete a repo, we can all delete the content of a repo by running git reset --hard <init_commit> and then git push --force. Looking at bitbucket backup, we observe that there is no automatic backup, therefore I wish to create one.I thought of using AWS CodeBuild for it, but if I understand the docs correctly, it only supports a maximum of 12 repos per project.As I am sure I am not the first person to try to solve this, I am trying to understand what the best practice is for such a requirement.The default, which I hope to avoid, would be a Lambda + Bitbucket API + CloudWatch.Thx in advance
What is the best practice for backing up all my git repos to S3?
Solved it; it might help you: when creating the RDS instance, choose MySQL version 5.x.x instead of 8.x.x. The default MySQL client on the EC2 instance is not able to connect to version 8.x.x because it cannot load the sha256_password authentication plugin that MySQL 8.x asks for (as the error message shows).ShareFollowansweredJan 18, 2021 at 18:47devdev79222 gold badges99 silver badges3232 bronze badgesAdd a comment|
I have an RDS instance in a private subnet and an EC2 instance in a public subnet; both subnets are in the same availability zone, and the RDS security group allows access from the entire VPC CIDR on all ports (0-65535).When I try to connect to RDS from my public EC2 instance using the command:mysql -h <endpointurl> -P 3306 -u admin -pI get the following error:ERROR 2059 (HY000): Authentication plugin 'sha256_password' cannot be loaded: /usr/lib64/mysql/plugin/sha256_password.so: cannot open shared object file: No such file or directory.What's wrong here? Please help.
Private RDS Connection from Public Subnet Error : "Authentication plugin 'sha256_password'"
AWS Step Functions doesn't support the JSONPath .length() function, and you need to wait for a future AWS Step Functions update.Here is a similar question:AWS Step Function: Function .length() returned error in variable field in Choice stateShareFollowansweredDec 30, 2020 at 6:16Pooya ParidelPooya Paridel1,34177 silver badges1111 bronze badges11Any updates on this? Do you know if .length() is supported now?–alexMay 24, 2022 at 13:02Add a comment|
I would like to know how to find the length of an array in an Amazon Step Function using only Amazon States Language and avoiding other AWS services like lambda etc.Sample input to step function -{ "SampleField" : "SampleString", "SampleField2" : "SampleString2", "SampleArray": [ { "Name": "Jack", "Age": 10 }, { "Name": "John", "Age": 18 }, { "Name": "Mary", "Age": 15 } ] }Sample output from the step function -{ "LengthOfSampleArray" : 3 }Please ensure that you don't invoke any lambda function or any other AWS service in the state machine.Feel free to use as many states as you wish and any type of states.
Find length of array in Amazon Step Function
The Python path should be available through the PYTHONPATH environment variable.You can also source it and export it manually if you want (as root):source /opt/elasticbeanstalk/deployment/env export PYTHONPATHShareFolloweditedDec 25, 2020 at 0:25answeredDec 14, 2020 at 23:26MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges1my instance through aws eb, doesn't seem to have PYTHONPATH in it. any idea about it ?–Deepansh ParmaniSep 26, 2021 at 11:53Add a comment|
Elastic Beanstalk containers have their Python venv located at"/var/app/venv/staging-LQM1lest".But what does "LQM1lest" even mean? Not a single documentation page mentions it.Is there a way to get it programmatically? Because it really looks like a random string and subject to change, and I don't like the idea of hardcoding it in deploy scripts.
Elastic beanstalk undocumented venv path
I would like to make sure you have set the AccessControl to BUCKET_OWNER_FULL_CONTROL in the MediaConvert job settings. It sounds like this isn't being set in the job, and the bucket requires objects to be written with the bucket-owner-full-control ACL.The setting can be found under each Output Group's destination settings:"DestinationSettings": { "S3Settings": { "AccessControl": { "CannedAcl": "BUCKET_OWNER_FULL_CONTROL" } } }Regards MichaelShareFollowansweredDec 22, 2020 at 4:35MichaelTamMichaelTam7122 bronze badges1Hei Michael, thanks for the response, we had PUBLIC_READ before, but anyway I also tried this and it didn't help.–julia adamchukDec 22, 2020 at 15:10Add a comment|
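For reference, a hedged boto3 sketch of where that CannedAcl sits when submitting a job programmatically; the endpoint URL, role ARN, input and destination are placeholders, and the output settings are trimmed (a real job also needs container/codec settings, so this will not validate as-is):

import boto3

# MediaConvert needs the account-specific endpoint (normally discovered via describe_endpoints)
mc = boto3.client("mediaconvert",
                  endpoint_url="https://abcd1234.mediaconvert.us-east-1.amazonaws.com")

mc.create_job(
    Role="arn:aws:iam::111122223333:role/MediaConvertRole",   # hypothetical role ARN
    Settings={
        "Inputs": [{"FileInput": "s3://my-input-bucket/input.mov"}],
        "OutputGroups": [{
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {
                    "Destination": "s3://my-output-bucket/outputs/",
                    "DestinationSettings": {
                        "S3Settings": {"AccessControl": {"CannedAcl": "BUCKET_OWNER_FULL_CONTROL"}}
                    },
                },
            },
            "Outputs": [],   # container/codec settings omitted for brevity
        }],
    },
)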
We are using AWS MediaConvert to convert videos to mp4 format, but MediaConvert is giving this error in the job:Unable to write to output file [s3://{path_to_file}]: [Failed to write data: Access Denied]Obviously, MediaConvert doesn't have write access to the bucket, but I don't know how to grant it.We have the following policy for S3:{ "Version": "2008-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Sid": "1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::{CloudFront-origin}" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::{S3-bucket}/*" }, { "Sid": "2", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::{role-for-our-API}", "arn:aws:iam::{MediaConverter-role}" ] }, "Action": "*", "Resource": "arn:aws:s3:::{S3-bucket}/*" } ] }Our ACL gives Write and List permission only to the bucket owner. Previously everyone could List and Write objects and MediaConvert worked, but we decided we could not accept List and Write permissions for everyone.Block public access is off for every setting.The IAM user that we use for the API and the role that we use for MediaConvert both have full permissions for S3 (AmazonS3FullAccess).Appreciate any help, thank you.
What permissions S3 needs for AWS MediaConverter to have access to write files?
There is none; you can use an index management policy if you like, which operates at the index level, not at the doc level. You have a bit of wriggle room, though, in that you can create a pattern data-* and have more than one index, e.g. data-expiring-2020-..., data-keep-me.You can apply a template to the pattern data-expiring-* and set a transition to delete an index after, let's say, 20 days. If you roll over to a new index each day, you will see the oldest index being deleted once it is over 20 days old.This method is much preferable, because deleting individual documents can consume large amounts of your cluster's capacity, as opposed to deleting entire shards. Other NoSQL databases such as DynamoDB operate in a similar fashion: often what you can do is add another field to your docs, such as deletionDate, and add that to your query to filter out docs which are marked for deletion but are still alive in your index because a deletion job has not yet cleaned them up. That is how the TTL in DynamoDB behaves as well; data is not deleted the moment the TTL expires, but rather in batches to improve performance.ShareFollowansweredDec 14, 2020 at 0:16DerropsDerrops7,86966 gold badges3434 silver badges6666 bronze badgesAdd a comment|
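A hedged sketch of the cleanup side of that pattern using the Python elasticsearch client (index names, endpoint and retention are invented; authenticating against an AWS-managed domain usually also needs request signing, which is omitted here):

from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

es = Elasticsearch(["https://my-domain.eu-west-1.es.amazonaws.com:443"])  # hypothetical endpoint

RETENTION_DAYS = 20
cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)

# indices.get returns a dict keyed by index name for everything matching the pattern
for name in es.indices.get(index="data-expiring-*"):
    try:
        day = datetime.strptime(name, "data-expiring-%Y-%m-%d")
    except ValueError:
        continue  # index name doesn't follow the daily pattern
    if day < cutoff:
        # Deleting a whole index drops entire shards, far cheaper than per-document deletes
        es.indices.delete(index=name)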
I can't find any way to set a TTL on a document within AWS Elasticsearch using the Python elasticsearch library.I looked at the code of the library itself and there is no argument for it, and I have yet to see any answers on Google.
Is there a way to set TTL on a document within AWS Elasticsearch utilizing python library?
The global endpoint is deprecated: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.htmlThe global endpoint is called "legacy". That means the new way is to systematically use the regional endpoint.The regional endpoint also performs better. To simplify, when using the global (legacy) S3 URL format, the client: 1) requests the global S3 endpoint; 2) asks what the regional S3 endpoint is; 3) gets redirected to the regional S3 endpoint; 4) requests the regional S3 endpoint.When you use the regional (modern) S3 URL format, the client requests the regional S3 endpoint directly.All credit to this reddit answerShareFolloweditedJun 8, 2021 at 22:51answeredDec 7, 2020 at 18:59Yves M.Yves M.30.4k2323 gold badges108108 silver badges147147 bronze badgesAdd a comment|
I have a CloudFront distribution that has S3 in eu-west-1 as origin.I know that the S3 regional domain name {bucket-name}.s3.{region}.amazonaws.com gives me instant initial CloudFront initialisation without downtime, while the global {bucket-name}.s3.amazonaws.com needs 2~3 hours to be initialized (see https://stackoverflow.com/a/58423033/1480391).Does CloudFront perform the same with the regional or the global S3 domain name?Is the regional S3 domain name slower than the global S3 domain name regarding how CloudFront fetches the S3 origin (internal DNS domain resolution, for example)?
Is CloudFront origin using S3 global domain name performing better than regional one?
I faced the same issue; not sure if you have figured out the solution. The network setup was fine (the same VPC for both RDS and the DMS replication instance) and there was no issue with the security group; however, the issue was with the version of DMS. The RDS MySQL version is 5.7, and the ODBC driver present in the higher DMS version doesn't seem to work with the RDS instance, or something is limiting connections to RDS MySQL 5.7 in DMS. I launched a new replication instance in DMS with version 3.3.3, the DB connectivity worked fine, and data movement is also happening fine.ShareFollowansweredMay 4, 2021 at 13:37Kannan AnandanKannan Anandan4155 bronze badgesAdd a comment|
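If it helps, a hedged boto3 sketch of pinning the replication instance to that engine version (the identifier, class and sizing below are invented placeholders):

import boto3

dms = boto3.client("dms", region_name="us-east-1")

dms.create_replication_instance(
    ReplicationInstanceIdentifier="mysql-to-kinesis-repl",  # hypothetical name
    ReplicationInstanceClass="dms.t3.medium",               # hypothetical sizing
    EngineVersion="3.3.3",        # the version reported above to work with the MySQL 5.7 source
    AllocatedStorage=50,
    PubliclyAccessible=False,
)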
I am working on a project where I want to replicate data from a source Aurora MySQL database to Kinesis with AWS Database Migration Service (DMS).I am able to connect to the source MySQL DB with the mysql command and can see the whole database:mysql --host=<host>.amazonaws.com --port=8200 --user=<user> -pBut when I start the DMS replication task, DMS reports a connection issue with the source DB. The connection to the destination (Kinesis) is fine.Testing the connection to the source DB from DMS gives this error:Test Endpoint failed: Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider ODBC general error., Application-Detailed-Message: RetCode: SQL_ERROR SqlState: HY000 NativeError: 2003 Message: [unixODBC][MySQL][ODBC 8.0(w) Driver]Can't connect to MySQL server on '.compute-1.amazonaws.com' (110)I went through the AWS docs but I am not able to figure out the issue. In the network ACL of the VPC, I have allowed ALL incoming traffic for the VPC.
AWS DMS cannot connect to the MySQL source DB, but the DB is reachable with the mysql command
I verified your user data on my CentOS instance and your script is correct. However, the issue is probably because of two things:subnet_id = aws_subnet.private.id suggests that you've placed your instance in a private subnet. To connect to your instance from the internet, it must be in a public subnet.There is no vpc_security_group_ids specified, which leads to using the default SG of the VPC, which blocks inbound internet traffic by default.Also, I'm not sure what you want to do with private_ip = var.web-private-ip. It's confusing.ShareFollowansweredNov 11, 2020 at 12:30MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges31Thanks Marcin. It was definitely the security group and subnet. I thought that the instance would inherit the security group because it is in the same vpc, but I see now that it is per instance.–KeimilleNov 11, 2020 at 13:49user-data relies on the security_group? Why is that? It can be configured before an instance is launched, no?–MrChadMWoodOct 16, 2023 at 15:38@MrChadMWood I would suggest making a proper question specific to your issue.–MarcinOct 16, 2023 at 21:53Add a comment|
I am working with Terraform and trying to execute a bash script using user data. Below is my code:resource "aws_instance" "web_server" { ami = var.centos instance_type = var.instance-type subnet_id = aws_subnet.private.id private_ip = var.web-private-ip associate_public_ip_address = true user_data = <<-EOF #!/bin/bash yum install httpd -y echo "hello world" > /var/www/html/index.html yum update -y systemctl start httpd firewall-cmd --zone=public --permanent --add-service=http firewall-cmd --zone=public --permanent --add-service=https firewall-cmd --reload EOF }However, when I navigate to the public IP I do not see the "hello world" message and also do not get a response from the server. Is there something I am missing here? I've also tried going directly through the AWS console, and user data is unsuccessful there too.
Why is userdata not working in my Terraform code?
I prefer to use a native Spark DataFrame, because it allows me more customization.I can use the stringtype property to cast a JSON field from the DataFrame to a jsonb field in the table. In this case, my DataFrame has two fields.from pyspark import SparkConf, SparkContext from pyspark.sql import SparkSession sc = SparkContext.getOrCreate(SparkConf()) spark = SparkSession(sc) df = spark.read.format('csv') \ .option('delimiter','|') \ .option('header','True') \ .load('your_path') ##some transformation... url = 'jdbc:postgresql://your_host:5432/your_databasename' properties = {'user':'*****', 'password':'*****', 'driver': "org.postgresql.Driver", 'stringtype':"unspecified"} df.write.jdbc(url=url, table='your_tablename', mode='append', properties=properties)Before executing the above script, you should create the table in PostgreSQL, because the mode property is set to append, as follows:create table your_tablename ( my_json_field jsonb, another_field int )ShareFolloweditedJan 21, 2021 at 5:12answeredJan 21, 2021 at 5:06Giancarlo PoémapeGiancarlo Poémape3166 bronze badgesAdd a comment|
I'm seeking a solution for how to write a string as the jsonb type in PostgreSQL. My DynamicFrame has a string column that holds JSON data. When trying to save to Postgres withDataSink0 = glueContext.write_dynamic_frame.from_catalog(frame = Transform0, database = "cms", table_name = "cms_public_listings", transformation_ctx = "DataSink0")I get the following error:An error was encountered:An error occurred while calling o1623.pyWriteDynamicFrame. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 134.0 failed 4 times, most recent failure: Lost task 0.3 in stage 134.0 (TID 137, ip-172-31-27-18.ec2.internal, executor 24): java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "public".listings ([REMOVED_COLUMNS]) VALUES ([REMOVED_VALUES]) was aborted: ERROR: column "schema" is of type jsonb but expression is of type character varying Hint: You will need to rewrite or cast the expression. Position: 207 Call getNextException to see other errors in the batch.I can't change the schema to hold a string, so either I use AWS Glue ETL or I would have to craft a Python Shell job. I would prefer to find a way to use PySpark with AWS Glue.
How to save String as JSONB type in postgres when using AWS Glue
Connecting to your Linux instance if you lose your private key - Amazon Elastic Compute Cloudis the AWS documentation on what to do to connect to a Linux instance when the private key is lost. Nothing beats the AWS documentation, so I am leaving the details to the AWS documentation.It shows the steps as:Step 1: Create a new key pairStep 2: Get information about the original instance and its root volumeStep 3: Stop the original instanceStep 4: Launch a temporary instanceStep 5: Detach the root volume from the original instance and attach it to the temporary instanceStep 6: Add the new public key to authorized_keys on the original volume mounted to the temporary instanceStep 7: Unmount and detach the original volume from the temporary instance, and reattach it to the original instanceStep 8: Connect to the original instance using the new key pairStep 9: Clean upShareFolloweditedOct 29, 2020 at 7:43John Rotenstein254k2626 gold badges408408 silver badges498498 bronze badgesansweredOct 29, 2020 at 7:24Praveen SripatiPraveen Sripati33.1k1818 gold badges8282 silver badges121121 bronze badges1I added some information to avoid a link-only answer. I think another method is to create an AMI of the instance and launch a new instance from that AMI, which then provides the opportunity to specify a new Keypair that will be installed. However, it is often better to fix the 'original' instance since that way it retains network settings, etc.–John RotensteinOct 29, 2020 at 7:45Add a comment|
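A very rough boto3 sketch of steps 2-5 (stop the original instance and move its root volume to a temporary instance); the instance IDs are placeholders, it assumes a single EBS root volume, and error handling is omitted:

import boto3

ec2 = boto3.client("ec2")

ORIGINAL = "i-0123456789abcdef0"   # hypothetical original instance
TEMPORARY = "i-0fedcba9876543210"  # hypothetical temporary instance launched with the new key pair

# Find the (assumed single) EBS volume attached to the original instance
desc = ec2.describe_instances(InstanceIds=[ORIGINAL])
volume_id = desc["Reservations"][0]["Instances"][0]["BlockDeviceMappings"][0]["Ebs"]["VolumeId"]

ec2.stop_instances(InstanceIds=[ORIGINAL])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[ORIGINAL])

ec2.detach_volume(VolumeId=volume_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# Attach to the temporary instance so authorized_keys can be edited there (step 6)
ec2.attach_volume(VolumeId=volume_id, InstanceId=TEMPORARY, Device="/dev/sdf")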
I am unable to access an AWS machine since no key pair is available. Would it be possible to create a snapshot of the volume attached to this instance and at least get access to what is on the machine?
Unable to access AWS machine since no key-pair is available
There is no option to rename an existing database in Glue.As per this, you can use the AWS Glue Catalog Manager to rename columns, but at this time table names and database names cannot be changed using the AWS Glue console. To correct database names, you need to create a new database and copy tables to it (in other words, copy the metadata to a new entity). You can follow a similar process for tables. You can use the AWS Glue SDK or AWS CLI to do this (see the sketch below).boto3 reference for Glue and CLI reference for Glue.ShareFollowansweredOct 28, 2020 at 4:24Prabhakar ReddyPrabhakar Reddy4,9061919 silver badges3737 bronze badges11Can you explain the CLI command to copy tables to another database? Can we use the update-database CLI command in this case?–ISMNov 8, 2020 at 22:40Add a comment|
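A hedged, untested boto3 sketch of that copy (the database names are invented; partitions and Lake Formation permissions are not carried over, and only fields accepted by TableInput are copied):

import boto3

glue = boto3.client("glue")

OLD_DB = "my-old-database"   # hypothetical source database (with '-')
NEW_DB = "my_new_database"   # hypothetical target database (with '_')

glue.create_database(DatabaseInput={"Name": NEW_DB})

paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName=OLD_DB):
    for table in page["TableList"]:
        # get_tables returns read-only fields (DatabaseName, CreateTime, CreatedBy, ...)
        # that create_table rejects, so keep only fields TableInput accepts.
        table_input = {
            k: v for k, v in table.items()
            if k in ("Name", "Description", "Owner", "Retention", "StorageDescriptor",
                     "PartitionKeys", "TableType", "Parameters")
        }
        glue.create_table(DatabaseName=NEW_DB, TableInput=table_input)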
I have a database in AWS Glue that has '-' in its name. This database contains a bunch of tables. I would like to know whether this database can be renamed so that '-' can be replaced with '_'. I did a lot of searching but could not find any solution. Need help. Thanks
Rename a database in aws glue
By using the --query parameter you are able to modify the response you receive back from the AWS API.This feature can be very useful in scenarios where you want to programmatically use the AWS CLI to extract parts of a response. In your example it extracts the KeyMaterial attribute from the response, but it can also be used to filter and extract attributes based on their values.For your use case it means you are able to get the plain text of the key and pump it straight into a text file rather than manually performing copy and paste (helped along by the --output text flag).For more information take a look at theControlling command output from the AWS CLIdocumentation.ShareFolloweditedOct 24, 2020 at 15:22answeredOct 24, 2020 at 15:14Chris WilliamsChris Williams33.5k44 gold badges3434 silver badges7171 bronze badgesAdd a comment|
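For comparison, a hedged boto3 equivalent of that CLI command: the full response comes back as a Python dict, and picking out KeyMaterial in code plays the role of the JMESPath --query expression:

import boto3

ec2 = boto3.client("ec2")
response = ec2.create_key_pair(KeyName="MyKeyPair")

with open("MyKeyPair.pem", "w") as f:
    f.write(response["KeyMaterial"])   # the same attribute the --query expression selects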
I came across the following command to create an AWS SSH key pair, but I failed to understand what the "--query" parameter is doing here.aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pemCan someone please explain the significance of the --query parameter?
AWS SSH Key Pair Creation
The EC2 instance assumes the role you attached via the instance profile, and Boto3 uses this role by default. You can view the attached role in the EC2 console, or change the role from there.The role has to be part of an allow statement in the SNS topic policy inside the other account! But also, on your side, the role needs explicit permission to publish to SNS (sns:Publish)!ShareFollowansweredOct 25, 2020 at 20:52f7of7o66344 silver badges88 bronze badgesAdd a comment|
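A quick hedged check of which principal is actually making the call; this is the ARN whose role must appear in the allow statement of the topic policy in the other account:

import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])      # e.g. an arn:aws:sts::...:assumed-role/... session for the instance role
print(identity["Account"])  # the account that owns the calling credentials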
I want to publish a message from an EC2 instance in my AWS account (111222333) to an SNS topic owned by another AWS account (444555666). The topic owner granted full permissions to my EC2 role. While publishing the message to the topic I am getting an AuthorizationErrorException.import boto3 import json aws_region = 'us-east-1' client = boto3.client('sns', region_name=aws_region) message = {"foo": "bar"} topic_arn = "arn:aws:sns:us-east-1:444555666:my_topic" response = client.publish( TopicArn=topic_arn, Message=json.dumps({'default': json.dumps(message)}), MessageStructure='json' )botocore.errorfactory.AuthorizationErrorException: An error occurred (AuthorizationError) when calling the Publish operation:User: arn:aws:sts::111222333:assumed-role/ecsec2role/i-0121fggsfdf56is not authorized to perform: SNS:Publish on resource: arn:aws:sns:us-east-1:444555666:my_topic.Do I need to specify anywhere which role my EC2 instance should use to run my script?
Getting AuthorizationErrorException while publishing to an SNS topic that is owned by another AWS account
Unfortunately, according to the documentation, the following is stated:Amazon S3 Block Public Access must be disabled on the bucket.This is because the bucket policy will otherwise be ignored due to the "Block public and cross-account access to buckets and objects through any public bucket or access point policies" setting.Unless your bucket policy also allows anonymous GetObject, by default your objects will not be public.ShareFollowansweredOct 19, 2020 at 19:02Chris WilliamsChris Williams33.5k44 gold badges3434 silver badges7171 bronze badges1Although this sounds reasonable, I don't think it's correct (or perhaps - maybe it was at the time but no longer is). I haven't looked into it in detail, but as part of AWS' Service Workbench solution a bucket and CF distro are deployed, with an origin access identity. The bucket has all 'block public access' options selected, and the onlyAllowin the policy is fors3:GetObjectwith the correct origin access identity principal. This seems pretty straightforward, so I suspect something else is going on in the OP's case here.–Tim MaloneJun 16, 2022 at 5:33Add a comment|
I am trying to set up the S3 buckets I want my CloudFront distribution to access.From my client I use the AWS mobile SDK to upload to S3. When clients consume files from S3 I hit CloudFront, and things worked until I made this change:When I created the distribution, I had CloudFront update the bucket policy to have the OAI included in the principal:So then I thought I could run GET calls through CloudFront, because CloudFront has the OAI set up and the S3 bucket policy reflects that. However, I keep getting Access Denied:What else do I need to do to lock down the bucket so that only CloudFront can read, while still allowing my client app to upload files to it using the SDK configured with the pool ID I have set up for it? Unless I leave "Block all public access" unchecked, I get access denied via CloudFront.
Allow CloudFront to access S3 origin while also having S3 bucket Block all public access?
These log settings are set using MethodSetting:DataTraceEnabled is for "Log full requests...";LoggingLevel is for "Log level";MetricsEnabled is for "Enable detailed CloudWatch metrics".ShareFolloweditedFeb 7, 2022 at 22:39answeredOct 14, 2020 at 6:41MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges41I tried this and get this error:StageDescription cannot be specified when stage referenced by StageName already exists. Any idea why?–Red BottleOct 14, 2020 at 6:48@RedBottle Can you update your question with relevant template parts?–MarcinOct 14, 2020 at 7:011Nvm this is a separate issue. My initial question is answered. Thank you.–Red BottleOct 14, 2020 at 7:39This is for v1. Is there anything for ApiGatewayV2? I've created a similar questionstackoverflow.com/q/67670073/2948212–diegosaswMay 24, 2021 at 10:02Add a comment|
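If it helps to verify the effect of those properties outside CloudFormation, a hedged boto3 sketch applying the same three settings to an existing stage (the API id and stage name are invented placeholders):

import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical REST API id
    stageName="prod",         # hypothetical stage name
    patchOperations=[
        {"op": "replace", "path": "/*/*/logging/loglevel", "value": "INFO"},   # LoggingLevel
        {"op": "replace", "path": "/*/*/logging/dataTrace", "value": "true"},  # DataTraceEnabled
        {"op": "replace", "path": "/*/*/metrics/enabled", "value": "true"},    # MetricsEnabled
    ],
)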
Hi, I'm trying to enable CloudWatch logs for API Gateway using CloudFormation. However, I cannot find documentation on how to do so. All I can find is the LoggingLevel property in the official documentation, which doesn't seem to be the whole solution.For context, I'm looking to achieve this using CloudFormation but don't know how. Please help.
How to enable Cloudwatch Logs for API Gateway using Cloudformation?
We are using an NLB in front of an RDS Postgres DB. Unfortunately, it just "proxies" the requests without SSL termination there - the NLB is configured on TCP port 5432. Have you looked at the relatively new AWS RDS Proxy? According to the docs it should be able to do the SSL termination; what I'm not sure about is whether you can use it without RDS. The other way to go is of course to set up such a proxy on your own, with all the advantages and disadvantages of that solution.ShareFolloweditedDec 5, 2020 at 20:41Dharman♦31.9k2525 gold badges9191 silver badges139139 bronze badgesansweredDec 5, 2020 at 17:14oklivielioklivieli2122 bronze badges1I know it was long time ago, but was wondering ... were you using the preserve ip flag in your target groups?–TataJan 26, 2023 at 10:34Add a comment|
Is it possible to terminate SSL at an AWS ELB in front of a Postgres server so as to make the following command succeed?PGSSLMODE=verify-full \ PGSSLROOTCERT=/path/to/go-daddy-root-ca.pem \ PGCONNECT_TIMEOUT=5 \ psql -h my-postgres.example.com -p 5432 -U test_username test-database -c 'select 1';I'm able to use the TCP protocol at the load balancer and TCP at the instance protocol if I set my SSL certificate and key on the Postgres service with the following arguments:-c ssl=on -c ssl_cert_file=/var/lib/postgresql/my-postgres.example.com.crt -c ssl_key_file=/var/lib/postgresql/my-postgres.example.com.keyHowever, I would like to handle the SSL at the load balancer level if possible, so as to not pass certs and keys into the instance running the Postgres service. I've tried the following ELB configurations (LB proto / instance proto / Postgres SSL → result): TCP / TCP / on → success; SSL / TCP / on → failure; TCP / SSL / on → failure; SSL / SSL / on → failure; SSL / TCP / off → failure.The failure message is the following:psql: error: could not connect to server: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.Maybe an NLB would work, or perhaps terminating SSL end-to-end at the Postgres service level is the only way for a verify-full connection to succeed?If it helps for an alternative approach, Postgres is running in an EKS cluster.Thank you for any info you can provide!
Postgres SSL Termination at AWS ELB
Keep in mind that you need one EIP per subnet/zone, and by default EKS uses a minimum of two zones.This is a working example you may find useful:metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true' service.beta.kubernetes.io/aws-load-balancer-type: nlb service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxxxxxxxxxxxxx,subnet-yyyyyyyyyyyyyyyyy" service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-wwwwwwwwwwwwwwwww,eipalloc-zzzzzzzzzzzzzzzz"I hope this is useful to youShareFollowansweredJan 6, 2022 at 14:58William AñezWilliam Añez2122 bronze badges1yup, good answer but too specific -- folks just grab the annotations and jump off the page before upvoting I guess–Anton KraievyiJun 15, 2022 at 17:30Add a comment|
I use aws-load-balancer-eip-allocations to assign a static IP to a LoadBalancer service using k8s on AWS. The EKS version is v1.16.13. The doc athttps://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211, lines 210 and 211, says "static IP addresses for the NLB. Only supported on elbv2 (NLB)". I do not know what elbv2 is. I use the code below, but I did not get a static IP. Is elbv2 the problem? How do I use elbv2? Please also refer tohttps://github.com/kubernetes/pull/69263.apiVersion: v1 kind: Service metadata: name: ingress-service annotations: service.beta.kubernetes.io/aws-load-balancer-type: "nlb" service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0187de53333555567" service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
aws-load-balancer-eip-allocations does not work. Assign static IP to LoadBalancer service using k8s on AWS
I have faced similar issues with the Glue crawler. You have two options to solve it:1) Manually add the missing columns via Databases -> Tables -> click the table -> Edit Schema -> Add column. You will see the updated table.2) If there is a data manipulation stage before cataloging, add the missing columns to all records with a None value (a sketch follows below).Both of these solutions have been tested in a project.ShareFollowansweredOct 5, 2020 at 10:19amshamsh3,21722 gold badges1515 silver badges2727 bronze badgesAdd a comment|
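A hedged sketch of option 2 as a pre-processing step (the column list and file names are assumptions; the five low-frequency column names would need to be filled in):

import json

ALL_COLUMNS = ["serialNumber", "delivered", "timestamp", "ecd4", "pt",
               "rare_col_1", "rare_col_2"]   # hypothetical: add the 5 low-frequency columns here

def normalize(line: str) -> str:
    record = json.loads(line)
    # Fill any key missing from this record with None so every record has a stable schema
    return json.dumps({col: record.get(col) for col in ALL_COLUMNS})

with open("raw.json") as src, open("normalized.json", "w") as dst:
    for line in src:
        if line.strip():
            dst.write(normalize(line) + "\n")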
I need your help please. I have raw JSON data files organized in a timestamp-based folder structure. When I run the crawler it is able to detect 116 columns, but it is not able to detect 5 columns which are present in the files but have a very low frequency. Can somebody let me know how I can detect the 5 columns that are missing?Structure of the file is:{"serialNumber":"PNRF","delivered":1601656317296,"timestamp":"1601656317","ecd4":"-5","pt":"PTR"} {"serialNumber":"PNRT","delivered":1601656317296,"timestamp":"1601656317","ecd4":"-5","pt":"PIF0"}
AWS Crawler not able to read all the columns
The CAP theorem is vital for all large distributed systems. It is explained here.ShareFollowansweredSep 23, 2020 at 17:03Ari FordshamAri Fordsham2,45488 silver badges2929 bronze badges1Link-only answers are discouraged. Without those off-site links, your answer is just the phrase "CAP theorem" as something to google on. If you're not going to explain it yourself, it might be better to post that as a comment.–Peter CordesSep 23, 2020 at 17:18Add a comment|
Assuming you want to launch a social app (which means many interactions) with the ambition of acquiring several thousand users - and for those who have already done so - what are the pitfalls that you know of and that you would absolutely avoid, in terms of code and server architecture?I have the feeling that you can easily feel alone when trying to answer this kind of question, which is clearly out of the scope of all those SaaS products or landing pages that maybe (and I insist on this word) don't have this scaling problem. Or maybe there is just no real pitfall, and the best approach is 'problem' --> 'solution' as those problems come up.I don't think it is an opinion-based question, because I/O-intensive databases, queue systems, server-side computation, etc. clearly have technical considerations in that kind of configuration.And to give some examples of problems that I think a large-scale social app can encounter, there are the Facebook engineers with their early latency problem, or the Twitter engineers with their Bieber problem.I was able to avoid the first pitfall that Netflix couldn't avoid, which is not using the cloud and trying to build their own server infrastructure at this scale.
Code and server pitfalls to avoid when launching a social application with many interactions?
You have probably already found the solution, but I am leaving this for future visitors looking for the answer.Amazon Cognito has recently started supporting a verification link as well, with a custom message template. Unlike with a verification code, users won't get stuck if the window is closed, as they can always click the link from the email.You can choose verification by email and, under "Message Customization", use "Link" for "Verification Type". In your message template, you can use the placeholder{##Click Here##}for the actual link.If you are using Terraform you can use this resource, which would look something like:verification_message_template { default_email_option = "CONFIRM_WITH_LINK" email_subject_by_link = "Verify your account" email_message_by_link = <<EOF Hello,<br/><br/> Thank you for signing up.<br/><br/> <strong>Here is your Verification Link: {##Click Here##}.</strong><br/><br/> Thanks,<br/> EOF }ShareFollowansweredMar 4, 2022 at 16:30wildnuxwildnux41811 gold badge77 silver badges2323 bronze badgesAdd a comment|
I'm using the Cognito hosted UI to allow new users to sign up, and I require each new sign-up to enter a confirmation code sent via email. However, if the user fails to enter the confirmation code and closes the hosted UI, the sign-up flow gets stuck (as reported here). The new user will not be able to sign in or sign up again because the user is created in Cognito in an unconfirmed status.Any advice?
AWS Cognito new user sign-up gets stuck when the verification code is not entered
Solution is:resource "aws_cloudwatch_metric_alarm" "cloudwatch_metric_alarm_down" { alarm_name = "${local.name}-ecs-alarm-down" comparison_operator = "LessThanOrEqualToThreshold" evaluation_periods = var.evaluation_periods_down threshold = var.downscale_threshold metric_query { id = "e1" expression = "FILL(m1, 0)" label = "0 if NoData" return_data = "true" } metric_query { id = "m1" metric { metric_name = var.metric_name namespace = var.env_name period = "60" stat = var.metric_statistic } } alarm_actions = [aws_appautoscaling_policy.down.arn] depends_on = [aws_appautoscaling_policy.down] }More about FILL() is herehttps://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html#metric-math-syntax-functions-listShareFollowansweredSep 29, 2020 at 10:44DenisDenis15477 bronze badgesAdd a comment|
We have ECS services with autoscaling configured. The autoscaling workflow should be like this: if in 1 minute we receive more than 10 values we need to scale up; wait 10 minutes; if there are no values we need to scale down.Scale up works perfectly but scale down doesn't work at all. We are getting this:Failed to execute action arn:aws:autoscaling:eu-central-1:BLA-BLA-BLA-fargate-scale-down. Received error: ""Probably this is because in Terraform, forresource "aws_appautoscaling_policy" "down"we have the following:step_adjustment { metric_interval_lower_bound = "" metric_interval_upper_bound = 0 scaling_adjustment = -100 }and it expects "0" instead of no data.Inresource "aws_cloudwatch_metric_alarm" "cloudwatch_metric_alarm_down":treat_missing_data = "breaching" insufficient_data_actions = [aws_appautoscaling_policy.down.arn]Is there any solution for this, other than manually sending "0" once per minute?
Autoscaling on Insufficient Data
Check the AWS docs here for the fix on Windows:https://docs.aws.amazon.com/codecommit/latest/userguide/troubleshooting-ch.htmlIt is likely because the credential helper isn't connecting to the credentials properly for some reason.(I had the same issue on Mac. Once the AWS credential helper and osxkeychain credential helper were restored as instructed in the doc, things were working fine.)See if that helps. - CheersShareFolloweditedSep 14, 2020 at 8:44answeredSep 14, 2020 at 8:29DhammikaDhammika54144 silver badges1010 bronze badgesAdd a comment|
I am trying to set up my AWS CodeCommit repository on my local system.I am trying to clone the repo:git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/lambda-pipeline-repoWhen I do that, I get a window where I enter my IAM user credentials. Soon after this, I getCloning into 'lambda-pipeline-repo'... fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/lambda-pipeline-repo/': The requested URL returned error: 403I am wondering what I am doing wrong.I also tried the steps in this link:https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-windows.htmlCould anyone help me out with this?
fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/lambda-pipeline-repo/': The requested URL returned error: 403