Response | Instruction | Prompt
---|---|---|
Figured it out in the end. For a name server to be associated with a Route 53 hosted zone, do the following:

1. Create a hosted zone for your domain name. Note the NS record.
2. Go to Registered Domains > example.com > Add or edit name servers and add the name servers from step 1.

When transferring a domain to AWS, it keeps the old NS record. Make sure to change it as per step 2.

Comment: This answer is still current as of Nov 2020. If you happen to delete, edit or change your hosted zone, Route 53 will NOT update the name servers automatically.
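To check whether the two sets of name servers match, here is a minimal boto3 sketch (the hosted zone ID and domain name are placeholders, not values from the question):

import boto3

route53 = boto3.client('route53')
domains = boto3.client('route53domains', region_name='us-east-1')  # this API lives in us-east-1

# Name servers assigned to the hosted zone (step 1).
zone_ns = route53.get_hosted_zone(Id='Z1EXAMPLE')['DelegationSet']['NameServers']

# Name servers currently set on the registered domain (step 2).
domain_ns = [ns['Name'] for ns in domains.get_domain_detail(DomainName='example.com')['Nameservers']]

print('Hosted zone :', sorted(zone_ns))
print('Registration:', sorted(domain_ns))  # these two lists should match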
|
I have recently transferred my domain name trademarklawexplained.com from One.com to AWS. I have the following set-up in Route53: For some reason, none of my records (SOA, NS and A) have had any effect. trademarklawexplained.com does not map to 35.176.22.92, nor does it even look at the name server (I tested it with this tool). Have I set up my hosted zone incorrectly, should I somehow publish it, or is the issue with One.com? If someone could point me in the right direction to troubleshoot, it would be greatly appreciated.
|
DNS hosted zone not taking effect - AWS Route53
|
You can only have 1 web process type. You can horizontally scale your web process to run on multiple dynos ("horizontal scalability"), however you will need to upgrade to at least standard-1x dyno types to do that (i.e. you can only run 1 web dyno instance if you are using free or hobby dyno types). However, in addition to your web process, you can instantiate multiple additional process types (e.g. "worker" processes). These will NOT be able to listen for HTTP/S requests from your clients, but can be used for offloading long-running jobs from your web process. So, if you map each of your 4-6 microservices to a different process type in your Procfile, and if your microservices are not themselves web servers, you might be able to make do with hobby dynos.

Comment: Is there any way to deploy multiple microservices on one dyno, for example as a group of Docker containers? — You can certainly run multiple processes on a single dyno, although that is not necessarily recommended.
|
I am creating a small app running multiple microservices. I would like to have this app available 24/7, so free dyno hours are not enough for me. If I upgrade to a hobby plan I would get 10 process types. Can I run another microservice on each of the processes (web), or does Heroku give me the ability to install only one web process per dyno, with the other 10 process types being for scaling my app? In other words, if I need 6 microservices running 24/7, should I buy 6 hobby dynos?
|
Can the same dyno run multiple processes?
|
In Amazon S3, the key is the object name, or file name if your objects are files. The key is listed in the results when retrieving the contents of the bucket, and you retrieve the contents of an object by specifying the object's key. Keys in Amazon S3 must be unique. If an object already exists in the bucket using the key value you're specifying for your PutObject command, then the old object will be replaced with your new object. Essentially, it's overwriting it.
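A minimal boto3 sketch of the behaviour described above (the bucket and key names are placeholders, not values from the question):

import boto3

s3 = boto3.client('s3')

# 'Key' is simply the object's name within the bucket.
with open('report.pdf', 'rb') as body:
    s3.put_object(Bucket='my-bucket', Key='reports/2016/report.pdf', Body=body)

# Calling put_object again with the same Key silently replaces the previous object.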
|
I am using Boto to upload artefacts to an s3 bucket, but don't know what the Key parameter of the put_object() method is:

client.put_object(
Body=open(artefact, 'rb'),
Bucket=bucket,
Key=bucket_key
)

What gives?
|
s3: what is the 'key' parameter of the s3.put_object() method?
|
Thanks for your answers. The issue was with the permissions of the profile used: the credentials must have access rights to both of the S3 buckets.
|
I am trying to copy from one bucket to another bucket in AWS with the below command:

aws s3 cp s3://bucket1/media s3://bucket2/media --profile xyz --recursive

It returns an error saying:

An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
Completed 1 part(s) with ... file(s) remaining
|
when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
|
The short answer is: you do not need to do this. The AWS SDK for JavaScript uses dynamic requires to load services. In other words, the classes are defined, but the API data is only loaded when you instantiate a service object, so there is no CPU overhead in having the entire package around. The only possible cost would be from disk space usage (and download time), but note that Lambda already bundles the aws-sdk package on its end, so there is no download time, and you're actually using less disk space by using the SDK package available from Lambda than using something customized.
|
In an effort to improve cold start latency in AWS Lambda, I am attempting to include only the necessary classes for each Lambda function. Rather than include the entire SDK, how can I include only the DynamoDB portion of the SDK?

// Current method:
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();
// Desired method:
var AWSdynamodb = require('aws-dynamodb-sdk');
|
How to include only one class from aws-sdk in Lambda
|
This seems to be possible using the create-deployment command: http://docs.aws.amazon.com/cli/latest/reference/opsworks/create-deployment.html

Note: I haven't done this myself, currently working on it though!

Comment: This is the correct answer. When sending this command I use `--command "{\"Name\":\"update_custom_cookbooks\"}"` and it works as intended.
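For reference, here is a minimal boto3 sketch of the same call (the stack ID is a placeholder):

import boto3

opsworks = boto3.client('opsworks', region_name='us-east-1')

# Equivalent of `aws opsworks create-deployment --command '{"Name":"update_custom_cookbooks"}'`
opsworks.create_deployment(
    StackId='11111111-2222-3333-4444-555555555555',
    Command={'Name': 'update_custom_cookbooks'},
)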
|
Is there a way to update custom cookbooks from the command line on an OpsWorks instance? I don't see a way to do it with the AWS OpsWorks Agent CLI or with the AWS Command Line Interface. The only way I am able to do it is through the console. Thanks!
|
opsworks: 'update custom cookbooks' from command line
|
S3 has no concept of directories.
S3 is an object store where each object is identified by a key.
The key might be a string like "logs/2014/06/04/system.log". Most graphical user interfaces on top of S3 (AWS CLI, AWS Console, Cloudberry, Transmit etc.) interpret the "/" characters as a directory separator and present the file list as if it were in a directory structure. However, internally there is no concept of a directory; S3 has a flat namespace.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html for more details. Knowing this, I am not surprised that empty directories are not synced, as there are no directories on S3.
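A small boto3 sketch of how the flat namespace is exposed (the bucket name is a placeholder): listing with a Delimiter is what makes "folders" appear, even though only keys exist.

import boto3

s3 = boto3.client('s3')

resp = s3.list_objects_v2(Bucket='my-bucket', Prefix='logs/2014/06/', Delimiter='/')

# Keys directly under the prefix.
for obj in resp.get('Contents', []):
    print('object:', obj['Key'])

# "Subdirectories" are just common key prefixes computed by the service.
for prefix in resp.get('CommonPrefixes', []):
    print('prefix:', prefix['Prefix'])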
|
By default, it appears that "s3 sync" doesn't create empty folders in the destination directory:

aws s3 sync s3://source-bucket s3://dest-bucket --include "*" --recursive

I've searched for a few hours now, and can't seem to find anything to address empty folders/directories when using "sync" or "cp". FWIW, I do see the following message that may pertain to the empty folders, but it's hard to know for sure since the source bucket is pretty big and unwieldy:

Completed 4132 of 4132 part(s) with -5 file(s) remaining
|
How to include empty folders in "s3 sync"?
|
If you want to use the same certificate both for CloudFront and for
other AWS services, you must upload the certificate twice: once for
CloudFront and once for the other services.

From here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS

Comment: If you are uploading a server certificate specifically for use with Amazon CloudFront distributions, you must specify a path using the --path option. The path must begin with /cloudfront and must include a trailing slash (for example, /cloudfront/test/).
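A hedged boto3 sketch of that upload (the certificate file names and certificate name are placeholders); the important part is the /cloudfront/... path, which is what makes the certificate show up in the CloudFront dropdown:

import boto3

iam = boto3.client('iam')

with open('wildcard.crt') as cert, open('wildcard.key') as key, open('chain.crt') as chain:
    iam.upload_server_certificate(
        Path='/cloudfront/wildcard/',          # must start with /cloudfront and end with /
        ServerCertificateName='wildcard-example-com',
        CertificateBody=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )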
|
I'm trying to install a DigiCert wildcard SSL certificate on a CloudFront CDN. It worked immediately with all Elastic Load Balancers, but it's not showing up in the CloudFront SSL certificate selection dropdown, even though the certificate is found in the IAM store. Any ideas what permissions could be conflicting?
|
AWS CloudFront: Not showing IAM SSL certificates
|
Found an answer myself, leaving this here since the answer will never match search terms for this specific problem. Follow the directions in the accepted answer here: Android Studio - Importing external Library/Jar. Like the post right below it says, you don't have to exit from Android Studio, just Build -> Rebuild Project.
|
I'm trying to make use of AWS S3 storage to store images for a mobile app, but I'm not able to compile my app. I have the .jar files in my libs directory. In the dependencies section of my build.gradle file I have:

dependencies {
compile "com.amazonaws.services.s3:1.6.1"
}

I've tried every combination I can think of: com.amazonaws.services.s3:1.6.1, com.amazonaws.services:s3:1.6.1, com.amazonaws:services:1.6.1... but I always get an error that says "Could not find com.amazonaws.services.s3:1.6.1." (with whatever I put on the compile line in the error). I can't find anything about getting this SDK to work with Android Studio and don't really know enough about Gradle to know how to get it to work. Any suggestions? Anyone already have this working?
|
Using Amazon Web Service SDK for Android in Android Studio
|
Assume your repo is at /home/user/aws/ (.git folder inside), and your AWSDevTools are at /home/user/AWS-ElasticBeanstalk-CLI-2.3/AWSDevTools/Linux. Then you should be able to follow these steps:

1) Open a command prompt, cd to /home/user/aws (your repo)
2) Run /home/user/AWS-ElasticBeanstalk-CLI-2.3/AWSDevTools/Linux/AWSDevTools-RepositorySetup.sh
3) git aws.config
4) Enter AWS details

Then you can git add, git commit, and git aws.push. Let me know what error you get with these steps.

Comment: I was trying to run AWSDevTools-RepositorySetup.sh outside of the project I was trying to push and was getting errors. This cleared it up for me!
|
I am receiving the following error when I run the setup for AWSDevTools inside my git repo:

[[: not found
cp: cannot stat '/home/user/aws/.git/scripts': No such file or directory

I copied the scripts folder from the AWSDevTools .zip directly to the git repo and then received this error:

[[: not found
cp: cannot create directory '.git/AWSDevTools': No such file or directory

I copied AWSDevTools from the .zip to the repository as well, thinking the installer just wanted these folders in the repo to run, but I continued to receive the same directory error. Any help would be great.
|
setup AWSDevTools-RepositorySetup.sh in git repository on ubuntu
|
I think what you are looking for is 'Sticky Sessions'. If I'm right about that, Amazon gives you two different options.

Load balancer (duration-based, I recommend this one): http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html#US_EnableStickySessionsLBCookies

And application-based session stickiness: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html#US_EnableStickySessionsAppCookies
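If you prefer to set this up programmatically, here is a hedged boto3 sketch of the duration-based option for a Classic Load Balancer (the load balancer and policy names are placeholders):

import boto3

elb = boto3.client('elb')  # Classic Load Balancer API

# Duration-based (load-balancer-generated) cookie stickiness.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName='my-load-balancer',
    PolicyName='my-sticky-policy',
    CookieExpirationPeriod=3600,  # seconds
)

# Attach the policy to the listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName='my-load-balancer',
    LoadBalancerPort=80,
    PolicyNames=['my-sticky-policy'],
)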
|
We're using Amazon Web Services (AWS) and we have multiple web servers and a load balancer. The problem with the web servers is that the $_SESSION is unique to each one. I'm keeping some information about the user in the $_SESSION. What is the proper way to synchronize this information? Is there any way to unify the place where those sessions are kept, or should I use MySQL to store this data (I don't really like the last option)?
|
How to synchronize sessions using Amazon Web Services (AWS)?
|
Seems like there is no Offer element in your response. Try:

node = api.item_lookup(...)
from lxml import etree
print etree.tostring(node, pretty_print=True)

to see what the returned XML looks like.
|
I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:

def price_offers(asin):
    from amazonproduct import API, ResultPaginator, AWSError
    from config import AWS_KEY, SECRET_KEY
    api = API(AWS_KEY, SECRET_KEY, 'de')
    str_asin = str(asin)
    node = api.item_lookup(id=str_asin, ResponseGroup='Offers', Condition='All', MerchantId='All')
    for a in node:
        print a.Offer.OfferListing.Price.FormattedPrice

I am reading http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:

Failure instance: Traceback: <type 'exceptions.AttributeError'>: no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer
|
How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup function?
|
This is the important part of the error message:

>= 2.0.0, ~> 3.27, ~> 4.0

1. You request a version greater than or equal to 2.0.0
2. You prefer a version ~> 3.27
3. You prefer a version ~> 4.0

Constraints 2 and 3 cannot both be satisfied at the same time. The solution for this specific case is to stop requesting two different versions at the same time. Check the versions of the available providers:

!+?main ~/Projects/x/src/x-devops/terraform/env/test> terraform providers
Providers required by configuration:
.
├── module.test-sonar
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.client_vpn
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.test-appserver
│ └── provider[registry.terraform.io/hashicorp/aws] ~> 3.27
├── module.test-vpn-server
│ └── provider[registry.terraform.io/hashicorp/aws]
├── module.test-networking
...

There is a module which requests 3.27. Find all modules which request 3.27 and update them to 4.0. This should resolve such problems.
|
I am trying to update the hashicorp/aws provider version. I added a terraform.tf file with the following content:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

Later I tried to update the modules using:

terraform init -upgrade

However, I have started to get:

Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 2.0.0, ~> 3.27, ~> 4.0

How can this problem be resolved?
|
Terraform: resolve "no available releases match the given constraints" error
|
The AWS event for a Lambda Function URL is events.LambdaFunctionURLRequest and the related types. So your handler signature should look like:

func handleRequest(ctx context.Context, request events.LambdaFunctionURLRequest) (events.LambdaFunctionURLResponse, error) {
// add code here
return events.LambdaFunctionURLResponse{Body: request.Body, StatusCode: 200}, nil
}

Once you create the URL of your Lambda, the caller (e.g. Postman) can specify whatever HTTP method it likes, which you can access inside your handler with:

// request is LambdaFunctionURLRequest
request.RequestContext.HTTP.Method
|
I have a lambda function that is invoked with API Gateway. The function is working using the GET method with this code:

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
// add code here
return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}
func main() {
lambda.Start(handleRequest)
}

I have this events.APIGatewayProxyRequest and the GET method. But when I try to migrate the URL to a Function URL, I have nothing similar to set the GET method. How is the URL supposed to know which method to use in Postman? And... do we have an equivalent for events.APIGatewayProxyRequest to do the request? When I invoke it with the URL, I get a 502 BAD GATEWAY error.
|
AWS lambda's function URL without API Gateway in Go
|
It seems that the Glue 3.0 image has some issues with SSL. A workaround for working locally is to disable SSL (you also have to change the script paths, as the documentation is not updated):

$ docker run -it -p 8888:8888 -p 4040:4040 -e DISABLE_SSL="true" \
-e AWS_ACCESS_KEY_ID=$(aws --profile default configure get aws_access_key_id) \
-e AWS_SECRET_ACCESS_KEY=$(aws --profile default configure get aws_secret_access_key) \
-e AWS_DEFAULT_REGION=$(aws --profile default configure get region) \
--name glue_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 \
/home/glue_user/jupyter/jupyter_start.sh

After a few seconds you should have a working Jupyter notebook instance running on http://127.0.0.1:8888
|
I am working on Glue in AWS and trying to test and debug in local dev. I followed the instructions here https://aws.amazon.com/blogs/big-data/developing-aws-glue-etl-jobs-locally-using-a-container/ to develop a Glue job locally. In that post, they use the Glue 1.0 image for testing and it works as it should. However, when I try to develop with the Glue 3.0 version, following the same steps, I can't open the Jupyter notebook on :8888 like the post says, even though every step seems correct. Here is my command to start a Jupyter notebook in the Glue 3.0 container:

docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue3_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 /home/jupyter/jupyter_start.sh

Nothing shows on http://localhost:8888. I still have no idea why! I understand the difference between Glue versions, I just want to develop and test on the latest version. Has anybody got the same issue?
Thanks.
|
AWS Glue 3.0 container not working for Jupyter notebook local development
|
The Block Public Access feature is another layer of protection for buckets. Amazon S3 buckets and objects are private and protected by default, with the option to use Access Control Lists (ACLs) and bucket policies to grant access to other AWS accounts or to public (anonymous) requests. Before the release of the Block Public Access feature, it was common to see data leaks and breaches centered around data stored on S3 due to misconfiguration. It was not Amazon's fault, it was the company's fault. So, if you want to make a bucket or objects within it publicly accessible, first you need to disable this additional layer of security (disabling it does not mean the bucket or objects are public, it only means that you can make them public) and then make them public via ACLs, bucket policies, etc.

Reference: Amazon S3 Block Public Access – Another Layer of Protection for Your Accounts and Buckets
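A small boto3 sketch of how the two layers interact (the bucket name is taken from the question's policy; treat it as a placeholder). All four flags must be off before a public bucket policy or ACL actually takes effect:

import boto3

s3 = boto3.client('s3')

# Inspect the extra protection layer on the bucket.
print(s3.get_public_access_block(Bucket='myapp')['PublicAccessBlockConfiguration'])

# Turn the layer off; only after this can a public bucket policy grant anonymous access.
s3.put_public_access_block(
    Bucket='myapp',
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': False,
        'IgnorePublicAcls': False,
        'BlockPublicPolicy': False,
        'RestrictPublicBuckets': False,
    },
)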
|
I am very confused by the S3 bucket policy settings. Here you can choose to block all public access. However, if you un-select these options, for the public to access the bucket and the objects you still need to edit/add policies in the "Bucket policy" section. You need to edit the above policy to the following:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myapp/*"
},
{
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity 111111111"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myapp/*"
}
]
}

If you do not specify "Effect": "Allow", "Principal": "*", then the default policy is "block". So, why do we need the "Block public access" part if public access is already blocked by default?
|
AWS S3 Bucket: what is the difference between "Block public access" and a blank Bucket policy file with no "allow" specified?
|
This happened to me as well while using Backblaze. You can configure Laravel to upload files as "public", which is needed when your Backblaze bucket is set to be public. In config/filesystems.php:

'disks' => [
's3' => [
...
'visibility' => 'public',
],
],
|
I am uploading an image to an S3 bucket using Laravel. The delete operation and listing the objects work fine, but when I try to upload the images it gives an error. I can't troubleshoot the issue. Here is the error response:

Error executing "PutObject" on "https://weddingdotmelbourne.s3.us-west-002.backblazeb2.com/YoPDX.txt"; AWS HTTP error: Client error: `PUT https://weddingdotmelbourne.s3.us-west-002.backblazeb2.com/YoPDX.txt` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Unsupporte (truncated...)
InvalidArgument (client): Unsupported value for canned acl 'private' - <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Unsupported value for canned acl 'private'</Message>
</Error>

EDIT: Here is the simple testing code I'm trying:

Route::get('s3-putfile', function () {
try {
$name = Str::random(5) . ".txt";
$putFileAs = Storage::disk('s3')->put($name, "Hello this is the test file");
dd($putFileAs);
} catch (Exception $e) {
return ($e->getMessage());
}
});
|
S3 Bucket gives InvalidArgument (client): Unsupported value for canned acl 'private' - for put file operation with laravel
|
I resolved this by confirming for myself that everything was OK, then executing the command with DNS and IP validation disabled.

Comment: I can confirm it works. Tried it out like this: sudo /opt/bitnami/bncert-tool --perform_public_ip_validation 0 --perform_dns_validation 0
|
I have this probably easy problem: I'm trying to use bncert-tool on my AWS WordPress website machine. I transferred my domain from elsewhere to AWS, made a hosted zone, and also the static IP address. nslookup works, returning the right IP. Reading this answer I went to www.whatsmydns.net and every query gets almost all green lights. Trying to simply reach the website with a browser works: I can see my website normally (except that TLS warning). Can you help me with this? Thank you all.
|
bncert says my domain resolves to a different IP address, but it is not
|
Oops, it seems that I found the solution myself. Check this site: https://www.shellhacks.com/aws-cli-ssl-validation-failed-solved/

I downloaded the ZScaler certificate and then pointed to it from the config:

$ cat ~/.aws/config
[default]
ca_bundle = /data/ca-certs/whatevername.pem

I was going crazy; I hope it helps someone else.
|
Recently I am getting an error when, for instance, listing data from Amazon S3:

aws s3 ls
SSL validation failed for https://s3.eu-west-1.amazonaws.com/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)

I have noticed that the company I work for has added a ZScaler Client Connector. It seems that this client is causing the error. I wonder if someone could give a hint about how to solve this issue.
|
SSL Certificate error: [SSL: CERTIFICATE_VERIFY_FAILED] when using aws client in windows 10
|
I finally found the configuration for awswrangler:

import awswrangler as wr
wr.config.s3_endpoint_url = 'https://custom.endpoint'
|
I'm attempting to use the Python package awswrangler to access a non-AWS S3 service. The AWS Data Wrangler docs state that you need to create a boto3.Session() object. The problem is that boto3.client() supports setting the endpoint_url, but boto3.Session() does not (docs here). In my previous uses of boto3 I've always used the client for this reason. Is there a way to create a boto3.Session() with a custom endpoint_url, or otherwise configure awswrangler to accept the custom endpoint?
|
How to get the python package `awswrangler` to accept a custom `endpoint_url`
|
The Lambda execution runtime will provide your function invocation with a temporary session token (not a persistent/permanent access key / secret access key). Behind the scenes, the Lambda service will use the AWS Security Token Service (AWS STS) to assume the Lambda execution role of your Lambda function. This is why you must also add the Lambda service principal as a trusted service principal in the trust policy of your execution role. The result of this is a temporary session. The credentials for this temporary session are stored in a combination of the environment variables AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_SESSION_TOKEN. You should, however, not need to configure/specify any credentials manually, as the default credentials loader chain in the AWS SDK takes care of this automatically.

Comment: Thanks for the perfect explanation, this fixed it. Just don't try to set the credentials explicitly like I did; just require the aws-sdk and everything works as expected!
|
I am deploying an AWS lambda function (with nodejs 12.x) that executes an AWS command ("iot:attachPrincipalPolicy") when invoked. I'm taking the credentials to run this command from the lambda execution environment variables:

const AWS = require('aws-sdk/global');
const region = process.env['AWS_REGION'];
const accessKeyId = process.env['AWS_ACCESS_KEY_ID'];
const secretAccessKey = process.env['AWS_SECRET_ACCESS_KEY'];
AWS.config.region = region;
AWS.config.credentials = new AWS.Credentials(accessKeyId, secretAccessKey);
// attachPrincipalPolicy command from the AWS SDK here

When I test the function locally (with sam local start-api) it runs successfully, because in my AWS CLI I have set the ACCESS_KEY_ID and secret of my administrator account. However, when I deploy the function and invoke it, the lambda fails on that command with a client error (the credentials are not valid), even when I give full admin access to the lambda's execution role. Here I gave full permissions in an inline policy and I also explicitly added the pre-defined admin access policy too. I expected the AWS_ACCESS_KEY_ID that you get from the environment variables to grant me all the permissions that I have set in the lambda function's execution role, but it looks like the privileges that I grant to the execution role are not reflected in these credentials. Is my assumption wrong? Where do these credentials come from and how can I find out what they allow me to do?
|
AWS Lambda credentials from the execution environment do not have the execution role's permissions
|
CodeBuild does not support Git LFS, however it's possible to install it on-the-fly and then run git lfs pull from the source directory to download the files. Like this:

env:
  git-credential-helper: yes
phases:
  install:
    commands:
      - cd /tmp/
      - curl -OJL https://github.com/git-lfs/git-lfs/releases/download/v2.13.2/git-lfs-linux-amd64-v2.13.2.tar.gz
      - tar xzf git-lfs-linux-amd64-v2.13.2.tar.gz
      - ./install.sh
      - cd $CODEBUILD_SRC_DIR
  pre_build:
    commands:
      - git lfs pull
<rest of your buildspec.yml file>

Comment: This didn't work for me; I'm getting the error "Not in a git repository".
|
Since AWS CodeBuild doesn't seem to support git LFS (Large File Storage) I tried to install it:

version: 0.2
phases:
  install:
    commands:
      - apt-get install -y bash curl
      - curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
      - apt-get install -y git-lfs
  pre_build:
    commands:
      - echo Downloading LFS files
      - git lfs pull
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - echo Build completed on `date`

For the above code I'm getting the following error (repo address renamed):

[Container] 2020/06/18 16:02:17 Running command git lfs pull
fatal: could not read Password for 'https://[email protected]': No such device or address
batch response: Git credentials for https://[email protected]/company/repo.git not found.
error: failed to fetch some objects from 'https://[email protected]/company/repo.git/info/lfs'
[Container] 2020/06/18 16:02:17 Command did not exit successfully git lfs pull exit status 2
[Container] 2020/06/18 16:02:17 Phase complete: PRE_BUILD State: FAILED
[Container] 2020/06/18 16:02:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: git lfs pull. Reason: exit status 2

Can I do something else in order to fetch the LFS files?
|
How to use git lfs in AWS CodeBuild?
|
You could include several policies, one for each table, or provide a "*" for all DynamoDB tables (note that "*" provides much broader permissions than is recommended, though).

Policies:
  # Give just CRUD permissions to one table
  - DynamoDBCrudPolicy:
      TableName: !Ref MyTable
  # Give just CRUD permissions to another table
  - DynamoDBCrudPolicy:
      TableName: !Ref MyOtherTable
  # Give just CRUD permissions to all tables
  - DynamoDBCrudPolicy:
      TableName: "*"

Comment: The MyOtherTable solution is not currently working for me. The '*' workaround works but isn't great security-wise. Any time I have two DynamoDBCrudPolicy lines I get the error "Must specify valid parameter values for policy template 'DynamoDBCrudPolicy'". Anyone else experiencing this?
|
I am using SAM (Serverless Application Model) and creating a policy for a lambda function for DynamoDB. By default AmazonDynamoDBFullAccess is there, but I want to give DynamoDBCrudPolicy to a lambda function in which more than one table is used. In the AWS SAM docs there is a policy for one table, not for more than one:

Policies:
  # Give just CRUD permissions to one table
  - DynamoDBCrudPolicy:
      TableName: !Ref MyTable

Here is the CRUD policy for one table; I want it for more than one table.
|
How to define CRUD policy in SAM (Serverless application model) template for more than one table for a lambda function?
|
Check the Tags on the Application Load Balancer: you will find the environment id and name of the Elastic Beanstalk application there. By using these tags, you can easily determine which ALB is attached to which EB environment.

Comment: Yeah, that's what I would have thought. Just to be certain I rebuilt that environment, and the new ALB does not have any tags associated with it. I may have some sort of deeper issue here.
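A hedged boto3 sketch of checking those tags in bulk (the elasticbeanstalk:environment-name tag key is the one Elastic Beanstalk normally applies; treat it as an assumption):

import boto3

elbv2 = boto3.client('elbv2')

lbs = elbv2.describe_load_balancers()['LoadBalancers']
for start in range(0, len(lbs), 20):  # describe_tags accepts at most 20 ARNs per call
    arns = [lb['LoadBalancerArn'] for lb in lbs[start:start + 20]]
    for desc in elbv2.describe_tags(ResourceArns=arns)['TagDescriptions']:
        tags = {t['Key']: t['Value'] for t in desc['Tags']}
        print(desc['ResourceArn'], '->', tags.get('elasticbeanstalk:environment-name', '<no EB tag>'))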
|
I have several Elastic Beanstalk applications, each with an Application Load Balancer attached. I can't seem to figure out how to determine which ALB is attached to which EB. I feel like I'm missing something obvious here.
|
Determine which Application Load Balancer is attached with which Elastic Beanstalk
|
I've found jq to be the most flexible method for working with the AWS CLI. The following takes input from describe-instances and pipes it into jq. jq extracts the bits you're interested in and outputs them in the CSV format you specified.

CLI:

aws ec2 describe-instances |jq -r '.Reservations[].Instances[]| . as $i | [($i.Tags|from_entries|.Name)?, $i.InstanceId, $i.PrivateIpAddress] |@csv'

Output:

"ac02-01","i-0123456789ABCDEF","10.0.0.214"

References: How to extract a particular Key-Value Tag from ec2 describe-instances

Comment: If some instances have no tags, jq fails with "Cannot iterate over null (null)"; the ? after the Tags expression skips over those nulls.
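If you would rather skip jq entirely, here is a rough boto3 equivalent that produces the same three CSV columns and tolerates instances without a Name tag:

import boto3, csv, sys

ec2 = boto3.client('ec2')
writer = csv.writer(sys.stdout)

# Name tag, instance ID, private IP: one row per instance.
for page in ec2.get_paginator('describe_instances').paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            tags = {t['Key']: t['Value'] for t in instance.get('Tags', [])}
            writer.writerow([
                tags.get('Name', ''),
                instance['InstanceId'],
                instance.get('PrivateIpAddress', ''),
            ])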
|
I'm trying to pull a list of all of our instances formatted like so:

Tag:Name.Value instance-id private-ip-address

This is the command I'm using:

aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value[],InstanceId,PrivateIpAddress]' --output text

And this is what I'm getting as output:

instance-id private-ip-address
tag:name.value

Even though I've got the Tag bit before everything else, it still lists on a new line below the corresponding ID/IP. Any way to fix this? Also, is there any way to retrieve a format like this:

Tag:name.value,instance-id,private-ip-address

Thanks
|
AWS CLI: How to achieve certain output format for describe-instances?
|
You need to Ref the pseudo parameter and use the Fn::Join intrinsic function to construct the name:

MyLambdaFunction:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: !Join [ "", [ {"Ref": "AWS::StackName"}, "-myLambdaFunction" ]]
|
I want to have multiple stacks based on a single CloudFormation template, but I'm getting naming conflicts. The simplest way to resolve this would seem to be prepending (or appending) the StackName to each of the repeated resources, e.g. my lambda functions or roles. AWS talks about AWS::StackName in the 'Template Reference' section of the documentation, but there's no clear demonstration of how to do this. How can I prepend the StackName to a CloudFormation resource?

MyLambdaFunction
  Type: "AWS:Serverless::Function"
  Properties:
    FunctionName: AWS::StackName + "-myLambdaFunction"
|
Prepend StackName to Cloudformation resources
|
You can allow the CloudFront IP addresses in your bucket policy, because a static website endpoint doesn't support an origin access identity.
Here is the list of CloudFront IP addresses: http://d7uri8nf7uskq.cloudfront.net/tools/list-cloudfront-ips
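A small Python sketch of pulling those ranges from the published AWS ip-ranges.json feed instead of the tool above; the resulting CIDRs are what you would allow in the bucket policy's aws:SourceIp condition:

import json
import urllib.request

with urllib.request.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json') as resp:
    ranges = json.load(resp)

cloudfront_cidrs = sorted(
    p['ip_prefix'] for p in ranges['prefixes'] if p['service'] == 'CLOUDFRONT'
)
print('\n'.join(cloudfront_cidrs))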
|
I want to temporarily restrict users from being able to access my static website hosted in S3, which sits behind a CloudFront distribution. Is this possible, and if so what methods could I use to implement this? I've been able to restrict specific access to my S3 bucket by using a condition in the bucket policy which looks something like this:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "12.34.56.73/32"
}
}
}
]
}

which works and restricts my S3 bucket to my IP; however, this means that the CloudFront URL gets 403 Forbidden: access denied. When reading the AWS docs, it suggests that to restrict specific access to S3 resources you should use an Origin Access Identity. However, they specify the following:

If you don't see the Restrict Bucket Access option, your Amazon S3 origin might be configured as a website endpoint. In that configuration, S3 buckets must be set up with CloudFront as custom origins and you can't use an origin access identity with them.

which suggests to me that I can't use it in this instance. Ideally I'd like to force my distribution or bucket policy to use a specific security group and control it that way so I can easily add/remove approved IPs.
|
Restrict access to s3 static website behind a cloudfront distribution
|
This is assuming there isn't a bug in your code somewhere and you're processing the same amount of work... AWS Lambda keeps instances of your application code running for some time to ensure subsequent requests to it are speedy, so it could simply be a matter of the garbage collector not having run yet on the process running your code. Instead, what I would be more concerned about is paying for 512MB when your application isn't even using 256MB. Keep in mind that you don't pay for what you use, you pay for what you allocate.

EDIT: As per cementblock's comment, keep in mind that changing the memory allocation will also affect your CPU and networking shares.

Comment: Does allocating less memory force GC to happen more aggressively? — There are a lot of GC configuration options, and memory pressure is one metric used to determine whether the GC is run, so possibly.
|
I have an AWS Lambda function built in .NET Core 2.1, which is triggered by an SQS queue. This function has a max memory of 512MB and a timeout of 2 min. Looking into the CloudWatch logs I'm seeing the Max Memory Used increase after some number of executions. See the images below: it keeps increasing after some executions, going from 217MB to 218MB, then to 219MB and so on. This function runs multiple times and with a high frequency. Has anyone faced this on AWS Lambda? Thanks in advance for any help.
|
AWS Lambda function increasing Max Memory Used
|
You are looking for the batchGetItem function, documented here.
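For illustration, here is a boto3 (Python) sketch of the same call, using the table and key names from the question:

import boto3

dynamodb = boto3.client('dynamodb')

# Fetch several items by primary key in a single request.
response = dynamodb.batch_get_item(
    RequestItems={
        'myTable': {
            'Keys': [
                {'pkName': {'S': 'john'}},
                {'pkName': {'S': 'fred'}},
                {'pkName': {'S': 'jane'}},
            ]
        }
    }
)

items = response['Responses']['myTable']
# Keys the service could not process this time appear in response['UnprocessedKeys']
# and should be retried.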
|
I am trying to say:

select * from myTable where pkName in ('john', 'fred', 'jane')

but there doesn't seem to be a native way to feed in a list of items as an array. I have my query working and retrieving values for a single primary key, but I want to be able to pass in multiple ones. It seems this isn't possible from looking at the DynamoDB page in the console, but is there a good workaround? Do I just have multiple ORs in my KeyConditionExpression and a very complex ExpressionAttributeValues? I'm referencing this page: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html and using code based on the following (which can be found at the address below):

var params = {
ExpressionAttributeValues: {
':s': {N: '2'},
':e' : {N: '09'},
':topic' : {S: 'PHRASE'}
},
KeyConditionExpression: 'Season = :s and Episode > :e',
ProjectionExpression: 'Title, Subtitle',
FilterExpression: 'contains (Subtitle, :topic)',
TableName: 'EPISODES_TABLE'
};

https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/dynamodb-example-query-scan.html
|
Retrieve rows for multiple primary key values from AWS DynamoDB database
|
Lambda@Edge formerly allowed only Node.js, so at the time this question was initially asked, it was not possible to create Lambda@Edge functions in Python, or any other language besides Node.js (or inside a Node.js wrapper). Lambda@Edge supported only the Node.js 6.10 and 8.10 runtime environments as of August 2018. Lambda@Edge now supports Node.js 8.10, Node.js 10.x, and Python 3.7, as of August 2019. The edge environment is notably different from the general Lambda offering in a number of ways: see Lambda Function Configuration and Execution Environment in the CloudFront Developer Guide.
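With the Python 3.7 runtime available, a rough port of the JavaScript viewer-request handler from the question might look like this (the redirect map is copied from the question):

def handler(event, context):
    request = event['Records'][0]['cf']['request']

    # You can also store and read the redirect map in DynamoDB or S3, for example.
    redirects = {
        '/r/music': '/card/bcbd2481',
        '/r/tree': '/card/da8398f4',
    }

    target = redirects.get(request['uri'])
    if target:
        # Returning a response object short-circuits the request with a redirect.
        return {
            'status': '302',
            'statusDescription': 'Found',
            'headers': {
                'location': [{'key': 'Location', 'value': target}],
            },
        }

    # Otherwise pass the request through unchanged.
    return request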
|
We are trying to write a lambda@edge function that triggers on viewer-request. I was able to find lots of examples using JavaScript, e.g.:

'use strict';
exports.handler = (event, context, callback) => {
console.log('Event: ', JSON.stringify(event, null, 2));
console.log('Context: ', JSON.stringify(context, null, 2));
const request = event.Records[0].cf.request;
// You can also store and read the redirect map
// in DynamoDB or S3, for example.
const redirects = {
'/r/music': '/card/bcbd2481',
'/r/tree': '/card/da8398f4',
};
if (redirects[request.uri]) {
return callback(null, {
status: '302',
statusDescription: 'Found',
headers: {
'location': [{
key: 'Location',
value: redirects[request.uri] }]
}
});
}
callback(null, request);
};

The above code will redirect requests that match a specific path. Can anyone advise on how to port similar code to Python, or share resources/information on deploying Python lambda@edge functions? Thanks
|
Lambda@edge function using python
|
It is complaining about this:

IamInstanceProfile={
'Arn': 'arn:aws:iam::000000000000:user/instance',
'Name': 'instance'
},

It is saying that you cannot specify both Arn and Name. The reason is that the ARN uniquely identifies a resource, so the Name is not required. However, I'll admit that the documentation doesn't state this. So, just remove the Name entry.

Comment: If I use only one of them it also shows an error! When I removed this section of my code, it started working. Now the question is how the system will know the IAM profile if the code works dynamically. — If you just provide the Arn without the Name, what is the error?
|
This is my code. Is there any problem with this code? It is showing an error!

import boto3
ec2 = boto3.resource('ec2', region_name = 'us-east-2')
instance = ec2.create_instances(
BlockDeviceMappings=[
{
'DeviceName': '/dev/sdh',
'VirtualName': 'ephemeral1',
'Ebs': {
'Encrypted': False,
'Iops': 500,
'VolumeSize': 100,
'VolumeType': 'io1'
},
},
],
ImageId='ami-XXXXXXXXX',
InstanceType='t2.micro',
KeyName='KeyName',
MaxCount=1,
MinCount=1,
IamInstanceProfile={
'Arn': 'arn:aws:iam::000000000000:user/instance',
'Name': 'instance'
},
InstanceInitiatedShutdownBehavior='stop',
PrivateIpAddress='XXX.XX.XX.XX'
)

It is showing the error:

raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidParameterCombination) when calling the RunInstances operation: The parameter 'iamInstanceProfile.name' may not be used in combination with 'iamInstanceProfile.arn'
|
RunInstances operation: The parameter 'iamInstanceProfile.name' may not be used in combination with 'iamInstanceProfile.arn'
|
Well, if you have configured your environment (load balancer) through Elastic Beanstalk, then from the AWS console go to Services -> EC2. Under Resources, select 'Load balancers', choose your load balancer, and under 'Attributes' click the 'Edit idle timeout' button. Set it to whatever value you want (in seconds).
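The same change can be made from code; here is a hedged boto3 sketch (the load balancer name/ARN are placeholders, and which call you need depends on whether the environment uses a Classic or an Application Load Balancer):

import boto3

# Classic Load Balancer: raise the idle timeout to 120 seconds.
boto3.client('elb').modify_load_balancer_attributes(
    LoadBalancerName='awseb-my-env-elb',
    LoadBalancerAttributes={'ConnectionSettings': {'IdleTimeout': 120}},
)

# Application Load Balancer equivalent.
boto3.client('elbv2').modify_load_balancer_attributes(
    LoadBalancerArn='arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/app/my-alb/abc123',
    Attributes=[{'Key': 'idle_timeout.timeout_seconds', 'Value': '120'}],
)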
|
We have a use case where we need to fulfill the request even after 60s. We get an Elastic Load Balancing connection timeout (504). How do we increase the timeout in ELB (AWS)?
|
How to increase the 60 second timeout on Elastic Load Balancer?
|
Terraform's -target should be for exceptional use cases only, and you should really know what you're doing when you use it. If you genuinely need to regularly target different parts at a time, then you should separate your applications into different directories so you can easily apply a whole directory at a time. This might mean you need to use data sources or rethink the structure of things a bit more, but it also means you limit the blast radius of any single Terraform action, which is always useful.
|
efx/
...
aws_account/
nonprod/
account-variables.tf
dev/
account-variables.tf
common.tf
app1.tf
app2.tf
app3.tf
...
modules/
tf_efxstack_app1
tf_efxstack_app2
tf_efxstack_app3
...

In a given environment (dev in the example above), we have multiple modules (app1, app2, app3, etc.) which are based on individual applications we are running in the infrastructure. I am trying to update the state of one module at a time (e.g. app1.tf), and I am not sure how I can do this. Use case: I would like only one of the modules' LCs to be updated to use the latest AMI or security group. I tried the -target option in terraform, but this does not seem to work because it does not check the terraform remote state file:

terraform plan -target=app1.tf
terraform apply -target=app1.tf

Therefore, no changes take place. I believe this is a bug with terraform. Any ideas how I can accomplish this?
|
Apply one terraform module at a time
|
Using a security group ID as a source only works when the traffic is addressed to the private IP. By trying to hit the public IP, the traffic is being routed outside the VPC and back into the VPC, at which point the source security group information has been lost.
|
The default AWS security group references itself in the Source field, implying that the instance can communicate with itself. However, being logged in to the instance over SSH and trying to curl it via the DNS name resolving to the instance's public IP ends up with a curl timeout error. The only solution I've come up with is to add the public IP of the instance to the security group instead of the security group ID, but that's not flexible; I don't want such narrowly focused security groups. Why doesn't the default security group, assigned to an instance, allow all traffic from the instance itself?
|
AWS EC2 Security group access itself via HTTP
|
Redshift is a data warehouse, generally used for OLAP (analytical) workloads. Analytical DBs are too slow for transactional processes and do not generally enforce primary key / foreign key constraints. Aurora and DynamoDB, on the other hand, are OLTP (transactional) databases. In your case, if you are going to keep all the data in a single JSON entry it would be better to use DynamoDB, but I would suggest using Aurora as it is an RDBMS with a fixed schema. You will have to keep multiple entries per user in another table, although retrieving them will be just a single join query.
|
I've been reading a whole bunch on these databases but still am just not sure what to use. I need a backend for a game I am developing. This backend needs to store user accounts, the items the user has, and the score of the user. At the start of the game the user will query the database for their items. When they get a new item it will be added to their account in the database. Items will be stored as a large JSON blob. So this database will not be accessed frequently. It does not need to store a huge amount of data, as it really only needs 1 entry per user, but it should be able to scale into the millions. I will need to query the database to determine which users belong to particular categories or teams. Storing this data as cheaply as I can in a reliable and adequately fast way is ideal. What is the best option for this?
|
Aurora vs Redshift vs DynamoDB for Indie Game Backend?
|
This is my structure:

package:
  individually: true
  exclude:
    - ./**

and in my function:

functions:
  lambda:
    handler: dist/index.handler
    package:
      include:
        - 'dist/**/*'
        - '!dist/**/*.map'
        - '!node_modules/aws-sdk/**/*'

First you tell serverless you want to exclude everything, and you say that each function will include its own files. Inside each function I include everything inside a specific folder (such as dist) and then exclude specific files, such as files ending with .map or, for example, the aws-sdk library inside node_modules.
|
|--serverless.yml
|--lib/
|--node_modules/
|--api/
|--manageclient/
|--addClient/
|--handler.js

This is my folder structure. How do I deploy a function using serverless so that it includes only handler.js, node_modules/ and lib/? Can you please specify the function command to be written in the main serverless.yml? My YML function statement:

handler: api/manageclient/addClient/addclient.addclient
package:
  exclude:
    - ./*
    - !api/manageclient/addClient/**
    - !api/node_modules/**
    - !api/lib/**
|
how to deploy function using serverless so that it includes only required folder/files
|
Set"DeletionPolicy" : "Delete"for your RDSDBCluster resource in CFT.http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.htmlShareFollowansweredDec 2, 2017 at 15:04AsdfgAsdfg11.7k2525 gold badges102102 silver badges181181 bronze badgesAdd a comment|
|
How can I disable snapshot creation when I delete a CloudFormation stack? I create an Aurora DB cluster in my stack, and when I try to delete it I often get this error and I can't completely delete the stack:

CREATE_FAILED AWS::RDS::DBClusterSnapshot Cannot create more than 100 manual snapshots

I don't want a snapshot at all.
|
Don't create snapshot on stack deletion in CloudFormation
|
I think you have an issue with how you set up the params for the object you upload. Try:

const s3Config = {
localDir: './dist',
deleteRemoved: false,
s3Params: {
Bucket: 'cdn',
Prefix: 'dist/',
CacheControl: 'max-age=31536000',
Expires: oneYearLater(new Date())
}
}
|
I'm using an npm package called node-s3-client, which is a high-level wrapper for the aws-sdk for Node.js, to upload a local project directory to an S3 bucket. Using that package, I'm passing some metadata to my files, namely key-value pairs for Expires and Cache-Control. I'm uploading an entire directory which consists of HTML, JS, CSS, and JPEG files. However, when I check my S3 bucket, the headers that I'm setting only apply to JS and CSS files; these headers are not applied to images. I've gone through the documentation of the package and aws-sdk but I can't seem to find what causes my metadata to be applied selectively to some files and not to others. Here's my config object:

const s3 = require('node-s3-client')
const s3Config= {
localDir: './dist',
deleteRemoved: false,
s3Params: {
Bucket: 'cdn',
Prefix: 'dist/',
Metadata: {
'Cache-Control': 'max-age=31536000',
'Expires': oneYearLater(new Date())
}
}
}
const client = s3.createClient({
s3Options: {
accessKeyId: KEY_ID,
secretAccessKey: ACCESS_KEY,
signatureVersion: 'v4',
region: 'us-east-2',
s3DisableBodySigning: true
}
})
client.uploadDir(s3Config)

What might be causing this issue?
|
Setting Expires and Cache-Control headers for images that are being uploaded to AWS S3
|
This is not possible. Both the Classic Load Balancer and Target Groups for the Application Load Balancer only accept Amazon EC2 instances as targets.

Comment: It is hard to link to documentation that proves something is not possible, but here's something close: Target Groups for Your Application Load Balancers - Elastic Load Balancing. Application Load Balancers can now also send traffic to AWS Lambda functions. As for alternatives, it would depend on what you are actually trying to accomplish.
|
ELB: Elastic Load Balancer. ALB: Application Load Balancer. I am trying to map an ELB/ALB on AWS to another ELB (e.g. http://my-elb-domain.com), i.e. elb/alb -> elb. In an ALB I didn't find a way to register an ELB as a target; an ELB only maps to instances.
|
AWS: is it possible to map (ELB/ALB) to ELB?
|
I figured it out. The request timeout was not long enough for the upload to finish, so the call was being made again, and so on. To resolve the issue, I set the timeout for the request to 0, giving the request all the time it needs to finish the upload. With this in place, it properly returns a 201 response back to the client.

exports.create = function(req, res) {
req.setTimeout(0); // <= set a create request to no timeout length.
var stream = fs.createReadStream(req.file.path);
var params = {
Bucket: 'aws bucket',
Key: req.file.filename,
Body: stream,
ContentLength: req.file.size,
ContentType: 'audio/mp3'
};
var s3upload = s3.upload(params, options).promise();
// return the `Promise`
s3upload
.then(function(data) {
console.log(data);
return res.sendStatus(201);
})
.catch(function(err) {
return handleError(err);
});
}
|
I'm using the Node AWS-SDK to upload files to an existing S3 bucket. With the code below, the file eventually uploads but it seems to return no status code a couple of times. Also, when the file successfully uploads, the return statement does not execute.

Code:

exports.create = function(req, res) {
var stream = fs.createReadStream(req.file.path);
var params = {
Bucket: 'aws bucket',
Key: req.file.filename,
Body: stream,
ContentLength: req.file.size,
ContentType: 'audio/mp3'
};
var s3upload = s3.upload(params, options).promise();
s3upload
.then(function(data) {
console.log(data);
return res.sendStatus(201);
})
.catch(function(err) {
return handleError(err);
});
}LogsPOST /api/v0/episode/upload - - ms - -
POST /api/v0/episode/upload - - ms - -
{ Location: 'https://krazykidsradio.s3-us-west-2.amazonaws.com/Parlez-vous%2BFrancais.mp3',
Bucket: 'krazykidsradio',
Key: 'Parlez-vous+Francais.mp3',
ETag: '"f3ecd67cf9ce17a7792ba3adaee93638-11"' }
|
s3 file upload does not return response
|
Download the AWS IP ranges JSON file: AWS IP Address Ranges. It contains the CIDRs. You may want to write a simple script and check if the IP falls in any of the CIDRs and then retrieve the corresponding region. This is how the JSON looks:
{
"ip_prefix": "13.56.0.0/16",
"region": "us-west-1",
"service": "AMAZON"
},
Here is Python 3 code to find the region, given an IP. It assumes the ip-ranges.json file downloaded from AWS is in the current directory. It will not work in Python 2.7.
from ipaddress import ip_network, ip_address
import json
def find_aws_region(ip):
    ip_json = json.load(open('ip-ranges.json'))
    prefixes = ip_json['prefixes']
    my_ip = ip_address(ip)
    region = 'Unknown'
    for prefix in prefixes:
        if my_ip in ip_network(prefix['ip_prefix']):
            region = prefix['region']
            break
    return region
Test
>>> find_aws_region('54.153.41.72')
'us-west-1'
>>> find_aws_region('54.250.58.207')
'ap-northeast-1'
>>> find_aws_region('154.250.58.207')
'Unknown'ShareFolloweditedAug 1, 2017 at 16:04answeredAug 1, 2017 at 15:29helloVhelloV51.2k77 gold badges139139 silver badges148148 bronze badges1FWIW, if you getipaddress.AddressValueError:...Did you pass in a bytes (str in Python 2) instead of a unicode object?, then wrap the string argument inunicode()–Ajith AntonyJan 15, 2020 at 1:00Add a comment|
|
I am trying to determine the AWS Region that my heroku database is stored on. All I have is the IP address.How can i determine the AWS region?
|
Determine AWS Region from IP Address?
|
Solved the problem with some help.
Since the only port available to me was port 80, I just forwarded port 8080 to port 80 via port forwarding and it worked out.
Sharing the link from where I found the solution:installing nodejs and forwarding port on awsShareFollowansweredJun 28, 2017 at 15:04AnshulAnshul17922 silver badges1313 bronze badges1after uncommentingnet.ipv4.ip_forward=1and checked if its enable bycat /proc/sys/net/ipv4/ip_forwardit still returned 0 but I still proceeded onsudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080andsudo iptables -A INPUT -p tcp -m tcp --sport 80 -j ACCEPT,sudo iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPTand now my aws node instance can be accessed to its public Public IPV4. Thanks for this answer–KevzAug 13, 2019 at 1:29Add a comment|
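For reference, a minimal sketch of that kind of port forwarding (assuming the Node app listens on 8080 and the security group allows inbound port 80; the interface name eth0 is an assumption):
# Redirect incoming traffic arriving on port 80 to the app listening on 8080
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080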
|
I have an AWS EC2 instance which I am using for my node js application. I cannot access any application page on the server.
I wrote this simple code solely for testing purposes but I cannot access even this from my browser.
var express = require('express');
var app = express();
app.listen(3000, ()=> {
console.log('listening');
});
app.get('/',(req,res)=> {
res.send('hi');
});
On navigating to http://<instance-public-ip>:3000, I should be able to see "hi" written, but the request times out.
Here are my security group configs :
|
Can't access nodejs server on aws
|
Try AWS Lambda@Edge. It solves this completely. First, create an AWS Lambda function and then attach your CloudFront distribution as a trigger. In the code section of the AWS Lambda page, add the snippet from the repository below. https://github.com/CloudUnder/lambda-edge-nice-urls/blob/master/lambdaRewrite.js Note the options in the readme section of the repo.ShareFolloweditedJun 24, 2021 at 8:16Dharman♦31.9k2525 gold badges9191 silver badges139139 bronze badgesansweredJun 24, 2021 at 8:10michael_vonsmichael_vons99011 gold badge99 silver badges1818 bronze badgesAdd a comment|
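As a rough illustration only (not the exact code from that repository), a Lambda@Edge request handler that rewrites extensionless paths to their .html objects could look something like this:
'use strict';
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const uri = request.uri;
    // Leave the root and real files (anything containing a dot) alone;
    // map /about or /about/ to /about.html
    if (uri !== '/' && !uri.includes('.')) {
        request.uri = uri.replace(/\/$/, '') + '.html';
    }
    callback(null, request);
};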
|
I'm hosting my static website on AWS S3, with CloudFront as a CDN, and I'm wondering how I can get clean URLs working. I currently have to go to example.com/about.html to get the about page. I'd prefer example.com/about, and the same across all my other pages. Also, I kind of have to do this because my canonical URLs have been set with meta tags and search engines, and it's going to be a bit much to go changing them. Is there a setting in CloudFront that I'm not seeing?
Updates: There are two options I've explored, one detailed by Matt below. The first is trimming .html off the file before uploading to S3 and then editing the Content Header in the HTTP response for that file. This might work beautifully, but I can't figure out how to edit content headers from the command line, where I'm writing my "push website update" bash script. The second is detailed by Matt below and leverages S3's feature that recognizes root default files, usually index.html. It might be a great approach, but it makes my local testing challenging, and it leaves a trailing slash on the URLs, which doesn't work for me.
|
How to get clean URLs on AWS Cloudfront (S3)?
|
https://play.golang.org/p/l2nrhG9lOA It looks like the interface{} return type is the culprit. Forcing that to a string returns a more JSON-looking value. Having an interface{} return type makes me think something else is expecting a struct or marshal-able return value. Chase the interface{} or force a string and you should be good.ShareFolloweditedMay 4, 2020 at 13:41Nik3,03522 gold badges2525 silver badges2525 bronze badgesansweredJan 27, 2017 at 23:36Mike MMike M4655 bronze badges1I look at the source code for the dependency i'm using and the result is getting marshaled so it was getting marshaled twice–frankgrecoJan 27, 2017 at 23:38Add a comment|
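As a sketch of that idea, using the question's own types (untested): return the struct itself and let the wrapper marshal it exactly once, instead of marshalling it yourself and returning the resulting string:
func Handle(evt json.RawMessage, ctx *runtime.Context) (interface{}, error) {
	// The wrapper marshals this struct to JSON once, so the body is not
	// double-encoded into a quoted string.
	return &Response{
		StatusCode: 200,
		Headers:    map[string]string{"Content-Type": "application/json"},
		Body:       "Hello World",
	}, nil
}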
|
How do I get JSON back from Go without it being encoded as a string? I am using this project to wrap Go code in Python so that I can execute it in an AWS Lambda function. My issue is that whether I return json.Marshal(obj) or string(json.Marshal(obj)), I either get the base64-encoded JSON or a string representation (e.g. "{\"bar\": \"foo\"}"). This is not an appropriate response if you are using the AWS API Gateway, as it expects pure JSON like you would get if you returned a JSON object in Node.js. Here's my code:
package main
import "C"
import (
"encoding/json"
"github.com/eawsy/aws-lambda-go-core/service/lambda/runtime"
)
type Response struct {
StatusCode int `json:"statusCode"`
Headers map[string]string `json:"headers"`
Body string `json:"body"`
}
func Handle(evt json.RawMessage, ctx *runtime.Context) (interface{}, error) {
res := &Response{
StatusCode: 1,
Headers: map[string]string{"Content-Type": "application/json"},
Body: "Hello World",
}
content, _ := json.Marshal(res)
return string(content), nil
}
Here's the result I get from AWS:
|
Return JSON in Go
|
I figured it out. You have to create a file with the extension .config in the .ebextensions directory of your project.
{
"option_settings" : [
{
"namespace" : "aws:elasticbeanstalk:application",
"option_name" : "Application Healthcheck URL",
"value" : "/"
},
{
"namespace" : "aws:autoscaling:asg",
"option_name" : "MinSize",
"value" : "1"
},
{
"namespace" : "aws:autoscaling:asg",
"option_name" : "MaxSize",
"value" : "1"
},
{
"namespace" : "aws:autoscaling:asg",
"option_name" : "Custom Availability Zones",
"value" : "us-east-1a"
},
{
"namespace" : "aws:elasticbeanstalk:environment",
"option_name" : "EnvironmentType",
"value" : "SingleInstance"
}
]
}ShareFolloweditedApr 22, 2018 at 18:55Adrian Lopez2,73155 gold badges3232 silver badges4949 bronze badgesansweredDec 27, 2016 at 17:05Rohan PanchalRohan Panchal1,25611 gold badge1111 silver badges2929 bronze badges0Add a comment|
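If memory serves, recent versions of the EB CLI also expose a flag for this, so something like the following may avoid the .ebextensions file entirely (worth verifying against your CLI version):
eb create $ENVIRONMENT_NAME --single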
|
So I have a Java application with the appropriate Procfile/Buildfile. I have run eb create in our scratch Elastic Beanstalk environment, but I have to follow up with a manual configuration change to make it a single-instance type instead of a load-balanced one. How would I use the eb-cli so that eb create $ENVIRONMENT_NAME generates a single-instance environment? There is a .elasticbeanstalk/config.yml:
branch-defaults:
development:
environment: development
group_suffix: null
staging:
environment: staging
group_suffix: null
production:
environment: production
group_suffix: null
global:
application_name: feed-engine
branch: null
default_ec2_keyname: null
default_platform: Java 8
default_region: us-east-1
profile: prod
repository: null
sc: git
|
Create a single instance Elastic Beanstalk application with eb-cli
|
You can get some of the info you are looking for by using the attribute_definitions attribute of a Table object, like this:
import boto3
ddb = boto3.resource('dynamodb')
table = ddb.Table('MyTable')
attrs = table.attribute_definitions
The variable attrs would now contain a dictionary of all of the attributes you explicitly defined when creating the table, which normally is only the attributes that are used as keys in some index. However, since DynamoDB is schemaless, you can store any combination of other attributes in an item in DynamoDB. So, as the comment above states, the only way to know all attributes used in all items is to iterate through all of the items and build a set of attributes found in each item.ShareFollowansweredNov 16, 2016 at 14:40garnaatgarnaat45k77 gold badges125125 silver badges103103 bronze badgesAdd a comment|
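A rough sketch of that scan-based approach (the table name is a placeholder, and a full scan of a table this size will consume significant read capacity):
import boto3

ddb = boto3.resource('dynamodb')
table = ddb.Table('MyTable')

# Collect every attribute name seen across all items, following pagination
columns = set()
kwargs = {}
while True:
    page = table.scan(**kwargs)
    for item in page['Items']:
        columns.update(item.keys())
    if 'LastEvaluatedKey' not in page:
        break
    kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']

print(sorted(columns))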
|
At the moment I'm using Boto3 in Python 2.7, and what I would like to have is the column headers of my specific DynamoDB table.
I'm dealing with a very large DynamoDB table, with 80 columns and more than 1,000,000 records, and my task is to manipulate these data. To do this, I'm making use of chunks: each time I retrieve 1000 rows from my table, manipulate them, and write the result to a CSV (this is required for various reasons). But because I'm using chunks, not every chunk necessarily contains all 80 columns; it can sometimes contain 79 or 78. This happens when there are no values available for a specific column in a chunk. That isn't desirable, because at the end of the day all those CSVs should be concatenated with each other, and therefore each CSV should contain the same number of columns. My idea is to add empty columns to the chunk CSVs that don't contain all the required columns. For that, I have to know the headers, attributes, or field names of my table (its structure). The column headers are dynamic, so there can't be a static list of headers, and new records may suddenly be added with a unique column (which means the next time I would receive 81 columns in each of my CSVs). Thus the header knowledge should come from my table / Amazon AWS. Kind regards
|
AWS DynamoDB - Boto3 get all attributes, fieldnames, column headers from a dynamoDB table / structure
|
Modifying Frédéric's answer:aws ec2 describe-instances --output text --query \
'Reservations[].Instances[].[PublicDnsName, Tags[?Key==`budget_cluster`].Value | [0]]'Would produce:ec2-xxxxx.amazonaws.com zzzzz
ec2-bbbbb.amazonaws.com yyyyyI've changed the output to text, which removes as much formatting as possible and selected the individual tag value with| [0]since there will only ever be one per instance anyway. Finally, I removed the[]at the end so that the resulting list isn't flattened. That way in text output each entry will be on its own line.You can also make this more robust by only selecting instances that actually have that tag. You could do so with further modifications to the--queryparameter, but it is better in this case to use the--filtersparameter since it does service-side filtering. Specifically you want thetag-keyfilter:--filters "Name=tag-key,Values=budget_cluster"aws ec2 describe-instances --output text \
--filters "Name=tag-key,Values=budget_cluster" --query \
'Reservations[].Instances[?Tags[?Key==`budget_cluster`]].[PublicDnsName, Tags[?Key==`budget_cluster`].Value | [0]]'Would still produce:ec2-xxxxx.amazonaws.com zzzzz
ec2-bbbbb.amazonaws.com yyyyyBut over the wire you would only be getting the instances you care about, thus saving money on bandwidth.ShareFollowansweredOct 7, 2016 at 15:53Jordon PhillipsJordon Phillips15.5k44 gold badges3737 silver badges4343 bronze badges0Add a comment|
|
I've got the following from describe-instances:{
"Reservations": [
{
"Instances": [
{
"PublicDnsName": "ec2-xxxxx.amazonaws.com",
"Tags": [
{
"Key": "Name",
"Value": "yyyyy"
},
{
"Key": "budget_cluster",
"Value": "zzzzz"
},
{
"Key": "poc",
"Value": "aaaaaaa"
}
]
}
]
}
]
}For each instance, I would like to extract the PublicDnsName and the value of the "budget_cluster" tag key. How to do this either withec2 describe-instancesor withjq?
|
How to extract a particular Key-Value Tag from ec2 describe-instances
|
You're setting the ACL for the new object but you haven't alloweds3:PutObjectAcl.ShareFollowansweredOct 4, 2016 at 12:04l0b0l0b056.6k3131 gold badges143143 silver badges228228 bronze badges11Thanks ! It's works by commenting the line for the ACL or adding "s3:PutObjectAcl" in policy.–JeremyCOct 5, 2016 at 11:07Add a comment|
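For example, the Action list in the question's policy would need to grow by one entry, roughly like this:
"Action": [
    "s3:PutObject",
    "s3:PutObjectAcl",
    "s3:GetObject",
    "s3:DeleteObject"
],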
|
I am trying for the first time to use the PHP AWS SDK ("aws/aws-sdk-php": "^3.19") to use S3. I created a bucket: 'myfirstbucket-jeremyc'. I created a policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::myfirstbucket-jeremyc/*"
]
}
]
I applied the policy to a group and then created a user 's3-myfirstbucket-jeremyc' in this group. My PHP code is:
<?php
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
error_reporting(E_ALL);
require(__DIR__ . '/vendor/autoload.php');
$s3Client = S3Client::factory([
'credentials' => [
'key' => $_SERVER['AWS_S3_CLIENT_KEY'],
'secret' => $_SERVER['AWS_S3_CLIENT_SECRET']
],
'region' => 'eu-west-1',
'version' => 'latest',
'scheme' => 'http'
]);
$result = $s3Client->putObject(array(
'Bucket' => 'myfirstbucket-jeremyc',
'Key' => 'text.txt',
'Body' => 'Hello, world!',
'ACL' => 'public-read'
));
But I get this error:
Error executing "PutObject" on
"http://s3-eu-west-1.amazonaws.com/myfirstbucket-jeremyc/text.txt";
AWS HTTP error: Client error: PUT http://s3-eu-west-1.amazonaws.com/myfirstbucket-jeremyc/text.txt resulted in a 403 Forbidden response
Do you know where I'm wrong? Thanks in advance!
|
PHP Amazon SDK, S3 Bucket Access Denied
|
The error mentions the following:No region specified or obtained from persisted/shell defaults.So, you have 2 possible resolutions:Include the-Regionparameter, such as-Region us-east-1. SeeGet-EBEnvironment Cmdlet. OrUse theSet-DefaultAWSRegioncmdlet to set the default region. SeeSet-DefaultAWSRegion CmdletShareFolloweditedJun 5, 2017 at 14:34answeredJul 25, 2016 at 21:34Matt HouserMatt Houser35k66 gold badges7373 silver badges9292 bronze badges3Thank you! That's exactly the problem! I didn't think region really met setting the default region. I thought something was wrong with the command.–J. AnnJul 25, 2016 at 23:12Your second link goes to the wrong location. It goes to New-EBApplicationVersion.–Jeffrey HarmonJun 5, 2017 at 14:25Thanks. It should be fixed now.–Matt HouserJun 5, 2017 at 14:34Add a comment|
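For instance, either of these should work (the region value is only an example):
Set-DefaultAWSRegion -Region us-east-1
Get-EBEnvironment -ApplicationName appName

# or per call:
Get-EBEnvironment -ApplicationName appName -Region us-east-1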
|
Hi I am trying to deploy onto Elastic Beanstalk using AWS Powershell.Currently I am just trying to get the EB environment using the following cmdlet:
+ Get-EBEnvironment
-ApplicationName
-EnvironmentId
-VersionLabel
-EnvironmentName
-IncludedDeletedBackTo
-IncludeDeletedThis is the cmdlet I used: Get-EBEnvironment -ApplicationName appNameHowever, I am getting the following error:Get-EBEnvironment : No region specified or obtained from persisted/shell defaults.
At C:\Users\lowong\Desktop\script.ps1:22 char:1Get-EBEnvironment -ApplicationName evcfacade~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~CategoryInfo : InvalidOperation: (Amazon.PowerShe...vironmentCmdlet:GetEBEnvironmentCmdlet) [Get-EBEnvironment], InvalidOperationExceptionFullyQualifiedErrorId : InvalidOperationException,Amazon.PowerShell.Cmdlets.EB.GetEBEnvironmentCmdletAm I missing other fields I have to put onto the cmdlet? or what's the problem?(Here's the link to the documentation of the cmdlet:http://docs.aws.amazon.com/powershell/latest/reference/index.html?page=New-EBApplicationVersion.html&tocid=New-EBApplicationVersion)
|
Using AWS Powershell to deploy to AWS elastic beanstalk
|
Contrary to RDS, there is no such option for EC2 instances. They are created in a subnet and if you want multi-az you will need to launch multiple instances in different subnets across the availability zones.ShareFolloweditedJun 19, 2016 at 8:48answeredJun 19, 2016 at 8:46Darin DimitrovDarin Dimitrov1.0m273273 gold badges3.3k3.3k silver badges2.9k2.9k bronze badges3And load balance them to failover?–Vijay MuvvaJun 19, 2016 at 8:471Yes, you could then create an ELB and configure this ELB to distribute the traffic between them. Bear in mind that the ELB will reserve one private IP in each of the subnets.–Darin DimitrovJun 19, 2016 at 8:48That is what I thought so since I could not find any option on the AWS console. Thanks.–Vijay MuvvaJun 19, 2016 at 8:49Add a comment|
|
How can I enable multi-AZ for a running EC2 instance? I know how to do that for RDS, as there is an option in the AWS console. But where can I find this for EC2 instances?
|
Enable multi-az for running ec2 instance
|
I would start by reading the documentation on this feature carefully. You can enable caching at the stage level, and you can override cache settings at the method level. You can also specify headers, URL paths and query strings to be used as the cache key. It's not clear what you have done at this point, but you should be able to do one of the following to achieve your goals:
Enable caching at the stage level, and disable it at the method level for the POST method.
Disable caching at the stage level, and enable caching at the method level for the GET method.ShareFollowansweredMay 20, 2016 at 15:41Mark BMark B191k2525 gold badges310310 silver badges307307 bronze badgesAdd a comment|
|
I turned on the cache to cache GET requests but it also applies to POST, since post has no query string it currently caches the POST once and sticks with it no matter the POST body.Is there a way to turn this off for the POST method or tell the cache that the post body has the key to cache it?
|
AWS API Gateway caches POST data
|
The Amazon SQSReceiveMessagescommand returns a message (or a batch of messages) from the queue. The messages are in approximately FIFO (first in-first out) order but this is not guaranteed.There is no way to selectively retrieve messages.It is not possible to use the contents of a message, a message attribute nor message metadata to limit the messages returned. It's basically popping a message off a stack.ShareFollowansweredMay 8, 2016 at 7:53John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges497497 bronze badgesAdd a comment|
|
There is a SQS Queue in which I am getting messages continuously. I need to read and process only those messages that came in the last 24 hours. The messages which would be coming in currently should be processed on the next day.Timestampis stored in the body of the message.Is it possible to read messages selectively from the SQS queue. For instance, read only those messages whosetimestampvalue is greater than the previous day's timestamp but less than the current timestamp (current timestamp is the time at which this job is running)?
|
Reading messages selectively from SQS Queue
|
Use the Java S3 SDK. If you upload a file called x.db to an S3 bucket mybucket, it would look something like this:
import com.amazonaws.services.s3.*;
import com.amazonaws.services.s3.model.*;
...
AmazonS3 client = new AmazonS3Client();
S3Object xFile = client.getObject("mybucket", "x.db");
InputStream contents = xFile.getObjectContent();
In addition, you should ensure the role you've assigned to your Lambda function has access to the S3 bucket. Apply a policy like this:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::mybucket",
"arn:aws:s3:::mybucket/*"
]
}]ShareFolloweditedApr 6, 2016 at 22:05answeredApr 6, 2016 at 21:59ataylorataylor65.4k2525 gold badges162162 silver badges189189 bronze badges2Thanks Atalylor,I will follow as suggested and then update here the result.–Sumit AroraApr 6, 2016 at 22:06Thanks ataylor, this solution worked as suggested.–Sumit AroraApr 7, 2016 at 9:21Add a comment|
|
This question already has answers here: How can I read an AWS S3 File with Java? (9 answers)
I have written an AWS Lambda function. Its objective is that, on invocation, it reads the contents of a file, say x.db, gets a specific value out of it, and returns it to the caller. But this x.db file changes from time to time. So I would like to upload this x.db file to S3 and read it from the AWS Lambda function just like reading a local file.
File xFile = new File("S3 file in x.db");
How do I read such an x.db S3 file from an AWS Lambda function written in Java?
|
How to read S3 file from AWS Lambda Function written in Java? [duplicate]
|
Instead of polling, you should subscribe to S3 event notifications:http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.htmlThese can be delivered to an SNS topic, an SQS queue, or trigger a Lambda function.ShareFollowansweredMar 18, 2016 at 14:19Mark BMark B191k2525 gold badges310310 silver badges307307 bronze badgesAdd a comment|
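As a minimal sketch (the bucket name, topic ARN, and prefix are placeholders), configuring one bucket to publish ObjectCreated events to an SNS topic with boto3 might look like this:
import boto3

s3 = boto3.client('s3')

# Publish a notification to SNS whenever an object is created under uploads/
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': 'arn:aws:sns:us-east-1:123456789012:new-object-topic',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'uploads/'}]}},
        }]
    }
)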
|
I have a situation where I need to poll an AWS S3 bucket for new files.
Also, it's not just one bucket. There are ~1000+ buckets, and these buckets could have a lot of files.
What are the usual strategies / designs for such a use case? I need to consume new files on each poll. I cannot delete files from the bucket.
|
Poll periodically for new files in AWS S3 buckets having a lot of file?
|
Yes, it's possible.You use theRunInstancesAPI method.Launches the specified number of instances using an AMI for which you have permissions.To completely get rid of an instance, useTerminateInstance.Shuts down one or more instances. This operation is idempotent; if you terminate an instance more than once, each call succeeds.The language is a bit confusing because it says "Shuts down one or more instances", but in fact it totally removes them.ShareFolloweditedFeb 10, 2016 at 1:27answeredFeb 10, 2016 at 1:25Eric J.Eric J.149k6363 gold badges343343 silver badges556556 bronze badges3Thanks, how is this different thanRequestSpotInstances?–Jake WilsonFeb 10, 2016 at 1:26RequestSpotInstance allows you to place a bid for an available instance at the current spot market price. This will succeed if your bid is at least equal to the market price. Such instances can be shut down without notice if the current market price later exceeds what you are willing to pay. The spot market is often used for CPU (or GPU) intensive tasks where many instances are needed as cheap as possible, work is divided between all available instances, and the architecture can withstand some instances just going away.–Eric J.Feb 10, 2016 at 1:29Okay, soRunInstancesis intended to fire up On-Demand instances at the current AWS price per hour? If so, that is what I am looking for, thank you.–Jake WilsonFeb 10, 2016 at 17:38Add a comment|
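A minimal boto3 sketch of that create/terminate cycle (the AMI ID and instance type are placeholders):
import boto3

ec2 = boto3.client('ec2')

# Launch a brand-new on-demand instance from an AMI
resp = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='m1.small',
    MinCount=1,
    MaxCount=1,
)
instance_id = resp['Instances'][0]['InstanceId']

# ... later, remove it entirely
ec2.terminate_instances(InstanceIds=[instance_id])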
|
Is it possible to programmatically get/deploy and start an EC2 instance? Essentially pick your instance type and AMI and start it up? I see the StartInstances method, but this only applies to instances already created and stopped in your account. http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StartInstances.html
Essentially, what is going on is that I have an automated service that needs multiple EC2 instances for computation. I need to programmatically create a new instance, pick the instance type, pick the AMI, start it up, and run some deployment scripts to get things rolling. I would think there is a way to do this with the AWS SDK, but I'm just not seeing it. On a related note, I also need to be able to programmatically destroy a shut-down instance.
|
Programmatically create and deploy On-Demand EC2
|
There are lots of factors that drive up CPU utilization on PostgreSQL, like free disk space, CPU usage, I/O usage, etc. I came across the same issue a few days ago. For me the reason was that some transactions were getting stuck and running for a long time; hence CPU utilization increased. I came to know about this by running a PostgreSQL monitoring command:
SELECT max(now() - xact_start) FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active');
This command shows how long the oldest transaction has been running. This time should not be greater than one hour. So killing the transactions which had been running for a long time, or were stuck at some point, worked for me. I followed this post for monitoring and solving my issue. The post includes lots of useful commands to monitor this situation.ShareFolloweditedJun 23, 2018 at 8:19Keshav4,42888 gold badges3131 silver badges5050 bronze badgesansweredOct 11, 2016 at 19:33Rohini ChoudharyRohini Choudhary2,44322 gold badges2222 silver badges3131 bronze badgesAdd a comment|
|
I attempted to migrate my production environment from a native Postgres environment (hosted on AWS EC2) to RDS Postgres (9.4.4), but it failed miserably. The CPU utilisation of the RDS Postgres instances shot up drastically compared to that of the native Postgres instances. My environment details are as follows:
Master: db.m3.2xlarge instance
Slave1: db.m3.2xlarge instance
Slave2: db.m3.2xlarge instance
Slave3: db.m3.xlarge instance
Slave4: db.m3.xlarge instance
[Note: All the slaves were at Level 1 replication]
I had configured the Master to receive only write requests, and this instance was fine. The write count was 50 to 80 per second and the CPU utilisation was around 20 to 30%. But apart from this instance, all my slaves performed very badly. The slaves were configured only to receive read requests, and I assume all the writes that were happening were due to replication. Provisioned IOPS on these boxes was 1000.
On average there were 5 to 7 read requests hitting each slave and the CPU utilisation was 60%.
Whereas in native Postgres, we stay well within 30% for this traffic. I couldn't figure out what's going wrong in the RDS setup, and AWS support is not able to provide good leads. Did anyone face similar things with RDS Postgres?
|
High CPU Utilisation on AWS RDS - Postgres
|
To allow everyone to get objects, but not allow anyone to list objects, you can apply a bucket policy:{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::mybucket/myfolder1/*",
"arn:aws:s3:::mybucket/myfolder2/*"
]
}
]
}Note that anyone who discovers the URL of an object can retrieve that object.ShareFollowansweredOct 8, 2015 at 21:20jarmodjarmod75k1616 gold badges124124 silver badges128128 bronze badges4So its not possible to have something like "arn:aws:s3:::mybucket/**/*.txt" to say all the text files? I've tried experimenting with extensions, but I can't seem to get it to work or see any documentation.–DustinOct 12, 2015 at 15:43@Dustin Unfortunately, I don't think they support wildcards like that yet.–jarmodOct 12, 2015 at 22:24@jarmod Can anyone upload files to the bucket? Which line is preventing this from happening.–horseJun 23, 2019 at 12:37@horse No, the policy only allows GetObject (download). It doesn’t need to deny uploads because deny is the implicit default rule.–jarmodJun 23, 2019 at 12:43Add a comment|
|
I have a sub directory under a bucket with ~7000 sub directories. And then each of those sub directories may have 1-100 files.I need the files to be public, but I don't want anyone to be able to see the list of subdirectories, or even the list of files under a given directory.I know I can set the ACL for the files to read-only, and then i think I can set the directory to private. But for this many files, I'm hoping there is a much easier solution?
|
S3 public read-only files but private directories
|
+50You cannot use the--cache-controloption that aws cli provides to invalidate files in CloudFront. The--cache-controloption maps directly to theCache-Control headerand CloudFront caches the headers along with the file, so if you change a header you must also invalidate to tell CloudFront to pull in the changed headers.If you want to use the aws cli, then you must parse the output of the sync command and then use the aws cloudfront cli.Or, you can uses3cmdfrom s3tools.org. This program provides the the--cf-invalidateoption to invalidate the uploaded filed in CloudFront and a sync command synchronize a directory tree to S3.s3cmd sync --cf-invalidate <local path> s3://<bucket name>Read, thes3cmd usage pagefor more details.ShareFollowansweredApr 14, 2015 at 19:34Keith ShawKeith Shaw64466 silver badges77 bronze badges2The --cf-invalidate option gives the following error for me: ERROR: Parameter problem: Unable to translate S3 URI to CloudFront distribution name: s3://<bucket name>–Lars HøidahlSep 23, 2021 at 6:33I had that error (Unable to translate S3 URI to CloudFront distribution name) when my S3 origin domain name in the CloudFront distribution was set tobucket_regional_domain_name(xxxx.s3.eu-west-2.amazonaws.com) instead ofbucket_domain_name(xxxx.s3.amazonaws.com). It's a problem in thes3cmdcode:github.com/s3tools/s3cmd/blob/master/S3/CloudFront.py–OliniuszJan 12 at 21:37Add a comment|
|
I have a rails app that uses the aws cli to sync a bunch of content and config with my S3 bucket like so:
aws s3 sync --acl 'public-read' #{some_path} s3://#{bucket_path}
Now I am looking for some easy way to mark everything that was just updated in the sync as invalidated or expired for CloudFront. I am wondering if there is some way to use the --cache-control flag that the aws cli provides to make this happen, so that instead of invalidating CloudFront, the files are just marked as expired and CloudFront will be forced to fetch fresh data from the bucket. I am aware of the CloudFront POST API to mark files for invalidation, but that will mean I have to detect what changed in the last sync and then make the API call. I might have anywhere from thousands of files to a single file syncing, which is not a pleasant prospect. But if I have to go this route, how would I go about detecting changes without parsing the s3 sync console output, of course?
Or any other ideas?
Thanks!
|
Amazon CloudFront cache invalidation after 'aws s3 sync' CLI to update s3 bucket contents
|
So I believe I resolved this issue. It appears as if aws was dynamically changing ip addresses. When I was referencing ftp.domain.com for my passiveip the ip that it resolved to didn't match the initial ip tied to the cname record.The solution was to assign a static elastic ip to my ec2 instance and set my passiveip in pureftp to my static elastic ip. Thus far it appears to have resolved my issue.ShareFollowansweredFeb 6, 2015 at 7:21Code JunkieCode Junkie7,6922727 gold badges8080 silver badges143143 bronze badges2Seems legit. The protocol conponents FTP uses to set up the data connection is hideously error-prone in situations like that. Was "ftp.domain.com" originally a CNAME referencing your machine's*-compute*.amazonaws.comhostname?–Michael - sqlbotFeb 6, 2015 at 13:39@Michael Yes it was and it still failed. It was returning an alternate IP address. Even after I gave the machine a static ip address it still failed if I set ForcePassiveIP to use ftp.domain.com. I ended up just hard setting the ForcePassiveIP with the static ip.–Code JunkieFeb 6, 2015 at 15:00Add a comment|
|
I have pureftp running on an AWS ec2 instance. I'm trying to get it to run in passive mode which I thought was working, however I'm finding it may not be working correctly. I'm receiving the following error in FileZillaStatus: Connected
Status: Retrieving directory listing...
Status: Server sent passive reply with unroutable address. Using server address instead.
Status: Directory listing of "/" successfulThe odd part is some people are unable to log in while others are.I have the following pureftp configurationPort Range#Port range for passive connections replies. - for firewalling.
PassivePortRange `50000 50100`PASV IP#Force an IP address in PASV/EPSV/SPSV replies. - for NAT.
#Symbolic host names are also accepted for gateways with dynamic IP
#addresses.
ForcePassiveIP `ftp.mydomain.com` "my cname record is mapped to my ec2 public dns"When I view the local port range on the server, /proc/sys/net/ipv4/ip_local_port_range the following are open.32768 61000My ec2 security group has port 50000 - 50100 openWhen I view my server logs I don't see much other than this every once in a while.Feb 5 08:57:41 ip-172-11-42-52 dhclient[1062]: DHCPREQUEST on eth0 to 172.11.32.1 port 67 (xid=0x601547fd)
Feb 5 08:57:41 ip-172-11-42-52 dhclient[1062]: DHCPACK from 172.11.32.1 (xid=0x601547fd)
Feb 5 08:57:43 ip-172-11-42-52 dhclient[1062]: bound to 172.11.42.52 -- renewal in 1417 seconds.Anybody have any idea where things might be going wrong?
|
AWS EC2 Passive FTP - Server sent passive reply with unroutable address. Using server address instead
|
Object metadata is covered by the GetObject action. For versioned objects you would instead use GetObjectVersion. So the policies you have listed should be working for accessing metadata. There's a great summary of all the permissions and what each one coversherein the AWS docs.If you're able to recreate the error using the AWS REST interface instead of the web console you can get more detailed error information. Using a tool likethis perl command line utilitycan be useful in that regard. Using this you can determine the specificS3 error codesthat you don't typically see when using their web console. Knowing the specific error that's causing the problem will go a long way to determining why the users can't edit the metadata despite your having the correct policies in place.ShareFollowansweredJul 21, 2014 at 14:51Bruce PBruce P20.3k88 gold badges6464 silver badges7373 bronze badges1Ok, I've gotten the utility you linked to working. It lists the buckets I have successfully. However, I have no clue how to use it to try to modify the metadata of a file. I looked through the docs but they only references to "metadata" seem to be in connection with copying or moving a file. Can you point me in the right direction?–SandwichAug 13, 2014 at 12:52Add a comment|
|
I want to grant an AWS IAM group permission to upload, view, modify, and delete objects in a single bucket, through the Management Console. I've got most of it down, but I'm getting reports that users in that group areunable to modify object metadata- they're getting the "Sorry! You were denied access to do that!" dialog.Here's the policy I have:{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1403204838000",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::example-bucket"
]
},
{
"Sid": "Stmt1403205082000",
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "Stmt1403205119000",
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
]
}
]
}Does anyone know which S3 action(s) I need to assign to allow the group access to modify object properties (specifically, the metadata)?
|
What's the specific permission to allow an AWS IAM group access to change an S3 object's metadata?
|
I did the following and solved the problem, but it feels kind of forced, like I missed a step somewhere.Go to elastic beanstalk -> application -> configuration -> software configurationClick on the gear buttonEnter a new environment variableSECRET_KEY_BASE xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxSave and wait for AWS server to restartShareFolloweditedMay 29, 2014 at 14:12Bohemian♦419k9797 gold badges587587 silver badges736736 bronze badgesansweredMay 29, 2014 at 13:56Need A HandNeed A Hand57711 gold badge66 silver badges1818 bronze badges21Also possible on the command line:eb setenv SECRET_KEY_BASE=foo–DamienSep 24, 2015 at 10:161Possible when creating env:eb create <env name> --envvars SECRET_KEY_BASE=foo–Jon BurgessJul 7, 2016 at 1:10Add a comment|
|
I am trying to upload my rails project on AWS Beanstalk.I've already run eb init, eb start and configured the database settings to point to RDS.
After I pushed using git aws.push and waited for AWS server to be started, the link provided says:"502 Bad Gateway nginx"In the logs-------------------------------------
/var/app/support/logs/passenger.log
-------------------------------------
App 6861 stderr: [ 2014-05-29 13:26:59.1308 6893/0x00000001e50050(Worker 1) utils.rb:68 ]:
*** Exception RuntimeError in Rack application object (Missing `secret_key_base` for
'production' environment, set this value in `config/secrets.yml`) (process 6893, thread
0x00000001e50050(Worker 1)):
In my secrets.yml:
# Do not keep production secrets in the repository,
# instead read values from the environment.
production:
secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
|
Rails 4.1 AWS Beanstalk cannot find secret key base
|
I got the answer for my question. First go tohttp://download.eclipse.org/releases/heliosand install Database development. Then go tohttp://aws.amazon.com/eclipseand installAmazon EC2 Management. And lastly install the AWS Toolkit for Eclipse.ShareFollowansweredFeb 18, 2014 at 11:39anu_ranu_r1,61277 gold badges3131 silver badges6161 bronze badges21this worked for me, just as the steps said. Upon installing EC2 Management and launching Eclipse, a prompt will appear for entering the Access Key and Secret–OdaymOct 4, 2014 at 15:32I installed the latest version of the Toolkit today (AWSSDK.2.3.11) onto Eclipse Kepler and things seem slightly improved: the installer gave a list of components to install. I tried selecting just the SDK and got the error mentioned above. Adding the EC2 Management component fixed it. Thanks for the hint.–BampferDec 8, 2014 at 23:52Add a comment|
|
I am trying to install AWS toolkit for eclipse in Eclipse Helios. I cannot install it as I am getting the following errors. How to fix this?Missing requirement: IdentityManagement 1.0.0.v201402141427 (com.amazonaws.eclipse.identitymanagement 1.0.0.v201402141427) requires 'bundle com.amazonaws.eclipse.ec2 1.1.0' but it could not be found
Cannot satisfy dependency:
From: AWS Toolkit for Eclipse Core 2.0.1.v201402141427 (com.amazonaws.eclipse.core.feature.feature.group 2.0.1.v201402141427)
To: com.amazonaws.eclipse.identitymanagement [1.0.0.v201402141427]
|
Cannot install AWS toolkit for eclipse.How to fix these errors?
|
-1Update:You can now change a RDS security group, see user115813's answer a few pixels under my original answer.Please feel free to validate his answer instead of mine.ShareFolloweditedSep 1, 2014 at 19:27answeredAug 30, 2013 at 16:13Thibault D.Thibault D.10.1k33 gold badges2626 silver badges5656 bronze badges6Thanks. It would be helpful if that was documented somewhere or at least where I could find it cause it wasn't in the security groups docs. But maybe people with RDS already have and EC2 set up, whereas I creates an RDS to work with EMR. thanks!!! I hate when things seem mysterious–user2390363Aug 30, 2013 at 17:23Do you mean something like that?docs.aws.amazon.com/AmazonRDS/latest/UserGuide/…–Thibault D.Aug 30, 2013 at 17:29Hi thibaultd, I read through that documentation, but since I wasn't using any EC2, i didn't really see how it applied to those only using RDS and EMR, I never spun up EC2. But with your answer and looking over the document it makes sense now. I'm just getting started. Thanks again!–user2390363Aug 30, 2013 at 17:52What?? Both statements in this answer seem to be false - My RDS instancecanhave multiple security groups, and Icanchange these groups after launching.–Tony RFeb 28, 2014 at 22:43My answer may be out of date, as things change. I'll have to check.–Thibault D.Mar 1, 2014 at 14:44|Show1more comment
|
I'm new to AWS and RDS. I've combed through help files and other stackflow questions, but can't seem to find out if i'm doing something wrong.
When I go to my RDS Instance, I seeSecurity Groups:default( active )I click default, and it takes me to the SG page, where I create new groups.
However, any rules I put in those new groups don't work, only the rules I put in the default group works. In some of the documentation, I see the screenshots and the beside the Security Groups on the instance page, it doesn't list default, but a user created group.So is there some way to make all the new groups active or a way to change which group has precedence on that Instance page? Or am I going to have to put all my rules in the default group?
|
RDS Security groups - default only working
|
For a list of product categories ( and the sub categories ) you will need to log into seller central, then help -> Manage inventory -> Reference -> Tree Guides.Alternativelyhere is the link. Note, you will still need to authenticate for that link to work.ShareFolloweditedApr 17, 2016 at 10:13BenMorel35.4k5151 gold badges191191 silver badges330330 bronze badgesansweredJan 7, 2013 at 19:17Robert HRobert H11.6k1818 gold badges7171 silver badges110110 bronze badgesAdd a comment|
|
I need to develop an application that sends feed submissions via Amazon MWS. But I'm currently in trouble, because when you send a new product to Amazon you must specify a product category. I found a list of categories for different endpoints in the Amazon Marketplace Web Service Products
API Section Reference, but I believe this isn't the complete list of categories, because those main categories also have child categories that are not listed. I searched all over the internet but I'm still stuck. I also looked in the API docs to see whether I can send a request to list all available categories, but no luck yet. Any help will be welcome.
|
Where I can get complete list of Amazon MWS product categories? [closed]
|
Try adding the region endpoint (because by default it's looking into us-east-1 enpoint) to your config command, then it should work:as-create-launch-config test1autoscale --region ap-southeast-1 --image-id ami-xxxx --instance-type m1.smallAlso take a look at this:Regions and Endpoints - Amazon Web Services GlossaryShareFollowansweredJun 14, 2012 at 7:30domdom11.9k1010 gold badges5151 silver badges7474 bronze badges1The problem is probably due to the wrong region. Also, don't forget to change the API endpoint, either by using -U http :// xxx or by setting AWS_AUTO_SCALING_URL environment variable (export AWS_AUTO_SCALING_URL=http :// ... ,at least on Linux). Don't forget the http or https in the url. [Note that the address must be without spaces. I had to put them because otherwise addresses are transformed into links :-)]–Davide VernizziJan 25, 2013 at 21:54Add a comment|
|
I'm using the following command to set up the AWS launch config:as-create-launch-config test1autoscale --image-id ami-xxxx --instance-type m1.smallwhere ami-xxxx is the image id that I got from my instance via the web console. I get the following error:Malformed input-AMI ami-xxxx is invalid: The AMI ID 'ami-xxxx' does not existI have triple checked that the image id matches the instance image id. My availability zone is ap-southeast-1a. I am not clear on what image is being asked for if it will not accept the image of the instance I wish to add to the autoscale group
|
Can't add image to AWS Autoscale launch config
|
This solved the issue, just delete the main folder:aws s3 rm "s3://BUCKET_NAME/folder/folder" --recursiveShareFollowansweredJun 12, 2019 at 10:07Lucas OconLucas Ocon19922 silver badges77 bronze badgesAdd a comment|
|
I just began to use S3 recently. I accidentally made a key that contains a bad character, and now I can't list the contents of that folder, nor delete that bad key. (I've since added checks to make sure I don't do this again).I was using an old "S3" python module from 2008 originally. Now I've switched to boto-2.0, and I still cannot delete it. I did quite a bit of research online, and it seems the problem is I have an invalid XML character, so it seems a problem at the lowest level, and no API has helped so far.I finally contacted Amazon, and they said to use "s3-curl.pl" fromhttp://aws.amazon.com/code/128. I downloaded it, and here's my key:<Key>info/[01</Key>I think I was doing a quick bash for loop over some files at the time, and I have "lscolors" set up, and so this happened.I tried./s3curl.pl --id <myID> --key <myKEY> -- -X DELETE https://mybucket.s3.amazonaws.com/info/[01(and also tried putting the URL in single/double quotes, and also tried to escape the '[').Without quotes on the URL, it hangs. With quotes, I get "curl: (3) [globbing] error: bad range specification after pos 50". I edited the s3-curl.pl to docurl --globoffand still get this error.I would appreciate any help.
|
Cannot delete Amazon S3 key that contains bad character
|
var.waf_log_group_name can't be a random name. It must include aws-waf-logs- as explained in the AWS docs.ShareFollowansweredFeb 26, 2023 at 10:25MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges0Add a comment|
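Assuming var.waf_log_group_name does not already carry that prefix, something like this should satisfy the constraint:
resource "aws_cloudwatch_log_group" "test_waf_log_group" {
  # WAFv2 requires the destination log group name to start with "aws-waf-logs-"
  name              = "aws-waf-logs-${var.waf_log_group_name}"
  retention_in_days = 14
}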
|
I am attempting to integrate the aws_wafv2_web_acl_logging_configuration resource with the aws_cloudwatch_log_group resource in my Terraform configuration. However, I am encountering an error that states:
Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes
According to the error, the aws_cloudwatch_log_group ARN is incorrect. But I followed the correct format according to the Terraform documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl_logging_configuration
Does anybody know the reason for this error? My code is below.
resource "aws_cloudwatch_log_group" "test_waf_log_group" {
name = var.waf_log_group_name
retention_in_days = 14
}
resource "aws_wafv2_web_acl_logging_configuration" "log_test_waf" {
depends_on = [aws_cloudwatch_log_group.test_waf_log_group]
log_destination_configs = [aws_cloudwatch_log_group.test_waf_log_group.arn]
resource_arn = aws_wafv2_web_acl.test_waf.arn
}
|
Terraform : Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes
|
Since theaws_subnetresource was created withfor_each, you could reference the values to create a list usingfor[1]:subnet_ids = [for k, v in aws_subnet.private : aws_subnet.private[k].id]Just tested with the same code you have andterraform planshows no errors.[1]https://www.terraform.io/language/expressions/for#for-expressionsShareFollowansweredMar 28, 2022 at 15:06Marko EMarko E15.3k33 gold badges2424 silver badges3131 bronze badges63Acceptance of the answer would be appreciated if this works. :)–Marko EMar 28, 2022 at 15:14what if you want the list for both private and public subnets?–mike01010Aug 8, 2023 at 7:28Not sure if I understand, but I don't think you can set EKS nodes to be in public and private subnets at the same time.–Marko EAug 8, 2023 at 7:34ah. no i was just wondering in general, if i create public and private subnets as two different resources, how could i get a list of all subnets easily...i managed to figure it out–mike01010Aug 8, 2023 at 18:361yeah, in my case i used a concat to merge the two lists:output "subnet_ids" { value = concat(values(aws_subnet.public_subnets)[*].id, values(aws_subnet.private_subnets)[*].id) }–mike01010Aug 8, 2023 at 18:58|Show1more comment
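An equivalent, slightly shorter form (following from the same for_each map, untested here) would be:
subnet_ids = values(aws_subnet.private)[*].id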
|
How may I get the list of subnet ids created with for_each (need at the bottom of my script):terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
region = "eu-west-1"
}
data "aws_availability_zones" "azs" {
state = "available"
}
locals {
az_names = data.aws_availability_zones.azs.names
}
variable "vpc_cidr" {
default = "10.0.0.0/16"
}
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
}
resource "aws_subnet" "private" {
for_each = {for idx, az_name in local.az_names: idx => az_name}
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, each.key)
availability_zone = local.az_names[each.key]
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
# I need to get the list of subnet ids (aws_subnet.private) here
subnet_ids = []
}
|
Getting the list of subnet ids created with for_each
|
"update an existing task definition" - you can't do this. You have to create a new revision of an existing task definition. Then you will also have to update your ECS service to use the new task revision. Running register_task_definition again should automatically create a new revision for you.ShareFollowansweredNov 4, 2021 at 2:18MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badgesAdd a comment|
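A rough boto3 sketch of that flow (cluster, service, and family names are placeholders; note that describe_task_definition returns read-only fields that must be stripped before re-registering):
import boto3

ecs = boto3.client('ecs')

# Fetch the latest revision of the task definition family
current = ecs.describe_task_definition(taskDefinition='my-family')['taskDefinition']

# Keep only the fields register_task_definition accepts as input
allowed = ('family', 'taskRoleArn', 'executionRoleArn', 'networkMode',
           'containerDefinitions', 'volumes', 'placementConstraints',
           'requiresCompatibilities', 'cpu', 'memory')
new_def = {k: v for k, v in current.items() if k in allowed}

# ... apply your changes to new_def here ...

new_arn = ecs.register_task_definition(**new_def)['taskDefinition']['taskDefinitionArn']

# Point the service at the new revision
ecs.update_service(cluster='my-cluster', service='my-service', taskDefinition=new_arn)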
|
Using boto3, we can create a new task definition:
client = boto3.client('ecs')
client.register_task_definition(...)How do we update an existing task definition? Is it just another call with changes and the same family name?
|
AWS ECS - Using Boto3 to update a task definition
|
a list of all possible values for the service and resource type parts in an AWS ARNThe AWSService Authorization Referenceis what you're looking for, specifically:Actions, resources, and condition keys for AWS services.The Service Authorization Reference provides alist of the actions, resources, and condition keys that are supported by each AWS service. You can specify actions, resources, and condition keys in AWS Identity and Access Management (IAM) policies to manage access to AWS resources.If there is some API I can run and get the list in json or other data format, it would be even better.Unfortunately, no API exists for this info as it's not a service but more of a documentation reference.ShareFollowansweredOct 17, 2021 at 9:33Ermiya EskandaryErmiya Eskandary21.2k33 gold badges4242 silver badges5050 bronze badgesAdd a comment|
|
I'm searching for a list of all possible values for the service and resource type parts in AWS ARN.The two parts are explained in the documentation:https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.htmlIf there is some API I can run and get the list in json or other data format, it would be even better.
|
Where can I find the list of possible values for the service and resource type parts in AWS ARN?
|
Using the command below, you can see the values from the stage file:
select t.$1, t.$2 from @mystage1 (file_format => myformat) t;
Based on the data, you can change your copy command as below:
COPY INTO my_table(col1, col2, col3) from (select $1, $2, try_to_date($3) from @mystage1)
file_format=(type = csv FIELD_DELIMITER = '\u00EA' SKIP_HEADER = 1 NULL_IF = ('') ERROR_ON_COLUMN_COUNT_MISMATCH = false EMPTY_FIELD_AS_NULL = TRUE)
on_error='continue'ShareFolloweditedOct 7, 2021 at 12:55Wai Ha Lee8,6949090 gold badges5858 silver badges9393 bronze badgesansweredOct 7, 2021 at 4:53SrigaSriga1,24311 gold badge66 silver badges1313 bronze badgesAdd a comment|
|
I have one table in snowflake, I am performing bulk load using.
one of the columns in table is date, but in the source table which is on sql server is having null values in date column.The flow of data is as :sql_server-->S3 buckets -->snowflake_tableI am able to perform the sqoop job in EMR , but not able to load the data into snowflake table, as it is not accepting null values in the date column.The error is :Date '' is not recognized File 'schema_name/table_name/file1', line 2, character 18 Row 2,
column "table_name"["column_name":5] If you would like to continue loading when an error is
encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option.can anyone help, where I am missing
|
Snowflake table is not accepting null values in date field
|
Assuming this is a single-page app (SPA) since this is React - in other words, there is no file in your S3 bucket at example-1/index.html? For single-page apps, whatever is delivering the static contents needs to return the root index.html file for all unmatched paths. In CloudFront, you can click on the distribution, then click on "Error pages", click "Create a custom error response", select 404 for the HTTP error code, select "create a custom error response" and put /index.html in the response page path with a 200 for the response code. You will want to repeat this for a 403 error code.ShareFollowansweredJul 16, 2021 at 18:49tplusktplusk94988 silver badges1515 bronze badges1Thanks, had to edit cloudfront and that fixed it–Leslie AlldridgeSep 10, 2022 at 4:53Add a comment|
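If the distribution is managed as code rather than through the console, the same setting lives in the distribution's custom error responses; roughly, in CloudFormation terms (double-check the property names against the current docs):
CustomErrorResponses:
  - ErrorCode: 403
    ResponseCode: 200
    ResponsePagePath: /index.html
    ErrorCachingMinTTL: 0
  - ErrorCode: 404
    ResponseCode: 200
    ResponsePagePath: /index.html
    ErrorCachingMinTTL: 0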
|
On clicking/refresh following sitehttps://rrc6k2ykxxxxx.cloudfront.net/example-1this error comes for all non home page url
I have reactjs App on s3 bucket which is served through cloudfront.Error:
NoSuchKey
The specified key does not exist.
Key: example-1what would be best way to redirect refresh on non home pages back to home page? thanks
|
static website on s3 gives 404 on refresh for non home page: no such keys error
|
To get output from an ECS/Fargate task, I think you have to use the Task Token Integration instead of Run Job (Sync), which is usually recommended for Fargate tasks. You can pass the token as a container override ("TASK_TOKEN": "$$.Task.Token"). Then inside your image you need some logic like this:
import os

import boto3

client = boto3.client('stepfunctions')
client.send_task_success(
taskToken=os.environ["TASK_TOKEN"],
output=output
)to pass it back.ShareFollowansweredFeb 10, 2022 at 20:50Drew BollingerDrew Bollinger18622 silver badges77 bronze badgesAdd a comment|
|
Have previously worked with lambda orchestration using AWS step function. This has been working very well. Setting the result_path of each lambda will pass along arguments to subsequent lambda.However, I now need to run a fargate task and then pass along arguments from that fargate task to subsequent lambdas. I have created a python script that acts as the entrypoint in the container definition. Obviously in a lambda function thehandler(event, context)acts as the entrypoint and by defining areturn {"return_object": "hello_world"}its easy to pass a long a argument to the next state of the state machine.In my case though, I have task definition with a container definition created from this Dockerfile:FROM python:3.7-slim
COPY my_script.py /my_script.py
RUN ln -s /python/my_script.py /usr/bin/my_script && \
chmod +x /python/my_script.py
ENTRYPOINT ["my_script"]Hence, I am able to invoke the state machine and it will execute my_script as intended. But how do I get the output from this python script and pass it along to another state in the state machine?I have found some documentation on how to pass along inputs, but no example of passing along outputs.
|
Input and Ouput to ECS task in Step function
|
As per the docs, for network_interfaces you should use security_groups, not vpc_security_group_ids:
network_interfaces {
security_groups = [aws_security_group.name32.id]
associate_public_ip_address = true
subnet_id = aws_subnet.name1.id
delete_on_termination = true
}ShareFollowansweredMar 27, 2021 at 1:44MarcinMarcin227k1414 gold badges267267 silver badges322322 bronze badges11Thank you. I was reading docs but i guess i missed this moment. works fine. Have a good day–Uber SholderMar 27, 2021 at 6:07Add a comment|
|
I'm writing Terraform infra and I have a problem with security groups for my launch template.
image_id = aws_ami_from_instance.name12.id
network_interfaces {
vpc_security_group_ids = [aws_security_group.name32.id]
associate_public_ip_address = true
subnet_id = aws_subnet.name1.id
delete_on_termination = true
}> terraform apply
Error: Unsupported
on LT.tf line 15, in resource "aws_launch_template" "LTforASG
15: vpc_security_group_ids = aws_security_group.pub_SG.
An argument named "vpc_security_group_ids" is not expected here.And if i write it outside of "network_interface" block it gives error like:Error: Error creating Auto Scaling Group: InvalidQueryParameter: Invalid launch template: When a network interface is provided, the security groups must be a part of it
status code: 400, request id: 59d14734-6cde-4027-b245-f3269b7a8071Thanks
|
Cant add security group to launch template
|
Thanks for the answers. I need to perform this action in a Lambda and this is the result:
import boto3
import json
s3 = boto3.client('s3')
def lambda_handler(event, context):
file='test/data.csv'
bucket = "my-bucket"
response = s3.get_object(Bucket=bucket,Key=file )
fileout = 'test/dout.txt'
rout = s3.get_object(Bucket=bucket,Key=fileout )
data = []
it = response['Body'].iter_lines()
for i, line in enumerate(it):
# Do the modification here
modification_in_line = line.decode('utf-8').xxxxxxx # xxxxxxx is the action
data.append(modification_in_line)
r = s3.put_object(
Body='\n'.join(data), Bucket=bucket, Key=fileout,)
return {
'statusCode': 200,
'body': json.dumps(data),
}ShareFollowansweredJan 14, 2021 at 15:12vll1990vll199032144 silver badges1717 bronze badgesAdd a comment|
|
I am working with python and I need to check and edit the content of some files stored in S3.I need to check if they have a char o string. In that case, I have to replace this char/string.For example:
I want to replace ; with . in the following file
File1.txtThis is an example;After the replacement:File1.txtThis is an example.Is there a way to do the replace without downloading the file?
|
How to edit S3 files with python
|
Can do something like thisconst lambdaARole = new iam.Role(this, 'LambdaRole', {
assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});
lambdaARole.addManagedPolicy(
ManagedPolicy.fromAwsManagedPolicyName('AmazonDynamoDBFullAccess')
);
const lambdaA = new lambda.Function(this, 'lambda-a', {
functionName: 'lambda-a',
memorySize: 256,
runtime: lambda.Runtime.NODEJS_12_X,
handler: 'index.handler',
code: lambda.Code.fromAsset(path.join(require.resolve('/lambda-a'), '..')),
role: lambdaARole,
});ShareFollowansweredDec 3, 2020 at 2:14maafkmaafk6,44677 gold badges3838 silver badges6464 bronze badges31It's not a good practice to give FullAccess to any service. Use the least privilege principle–Ruben J GarciaApr 27, 2022 at 17:14@RubenJGarcia - Can you share an example for that?–MooncraterAug 19, 2022 at 7:19Thanks for this. I would have thought that something likemyddbTable.grantFullAccess(myLamdaHandler)would have worked. "It compiles"... however my lambda doesn't seem to have access still.–Damien SawyerJan 12, 2023 at 6:29Add a comment|
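Following up on the least-privilege comments: below is a rough sketch of the same idea using the CDK v2 Python bindings, scoping the Lambda to creating and using only tables under a name prefix instead of attaching AmazonDynamoDBFullAccess. The construct names, asset path and table-name prefix are illustrative assumptions, not the original answer's code:
from aws_cdk import Stack, aws_iam as iam, aws_lambda as _lambda
from constructs import Construct

class LambdaAStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        fn = _lambda.Function(
            self, 'lambda-a',
            function_name='lambda-a',
            memory_size=256,
            runtime=_lambda.Runtime.NODEJS_12_X,
            handler='index.handler',
            code=_lambda.Code.from_asset('lambda-a'),  # placeholder asset path
        )

        # Allow creating and using only tables whose names start with "lambda-a-"
        fn.add_to_role_policy(iam.PolicyStatement(
            actions=['dynamodb:CreateTable', 'dynamodb:DescribeTable',
                     'dynamodb:PutItem', 'dynamodb:GetItem', 'dynamodb:Query'],
            resources=[f'arn:aws:dynamodb:{self.region}:{self.account}:table/lambda-a-*'],
        ))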
|
I have a lambda which I want to be able to create dynamo db tables (not just access but also create tables with dynamic names).const lambdaA = new lambda.Function(this, 'lambda-a', {
functionName: 'lambda-a',
memorySize: 256,
runtime: lambda.Runtime.NODEJS_12_X,
handler: 'index.handler',
code: lambda.Code.fromAsset(path.join(require.resolve('/lambda-a'), '..')),
});How can I do this with AWS CDK?I guess I need to somehow add the policyAmazonDynamoDBFullAccess(or some other policy to allow creating tables) to the lambda's execution role.
|
AWS CDK Grant Lambda DynamoDB FullAccess
|
I've had success with php-amqplib, and I am actually not using the newest version (I am on v2.12.3). I can connect using this:$connection = new AMQPSSLConnection($host, $port, $user, $pass, $vhost, ['verify_peer_name' => false], [], 'ssl');I found that Ihadto set'verify_peer_name' => false, or else I just got aunable to connect to ssl://localhost:5671 (Unknown error)error, but I was also port-forwarding throughlocalhost.ShareFollowansweredMar 26, 2021 at 8:51polesenpolesen71311 gold badge66 silver badges1818 bronze badgesAdd a comment|
|
I created an Amazon MQ broker with the following settings:
Select broker engine: RabbitMQ 3.8.6
Single-instance broker
Network and security: Public access
VPC and subnets: Use the default VPC and subnet(s)
I have tried two libraries: the one from the RabbitMQ manual and Enqueue\AmqpExt. Neither of them can connect to Amazon (with a local docker container everything works fine, but I want to try Amazon MQ). I used the code below:use Enqueue\AmqpExt\AmqpConnectionFactory;
use PhpAmqpLib\Connection\AMQPSSLConnection;
$connectionFactory = new AmqpConnectionFactory([
'host' => 'b-da219bXXXXXXXXXXXX86a.mq.us-east-1.amazonaws.com',
'port' => 5671,
'vhost' => '/',
'user' => 'xxxx',
'pass' => 'xxxx', // I can login with this to rabbit admin panel
'persisted' => false,
'ssl_on' => false,
'ssl_verify' => false,
]);
$c = $connectionFactory->createContext();
$queue = $c->createQueue('emails');
$c->declareQueue($queue);Result:Library error: connection closed unexpectedly - Potential login failure.With 'ssl_on' => true the same error.I don't know can it be happen because I didn't provide ssl cert to amazon.If so, how to fix it?
|
Can't connect to RabbitMQ on Amazon MQ
|
This is likely down toburst capacityin which you gain your capacity over a 300 second period to use for burstable actions (such as scanning an entire table).This would mean if you used all of these credits other interactions would suffer as they not have enough capacity available to them.You can see the amount of consumed WCU/RCU via either CloudWatch metrics or within the DynamoDB interface itself (via the Metrics tab).ShareFollowansweredAug 19, 2020 at 7:24Chris WilliamsChris Williams33.5k44 gold badges3434 silver badges7171 bronze badges0Add a comment|
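If you want to see the capacity the scan actually consumes, a quick boto3 check (the table name is a placeholder) is to ask DynamoDB to return consumed capacity with each page of the scan:
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-table')  # placeholder table name

consumed = 0.0
response = table.scan(ReturnConsumedCapacity='TOTAL')
consumed += response['ConsumedCapacity']['CapacityUnits']
while 'LastEvaluatedKey' in response:
    response = table.scan(ReturnConsumedCapacity='TOTAL',
                          ExclusiveStartKey=response['LastEvaluatedKey'])
    consumed += response['ConsumedCapacity']['CapacityUnits']

# Compare this number against the roughly 300 seconds of accumulated burst credit
print(f'Scan consumed {consumed} RCUs')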
|
I made a table with 1346 items, each item being less than 4KB in size. I provisioned 1 read capacity unit, so I'd expect on average 1 item read per second. However, a simple scan of all 1346 items returns almost immediately.What am I missing here?
|
Why is my DynamoDB scan so fast with only 1 provisioned read capacity unit?
|
Since your error that you received says "Unknown host" it gives you a clue where you can troubleshoot first.Refer tothe official psycopg2 documentationfor the guidance on how to use the arguments. You can connect using the dsn parameter or you can use keyword arguments. Since you are using keyword arguments (the second option in the documentation), you should specify the host without the port and the database and see if that works. That means thatdsn_hostname = "redshift-cluster-1.xxx.xxx.redshift.amazonaws.com".This is very similar tothis Stack Overflow question and answer here, if you would like more information.Also I really hope those aren't your actual credentials, if so please remove them for the security of your database!ShareFollowansweredAug 11, 2020 at 21:19kayelowkayelow31122 silver badges33 bronze badges1@RohanNaik Hope that helped, if so you can mark your question as answered.–kayelowAug 12, 2020 at 22:15Add a comment|
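In other words, something like this should work, keeping the port and database name out of the host string; the credentials are of course placeholders:
import psycopg2

con = psycopg2.connect(
    dbname='dev',
    host='redshift-cluster-1.cdd5oumaebpd.ap-south-1.redshift.amazonaws.com',
    port='5439',
    user='my_user',         # placeholder
    password='my_password'  # placeholder
)
with con.cursor() as cur:
    cur.execute('SELECT 1')
    print(cur.fetchone())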
|
I have created a cluster in redshift and I am trying to connect to the cluster via python psycopg ... I am able to connect through sqlworkbench/j using jdbc driver but I am unable to do so in python.
Here is my codeimport psycopg2
import os
# Redshift Server Details
dsn_database = "dev"
dsn_hostname = "redshift-cluster-1.cdd5oumaebpd.ap-south-1.redshift.amazonaws.com:5439/dev"
dsn_port = "5439"
dsn_uid = "*****"
dsn_pwd = "*****"
con=psycopg2.connect(dbname= dsn_database, host=dsn_hostname,
port= dsn_port, user= dsn_uid, password= dsn_pwd)I am getting the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "host.amazonaws.com" to address: Unknown hostPlease help!
|
unable to connect to redshift cluster using psycopg
|
When I turned off NLA on the Host server I was able to login, but obviously that's not a reasonable answer.Turned out to be BitDefender, once I turned off protection I could connect to all my EC2 VMS.-Just a note for me. This was only blocking VMS I was connecting to through ec2 Public DNS.ShareFollowansweredJul 11, 2020 at 14:26MikeMike11611 silver badge33 bronze badges1I spent an entire weekend trying to fix this problem (was trying to access an Azure VM so I could do an assessment exercise for a job interview) and sure enough itwasBitdefender's fault. Thank you v.v.v.much.–immutablSep 21, 2020 at 5:59Add a comment|
|
After creating an ec2 windows machine from a custom AMI. I am able to get the admin password. However when I try to login to the machine.I get the RDP error saying'An authentication error has occurred. The local security authority cannot be contacted. This could be due to an expired password.'The password isn't expired and there is no issue with the AMI as when I create another instance from the same it works fine.
|
AWS ec2 windows login error saying An authentication error has occured. The local security authority cannot be contacted
|
Amazon maps-out typical latency between IP addresses and AWS regions. ChooseLatency-based Routingto have the fastest response.Geolocationmaps the IP addresses to geographic locations. This permits rules like "send all users from Côte d'Ivoire to the website in France", so they see a language-specific version. It can redirect bycountry, region (eg Oceania) and US state. Geolocation cares more about the location of users rather than the speed of their connection.Geolocation pre-dates GDPR. It could be used forgeo-blocking, but this is typically done in Amazon CloudFront:Restricting the Geographic Distribution of Your Content - Amazon CloudFrontShareFollowansweredMay 29, 2020 at 4:24John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges497497 bronze badges2Hi, can you elaborate on Latency based routing policy?–Aishwarya JoshiMay 28, 2023 at 15:461@AishwaryaJoshi AWS measure the latency (response time) to many different IP addresses in the world. This can result in faster connections than geo-based routing. For example, there might be a high-speed undersea cable that connects to a distant country faster than a closer country. See:Latency-based routing - Amazon Route 53–John RotensteinMay 28, 2023 at 23:59Add a comment|
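For illustration, here is a hedged boto3 sketch of how the two policies are expressed on record sets; the hosted zone ID, record names and IP addresses are placeholders, and the two records are independent examples (records sharing the same name and type must all use the same routing policy):
import boto3

route53 = boto3.client('route53')
zone_id = 'Z0123456789EXAMPLE'  # placeholder hosted zone ID

# Latency-based record: Route 53 answers with the region that responds fastest
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'fast.example.com', 'Type': 'A',
            'SetIdentifier': 'eu-west-1', 'Region': 'eu-west-1',
            'TTL': 60, 'ResourceRecords': [{'Value': '203.0.113.10'}],
        },
    }]},
)

# Geolocation record: users resolved from Europe get this answer
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'geo.example.com', 'Type': 'A',
            'SetIdentifier': 'europe', 'GeoLocation': {'ContinentCode': 'EU'},
            'TTL': 60, 'ResourceRecords': [{'Value': '203.0.113.20'}],
        },
    }]},
)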
|
AWS implements latency routing policy based on the ip address of resolver or masked ip address of client. It will help to find the region of low latency.If both geolocation policy and latency policy are based on the IP address of client, it comes to several questions:what's the difference between them?What's the purpose of Geolocation routing policy?Is geolocation policy used for complying the law of different country? e.g. GDPR, cookie usage.Europe: GDPRChina: data must be stored in China.In which case should I use Geolocation routing policy rather than latency policy?Referencehow does AWS Route 53 achieve latency based routing:https://youtu.be/PVBC1gb78r8?t=1963https://youtu.be/PcoQY82SDHw?t=622How does AWS Route 53 achieve latency based routing?
|
AWS Route 53 Routing Policy: Geolocation Vs Latency
|
#!/bin/bash
sudo python3 -m pip install matplotlib pandas pyarrowDO NOT installpyspark. It should be already there in EMR with required config. Installing may cause problems.ShareFolloweditedMay 14, 2020 at 22:10answeredMay 14, 2020 at 22:03SnigdhajyotiSnigdhajyoti1,3681010 silver badges2828 bronze badges0Add a comment|
|
I have an EMR (emr-5.30.0) cluster I'm trying to start with a bootstrap file in S3. The contents of the bootstrap file are:#!/bin/bash
sudo pip3 install --user \
matplotlib \
pandas \
pyarrow \
pysparkAnd the error in my stderr file is:WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Command "python setup.py egg_info" failed with error code 1 in /mnt/tmp/pip-build-br9bn1h3/pyspark/Seems pretty simple...no idea what is going on. Any help is appreciated.EDIT:Tried @Dennis Traub suggestion and get same error. New EMR bootstrap looks like this:#!/bin/bash
sudo pip3 install --upgrade setuptools
sudo pip3 install --user matplotlib pandas pyarrow pyspark
|
Can't get pip install to work on EMR cluster
|
I found the solution from a colleague.This link has the interruption rates.https://spot-bid-advisor.s3.amazonaws.com/spot-advisor-data.jsonI also updated my code accordingly. Sample code here:from collections import defaultdict
def get_ec2_spot_interruption(instances=[], os=None, region=None) -> defaultdict(None):
import requests
import json
results = defaultdict(None)
url_interruptions = "https://spot-bid-advisor.s3.amazonaws.com/spot-advisor-data.json"
try:
response = requests.get(url=url_interruptions)
spot_advisor = json.loads(response.text)['spot_advisor']
except requests.exceptions.ConnectionError:
return
rates = {
0: "<5%",
1: "5-10%",
2: "10-15%",
3: "15-20%",
4: ">20%"
}
for ii in instances:
try:
rate = spot_advisor[region][os][ii]['r']
results[ii] = rates[rate]
except KeyError:
results[ii] = ""
return resultsShareFolloweditedApr 30, 2020 at 16:36answeredApr 30, 2020 at 14:47Fuat UlugayFuat Ulugay56122 silver badges1616 bronze badgesAdd a comment|
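Example usage of the helper above; the instance types are just examples, and the region/OS strings follow the keys used in the spot-advisor JSON:
interruption_rates = get_ec2_spot_interruption(
    instances=['t3.medium', 'm5.large'],
    os='Linux',
    region='us-east-1',
)
for instance_type, rate in interruption_rates.items():
    print(f'{instance_type}: {rate}')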
|
I have created a script to get best prices for on-demand and also see spot prices for the same instances.https://github.com/fuatu/awsEC2pricefinderThis is what I give as an output below. I want to have another column for each instance and display the "Frequency of interruption".(awspricing) ~/Projects/awspricing $ python awsEC2pricing.py -t 2 4
Records are up-to-date
--------------------------
vCPU: 2.00
RAM: 4.00
OS: Linux
Region: US East (N. Virginia)
--------------------------
Instance vCPU RAM OS PriceH PriceM SpotH SpotM
t3a.medium 2.00 4.00 Linux 0.03760 27.07200 0.01140 8.20800
t3.medium 2.00 4.00 Linux 0.04160 29.95200 0.01250 9.00000
t2.medium 2.00 4.00 Linux 0.04640 33.40800 0.01430 10.29600
a1.large 2.00 4.00 Linux 0.05100 36.72000 0.01990 14.32800
t3a.large 2.00 8.00 Linux 0.07520 54.14400 0.02260 16.27200
m6g.large 2.00 8.00 Linux 0.07700 55.44000 0.00000 0.00000You can see "Frequency of interruption" percentages here:https://aws.amazon.com/ec2/spot/instance-advisor/I googled and also checked the boto3 ec2 methods and cannot find any option to get interruption rates. So any help to show how to get this data programmatically are welcome.
|
How to obtain Amazon EC2 Spot Instance interruption rates
|
Instead of rdesktop, theFreeRDP: A Remote Desktop Protocol Implementationseems to better accommodate thisCredSSP required by serverissue.xfreerdp /u:"Administrator" /v:ec2-3-1-49-118.ap-southeast-1.compute.amazonaws.comShareFollowansweredApr 3, 2020 at 3:06hendryhendry10.2k1818 gold badges8888 silver badges146146 bronze badgesAdd a comment|
|
When I use the recommendedrdesktopto connect to Windows EC2 host I see from Archlinux:$ rdesktop 54.254.180.73
ATTENTION! The server uses and invalid security certificate which can not be trusted for
the following identified reasons(s);
1. Certificate issuer is not trusted by this system.
Issuer: CN=EC2AMAZ-I5MV8JK
Review the following certificate info before you trust it to be added as an exception.
If you do not trust the certificate the connection atempt will be aborted:
Subject: CN=EC2AMAZ-I5MV8JK
Issuer: CN=EC2AMAZ-I5MV8JK
Valid From: Thu Mar 5 16:06:01 2020
To: Fri Sep 4 16:06:01 2020
Certificate fingerprints:
sha1: 98f1e92f9b9a3b57f4b2a23177f1bbe1a9afeb2c
sha256: 8e9f1a2e5497c972b56b8300f6e2ec3f59c8903103984cb5456a237c9a7b2d45
Do you trust this certificate (yes/no)? yes
Failed to initialize NLA, do you have correct Kerberos TGT initialized ?
Failed to connect, CredSSP required by server (check if server has disabled old TLS versions, if yes use -V option).I'm not sure where to go from here. Especially whenrdesktopdoesn't appear maintained.Any tips to connect to a Windows host?
|
Connecting to EC2 Windows host from Linux
|
There are two AWS services that might assist:TheAmazon Elastic Transcoder servicelets you convert media files stored in Amazon S3. For example, you can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions.TheAWS Elemental MediaConvertis a file-based video processing service that provides scalable video processing for content owners and distributors with media libraries of any size.These will not convert "while uploading". Rather, they willtranscode videosalready stored in Amazon S3 and will save the results back to S3. To reduce the storage size of a video, you would need to change some attributes (eg dimensions, quality, encoding method) that would result in a smaller file.ShareFollowansweredNov 25, 2019 at 21:11John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges497497 bronze badgesAdd a comment|
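As a rough sketch of the Elastic Transcoder route (not a definitive implementation): once a pipeline pointing at your input/output buckets exists, a Lambda triggered by the S3 upload can submit a transcode job like this. The pipeline ID is a placeholder and the preset ID should be checked against the system presets in your account:
import boto3

transcoder = boto3.client('elastictranscoder')

job = transcoder.create_job(
    PipelineId='1111111111111-abcde1',          # placeholder pipeline ID
    Input={'Key': 'uploads/original.mp4'},
    Outputs=[{
        'Key': 'compressed/original-720p.mp4',
        'PresetId': '1351620000001-000010',     # assumed generic 720p system preset
    }],
)
print(job['Job']['Id'])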
|
Is there any way to support reducing video size while uploading video files toAWS S3 bucket?I triedLambdaservice provided by AWS. I downloaded video file from the input bucket and usedffmpegto compress the video file. But there is only 512MB space limit intmpfolder which is only writable folder in Lambda and 512MB is not enough for my work.Anyone has ideas to figure out this?
|
Any way to compress video with AWS service?
|
According toThis, Your language to run the code should set as spark, not python.ShareFolloweditedSep 26, 2019 at 5:45answeredSep 25, 2019 at 14:21LamanusLamanus13.3k44 gold badges2222 silver badges4848 bronze badges6well i checked and its type is set to python.–Shikhar ChaudharySep 25, 2019 at 14:421that is problem. Set it tospark–Sandeep FatangareSep 26, 2019 at 3:444Hey guys, where do you find this Spark/Python setting "language to run the code"? I have the same problem trying to run a python script in IntelliJfrom awsglue.utils import getResolvedOptions \ ModuleNotFoundError: No module named 'awsglue'–RimerMay 1, 2020 at 19:24@Rimer Did you find the problem? I have the same issue with InteliJ–Moustafa MahmoudMar 2, 2021 at 18:43Unfortunately no, never found solution, and am no longer on the project :(–RimerMar 3, 2021 at 19:02|Show1more comment
|
I'm trying to run an ETL job in AWS Glue using Python. The script isimport sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
glueContext = GlueContext(SparkContext.getOrCreate())
person = glueContext.create_dynamic_frame.from_catalog(
database="test",
table_name="testetl_person")
person.printSchema()This script is running in AWS development endpoint and on running the job throws the below exceptionFile "/tmp/runscript.py", line 118, in <module>
runpy.run_path(temp_file_path, run_name='__main__')
File "/usr/local/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/local/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/glue-python-scripts-cf4xyag5/test.py", line 2, in <module>
ModuleNotFoundError: No module named 'awsglue.transforms'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 137, in <module>
raise e_type(e_value).with_tracsback(new_stack)
AttributeError: 'ModuleNotFoundError' object has no attribute 'with_tracsback'Can anyone help me out?
If you require further information do let me know.
|
AWS online Development endpoint throws importerror no module named aws glue.transforms
|
As of yesterday, SNS also supports strict message ordering and deduplication with FIFO topics.https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-sns-introduces-fifo-topics-with-strict-ordering-and-deduplication-of-messages/ShareFollowansweredOct 24, 2020 at 4:36Otavio FerreiraOtavio Ferreira89277 silver badges1111 bronze badgesAdd a comment|
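A minimal boto3 sketch of the FIFO fan-out setup (topic and queue names are placeholders, and the SQS queue policy that allows SNS to deliver to the queue is omitted for brevity):
import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

# Both the topic and the queue names must end in ".fifo"
topic = sns.create_topic(
    Name='orders.fifo',
    Attributes={'FifoTopic': 'true', 'ContentBasedDeduplication': 'true'},
)
queue = sqs.create_queue(
    QueueName='orders-consumer.fifo',
    Attributes={'FifoQueue': 'true', 'ContentBasedDeduplication': 'true'},
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue['QueueUrl'], AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

sns.subscribe(TopicArn=topic['TopicArn'], Protocol='sqs', Endpoint=queue_arn)

# Messages sharing a MessageGroupId are delivered to the queue in publish order
sns.publish(
    TopicArn=topic['TopicArn'],
    Message='order created',
    MessageGroupId='customer-42',
)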
|
The AWS FAQs for SNS says:Q: Will messages be delivered to me in the exact order they were published?The Amazon SNS service will attempt to deliver messages from the
publisher in the order they were published into the topic. However,
network issues could potentially result in out-of-order messages at
the subscriber end.Does it apply to SQS consumers, specially a FIFO SQS? I have a use case where one of the consumers needs to maintain the order in which the messages were sent. If this is not the case, I would need to use something else.
|
Are SNS messages fanned out to SQS queues keeping the order?
|
Should beheaders: {
'Accept-Encoding': 'gzip',
},ShareFollowansweredSep 18, 2020 at 23:59insivikainsivika58422 gold badges1010 silver badges2222 bronze badgesAdd a comment|
|
I use API Gateway on AWS. First, I activated the CORS option and sent a request with axios, and it worked. Then I activated content-encoding and added the axios optionaxios.defaults.headers.post['Content-Encoding'] = 'gzip'after which a CORS error occurred. How can I solve it?
|
Add content-encoding header on axios
|
Since external tables only allow you to select data, checking the usage privilege on the schema is enough to cover them:SELECT schemaname, tablename, usename,
has_schema_privilege(usrs.usename, schemaname, 'usage') AS usage
FROM SVV_EXTERNAL_TABLES, pg_user AS usrs
WHERE schemaname = '<my-schema-name>'
and usename = '<my-user>';ShareFolloweditedFeb 9, 2023 at 16:03answeredApr 2, 2019 at 13:45VzzarrVzzarr5,09033 gold badges4949 silver badges8787 bronze badgesAdd a comment|
|
This postis useful to show Redshift GRANTS but doesn't show GRANTS over external tables / schema.How to show external schema (and relative tables) privileges?
|
How to show Redshift Spectrum (external schema) GRANTS?
|
In the Lambda execution environment, the root logger is already preconfigured. You'll have to work with it or work around it. You could do some of the following:You can set the formatting directly on the root logger:root = logging.getLogger()
root.setLevel(logging.INFO)
root.handlers[0].setFormatter(logging.Formatter(fmt='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s', datefmt='%d/%b/%Y %H:%M:%S'))You could add the Watchtower handler to it (disclaimer: I have not tried this approach):root = logging.getLogger()
root.addHandler(cw_handler)However I'm wondering if you even need to use Watchtower. In Lambda, every line you print tostdout(so even just usingprint) get logged to Cloudwatch. So using the standardlogginginterface might be sufficient.ShareFollowansweredFeb 24, 2019 at 20:20Milan CermakMilan Cermak7,75433 gold badges4545 silver badges6060 bronze badges1How would you simultaneously support local logging (as per OP's attempt) and CloudWatch logging as per your excellent answer?–jtlz2Dec 15, 2022 at 10:25Add a comment|
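Putting the pieces together, here is a sketch that keeps the same format both locally and in Lambda, and optionally also ships to a custom log group via Watchtower. The log group and stream names are placeholders, and the keyword arguments mirror the ones used in the question (argument names vary across Watchtower versions):
import logging
from boto3.session import Session
from watchtower import CloudWatchLogHandler

FMT = '[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s'
DATEFMT = '%d/%b/%Y %H:%M:%S'
formatter = logging.Formatter(fmt=FMT, datefmt=DATEFMT)

root = logging.getLogger()
root.setLevel(logging.INFO)

if root.handlers:
    # Inside Lambda: reuse the preconfigured handler, just change its format
    root.handlers[0].setFormatter(formatter)
else:
    # Locally: add a plain stream handler
    local_handler = logging.StreamHandler()
    local_handler.setFormatter(formatter)
    root.addHandler(local_handler)

# Optional: also send records to a dedicated log group through Watchtower
cw_handler = CloudWatchLogHandler(log_group='my-log-group',
                                  stream_name='my-stream',
                                  boto3_session=Session())
cw_handler.setFormatter(formatter)
root.addHandler(cw_handler)

logging.getLogger('Test').info('Hello world')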
|
I have written following code to enable Cloudwatch support.import logging
from boto3.session import Session
from watchtower import CloudWatchLogHandler
logging.basicConfig(level=logging.INFO,format='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s',datefmt='%d/%b/%Y %H:%M:%S')
log = logging.getLogger('Test')
boto3_session = Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
region_name=REGION_NAME)
cw_handler = CloudWatchLogHandler(log_group=CLOUDWATCH_LOG_GROUP_NAME,stream_name=CLOUDWATCH_LOG_STREAM_NAME,boto3_session=boto3_session)
log.addHandler(cw_handler)Whenever i try to print any logger statement, i am getting different output on my local system and cloudwatch.Example:log.info("Hello world")Output of above logger statement on my local system (terminal) :[24/Feb/2019 15:25:06.969] [Test,<module>:1] [INFO] Hello worldOutput of above logger statement on cloudwatch (log stream) :Hello worldIs there something i am missing ?
|
Correct logging(python) format is not being sent to Cloudwatch using watchtower
|
In addition to RDS for PostgreSQL, which has a 32 TiB limit, you should take a look at Amazon Aurora PostgreSQL, which has a 64 TiB limit. In both cases, the largest single table you can create is 32 TiB, though you can't quite reach that size in RDS for PostgreSQL as some of the space will be taken up by the system catalog.Full disclosure: I am the product manager for Aurora PostgreSQL at AWS.ShareFollowansweredFeb 12, 2019 at 14:46Kevin JKevin J9111 bronze badge1Then why does this say 32 TiB?docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/…–KongressAug 5, 2020 at 15:20Add a comment|
|
Thanks in advance!I am planning to use AWS RDS Postgres for pretty big data (> ~50TB) , but I have couple of questions un-answeredIs 16TB the maximum limit for AWS RDS Postgres instance, if so how do people store > 16TB data.Is the limit of 16TB for RDS the maximum database size post compression that Postgres can store on AWS.Also I do not see any option for enabling compression while setting up AWS RDS Postgres DB instance. How to enable compression in AWS RDS Postgres?I have followed :https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.htmlhttps://blog.2ndquadrant.com/postgresql-maximum-table-size/(wherein Postgres table can have size greater than 32TB).https://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F
|
AWS RDS Postgres maximum instance and table size
|
For conditional logic in an SES template you can use if else statements like you would in code. For your example you would use something like<p>{{sender}} has invited you to join team {{#if teamName}}{{teamName}}{{/if}}</p>Taken from the following documentationhttps://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-personalized-email-advanced.htmlShareFollowansweredFeb 1, 2019 at 8:07bencrinklebencrinkle28844 silver badges1414 bronze badges0Add a comment|
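For completeness, here is a hedged boto3 example of sending that template with teamName left out of the data, so the {{#if}} block simply renders nothing; the addresses and the template name are placeholders:
import json
import boto3

ses = boto3.client('ses')

template_data = {'sender': 'Alice'}  # teamName deliberately omitted

ses.send_templated_email(
    Source='noreply@example.com',
    Destination={'ToAddresses': ['invitee@example.com']},
    Template='TeamInviteTemplate',   # placeholder template name
    TemplateData=json.dumps(template_data),
)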
|
The requirement is to send templated mail based on body data received from an API. The body data may not contain some tags; see the sample template part below.<p>{{sender}} has invited you to join team {{teamName}}</p>The body data may not contain teamName, so I want to put an if condition on {{teamName}} in the template.Please help me find a solution here.
|
How to add an if condition in an AWS SES HTML template?
|
The resulting role that SAM creates for you is just the name of your function with "Role" added to the end. You can use this information to get the Role or properties of it using normal CloudFormation functions.For example, if you wanted to access the role ARN ofMyFunction, you would use!GetAtt MyFunctionRole.Arnin your SAM YAML template. The same principle should apply for!Refand other functions.ShareFollowansweredSep 19, 2018 at 23:20Keeton HodgsonKeeton Hodgson47733 silver badges88 bronze badgesAdd a comment|
|
I like how a role + inline policy is created when I deploy my template:Resources:MyFUnction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
Description: Enter description of what this specific Lambda does
CodeUri: hello_world/build/
Handler: app.lambda_handler
Runtime: python2.7
Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
Variables:
PARAM1: VALUE
Policies:
# Using AWSLambdaExecute automatically creates a role named: <StackName>Role-<UUID>
- AWSLambdaExecute
# This policy is assigned as an Inline policy to the role
- Version: '2012-10-17' # Policy Document
Statement:
Effect: Allow
Action: ......Now can I ref the role that is dynamically created and add an Output: for it in the SAM template?
|
How do I get the name of a dynamically created lambda role?
|
Naming is always tough.For general naming - this is a good answer:https://softwareengineering.stackexchange.com/a/130104In addition, the scope of the namespace of your lambda functions is a consideration - e.g. ifallof your functions relate to users in application XYZ for enterprise ABC, thencreateis sufficient.However, if you have lambda functions for both enterprises ABC & DEF, and each have multiple applications with user management andmayneed differentcreatemethods for different things, then you may need something likeAbcApplicationxyzCreateUser.One other comment - in English,commandObject(e.g.createUser)readsbetter and sounds more natural when said aloud compared toobjectCommand(e.g.userCreate). But I have found it easier to have the contextual parts (e.g.companyorapplication; if needed, but better avoided if possible) at the start as it facilitates tools to organize methods better (contextCommandObjecte.g.AbcCoCustomerServiceAppCreateUser).In short, make it simple, avoid using anything that is implicitly obvious, but allow for distinguishing between different applications/systems/entities in the namespace if needed.ShareFollowansweredAug 9, 2018 at 14:12Brian QuinnBrian Quinn14622 bronze badges1This is some of the most sensible advice I've read about naming is as long as I can remember!–mbigrasMar 13, 2021 at 7:52Add a comment|
|
I'm building an API with the Serverless framework. Endpoints are defined on Amazon API Gateway, where each signature is mapped to an individual Lambda.What is a good naming convention for the Lambdas here? For example, the candidates forPOST /usercould be:userPostcreateUser
|
Naming conventions for AWS Lambdas intended for REST API access
|
FWIR if you leave the defaults then it won't create the profile since the defaults are allNONE.Your format is not quite correct for creating the profile configuration manually.It should be[profile example]
region=eu-west-1
output=textShareFollowansweredAug 7, 2018 at 18:10CheruvianCheruvian5,73711 gold badge2424 silver badges3434 bronze badges0Add a comment|
|
I'm trying to setup an new named profile using the awscliI usedaws configure --profile exampleto set the profile up but I left everything as the defaultNow I'm gettingThe config profile (example) could not be foundI even tried creating and modifying the~\.aws\configfile with the following but to no avail[example]
region=eu-west-1
output=textAny command I try to execute will result in the above errorI also tried reinstalling the awscliHelp is much appreciated, thanks!
|
AWS CLI: The config profile (example) could not be found
|
Using the task definition (portal or JSON) you can define"secrets"inside the"containerDefinitions"section which will be retrieved from secrets manager.Note: At the time of writing, Fargate only supports secrets that are a single value, not the JSON or key value secrets. So choose OTHER when creating the secret and just put a single text value there.{
"ipcMode": null,
"executionRoleArn": "arn:aws:iam::##:role/roleName",
"containerDefinitions": [
{
...
"secrets": [{
"name": "SomeEnvVariable",
"valueFrom": "arn:aws:secretsmanager:region:###:secret:service/secretname"
}],
...
}
],
"requiresCompatibilities": [
"FARGATE"
],
"networkMode": "awsvpc",
...
}Note: that execution role defined in the task needs a policy attached such asSecretsManagerReadWriteMore info in docsShareFollowansweredMay 13, 2020 at 1:06lkolko8,27199 gold badges4545 silver badges6262 bronze badgesAdd a comment|
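For reference, creating such a single-value secret from Python looks roughly like this (the value is a placeholder); the returned ARN is what goes into valueFrom above:
import boto3

secretsmanager = boto3.client('secretsmanager')

secret = secretsmanager.create_secret(
    Name='service/secretname',
    SecretString='SomeSecretValue',  # single plain-text value, not a JSON key/value pair
)
print(secret['ARN'])  # reference this ARN in the container definition's "valueFrom"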
|
I'm trying to automate the Cloudformation deployment of our fargate instances. I have cloudformation deploying successfully if i hard the environment variables entries but if i try to add as parameters, type string, it complains about it not being a string.here is the parameter"EnvVariables": {
"Description": "All environment Variables for Docker to run",
"Type": "String"
},In my task definition i have the following settings for the Container Definition"Environment": [
{
"Name": "JAVA_OPTS",
"Value": "-Djdbc.url=jdbc:dbdriver://xxxx.eu-west-1.rds.amazonaws.com:xxxx/xxxxxxxxx -Djdbc.user=xxxxx -Djdbc.password=xxxxx"
}
]If i enter the following into the parameter field via the gui"-Djdbc.url=jdbc:dbdriver://xxxx.eu-west-1.rds.amazonaws.com:xxxx/xxxxxxxxx -Djdbc.user=xxxxx -Djdbc.password=xxxxx"it complains about it not being a string.How do i edit this to be accepted as a parameter?
|
aws fargate adding a parameter for environment variables
|
As described here:https://docs.aws.amazon.com/cli/latest/reference/sns/opt-in-phone-number.htmlaws sns opt-in-phone-number --phone-number ###-###-####ShareFolloweditedJun 18, 2020 at 15:47TylerH21k7070 gold badges7878 silver badges104104 bronze badgesansweredJun 18, 2020 at 15:42Vadim LyakhovichVadim Lyakhovich5111 silver badge22 bronze badgesAdd a comment|
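The same thing from Python, if that is more convenient (the phone number is a placeholder in E.164 format); note that AWS only allows opting a number back in once every 30 days:
import boto3

sns = boto3.client('sns')
number = '+15555550100'  # placeholder, E.164 format

status = sns.check_if_phone_number_is_opted_out(phoneNumber=number)
if status['isOptedOut']:
    sns.opt_in_phone_number(phoneNumber=number)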
|
As part of some testing that I was doing, I replied STOP to an SMS message that was sent via Amazon's Pinpoint service. I received the acknowledgement that I had been removed from further notifications.I want to opt back in to receiving those messages, but I can't figure out how to do that. I looked at the Pinpoint documentation and I did not see a way to do it. I looked in the Amazon Pinpoint Console and I did not see a way to remove a number from the blacklist. I have tried the standard terms that other SMS providers use such as UNSTOP, UNBLOCK, and START, but none of those work either. Does anyone have any suggestions. I do not want to contact Amazon support about this unless I have to.
|
How can I opt back in to receive SMS messages via Amazon Pinpoint
|
Running on a different hosting will cause extra latency.Let's do the math on AWS RDS for the smallest instances (taking eu-west-1 region as example)Running on RDS: db.t2.micro $0.018 per hour, or $12.96 per month for RDS. Free the first year underAWS free tier.Running on EC2: t2.micro (You configure MySQL and backups, ...), $0.0126 per hour, or $9.07 per month. Free the first year underAWS free tierIf your application is small enough, you could host both your database and your application on the same machine (solution 2)ShareFollowansweredMay 24, 2018 at 14:53ThomasVdBergeThomasVdBerge7,81255 gold badges4646 silver badges6262 bronze badges4My plan is to use 1. wordpress on www.example.com for simple marketing to show my iOS app products. 2. api.example.com for my iOS applications to connect to API (lots of requests data). Do you think it is enough both of my options to your (solution 2)?–HotDudeSmithMay 24, 2018 at 15:02It all depends on the amount of visitors. It will be enough for a normal average site–ThomasVdBergeMay 24, 2018 at 19:36You could also consider using Amazon Lightsail. There is aWordpress configuration available for $5/monthand there's a free tier for that too, I think.–John RotensteinMay 25, 2018 at 0:0020$ for an "average site" sounds way too expensive–bieboebapMay 28, 2023 at 14:45Add a comment|
|
I have a WordPress application running on my EC2 instance on AWS. I haven't decided between Amazon RDS and my own database on different hosting. Which one is the cheapest to use? Let's say I have my own MySQL database from Lunarpages or Bluehost hosting, and I let my WordPress site on the EC2 instance connect remotely to that database instead of to Amazon RDS. I have heard people say that Amazon RDS is very expensive, so I thought that connecting WordPress to my own database rather than RDS might save costs, but I don't know whether that is true or how well it performs. Which one is the best option? Any suggestion appreciated. Thank you.
|
Which one is the cheapest to use AWS RDS or my own database?
|
I ended up setting up an elastic load balancer pointing to my single instance and then adding the web application firewall pointing to the load balancer. It works pretty well and doesn't cost too much more per month from AWS.ShareFollowansweredJul 24, 2018 at 19:24qwertyqwerty15533 silver badges99 bronze badges0Add a comment|
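If you are on the newer WAFv2 API, the association can also be done programmatically once the load balancer exists; a hedged boto3 sketch, where both ARNs are placeholders:
import boto3

wafv2 = boto3.client('wafv2')

wafv2.associate_web_acl(
    WebACLArn='arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/xxxx',  # placeholder
    ResourceArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/xxxx',  # placeholder
)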
|
I have a web app running on my Amazon EC2 instance. How can I integrate a Web Application Firewall with my EC2?I have tried setting up the WAF, but it can only be associated with either a CloudFront distribution or an Elastic Load Balancer. Do I need to setup a CloudFront distribution and point it at my EC2 instance?
|
Use a Web Application Firewall (WAF) with an EC2 instance
|
Lots of people go these emails yesterday. A few have opened up tickets for clarification, and I suspect there will be a followup email in the near future:https://www.reddit.com/r/aws/comments/7ndvli/anybody_get_spurious_aws_budgets_alarms_early_on/ShareFollowansweredJan 1, 2018 at 15:30E.J. BrennanE.J. Brennan46.2k88 gold badges9191 silver badges118118 bronze badgesAdd a comment|
|
AWS sent this email:Basically it says that I am using 1 CloudWatch alarm, and the forecast is 31.
The fact is that currently I am not using any AWS services; in fact, if I go to CloudWatch in each region this is the output:What else should I check?
Note the billing is at $0, of course
|
AWS email "You are approaching your AWS Free Tier limit" but not
|