Response | Instruction | Prompt
---|---|---|
I found the answer here. It's simply a matter of creating the new subdomain with a CNAME record pointing to one of AWS's servers (the servers are listed on that page). Since the domain's DNS is managed by Cloudflare, I created the subdomain with that CNAME and, with the config in my question above, it "just worked". SES handles the redirect itself, so there's no need to configure your own web server or set up any Page Rules on Cloudflare etc. Just make sure your emails include the SES custom header for your configuration set. | AWS SES has a new feature where the service will transform links to support open/click analytics. It's not clear from the docs how to handle your own subdomain. Trying this, it seems to transform into a URL where the original URL is escaped as the path, with "cr0" before it and a unique string after, e.g. https://example.com/123 is included in the email as https://mail-subdomain.example.com/cr0/http%3A//example.com/123/long-unique-id-string. But there's no info I can find about how to configure this domain, e.g. should the DNS be pointed to AWS servers using a CNAME? | Handling custom subdomain with AWS SES events
Yes -- when a message is sent to an Amazon SNS topic, all subscribers receive the message. If you wish to contact a specific subscriber, your code will have to contact them directly (e.g. via email). Amazon SNS also has the ability to send an SMS message to one or more recipients without using an SNS topic. So, if your desired recipients are on SMS, this is a simple task. | When I publish to a topic it hits ALL subscribers ALWAYS? I have a topic with several subscriptions, and sometimes I want to publish a message to just one of those subscriptions. Is there a way to do this, or do I need to create another topic and have the subscription in 2 topics? In that case I'm bugging the user (assuming this use case is to message users) twice, right? | Can I publish to a specific subscription in a SNS topic?
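As a rough illustration of the direct-SMS option mentioned in the answer above, here is a minimal boto3 sketch; the phone number, region, and message text are placeholders:

    import boto3

    # Publishing directly to a phone number bypasses the topic entirely,
    # so only this one recipient gets the message.
    sns = boto3.client("sns", region_name="us-east-1")

    response = sns.publish(
        PhoneNumber="+15555550100",  # hypothetical E.164 number
        Message="Hello from SNS, sent to a single recipient",
    )
    print(response["MessageId"])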
Not all objects are in a namespace. Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespace, while low-level resources, such as nodes and persistentVolumes, are not in any namespace (Source). The StorageClass is not a namespaced object.
Try to run
    $ kubectl get storageclass --all-namespaces
and you will notice that there is not even an indication of the namespace:
    NAMESPACE   NAME                 PROVISIONER
                slow                 kubernetes.io/gce-pd
                standard (default)   kubernetes.io/gce-pd
Therefore I had never paid attention, but I believe that if you delete a namespace nothing will happen to the StorageClass objects.
Update: I created, in a namespace "paolo", the following StorageClass:
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: slow
      namespace: paolo
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-standard
      zones: us-central1-a, us-central1-b
I didn't receive any error; I deleted the namespace "paolo" and, as expected, the StorageClass was still there. My test was performed on Google Cloud Platform. | I am new to Kubernetes, and have a question about it. When we create a statefulset, it gets associated with its PVC, and the PVC will be associated with a storageclass. So when we execute the command "kubectl delete namespace", should it delete the storageclasses also? P.S. The cluster is running on AWS. | Does kubectl delete namespace command deletes associated storageclasses also?
You can define many AWS::Lambda::EventSourceMapping resources for the same Lambda. For example:
    RegistrationRequestStreamMapping1:
      Type: AWS::Lambda::EventSourceMapping
      Properties:
        BatchSize: 70
        EventSourceArn:
          Ref: DDBStream1ARN
        FunctionName:
          Fn::GetAtt:
            - TestLambdaFunction
            - Arn
        StartingPosition: TRIM_HORIZON
        Enabled: True
    RegistrationRequestStreamMapping2:
      Type: AWS::Lambda::EventSourceMapping
      Properties:
        BatchSize: 70
        EventSourceArn:
          Ref: DDBStream2ARN
        FunctionName:
          Fn::GetAtt:
            - TestLambdaFunction
            - Arn
        StartingPosition: TRIM_HORIZON
        Enabled: True
 | Here's the sample CloudFormation syntax I am seeing in the AWS documentation for AWS::Lambda::EventSourceMapping:
    Type: "AWS::Lambda::EventSourceMapping"
    Properties:
      BatchSize: Integer
      Enabled: Boolean
      EventSourceArn: String
      FunctionName: String
      StartingPosition: String
Say I have a set of DDB Stream ARNs which I want to use as triggers of one Lambda function (instead of one ARN as trigger). I tried to define this relationship like this:
    Parameters:
      DDBStreamARN:
        Type: String
        Default: arn:aws:dynamodb:us-west-2:someId1
        AllowedValues:
          - arn:aws:dynamodb:us-west-2:someId1
          - arn:aws:dynamodb:us-west-2:someId2
          - ...
        Description: ARNs for the DDB Streams
    Resources:
      RegistrationRequestStreamMapping:
        Type: AWS::Lambda::EventSourceMapping
        Properties:
          BatchSize: 70
          EventSourceArn:
            Ref: DDBStreamARN
          FunctionName:
            Fn::GetAtt:
              - TestLambdaFunction
              - Arn
          StartingPosition: TRIM_HORIZON
          Enabled: True
But the syntax doesn't seem to work, since only the default value (arn:aws:dynamodb:us-west-2:someId1) is working as a trigger and the other ARNs won't trigger the Lambda function. Any suggestion on how to fix this? | Defining multiple ARNs which can trigger a Lambda function
Depending on how your website backend is implemented, you could store the file in a secure S3 bucket and read the contents from your application at runtime. (From the comments: granting the instance profile's IAM role read access to that S3 object lets the app fetch the key without the file ever being public; AWS Secrets Manager is another option.) | I am running an application on AWS Elastic Beanstalk and would like to use the Google Cloud Translation API. From what I can understand, the only option for authentication is a Google Service Account. https://cloud.google.com/video-intelligence/docs/common/auth From this link it says to store the json file securely. How do I do that but still have it accessible by my application? From what I know, everything in Elastic Beanstalk is published to the web. What is the best way to accomplish this? | Google Service Account key(json) and AWS Elastic Beanstalk
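A minimal boto3 sketch of the approach described above; the bucket and key names are placeholders, and the parsed dict would then be handed to whatever Google client library you use:

    import json
    import boto3

    # Fetch the service-account key from a private S3 bucket at startup.
    # The instance profile (IAM role) must allow s3:GetObject on this object.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-secure-config-bucket", Key="gcp/service-account.json")
    key_info = json.loads(obj["Body"].read())

    # key_info is now a plain dict with the Google service-account fields
    # (client_email, private_key, ...) to pass to the Google auth library.
    print(key_info["client_email"])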
Problem solved. You need to request the certificate in the same region as the load balancer, and then point the custom domain (the one used to request the certificate) to the load balancer using Route 53. | I am trying to set up HTTPS for my EC2 instance created from Elastic Beanstalk, using a certificate from AWS's ACM. According to this article https://colintoh.com/blog/configure-ssl-for-aws-elastic-beanstalk, I need to go to the EC2 panel/load balancer and add a new listener rule. My problem is that for the HTTPS load balancer protocol, when I try to add an SSL certificate and click "Choose a certificate from ACM (recommended)", there's no ACM certificate available for me. I know that I will have to request a new certificate for this load balancer address, but WHICH VERIFICATION METHOD SHOULD I USE? As far as I know, there are 2 ways to verify your domain ownership (Email or DNS). I guess email is not an option here because you cannot send an email to an "elb.amazonaws.com" address, but I'm not sure how to verify a certificate request by DNS. Also, I tried to paste the load balancer address xxx.xxx.elb.amazonaws.com into ACM to request a certificate for this address, but it says "invalid domain name". And if I were to add a custom domain name for my load balancer (for example, create an alias of api.example.com for the load balancer), how can I set up HTTPS for that custom domain of api.example.com? Thanks a lot! | AWS request ACM HTTPS certificate for load balancer
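As a hedged sketch of the DNS-validation flow discussed above (the domain name is a placeholder, and the request must be made in the load balancer's region):

    import boto3

    # Request a certificate for the custom domain with DNS validation,
    # in the same region as the load balancer.
    acm = boto3.client("acm", region_name="us-east-1")

    cert = acm.request_certificate(
        DomainName="api.example.com",
        ValidationMethod="DNS",
    )

    # describe_certificate returns the CNAME record ACM wants to see;
    # create that record in Route 53 to prove ownership of the domain.
    details = acm.describe_certificate(CertificateArn=cert["CertificateArn"])
    for option in details["Certificate"]["DomainValidationOptions"]:
        print(option.get("ResourceRecord"))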
I had the same issue. You can use the snippet below and add it to your Terraform script. Next, destroy your infra setup and try to recreate it; all your EC2 instances will be created with attached EBS volumes which are encrypted in turn.
    resource "aws_ebs_encryption_by_default" "enabled" {
      enabled = true
    }
 | If I launch an EC2, the root EBS will not be encrypted (not sure why; maybe because the EC2 is launched from a public AMI, but even when I create my own encrypted AMI, I cannot launch an EC2 from it...). Anyway, I know that I can encrypt an EBS from an existing EC2 this way: launch an EC2 first, then snapshot it; then copy the snapshot to a new snapshot and set the new snapshot as encrypted; then create a volume from the new snapshot and detach the EC2 from the existing un-encrypted volume; then attach the EC2 to the new encrypted volume and set the device as /dev/sda1. Finally, the EC2's EBS will become encrypted. As you see, the steps are complex. In fact, I need to create an EC2 that has its root EBS encrypted, using Terraform. The above steps seem complex and I do not know how to develop them using Terraform. My question is: how do I write Terraform code to encrypt EBS after launching an EC2? Any solution is OK, I just want to develop it using Terraform. If Terraform cannot do that, what other automation tool should I use? | How to encrypt EBS using Terraform after launching an EC2
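For reference, the same account-level default can also be flipped outside Terraform; this is a hedged boto3 sketch of the equivalent API call (it only affects volumes created afterwards, in the region where it is called):

    import boto3

    # Turn on EBS encryption by default for this account/region.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.enable_ebs_encryption_by_default()

    # Confirm the setting took effect.
    status = ec2.get_ebs_encryption_by_default()
    print(status["EbsEncryptionByDefault"])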
The problem is that you are not following the key name convention described in the Amazon S3 paragraph called Object Key Guidelines. Summary from the link: the following character sets are generally safe for use in key names:
    Alphanumeric characters [0-9a-zA-Z]
    Special characters !, -, _, ., *, ', (, and )
The following are examples of valid object key names:
    4my-organization
    my.great_photos-2014/jan/myvacation.jpg
    videos/2014/birthday/video1.wmv
Remove those special chars from the URL following the guideline and the problem will disappear. | Background: I'm using aws-android-sdk to send files from an Android app to S3. The filename contains special characters such as =. To do that, I use TransferUtility.upload(...) as explained in this guide. The problem: when passing a key containing special characters such as =, the key is being URL encoded. For example, the key year=2018/month=1/versions=1,2/my_file.txt becomes year%253D2018/month%253D1/versions%253D1%252C2/my_file.txt. My question: how can I upload an S3 file from my Android application while using special characters in its key? | Special characters is S3 key name of uploaded file
The steps that you would need, assuming the JSON data is in S3:
1. Create a Crawler in AWS Glue and let it create a schema in a catalog (database). The assumption is that you are a little familiar with AWS Glue.
2. Create a Glue job that transforms the JSON into your favorite format (parquet), using a transform step to flatten the data with the Relationalize class (https://aws.amazon.com/blogs/big-data/simplify-querying-nested-json-with-the-aws-glue-relationalize-transform/), and writes it out in parquet format.
3. Create a crawler for the new flattened data and create the table in AWS Glue.
4. Use Athena or AWS QuickSight or your favorite BI tool to query the parquet data.
 | Trying to flatten input JSON data having two map/dictionary fields (custom_event1 and custom_event2), which may contain any key-value pair data. In order to create an output table from the data frame, I will have to avoid flattening custom_events and store them as JSON strings in the column. Following this doc, Relationalize.apply is flattening the custom_events map also. Sample JSON:
{
"id": "sklfsdfskdlfsdfsdfkhsdfssdf",
"idtype": "cookieId",
"event": "install",
"sub_event": null,
"ip": "XXXXXX",
"geo": {
"country": "IN",
"city": null,
"region": null
},
"carrier": {
"operator": null,
"network": null,
"connection_type": null
},
"user_agent": "Mozilla/5.0",
"device": {
"brand": "LYF",
"model": null,
"type": null
},
"package": {
"pkgName": "XXXXXXXX",
"pkgVersion": "1.5.6.3",
"pkgRating": null,
"timestamp": "2017-12-14 11:51:27"
},
"custom_event1": {
"key1": "value1",
"key2": "value2"
},
"custom_event2": {
"key": "value"
}
    }
How to store JSON data with a dynamic map field in relational storage? | How to relationalize a JSON to flat structure in AWS Glue
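A rough PySpark sketch of the Relationalize step from the answer above, inside a Glue job; the database, table, and S3 paths are placeholders, and keeping custom_event1/custom_event2 as JSON strings would need an extra mapping step that is not shown:

    from awsglue.context import GlueContext
    from awsglue.transforms import Relationalize
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the crawled table as a DynamicFrame.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="events_db", table_name="raw_events"
    )

    # Flatten nested structures; Relationalize returns a collection of frames.
    flattened = Relationalize.apply(
        frame=dyf, staging_path="s3://my-bucket/glue-staging/", name="root"
    )
    root = flattened.select("root")

    # Write the flattened root frame out as parquet.
    glue_context.write_dynamic_frame.from_options(
        frame=root,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/flattened/"},
        format="parquet",
    )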
I think you have the key and the keyId swapped: AWS_ACCESS_KEY is the equivalent of the keyId, and AWS_SECRET_KEY is the key. (A commenter hit the same issue in a Java config file: new BasicAWSCredentials(secretKey, accessKey) is wrong, new BasicAWSCredentials(accessKey, secretKey) is correct.) | I have tried to follow the docs to get started with loopback 3.x and AWS.
"statusCode": 400,
"name": "AuthorizationHeaderMalformed",
"message": "The authorization header is malformed; the Credential is mal-formed; expecting \"<YOUR-AKID>/YYYYMMDD/REGION/SERVICE/aws4_request\".",
"code": "AuthorizationHeaderMalformed"
}
}I have foundthisBut i have no idea what is going on there.have someone solved this issue ? | HOW TO Loopback storage setup - AWS S3 |
As @stdunbar mentioned, this is not an AWS issue but how SSL wildcards work. For example, in this case dev-360yield-admin.mydomain.com should work, but for dev.360yield.admin.mydomain.com you would need a cert for *.360yield.admin.mydomain.com. | I created an X.509 certificate using the AWS Certificate Manager. I used a wildcard designation, *.mydomain.com, and validated it using the AWS DNS. I then attached it to my Elastic Load Balancer (ELB) along with the instances running my web service. I then set up a CNAME record in my AWS DNS where the alias name is dev.360yield.admin.mydomain.com and points to the canonical DNS name of the ELB. I get the "Not secure" notice in the address bar when I use the alias name. The error is the same as if I was using self-signed certificates. I thought that if I used AWS-created certificates I would not get this error. Are my assumptions incorrect? Did I do something wrong with the setup of the certificate? | AWS Certificates Not Secure In Browser
You won't see the Security Credentials tab when you list the users. To configure the columns, click on the Manage Columns icon at the top right, then select the columns you want displayed. To see the Security Credentials tab, you need to click on the highlighted name of the user. | Trying to reset my IAM access key. Per the Amazon documentation, there is supposed to be a Security Credentials tab after selecting the user, but it does not show. Here are the steps I'm performing:
1. Sign in to the AWS console and open the IAM console.
2. In the navigation pane, choose Users.
3. Choose the name of the desired user, and then choose the Security Credentials tab.
After selecting the user, there is no security tab. My user has the following permissions policy (AdministratorAccess policy):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}Screenshot after selecting user: | Missing Security Credentials tab in IAM console |
AfterInstall: the AfterInstall script contains the tasks that need to be executed after installing the application. Example of a BeforeInstall script:
    #!/bin/bash
    cd /home/ubuntu/production/weone-backend
    sudo chown -R ubuntu:ubuntu /home/ubuntu/production
    sudo NODE_ENV=production nohup nodejs app.js > /dev/null 2> /dev/null < /dev/null &
In the above script, we are changing the ownership of our application folder and starting the application process. Note: use "/dev/null 2> /dev/null < /dev/null &" to get out of the nohup shell automatically, else your CodeDeploy would get stuck at the AfterInstall event. https://www.oodlestechnologies.com/blogs/AWS-CodeDeploy | I am using CodeDeploy to deploy my applications to EC2 instances created by an Auto Scaling Group. The applications deploy fine and are moved to their correct file-mapped locations, however my AfterInstallation script is never executed. I can see in the logs that it tries to make the script executable and passes that stage, but it never gets executed. Here is my appspec.yml:
    version: 0.0
    os: linux
    files:
      - source: /
        destination: /opt/lobby
    hooks:
      AfterInstall:
        - location: bootstrap.sh
          timeout: 30
          runas: root
Here is the script:
    #!/bin/bash
    nohup nodejs lobby.js &
    echo "finished"
I do not see the echo printed, nor do I see any processes running related to lobby.js. I can verify that this script works by typing ./bootstrap.sh after the deployment and this works fine. This hook should do this for me though. This needs to be a background task as I will be running multiple applications in this script, but only one is displayed now just to get it working. Edit: I have referred to http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server and tried replacing AfterInstall with ValidateService and AfterAllowTraffic (but I'm not using a LoadBalancer). Question: why is my AfterInstallation script not getting called, or so it seems? | Why won't my AWS CodeDeploy run my script after deployment?
Convert each quad to binary...
    $ dc -e '2o 10p 0p 96p 32p' | xargs printf '%08d\n'
    00001010
    00000000
    01100000
    00100000
Now you've got the IP address as a binary number:
    00001010000000000110000000100000
    |       |       |       |
The first 18 bits of that represent the network for this IP address...
    00001010000000000100000000000000
    ******************--------------
Which, if you convert back to dotted quad notation, looks like this:
    $ dc -e '2i 00001010 p 00000000 p 01000000 p 00000000 p'
    10
    0
    64
    0
So, 10.0.64.0/18. You can also calculate your broadcast:
    $ dc -e '2i 00001010 p 00000000 p 01111111 p 11111111 p'
    10
    0
    127
    255
Or, 10.0.127.255/18. And the network 10.0.96.0/28 is certainly within this range. | Wondering how does this overlap? It seems like it comes afterward. "CIDR block 10.0.96.32/18 overlaps with pre-existing CIDR block 10.0.96.0/28 from subnet-3fa92058." | CIDR block 10.0.96.32/18 overlaps with pre-existing CIDR block 10.0.96.0/28 from subnet
You can set the name of your managed instances by setting the Name tag on the instance. At the moment you have to use the AWS CLI or the AWS PowerShell tools to do this, but once done you'll see the name in the console. You can find the AWS CLI documentation here: http://docs.aws.amazon.com/cli/latest/reference/ssm/add-tags-to-resource.html
    > aws ssm add-tags-to-resource --resource-type ManagedInstance --resource-id <your managed instance id> --tags Key=Name,Value=<instance name>
Hope this helps. (From the comments: for regular EC2 instances, adding the Name tag to the instance in the EC2 console works.) | I have been setting up Amazon EC2 Systems Manager in order to manage our Windows patch management setup. All looking good so far, as we can get the on-premises servers listed in the console using the activation. I have activated the following servers on the same activation (without entering a name as part of the activation). As I now have around 5 managed instances on there, they have no name. In the MI section I can see the computer name, but when it comes to the run command or anything else I am only able to see the instance ID. (Screenshot: AWS EC2 Systems Manager - Managed Instances, need to be able to set the name.) How can I go back and update the name for these managed instances? I'd rather not have to add each server on a separate individual activation again. Thanks | Change/Add Name of Managed Instance after Activation in Systems Manager
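A boto3 equivalent of the CLI call in the answer above would look roughly like this (the region, managed instance ID, and name are placeholders):

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Tag the managed (on-premises) instance; the Name tag is what the
    # console displays for the instance.
    ssm.add_tags_to_resource(
        ResourceType="ManagedInstance",
        ResourceId="mi-0123456789abcdef0",
        Tags=[{"Key": "Name", "Value": "my-on-prem-server"}],
    )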
Every datapoint has a unit, and that unit is set when the datapoint is published. If the unit is not set, it defaults to None. You can't change the unit when graphing or when fetching the data via the API; graphs and APIs simply return the unit that is set on the datapoints. Also, CloudWatch won't scale your data based on unit: if you have a datapoint with a value of 1200 milliseconds, for example, and you request this metric in seconds, you will get no data; CloudWatch won't scale your data and return 1.2 seconds as one might expect. So it looks like CloudWatch Logs publishes data with unit equal to Count, and I couldn't find a way to have it publish data with any other unit. If it were a metric you publish yourself, you could just start publishing it with the desired unit; since this one is published by CloudWatch Logs, the workaround is a Lambda function that reads the metric and re-publishes it under a different name with the desired unit. | I have a CloudWatch dashboard with a set of widgets. All the widgets have graphs/line charts based on custom metrics. I defined these custom metrics from metric filters defined on the CloudWatch log group. For every custom metric, I want to set the unit to, for example, milliseconds, seconds, hours etc. The CloudWatch console somehow shows all the metric units to be counts only. Can we not modify the CloudWatch metric unit to be different than count? If not possible from the console, is it possible through the API? | CloudWatch set unit for a custom metric
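As a hedged sketch of the re-publish idea mentioned above, publishing a custom metric with an explicit unit via boto3 might look like this (namespace, metric name, and region are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # The Unit is fixed at publish time; consumers will see Milliseconds,
    # not Count, for datapoints published this way.
    cloudwatch.put_metric_data(
        Namespace="MyApp/Latency",
        MetricData=[
            {
                "MetricName": "RequestLatency",
                "Value": 1200,
                "Unit": "Milliseconds",
            }
        ],
    )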
Classifiers only analyze the data within the file, not the filename itself. What you want to do is not possible today. If you can change the path where the files land, you could add the date as another partition: s3://my-bucket/id=10001/source=fromage/timestamp=2017-10-10/data-file-2017-10-10.json (Note from the comments: it is the crawler, not the classifier, that partitions the data based on the file prefix.) | What I am trying to do is crawl data in an S3 bucket with AWS Glue. The data is stored as nested JSON and the path looks like this: s3://my-bucket/some_id/some_subfolder/datetime.json. When running the default crawler (no custom classifiers) it does partition the data based on the path and deserializes the JSON as expected; however, I would like to get a timestamp from the file name as well, in a separate field. For now the crawler omits it. For example, if I run the crawler on s3://my-bucket/10001/fromage/2017-10-10.json I get a table schema like this: Partition 1: 10001, Partition 2: fromage, Array: JSON data. I did try to add a custom classifier based on a Grok pattern: %{INT:id}/%{WORD:source}/%{TIMESTAMP_ISO8601:timestamp}. However, whenever I re-run the crawler it skips the custom classifier and uses the default JSON one. As a solution I could obviously append the file name to the JSON itself before running the crawler, but I was wondering if I can avoid this step? | AWS Glue custom crawler based on file name
Your problem may be caused by the fact that ECR credentials work only for 12 hours, so maybe you are trying to use expired credentials. I recommend you have a look at upmc-enterprises/registry-creds. This tool can be installed on your cluster and automatically refreshes ECR/GCR credentials before they expire. | I am trying to pull an image from an ECR repository inside a Kubernetes cluster, but I am not able to do this. I tried creating a secret and updating it in the pod file, but I am still getting the error "no basic auth credentials". Please can anyone give me step-by-step instructions to pull an image from an ECR repository inside a Kubernetes cluster? | Unable to use the image ( in ECR ) in K8S cluster
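For reference, this is roughly how the short-lived ECR credentials are obtained with boto3; the decoded token is what ends up in the Kubernetes image-pull secret (a sketch, not a full secret-refresh implementation):

    import base64
    import boto3

    ecr = boto3.client("ecr", region_name="us-east-1")

    # The token is valid for 12 hours, which is why a refresher such as
    # registry-creds is needed for long-lived clusters.
    auth = ecr.get_authorization_token()["authorizationData"][0]
    username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
    registry = auth["proxyEndpoint"]
    print(registry, username)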
It depends on your Spring Boot configuration; by default the health endpoint is mapped to /actuator/health (Part V. Spring Boot Actuator: Production-ready features - Endpoints). So your configuration on the load balancer should be something like this:
    HTTP code: 200
    Path: /actuator/health
And if by any chance you have already set the property server.servlet.context-path, then you have to prefix that to the path. The response codes for the different states of your app are:
    DOWN: SERVICE_UNAVAILABLE (503)
    OUT_OF_SERVICE: SERVICE_UNAVAILABLE (503)
    UP: no mapping by default, so HTTP status is 200
    UNKNOWN: no mapping by default, so HTTP status is 200
You can read the details in the Spring Boot Actuator documentation. | I have deployed my fat jar on Elastic Beanstalk OK; it is listening on port 5000 and is successfully connecting to an RDS MySQL instance on port 3306. However, when I try to hit my API I get a 503 "backend server overloaded" error. I looked it up and it seems the cause is that the health check is failing. Locally I can check system health with localhost:5000/health provided by Spring Boot Actuator, but when I set /health as the endpoint for the load balancer health check, it fails. Since I don't have any 'healthy instances' running, as they fail the health check, the server is unavailable to REST requests. Anyone know how to get the load balancer to successfully ping the app for a health check? | spring boot application on elastic beanstalk - health check fails
You need to create CA first:SERVER_NAME=fred
DOMAIN_NAME=domain.local
export $SERVER_NAME $DOMAIN_NAME
openssl genrsa -out CA_$SERVER_NAME.$DOMAIN_NAME.key 2048
openssl req -x509 -new -nodes -key CA_$SERVER_NAME.$DOMAIN_NAME.key -sha256 -days 1024 -out CA_$SERVER_NAME.$DOMAIN_NAME.pem -subj "/C=GB/ST=MyCounty/L=MyTown/O=MyOrganisation/OU=MyOrganisationUnit/CN=$SERVER_NAME.$DOMAIN_NAMEThen you can create certificates signed from the CA you just created.openssl genrsa -out $SERVER_NAME.$DOMAIN_NAME.key 2048
openssl req -new -key $SERVER_NAME.$DOMAIN_NAME.key -out $SERVER_NAME.$DOMAIN_NAME.csr -subj "/C=GB/ST=MyCounty/L=MyTown/O=MyOrganisation/OU=MyOrganisationUnit/CN=$SERVER_NAME.$DOMAIN_NAME.client"
openssl x509 -req -in $SERVER_NAME.$DOMAIN_NAME.csr -CA CA_$SERVER_NAME.$DOMAIN_NAME.pem -CAkey CA_$SERVER_NAME.$DOMAIN_NAME.key -CAcreateserial -out $SERVER_NAME.$DOMAIN_NAME.crt -days 365 -sha256Now you have a CA and a certificate created, you can test that the certificate is created from the CA by running:openssl verify -CAfile CA_fred.domain.local.pem fred.domain.local.crtShareFollowansweredMar 20, 2018 at 13:21JonJon3122 bronze badges1Note for future readers: if the client is MacOS, you will have to convert the cert and private key to .p12, otherwise MacOS will only load the certificate but not the private key.–Gabor LengyelMar 19, 2021 at 17:51Add a comment| | I am trying to implement Allow only trusted devices feature on AWS Workspaces with simple AD.Can someone please guide me how to generate self-signed root & client certificate with following features.Certificates must be Base64-encoded certificate files in CRT, CERT, or PEM format.
Certificates must include a Common Name.
The maximum length of certificate chain supported is 4.
Amazon WorkSpaces does not currently support device revocation mechanisms, such as certificate revocation lists (CRL) or Online Certificate Status Protocol (OCSP), for client certificates.
Use a strong encryption algorithm. We recommend SHA256 with RSA, SHA256 with CEDSA, SHA381 with CEDSA, or SHA512 with CEDSA. | AWS WorkSpace - allow only trusted devices with certificate authentication |
I think you're missinghostnamein requestOptions.
Correct:request(aws4.sign({
hostname: 'test.amazonAPI.com',
service: 'execute-api',
region: 'us-east-1',
method: 'POST',
url: 'https://test.amazonAPI.com/test/doThing', // this field is not recommended in the document.
body: load
},
{
accessKeyId: tempCreds.Credentials.AccessKeyId,
secretAccessKey: tempCreds.Credentials.SecretAccessKey,
sessionToken: tempCreds.Credentials.SessionToken
}))Reference:https://github.com/mhart/aws4ShareFollowansweredOct 10, 2018 at 9:26Hieu NguyenHieu Nguyen48144 silver badges1111 bronze badgesAdd a comment| | I'm trying to make a call to a private Amazon API with Javascript using the aws4 package, but I can't get it to work. I'm able to do the call successfully with Postman, but I'm trying to get it to work with code, and I'm failing.Here is the postman screenshot:And here is the code that is trying to replicate this:request(aws4.sign({
service: 'execute-api',
region: 'us-east-1',
method: 'POST',
url: 'https://test.amazonAPI.com/test/doThing',
body: load
},
{
accessKeyId: tempCreds.Credentials.AccessKeyId,
secretAccessKey: tempCreds.Credentials.SecretAccessKey,
sessionToken: tempCreds.Credentials.SessionToken
}))And the error I'm currently getting:Error: getaddrinfo ENOTFOUND execute-api.us-east-1.amazonaws.com execute-api.us-east-1.amazonaws.com:443
at errnoException (dns.js:53:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:95:26) | AWS4 signature in Node.js |
The error is here:.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("XXXX.us-west-2.rds.amazonaws.com:3306", "us-west-2"))You're trying to connect to your actual RDSinstancewith the SDK, and that isn't the correct approach. You need to connect to theRDS Query API Endpoint. These requests are sent through theservice.You should be able to simply use.withRegion(), and not have to actually supply the endpoint URL, since the endpoint is the same for all RDS instances within a region, and the default regional URLs are coded into the SDK.ShareFollowansweredOct 16, 2017 at 18:20Michael - sqlbotMichael - sqlbot174k2727 gold badges367367 silver badges440440 bronze badgesAdd a comment| | Please refer the following code which I am using to list log files available in RDS ( Mysql )AWSCredentials credentials = new BasicAWSCredentials("XXX", "XXX");
AWSCredentialsProvider provider = new StaticCredentialsProvider(credentials);
AmazonRDS rdsClient = AmazonRDSClientBuilder.standard()
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("XXXX.us-west-2.rds.amazonaws.com:3306", "us-west-2"))
.withCredentials(provider)
.build();
DescribeDBLogFilesRequest request = new DescribeDBLogFilesRequest();
DescribeDBLogFilesResult response = rdsClient.describeDBLogFiles(request);
List<DescribeDBLogFilesDetails> listOfFiles = response.getDescribeDBLogFiles();
System.out.println(listOfFiles.toString());
System.out.println("Program done");Here is my pom.xml dependencies :<!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-rds -->
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-rds</artifactId>
<version>1.11.101</version>
</dependency>I am facing exception when using the above code. | RDS : Unable to execute HTTP request: Unrecognized SSL message, plaintext connection? |
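The same log-file listing can be done in Python; as the answer notes, the SDK resolves the regional RDS API endpoint itself, so you do not pass the database's own hostname and port (the instance identifier and region below are placeholders):

    import boto3

    # boto3 talks to the regional RDS service endpoint, not the MySQL port.
    rds = boto3.client("rds", region_name="us-west-2")

    response = rds.describe_db_log_files(DBInstanceIdentifier="my-db-instance")
    for log in response["DescribeDBLogFiles"]:
        print(log["LogFileName"], log["Size"])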
Is the "Artifacts packaging" configuration set to "None" for your CodeBuild project? Changing the packaging to "Zip" will probably be much faster, so that the build doesn't take time uploading each individual node module file to S3. Instead with zip packaging, CodeBuild will zip everything up and upload one zip file to S3.This page contains instructions on where to find the packaging setting in the CodeBuild console:http://docs.aws.amazon.com/codebuild/latest/userguide/change-project.htmlShareFollowansweredOct 4, 2017 at 19:12Clare LiguoriClare Liguori1,6141212 silver badges1111 bronze badges0Add a comment| | I built a CodeBuild project for a fairly simple build pipeline. I am building a NodeJS project. My buildspec is pretty simple:version: 0.2
env:
variables:
ENVIRNOMENT: "AWSDEV"
phases:
pre_build:
commands:
- npm install
build:
commands:
- npm run -s build
artifacts:
files:
- src/dist/**/*
- node_modules/**/*
discard-paths: noThe npm run build step simply uses Babel to transpile the code into the src/dist directory. I'm running a build and it's been 37 minutes and it's still building, on the step UPLOAD_ARTIFACTS. I can see the artifacts being added to the S3 bucket so it's presumably actually still doing stuff.Is there anything I can do to improve this build process? This should be a short, 5 minute at most task I would think. Am I doing something wrong by uploading node_modules to the S3 bucket?What is the best configuration for a Node project? | AWS CodeBuild taking FOREVER on transfer to S3 step |
Turns out the problem was anebextensionscript, did not realize they were checked at that stage, but I had configured a larger root disk, and in there it referred to /dev/sda1Resources:
AWSEBAutoScalingLaunchConfiguration:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
BlockDeviceMappings:
- DeviceName: /dev/sda1
Ebs:
VolumeSize:
35changing it accordingly fixed the issue:Resources:
AWSEBAutoScalingLaunchConfiguration:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize:
35ShareFollowansweredSep 19, 2017 at 16:33Paul TaylorPaul Taylor13.6k4848 gold badges200200 silver badges372372 bronze badgesAdd a comment| | I have an Elastic Beanstalk application using anm3.xlargeEC2 instance.I wanted to try out usingm4.xlargeinstead so I cloned my EB instance. Then once it was running I clicked onChange Configurationand changed theInstance Typetom4.xlargebut then this gives the following errorInvalid root device name: '/dev/sda1', expecting: '/dev/xvda'.Why is this error occurring ?I have found this articlehttp://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.htmlwhich gives some background information but I still dont know what to about this error. | AWS:Device Name Error when converting from Elastic Beanstalk instance from m3.xlarge to m4.xlarge |
Here is how I would solve,Steps:Create aCodeCommit Triggerunder AWS CodeCommit RepositoryListenon EC2 on Jenkins or a node express or any http appGetall latest commitsfrom repoaws s3 sync . s3://bucketnameThis is the fastest backup I can think of.For Automatic Repository Creation, you can use list-repositories,http://docs.aws.amazon.com/cli/latest/reference/codecommit/list-repositories.htmlif repo does not exist already, clone a new one or update the existing one.You can also do git export to a single file and back that file with a versioning enabled on S3. This will increase backup time everytime it runs.ShareFollowansweredSep 20, 2017 at 2:49KannaiyanKannaiyan12.8k44 gold badges4848 silver badges8686 bronze badges2I think you can also try to use triggers pointing to AWS Lambda function which can be configured per repository.–SebastianSep 24, 2017 at 10:40I have not seen that you can do common trigger for all repo. It is per repository only.–KannaiyanSep 24, 2017 at 17:43Add a comment| | I am currently working on a task which should take a backup of all AWS Codecommit repositories (around 60 repositories at the moment) and place them in an S3 bucket located in another AWS account.I have googled it to find out the possibilities around this but found nothing that best suites my requirement.1.)Considered using Code Pipeline:We can configure AWS CodePipeline to use a branch in an AWS CodeCommit
repository as the source stage for our code. In this way, when you make
changes to your selected branch in CodePipeline, an archive of the
repository at the tip of that branch will be delivered to your CodePipeline
bucket.
But, I had to neglect this option as it could be applied only to a
particular branch of a repository whereas I want a backup for 60
repositories all at a time.2.)Considered doing it using simple git command which clones the git
repositories, placing the cloned stuff into a folder and sending them to S3
bucket in another account.I had to neglect this because it complicates my process when a new git
repository is created where I need to manually go to AWS account and get the
url of that repo to clone.So, I want to know if there is a good option to automatically backup Codecommit repositories in S3 located in a different AWS account. If something in any of the repos changes, it should automatically trigger that changed part and move it to S3. | Trying to have a backup of Codecommit repos in S3 bucket of another AWS account |
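A rough boto3 sketch of the automatic repository discovery the answer mentions (list-repositories), so newly created repositories are picked up without manual steps; cloning and the S3 sync itself are left out:

    import boto3

    codecommit = boto3.client("codecommit", region_name="us-east-1")

    # Enumerate every repository so newly created ones are backed up too.
    paginator = codecommit.get_paginator("list_repositories")
    for page in paginator.paginate():
        for repo in page["repositories"]:
            info = codecommit.get_repository(repositoryName=repo["repositoryName"])
            print(info["repositoryMetadata"]["cloneUrlHttp"])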
What you missed is setting the session variable and calling resource on that session instance.import boto3
session = boto3.session.Session(profile_name='Credentials')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
print(bucket.name)Also verify the string 'Credentials' exactly matches the [Credentials] in your ~/.aws/credentialsShareFollowansweredSep 15, 2017 at 0:43mostafazhmostafazh4,19011 gold badge2121 silver badges2626 bronze badges3That helped but now I get a message that I must specify a region. I have a config file in the .aws directory with the default region set to us-east-1. I know I can make region part of the session but is there anyway to set a default region?–Dennis M. GraySep 15, 2017 at 2:37in your~/.aws/credentialsadd a new line under your profile (in your case [Credentials]) that looks like that:region = us-east-1this would save you having to set that region variable in boto3.–mostafazhSep 15, 2017 at 11:14I thought the region was specified in the ~/.aws/config file–Dennis M. GraySep 16, 2017 at 4:05Add a comment| | I'm just getting started with boto3 and tried the following code:import boto3
boto3.session.Session(profile_name='Credentials')
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
print(bucket.name)If I name the section in ~/.aws/credentials [default], it works fine but if I name it something else, like [Credentials] and specify the profile_name as I did, it fails withbotocore.exceptions.NoCredentialsError: Unable to locate credentialsI want to be able to specify different profiles in the credentials file but I can't get past this error. Some people have answered this question saying that the section must be [default] but that cannot be right. | boto3 format and location of credentials file |
After discussing with AWS Support, it is confirmed that it is safe to delete those intermediate files after a 24-hour period (or after the max retry time). A lifecycle rule with automatic deletion on the S3 bucket should fix the issue. Hope it helps. | AWS Firehose uses S3 as intermediate storage before the data is copied to Redshift. Once the data is transferred to Redshift, how do I clean it up automatically if it succeeds? I deleted those files manually, and Firehose went into an error state complaining that files got deleted; I had to delete and recreate the Firehose to resume. Will deleting those files after 7 days with S3 rules work, or is there any automated way that Firehose can delete the successfully loaded files that got moved to Redshift? | How to clean up S3 files that is used by AWS Firehose after loading the files?
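A hedged boto3 sketch of the lifecycle rule suggested above; the bucket name, prefix, and retention period are placeholders, and the same rule can be configured in the S3 console:

    import boto3

    s3 = boto3.client("s3")

    # Expire Firehose's intermediate objects automatically after 7 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-firehose-staging-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-firehose-intermediate-files",
                    "Filter": {"Prefix": ""},
                    "Status": "Enabled",
                    "Expiration": {"Days": 7},
                }
            ]
        },
    )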
There are documents, but not in terraform.For dimensions, aws has all documents at here:http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.htmlIf you need to find out the dimensions for instances (EC2), the document is here:http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.htmlIn the last part, the document mentions there are 4 dimensions you can use, and seems you found it out already.AutoScalingGroupName
ImageId
InstanceId
InstanceTypeShareFolloweditedSep 17, 2017 at 3:53answeredSep 15, 2017 at 0:32BMWBMW44k1313 gold badges100100 silver badges117117 bronze badgesAdd a comment| | TheTerraform documentationcovers cloudwatch alarms in the context of autoscaling groups, but not individual instances.resource "aws_cloudwatch_metric_alarm" "foobar" {
alarm_name = "terraform-test-foobar5"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "CPUUtilization"
...
dimensions {
InstanceId = "${aws_instance.myOrg-myHost.id}"
}
alarm_description = "This metric monitors ec2 cpu utilization"
#This would be for autoscaling
#alarm_actions = ["${aws_autoscaling_policy.bat.arn}"]
}I'm guessing it will be something like:alarm_actions = ["arn:aws:sns:us-east-1:111122223333:MyTopic"] | Using Terraform to create a cloudwatch alert (metric_alarm). How can I use alarm_actions for an individual host? |
Whether or not you provide SSH access, it'll always be possible for your users to mount the root EBS volume of your AMI on another EC2 instance to investigate its contents, so disabling SSH or making certain files unreadable for an SSH user doesn't help you in this regard. Instead of trying to keep users away from your source code, I suggest you simply state clearly in the terms of service what users are allowed to do with it and what not. Even large companies provide OS images which contain the source code of their applications (whenever they use a scripting language) in clear form or just slightly obfuscated. | I'm planning to start a small business and submit a Linux AMI to Amazon's AWS Marketplace. As I'm reading the seller's guide, I see this: "AMIs MUST allow OS-level administration capabilities to allow for compliance requirements, vulnerability updates and log file access. For Linux-based AMIs this is through SSH." (6.2.2) How can I protect my source code if anyone who uses my product can SSH to the machine and poke around? Can I lock down certain folders yet still allow "OS-level administration"? Here is a bit of context if needed: I'm using Ubuntu Server 16.04 LTS (HVM), SSD Volume Type (ami-cd0f5cb6) as my base AMI. I'm provisioning a slightly modified MySQL database that I want my customers to be able to access; this is their primary way of interacting with my service. I'm building a Django web service that will come packaged on the AMI; this is what I'd like to lock down and prevent access to. | Securing Folder on EC2 Amazon Marketplace AMI
The configuration files keys are specified in this page:http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.htmlConfiguration files support the following keys that affect the Linux server your application runs on.Keys:PackagesGroupsUsersSourcesFilesCommandsServicesContainer CommandsKeys are processed in the order that they are listed above.So, in your case, you have to write your commands inside acommandskey. Your file will look like that:commands:
01_remove_old_cron_jobs:
command: "sudo cp enable_mod_pagespeed.conf /etc/httpd/conf.d"
02_remove_old_cron_jobs:
command: "sudo rpm -U -iv --replacepkgs mod-pagespeed.rpm"
03_remove_old_cron_jobs:
command: "sudo touch /var/cache/mod_pagespeed/cache.flush"The complete syntax for commands you can find here:http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commandsShareFolloweditedJun 20, 2020 at 9:12CommunityBot111 silver badgeansweredSep 4, 2017 at 13:22JundiaiusJundiaius6,92944 gold badges3333 silver badges4444 bronze badgesAdd a comment| | I have the following config file:packages:
yum:
at: []
01_remove_old_cron_jobs:
command: "sudo cp enable_mod_pagespeed.conf /etc/httpd/conf.d"
02_remove_old_cron_jobs:
command: "sudo rpm -U -iv --replacepkgs mod-pagespeed.rpm"
03_remove_old_cron_jobs:
command: "sudo touch /var/cache/mod_pagespeed/cache.flush"Labeled01.config. When I deploy this to my server, I get an error such as:Error processing file (Skipping): '.ebextensions/01.config' - Contains invalid key: '02_remove_old_cron_jobs'. For information about valid keys, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.htmlHowever, the documentation contains no information about valid keys, and this key looks similar to my other keys. | AWS EB .ebextensions - Contains invalid key |
The type is specified incorrectly. Try this:"DatabaseSubnets": {
"Description": "The subnets to place database instances in.",
"Type": "AWS::EC2::Subnet::Id"
"Default" : ""
},ShareFollowansweredAug 28, 2017 at 14:14helloVhelloV51.2k77 gold badges139139 silver badges148148 bronze badges1The type is a list of subnets, because I need at least 2 subnets for an RDS.–sashoalmAug 29, 2017 at 6:36Add a comment| | I have a CloudFormation script that creates an RDS instance and asks for a Subnet Group. But instead of making the user specify the subnets one by one and create a new group, I want to select an existing group.Right now I have this for the subnet group:"DatabaseSubnets": {
"Description": "The subnets to place database instances in.",
"Type": "List<AWS::EC2::Subnet::Id>"
},
....
"DatabaseSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "CloudFormation managed DB subnet group.",
"SubnetIds": {
"Ref": "DatabaseSubnets"
}
}
},How do I change it to select an existing group? Should I replace the"Properties"group with aRef? | Ask for existing subnet group in AWS CloudFormation script |
No, there is no way to do this without compromising security currently. The chosen way would be a proxy service like you mentioned. | Is it possible to get user attributes of any AWS Cognito user from a client which is unauthenticated or authenticated as another (non-admin) user? According to this post it is not possible. This seems like an awful limitation for user management as a service. Is the only viable solution to build a proxy service that will use AdminGetUser to achieve this? | AWS Cognito: get user attributes of any unauthenticated user
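If you do build the proxy service the answer refers to, the server-side lookup it would wrap is roughly this (user pool ID, username, and region are placeholders; the caller needs IAM permission for cognito-idp:AdminGetUser):

    import boto3

    cognito = boto3.client("cognito-idp", region_name="us-east-1")

    # Admin-level lookup of another user's attributes; only callable with
    # AWS credentials, never directly from an end-user client.
    response = cognito.admin_get_user(
        UserPoolId="us-east-1_EXAMPLE",
        Username="some-user",
    )
    for attribute in response["UserAttributes"]:
        print(attribute["Name"], attribute["Value"])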
Looks like it is a special skill type called a Flash Briefing skill. Though a feed skill still renders as a standard card, Amazon has not made embedding a link in the card response of a custom skill available to the public. Check the documentation (Alexa documentation): in the table there is something called display Url, which will be rendered as the "read more" link in the card. | I have seen a few apps which are able to embed links in the response of a Standard card in Alexa, for example the skill https://www.amazon.co.uk/Guardian-News-Media-The-US/dp/B01N5HQRUC. They are able to show the links in the standard card response. The documentation of the Alexa skills does not say anything about URLs in the card response. | How to embed 'read-more' like url in Standard Card response of Alexa
There is no such feature available, nor will it be available in the future. Let me explain why: there is little sense in backing up snapshots to Glacier. EBS snapshots are incremental, which means every snapshot has a dependency on many previously created snapshots; it points to data stored in those previously taken snapshots. So even if you find a way to save EBS snapshots in Glacier, it would be a hell of a task to retrieve the data and restore it to make it usable for backup purposes. Glacier is perfect for cost optimization when saving data such as files and retrieving them at a later stage, but with snapshots it doesn't work the same way. Hope it helps! | Looking for a good and cost-effective solution (which are hopefully not mutually exclusive) for long-term archiving of full system backups. I've read many comments that EC2 snapshots cannot be copied to AWS Glacier (snapshots are stored in S3), but I suspect that this simply means there's no "trivial" way to do it. Digging deeper, via scripting or coding or such, is Glacier a feasible mechanism, and has anyone worked on it yet? | Is there a way to back up an EC2 snapshot to AWS Glacier
It is not recommended to keep secrets on EC2 instances. You may useAWS KMSto keep the secret keys andAWS Certificate Managerto manage your SSL certificates.You could setup aElastic Load Balancer(ELB) in front of your EC2 instance and have your SSL certificates applied on the ELB. Here is aguide. It is good practice to terminate SSL at ELB level to take some load off the server on your EC2 instance.ShareFollowansweredAug 3, 2017 at 18:29ManojManoj2,36422 gold badges2121 silver badges3636 bronze badges5If I apply SSL on ELB level in front of the EC2 instance, I can get rid of the HTTPS setup code in my service, correct? I also assume that there no [significant] cost changes if I configure my ELB to not grow above single instance (my project is not even in early alpha, so I don't need it to autoscale at all).–Igor SoloydenkoAug 3, 2017 at 19:21Correct. Well there is a small cost involved. See the ELB pricing.aws.amazon.com/elasticloadbalancing/classicloadbalancer/pricing–ManojAug 3, 2017 at 19:32If you are concerned about cost, you can further reduce it by migrating your application to serverless. (Lambda & API Gateway). Have a look at this.github.com/awslabs/aws-serverless-express–ManojAug 3, 2017 at 19:36@IgorSoloydenko Were you able to successfully integrate your express js with ELB to use HTTPS?–Mr. KennethNov 8, 2022 at 2:23@Manoj would it be okay if you can provide the solution on how to enable this?–Mr. KennethNov 8, 2022 at 2:24Add a comment| | I am very new to Node.js & Express.js which I use to write a web API service.
To enable HTTPS the service is using the following code:const server = https
.createServer({
key: fs.readFileSync('./cert/myservice.key'),
cert: fs.readFileSync('./cert/myservice.crt')
})
.listen(serverConfig.server.port, () => logger.info(`MyService is up and running`));As it is easy to see, this code assumes that the.keyand.crtfiles are available locally in the service application location.
If I want to deploy the service to a single AWS EC2 host (for simplicity reasons) these files would have to be there, which does not seem to be a secure solution.I was thinking about using AWS IAM for securing the secrets.
The issue is that it's not possible to "deploy"/make the secrets available from IAM to an EC2 node directly.
I'd have to use IAM's API to get the secrets, but then the question is how do I make the AWS credentials available on EC2.Question:Is there a recommended secure way to deploy secrets (including certificates and keys) to AWS EC2 node? | NodeJS/Express + HTTPS: How to deploy key & certificate to AWS EC2 node? |
I believe you are experiencing that because you have no EC2 user-data attachedI conducted an experiment in my AWS Console. I launched two identical EC2 instances based an Ubuntu 18.04 image. However with one of the instances I attached user-data, the other I didn'tInstance WITH user-data$ curl http://169.254.169.254/latest
dynamic
meta-data
user-data
$ curl http://169.254.169.254/latest/user-data
(prints my specified user-data)Instance WITHOUT user-data$ curl http://169.254.169.254/latest
dynamic
meta-data
(notice the absence of user-data)
$ curl http://169.254.169.254/latest/user-data
.... <title>404 - Not Found</title> ....This answers the OPs questions, but not what John Bresnahan experiencedShareFolloweditedJun 20, 2020 at 9:12CommunityBot111 silver badgeansweredMay 20, 2019 at 0:41Sam AnthonySam Anthony1,69922 gold badges2222 silver badges3939 bronze badges11I can confirm this, had an instance without user data was getting 404 however the meta-data prefix working, created an identical instance with a bootstrap script and the API is working.–Mo HajrAug 10, 2020 at 13:29Add a comment| | Trying to fetch EC2 userdata from win EC2 instancehttp://169.254.169.254/latest/user-dataI get 404 - Not Found error | EC2 http://169.254.169.254/latest/user-data returns 404 |
It is going sequentially through your data, and it does not know about all items that were added in the process: "Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters." Not only can it miss some of the items that were added after you've started scanning, it can also miss some of the items that were added before the scan started if you are using eventually consistent reads: "Scan uses eventually consistent reads when accessing the data in a table; therefore, the result set might not include the changes to data in the table immediately before the operation began." If you need to keep track of items that were added after you've started a scan, you can use DynamoDB Streams for that. | When we scan a DynamoDB table, we can/should use LastEvaluatedKey to track the progress so that we can resume in case of failures. The documentation says that LastEvaluatedKey is "The primary key of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request." My question is: if I start a scan, pause, insert a few rows and resume the scan from the previous LastEvaluatedKey, will I get those new rows after resuming the scan? My guess is I might miss some or all of the new rows because the new keys will be hashed and the values could be smaller than LastEvaluatedKey. Is my guess right? Any explanation or documentation links are appreciated. | Scanning DynamoDB table while inserting
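For reference, resuming a scan with LastEvaluatedKey looks roughly like this in boto3 (table name and region are placeholders); note that, per the answer, items inserted while the scan is paused are not guaranteed to appear:

    import boto3

    table = boto3.resource("dynamodb", region_name="us-east-1").Table("my-table")

    items = []
    start_key = None
    while True:
        kwargs = {"ExclusiveStartKey": start_key} if start_key else {}
        page = table.scan(**kwargs)
        items.extend(page["Items"])
        # LastEvaluatedKey is absent on the final page.
        start_key = page.get("LastEvaluatedKey")
        if not start_key:
            break

    print(len(items))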
According to an answer posted by AWS on the AWS Developer Forums: "Currently this setting is not available in the CloudFormation template but will be added in future releases." Source: https://forums.aws.amazon.com/thread.jspa?threadID=259548&tstart=0 | I'm trying to create a Cognito User Pool through CloudFormation, but I'm unable to define a template that allows users to use an email address or phone number as their "username" to sign up and sign in to my application. I have created an AWS::Cognito::UserPool setting phone_number and email as AliasAttributes, but when I try to sign up with an email or phone number as username I get a message telling me that the username cannot match an email or phone number format. I can define that through the AWS Management Console (image above). Does anyone know how I can achieve that through a CloudFormation template? | Cloudformation - Template attributes that allow users to use an email address or phone number as their “username” to sign up and sign in
When I created the yaml file in .ebextensions instead of .elasticbeanstalk, it worked. I was simply putting the yaml file under the wrong directory. .ebextensions/pandas.yml:
packages:
yum:
gcc-c++: []
python3?-devel.x*: []
I got an error while trying to install python-devel: []:
Command failed on instance. Return code: 1 Output: Yum does not have python-devel available for installation
So the correct devel package name, in my case, is either 'python27-devel.x86_64' or 'python35-devel.x86_64' (see https://forums.aws.amazon.com/thread.jspa?threadID=233268 and "How to install python3-devel on red hat 7"). | I'm trying to run a Flask application which has a pandas dependency. Without having python-devel installed, pandas cannot be installed. So first I need to install gcc-c++ and python-devel according to this thread: 'gcc' failed during pandas build on AWS Elastic Beanstalk. Now, my .elasticbeanstalk/config.yml looks like:
branch-defaults:
default:
environment: flask-env
group_suffix: null
global:
application_name: flask-sample-app
branch: null
default_ec2_keyname: flask-sample-app
default_platform: Python 3.4
default_region: eu-west-1
include_git_submodules: true
instance_profile: null
platform_name: null
platform_version: null
profile: null
repository: null
sc: null
workspace_type: Application
packages:
yum:
gcc-c++: []
python-devel: []
But after a successful eb deploy command, I connect to the instance via eb ssh and see that it wasn't installed. Is my config.yml correct? | elasticbeanstalk gcc and python-devel installation
At a guess, your environment is trying to get the IP of the local machine from the hostname. AWS names hosts something like ip-172-30-1-34 by default, but that value isn't in /etc/hosts. A very quick fix would be to add the output from hostname to /etc/hosts. As root, something like:
echo "127.0.0.1 `hostname`" >> /etc/hosts
(the hostname above is wrapped in backquotes so the shell substitutes the machine's actual hostname). | So when running a Jenkins job I'm getting the following error:
Unable to get host name
java.net.UnknownHostException: ip-XX-XX-XX-XXX: ip-XX-XX-XX-XXX: Name or service not known
I've read online about editing the /etc/hosts file. Right now mine looks like:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost6 localhost6.localdomain6
I've done a lot of trial and error and have yet to find a solution that works. | Jenkins java.net.UnknownHostException Error
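As a hedged illustration of the failure mode described in the answer above (the JVM resolving the machine's own hostname), the same lookup can be reproduced outside Java; this snippet is only a diagnostic sketch, not part of the original answer.
import socket

hostname = socket.gethostname()  # e.g. "ip-172-30-1-34" on a default AWS host
try:
    ip = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip}")
except socket.gaierror as err:
    # Same root cause as java.net.UnknownHostException: the hostname is not
    # present in /etc/hosts (or DNS), so the resolver cannot map it to an IP.
    print(f"cannot resolve {hostname}: {err}")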
Configuring your service as a LoadBalancer service is not sufficient for your cluster to to setup the actual loadbalancer, you need an ingress controller running like the one above.You should add the kops nginx ingress addon:https://github.com/kubernetes/kops/tree/master/addons/ingress-nginxIn this case the nginx ingress controller on AWS will find the ingress and create an AWS ELB for it. I am not sure of the cost, but its worth it.You can also consider Node Ports which you can access against the node's public ips and node port (be sure to add a rule to your security group)You can also consider the new AWS ELB v2 or ALB which supports Http/2 and websockets. You can use the alb-ingress-controllerhttps://github.com/coreos/alb-ingress-controllerfor this.Finally if you want SSL (which you should) consider the kube-lego project which will automate getting SSL certs for you.https://github.com/jetstack/kube-legoShareFollowansweredAug 8, 2017 at 7:43Jonathan WickensJonathan Wickens75966 silver badges1313 bronze badges3My service type is LoadBalancer, and I see elb is created without any ingress controller created on my side–ToddamsJan 10, 2018 at 8:40i see the same thing @Toddams but I don't know if it's a good thing or a bad thing.–Randy LAug 21, 2018 at 20:03Please note thatkube-legois depricated. Usecert-managerinstead:github.com/jetstack/cert-manager–demisxMay 11, 2020 at 20:20Add a comment| | After trying kubernetes on a few KVMs with kubeadm, I'd like to setup a proper auto-scalable cluster on AWS withkopsand serve a few websites with it.The mind-blowing magic ofkops create cluster ...gives me a bunch of ec2 instances, makes the k8s API available attest-cluster.example.comand even configures my local~/.kube/configso that I cankubectl apply -f any-stuff.yamlright away. This is just great!I'm at the point when I can send my deployments to the cluster and configure the ingress rules – all this stuff is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.In my small KVM k8s I simply installtraefikand expose it on ports:80and:443. Then I go to my DNS settings and add a few A records, which point to the public IP(s) of my cluster node(s). In AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So It feels like I need to use an external load balancer given that mytraefik helm chartservice exposes two random ports instead of fixed :80 and :443, but I'm not sure.What are the options? What is their cost? What should go to DNS records in case if the domains are not controlled by AWS? | Make k8s services available via ingress on an AWS cluster created with kops |
By using your IAM policy on your users, you are granting S3 API access to the bucket for those users only. However, when accessing the bucket via the URL, you are not using the S3 APIs and no authentication information is being sent. In order to download objects from an S3 bucket directly via the URL, the bucket needs to be made public, which you may or may not want to do. | I have a policy like below which I've attached to several users:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::foo-bar",
"arn:aws:s3:::foo-bar/*"
]
}
]
}
The intent of this policy is that these users would have full control over the bucket foo-bar. However, I'm noticing that even though the users can download the objects in these buckets using an access key and secret key, they cannot download the objects via a URL, e.g. https://s3.amazonaws.com/foo-bar/test.docx. I am currently logged in as an IAMManager user and also have AmazonS3FullAccess. I can see the list of objects in this bucket, but when I click the URL I can't download them. I can, however, download them by clicking the Download button. Question: Is there anything I need to change in my policy, or can the objects only be downloaded via the URL when they are publicly available? | Can't download objects from S3 using the URL link
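A minimal boto3 sketch of the distinction the answer above draws: authenticated S3 API calls work under this policy, while a bare URL carries no credentials. The pre-signed URL at the end is my own addition (not mentioned in the answer) as a common way to hand out a time-limited link without making the object public; the bucket and key names are taken from the question only as placeholders.
import boto3

s3 = boto3.client("s3")  # uses the IAM user's access key / secret key

# Authenticated API download: allowed by the s3:* policy on the bucket.
s3.download_file("foo-bar", "test.docx", "/tmp/test.docx")

# A plain https://s3.amazonaws.com/foo-bar/test.docx request sends no credentials,
# so it only works if the object is public. A pre-signed URL embeds a signature
# derived from the same credentials and expires after the given time.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "foo-bar", "Key": "test.docx"},
    ExpiresIn=3600,  # seconds
)
print(url)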
A deprecation warning is not an error; it's just the compiler warning you that something has been deprecated and may be removed in the future - your code will still work even if you're using new AmazonKinesisClient(), until that constructor is removed from the SDK sometime in the future. The new way of creating clients in the AWS SDK is to use the builder API like this:
final AmazonKinesisClientBuilder builder = AmazonKinesisClient.builder();
final AmazonKinesis client = builder.build();
This way, you can use builder to customize the client, like setting the region or using STS credentials. If you just want to get an instance using the default settings you can do:
final AmazonKinesis client = AmazonKinesisClient.builder().build();
Note that while you used to instantiate an AmazonKinesisClient, the builder now gives you an AmazonKinesis instance. | I want to create a Kinesis stream using Java, so I followed the AWS doc (http://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-create-stream.html). According to that, first of all I have to create a Kinesis Streams client. I tried it with the given code, which is:
client = new AmazonKinesisClient();
I'm using Eclipse with the AWS Toolkit for Eclipse, Java version "1.8.0_131", in a Windows environment. The above code gives me this error:
The constructor AmazonKinesisClient() is deprecated
How to overcome this problem? | AmazonKinesisClient constructor is deprecated
"I want to understand: why only the volume of 8GB has a mount point?" Because additional volumes are not formatted/mounted by default; AWS does not know whether you'd like ext4, NTFS or something else, nor which mount point you'd like to use. "Also, does the fact of mounting a volume on root '/' mean that all the content of root is being stored on the EBS volume?" Yes, if you have an EBS-backed instance (unlike so-called instance-store-backed instances) and if you do not have other volumes mounted (not to be confused with 'attached'). P.S. As far as I can see, you initially created an 8GB volume and then resized it via the AWS console to 100GB. Please note that you resized the EBS volume (xvda) but did not resize the partition (xvda1). AWS will not resize it automatically for the same reason: it doesn't know how you're going to use the extra space. | In my EC2 instance, which is attached to an EBS volume of 100GB, I run this command:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 100G 0 disk
└─xvda1 202:1 0 8G 0 part /
Here is the file /etc/fstab:
UUID=ue9a1ccd-a7dd-77f8-8be8-08573456abctkv / ext4 defaults 1 1
I want to understand: why only the volume of 8GB has a mount point?
Also, is the fact of mounting a volume on root '/', means that all the content of root is being stored on EBS volume? | Is the EBS volume mounted ? and where? |
Are you connecting as a Superuser? Only superusers can see all the data in STL_WLM_RULE_ACTION, other users will only see rows for their own queries. If you are executing a query as one user then checking STL_WLM_RULE_ACTION while connected as another (non super-) user you will potentially not see any rows.ShareFollowansweredJul 16, 2017 at 23:08Nathan GriffithsNathan Griffiths12.5k22 gold badges3535 silver badges5353 bronze badges1I am connecting as a superuser and I'm executing the query and checking the table as the same user.–charmanderJul 17, 2017 at 15:56Add a comment| | I have a Redshift cluster associated with a parameter group that monitors queries via workload management (WLM). I have rules that perform the "log" action whenever the number of rows scanned crosses a threshold (e.g. 100).However, when I execute the SQL queries that satisfy the rule and then check theSTL_WLM_RULE_ACTIONtable, where the query is supposed to be logged, the table comes up empty. Why is this happening? Am I missing something? | Redshift not logging to STL_WLM_RULE_ACTION |
Yes, when scaling storage, the RDS instance remains available and you can read/write, though, as noted earlier, performance may be degraded during the change. "Compute resources" refers to the instance class. If you change the instance class then you trigger a small period of unavailability. I believe that what's actually happening here is basically the same as during a DB version upgrade:
1. upgrade/resize the standby
2. promote the standby to primary
3. upgrade/resize the old primary, which becomes the new standby
The RDS instance will be unavailable for a short period of time (typically under 2 minutes) because of a DNS change in step #2 to promote the standby to primary. | I have a MySQL RDS instance in a Multi-AZ zone whose storage capacity = 200 GB. My instance class is db.m3.large and my storage type is SSD. I want to double the storage capacity to 400 GB (scale up the DB). The RDS FAQ (https://aws.amazon.com/rds/faqs/) says "while the storage capacity is being increased the DB instance is still available". Does that mean I can still read/write to my instance while it's being scaled up? Also, the RDS FAQ says that when you "decide to scale the compute resources available to your DB instance up or down, your database will be temporarily unavailable while the DB instance class is modified. This period of unavailability typically lasts only a few minutes." What do the compute resources refer to (the instance class or the storage type)? And does this "unavailability" mean that I won't be able to read/write to the DB instance while the compute resources are scaled up or down? | Clarification of downtime when increasing storage capacity of MySQL RDS Instance
Attaching an IAM role to existing EC2 instances is a relatively new feature (announced in Feb 2017). There is no support for that in Ansible currently. If you have AWS CLI 1.11.46 or higher installed, then you can use the shell module to invoke the AWS CLI and achieve the desired result. See: "New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI". | I am trying to attach an IAM role to multiple EC2 instances based on tags. Is there a module already available which I can use? I have been searching for a bit but couldn't find anything specific. | Ansible module to attach an IAM role to existing EC2 instances
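If shelling out to the AWS CLI from Ansible feels awkward, the same API the answer above relies on can be called directly. This boto3 sketch is my own illustration, not an Ansible module; the tag filter and instance profile name are assumptions.
import boto3

ec2 = boto3.client("ec2")

# Find instances by tag (hypothetical tag key/value).
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Role", "Values": ["web"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        # Same AssociateIamInstanceProfile API that the Feb 2017 announcement introduced.
        ec2.associate_iam_instance_profile(
            IamInstanceProfile={"Name": "my-instance-profile"},
            InstanceId=instance["InstanceId"],
        )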
I'm having a very similar issue (not using multi-container) and it appears Amazon has a bug. By selecting Docker 1.12.6 I am able to execute eb create, which fails for me otherwise. EDIT: This appears to have been a bug in the EB CLI. I upgraded and it works fine now. | I have a multi-docker setup running locally. Now, I would like to deploy that to AWS using Elastic Beanstalk. My folder structure is like this:
app
/.ebextensions
composer.config
/.elasticbeanstalk
config.yml
/docker (docker-compose and additional Docker files)
docker-compose.yml
/www (root folder of application)
I already ran eb init, but I don't know how to actually deploy my local docker-compose configuration to AWS. I read about the Dockerrun.aws.json file; should I just copy-paste my docker-compose into that file? Or how does it work? I already tried to zip my folder and upload it to AWS, and eb create / eb deploy, but then I get the following error message:
Platform Multi-container Docker 17.03.1-ce (Generic) does not appear to be valid
Thank you | Deploying multi-docker local setup to AWS using Elastic Beanstalk
You have to first make sure that all messages you received are deleted within the VisibilityTimeout. If you are using DeleteMessageBatch for deletion, make sure that all 10 messages are deleted.
Order of messages are guaranteed only in a single message group.
This also means that if you set the same group id to all messages, you are limited to a single consumer so that order of messages are preserved for sure. Even if use multiple consumers, all messages that belong to a same group becomes invisible to other consumers until visibility timeout expires.ShareFolloweditedSep 9, 2017 at 4:18Nathan Tuggy2,2362727 gold badges3131 silver badges3838 bronze badgesansweredSep 9, 2017 at 3:59user2129038user212903818422 silver badges1212 bronze badges1What are "other" consumers...same SQS Client in loop is considered a single consumer or multiple different ones for each iteration?–gvasquezJul 11, 2019 at 21:40Add a comment| | I have a FIFO SQS queue, with visibility time of 30 seconds.
The requirement is to read messages as Quickly as possible and clear the queue.I have code in JAVA in a fashion shown below (this is just a representation of idea only, not complete code)://keep getting messages from FIFO and process them ASAP
while(true)
{
List<Message> messages =
sqsclient.receiveMessage(receiveMessageRequest).getMessages();
//my logic/code here to process these messages and delete them ASAP
}In the while loop as soon as the messages are received, they are processed and removed from the queue.
But,many timesthe receiveMessageRequest does not give me messages (returns zero messages).Also, the messages limitation is only 10 at a time during receive from SQS, which is already an issue, but due to these zero receives, the queues are piling up.I have no clue why this is happening. The documentation exactly is not clear on this part (or Am I missing in terms of the configuration of the queue?)Please help!Note:
1. My FIFO Queue always has messages in this scenario, so there is no case of Queue having zero messages and receive request returning zero2. The processing and delete times are also Less than the visibility timeout.Thanks.Update:I have started running multiple consumers for processing the FIFO queue. Clearly, one consumer is not coping up with the inflow of messages. I shall update in few days how multiple consumers are performing. Thanks | Amazon SQS - FIFO Queue message request, inconsistent receives |
Two approaches that you can follow to test this :
1) AWS CLI
Deploying function code from S3 allows for substantially larger deployment packages when compared to directly uploading to Lambda.There are two ways to get your Lambda function’s code into AWS Lambda: either directly uploading the function’s deployment package, or having Lambda pull it from S3.https://hackernoon.com/exploring-the-aws-lambda-deployment-limits-9a8384b0bec32) ClaudiaJS CLIRead here:https://claudiajs.com/news/2016/09/21/claudia-1.9.0.htmlclaudia create --handler lambda.handler --deploy-proxy-api --region us-south-1 --use-s3-bucket bucket-namethank you @Gojko for your contribution.ShareFolloweditedJun 29, 2018 at 10:07answeredOct 21, 2017 at 12:14Sikandar KhanSikandar Khan12722 silver badges1616 bronze badgesAdd a comment| | creating Lambda lambda.setupRequestListeners
{ RequestEntityTooLargeException: Request must be smaller than 69905067 bytes for the CreateFunction operation
message: 'Request must be smaller than 69905067 bytes for the CreateFunction operation',
code: 'RequestEntityTooLargeException',
time: 2017-06-22T08:30:52.260Z,
requestId: 'xxx',
statusCode: 413,
retryable: false,
retryDelay: 89.31111557639109
}Is my project too big or what is happening here? Can I upload it through S3 or does it have to do with the number of routes in my project?The same deploy technique works with a smaller project that has only a couple of routes.I am using claudia.js with these commands:"scripts": {
"deploy": "claudia create --handler lambda.handler --name authService --deploy-proxy-api --region eu-central-1",
"update": "claudia update",
"generate-proxy": "claudia generate-serverless-express-proxy --express-module server",
"test": "./node_modules/.bin/mocha --reporter spec"
}, | AWS Lambda setupRequestListerners RequestEntityTooLargeException claudia.js |
Only printable characters can be used as values in the environment variable in opsworks as documented in the link:http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment.So, the only way I found was to replace the special characters as strings, and then in the application that was using this character, replace the string representation received from the environment variable with the corresponding special character.ShareFollowansweredJun 29, 2017 at 13:20CodeIgnitorCodeIgnitor12599 bronze badgesAdd a comment| | I am trying to setup an environment variable in Amazon's opsworks with chef. This is intended to keep a private key which contains newline characters. This is not getting set correctly, and the deployment of my rails app fails due an Exception caused due to this incorrect variable.
Can someone please help me with this?Thanks. | Environment variable with newline in amazon opsworks |
You can just add a block device mapping inside your Packer template:
"launch_block_device_mappings": [
{
"device_name": "/dev/xvda",
"volume_type": "gp2",
"volume_size": 20,
"delete_on_termination": true
}
]
You must check which device name your AMI uses; it could be /dev/sda1 or /dev/xvda: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html | I faced the problem that some Apache logs filled up so quickly that the root filesystem was not big enough. I am using an AMI created with Packer and CentOS 6. How can I resize the root FS during the AMI creation to have it ready for later usage? | How to resize the root filesystem during AMI creation with packer?
As per:https://forums.aws.amazon.com/thread.jspa?messageID=789869󀵭joey-aws says:We are currently in the process of rolling out a solution which
addresses this exact problem. In the meantime, a common workaround
would be to update something small, such as a "description" field
which could then be used to "trigger" an API Gateway deployment when
updating the CloudFormation stack.I'll update this answer and the example repo once it's rolled out.ShareFolloweditedJan 8, 2018 at 14:30answeredJun 16, 2017 at 9:15JonathanGailliezJonathanGailliez1,61722 gold badges1515 silver badges2222 bronze badgesAdd a comment| | I'm building an API using AWS API Gateway and AWS Lambda. I would like to achieve continuous delivery for this API. The path I've chosen to do it is to use CloudFormation through AWS CodePipeline. I've managed to to it for another project using Lambdas (without API Gateway), it works perfectly and it is really pleasant to use.The issue I'm facing when deploying is that the Lambdas are properly updated but not the API definition. From what I understand, the AWS::ApiGateway::Deployment are immutable resources which means that for each deployment of the API I need to create a new AWS::ApiGateway::Deployment resource. This is not practical at all because for each of this AWS::ApiGateway::Deployment I have a new Invoke URL. This is not acceptable since I would have to either change my DNS record to the newly deployed API invoke URL or ask our API users to change the URL in their applications.What I would like is to be able to change the API definition and the Lambdas implementations without my API users having to change anything in their applications.How can I achieve this behavior?I created a tutorial to highlight my issue. You can find it at:https://github.com/JonathanGailliez/aws-api-gateway-lambda-example | AWS API Gateway: How to achieve continuous delivery? |
If you configured your EC2 key pair using theEC2KeyNameElastic Beanstalk configuration option, you can remove it using the AWS CLI:aws elasticbeanstalk update-environment --environment-name $ENV \
--options-to-remove 'Namespace=aws:autoscaling:launchconfiguration,OptionName=EC2KeyName'ShareFollowansweredJan 19, 2021 at 20:23jb.jb.10k1212 gold badges4040 silver badges3838 bronze badgesAdd a comment| | I'm looking to lock down security on my AWS Elastic Beanstalk instances. I actually manage my beanstalk instances with Chef, and I use that to deploy individual developer SSH keys to the instances.I no longer need the key that beanstalk put on the server. Can I safely remove it from the authorized keys file? I can't find any documentation from Amazon about whether this will interfere with deployments or changing out Environment Properties. | Can I remove the default SSH key of an Elastic Beanstalk instance? |
If we observe closely, AwsClientBuilder class has following methods:public final Subclass withRegion(Regions region) { }
public final Subclass withRegion(String region) { }
private Subclass withRegion(Region region) { }I was trying to use method withRegion(Region region), which is private in this base class. So we should use method withRegion(Regions region) [NOTE: The parameter is Regions instead of Region]. Using this method solved my issue.ShareFollowansweredJun 14, 2017 at 4:20Rajib BiswasRajib Biswas8121212 silver badges2727 bronze badgesAdd a comment| | I am using AWS SDK for Java to use in AWS Metering service. When I tried to useAWSMarketplaceMeteringClientBuilderto create aAWSMarketplaceMeteringClient, I found that if I usewithRegion(Region region)method, I get following compile time error:The method withRegion(Region) from the type AwsClientBuilder<AWSMarketplaceMeteringClientBuilder,AWSMarketplaceMetering> is not visibleThe client code is as shown below:AWSMarketplaceMeteringClient metClient = (AWSMarketplaceMeteringClient) AWSMarketplaceMeteringClientBuilder
.standard()
.withRegion(Regions.getCurrentRegion())
.withCredentials(InstanceProfileCredentialsProvider.getInstance())
.build();And when I try to use thesetRegion(Region region)method ofAWSMarketplaceMeteringClientdirectly, I get following runtime error:Exception in thread "main" java.lang.UnsupportedOperationException: Client is immutable when created with the builder.
at com.amazonaws.AmazonWebServiceClient.checkMutability(AmazonWebServiceClient.java:854)
at com.amazonaws.AmazonWebServiceClient.setRegion(AmazonWebServiceClient.java:349)So how should I use the withRegion(Region region) method? | AWSMarketplaceMeteringClientBuilder.withRegion() is not visible |
AFAIK, whats best worked for me so far is [ec2 --> one role -> many policies] and the role trust relation ship is assigned to the ec2 instance service.Not sure why to be concerned about the security aspect as to get the metadata you are already authenticated and have access to the ec2 instance.Hope this helps, may be more detailed use case might help to answer more precisely.ShareFollowansweredMay 26, 2017 at 14:49VSKVSK46944 silver badges1111 bronze badgesAdd a comment| | If EC2 instance required to have access to multiple AWS services(Like S3, SNS, SQS , CloudWatch etc), what is the best practice for granting access to EC2 instance,Should One ROLE has all the required permissionORCreate multiple ROLE (You can only attach one ROLE to EC2 instance. Using config file you can use multiple role. Extra coding required depending upon which language you are using) - One for each serviceAs per AWS documentation you should always create ROLE for EC2 and assign policy to ROLE according to your requirement.Is there any security concern with granting multiple service access to one ROLE?
Why I am asking is because Using EC2 metadata you can get the accesskey info assigned to the EC2 instance using that ROLE at that point. Keys are getting refreshed frequently by EC2.Any feedback or input. | AWS IAM ROLE to access multiple services |
Actually, I found out.http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsChangeKSStrategy.htmlIt is possible to restrict the replication of a keyspace to selected
datacenters, or a single datacenter. To do this, use the
NetworkTopologyStrategy and set the replication factors of the
excluded datacenters to 0 (zero), as in the following example:cqlsh> ALTER KEYSPACE cycling WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'DC1' : 0, 'DC2' : 3, 'DC3' : 0 };ShareFollowansweredMay 23, 2017 at 21:10Vincenzo MelandriVincenzo Melandri8155 bronze badges4Good job finding that on your own! In addition to setting it to zero, you also can completely omit the excluded data centers from your keyspace definition.–AaronMay 24, 2017 at 14:16Thanks, dunno how easy will that be as nodes will be bootstrapped via puppet, probably easier to set 0 due to structure of the config (hieradata). Is there an actual difference except "style"?–Vincenzo MelandriMay 24, 2017 at 21:07I don't think so. But keyspace changes replicate, so you only have to create it once... NOT with each node deploy.–AaronMay 24, 2017 at 21:38Yes it is only created once.–Vincenzo MelandriMay 24, 2017 at 23:21Add a comment| | I am looking at documentation but I can't find how to do what I need.
I need to set up Apache Cassandra (not datastax, and for now I only have one test cluster) for the following scenario:I need 1 cluster spanning 4 datacenters (some might be physical datacenters in different geographical locations, some might be AWS).In this cluster, I need 3 keyspace. One keyspace replicates across all datacenters, the remaining keyspace need to only replicate to 1 datacenter.+---------------------------------------------------+
| DC 1 | DC 2 | DC 3 | DC 4 |
+---------------------------------------------------+
| Keyspace A | Keyspace A | Keyspace A | Keyspace A |
+---------------------------------------------------+
| | | Keyspace B | Keyspace B |
+---------------------------------------------------+
| Keyspace C | Keyspace C | | |
+---------------------------------------------------+The reason is that data in KeyspaceB and KeyspaceC by legal obligation have to be kept in different regions.
Can I get away with one cluster configuring replication factor 0 or something for the empty spaces in the above table or do I need 3 different clusters?Cheer,
Vince. | Cassandra cluster, wants to replicate across some datacenters but not all |
mounting the EFS to the same EC2 instance and running a copying/rsync
operation to copy data from EBS to EFSLooks like this is the only available/feasible option.ShareFollowansweredJan 29, 2018 at 19:24A.K.DesaiA.K.Desai1,28411 gold badge1010 silver badges1717 bronze badges2rsync makes it possible to do a delta sync and you end up with consistent data so apparently it should be preferred to cp.–SebKJul 23, 2018 at 6:32Yes @SebK, that's what we followed and looks like it's still the same solution.–A.K.DesaiJul 23, 2018 at 12:52Add a comment| | We are in the process of migrating from EBS to EFS for our data storage solution. We are having Terabytes of data. Currently we are mounting the EFS to the same EC2 instance and running a copying/rsync operation to copy data from EBS to EFS.Just wanted to know if there is a way to restore a EBS Snapshot directly to EFS so complete data set goes to EFS. | AWS EBS Snapshot to EFS |
If your hashkey column is called 'threadId', your update should look like this:{
"TableName": "thread1",
"Key": {
"threadId": "AA"
},
"AttributeUpdates": {
"field1": {
"Value": "Worked!"
}
}
}ShareFollowansweredMay 22, 2017 at 21:22jens walterjens walter13.6k33 gold badges5757 silver badges5555 bronze badges11Thank you! It worked after changing it to "Key": { "threadId": { "S": "AA" }–a-sakMay 23, 2017 at 3:53Add a comment| | I'm trying to update a string field in a DynamoDB table (thread1) which only has Hashkey(threadId). Document with threadId = "AA" definitely exists and also has field1 attribute.I'm getting"The provided key element does not match the schema"ValidationException when POST of UpdateItem from API Gateway is invoked using the following Body Mapping Template.{
"TableName": "thread1",
"Key": {
"HashKeyElement": {
"S": "AA"
}
},
"AttributeUpdates": {
"field1": {
"Value": {
"S": "Worked!"
}
}
}
}I have also tried the same using UpdateExpression, which also gives the same error. | DynamoDB simple UpdateItem throwing "The provided key element does not match the schema" ValidationException |
Standard attributes are the only ones that can be searched, but not all are indexed (searchable). The complete list is available here:http://docs.aws.amazon.com/cognito/latest/developerguide/how-to-manage-user-accounts.htmlShareFollowansweredMay 21, 2017 at 5:17Jeff BaileyJeff Bailey5,69511 gold badge2323 silver badges3030 bronze badges31If I need to service a cognito user, looking them up by their website, must I keep a duplicate DB in dynamo with extra indexes to do that?–KristianMay 21, 2017 at 12:39I think that the only architecture that makes sense is to have a duplicate users table in dynamo and relegate searching to that. You can use the Post-confirmation trigger to write the value to dynamo.–Michael PellMay 22, 2017 at 21:40That is disappointing. Last year, I rolled my own auth system and user database with node and dynamodb. Then I eventually switched to cognito because it was so similar. Now that I am aware of this limitation and that i'd need to keep a copy-db of everything in order to look up customers (by an important field pertaining to the product i'm building), it makes me wonder if my own version of this was more appropriate.–KristianMay 23, 2017 at 4:37Add a comment| | In the Cognito dashboard, there is a list of standard attributes that we are given to choose from:and in the docs, it says:"You can only search for standard attributes. Custom attributes are
not searchable. This is because only indexed attributes are
searchable, and custom attributes cannot be indexed."and when calling the API forListUsers,filtering on "website", what I thought was a standard attribute, I get the following:Cannot list users on the provided selector: website = "mywebsite.com"Are these attributes notstandard enoughfor this API call? or is my input just poorly formed? | Cognito standard attribute website not searchable |
Currently, there isn't a configuration option that simply turns on transfer acceleration. You can however, use endpoint override in the client configuration to set the accelerated endpoint.ShareFollowansweredJun 5, 2017 at 16:56Jonathan HensonJonathan Henson8,14633 gold badges2828 silver badges5252 bronze badgesAdd a comment| | I enabled "Transfer acceleration" on my bucket. But I dont see any improvement in speed of Upload in my C++ application. I have waited for more than 20 minutes that is mentioned in AWS Documentation.Does the SDK support "Transfer acceleration" by default or is there a run time flag or compiler flag? I did not spot anything in the SDK code.thanks | Does AWS CPP S3 SDK support "Transfer acceleration" |
You can't point an IP address to a load balancer, so this seems like a very bad idea. You need your own domain/subdomain that clients can point their domains/subdomains to via a CNAME record on their end. Then if the location of your service ever changes you just have to update your domain record and their DNS records will continue to be correct.ShareFollowansweredMay 18, 2017 at 1:02Mark BMark B191k2525 gold badges310310 silver badges307307 bronze badges12You got here first, so rather than also giving essentially the same answer, I'll back this one. :) And I would add, each client gets their own CNAME target in this "routing" domain, even if not necessary now because of just one back-end. Youwillregret not doing these things. And in the case of clients who want their site at the apex of a new vanity domain, not a subdomain... fine, just host their DNS for them, in Route 53. Totally worth the avoidance of hassle. The objective should be to ensure that your clientsneverhave to make a DNS change. Organizingthatgets unwieldy, fast.–Michael - sqlbotMay 18, 2017 at 2:06Add a comment| | For our SaaS app, we're allowing customers to point their domain name to our server.The plan right now is to simply hand out one of our AWS elastic IP addresses for them to point their domain to. The elastic IP address would essentially be pointed to a EC2 instance web-server...and maybe a load balancer in time (if traffic demands it!).The user would specify what their domain is in our app, and we'd be able to resolve the host name coming in as their app.My concern is the longevity of this solution. This IP cannot change. And we'll certainly be tied to AWS if we go this route.(Note: Being a 1-2 person startup, standing up a data-center is more than likely no-go, and we hope to use AWS or Azure).What solutions would make this IP address -> SaaS Web Server concept last in the long run, with flexibility, and as minor of a tie as possible to a cloud provider?With running the risk of asking "what is thebestway to do this"...what's the best way to do this, keeping in mind longevity and small opt-in to a cloud provider? | Clients pointing their domains to our IP - Concerns & System Longevity |
Yes.You need to set a cloud watch alarm (every 10 minutes, like a cronjob, and configure it as a trigger to your lambda.However (!), you will need to write the code that reads the dynamodb stream and that is going to be an it of a challenge.You will need to persist somewhere (another dynamodb table, S3 or redis) the last position processed in the dynamodb stream - so you won't be handling the same update twice.I highly recommend you use the default topology, and set the trigger to be dynamodb, then your lambda will get as input the updated records. AWS manages for you the position on the stream and that is (unlike to other option) is a scalable solution.ShareFollowansweredMay 16, 2017 at 19:44johnijohni5,48866 gold badges4242 silver badges7171 bronze badgesAdd a comment| | I am looking for a way to batch-read updates from DynamoDB in scheduled intervals.For instance, every 10 minutes I want to be able to read all the updates to the DynamoDB table that occurred since the previous read.I understand the DynamoDB Streams can be setup to trigger a Lambda Function. Is there anyway for Lambda to batch all the updates over a certain time interval? To be processed all at once? | Scheduled reading of DynamoDB Stream |
It is so valuable question for the AWS beginners.
I was also confused with this question but get clear after a while.I know you used the EB CLI for handling the EB.
With the EB CLI you don't need the .pem file for normal use.
Because the EB CLI has 'eb ssh' for connecting the EC2 instance of your EB.
Please check out :https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.htmlAlso you can't get the standard .pem file of your EB.
There are some steps.
Please check out :SSH to Elastic Beanstalk instanceShareFollowansweredFeb 5, 2018 at 7:36avantdevavantdev2,67411 gold badge2020 silver badges2323 bronze badgesAdd a comment| | I'm 100% new to AWS and I'm working on deploying my personal site. I've spun up an EB environment via the AWS EB CLI and but I would also like to be able to SSH into the EC2 instance that gets created however I can't locate the private key (.pem) file that is associated with it which I need tochmodfor permit SSH'ing in.Does a private key file get created when you create an EC2 instance via Elastic Beanstalk? If so where can I find it? Thanks a ton. | Where can I find private key file for EC2 instance I create through Elastic Beanstalk CLI? |
Currently you get one Lambda invocation for every PutLogEvents batch that CloudWatch Logs had received against that log group. However you should probably not rely on that because AWS could always change it (for example batch more, etc).You can observe this behavior by running theCWL -> Lambda examplein the AWS docs.ShareFollowansweredFeb 6, 2019 at 19:37Daniel VassalloDaniel Vassallo340k7272 gold badges509509 silver badges443443 bronze badgesAdd a comment| | The AWS documentationindicates that multiple log event records are provided to Lambda when streaming logs from CloudWatch.logEventsThe actual log data, represented as an array of log event
records. The "id" property is a unique identifier for every log event.How does CloudWatch group these logs?Time? Count? Randomly, from my perspective? | How does Amazon CloudWatch batch logs when streaming to AWS Lambda? |
I would use the User Data of the EC2 instance to launch the instance directly into the ECS cluster. This is the User Data you'll want to use:#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.configThe details of this are described in theAWS docs. You can also use this user data in an Auto Scale Group Launch Configuration.Apart from that, it might be worth it to look into languages that where made to provision infrastructure, like Terraform (also for AWS) or CloudFormation (specifically for AWS).ShareFollowansweredApr 30, 2017 at 9:08BramBram4,3722323 silver badges2424 bronze badgesAdd a comment| | I am trying to programmaticly create a ECS cluster with EC2 instance in it. As far as I understand I should first create an ECS cluster , than EC2 instance and then register instance using this method :http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ECS.html#registerContainerInstance-propertyIs it how I should do it? Which arguments are mandatory? How to get instanceIdentityDocument and instanceIdentityDocumentSignature?thanks | How to add EC2 instance to ECS cluster using AWS Node SDK |
AFAIK aws sdk does not handle rate limiting. use thisrate limitingmodule to wrap your ses.send like this,var RateLimiter = require('limiter').RateLimiter;
// Allow 50 requests per second. Also understands
// 'second', 'minute', 'day', or a number of milliseconds
var limiter = new RateLimiter(50, 'second');
//huge number of requests
for (var i = 0; i < 10000; i++) {
//Throttle requests
limiter.removeTokens(1, function (err) {
if (err) throw err
// err will only be set if we request more than the maximum number of
// requests we set in the constructor
// remainingRequests tells us how many additional requests could be sent
// right this moment
ses.sendEmail({
//body and other options
}, function (err, data) {
if (err) throw err
//parse error and attempt to retry
})
});
}ShareFolloweditedApr 25, 2017 at 14:10answeredApr 25, 2017 at 13:07Ratan KumarRatan Kumar1,64033 gold badges2525 silver badges5252 bronze badgesAdd a comment| | Do I need to handle this like :ses.sendEmail( //body and other options
}, function (err, data) {
if (err)
//parse error and attmept to retry
});Or will it be done just by handling it like :var ses = new aws.SES({apiVersion: apiVersion,maxRetries: 10}); | Does Nodes js AWS sdk automatically handles rate limiting and reties or do I need to attempt retry after parsing response |
If you're doing continuous deployment you should deregister the instance you're deploying to from ELB (say,aws elb deregister-instances-from-load-balancer), wait for the current connections to drain, deploy you app and then register an instance with ELB.http://docs.aws.amazon.com/cli/latest/reference/elb/deregister-instances-from-load-balancer.htmlhttp://docs.aws.amazon.com/cli/latest/reference/elb/register-instances-with-load-balancer.htmlIt is also a common strategy to deploy to another AutoScaling Group, then just switch ASG on the load balancer.ShareFollowansweredApr 25, 2017 at 7:49Sergey KovalevSergey Kovalev9,20022 gold badges2929 silver badges3232 bronze badges3Thanks. I saw this one -github.com/opbeat/elb-dance- seems to makes this easier. My problem with this approach is that it really adds a complexity to the deployment flow. but if no other option comes up, i'll take this direction.–JAR.JAR.beansApr 25, 2017 at 9:07It should be complex 'cause it's very customizable. For easy deployments there is AWS Beanstalk.–Sergey KovalevApr 26, 2017 at 13:31Worth noting that even with register/deregister, still seeing none 2xx responses over load. :(–JAR.JAR.beansApr 27, 2017 at 11:26Add a comment| | With an ELB setup, there as healthcheck timeout, e.g. take a server out of the LB if it fails X fail checks.For a real zero down time deployment, I actually want to be able to avoid these extra 4-5 seconds of down time.Is there a simple way to do that on the ops side, or does this needs to be in the level of the web server itself? | AWS ELB zero downtime deploy |
After a couple of hours of debugging and going through AWS documentation it seems that there is currently no way of getting exponential back of from AWS SNS for anything else apart from HTTP/HTTPS sources.You can checkout thethis.As quoted in the documentation:When a user calls the SNS Publish API on a topic that your Lambda
function is subscribed to, Amazon SNS will call Lambda to invoke your
function asynchronously. Lambda will then return a delivery status. If
there was an error calling Lambda, Amazon SNS will retry invoking the
Lambda function up to three times. After three tries, if Amazon SNS
still could not successfully invoke the Lambda function, then Amazon
SNS will send a delivery status failure message to CloudWatch.Since there is a async invocation of the Lambda SNS will not care what the exit status of the lambda is. Hence, from the point of view of SNS, a successful invocation of the lambda is success enough and will not provide a failure event, hence no customised back off.For now it seems, adding an HTTP endpoint is the only option.ShareFollowansweredApr 24, 2017 at 10:15AvneeshAvneesh34533 silver badges1313 bronze badgesAdd a comment| | As it currently stands AWS SNS provides functionality for retrial(Linear, Geometric and Exponential backoff) with HTTP/HTTPS endpoints in case of a 5XX response returned from the endpoint.Because of this my application architecture changes and I forcefully need to insert a API gateway between my SNS and Lambda so that in case of a failure I can return a 5XX status from the API gateway and utilise the retrial functionality of SNS.But there is nothing mentioned for retrial mechanism with AWS lambda. Is there any way I can use the SNS retrial facilities for non-HTTP based subscriptions?Thanks | Getting exponential backoff in AWS SNS with AWS Lambda |
SeeSetting Up a Static Website Using a Custom Domain.To allow requests for both example.com and www.example.com, you need to create two buckets even though you will host content in only one of them. You will need to configure the other bucket to redirect requests to the bucket that hosts the content.ShareFollowansweredApr 9, 2017 at 16:22jarmodjarmod75k1616 gold badges124124 silver badges128128 bronze badgesAdd a comment| | I have created a record set for my webiste volcalc.io and www.volcalc.io which is stored in an s3 bucket.When I try to browse to the website I see this error:404 Not Found
Code: NoSuchBucket
Message: The specified bucket does not exist
BucketName: volcalc.io
RequestId: xxx
HostId: xxxThe bucket name is www.volcalc.io, not volcalc.ioHow do I change it to make it look for bucket named www.volcalc.io? | aws - browse to website - 404 not found - no such bucket |
You need to set the STATICFILES_STORAGE settingSTATICFILES_STORAGE = 'path/to/custom_storages.StaticStorage'If you are using wagtail (which I assume you do since you are tagging this question with it), you can place it in the defaulthome/directory and refer to it like so: 'home/custom_storages.StaticStorage'The content of custom_storages.py are stated in the guide you are following:# custom_storages.py
from django.conf import settings
from storages.backends.s3boto import S3BotoStorage
class StaticStorage(S3BotoStorage):
location = settings.STATICFILES_LOCATIONEdit:
I have a GitHub repo (also a wagtail project) in which I am using this code, but only for my media files. You can check ithere.ShareFollowansweredApr 7, 2017 at 12:50dentemmdentemm6,33033 gold badges3232 silver badges4444 bronze badges1Thanks you! This helped.–emTr0Apr 10, 2017 at 13:54Add a comment| | I'm using the following guide:https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/At the section that instructs you on how to "Configuring Django media to use S3". I'm using this for Wagtail.I'm unclear on where to put the "custom_storages.py" settings. Everywhere I'm putting it doesn't seem to work. I reverted back to Whitenoise for now.Thanks! | Configuring Django/Wagtail media to use S3 |
Update:This became possible since the question was asked. SeeAmazon Polly Now Supports Input Character Limit of 100K and Stores Output Files in S3.ShareFolloweditedDec 1, 2020 at 8:38answeredMar 31, 2017 at 11:16John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges498498 bronze badges21Thanks John. I'm looking at building an Amazon API endpoint that hooks into a Lambda which would do this... I'll post here if successful!–Greg OlsenMar 31, 2017 at 11:26@John This is possible as of 2018 - please seestackoverflow.com/a/65082930/46249.–MatthewDec 1, 2020 at 0:08Add a comment| | Is there anyway to tell Amazon's Polly service to dump the audio file to S3 directly?Using the SDK you can get a stream of the response, which I can then upload to S3, but I was hoping to skip the step and do it directly.I've tried sending Polly S3-presigned post url as the file location but haven't gotten it to work. | Amazon Polly to S3 Directly |
This is happening because the uWSGI worker is running under a user with limited permissions. You need to create the.newspaper_scraper/memoizeddirectory first, and set the correct permissions on it (allow others to r/w). You can do this on deployment by making a script in.ebextensionsthat EB executes upon deployment.Create a file in.ebextensions/setup_newspaper.configand add the following to it:.ebextensions/setup_newspaper.configpackages:
yum:
libxslt-devel: []
libxml2-devel: []
libjpeg-devel: []
zlib1g-devel: []
libpng12-devel: []
container_commands:
01_setup_newspaper:
command: mkdir -p /home/wsgi/.newspaper_scraper/memoized && chmod 644 /home/wsgi/.newspaper_scraper/memoizedPS: It looks likenewspaperrequires some extra packages to be installed, so I added them too.Read more info on.ebextensionshere:http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-container.html#create-deploy-python-custom-containerShareFollowansweredMar 28, 2017 at 15:49Daniel van FlymenDaniel van Flymen11.2k44 gold badges2525 silver badges3939 bronze badgesAdd a comment| | I'm using elastic beanstalk and django. One of my dependencies in my requirements.txt file has some setup the it performs when it's initially imported. Part of the setup is to check whether a dir exists else it create it. I'm getting permissions errors because the user ( I assume it's wsgi) does not have the permissions to create the dir.OSError: [Errno 13] Permission denied: '/home/wsgi/.newspaper_scraper/memoized'How can I setup permissions to allow for these dirs to be created in a way that will be persistent across instances I create in the future? | wsgi user permissions on elastic beanstalk |
The juypter notebook is installed on a server in this case an EC2 machine.
Any number of people can SSH into this machine if they have the credentials using putty or some ssh client, this has no connection with jupyter notebook.(assuming that the SSH port 22 is open for the other users and they are able to connect)When you launch the jupyter notebook using thejupyter notebookcommand -> you start a local instance of the jupyter notebook on the default port (maybe 8888)you will have a URL for this notebook interface and you can work on it.
Important to note -> This is a local instance of your notebook. It is not public and can only be accessed on your OS username as a localhost.If other OS users run thejupyter notebookcommand they will get their local version of the notebook on a different port (maybe 8889 by default as the port number 8888 is already in use by you )You can make your notebookpublicand then you will get a public URL for your notebook (serverip:8888 or the port you have specified)This public link can be shared with others. Now multiple people have visibility to your notebook and can edit code in your notebook.p.s -> for public notebooks the port in which you are running the notebook needs to accept connections from AWS end. This can be configured in AWS console under the security groups tabShareFolloweditedMay 9, 2018 at 7:40answeredMay 9, 2018 at 7:34Thalish SajeedThalish Sajeed1,3511212 silver badges2525 bronze badgesAdd a comment| | I am working on a project in an EC2 isntance using jupyter notebook. It seems like jupyter notebook does not allow multiple ssh to its server at the same time, I have to log out if other people want to connect to jupyter notebook through the instance. Is it possible to make multiple access to jupyter notebook from the same instance? | Is it possible to grant multiple users to Jupyter notebook? |
It's notsparkthatconvertstheTINYINTtype into abooleanbut the j-connector used under the hood.So, actually you don't need to specify a schema for that issue. Because what's actually causing this is the jdbc driver that treats the datatypeTINYINT(1)as theBITtype (because the server silently convertsBIT->TINYINT(1)when creating tables).You can check all the tips and gotchas of the jdbc connector in the MySQLofficial Connector/J Configuration Properties guide.You just need to pass the right parameters for your jdbc connector by adding the following to your url connection :val newUrl = s"$oldUrl&tinyInt1isBit=false"
val data = spark.read.format("jdbc")
.option("url", newUrl)
// your other jdbc options
.loadShareFolloweditedMar 1, 2017 at 1:09zero323326k104104 gold badges964964 silver badges937937 bronze badgesansweredFeb 27, 2017 at 8:53eliasaheliasah40k1212 gold badges127127 silver badges156156 bronze badgesAdd a comment| | I'm using Apache Spark to read data fromMySQLdatabase fromAWS RDS.It is actually inferring the schema from the database as well. Unfortunately, one of the table's columns is of typeTINYINT(1)(column name : active). Theactivecolumn has the following values:non activeactivependingetc.Spark recognizesTINYINT(1)asBooleanType. So he change all value inactivetotrueorfalse. As a result, I can’t identify the value.Is it possible to force schema definition when loading tables to spark? | Is it possible to force schema definition when loading tables from AWS RDS (MySQL) |
Try it with./gradlew bootRepackageShareFollowansweredFeb 23, 2017 at 14:42Martin LinhaMartin Linha99999 silver badges2121 bronze badges2Sorry that was a transcription error in the OP. I used./gradlew bootRepackagebut it doesn't work.–Martin ErlicFeb 23, 2017 at 14:44I am also getting the same error on MAC machine in context of my project, when I am trying to run the same type of command from my Java code, and getting same error from the relative code. Looking forward for the solution of this error...–sumeetFeb 26, 2017 at 10:52Add a comment| | I'm trying to compile a Springboot application on Amazon AWS:https://aws.amazon.com/blogs/devops/deploying-a-spring-boot-application-on-aws-using-aws-elastic-beanstalk/When I try to package the application with Gradle in GitBash I get the following error message:$ ./gradlew bootRepackage
bash: /gradlew: No such file or directoryI'm using Windows. I triedgit config core.autocrlf falseas suggested here:Error with gradlew: /usr/bin/env: bash: No such file or directory. I still have the same issue. What am I missing? | Bash: /gradlew: No such file or directory (Windows) |
Well turns out in the project.json file under the dependencies node, the wizard's serverless template made a reference to Microsoft.NETCore.App without specifying a "type" of "platform". I spotted other samples online where the type line was present and once I added it, everything started working!"Microsoft.NETCore.App": {
"type": "platform",
"version": "1.1.0"
},ShareFollowansweredFeb 12, 2017 at 6:47ThatCreoleThatCreole52511 gold badge88 silver badges1717 bronze badgesAdd a comment| | I feel like an idiot for asking such a basic question but here goes... I'm trying out AWS Lambda in C# for the first time and according to the docs:Anything written to standard out or standard error - using
Console.Write or a similar method - will be logged in CloudWatch Logs.OK well upon execution I get the following runtime exception:Unable to load DLL 'api-ms-win-core-processenvironment-l1-1-0.dll': The specified module could not be found.
(Exception from HRESULT: 0x8007007E): DllNotFoundException
at Interop.mincore.GetStdHandle(Int32 nStdHandle)
at System.ConsolePal.GetStandardFile(Int32 handleType, FileAccess access)
at System.Console.<>c.<get_Out>b__25_0()
at System.Console.EnsureInitialized[T](T& field, Func`1 initializer)
at System.Console.WriteLine(String value)My question is, how / where am I supposed to add the reference it's asking for? The answer seems non obvious. | Using C# Console.Write* with AWS Lambda |
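For context, a sketch of where that setting lives in project.json; the surrounding entries and version numbers are whatever the Lambda project template generated and are shown here only as placeholders:
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "Amazon.Lambda.Core": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}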
Try this:
# Install the build toolchain and headers first ("r-base" is a Debian/Ubuntu
# package name and is not available via yum on the Amazon Linux AMI)
sudo yum groupinstall -y "Development Tools"
sudo yum install -y readline-devel libXt-devel zlib-devel bzip2-devel xz-devel pcre-devel libcurl-devel
# Install newest version of R from source
wget https://cran.r-project.org/src/base/R-3/R-3.4.0.tar.gz
tar -xzf R-3.4.0.tar.gz
cd R-3.4.0
./configure --prefix=/home/$user/R/R-3.4.0 --with-x=yes --enable-R-shlib=yes --with-cairo=yes
make
# NEWS.pdf file is missing and will make installation crash.
touch doc/NEWS.pdf
make install
# Do not forget to update your PATH
export PATH=~/R/R-3.4.0/bin:$PATH
export RSTUDIO_WHICH_R=~/R/R-3.4.0/bin/R
I ripped this from an Ubuntu R install how-to: http://jtremblay.github.io/software_installation/2017/06/21/Install-R-3.4.0-and-RStudio-on-Ubuntu-16.04
(answered by Garglesoap, Jan 4, 2018) | Amazon provides a clear installation guide for launching a micro instance and having R & RStudio installed. The guide can be found here: https://aws.amazon.com/blogs/big-data/running-r-on-aws/ Unfortunately this installs an older version of R (3.2.2), which causes issues for certain packages, like slam, as they require an R version > 3.3.1. In the guide, for the step to change the user data, they provide the below script which covers the installation of R & RStudio. How do I change the script to install the latest version of R?
#!/bin/bash
#install R
yum install -y R
#install RStudio-Server
wget https://download2.rstudio.org/rstudio-server-rhel-0.99.465-x86_64.rpm
yum install -y --nogpgcheck rstudio-server-rhel-0.99.465-x86_64.rpm
#install shiny and shiny-server
R -e "install.packages('shiny', repos='http://cran.rstudio.com/')"
wget https://download3.rstudio.org/centos5.9/x86_64/shiny-server-1.4.0.718-rh5-x86_64.rpm
yum install -y --nogpgcheck shiny-server-1.4.0.718-rh5-x86_64.rpm
#add user(s)
useradd username
echo username:password | chpasswdThanks | R & RStudio Installation on AWS EC2 Linux AMI - latest version of R |
The puzzle solved: the mappings were OK, but they are actually a "bridge" between the API Gateway and the lambda, so they delivered the information to the "target" lambda function and not to the authorizer, which is a sort of "interceptor" in this case.The way to get the user groups in the authorizer is to callCognitoIdentityServiceProvider.adminListGroupsForUser()which works fine for this purpose.ShareFollowansweredFeb 9, 2017 at 12:02RastoStricRastoStric30255 silver badges1313 bronze badges4Call that from where? In the Authorizer?–Aaron McMillinMar 3, 2017 at 22:10@AaronMcMillin Yes, I call that in the authorizer.–RastoStricMar 5, 2017 at 18:32How did you get Cognito username to calladminListGroupsForUser()withinauthorizer.js?–Skate to EatJul 16, 2017 at 4:44@Ohsik username if a part of the token payload.–RastoStricJul 17, 2017 at 6:39Add a comment| | I am using a Cognito user pool with user groups and I have an AWS API Gateway with a custom authorizer. The authorizer can generate a valid IAM policy and things go well so far. I would like to generate more specific IAM policies based on user groups but I cannot get the user groups information in the authorizer. My integration request mappings are:"groups" : "$context.authorizer.claims['cognito:groups']"but in the authorizer I get"type": "TOKEN",
"authorizationToken": "...",
"methodArn": "arn:aws:execute-api:eu-west-1:...:.../test/GET/bills"How can I get the user groups attribute in the authorizer? | AWS API Gateway - get Cognito user groups to custom authorizer |
Answering my own question here: problem was Cold HDD is not an option to attach as a root of a new instance, but "Magnetic" was. Changed volume type to "Magnetic" in step 5 and it fixed it.ShareFollowansweredFeb 6, 2017 at 18:26Alberto DecaAlberto Deca16511 silver badge1212 bronze badges2This is correct, becausebooting fromsc1orst1isn't supported.–Michael - sqlbotFeb 6, 2017 at 18:39incidentally, this helped me with launching a brand new P2 instance which didn't seem to have supported SSD disk for some reason... thanks :)–Zathrus WriterJul 13, 2017 at 17:27Add a comment| | I'm trying to downgrade my EC2 instance root device (SSD) to a Cold HDD. I performed the following:Stopped my instanceDetached the root volume from my instance via the console (was mounted on /dev/xvda1) (not by force).Created a snapshot of the detached root volume in the same availability area as the instance.Downgraded my instance from t2.xlarge to t2.microCreated a new Cold HDD volume from that snapshot in the same availability area.Attached the newly created Cold HDD volume as /dev/xvda to the instanceRebooted the instanceNow I'm getting the problem that the instance stays "pending" when I reboot, and after 30 seconds or so it goes back to "stopped". The reason given is:Server.InternalError: Internal error on launchWhen I reattach the old volume, it reboots fine, so the error is coming from the new volume. Can someone tell me if I'm doing something wrong?Thanks! | Server.InternalError: Internal error on launch when switching ec2 root volume |
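For reference, the CLI equivalent of the corrected steps 5 and 6; all IDs and the availability zone are placeholders:
# create a standard (Magnetic) volume from the snapshot, in the instance's AZ
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone eu-west-1a --volume-type standard

# attach it as the root device of the stopped instance, then start it
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0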
A COPY can be rolled back only within a transaction. If you committed a COPY, then it can't be rolled back. As to your second question, this is something that your application layer needs to manage. Examples:
- Preprocess your file to add an additional marker column, such as copy-id, to your data. Then, when you need to remove data loaded by a COPY, you delete all rows corresponding to that copy-id.
- If data is loaded once every day, you can create time-series tables, so rolling back a COPY that ran on a particular day involves truncating the corresponding table. You can also think of creating one table per week, depending on your use case.
(answered by ketan vijayvargiya, Feb 6, 2017) | Is it possible to ROLLBACK a COPY operation in Redshift? What could be the best approach to remove only those rows inserted as part of the COPY operation in a table that has data appended? | Redshift ROLLBACK for COPY
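A minimal SQL sketch of the transaction-based approach from the answer above; the table name, S3 path and IAM role are placeholders:
BEGIN;

COPY my_table
FROM 's3://my-bucket/data/part-'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS CSV;

-- validate the load (row counts, spot checks) before deciding
ROLLBACK;   -- undoes the whole COPY
-- COMMIT;  -- or make it permanent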
Made a helper that leverages @cornr answer:extension Error {
/// Returns a custom description for the error if available, otherwise `localizedDescription`
var customDescription: String {
let ns = self as NSError
// AWS Errors
if ns.domain == AWSCognitoIdentityProviderErrorDomain, let code = AWSCognitoIdentityProviderErrorType(rawValue: ns.code) {
switch code {
case .invalidParameter: return "Invalid user / password"
case .invalidPassword: return "Invalid password."
case .notAuthorized: return "Not authorized."
case .userNotConfirmed: return "User not confirmed."
case .passwordResetRequired: return "Password reset required."
default: return "AWS Cognito Error: \(code.rawValue)"
}
}
return localizedDescription
}
}Didn't fill them all in, just those that I was running into. Feel free to edit with more. Callingerror.customDescriptionon any error will return either an improved string or the localizedDescription.ShareFollowansweredNov 21, 2019 at 22:46BadPirateBadPirate26k1010 gold badges9696 silver badges126126 bronze badgesAdd a comment| | I am developing an iOS app in Swift using AWS Cognito to handle user login and registration. I've found that when users do something that Cognito doesn't allow (entering the wrong username/password on login, trying to create a password that doesn't match the requirements, etc.) the app will display error messages such asThe operation couldn't be completed. (Com.amazonaws.AWSCognitoIdentityProviderErrorDomain error 0.). I've noticed that different actions can result in different error codes, but I'd like to make the error messages more descriptive so that my users will actually know what they did wrong.Currently, I get the error message by checkingif task.error != nilfor the login/registration/etc. task, and if this check return true, I get the stringtask.error!.localizedDescriptionfor the error message. I realize I could grab the error code from this string by getting the substring corresponding to the 1 digit code, but this seems like a really terrible long-term solution. At the very least, I would like to get the error code as an integer, or preferably get a description of the error that will make sense to the average user. Is there some way to do this? | Getting descriptive login error messages in iOS app using AWS Cognito |
This isn't possible in the way you envision it.The cli is actually listingallthe objects and filtering them locally. The API (which is what the cli, SDKs, and console all use) doesn't support such a query.ShareFollowansweredJan 23, 2017 at 23:17Michael - sqlbotMichael - sqlbot174k2727 gold badges367367 silver badges440440 bronze badges1Thanks for the confirmation. That was my suspicion after running the cli with--debugand watching the GET request/response.–Mike SummersJan 24, 2017 at 14:46Add a comment| | Using the aws cli I can send a--queryto return only the objects since LastModified:aws s3api list-objects --profile <profile> --bucket <bucket> --query 'Contents[?LastModified>=`2017-01-19`][]'Works great, returns only objects>=the date.I'm trying to translate this to the Java SDK with something like this:ListObjectsV2Request req = new ListObjectsV2Request();
req.putCustomQueryParameter("LastModified>=`2017-01-19`", null);I've tried a large number of variations on the both the query and parameter strings without any luck- the query always returns all objects. So two questions:Should this work? That is is this something putCustomQueryParameter
should do?What's the correct syntax if the answer to #1 is 'Yes'?Thanks in advance. | S3 putCustomQueryParameter to return by LastModified? |
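Since the API offers no server-side filter, the usual workaround is to list everything (page by page) and filter locally, exactly as the CLI does. A sketch of that approach, shown with boto3 for brevity; the bucket name and cutoff date are placeholders:
import boto3
from datetime import datetime, timezone

s3 = boto3.client('s3')
cutoff = datetime(2017, 1, 19, tzinfo=timezone.utc)

paginator = s3.get_paginator('list_objects_v2')
recent_keys = []
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        if obj['LastModified'] >= cutoff:   # the filtering happens client-side
            recent_keys.append(obj['Key'])
print(recent_keys)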
According to the documentationhere, the pagination is handled for you.A collection provides an iterable interface to a group of resources.
Collections behave similarly to Django QuerySets and expose a similar
API.A collection seamlessly handles pagination for you, making it
possible to easily iterate over all items from all pages of data.
(answered by Mark B, Jan 23, 2017) | I'm using the boto3 client to access data stored in an Amazon S3 bucket. After reading the docs, I see that I can make a request with this code:
s3 = boto3.resource('s3')
bucket = s3.Bucket(TARGET_BUCKET)
for obj in bucket.objects.filter(
Bucket = TARGET_BUCKET,
Prefix = TARGET_KEYS + KEY_SEPARATOR
):
    print(obj)
I test against a bucket where I've stored 3000 objects and this fragment of code retrieves the references to all of the objects. I've read that all the API calls to S3 return at most 1000 entries. But reading the boto3 documentation, paginator section, I see that some S3 operations need to use pagination to retrieve all the results. I don't understand why the code above works unless it is using the paginator under the hood. So this is my question: can I safely assume that the code above will always retrieve all the results? | When to use S3 API pagination
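If you prefer the explicit form instead of relying on the resource collection, a paginator does the same thing; the constants below stand in for the ones defined in the question:
import boto3

TARGET_BUCKET = 'my-bucket'      # placeholders for the question's constants
TARGET_KEYS = 'some/prefix'
KEY_SEPARATOR = '/'

client = boto3.client('s3')
paginator = client.get_paginator('list_objects_v2')

count = 0
for page in paginator.paginate(Bucket=TARGET_BUCKET,
                               Prefix=TARGET_KEYS + KEY_SEPARATOR):
    count += len(page.get('Contents', []))
print(count)  # all objects, even though each page holds at most 1000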
Sorry about this, it is a little confusing. By 'background' it does not mean off of the main thread (though these transfers are off of the main thread); rather it means when your app is in the background.
Transfer Manager:
- Supports multi-part upload
- If the app is killed, Transfer Manager can resume an upload that was partially completed
Transfer Utility:
- Transfer Utility will allow the user to exit the app and continue to upload your files.
- Transfer Utility allows you to upload binary payloads without first saving them to a file.
- Transfer Utility is also newer, and feature updates are likely to go into that client.
(answered by WestonE, Jan 18, 2017) Comment: "If the app is killed Transfer Manager can (OR CAN'T?) resume an upload that was partially completed?" –Micah, Jan 18, 2018 | What is the difference in usage between AWSS3TransferManager and AWSS3TransferUtility in the Amazon S3 iOS SDK? Here is what the documentation says for AWSS3TransferManager:
High level utility for managing transfers to Amazon S3.
S3TransferManager provides a simple API for uploading and downloading
content to Amazon S3, and makes extensive use of Amazon S3 multipart
uploads to achieve enhanced throughput, performance and reliability.and forAWSS3TransferUtility:A high-level utility for managingbackgrounduploads and downloads.
The transfers continue even when the app is suspended. You must call +
application:handleEventsForBackgroundURLSession:completionHandler: in
the -
application:handleEventsForBackgroundURLSession:completionHandler:
application delegate in order for the background transfer callback to
work.From the description the major difference seem to be thatAWSS3TransferUtilityis designed forbackgroundtasks.Is this correct? Does this mean that I shouldn't useAWSS3TransferManagerfor background tasks? It seems counter intuitive as most of the transfers will be likely to happen as a separate background thread in a mobile client. | AWS / iOS SDK: when should I use AWSS3TransferManager and AWSS3TransferUtility? |
Here is what I implemented followingthis article.Create a .ebextensions folder at the root of your api/web project.In this project, any .config file will be used as config for your elastic beanstalk, they will be applied in alphabetical order. So create a file containing the following : (see in the link, the white space is important and I can't seem to get it right here...)commands:
setIdleTimeoutToZero:
cwd: "C:\windows\system32\inetsrv"
command: "appcmd set apppool /apppool.name:DefaultAppPool /.processModel.idleTimeout:0.00:00:00"Make sure the file is always copied to output.You should be good to go.ShareFollowansweredOct 30, 2018 at 9:09dyesdyesdyesdyes1,18733 gold badges2424 silver badges4040 bronze badgesAdd a comment| | I've setup an Elastic Beanstalk instance on the .NET (Windows/IIS) platform. I've deployed a .NET Core application there that does 2 things:Respond with Hello world! when I hit the end point - but I don't care about that.Sets up a listener for RabbitMQ (also hosted in AWS). This listener fires off an SMS every time I drop some a message in RabbitMQ.Item 2 works great - I drop off a message and less than a second later I get an SMS message on my phone.The problem is that AWS puts the application to sleep after a period of inactivity. And that causes the RabbitMQ listener to also go to sleep. This results in undelivered SMS messages. Until I wake up the instance by going to the URL assigned to my by Elastic Beanstalk.How do I make my Elastic Beanstalk instance not go to sleep? Is there something I can call from C# code to prevent it from doing so? | How do I tell my Elastic Beanstalk instance not to go to sleep? |
Anonymous users are not able to read a bucket content by default. So you should have only these lines in your policy:{
"Version": "2012-10-17",
"Id": "PutOnlyPolicy",
"Statement": [
{
"Sid": "Allow_PublicPut",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::myputbucket/*"
}
]
}ShareFollowansweredJan 13, 2017 at 16:21Iurii DrozdovIurii Drozdov1,71511 gold badge1212 silver badges2323 bronze badges3As said to another answer: this was my first attempt; I once again copy+pasted your policy. Curiously if I upload a file to the bucket, as said f.e. via the app S3anywhere, I can still publicly GET the file. That's why I decided to DENY the GET explicitly, which works, but in that case it works too good and it DENY-s even me as the bucketowner. :/–konrad_peJan 13, 2017 at 17:00@konrad_pe: Iurii's answer should work for you (as long as your bucket has no other allow-public-read rule defined somewhere else, like in the bucketPermissionspane in the management console).–Khalid T.Jan 13, 2017 at 17:17ThePermissionspane has 2 entries for this bucket: Grantee "ME" (all permissions) as well as "Any Authenticated AWS User" (LIST). Which doesn't explain why I am able to access the objects from public, right? As said, the objects themselves have no permissions set to them.–konrad_peJan 13, 2017 at 17:23Add a comment| | What I am trying to do is to let (anonymous) users share files to a specified bucket. However, they should not be possible to READ the files, which are already there (and for all I care not even the ones they submitted themselves). The only account which should be able to list/get objects from the bucket should be the bucket owner.Here is what I got so far:{
"Version": "2012-10-17",
"Id": "PutOnlyPolicy",
"Statement": [
{
"Sid": "Allow_PublicPut",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::myputbucket/*"
},
{
"Sid": "Deny_Read",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myputbucket/*"
},
{
"Sid": "Allow_BucketOwnerRead",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::myAWSAccountID:root"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myputbucket/*"
}
]
}The above Policy enables me to write files to the bucket (f.e. via the android app S3anywhere), but I can't GET the objects, not even with my authenticated account.Do you have any hints on how I could accomplish this? Thanks! | S3 policy - public write but only authenticated read |
All single-item updates to items using the UpdateItem API are atomic. Therefore, usinglist_append()in an UpdateExpression is also atomic.ShareFollowansweredMar 8, 2017 at 12:27Alexander PatrikalakisAlexander Patrikalakis5,10411 gold badge3232 silver badges4848 bronze badgesAdd a comment| | TheFAQsfor DynamoDB says:Q: Does DynamoDB support in-place atomic updates?Amazon DynamoDB supports fast in-place updates. You can increment or
decrement a numeric attribute in a row using a single API call.
Similarly, you can atomically add or remove to sets, lists, or maps.
View our documentation for more information on atomic updates.When you click the link for more documentation, it has no more info about adding to sets.Based on this I would think adding to a list/set using theADDkeyword would be atomic.But would adding to a list using thelist_appendfunction also be atomic? Is there any other documentation about this? | DynamoDB: Is adding an item using list_append atomic? |
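A small boto3 sketch of such an update; the table, key and attribute names are invented. The whole UpdateItem call, including the list_append, succeeds or fails as one atomic operation:
import boto3

table = boto3.resource('dynamodb').Table('my-table')   # hypothetical table

table.update_item(
    Key={'pk': 'user-123'},
    # append to the list, creating it on first use; one atomic UpdateItem
    UpdateExpression='SET events = list_append(if_not_exists(events, :empty), :new)',
    ExpressionAttributeValues={
        ':empty': [],
        ':new': ['logged_in'],
    },
)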
Try removing-Dhttp.port=$PORTfromProcfile.In the ui in "Software Configuration" try setting the variableJAVA_OPTSto-Dhttp.port=9000(replace 9000 with whatever port your nginx proxy is using -- 5000 is the default I believe).ShareFolloweditedJan 9, 2017 at 2:20answeredJan 9, 2017 at 1:32Dave MapleDave Maple8,26244 gold badges4646 silver badges6464 bronze badges10I'm not seeing a "JVM Command line options" in the beanstalk UI.–novonJan 9, 2017 at 1:45are you using the Java SE container type @novon?docs.aws.amazon.com/elasticbeanstalk/latest/dg/…–Dave MapleJan 9, 2017 at 1:45Yep thats what I selected.–novonJan 9, 2017 at 1:49Sorry -- this used to be an option -- it still appears in my older applications. I just spun up a new environment and it's no longer there. Looking to see what the new method is.–Dave MapleJan 9, 2017 at 1:53ah -- ok. you can set this with Software Configuration but try this key => val: JAVA_OPTS: "-Dhttp.port=9000" (replace 9000 with whatever value). LMK if this works and I'll update the answer to reflect the latest mechanism.–Dave MapleJan 9, 2017 at 2:01|Show5more comments | I'm attempting to configure my elastic beanstalk java application using environment variables via the Procfile like so:web: ./bin/myservice -Dhttp.port=$PORTThePORTenvironment variable is set via the AWS ui in the "Software Configuration" screen of my beanstalk environment.However, the logs are indicating that$PORTis not being interpolated:Exception in thread "main" java.lang.ExceptionInInitializerError
at com.whatever.services.myservice.Main.main(Main.scala)
Caused by: java.lang.NumberFormatException: For input string: "$PORT"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:569)What is the correct way to pass options to my application?Edit 1:This seems to be an issue related to the sbt native packager plugin which I use to create a distributable zip archive of the project. The bin/myservice file it generates is interpreting $PORT literally and passing that to the application. | Environment variables not available in Elastic Beanstalk Procfile |
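A sketch of the combination suggested above, assuming the sbt-native-packager start script picks up JAVA_OPTS from the environment (which it normally does). The Procfile stops expanding the variable itself:
web: ./bin/myservice
and the option is supplied as an environment property, either in the console's Software Configuration screen or in a hypothetical .ebextensions/java-opts.config such as:
option_settings:
  aws:elasticbeanstalk:application:environment:
    JAVA_OPTS: "-Dhttp.port=5000"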
Yes, you can useXvfbon aws lambda with a bit configuration. For a working example seehttps://github.com/nisaacson/aws-lambda-xvfbTo get this to work, you needXvfbandxkbcompcompiled with some special configuration values. Then you will need to bundle some shared libraries with your lambda function...Compile xorg-server with the following flags to get theXvfbbinary./configure
--with-xkb-path=/var/task/xkb \
--with-xkb-output=/tmp \
--with-xkb-bin-directory=/var/task/binCompilexkbcompwith the following flags./configure \
--prefix=/usr \
--with-xkb-config-root=/var/task/xkbCompilexkeyboard-configwith the following flags./configure \
--prefix=/usr \
--with-xkb-base=/var/task/xkbShareFollowansweredMar 7, 2017 at 0:27NoahNoah34.1k55 gold badges3838 silver badges3333 bronze badges3I tried your approach in a recent docker image, things work in general, but only if I yum install the mesa drivers (seestackoverflow.com/q/60083999/4556546). Did things work out for you without doing anything about the drivers?–ikkjoFeb 7, 2020 at 10:15@ikkjo - do you have any update here? seems like a lot of stuff in the dockerfile is outdated–fullStackChrisDec 10, 2022 at 0:221@fullStackChris, no I don't, sorry. I gave up on this a the time.–ikkjoDec 10, 2022 at 14:17Add a comment| | I would like to offload some code to AWS Lambda that grabs a part of a screenshot of a URL and stores that in S3. It uses chromium-browser which in turn needs to run inxvfbon Ubuntu. I believe I can just download the Linux 64-bit version of chromium-browser and zip that up with my app. I'm not sure if I can do that withxvfb. Currently I useapt-get install xvfb, but I don't think you can do this in AWS Lambda?Is there any way to use or install xvfb on AWS Lambda? | Can I use xvfb with AWS Lambda? |
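Once the binaries are bundled as described, starting the virtual display from the handler can be sketched like this in Python; the /var/task/bin path follows the layout assumed in the answer and /tmp is the only writable location on Lambda:
import os
import subprocess
import time

def handler(event, context):
    # start the bundled Xvfb on a spare display number
    xvfb = subprocess.Popen(
        ['/var/task/bin/Xvfb', ':99', '-screen', '0', '1280x720x24', '-nolisten', 'tcp'],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    os.environ['DISPLAY'] = ':99'
    time.sleep(1)  # crude wait for the display to come up

    try:
        # run the X11-dependent program here, e.g. the bundled chromium-browser
        pass
    finally:
        xvfb.terminate()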
I would suggest each notification type is a different SNS topic. That way the user can control each topic he is subscribed too. That puts more work for you in your app but this way you get to your designed goal of allowing each user to subscribe to each a different type of notification.Topics are free but SNS messages sent are charged.FREE TIER: Each month, Amazon SNS customers receive 1,000,000 Amazon SNS Requests, 100,000 HTTP notifications, 1,000 email notifications and 100 SMS notifications for free.Cost calculator:http://calculator.s3.amazonaws.com/index.htmlSNS Pricing Details:https://aws.amazon.com/sns/pricing/ShareFolloweditedDec 29, 2016 at 1:53answeredDec 29, 2016 at 1:43strongjzstrongjz4,35111 gold badge1818 silver badges2727 bronze badgesAdd a comment| | I'm working SNS Push notifications into an app that I'm building, and I'm wondering how to handle user notification settings? What I don't understand is if SNS provides a way to manage a user who wants to receive notification type "A", but not type "B". A more real-world correlation is managing a Facebook user who wants notifications for comments, but not likes. Does SNS provide an easy way to manage this?I can manage it myself through my own servers/databases, but this seems like something that SNS should be able to do. | AWS SNS Mobile Push based on user notification preferences |
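A rough boto3 sketch of that layout; the topic names and the platform endpoint ARN are placeholders:
import boto3

sns = boto3.client('sns')

# one topic per notification type
comments_topic = sns.create_topic(Name='app-comments')['TopicArn']
likes_topic = sns.create_topic(Name='app-likes')['TopicArn']

# a user who wants comment notifications but not likes is only
# subscribed to the comments topic
endpoint_arn = 'arn:aws:sns:us-east-1:123456789012:endpoint/APNS/my-app/abcd1234'
sns.subscribe(TopicArn=comments_topic, Protocol='application', Endpoint=endpoint_arn)

# publishing a comment event reaches only the users subscribed to that topic
sns.publish(TopicArn=comments_topic, Message='Someone commented on your post')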
This should meet your requirements:
Admin Dashboard server: Security Group A
- Inbound rule allowing traffic on whatever port(s) your dashboard is served on, probably port 80 and/or 443.
- Default outbound rules
Ingestion Feeds server: Security Group B
- No inbound rules (see note below)
- Default outbound rules
Database server: Security Group C
- Inbound rule to allow instances belonging to Security Group A access to the database port
- Inbound rule to allow instances belonging to Security Group B access to the database port
- Default outbound rules
Note: From the documentation:
Security groups are stateful — if you send a request from your
instance, the response traffic for that request is allowed to flow in
regardless of inbound security group rules. Responses to allowed
inbound traffic are allowed to flow out, regardless of outbound rules.This should allow your Ingestion Feeds service to create a connection with the External Service and receive responses on that connection without any Inbound Rules assigned to the Ingestion Feeds instance.ShareFollowansweredDec 27, 2016 at 18:23Mark BMark B191k2525 gold badges310310 silver badges307307 bronze badgesAdd a comment| | I have a single Elastic Beanstalk instance which functions as the Admin dashboard rendering HTML templates and data fed from the database. This and the database are within a specific VPC.Also within the VPC i have another single instance Elastic Beanstalk application which functions as a web socket client saving data from an external service into the database. Those are the ingestion feeds in the diagram below.The Ingestion feeds have HTTP Rest endpoints i can hit from the admin dashboard which start/stop the ingestion feeds.The problem i'm having is how to close off the Ingestion Feeds from outside of the VPC. I'd like it to only connect from the Admin Dashboard Elastic Beanstalk apps.But i also want them to be able to connect to the external service via web sockets. | Create AWS EC2 security group open only to internal VPC instances and a single external service |
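The same rules expressed with the AWS CLI, assuming the three groups already exist; the group IDs and the database port are placeholders:
# Security Group A: dashboard reachable from anywhere on 80/443
aws ec2 authorize-security-group-ingress --group-id sg-aaa111 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-aaa111 --protocol tcp --port 443 --cidr 0.0.0.0/0

# Security Group C: database port open only to members of groups A and B
aws ec2 authorize-security-group-ingress --group-id sg-ccc333 --protocol tcp --port 5432 --source-group sg-aaa111
aws ec2 authorize-security-group-ingress --group-id sg-ccc333 --protocol tcp --port 5432 --source-group sg-bbb222

# Security Group B: no ingress rules at all; the ingestion service only makes outbound connections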
This is the message the CloudWatch logs showed when I tried doing this:
{
"timestamp": "2019-01-28 21:26:16.363",
"logLevel": "ERROR",
"traceId": "9e3ff9b0-fcdf-d8ae-e8a8-4b7a24902405",
"accountId": "xxx",
"status": "Failure",
"eventType": "RuleExecution",
"clientId": "basicPubSub",
"topicName": "xxx/r117",
"ruleName": "devCompDynamoDB",
"ruleAction": "DynamoAction",
"resources":
{
"ItemRangeKeyValue": "SINGLE",
"IsPayloadJSON": "true",
"ItemHashKeyField": "SerialNumber",
"Operation": "Insert",
"ItemRangeKeyField": "ClickType",
"Table": "TestIoTDataTable",
"ItemHashKeyValue": "ABCDEFG12345"
},
"principalId": "xx",
"details": "Attribute name must not be null or empty"
}To Fix itI edited the DynamoDB Rule in the IoT Web Console and I added a payload column in the "Write message data to this column" field.ShareFolloweditedSep 19, 2022 at 18:18fatihyildizhan8,74477 gold badges6666 silver badges8989 bronze badgesansweredJan 28, 2019 at 21:58RobertoHimmelbauerRobertoHimmelbauer5177 bronze badgesAdd a comment| | I am trying to update DynamoDB and I send JSON data from Rasperry PI or MQTT Client, but when I look to CloudWatch I see below error message.EVENT:DynamoActionFailure TOPICNAME:iotbutton/test CLIENTID:MQTT_FX_Client MESSAGE:Dynamo Insert record failed. The error received was Attribute name must not be null or empty. Message arrived on: iotbutton/test, Action: dynamo, Table: myTable_IoT, HashKeyField: SerialNumber, HashKeyValue: ABCDEFG12345, RangeKeyField: Some(ClickType), RangeKeyValue: SINGLEI am using the AWS IoT Tutorial (http://docs.aws.amazon.com/iot/latest/developerguide/iot-dg.pdf), The Seccion: Creating a DynamoDB Rule.The data I send to the IoT platform is:{
"serialNumber" : "ABCDEFG12345",
"clickType" : "SINGLE",
"batteryVoltage" : "5v USB"
}topic:iotbutton/ABCDEFG12345Does anyone come across this error and aware of any solution?Thanks, regards. | AWS IoT - Dynamo Insert record failed |
Can you add DependsOn for the EC2 creation till EIP is created. Having a Ref to EIP doesnt guarantee that the instance will wait till EIP is created.ShareFollowansweredNov 30, 2016 at 6:10Nitin ABNitin AB50811 gold badge55 silver badges1212 bronze badges11Good thought. I made a few adjustments such that the elastic ip is created first, the server second, and then an IP association ("AWS::EC2::EIPAssociation") third (using DependsOn). This fixed the issue. Interestingly, looks like I could use the NetworkInterface / AssociatePublicIpAddress property in the CFN script to have this happen automatically. I've not tested that yet but probably will tomorrow. Thanks for the help!–SamNov 30, 2016 at 6:30Add a comment| | I have a simple cloudformation script that builds a Server ("AWS::EC2::Instance") and an Elastic IP ("AWS::EC2::EIP") which it attaches to that server.The subnet has an igw attached.I also have UserData defined within the Properties of the Server. The problem is that until the EIP attaches to the Server, there is no internet connectivity. Since this is an internet-facing subnet and I don't have a NAT box/gateway configured, is there a best practice for delaying UserData until the EIP attaches?There is a dependency issue here: Server is created, EIP is created and attach to server ("InstanceId":{"Ref":"Server"}), so I don't believe I can DependsOn with the EIP. | Cloudformation UserData with Elastic IP |
Create aCNAMErecord to point to theLoad Balancer:awseb-e-k-AWSEBLoa-xxx.eu-central-1.elb.amazonaws.comHowever if you are using Route 53, create anArecord and useAlias=Yesto point to your Elastic Beanstalk app. This type of Alias resolution incurs no charge in Route 53.Interestingly,AWS Elastic Beanstalk Adds Support for Amazon Route 53 Aliasingsuggests that either name is now acceptable.See:Your Elastic Beanstalk Environment's Domain NameShareFolloweditedDec 30, 2020 at 11:46DT21133 bronze badgesansweredNov 16, 2016 at 21:31John RotensteinJohn Rotenstein254k2626 gold badges408408 silver badges498498 bronze badges22Could someone please elaborate on why you would choose the load balancer over environment url?–super_nooblingFeb 23, 2018 at 10:41My assumption is that the LB URI can change automatically while the environment URI can only change manually. Hence I would choose the EB URI, not the LB URI.–ManuelFeb 24, 2019 at 2:00Add a comment| | I have my custom domain xxx.com.pl I would like to run a service on Elastic Beanstalk. How should I configure my domain. Should I use my Elastic Load Balancer DNS:awseb-e-k-AWSEBLoa-xxx.eu-central-1.elb.amazonaws.comor environment URL:xxx.asdasdasda.eu-central-1.elasticbeanstalk.comIf I select environment URL I can always create another environment and use swap URLs for recovery. I cannot do this easy way If I select ELB DNS. Probably usage of ELB DNS is faster. Am I right? What is the best practice? | Where should I point my custom domain to environment URL or LoadBalancer? |
Yes, you're looking for the methodgetVersion()in the classVersionInfoUtils.From the linked documentation:Signature:public static String getVersion()Description:Returns the current version for the AWS SDK in which this class is running. Version information is obtained from from the versionInfo.properties file which the AWS Java SDK build process generates.ShareFollowansweredNov 22, 2016 at 4:57Anthony NeaceAnthony Neace25.4k88 gold badges114114 silver badges130130 bronze badgesAdd a comment| | I have my Java code running in some managed environment that provides AWS SDK. I found out that some methods on classes are unavailable in that environment compared to documentation for recent version of AWS SDK.Is there a way to programmatically find out version of AWS SDK that is available? Is there any information class that provides such information? | How do I get Java AWS SDK version from code? |
The official documentation on this subject ishere. You can have placeholders for both names and values. For example if an item in your table has the following format:{
attribute1: value1,
attribute2: value2
}
attribute1 is an attribute name; value1 is an attribute value. If you want to look something up by a dynamic attribute name, or if you are using an attribute name that conflicts with a DynamoDB reserved word, then you use ExpressionAttributeNames. If you want to look something up by a dynamic attribute value, which is what you will be doing in most of your queries, you use ExpressionAttributeValues.
(answered by Mark B, Nov 7, 2016) | I'm working with DynamoDB and have consulted Amazon's documentation, which is great. But for this particular case I can't understand the difference. | Can someone explain the difference between Expression Attribute Names and Expression Attribute Values?
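A small illustration of both kinds of placeholder in one boto3 query; the table, key and attribute names are invented ('status' is used because it is a DynamoDB reserved word):
import boto3

table = boto3.resource('dynamodb').Table('my-table')   # hypothetical table

resp = table.query(
    # '#st' is a placeholder for the attribute NAME 'status' (a reserved word)
    # ':pk' and ':open' are placeholders for attribute VALUES
    KeyConditionExpression='pk = :pk',
    FilterExpression='#st = :open',
    ExpressionAttributeNames={'#st': 'status'},
    ExpressionAttributeValues={':pk': 'order-123', ':open': 'OPEN'},
)
print(resp['Items'])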
Finally figured it out by finding these docs: https://webpack.github.io/docs/configuration.html#externals
Had to have output: { libraryTarget: "commonjs" } set.
(answered by csilk, Oct 31, 2016) | In my environment (AWS Lambda) I have the aws-sdk set up, so in the webpack.config I've added:
externals: {
'aws-sdk': 'aws-sdk'
},
When building, it seems to set it as module.exports = aws-sdk; instead of module.exports = require('aws-sdk');. The weird thing is it was working fine before and just randomly stopped. Any ideas? | Webpack externals not requiring
First, try a curl request to your backend integration endpoint using the same parameters that you expect API Gateway to pass. Confirm that the request completes and takes less than 29 seconds, which is the API Gateway timeout.Next, try calling your API via the test facility in the API Gateway console and inspect the output to get more information. Confirm that API Gateway is calling the correct endpoint and is passing the header and body values that you expect. Also, observe any error messages from calling the integration endpoint.If that doesn't help, then enable CloudWatch logs on your deployed API, make a few test requests, and inspect the resulting logs.If you're still unable to figure it out, you can post the output from the previously mentioned steps along with a swagger export of your API. If any of these contain sensitive information, you can PM them to me instead.ShareFollowansweredOct 27, 2016 at 21:55MikeD at AWSMikeD at AWS3,6451717 silver badges1515 bronze badges1I figured out the problem: we were using nodeproxy to match the app to the port that AWS was listening at, but nodeproxy was not set up to restart when the server restarted, and someone restarted the server. So nodeproxy was not running. So AWS was listening and the NodeJS app was ready, but they could not hear each other.–LRK9Oct 31, 2016 at 18:54Add a comment| | HTTP/1.1 504 Gateway TimeoutI am getting the same error message as described here:https://forums.aws.amazon.com/thread.jspa?messageID=729094However, in that case, they were trying to use a custom port number, and that was the cause of the problem. We are not trying to use a custom port number.The error is:{"message": "Network error communicating with endpoint"}Yesterday I could run this query and it worked fine:curl "https://api.heddy.com/lkapi/[email protected]&pass=xxx" -X GET --header 'Accept: application/json' --header 'x-api-key: xxx'I would get back a JWT token and use it to make further queries.Yesterday I created a new "resource", added it to our API, and then deployed that to our testing stage, which is "lkapi", which you can see in the URL above.And since I've done that deploy, I can not get through the AWS API Gateway. I have no idea what I have done wrong.Any thoughts about what I should check?If I do the verbose version of "curl" I see the error is:504 Gateway TimeoutThis error typically happens if the upstream app is not running. But I know it is running. So what is left? Perhaps misconfigured port? | AWS API Gateway error, Network error communicating with endpoint, 504 Gateway Timeout |
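For the first step, a curl call that reports the status code and total time against the backend integration endpoint directly (the host and path below are placeholders for wherever the app actually listens); anything approaching API Gateway's 29-second limit will surface as a 504:
curl -sS -o /dev/null \
  -w 'HTTP %{http_code} in %{time_total}s\n' \
  -H 'Accept: application/json' \
  'http://backend.example.com:3000/login?email=me%40gmail.com&pass=xxx'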
No you can't. Lambda is stateless: you can't count on anything you read into memory on one invocation being available to the next invocation. You will need to store your config information somewhere and read it back in each time.
(answered by E.J. Brennan, Oct 17, 2016) | I have a Lambda function which listens to a DynamoDB stream and processes records for any update or insert in DynamoDB. Currently this Lambda code has a list of variables which I want to convert to a config, since this list can change. So I want my Lambda function to read this list from a config, but I don't want any network call, so I can't make a call to S3/DynamoDB every time. I want this config stored locally in memory. I want to initialize the Lambda and, in this initialization, read the config from a table, store it in some variable, and use it in every invocation. Can I do this? | Can I store config in memory and use it in AWS Lambda
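A sketch of the pattern this implies, reading the config at the start and caching it at module scope; the table name is a placeholder. The cache only survives while a container stays warm, so treat it as an optimization rather than a guarantee:
import boto3

_config = None  # persists between invocations only while the container is warm

def _load_config():
    table = boto3.resource('dynamodb').Table('app-config')   # hypothetical table
    return {item['key']: item['value'] for item in table.scan()['Items']}

def handler(event, context):
    global _config
    if _config is None:        # cold start, or cache cleared
        _config = _load_config()
    # ... process the stream records using _config ...
    return 'ok'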