When creating or updating your CodeBuild project, set the ProjectArtifacts type to S3 and packaging to NONE, as explained in https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectArtifacts.html#CodeBuild-Type-ProjectArtifacts-packaging. However, this only works when you use CodeBuild standalone. When CodeBuild is used in the context of CodePipeline, your pipeline defines the source and artifact details. Your best option in that case is to use an awscli copy to S3 during the build step of the pipeline.
So I have a CodeBuild process, the output of which I want to be a nested CloudFormation stack and a zipped Lambda deployable, both pushed to an S3 bucket. I can do the outputting process via pip install awscli and then aws s3 cp #{stuff} in buildspec.yml, but on reading the CodeBuild docs it feels like I should really be using OutputArtifacts for this bit. So .. I remove the above awscli stuff, add an OutputArtifacts block to the CodeBuild stage of my code pipeline, and add an artifacts block to buildspec.yml. Everything works fine, CodeBuild dumps the output artifacts to S3 .. but the problem is they are zipped. That's no good because I need another "master" CF stack to be able to reference the generated / output CF template as a nested stack via an S3 bucket/key reference. And when I look in the CodeBuild docs I can't find any reference to outputting unzipped artifacts. Any thoughts on how I might achieve this? Should I just stick with awscli?
Can AWS CodeBuild output unzipped artifacts?
You can download the python code to /tmp/ temporary storage in Lambda. Then you can import the file inside your code using the import statement. Make sure your import statement gets executed after you have downloaded the file to /tmp. You can also have a look here to see other methods to run a new script from within a script. EDIT: Here's how you can download to /tmp/:

    import boto3
    s3 = boto3.client('s3')
    s3.download_file('bucket_name', 'filename.py', '/tmp/filename.py')

Make sure your lambda role has permission to access S3.
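As a rough sketch of the import step (the bucket, key, and the run() entry point are placeholders, not from the original question), you could add /tmp to sys.path and load the downloaded module dynamically:

    import sys
    import importlib

    import boto3

    def lambda_handler(event, context):
        s3 = boto3.client('s3')
        # Hypothetical bucket/key names; replace with your own.
        s3.download_file('bucket_name', 'filename.py', '/tmp/filename.py')

        # Make /tmp importable, then load the freshly downloaded module.
        if '/tmp' not in sys.path:
            sys.path.insert(0, '/tmp')
        module = importlib.import_module('filename')

        # Call whatever entry point the downloaded script exposes
        # (assumed here to be a function named run()).
        return module.run(event)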
I have a python code placed in S3. That python code reads an excel file (the source file, also placed in S3) and does some transformations. I have created a Lambda function which will get triggered once there is a PUT event on the S3 bucket (whenever the source gets placed in the S3 folder). The requirement is to run that python code using the same Lambda function, or to have the python code configured within the same Lambda function. Thanks in advance.
Running a python code placed in S3 using Lambda function
I would suggest using AWS Data Migration Service. It can listen to changes on your source database and stream them to a target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html). There is also a third-party blog post explaining how to set this up: https://medium.com/tensult/cross-account-and-cross-region-rds-mysql-db-replication-part-1-55d307c7ae65. Pricing is per hour, depending on the size of the replication EC2 instance. It runs in the target account, so it will not be on your cost center.
I have an AWS account with a Postgres RDS database that represents the production environment for an app. We have another team that is building an analytics infrastructure in a different AWS account. They need to be able to pull data from our production database to hydrate their reports. From my research so far, it seems there are a couple of options:

- Create a bash script that runs on a CRON schedule that uses pg_dump and pg_restore, and stash that on an EC2 instance in one of the accounts.
- Automate the process of creating a Snapshot on a schedule and then ship that to the other account's S3 bucket. Then create a Lambda (or other script) that triggers when the snapshot is placed in the S3 bucket and restore it. The downside to this is we'd have to create a new RDS instance with each restore (since you can't restore a Snapshot to an existing instance), which changes the FQDN of the database (which we can mitigate using Route53 and a CNAME that gets updated, but this is complicated).
- Create a read-replica in the origin AWS account and open up security for that instance so they can just access it directly (but then my account is responsible for all the costs associated with hosting and accessing it).

None of these seem like good options. Is there some other way to accomplish this?
Create an RDS/Postgres Replica in another AWS account?
I was working on a similar thing lately; here is the code I was able to get working using aws-requests-auth, which has built-in support for boto3 (notice the host, region, and the quote method's safe parameter):

    import requests
    from urllib.parse import quote
    from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

    auth = BotoAWSRequestsAuth(
        aws_host='awis.us-west-1.amazonaws.com',
        aws_region='us-west-1',
        aws_service='awis'
    )

    url = 'https://awis.us-west-1.amazonaws.com/api'
    query_params = quote(
        'Action=UrlInfo&ResponseGroup=LinksInCount&Url=google.com',
        safe='/-_.~=&'
    )
    response = requests.get(url + '?' + query_params, auth=auth)
    print(response.content)

If you prefer to do it without any 3rd party library, you could always do:

    from boto3.session import Session

    aws_credentials = Session().get_credentials()
    print(aws_credentials.access_key)
    print(aws_credentials.secret_key)

Then go with the full signature process as described in the AWIS Documentation - Calculating Signatures.
I'm able to authenticate and connect to AWSQueryConnection using Boto3, but whenever I try to get information about a URL using the 'UrlInfo' method, I receive a 204 response with no data.

    import boto
    from boto.connection import AWSQueryConnection

    conn = AWSQueryConnection(
        aws_access_key_id='',
        aws_secret_access_key='',
        host='awis.amazonaws.com')

    response = conn.make_request('UrlInfo', params={
        'Url': 'http://reddit.com',
        'ResponseGroup': 'LinksInCount'
    })
    print(response.status)

Is there anything wrong with the way I'm using this module?
How to make AWS AWIS UrlInfo api request using Boto3 credentials
As far as I know there is a marketplace offering called Managed rules for AWS Web Application Firewall [1] which does exactly what you ask for. There are 3rd party sellers (more precisely: AWS partner companies) which offer rules for the OWASP Top 10. [2] The offering has existed since November 2017. [3] More information about the scope of existing rules is given in a newer blog post from 2018. [4] The corresponding implementation in the WAF service is called AWS Marketplace Rule Groups in the docs. [5]

References
[1] https://aws.amazon.com/marketplace/solutions/security/waf-managed-rules
[2] https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=owasp
[3] https://aws.amazon.com/about-aws/whats-new/2017/11/ready-to-use-managed-rules-now-available-on-aws-waf/?nc1=h_ls
[4] https://aws.amazon.com/about-aws/whats-new/2018/02/new-products-for-managed-rules-on-aws-waf/?nc1=h_ls
[5] https://docs.aws.amazon.com/waf/latest/developerguide/waf-managed-rule-groups.html
I am trying to configure a WAF with my API Gateway and I am surprised AWS is not offering templates of rules (such as the OWASP Top 10). For SQL injections, for example, everybody uses the same rules, am I wrong? Do you know a way to import the main security rules without having to configure them manually?
WAF Standard Rules: Do we really have to configure everything manually?
With the DynamoDB DocumentClient:

- to query for a number of items, you use query
- to get a single item, you use get

So, use query and use the KeyConditionExpression parameter to provide a specific value for the partition key. The query operation will return all of the items from the table (or index) with that partition key value. Here's an example:

    const AWS = require("aws-sdk");
    AWS.config.update({region: 'us-east-1'});

    const params = {
      TableName: tableName,
      KeyConditionExpression: '#userid = :userid',
      ExpressionAttributeNames: {
        '#userid': 'userid',
      },
      ExpressionAttributeValues: {
        ':userid': '39e1f6cb-22af-4f8c-adf5-xxxxxxxxxx',
      },
    };

    const dc = new AWS.DynamoDB.DocumentClient();

    dc.query(params, (err, data) => {
      if (err) {
        console.log('Error', err);
      } else {
        for (const item of data.Items) {
          console.log('item:', item);
        };
      }
    });
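For reference, a minimal equivalent in Python with boto3 (the table name is a placeholder; the partition key is assumed to be named userid as in the question) might look like this:

    import boto3
    from boto3.dynamodb.conditions import Key

    # Assumed table name; replace with your own.
    table = boto3.resource('dynamodb', region_name='us-east-1').Table('my-table')

    # Query returns every item that shares this partition key,
    # regardless of the sort key value.
    response = table.query(
        KeyConditionExpression=Key('userid').eq('39e1f6cb-22af-4f8c-adf5-xxxxxxxxxx')
    )
    for item in response['Items']:
        print(item)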
I have a simple table with userId, pictureURL and a few other fields. I want to return all the fields with a certain userId, but when I do

    dynamodb.get({
      TableName: tableName,
      Key: {
        'userid': '39e1f6cb-22af-4f8c-adf5-xxxxxxxxxx'
      }
    }, ...

I get "The provided key element does not match the schema", since it seems like it also requires the sort key. When I do

    dynamodb.get({
      TableName: tableName,
      Key: {
        'userid': '39e1f6cb-22af-4f8c-adf5-xxxxxxxxxx',
        'pictureurl': '' // Or null
      }
    }, ...

I get an error "One or more parameter values were invalid: An AttributeValue may not contain an empty string". So how do I query for any value in the sort key?
How to query DynamoDB with any value in the sort key?
Apparently many placeholders don't work in the simple email verification. You can use a Custom Message Lambda trigger to customize the message dynamically. Their documentation has a code example for Node.js; you can use that to write your own code. Here's the documentation: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-message.html
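If you'd rather write the trigger in Python, a rough sketch of the same idea is below. The field names follow the Custom Message trigger event shape; the subject and message text are just examples, not anything Cognito requires:

    def lambda_handler(event, context):
        # The code placeholder must appear somewhere in the message.
        code = event['request']['codeParameter']
        username = event['userName']

        if event['triggerSource'] == 'CustomMessage_SignUp':
            event['response']['emailSubject'] = 'Verify your account, ' + username
            event['response']['emailMessage'] = (
                'Hi ' + username + ', your verification code is ' + code
            )
        elif event['triggerSource'] == 'CustomMessage_ForgotPassword':
            event['response']['emailSubject'] = 'Password reset for ' + username
            event['response']['emailMessage'] = (
                'Hi ' + username + ', use code ' + code + ' to reset your password'
            )

        # Cognito expects the (possibly modified) event to be returned.
        return event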
I have set up Cognito, but the email verification page is not getting my username properly (it prints placeholders). [Screenshots of my Cognito settings and of the verification email I received were attached here.] Do I have to use custom lambda triggers for this? (How do I achieve that?) Update: use event.userName to get the username in the lambda trigger:

    if (event.triggerSource === "CustomMessage_ForgotPassword") {
      // Ensure that your message contains event.request.codeParameter.
      // This is the placeholder for the code that will be sent.
      event.response.smsMessage = "You requested to reset your password " + event.request.codeParameter;
      event.response.emailSubject = "You requested to reset your password: " + event.userName;
      event.response.emailMessage = 'Hi, Username:' + event.userName + ' , ' + event.request.codeParameter + ' is your verification code ';
    }
Cognito sign in verification email not getting username
I ran the jar file on the command line and discovered that a classpath resource wasn't found, which is referenced by the following code:

    @PropertySource("classpath:aws.properties")

I added the needed file in src/main/resources in the project, which holds the following key/value pairs:

    aws_access_key_id=...
    aws_secret_access_key=...

That fixed that problem, leading to another error I won't discuss here.
I built this Java Project on GitHub by creating a Maven Project in Eclipse IDE for Enterprise Java Developers Version: 2019-03 (4.11.0), Build id: 20190314-1200, copying the pom.xml file and src folder into my project, and updating the project with Maven. When I run the project as a Java Application in Tomcat$1 - org.apache.catalina.startup, I get this in the Console View:

    Error: A JNI error has occurred, please check your installation and try again

I checked a lot of the other stackoverflow posts similar to this with this error message, and none of them worked. There are some that are within a different context. Any help is appreciated.
Error: A JNI error has occurred, please check your installation and try again in Maven Project in Eclipse EE
This is not possible. "Custom domain names are not supported for private APIs." https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html If you really wanted this functionality, it could be accomplished with a proxy server (e.g. HAProxy or Nginx) running inside the VPC that accepts requests for the custom domain and forwards requests to the API Gateway private endpoint using the correct Host header and the correct TLS SNI... but this increases complexity and creates an additional dependency in your stack that seems unjustifiable just for the purpose of having a non-ugly domain name for an API that is only consumable internally.
We are setting up an API Gateway to be accessible only inside a VPC or via VPC endpoints. In AWS API Gateway you can create a custom domain with an Edge or Regional configuration. Is there any way to map a DNS name from Route53 to the API Gateway's "ugly" DNS name for the Private type, or to the VPC Endpoint DNS name with the header parameter set automatically (it's also possible to send a request to the VPC Endpoint, but only by specifying header: <APIGW DNS>)?
How do I define a custom domain name for my Amazon API Gateway API with Private endpoint type
"When the DynamoDB table is in on-demand mode, AWS Glue handles the read capacity of the table as 40000. For exporting a large table, we recommend switching your DynamoDB table to on-demand mode." https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html

The below is no longer true:

As per the AWS documentation, on-demand is currently not supported by AWS Data Pipeline, the DynamoDB import/export tool, and AWS Glue. So you need to carefully choose which tables you want to move to the new on-demand capacity.
We have an AWS Glue job that is pulling from a DynamoDB table which is set to on-demand capacity. However, once we changed the table to on-demand, the Glue job is taking forever to complete. Presumably the Glue job is trying to use a portion of the available read capacity... but this doesn't make sense with the new capacity model. We are hoping to move all of our tables to the new on-demand capacity setting, but this would be a blocker for us. Any ideas?
AWS Glue - DynamoDB with On-Demand Capacity Super Slow
If you use Redis as a queue, try SQS instead. SQS can trigger Lambda. ElastiCache doesn't create events or log entries that correspond to broadcast events.
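If you do switch to SQS, a minimal handler for an SQS-triggered Lambda could look like the sketch below; the JSON message shape is an assumption, mirroring whatever was previously broadcast over Redis:

    import json

    def lambda_handler(event, context):
        # An SQS trigger delivers a batch of messages in event['Records'].
        for record in event['Records']:
            # Assuming the producer sends JSON payloads.
            payload = json.loads(record['body'])
            print('processing message', record['messageId'], payload)
        # Returning normally tells Lambda the whole batch succeeded,
        # so the messages are deleted from the queue.
        return {'processed': len(event['Records'])}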
I have a small function I want to move from a dedicated EC2 instance to Lambda. This function is currently triggered by a broadcast from a Redis instance in ElastiCache. How can I make a Redis broadcast trigger a Lambda function? Someone from AWS said that this could be done through CloudWatch, but the only ElastiCache event I can find is "AWS API Call via CloudTrail".
How can I make a Redis event on ElastiCache trigger a Lambda function?
This is not possible with ECR today. You can either enable immutability for tags (which would include "latest" being immutable) or you must allow all tags to be mutable. There are no other options. However, there is a request on the ECR roadmap for this. The only way you might be able to get what you want today is to enforce this scheme after-the-fact when pushes are made to ECR, by responding to ECR events via EventBridge. For example, you might subscribe a Lambda function to ECR push events. That Lambda, in principle, could keep track of image tags and undo a tag push for any existing tag other than latest, and perhaps remove the offending pushed image (if it would become untagged as the result of removing the tag). Pseudo code for such a Lambda might be:

    def on_event(event, context):
        tag = event['detail']['image-tag']
        repository = event['detail']['repository-name']
        digest = event['detail']['image-digest']

        existing_tags = get_existing_tags(repository)

        # check if a tag has been overwritten by this push event
        if tag != 'latest' and tag in existing_tags:
            # revert the change using our existing records
            previous_image_digest_for_tag = existing_tags[tag].digest
            tag_image(previous_image_digest_for_tag, tag)
            remove_if_untagged(repository, digest)  # optional
        else:
            # the tag is new or 'latest'
            # just record this for future enforcement
            update_existing_tags(repository, tag, digest)

        return
I want to restrict retagging except for the latest tag in AWS ECR. It is very hard if some developers push an image with the same tag for debugging. So I would like to allow only the "latest" tag to be retagged, but not allow different docker image versions to be pushed to the same tag name. How can I do it? Thanks - prakash
AWS ECR how to restrict retagging except for latest tag
It appears that your situation is:

- An Amazon VPC with an Amazon Aurora database
- An AWS Lambda function that wants to communicate with the Aurora database AND an Amazon SQS queue

An AWS Lambda function can be configured as:

- Connected to a subnet in a VPC, or
- Not connected to a VPC, which means it is connected to the Internet

If you wish to have an AWS Lambda function communicate with resources inside a VPC AND the Internet, then you will need:

- The Lambda function connected to a private subnet
- A NAT Gateway in a public subnet
- An Internet Gateway connected to the public subnet (it is most probably already in your VPC)

Alternatively, you can use a VPC Endpoint for SQS, which allows the Lambda function to access SQS without going to the Internet. If you are wanting to connect to multiple services (eg S3, SNS, SQS), it is probably easier just to use a NAT Gateway rather than creating VPC Endpoints for each service.
I have hosted a Lambda function using AWS Chalice inside a VPC since I want it to access a Serverless Aurora DB instance. Now I also want this function to send_message() to an SQS queue. I followed "Tutorial: Sending a Message to an Amazon SQS Queue from Amazon Virtual Private Cloud" and was able to call the SQS from inside my EC2. But even then I could not use my Lambda function to call the SQS. It would be very helpful if someone could actually tell me how to do the whole thing manually rather than using the CloudFormation stack, or at least tell me how to get the SQS Endpoint working.
AWS - Send Message to an SQS from a Lambda function inside a VPC
There are many alternatives available; I would recommend you use a Service Bus queue.

    from azure.servicebus import QueueClient, Message

Create the QueueClient:

    queue_client = QueueClient.from_connection_string("<CONNECTION STRING>", "<QUEUE NAME>")

Send a test message to the queue:

    msg = Message(b'Test Message')
    queue_client.send(msg)

Here is the documentation on how to get started with Python.
I am using AWS SQS and, as part of a migration to Azure, I am testing Azure Queue storage (https://learn.microsoft.com/en-us/azure/storage/queues/storage-python-how-to-use-queue-storage) and also RabbitMQ (https://www.rabbitmq.com/tutorials/tutorial-one-python.html). The app is in Python, so which is the most similar?
What is the most similar queue service in Azure to AWS SQS?
This can be done with CloudFront -> Lambda@Edge (Origin request) -> S3. Since this question was asked, AWS added the Accept-Encoding header to be passed to S3, so the Lambda function can use it. The Lambda will take the accept-encoding header, check if brotli is in it, and if so, it will add the needed extension to the request that goes to the S3 bucket. The clients can still go to the same URL but will get different results based on that accept-encoding header. Also, make sure that your CloudFront Cache Policy is based on the accept-encoding header. Example code for the Lambda:

    'use strict';

    /**
     * Function registered on 'Origin Request' CloudFront Event
     */
    exports.handler = (event, context, callback) => {
        const request = event.Records[0].cf.request;
        const headers = request.headers;

        var isBr = false;
        if (headers['accept-encoding'] && headers['accept-encoding'][0].value.indexOf('br') > -1) {
            isBr = true;
        }

        const gzipPath = '.gz';
        const brPath = '.br';

        /**
         * Update request path based on custom header
         */
        request.uri = request.uri + (isBr ? brPath : gzipPath);

        callback(null, request);
    };
S3 + CloudFront is not serving .gz / .br static files when the client request header contains Accept-Encoding: gzip, deflate, br.

- Compressed the files at build time; the S3 folder contains index.html, index.html.gz and index.html.br
- Added Accept-Encoding to the whitelisted headers in CloudFront
- Added Content-Length in the S3 CORS configuration
- Added Content-Encoding for index.html.gz as gzip and index.html.br as br, with Content-Type as text/html
- Disabled automatic compression in CloudFront

But I am not getting compressed files from S3 + CloudFront. I am able to access index.html.gz directly, but CloudFront + S3 is not able to serve the file automatically. Am I missing something? Or is it not possible to serve like this?
Serving gzip and br files from cloudfront and s3
It looks like the answer is not at this time, but they are working on it. From the same reference you posted, it appears they are working on it but have not implemented support for bucket names with dots:

"It is important to note that bucket names with “.” characters are perfectly valid for website hosting and other use cases. However, there are some known issues with TLS and with SSL certificates. We are hard at work on a plan to support virtual-host requests to these buckets, and will share the details well ahead of September 30, 2020."

References
S3 Path Deprecation Plan
What I understand is: "Support for the path-style model continues for buckets created on or before September 30, 2020. Buckets created after that date must be referenced using the virtual-hosted model." (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/). Also, there is a known problem with the virtual-hosted model when using bucket names containing periods (.) and working in SSL mode (https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html). (The workaround given is to use HTTP, which is obviously insecure, or to change the certificate verification logic.) Now, what if I want my NEW bucket's name to contain a period (a very specific requirement for my project's hosted URL), still be secure (use SSL), and also not tamper with the certificate verification logic in the client? Is there any alternate way? Will AWS S3 still allow bucket names to contain periods (.) post path-style deprecation?
Will periods (.) still be allowed in bucket names post path-style deprecation?
There is one extra resources attribute under your container definition, after ports:

    resources: {}

This overrides the original resource definition. Remove this one and apply it again.
Here is my deployment & service file for Django. The 3 pods generated from deployment.yaml work, but the resource requests and limits are being ignored. I have seen a lot of tutorials about applying resource specifications on Pods but not on Deployment files; is there a way around it? Here is my yaml file:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        app: djangoapi
        type: web
      name: djangoapi
      namespace: "default"
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: djangoapi
            type: web
        spec:
          containers:
          - name: djangoapi
            image: wbivan/app:v0.8.1a
            imagePullPolicy: Always
            args:
            - gunicorn
            - api.wsgi
            - --bind
            - 0.0.0.0:8000
            resources:
              requests:
                memory: "64Mi"
                cpu: "250m"
              limits:
                memory: "128Mi"
                cpu: "500m"
            envFrom:
            - configMapRef:
                name: djangoapi-config
            ports:
            - containerPort: 8000
            resources: {}
          imagePullSecrets:
          - name: regcred
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: djangoapi-svc
      namespace: "default"
      labels:
        app: djangoapi
    spec:
      ports:
      - port: 8000
        protocol: TCP
        targetPort: 8000
      selector:
        app: djangoapi
        type: web
      type: NodePort
Kubernetes deployment resource limit
I fixed this issue by switching from a custom "DHCP option set" to the default "DHCP option set" provided by AWS. I created the custom "DHCP option set" months ago and assigned it to the VPC where the EKS cluster is running...

How did I get to the bottom of this? After running "kubectl get events -n kube-system", I realised the following:

    Warning DNSConfigForming 17s (x15 over 14m) kubelet, ip-10-4-9-155.us-west-1.compute.internal Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.4.8.2 8.8.8.8 8.8.4.4

8.8.8.8 and 8.8.4.4 were injected by the troublesome "DHCP option set" that I created. And I think that the reason why my services were resolving internal DNS names intermittently was that the CoreDNS service was internally forwarding DNS requests to 10.4.8.2, 8.8.4.4, 8.8.8.8 in a round robin fashion. Since the last 2 DNS servers don't know about my Route53 internal hosted zone DNS records, the resolution failed intermittently. Note: 10.4.8.2 is the default AWS nameserver. As soon as I switched to the default "DHCP option set" provided by AWS, the EKS services could resolve my internal DNS names consistently. I hope this will help someone in the future.
I've got a two node Kubernetes EKS cluster which is running "v1.12.6-eks-d69f1".

    Amazon VPC CNI Plugin for Kubernetes version: amazon-k8s-cni:v1.4.1
    CoreDNS version: v1.1.3
    KubeProxy: v1.12.6

There are two CoreDNS pods running on the cluster. The problem I have is that my pods are resolving internal DNS names intermittently. (Resolution of external DNS names works just fine.)

    root@examplecontainer:/# curl http://elasticsearch-dev.internaldomain.local:9200/
    curl: (6) Could not resolve host: elasticsearch-dev.internaldomain.local

elasticsearch-dev.internaldomain.local is registered in an AWS Route53 internal hosted zone. The above works intermittently; if I fire five requests, two of them resolve correctly and the rest fail. These are the contents of the /etc/resolv.conf file on the examplecontainer above:

    root@examplecontainer:/# cat /etc/resolv.conf
    nameserver 172.20.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
    options ndots:5

Any ideas why this might be happening?
Kubernetes CoreDNS resolving names intermittently
My fellow engineer Jun solved this! What I was missing was the following permission:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:eu-west-1:<aws_account_number>:dbuser:<rds_db_resource_id>/<postgres_user>"
                ]
            }
        ]
    }

Once this permission was added to the IAM user in conjunction with AmazonRDSFullAccess or AmazonRDSDataFullAccess, a valid token was generated that could be used as a password to log in to the RDS database.
In AWS I have an RDS Postgres database. The database has users that can connect via IAM (they have the IAM_USER role). The connection is made programmatically via Python and Boto3, e.g.:

RDS client setup:

    self.rds = boto3.client(
        'rds',
        region_name = rds_region,
        aws_access_key_id = aws_access_key_id,
        aws_secret_access_key = aws_secret_access_key,
    )

Generating the token:

    return self.rds.generate_db_auth_token(
        self.db_hostname,
        self.port,
        self.db_username,
        Region=self.region
    )

A valid token is generated that enables a Postgres login when I use the AWS keys of an IAM user that pretty much has access to everything in AWS. However, when I use the AWS keys of a user that only has AmazonRDSFullAccess and AmazonRDSDataFullAccess, a token is generated which looks similar to a valid one but is not accepted as a password when I attempt to log in to Postgres, e.g.:

    OperationalError: (psycopg2.OperationalError) FATAL: PAM authentication failed for user "test_iam_user"

This user can access all my RDS resources correctly as far as I can tell; it just doesn't generate a valid token. I'd be grateful if anyone could tell me what the missing permission(s) is/are and/or what I'm doing wrong. It would also be good to know the best way to troubleshoot permission problems like these in AWS to find the specific permission that's preventing access. Cheers!!!
RDS Postgres DB IAM generate_db_auth_token not working
This is something customers have asked about in the past, and we will take this as a +1 when prioritizing our future releases.
I've been using CloudFormation to keep my AppSync API in source control so it's repeatable and checked into the repo. The problem is that if I make a change directly in the AppSync console (because it's faster and more convenient to experiment there before attempting to update the CloudFormation stack), I then need to remember to, and know how to, add my changes back to the CloudFormation template. This is particularly cumbersome when dealing with resolvers, since AppSync doesn't have a tab that shows all of your resolvers - you just need to look at each field/query/mutation in the schema and see the resolvers for them one by one. My question: is there any way to extract all of an AppSync API's configuration, hopefully in CloudFormation form? For example, I want a file that describes my schema, each function, resolver, and data source. That way it's easy to ensure that each component is added back to my CloudFormation template.
Export AppSync state/config?
Amazon Kinesis Video Streams does not store videos in S3 "out-of-the-box". The intention is to provide a service that allows videos to be processed in some manner. You can write a consumer app that will store the video into Amazon S3, but frankly there are easier ways to store the data in S3 (such as storing it directly in S3 rather than sending it via Kinesis). (The architecture diagram shown here is from: Amazon Kinesis Video Streams: How It Works - Amazon Kinesis Video Streams.)
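If you do want to write such a consumer, a very rough sketch with boto3 might look like the following; the stream and bucket names are placeholders, and a real consumer would parse and segment the MKV fragments rather than dump one raw chunk:

    import boto3

    STREAM_NAME = 'my-video-stream'   # hypothetical stream name
    BUCKET = 'my-video-archive'       # hypothetical bucket name

    kvs = boto3.client('kinesisvideo')

    # GetMedia must be called against the stream's dedicated data endpoint.
    endpoint = kvs.get_data_endpoint(
        StreamName=STREAM_NAME, APIName='GET_MEDIA'
    )['DataEndpoint']

    media = boto3.client('kinesis-video-media', endpoint_url=endpoint)
    result = media.get_media(
        StreamName=STREAM_NAME,
        StartSelector={'StartSelectorType': 'NOW'}
    )

    # Read a chunk of the MKV payload and archive it to S3.
    chunk = result['Payload'].read(8 * 1024 * 1024)
    boto3.client('s3').put_object(Bucket=BUCKET, Key='capture.mkv', Body=chunk)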
I'm sending a live video stream to the Amazon console via Kinesis Video Streams, and now I want to store it in Amazon S3. How do I store it in S3? Please explain in detail. Thanks.
How to store kinesis video stream into S3 bucket?
There is no strict architecture for developing what you want; it depends on your need for isolation and maintenance. You can do it either way with Lambda. If your code is small enough for all methods, you can use an ANY integration with API Gateway, which will put all the methods under the control of a single Lambda. If you want to separate the code into its own Lambdas, you can create independent Lambdas and deploy them separately. If you have dependent libraries shared across all of your methods, you can share them with Lambda Layers. Both of the above approaches are discussed here. Hope it helps.
In traditional application development using Spring Boot / Node.js, we have a controller/router in which we create different methods to handle the appropriate HTTP requests:

    Reservation Controller / Router
      GET  getReservation(id)
      POST createReservation()
      PUT  updateReservation()
      GET  getAllReservation()

The controller/router calls the service classes to get the job done. Assume that you have multiple controller/service classes like this. Now my question is: if I need to create a similar application using AWS Lambda, I have to create multiple Lambda functions separately, which do not seem to be organized under a controller. (I understand that API Gateway is the controller here - please correct me if it is not.) How do you organize Lambda functions / what best practice do you follow for your serverless architecture?
Organizing lambda functions
You mentioned compute environment and compute resources. Did you add this S3 policy to the Job Role as mentioned here?

"After you have created a role and attached a policy to that role, you can run tasks that assume the role. You have several options to do this:

- Specify an IAM role for your tasks in the task definition. You can create a new task definition or a new revision of an existing task definition and specify the role you created previously. If you use the console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see Creating a Task Definition.
- Specify an IAM task role override when running a task. You can specify an IAM task role override when running a task. If you use the console to run your task, choose Advanced Options and then choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter in the overrides JSON object. For more information, see Running Tasks."
I am attempting to launch a Docker container stored in ECR as an AWS Batch job. The entrypoint python script of this container attempts to connect to S3 and download a file. I have attached a role with AmazonS3FullAccess to both the AWSBatchServiceRole in the compute environment and to the compute resources. This is the error that is being logged:

    botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3.amazonaws.com/"

There is a chance that these instances are being launched in a custom VPC, not the default VPC. I'm not sure this makes a difference, but maybe that is part of the problem. I do not have appropriate access to check. I have tested this Docker image on an EC2 instance launched in the same VPC and everything works as expected.
boto3 can't connect to S3 from Docker container running in AWS batch
You will need to recreate your Lambda functions in the new account. Go to the Lambda function, click on Actions, and export your function. This downloads a deployment package (your code and libraries) and/or an AWS Serverless Application Model (SAM) file that defines your function, its event sources, and permissions. You, or others who you share this file with, can use AWS CloudFormation to deploy and manage a similar serverless application. Learn more about how to deploy a serverless application with AWS CloudFormation.
Cloning for different environments: Staging/QA/PROD/DEV etc. Is there a quick and easy way to clone my Lambdas, give them a different name, and adjust configurations from there?
Can you clone an AWS lambda?
This is summed up in the AWS documentation here. Note that AWS recommends CloudTrail.
Given that both services are enabled (a single S3 bucket with Server Access Logging enabled and CloudTrail with object-level logging enabled for that bucket):

1. What events will initiate logging from both services?
2. In such a case, what data will one service contain that the other will not?
3. What events will result in a log created by only one of the services?

I am having a hard time understanding the logical difference between those two, as both support object-level logging.
S3 Server access logging vs CloudTrail logs
If you do not currently have any connection between the office network and the VPC, then this will need to be established across the Internet. It requires a Customer Gateway, which is the device on your corporate network that is accessible from the Internet and will terminate that end of the VPN connection. If the Raspberry Pi is your VPN endpoint, then it will need to be reachable from the Internet. Alternatively, a different network device will need to be accessible, which can then forward traffic to the Raspberry Pi. See: What is AWS Site-to-Site VPN? If the Raspberry Pi is behind the firewall and therefore not accessible from the Internet, then in theory it cannot be used for the connection. However, I have seen cases where a VPN termination endpoint makes an outbound request to the Internet and, in doing so, allows "return" traffic to come back in via a stateful firewall if the traffic appears to be a "response" to the outbound request. I've seen that operate between two AWS VPCs; it might be possible to achieve a similar result with your own firewall.
We have some services running in an AWS VPC. These services are accessible within the VPC only. For development purposes, we need access to these services from our office location. So I am trying to set up a WiFi access point on a Raspberry Pi and planning to connect the Raspberry Pi to the VPC via AWS Site-to-Site VPN. But the Raspberry Pi is connected by Ethernet. The AWS VPN (Customer Gateway) needs the private IP of the appliance; in this case I will be using a Raspberry Pi which will not have a public IP, just a local private IP (on Ethernet). Is there a way to make this workable?
AWS VPC access from Raspberry Pi
According to the AWS documentation: "If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent. If you want to ensure that an event notification is sent for every successful write, you can enable versioning on your bucket. With versioning, every successful write will create a new version of your object and will also send an event notification." https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html Also, why not trigger Lambda directly from S3?
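If you go the versioning route, enabling it on the bucket is a one-off call; a small boto3 sketch (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client('s3')

    # Enable versioning so every successful write produces a new object
    # version and, with it, its own s3:ObjectCreated event.
    s3.put_bucket_versioning(
        Bucket='my-bucket',
        VersioningConfiguration={'Status': 'Enabled'}
    )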
I have an AWS Lambda function that is supposed to be triggered by messages from Simple Queue Service (SQS). This SQS queue is supposed to get a notification when a new json file is written into my S3 bucket, or when an existing json file in the S3 bucket is overwritten. The event type for both cases is s3:ObjectCreated, and I see notifications for both cases in my SQS queue. Now, the problem is that pretty frequently there is a new file in S3 (or an updated existing file in S3), but there is no corresponding message in SQS! So many files are missing and Lambda is not aware that they should be processed. In Lambda I print the whole content of the received SQS payload into the log file, and then try to find those missed files with something like

    aws --profile aaa logs filter-log-events --log-group-name /aws/lambda/name --start-time 1554357600000 --end-time 1554396561982 --filter-pattern "missing_file_name_pattern"

but can't find anything, which means that the s3:ObjectCreated event was not generated for the missing file. Are there some conditions that prevent s3:ObjectCreated events for new/updated S3 files? Is there a way to fix it? Or a workaround of some kind, maybe?
Missing s3 events in AWS SQS
You can't as of right now (2019-07-02). But you can vote for them to implement it here: https://github.com/terraform-providers/terraform-provider-aws/issues/7533
I am new to Terraform and have begun creating .tf files for my infrastructure, which so far involves AWS S3 and IAM Roles. All good so far. But now I need to create an AWS MediaConvert JobTemplate via Terraform and I can't find any reference for this in Terraform's AWS provider documentation. I don't know what to do at this point. Can I even use Terraform to create MediaConvert resources, or do I need to use another tool/means?
Terraform - how to create an AWS MediaConvert JobTemplate?
Make use of the AWS Instance Metadata endpoint in your userdata script to get each instance's IP address and put it into a config file. Here is a PowerShell example of a userdata script:

    <powershell>
    $HostIp = (Invoke-RestMethod -URI 'http://169.254.169.254/latest/meta-data/local-ipv4' -UseBasicParsing)
    Add-Content "C:\installer\config.txt" "HostIp:$HostIp"
    </powershell>

You can also get the instance's public-ipv4 in this manner if that's desired instead.
I am trying to spin up 2 ec2 instances using terraform, something like this:

    resource "aws_instance" "example" {
      count                       = "${var.number_of_instances}"
      ami                         = "${var.ami_name}"
      associate_public_ip_address = "${var.associate_public_ip_address}"
      instance_type               = "${var.instance_type}"
      key_name                    = "${var.keyname}"
      subnet_id                   = "${element(var.subnet_ids, count.index)}"
      user_data                   = "${element(data.template_file.example.*.rendered, count.index)}"
      vpc_security_group_ids      = ["${aws_security_group.example.id}","${var.extra_security_group_id}"]

      root_block_device {
        volume_size = "${var.root_volume_size}"
        volume_type = "${var.root_volume_type}"
        iops        = "${var.root_volume_iops}"
      }

      tags {
        Name = "${var.prefix}${var.name}${format("%02d", count.index + 1)}"
      }
    }

In template_file all I am trying to do is generate a config file with the IP Address of both instances using user_data, but this fails with a Cycle Error. Is there any way to get the file generated with the IP Address while the ec2 instances are coming up?
generate user_data including "IP Address" while creating ec2 instance using terraform
Yes, it has a maximum of 604800 seconds (7 days) to expire. But if you make the object's ACL public-read, you will get a non-expiring URL:

    obj = s3.bucket(ENV['S3_BUCKET_NAME']).object(object_name)
    obj.write(:file => image_location)
    obj.acl = :public_read
    obj.public_url.to_s

Or you can use a CloudWatch scheduler rule that runs on or before every 604800 seconds and renews the expiration. Write a lambda with this:

    url = s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={
            'Bucket': 'bucket-name',
            'Key': 'key-name'
        },
        ExpiresIn=604800
    )

and run it via a CloudWatch scheduler rule on or before every 7 days (10080 minutes):

    aws events put-rule --schedule-expression "rate(10079 minute)" --name bef7day
(This question already has answers at "AWS S3 pre signed URL without Expiry date" (5 answers); closed 4 years ago.) I want to generate an S3 object URL with no expiration. My S3 bucket and objects are private, as I don't want to make them public. I tried generating a pre-signed URL using a lambda function, but it is only valid for up to 7 days. Help if anybody knows. Thanks in advance.
how to generate permanent s3 object/file url (without expiration)? [duplicate]
It has not been migrated yet to my knowledge. If you'd like to use this feature, then you can still use v1 of the SDK in conjunction with v2 (I do this myself, as SelectObjectContent seems to be the easiest method to count the number of lines in an S3 file). Here is a GitHub issue on the feature request (created on November 20, 2018): https://github.com/aws/aws-sdk-java-v2/issues/859
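As an aside, the underlying S3 Select API is the same whichever SDK you call it from; purely for illustration, counting records with boto3 (the bucket and key are placeholders, and the object is assumed to be plain CSV/text) looks roughly like:

    import boto3

    s3 = boto3.client('s3')

    resp = s3.select_object_content(
        Bucket='my-bucket',
        Key='data.csv',
        ExpressionType='SQL',
        # Count the records instead of pulling the whole object down.
        Expression='SELECT COUNT(*) FROM S3Object',
        InputSerialization={'CSV': {}},
        OutputSerialization={'CSV': {}},
    )

    # The response payload is an event stream; the result rows arrive
    # in 'Records' events.
    for event in resp['Payload']:
        if 'Records' in event:
            print(event['Records']['Payload'].decode('utf-8'))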
I don't see this class SelectObjectContentRequest in AWS SDK 2.x. Is it replaced by a new class or is it not migrated?
Is SelectObjectContentRequest replaced in AWS SDK 2.x?
The com.amazonaws.AmazonClientException class is in the aws-java-sdk-core-*.jar library. By the way, be careful, as this will only be your first problem: normally, the code looks for AmazonClientException whenever there is an error.
I am trying to read a file from S3 in spark-shell, but am getting the below error:

    Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonClientException

I have copied aws-java-sdk-1.11.106.jar and hadoop-aws-2.8.0.jar into the jars folder. Could you please let me know how to resolve this? Thanks.
java.lang.ClassNotFoundException: com.amazonaws.AmazonClientException
The short answer is: yes, it will raise an error of class botocore.exceptions.ClientError when the API call fails for any reason. The boto3 S3 API call put_object returns a dict object as the response if it is successful; otherwise nothing is returned:

    >>> import boto3
    >>>
    >>> s3_client = boto3.client('s3')
    >>>
    >>> s3_response = s3_client.put_object(Bucket='mybucket', Key='/tmp/new-pb.yml')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 312, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 601, in _make_api_call
        raise error_class(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
    >>>
    >>> print s3_response
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 's3_response' is not defined
    >>>

Try wrapping the API call in a try-except block that captures ClientError when the method fails, so the error can be handled safely and your script does not behave unexpectedly when edge cases are not covered. Hope this is helpful for you.
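A minimal sketch of that try-except pattern (the bucket, key and body are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')

    try:
        response = s3.put_object(
            Bucket='my-bucket',
            Key='path/to/object.txt',
            Body=b'hello world',
        )
        # On success, the response dict includes metadata such as the ETag.
        print('uploaded, ETag:', response.get('ETag'))
    except ClientError as err:
        # The error code (e.g. AccessDenied, NoSuchBucket) is available here.
        print('upload failed:', err.response['Error']['Code'])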
The boto3 documentation for client.put_object() does not say anything about error handling. My question is: when that function is unsuccessful, does it always raise an exception, or does it sometimes return a False or None value?
What happens when boto3 client.put_object() is unsuccessful?
From the CloudWatch Logs quotas page: "Results from a query are retrievable for 7 days. This availability time can't be changed." https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
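If you need the results for longer than 7 days, one option is to persist them yourself when you fetch them; a rough boto3 sketch (the log group, query string and bucket name are placeholders):

    import json
    import time
    import boto3

    logs = boto3.client('logs')
    s3 = boto3.client('s3')

    query_id = logs.start_query(
        logGroupName='/aws/lambda/my-function',
        startTime=int(time.time()) - 3600,
        endTime=int(time.time()),
        queryString='fields @timestamp, @message | limit 20',
    )['queryId']

    # Poll until the query finishes, then store the results durably in S3.
    while True:
        result = logs.get_query_results(queryId=query_id)
        if result['status'] in ('Complete', 'Failed', 'Cancelled'):
            break
        time.sleep(1)

    s3.put_object(
        Bucket='my-query-archive',
        Key=f'insights/{query_id}.json',
        Body=json.dumps(result['results']).encode('utf-8'),
    )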
When you use the AWS API to run a query against CloudWatch Logs, you get back a queryId:

    { "queryId": "string" }

You can then call GetQueryResults using that query ID and retrieve the results of the query:

    {
       "results": [
          [
             {
                "field": "string",
                "value": "string"
             }
          ]
       ],
       "statistics": {
          "bytesScanned": number,
          "recordsMatched": number,
          "recordsScanned": number
       },
       "status": "string"
    }

My question is: how long are these query results retained? Can I run a query and come back a month later to get the results? A year later? I can't seem to find any documentation from Amazon that explains the retention policy. In the absence of an official source, I'll accept answers based on anecdotal experience using this API.
How long are Cloudwatch Insights Query results retained?
You can query it if it's part of the log you are sending to CloudWatch Logs. So if there is a JSON field "sourceIPAddress" in the log, you can use your filter:

    { $.sourceIPAddress != 123.123.* }

You can check the content of the log in the log group / log stream.
I am currently using the Python library watchtower to stream JSON log files from a device to CloudWatch. I now want to use AWS Kinesis Data Firehose to move the logs to Redshift. I am following this tutorial: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample I am now setting up a subscription filter to move the logs. I would like to filter by the IP address the logs are streamed from. This article discusses implementing filters. Here is what I found:

    { $.sourceIPAddress != 123.123.* }

The only problem is, I don't know if CloudWatch even stores the source IP address. Is there some way to query CloudWatch to get the source IP address?
How to see IP Address behind Log Streams to CloudWatch
Typically when working with files the approach is to use S3 as the storage, and there are a few reasons for it, but one of the most important is the fact that Lambda has an event size limit of 6mb, so you can't easily POST a huge file directly to it. If your zipped excel file is always going to be less than that, then you are safe in that regard. If not, then you should look into a different flow, maybe something using AWS Step Functions with Lambda and S3. Concerning your issue with unzipping the file, I have personally used and can recommend adm-zip, which would look something like this:

    // unzip and extract file entries
    var zip = new AdmZip(rawZipData);
    var zipEntries = zip.getEntries();
    console.log("Zip contents : " + zipEntries.toString());
    zipEntries.forEach(function(entry){
        var fileContent = entry.getData().toString("utf8");
    });
I'm using serverless-http to make an express endpoint on AWS Lambda - pretty simple in general. The flow is basically:

1. POST a zip file via a multipart form to my endpoint
2. Unzip the file (which contains a bunch of excel files)
3. Merge the files into a single Excel file
4. res.sendFile(file) the file back to the user

I'm not stuck on this flow 100%, but that's the gist of what I'm trying to do. Lambda functions SHOULD give me access to /tmp for storage, so I've tried messing around with Multer to store files there and then read the contents. I've also tried the decompress-zip library, and it seems like the files never "work". I've even tried just uploading an image and immediately sending it back. It sends back a file called incoming.[extension], but it's always corrupt. Am I missing something? Is there a better way to do this?
How should I post a file to AWS Lambda function, process it, and return a file to the client?
From put-metric-alarm — AWS CLI Command Reference: "When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm." Therefore, it seems that you will need to specify all the parameters, rather than just the parameter you wish to modify.
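One way to avoid retyping every parameter is to read the existing alarm and feed it back with a single field changed; a rough boto3 sketch (the alarm name and the modified property are examples, and the list of copied keys is deliberately conservative - metric-math alarms and extended statistics would need extra keys):

    import boto3

    cw = boto3.client('cloudwatch')

    alarm = cw.describe_alarms(AlarmNames=['my-alarm'])['MetricAlarms'][0]

    # Only these keys are valid inputs to put_metric_alarm; the describe
    # output also contains read-only fields (state, ARN, timestamps).
    allowed = [
        'AlarmName', 'AlarmDescription', 'ActionsEnabled', 'OKActions',
        'AlarmActions', 'InsufficientDataActions', 'MetricName', 'Namespace',
        'Statistic', 'Dimensions', 'Period', 'EvaluationPeriods', 'Threshold',
        'ComparisonOperator', 'TreatMissingData', 'Unit',
    ]
    params = {k: v for k, v in alarm.items() if k in allowed}

    # Change just the property you care about, then overwrite the alarm.
    params['Threshold'] = 90.0
    cw.put_metric_alarm(**params)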
How can I modify an existing AWS alarm without figuring out all the parameters? Say if I just want to change a single property. I tried getting its properties with aws cloudwatch describe-alarms --alarm-names my-alarm, then modifying the json, and passing it with:

    aws cloudwatch put-metric-alarm --alarm-name my-alarm --cli-input-json '<minified json>'

But I'm getting errors:

    Parameter validation failed:
    Missing required parameter in input: "MetricName"
    Missing required parameter in input: "Namespace"
    Missing required parameter in input: "Period"
    ...

I saw "Modify Existing alarms AWS", but it uses the AWS SDK with C#; I'm looking for a CLI solution.
Modify existing AWS alarm with AWS-CLI
If you are purely serving content from Amazon S3, then you are correct that users will not be impacted by the bucket location once the data is cached. The first time that somebody requests a particular object from a particular edge location, it will be retrieved from S3 and stored in the edge cache (plus the regional edge cache). So, there will be a minor difference in speed for the initial fetch. Once it is cached, there will be no difference. Please note that Amazon CloudFront is populated on request ("pull"), rather than all content being loaded in every edge location ("push").
I plan to create an S3 bucket and link it to a CloudFront distribution. CloudFront will cache the content across all regions. Does the S3 bucket region really matter in this case? If I choose Sydney as the bucket region and most of the users are from Asia, does it give a bad performance for users? (CloudFront will make sure the content is cached in Asia.)
Does the S3 bucket region really matter if CloudFront is enabled?
Yes, you are right. Subnet size definitely does matter; you have to be careful with your CIDR blocks. As for that one last invocation (the 252nd), it depends on the way your Lambda is invoked: synchronously (e.g. API Gateway) or asynchronously (e.g. SQS). If it is called synchronously, it'll just be throttled and your API will respond with a 429 HTTP status, which stands for "too many requests". If it is asynchronous, it'll be throttled and will be retried within a six hour window. A more detailed description can be found on this page. Also, I recently published a post on my blog which is related to your question. You may find it useful.
I understand that AWS Lambda is a serverless concept wherein a piece of code can be triggered on some event. I want to understand how Lambda handles scaling. For example, if my Lambda function sits inside a VPC subnet (as it wants to access VPC resources), and that subnet has a CIDR of 192.168.1.0/24, which would result in 251 available IPs after subtracting the 5 AWS reserved IPs:

- Would that mean if my AWS Lambda function gets 252 invocations at the exact same time, only 251 of the requests would be served and 1 would either time out or get executed once one of the 252 functions completes execution?
- Does the subnet size matter for AWS Lambda scaling?
- I am following this reference doc which mentions concurrent execution limits per region. Can I assume that irrespective of whether an AWS Lambda function is in "No VPC" or inside a VPC subnet, it will scale as per the limits mentioned in the doc?
How does an AWS Lambda function scale inside a VPC subnet?
For Spark-type Glue jobs you can enable continuous logging, but for a Python shell-type Glue job continuous logging is not available, therefore you will not be able to create a custom log group for it. There is a workaround, though; refer to the following for more info: How to use a CloudWatch custom log group with Python Shell Glue job?
I have an AWS Glue Python application; by default, logs are available in the CloudWatch log groups /aws-glue/jobs/output and /aws-glue/jobs/error for stdout and stderr respectively. I have explored the AWS documentation and several websites for a way to redirect logs to a custom CloudWatch log group like /dev/<app_name>/, but no luck. If anyone has an idea about this, please share the process.
Custom Cloudwatch Log Group for Glue job
Modules are the main way to solve this in Terraform. If you move your existing code into a single folder, you can then define variables that allow you to customise that module, such as the command to be passed to your ECS service. So in your case you might have something like this:

modules/foo-service/main.tf

    data "template_file" "web" {
      template = "${file("${path.module}/tasks/web.json")}"

      vars {
        # ...
        command = "${var.command}"
      }
    }

    resource "aws_ecs_task_definition" "web" {
      container_definitions    = "${data.template_file.web.rendered}"
      requires_compatibilities = ["FARGATE"]
      # ...
    }

    data "aws_ecs_task_definition" "web" {
      task_definition = "${aws_ecs_task_definition.web.family}"
    }

    resource "aws_ecs_service" "web" {
      name            = "web"
      task_definition = "${aws_ecs_task_definition.web.family}:${max("${aws_ecs_task_definition.web.revision}", "${data.aws_ecs_task_definition.web.revision}")}"
      # ...
    }

modules/foo-service/variables.tf

    variable "command" {}

staging/main.tf

    module "foo_service_web" {
      source  = "../modules/foo-service"
      command = "bundle exec server"
    }

    module "foo_service_sidekiq" {
      source  = "../modules/foo-service"
      command = "bundle exec sidekiq"
    }
We create ECS services in Terraform by defining a template_file which populates a task definition JSON template with all needed variables. Then an aws_ecs_task_definition is created with the rendered template_file. With this task definition the aws_ecs_service is created:

    data "template_file" "web" {
      template = "${file("${path.module}/tasks/web.json")}"

      vars {
        ...
      }
    }

    resource "aws_ecs_task_definition" "web" {
      container_definitions    = "${data.template_file.web.rendered}"
      requires_compatibilities = ["FARGATE"]
      ...
    }

    data "aws_ecs_task_definition" "web" {
      task_definition = "${aws_ecs_task_definition.web.family}"
    }

    resource "aws_ecs_service" "web" {
      name            = "web"
      task_definition = "${aws_ecs_task_definition.web.family}:${max("${aws_ecs_task_definition.web.revision}", "${data.aws_ecs_task_definition.web.revision}")}"
      ...
    }

There are additional services with task definitions nearly identical to the first one, with only small differences like another command (for example, starting sidekiq instead of the web app). Is there any other way of doing this besides duplicating everything (the JSON template, the template_file with all defined variables, aws_ecs_task_definition and aws_ecs_service)?
How to avoid duplication in terraform when having multiple services in ECS which differ only in the command?
You may achieve this with a Lambda function that is invoked via DynamoDB Streams. That way you don't have to periodically pull data; AWS services will push data to your devices. Basically, when there is an update in DynamoDB, Lambda will process it and return the results. As per your use case, I can see two possible paths to deliver the data processed by Lambda (see the sketch after these links):

- Once the data is processed, Lambda can publish to an SNS topic which your device is subscribed to.
- Your device can listen on a WebSocket connection of API Gateway which proxies to the Lambda function.

https://docs.aws.amazon.com/lambda/latest/dg/with-ddb-example.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/
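A rough sketch of the first path - a stream-triggered Lambda that publishes location updates to SNS - is below. The topic ARN and attribute names are assumptions, and the stream view type is assumed to include new images:

    import json
    import boto3

    sns = boto3.client('sns')

    # Hypothetical topic your Android clients are subscribed to.
    TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:device-locations'

    def lambda_handler(event, context):
        for record in event['Records']:
            # Only react to new or updated items.
            if record['eventName'] not in ('INSERT', 'MODIFY'):
                continue

            new_image = record['dynamodb']['NewImage']
            # Assumed attribute names: deviceId (S), lat (N), lng (N).
            message = {
                'deviceId': new_image['deviceId']['S'],
                'lat': new_image['lat']['N'],
                'lng': new_image['lng']['N'],
            }
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(message))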
I am working on an Android application where I am storing the lat-long values of multiple devices in a DynamoDB table and trying to fetch them and show their locations on a map in real time. So I want to know how I can continuously listen for data changes in the DynamoDB table. Currently I am using a "Timer" which repeats every 1 second and fetches the latest lat-long values, but that is not a good solution. Please help me out with this.
How can I continuously listen for Dynamo DB changes in android application?
Why would you be hesitant to include more AWS products in the mix? Generally speaking, if you aren't combining multiple AWS products to build your solutions then you aren't making very good use of AWS. For the specific task at hand I would look into AWS WorkDocs, which is integrated very well with AWS Workspaces. If that doesn't suit your needs I would suggest placing the data files on Amazon S3.
I am managing a bunch of users using Amazon Workspaces; they have terabytes of data which they want to start playing around with on their workspace. I am wondering what is the best way to do the data upload process? Can everything just be downloaded from Google Drive or Dropbox? Or should I use something like AWS Snowball, which is specifically for migration? While something like AWS Snowball is probably the safest, best bet, I'm kind of hesitant to add another AWS product to the mix, which is why I might just have everything be uploaded and then downloaded from Google Drive / Dropbox. Then again, I am building an AWS environment that will be used long term, and long term using Google Drive / Dropbox won't be a solution. Thoughts to architect this out (short term and long term)?
Transporting data to Amazon Workspaces
According to the AWS documentation: "For instances that are enabled for enhanced networking, traffic between instances within the same Region that is addressed using IPv4 or IPv6 addresses can use up to 5 Gbps for single-flow traffic and up to 25 Gbps for multi-flow traffic. A flow represents a single, point-to-point network connection." That means a single flow is a single connection, which can use network bandwidth of up to 5 Gbps, while multi-flow means multiple parallel connections, which can take up to 25 Gbps of bandwidth in total.
As per the specifications of placement groups in AWS, an EC2 instance can utilize 10 Gbps in single-flow and 20 Gbps in multi-flow traffic. What do single-flow and multi-flow traffic signify here?
Single flow and Multi flow in AWS placement group
I had to figure it out. I needed to do these two things to get it working:

1. Enable CORS on the Amazon API Gateway for your API. This will create an OPTIONS http method handler, and you can allow posts from your website by setting the right value for the access-control-allow-origin header.

2. Make sure your POST method handling is sending the right parameters when sending the response:

    import json
    from botocore.vendored import requests

    API_URL = "https://aladdin.mammoth.io/api/v1/user-registrations"

    def lambda_handler(event, context):
        if event['httpMethod'] == 'POST':
            data = json.loads(event['body'])

            # YOUR CODE HERE

            return {
                'statusCode': 200,
                'body': json.dumps({}),
                'headers': {
                    'access-control-allow-headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
                    'access-control-allow-methods': 'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT',
                    'access-control-allow-origin': '*'
                }
            }

        return {
            'statusCode': 200,
            'body': json.dumps({})
        }
I was trying to make my react app running on my localhost to talk to the AWS. I have enabled CORS and the OPTIONS on the API.Chrome gives this error nowCross-Origin Read Blocking (CORB) blocked cross-origin response https://xxxxxx.execute-api.us-east-2.amazonaws.com/default/xxxxxx with MIME type application/json. See https://www.chromestatus.com/feature/5629709824032768 for more details.I inspected the network tab and the options call is going through and the OPTIONS is sending this in the response headeraccess-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token access-control-allow-methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT access-control-allow-origin: *How can I fix this CORB issue and get my first lambda function done?
How to solve Cross-Origin Read Blocking (CORB) issue with an AWS lambda gateway?
I had the same problem. It turned out that I had a VPC endpoint with private DNS in that VPC, see https://aws.amazon.com/ru/premiumsupport/knowledge-center/api-gateway-vpc-connections/: "When you select the Enable Private DNS Name option while creating an interface VPC endpoint for API Gateway, you can access your private APIs using a private or public DNS, but you can't access your public APIs."
I created REST API using AWS API Gateway with following detailsEndpoint Type: Edge OptimizedIntegration Type: MockThe API is open withAuth: NoneApi Key Required: FalseRequest Validator: NoneResource-policy: Not definedI successfully deployed the APIAPI is accessible from the outside world/ public networkAPI is not accessible from the EC2 instance launched in same account(Not tried to access it from other AWS account). API returns with 403 { "message": "Forbidden" }What I am missing here which makes the APIinaccessible from EC2?
Webservice exposed using AWS API Gateway is not accessible from the EC2 instance. Returns 403 { "message": "Forbidden" }
As far as I can tell, while using AWS SDK V1 (1.11.840), if you have environment variables such as HTTP(S)_PROXY or http(s)_proxy set at runtime, or properties like http(s).proxyHost, proxyPort, proxyUser, and proxyPassword passed to your application, you don't have to set any of that. It gets automatically read into the newly created ClientConfiguration. As such, you'd only want to set the ProxyAuthenticationMethod, if needed.

ClientConfiguration clientConfig(ProxyAuthenticationMethod authMethod) {
    ClientConfiguration conf = new ClientConfiguration();
    List<ProxyAuthenticationMethod> proxyAuthentication = new ArrayList<>(1);
    proxyAuthentication.add(authMethod);
    conf.setProxyAuthenticationMethods(proxyAuthentication);
    return conf;
}

ProxyAuthenticationMethod can be ProxyAuthenticationMethod.BASIC or DIGEST or KERBEROS or NTLM or SPNEGO.
I want to test my AWS code locally so I have to set a proxy to a AWS client.There is a proxy host (http://user@pass:my-corporate-proxy.com:8080) set in my environment via a variableHTTPS_PROXY.I didn't find a way how to set the proxy as whole so I came up with this code:AmazonSNS sns = AmazonSNSClientBuilder.standard() .withClientConfiguration(clientConfig(System.getenv("HTTPS_PROXY"))) .withRegion(Regions.fromName(System.getenv("AWS_REGION"))) .withCredentials(new DefaultAWSCredentialsProviderChain()) .build(); ClientConfiguration clientConfig(String proxy) { ClientConfiguration configuration = new ClientConfiguration(); if (proxy != null && !proxy.isEmpty()) { Matcher matcher = Pattern.compile("(\\w{3,5})://((\\w+):(\\w+)@)?(.+):(\\d{1,5})").matcher(proxy); if (!matcher.matches()) { throw new IllegalArgumentException("Proxy not valid: " + proxy); } configuration.setProxyHost(matcher.group(5)); configuration.setProxyPort(Integer.parseInt(matcher.group(6))); configuration.setProxyUsername(matcher.group(3)); configuration.setProxyPassword(matcher.group(4)); } return configuration; }The whole methodclientConfigis only boilerplate code.Is there any elegant way how to achieve this?
AWS Java SDK behind a corporate proxy
The problem here is that the CNAME operates on the DNS level, not on the HTTP level. The CNAME will cause the request to be forwarded to the IP address for www.movez.co.s3-website-us-east-1.amazonaws.com, but the HTTP request will still say it's looking for movez.co. The HTTP request doesn't contain www.movez.co.s3-website-us-east-1.amazonaws.com nor www.movez.co anywhere in the request, so Amazon has no way to know that the request should be served from the bucket for www.movez.co. I suggest setting up a Page Rule in Cloudflare which redirects the client's browser from movez.co to www.movez.co. If you don't want to use a browser redirect, then either you'll need to configure Amazon to understand movez.co (maybe by creating a whole separate bucket), or perhaps you could use a Cloudflare Worker to rewrite the HTTP requests (but you'll need to pay extra to Cloudflare for that).
So I am currently using Cloudflare for my DNS under the domainwww.movez.cobut for some reason when someone types inhttp://movez.codirectly into their web browser it spits back this:404 Not Found Code: NoSuchBucket Message: The specified bucket does not exist BucketName: movez.co RequestId: 64038C65xxx HostId: xxxOf course our bucked is namedwww.movez.coand our root record is pointed to the correct bucket (www.movez.co.s3-website-us-east-1.amazonaws.com) and our www CNAME record is pointed to the root as an alias. The bucket is publicly accessible but for some reason in specific iPhone users are getting put to the 404 page.Can anyone help me figure out why this is occurring?I've tried purging cache and there's no redirect redirect records with our registrar (GoDaddy)..
Navigating to website yields Code: NoSuchBucket when using cloudflare
I think you're confusing request parameters and request headers. addRequestParameter adds request parameters to the signed URL, while S3's PUT operation expects request headers to add metadata (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html). You need to use putCustomRequestHeader instead. Note that in addition to signing with these headers, the client that actually sends the request will have to pass the same values as actual headers. In other words, both the URL generator code and the client code need to know the headers being sent.
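The answer above concerns the Java SDK; as a point of comparison only, here is a rough boto3 sketch of the same idea (sign the metadata into the URL, then have the uploader send matching x-amz-meta-* headers). The bucket, key, and metadata values are placeholders:

import boto3
import requests  # stands in for whatever client performs the upload

s3 = boto3.client('s3')

# The metadata is included in the signature of the presigned PUT URL
url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'my-bucket', 'Key': 'my-key', 'Metadata': {'owner': 'xyz'}},
    ExpiresIn=3600,
)

# The uploader must send the same values as x-amz-meta-* headers,
# otherwise the signature will not match or the metadata will not be stored
requests.put(url, data=b'file contents', headers={'x-amz-meta-owner': 'xyz'})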
I am trying to generate a presigned url to put files into s3 with some additional metadata with itGeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucket, objectKey) .withMethod(httpMethod) .withExpiration(expiration); if (params != null) { params.forEach( (k, v) -> generatePresignedUrlRequest.addRequestParameter( Headers.S3_USER_METADATA_PREFIX + k.toLowerCase(), v)); }where params is aMap<String, String>but after uploading file when I try to get the object usingAmazonS3.getObjectMetadata(bucketName, key).getUserMetadata()returns an empty map.Also triedgeneratePresignedUrlRequest.putCustomRequestHeader(key, value)But I see that in the generated url string the header values are not being sent.Note : I am sendingAmazons3.generatePresignedUrl(generatePresignedUrlRequest).toString();to UIAny help will be appreciated. Additional Note : I am trying to do this in my local mockS3 server which is not HTTPS
AWS S3 Presigned URL to PUT object with additional user params not working
It's hard to say what the best solution is without more information on your use case, but AWS Step Functions are designed to handle running multiple Lambdas with data passed between them in a robust way (retries, parallelization, etc.). This blog post provides a good overview, though the code in the example is JavaScript rather than Python.
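As a rough Python sketch of how the chaining works with Step Functions: each function stays a separate handler, and the state machine passes the first function's return value in as the second function's event. This mirrors the EMR example from the question; the region and names are only illustrative:

import boto3

emr = boto3.client('emr', region_name='us-east-1')

def list_clusters_handler(event, context):
    # First state: its return value becomes the input of the next state
    clusters = emr.list_clusters(
        ClusterStates=['STARTING', 'RUNNING', 'WAITING', 'TERMINATING'])['Clusters']
    return {'cluster_ids': [cluster['Id'] for cluster in clusters]}

def list_instance_groups_handler(event, context):
    # Second state: receives the previous state's output as `event`
    return {cluster_id: emr.list_instance_groups(ClusterId=cluster_id)['InstanceGroups']
            for cluster_id in event['cluster_ids']}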
I want to write two functions in a single AWS Lambda function. currently, Lambda triggers the function using the handler in case I have two functions in my code then how to change it so that the Lambda handler can execute both the functions.I have found this. However, it is using if statement. In my scenario I will have to run both the functions one after the other and also pass the output of 1st function to the 2nd function. ThanksHow to have more than one handler in AWS Lambda Function?Here is the sample code:import boto3' import json' from datetime import datetime REGION = 'us-east-1' emrclient = boto3.client('emr', region_name=REGION) def lambda_handler(event, context): EMRS = emrclient.list_clusters( ClusterStates = ['STARTING', 'RUNNING', 'WAITING', 'TERMINATING'] ) clusters = EMRS["Clusters"] for cluster in clusters : ID = cluster.get("Id") EMRid = emrclient.list_instance_groups( ClusterId = str("ID") ) print(EMRid)
Using two python functions boto3 in a single AWS Lambda
Check out AWS Amplify's Storage module and the File Access Levels docs. Out of the box it supports a private level that lets users upload into (and view from) only their own namespaced prefixes in the bucket. This sounds like exactly what you're after.
I'm working on a project for which I need to store user uploaded images in a secure way. Currently, I'm hosting the website on AWS s3 (static content) with cloudfront. The backend is deployed separately behind application load balancer.The use case is - A user uploads image(s) from his desktop which go to a bucket in s3. I've set the bucket policies so that everybody is able to upload images to it (since it's a public website). Now, I've to restrict the image access only to the user who has uploaded it. i.e If user A uploads images A1, A2, A3, only he should be able to view those and not user B.Currently, if I get the url through browser inspect tool, I can view the image directly without any restrictions. This defeats the purpose of "securely" storing images on the website. Could someone let me know about any standard practices, pointers to this problem? Would generating the image url each time through backend with some special hash be over engineering?
How to restrict image acess to authorized website users only
Instead of specifying the s3-accelerate name for your bucket, just use the regular bucket name and add the -UseAccelerateEndpoint switch to your S3 commands. The cmdlets will then target the accelerated S3 endpoints for you.
I have a powershell script that works when accessing a normal S3 bucket but if i change the bucket name to the Transfer Accelerated bucket name then it gives an error "Bucket not found".Including the script that works with a commented out bucketname that doesn't work.# Your account access key - must have read access to your S3 Bucket $accessKey = "KEY" # Your account secret access key $secretKey = "SECRETKEY" # $region = "us-east-1" # The name of your S3 Bucket $bucket = "myBucket" #The above works!! - but if i comment out the above and uncomment the below line then it no longer works. #$bucket = "myBucket.s3-accelerate.amazonaws.com" # The folder in your bucket to copy, including trailing slash. Leave blank to copy the entire bucket $keyPrefix = "myFolder/" # The local file path where files should be copied $localPath = "D:\S3_Files\myFolder\" $objects = Get-S3Object -BucketName $bucket -KeyPrefix $keyPrefix - AccessKey $accessKey -SecretKey $secretKey -Region $region foreach($object in $objects) { $localFileName = $object.Key -replace $keyPrefix, '' if ($localFileName -ne '') { $localFilePath = Join-Path $localPath $localFileName Copy-S3Object -BucketName $bucket -Key $object.Key -LocalFile $localFilePath -AccessKey $accessKey -SecretKey $secretKey -Region $region } }
How to access Transfer Accelerated S3 bucket using Powershell
Well, you need to somehow extract that information about a user. Cognito will not help you in this case because, as you have stated, the user is not authenticated.Other option that you have is to assume that the user's language is the same as the language that is used in the place where the request originated from (let's say via sender's IP). But this is very unreliable solution.Another option is to create a DynamoDB table (or to use any other DB solution, but DynamoDB is the most suitable one for this task) and store user's email and language of that user there. Then, if you invoke lambda function, you already have email address of the user passed to it and you can use it to fetch the corresponding language from DynamoDB before you generate the reset password email.
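A minimal Python sketch of the DynamoDB option, assuming a hypothetical table keyed by email that stores each user's language (the table name, attribute names, and message texts are placeholders):

import boto3

table = boto3.resource('dynamodb').Table('UserLanguages')  # hypothetical table

MESSAGES = {
    'en': 'Your verification code is {####}',
    'de': 'Ihr Bestätigungscode lautet {####}',
}

def lambda_handler(event, context):
    # Custom Message trigger; only customize the forgot-password email here
    if event['triggerSource'] == 'CustomMessage_ForgotPassword':
        email = event['request']['userAttributes'].get('email', '')
        item = table.get_item(Key={'email': email}).get('Item') or {}
        language = item.get('language', 'en')
        event['response']['emailSubject'] = 'Password reset'
        event['response']['emailMessage'] = MESSAGES.get(language, MESSAGES['en'])
    return event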
I am using AWS Cognito to manage my users and would like to control the phrasing of the email that is sent to the user in the "Forgot Password" flow.I am basing my solution on this:https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-message.htmlBasically, AWS lets you define a Lambda function that would get invoked when the email needs to be sent and lets you define the content of the email's subject and body (injecting the verification code into the body).My question is this: The user can be using the application in one of many languages. This means that a German user could view the application in German. When he initiates the "Forgot Password" flow, he should get the email in German. However, the Lambda function cannot know anything about the user because he is logged out when this flow happens and so Cognito cannot pass this information.How can this be solved?Thanks.
How to support multi-lingual email in AWS Forgot Password scenario
This is now supported.See this article in the AWS knowledge base:How do I install a GUI on my Amazon EC2 instance running Amazon Linux 2?Last updated: 2021-04-20I want to install a graphical user interface (GUI) in my Amazon Elastic Compute Cloud (Amazon EC2) instance running Amazon Linux 2. How do I do this?Short descriptionTo install a GUI on your EC2 Linux instance, do the following:Install the MATE desktop environment. MATE is a lightweight GUI based on GNOME 2 available as an extra for Amazon Linux 2. The Amazon Linux 2 offering ofAmazon WorkSpacesuses MATE. For more information about MATE, see theMATE desktop environment website.Install a virtual network computing (VNC) service, such as TigerVNC. For more information about TigerVNC, see thetigervnc.org website.Connect to the GUI using the VNC.(Optional) Install a web browser, such as Firefox or Chromium. For more information on Firefox, see themozilla.org website. For more information on Chromium, see thechromium.org website.Note:These instructions apply only to Amazon Linux 2. To confirm the version that you're running, run the following command:cat /etc/os-release
I am trying to install GUI on my Amazon Linux 2 AMI. I tried several solutions like GNNOME and Mate Desktop, but when I try to install desktop by group list I get a warning:group Desktop does not exist or GNOME does not exist.How can I resolve this issue?
How to install GUI (Desktop) in Amazon Linux 2 AMI
You can get the sub from the ID token (which is a JWT) after you have signed in. After you have signed in using the AWSMobileClient, you can do something like the following to get the sub: AWSMobileClient.getInstance().getTokens().getIdToken().getClaim("sub")
I am trying to use the cognito generated unique id knows as SUB to be a PK in my tables. But I am not able to get this SUB in the response of the first sign-up call. I am using the latest version of the sdk for android:aws-android-sdk-cognitoidentityprovider:2.9.1Can somebody guide me on how or where I can get this id?The same question was asked on this thread but none of them workshttps://github.com/amazon-archives/amazon-cognito-identity-js/issues/335
How to get SUB from aws cognito token
Yes you can. Here's how to do it with a Node.js Lambda:

var zlib = require('zlib');

exports.handler = function(input, context) {
    // decode input from base64
    var zippedInput = Buffer.from(input.awslogs.data, 'base64');

    // decompress the input
    zlib.gunzip(zippedInput, function(error, buffer) {
        if (error) {
            context.fail(error);
            return;
        }

        // parse the input from JSON
        var payload = JSON.parse(buffer.toString('utf8'));

        // ignore control messages
        if (payload.messageType === 'CONTROL_MESSAGE') {
            return null;
        }

        // print the timestamp and message of each log event
        payload.logEvents.forEach(function(logEvent) {
            console.log(logEvent.timestamp + ' ' + logEvent.message);
        });
    });
};
I have a lambda that subscribes to a Cloudwatch Log stream. This all works tickety-boo i.e. when the log stream is written to the lambda receives a notification. Now, is there a way of receiving the contents of the log or a section of the log with the notification or do I then have to query the the log stream to garner the information that I need?RegardsAngus
Is it possible to extract contents of a Cloudwatch log from a subscription
This is something that would be very nice to have, but for some reason Amazon is not willing to provide an API to check for this.One hacky way to approach this could be to run the cloudformation template over and over again and check the output for the missing permissions. Then you add them each time to a temporary IAM role and repeat until you have all the permissions to launch your template. This might take a rather long time, but could be the only actual way to programmatically approach this.
I'm working on an automated pipeline (using Jenkins) that deploys AWS Cloudformation Templates residing in a git repository to AWS.I have a working pipeline that works off of an AWS IAM user whose access keys are used by a Jenkins job to talk to the AWS Cloudformation API.The issue I'm facing is that preferably I would have this IAM user to have as little permission as possible, but it should have enough permissions both to access the Cloudformation API but also to create the resources I have templates for.In order to determine this minimal permission set, my question is whether there exists an application, package or AWS utility (I haven't been able to find one yet) to infer the IAM permissions required to execute a given (set of) Cloudformation templates, that can preferably be used programatically.Thanks in advance for any suggestion!
(Programatically) infer required AWS IAM permissions required to deploy an AWS template
Case 1: You cannot get the private IP of the user, for security reasons. This is handled by NAT or PAT (Network Address Translation or Port Address Translation) behind the scenes: NAT records the private IP in its table and forwards the request with the public (router) IP. Case 2: If by private IP you mean the address a user has when multiple users are using the same public network (Wi-Fi etc.), then again there are two IPs: one public IP that is common to all, and inside that public network each user has another IP that is unique within it. For example, say there is a Wi-Fi network with public IP 1.1.1.1 and two users, A and B. Since they share the same Wi-Fi, the router has only one public IP (common for all), but inside the router A and B have different IPs such as 192.1.1.1 and 192.1.1.2, which can be called private. In both cases you will only get the public IP (at position 0 in the X-Forwarded-For header). You can read the X-Forwarded-For header from event.headers (or event.multiValueHeaders) in the Lambda proxy event. If you could access both, what would be the benefit of having private and public IPs? To access an AWS VPC private subnet you also have to use NAT, and the client will never know the actual IP, for security reasons. I request you to re-review your requirements once again.
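For completeness, a small Python sketch of reading the client address in a Lambda behind API Gateway with proxy integration (nothing here beyond standard proxy-event fields):

import json

def lambda_handler(event, context):
    headers = event.get('headers') or {}
    # Position 0 of X-Forwarded-For is the (public) client IP
    forwarded_for = headers.get('X-Forwarded-For', '')
    client_ip = forwarded_for.split(',')[0].strip() if forwarded_for else None
    # API Gateway also exposes the caller address directly
    source_ip = event.get('requestContext', {}).get('identity', {}).get('sourceIp')
    return {
        'statusCode': 200,
        'body': json.dumps({'xForwardedFor': client_ip, 'sourceIp': source_ip}),
    }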
I am trying to get the users private IP and public IP in an AWS environment. Based on this answer (https://stackoverflow.com/a/46021715/4283738) there should be a header X-Forwarded-For , separated ips and also from forum (https://forums.aws.amazon.com/thread.jspa?threadID=200198)But when I have deployed my api via API Gateway + lambda + nodejs v8. I have consoled out the JSON for event and context varaibles for the nodejs handler function arguments for debugging (https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod) I am not getting the private ips.The lambda function isconst AWS = require('aws-sdk'); exports.handler = function(event, context, callback){ callback(null, { "statusCode": 200, "body": JSON.stringify({event,context}) }); }API Gateway DetailsGET - Integration RequestIntegration type -- Lambda FunctionUse Lambda Proxy integration -- TrueFunction API :https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod
private and public ip using AWS api gateway + lambda + Nodejs
How a message is displayed to the user completely depends on the output channel you are using. \n works well in Facebook and Slack, as far as I know. The Lex console Test Chat has its own unique formatting for displaying the Lex output, so it's not very reliable for testing the formatting of a message. It's really only good for quick tests to make sure your bot responds without errors, and for a glimpse at the Lex JSON response. Each output channel will receive the Lex JSON response and display it in its own way, so the only reliable way to test message formatting, links, images, and response cards is to test it in the actual channel.
I've been reading the AWS Lex / Lambda docs and looking at the examples. I don't see a way to return multiple lines.I want to create an intent that when a user types 'Help' It gives me an output like below.Options: Deploy new instance. Undeploy instance. List instances.I've tried this:def lambda_handler(event, context): logger.debug('event.bot.name={}'.format(event['bot']['name'])) a = { "dialogAction": { "type": "Close", "fulfillmentState": "Fulfilled", "message": { "contentType": "PlainText", "content": "Options: \nDeploy instance.\nUndeploy instance." } } } return a
AWS Lex Lambda return multiple lines with Python
There are two options that I can think of:
1. As user @krishna_mee2004 stated, you can use CloudWatch Events to listen for your EC2 instance's state changes, and this in turn will trigger your Lambda.
2. On your EC2 instance, there is a field called User data under the Instance Details. In User data you can add commands that should be run whenever your EC2 instance is deployed. From there you can invoke your Lambda.
Here is documentation on EC2 user data. Here is documentation on invoking your Lambda from the CLI.
Personally, I'd recommend option 1 because I prefer using AWS tools whenever I get the chance, and CloudWatch is a perfect example of this. However, option 2 might give you more control over what payload is sent to the Lambda.
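A hedged Python sketch of option 1, i.e. a Lambda triggered by a CloudWatch Events EC2 state-change rule that opens a port for the new instance's public IP in a security group in another region (the group ID, region, and port below are placeholders):

import boto3

SG_ID = 'sg-0123456789abcdef0'   # placeholder security group in the other region
SG_REGION = 'eu-west-1'          # placeholder region of that security group
PORT = 443                       # placeholder port to open

def lambda_handler(event, context):
    # CloudWatch Events "EC2 Instance State-change Notification" (state: running)
    instance_id = event['detail']['instance-id']
    ec2 = boto3.client('ec2', region_name=event['region'])
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    public_ip = reservations[0]['Instances'][0].get('PublicIpAddress')
    if not public_ip:
        return {'skipped': instance_id}

    remote_ec2 = boto3.client('ec2', region_name=SG_REGION)
    remote_ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpProtocol='tcp',
        FromPort=PORT,
        ToPort=PORT,
        CidrIp=public_ip + '/32',
    )
    return {'opened_for': public_ip}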
I am compiling a fairly complex CloudFormation template and at some point I am creating anec2instance;I want to create alambdafunction that:takes as input parameter the public IP of the instance created in this CF stackopens a security group port for that particular IP (the security group isnotpart of the specific CF template and it belongs to adifferent region).Is this possible?Ι am asking because (among others)ec2is not listed as a potential lambda trigger in the console and wanted to see whether there is a simpler way around this than posting details about the creation in ansnsorsqswhich then in turn triggers the lambda.
AWS: Use EC2 instance creation to trigger lambda
Changing the Email provider to anything else thanAWS CognitoorAWS SES– orsuppressingthe outgoing messages programmaticallywasnot possible. The only possibility was to use a Lambda trigger to at least be able to modify the template of the message:Custom Message Lambda TriggerUntil probably a few weeks/months ago:Custom sender Lambda TriggersAmazon Cognito user pools provide two Lambda triggers CustomEmailSender and CustomSMSSender to enable third-party email and SMS notifications. You can Use your choice of SMS and email providers to send notifications to users from within your Lambda function code.I did not find any release or change message on when they introduced those new triggers, but I am super happy I finally stumbled across this! I felt so much joy I almost cried.
I wonder if it is possible to swap SES for a custom Emailing Provider? I am already using one, and I don't want to configure another one because AWS Cognito needs that. Maybe there is a way to suppress all the emails coming out of AWS Cognito?Thanks Przemek
AWS Cognito using different Email Provider than SES
I did the following and it resolved my issues. Follow the instructions here and call amplify hosting add. Then move the api and auth folders and their content from the backend folder to the #current-cloud-backend folder. Then run amplify push.
I followed the instructionshereto add authentication for my iOS app. I first ranamplify auth update, followed through all the steps, and then ranamplify push. However,amplify pushfailed with the following error:/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/gluegun/build/index.js:13 throw up; ^ Error: ENOENT: no such file or directory, stat '/Users/yunfeiguo/Documents/programming/zhiyouios/amplify/#current-cloud-backend/api/zhiyou'Any idea what might the issue be here?
Error Running Amplify Push after Running Amplify Auth Update
You can do this with the commandaws s3api get-object-tagging --bucket bucketname --key objectkey. For example➜ ~ aws s3 ls helloworld-20181029141519-deployment 2018-11-24 07:19:11 0 hello.world ➜ ~ aws s3api get-object-tagging --bucket helloworld-20181029141519-deployment --key hello.world { "TagSet": [ { "Value": "1", "Key": "tagged" }, { "Value": "bar", "Key": "foo" } ] }You can use aJMESPathexpression to filter the result set.➜ ~ aws s3api get-object-tagging --bucket helloworld-20181029141519-deployment --key hello.world --query "TagSet[?Key=='foo']" [ { "Value": "bar", "Key": "foo" } ]
I need to get object tags by AWS CLI. Is it possible to display all object tags? Or even display the value of a specific key from tags.
AWS CLI S3 get object tags
Curl and Postman seem to be automatically Base64 encoding your Authentication credentials. The responses are the same: the latter response is simply a Base64-encoded version of the first response.
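You can verify that the two bodies carry the same payload with a couple of lines of Python, using the exact strings from the question:

import base64
import json

test_console_body = '{"username":"xyz","password":"xyz"}'
curl_body = 'eyJ1c2VybmFtZSI6Inh5eiIsInBhc3N3b3JkIjoieHl6In0='

decoded = base64.b64decode(curl_body).decode('utf-8')
print(decoded)  # {"username":"xyz","password":"xyz"}
print(json.loads(decoded) == json.loads(test_console_body))  # True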
I'm trying to add a POST HTTP method to my AWS API Gateway. I'm using SAM framework with Python.I find that there is a difference in the "body" of the response when it is generated from my desktop (curl or postman) and the AWS API Gateway 'TEST'Right now, the "POST" command only prints the 'event' object received by the lambda_handler. (I'm using an object to store the event as you can see below)def add(self): response = { "statusCode": 200, "body": json.dumps(self._event) } return responseWhen I'm using the 'TEST' option of the API Gateway console, with the input:{"username":"xyz","password":"xyz"}I receive the following output:{ "body": "{\"username\":\"xyz\",\"password\":\"xyz\"}", <the rest of the response> }However, when I'm sending the curl (or postman) request:curl --header "Content-Type: application/json" --request POST --data '{"username":"xyz","password":"xyz"}' <aws api gateway link>I get the following response:{ "body": "eyJ1c2VybmFtZSI6Inh5eiIsInBhc3N3b3JkIjoieHl6In0=" <the rest of the response> }Why do you think there is a difference between the two tests?
Difference in request body in aws api gateway test and curl
I think the difference would mainly be that in ECS you are only paying for the CPU/RAM you actually use (with the Fargate launch type), whereas in EC2 you are paying for the instance 24/7. For Keycloak in particular you might have to change the database configuration: make Keycloak point to a database service, or change the configuration of the database container to persist the information in S3 or elsewhere (this is if you are using the Keycloak docker-compose setup).
I am looking for the pros and cons of deploying Keycloak on ECS vs EC2. Which give me more control and which service is easy to manage.
Benefits of deploying Keycloak on EC2 vs ECS
Well, the error message looks self explanatory, the role you assigned to codebuild doesn't have enough access to go to s3. Go to codebuild -> Build projects - > Choose your project -> Click on tab 'Build Details'. You will see a 'Service Role' ARN, that if you click on it, it will send you to that IAM role (if you are not an admin for that account, you may not have enough permissions to see IAM, as it is a critical permission service, so check this with the admin.) Check the policies for that role, and check if the policies have the action: s3:GetObject on resource: your bucket. If it doesn't, then you need to add it. Use the visual editor, use S3 as service, add Get* as action, and your s3 bucket to it.
I am trying to set up a Continuous Integration pipeline for my simple AWS lambda function. To confess, the is my very first time using AWS code pipeline. I am having trouble with setting up the pipeline. The deploy stage in the pipeline is failing.I created a CodeBuildThen I created an application in CodeDeployThen I created a CodePipeline choosing the source as GitHub. The selected a repository and branch from the GitHub. Then linked the pipeline with the CodeDeploy application and CodeBuild I previously created.After I save the pipeline and when the pipeline is built, I am getting this error.When I check the error details, it says thisUnable to access the artifact with Amazon S3 object key 'the-goodyard-pipelin/BuildArtif/G12YurC' located in the Amazon S3 artifact bucket 'codepipeline-us-east-1-820116794245'. The provided role does not have sufficient permissions.Basically, that Bucket does not exist as well. Isn't the Bucket created automatically? What went wrong with my set up?The Bucket exist as well. It is just throwing error.In the bucket, I can see the zip file as well.
AWS CodePipeline deploy failed
You can useAmazon S3 Storage Class Analysis:By using Amazon S3 analytics storage class analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. This new Amazon S3 analytics featureobserves data access patternsto help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class.After storage class analysis observes the infrequent access patterns of a filtered set of data over a period of time, you can use the analysis results to help youimprove your lifecycle policies.Even if you don't use it to change Storage Class, you can use it to discover which objects are not accessed frequently.
I'm writing a service that takes screenshots of a lot of URLs and saves them in a public S3 bucket.Due to storage costs, I'd like to periodically purge the aforementioned bucket anddelete every screenshot that hasn't been accessed in the last X days.By "accessed" I mean downloaded or acquired via a GET request.I checked out the documentation and found a lot of ways to define an expiration policy for an S3 object, but couldn't find a way to "mark" a file as read once it's been accessed externally.Is there a way to define the periodic purge without code (only AWS rules/services)? Does the API even allow that or do I need to start implementing external workarounds?
AWS S3 deletion of files that haven't been accessed
Do you mean logging out of the AWS console or of your laptop? Your training job should still be running on the notebook instance whether you have the notebook open or not. The notebook instance will stay active until you manually stop it [1]. You can always access the notebook instance again by opening the notebook through the console.
[1] https://docs.aws.amazon.com/sagemaker/latest/dg/API_StopNotebookInstance.html
Currently I am exploring AWS sagemaker and I am facing a problem e.g. If I want to train my network on 1000s of epochs I cant stay active all the time. But as I logout my the notebook instance also stop execution. Is there any way to keep the instance active even after you logout ?
Amazon Sagemaker Notebook instance stop execution as I logout
As you suspect, AWS does not intend Cognito to do mail-list management; only transactional emails. From what I've seen, it's pretty common to manage mail-outs (including product updates) through something like MailChimp or HubSpot, which include additional tools to track user engagement, manage campaigns, and handle the various data protection requirements that different regions expect of companies. That said, for relatively modest numbers of users (think 100s) you can get away with using ListUsers with a Lambda and SES.
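A rough boto3 sketch of that last approach (ListUsers plus SES); the pool ID and sender address are placeholders, and the sender must be an SES-verified identity:

import boto3

cognito = boto3.client('cognito-idp')
ses = boto3.client('ses')

USER_POOL_ID = 'us-east-1_example'   # placeholder
SENDER = 'updates@example.com'       # placeholder, must be verified in SES

def iter_users():
    kwargs = {'UserPoolId': USER_POOL_ID}
    while True:
        response = cognito.list_users(**kwargs)
        for user in response['Users']:
            yield user
        token = response.get('PaginationToken')
        if not token:
            break
        kwargs['PaginationToken'] = token

def send_update(subject, body):
    for user in iter_users():
        attributes = {a['Name']: a['Value'] for a in user['Attributes']}
        email = attributes.get('email')
        if not email:
            continue
        ses.send_email(
            Source=SENDER,
            Destination={'ToAddresses': [email]},
            Message={'Subject': {'Data': subject},
                     'Body': {'Text': {'Data': body}}},
        )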
As far as I understand, I cannot easily iterate through the users in Amazon Cognito.Is there a way that I can send all of my users an email on updates for my app (possibly through SES)? Or should I not use Cognito because it is not built for this purpose.
Can I send emails to users in Amazon Cognito?
There is no underlying Amazon S3 API call that can copy multiple files in one request. The best option is to issue requests in parallel so that they will execute faster. The boto3 Transfer Manager might be able to assist with this effort. Side note: there is no such thing as a 'move' command for S3. Instead, you will need to copy, then delete. Just mentioning it for other readers.
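A short Python sketch of the parallel approach using a thread pool; the bucket names are placeholders, and copy_object is a server-side copy (for objects over 5 GB you would switch to the managed transfer APIs):

import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client('s3')

SOURCE_BUCKET = 'source-bucket'            # placeholder
DESTINATION_BUCKET = 'destination-bucket'  # placeholder

def copy_one(key):
    # Server-side copy; the object data never passes through the Lambda
    s3.copy_object(
        Bucket=DESTINATION_BUCKET,
        Key=key,
        CopySource={'Bucket': SOURCE_BUCKET, 'Key': key},
    )

def copy_many(keys, workers=20):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Materialize the iterator so any exceptions surface here
        list(pool.map(copy_one, keys))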
I wrote a lambda function to copy files in an s3 bucket into another s3 bucket and I need to move a very large number of these files. To try and meet the volume requirements I was looking for a way to send these requests in large batches to S3 to cut down on overhead. However I cannot find any information on how to do this in Python. There's a Batch class in the boto3 documentation but I can't make sense of how it works or even what it actually does.
Make batch copy requests to AWS S3 with Python
You could bucket your users by first letter of their usernames (or something similar) as the partition key, and either A or B as the sort key, with a regular attribute as the counts.For example:PARTITION KEY | SORT KEY | COUNT -------------------------------- a | A | 5 a | B | 7 b | B | 15 c | A | 1 c | B | 3The advantage is that you can reduce the risk of hot partitions by spreading your writes across multiple partitions.Of course, you're trading hot writes for more expensive reads, since now you'll have to scan + filter(A) to get the total count that chose A, and another scan + filter(B) for the total count of B. But if you're writing a bunch and only reading on rare occasions, this may be ok.
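A small Python sketch of that layout with boto3, assuming a hypothetical table whose partition key is the username's first letter and whose sort key is the choice:

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource('dynamodb').Table('ChoiceCounts')  # hypothetical table

def record_choice(username, choice):
    # Writes are spread across partitions keyed by the first letter of the username
    table.update_item(
        Key={'letter': username[0].lower(), 'choice': choice},
        UpdateExpression='ADD #c :one',
        ExpressionAttributeNames={'#c': 'count'},
        ExpressionAttributeValues={':one': 1},
    )

def total_for(choice):
    # Reads are the expensive side: scan + filter, then sum (pagination omitted)
    response = table.scan(FilterExpression=Attr('choice').eq(choice))
    return sum(int(item['count']) for item in response['Items'])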
We have a completely server-less architecture and have been using DynamoDB almost since it was released, but I am struggling to see how to deal with tabulating global numbers on a large scale. Say we have users who choose to do either A or B. We want to keep track of how many users do each and they could happen at a high scale. According to DyanamoDB best practices, you are not supposed to write continually to a single row. What is the best way to handle this outside using another service like CouchDB or ElastiCache?
How do I keep a running count in DynamoDB without a hot table row?
The URLs that you specified are proxy endpoints, so they are accessed through a proxy that is usually set up on your client with: kubectl proxy. I suppose you could access them from the outside if you expose your kube-apiserver publicly, which is highly discouraged. If you want to access the endpoint from the outside you usually do it through the Kubernetes Service, which in your first case is prometheus-k8s on port 9090 and in the second case is grafana on port 3000. You didn't say whether the services are exposed through a NodePort or LoadBalancer, so the endpoint will vary depending on how it's exposed. You can find out with: kubectl get svc
I have a Grafana running on an EC2 instance. I installed my Kubernetes cluster k8s.mydomain.com on AWS using kops. I wanted to monitor this cluster with Grafana. Entering the below URL for Prometheus data source and the admin username and password fromkops get secrets kube --type secret -oplaintextin grafana returned an error.https://api.k8s.afaquesiddiqui.com/api/v1/namespaces/monitoring/services/prometheus-k8s:9090/proxy/graph#!/role?namespace=defaultI also tried the kops add-on forprometheusbut I wasn't able to access grafana using the following URL:https://api.k8s.mydomain.com/api/v1/namespaces/monitoring/services/grafana:3000/proxy/#!/role?namespace=defaultAm I doing something wrong? Is there a better way to do this?
Accessing a remote kubernetes cluster with grafana
Apparently this PR (https://github.com/laravel/passport/pull/683) made it possible to pass the keys via environment variables.

/*
|--------------------------------------------------------------------------
| Encryption Keys
|--------------------------------------------------------------------------
|
| Passport uses encryption keys while generating secure access tokens for
| your application. By default, the keys are stored as local files but
| can be set via environment variables when that is more convenient.
|
*/
'private_key' => env('PASSPORT_PRIVATE_KEY'),
'public_key' => env('PASSPORT_PUBLIC_KEY'),

I haven't tested it yet but I will soon.
Update: We tried it and we hit the environment variable size limit of 4K: https://forums.aws.amazon.com/thread.jspa?messageID=618423&#618423. In the end, we ended up using our CI instead.
I'm having trouble setting up laravels passport on aws elastic beanstalk. The eb client is set up correctly and I can deploy code changes. No errors are shown.However making requests to laravel results in error 500 afterwards, telling me I'm missing the passport keys in "app/current/storage/oauth-public.key\". Locally everything runs fine.I guess I'm missing the artisan command "php artisan passport:install", so I added it in the composer file:"post-install-cmd": [ "Illuminate\\Foundation\\ComposerScripts::postInstall", "@php artisan passport:install" ]But apparently it does not create the keys.Either the post-install hook is not executed after running eb deploy, or there is another error that does not let me create the key file (missing writing permission?)How can I verify that the post-install hook is executed? Anyone had a similar issue?I followed the suggestions in this issue but so far it did not help:https://github.com/laravel/passport/issues/418UPDATE: I sshed into the app and tried to run php artisan passport:install manually, which resulted in an error. I had to give permissions first to the folder (sudo chmod -R 777 storage) then it worked. Unfortunatly the keys are deleted everytime I run eb deploy, so I would have to redo these steps every time - pretty cumbersome. Anyone has found a good way to automate this?
Laravel Passport: Missing keys after deployment to aws
date_parse converts a string to a timestamp. As per the documentation, date_parse does this: date_parse(string, format) → timestamp. It parses a string to a timestamp using the supplied format. So for your use case, you need to do the following: cast(date_parse(click_time, '%Y-%m-%d %H:%i:%s') as date). For further reference, you can go to the Presto online documentation: https://prestodb.github.io/docs/current/functions/datetime.html
So I've looked through documentation and previous answers on here, but can't seem to figure this out.I have aSTRINGthat represents a date. A normal output looks as such:2018-09-19 17:47:12If I do this, I get it to return in this format2018-09-19 17:47:12.000:SELECT date_parse(click_time,'%Y-%m-%d %H:%i:%s') click_time FROM table.abcBut that's not the output I need. I was just trying to show that I'm close, but clearly missing something. When I changeclick_timetodate_parse(click_time,'%Y-%m-%d'), it sends backINVALID_FUNCTION_ARGUMENT: Invalid format: "2018-09-19 17:47:12" is malformed at " 17:47:12"So there's clearly something I'm not doing correctly to get it to simply return2018-09-19.
String to YYYY-MM-DD date format in Athena
For your solution, what you can do is simply using that S3 URL that passed to the nested stack using the "Parameter" property. Please check it here:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html#cfn-cloudformation-stack-parameters"The set of parameters passed to AWS CloudFormation when this nested stack is created."And other notes:In the S3 URL that you provided, you just mapped to a specific region endpoint. The other URL is valid as well, there are more options to provide S3 URLs. That based on the documentation:https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#create-bucket-introNo, the stack is not aware of his source and you can see all the options that related to stack here:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html#aws-resource-cloudformation-stack-syntax
Suppose I have a cloudformation template with nested stacks.I check them all out of git, create an S3 bucket and folder and point Cloudformation at the master template file.I want it to import the nested stacks (and some other resources) from the "current" bucket/folder.Does the stack have a property : where it's "source" comes from?Or do I need to request the s3 URL from the user when they create the stack? So first you choose your file in S3, then you have to copy the URL press next and paste the URL. Seems like there must be a cleaner solution!(I don't want to reference a central S3 bucket with the nested stacks available for public access, for policy reasons AND I expect the stack to be modified a little bit each time it's used, it isn't always exactly the same files that are being nested.)
Cloudformation : access other resources in the stack folder, where is the folder?
At the moment, CodeDeploy does not support configuring multiple target groups in a single deployment. There are workarounds, but they're not awesome.
1. Break out each application into its own deployment group and deploy individually. You could deploy each application separately in a different deployment group, which would allow you to register/deregister with each target group. However, this approach would not work with blue/green deployments.
2. Register/deregister 2 target groups in your user scripts. You could configure your appspec to register and deregister from 2 target groups using a script. There is a sample script on GitHub, though it's not recommended for production use.
3. Break out your application into 3 sets of instances. Right now, you're running 3 different applications on the same hosts. You probably have good reason to do that, but if you could break out the applications into 3 different sets of hosts, you could put them in 3 different deployment groups and still use blue/green deployments.
This is my case.one instance with three application [ 4000, 4001, 4002 ].Created an ALB and redirected 3 domains to three target groups using rulesWhen I use to create an application in Code-deploy [Blue-green], it asks for only one target group at a time. But I have three target groups associated with the autoscaling group.After Deployment it is not registering instances with other two target groups. I tried Creating Different ALB, i.e., three ALB with three target groups. But I end up in code deploy sending traffic to one target group.I am deploying code directly from bitbucket. I need code to deploy [Blue-green] to register instance automatically with all three target groups. But as per AWS CodeDeploy documentation, only one target group can be selected at the time of code deploy. Any kind of help is much appreciated.
How to deploy code using CodeDeploy with AutoScalingGroup containing multiple target group
TLDR; I hunted for more docs and I figured that the context keyaws:ResourceTagisn't something valid/supported right now. Found the documentation here -https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html, that documentation only provides aws:RequestTag as a valid context key.To add context keys specific to a service, I had to dig deeper and find this document -https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonec2.html, which has the ec2:ResourceTag/tagKey context key. So, once I changed my IAM policy to have"ec2:ResourceTag/aws:cloudformation:stack-name": "mystackname", I started to see successes.
I have a use case where I'm creating a set of resources such as EC2 instances and S3 buckets using CloudFormation. I see that Cloudformation adds tags to both EC2 instances and S3 buckets with the tag being of the formataws:cloudformation:stack-name: <stack-name>I'm trying to write an IAM policy which has tag based permissions. The policy looks as follows:{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowToDescribeAll", "Effect": "Allow", "Action": [ "ec2:*", "s3:*" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/aws:cloudformation:stack-name": "mystackname" } } }, { "Sid": "AllPermissions", "Effect": "Allow", "Action": [ "ec2:Describe*" ], "Resource": "*" } ] }Now when I attempt to perform a S3 put object API call using the CLI, I see AccessDenied permissions. Same goes when I attempt to terminate the instance which has that specific tag. I couldn't find any specific IAM documentation which mentions that we cannot use tags prefixed withaws:in IAM policies. Does anybody know what the issue might possibly be? Or is it the case that the context keyaws:ResourceTagis invalid?
Using tags prefixed with aws: in IAM policies
If you are referring to the permissions that you give to the Lambda Function to have at execution time, after it has been deployed by the Serverless Framework, then you add role permissions in theserverless.yamlfile, within theprovidersection.Here is an example of permissions for the Lambda to talk to S3, Execute other Lambdas, and Send Emails with SES:iamRoleStatements: - Effect: "Allow" Action: - "s3:PutObject" - "s3:DeleteObject" - "s3:DeleteObjects" Resource: arn:aws:s3:::${self:custom.s3WwwBucket}/content/pages/* - Effect: Allow Action: - lambda:InvokeFunction - lambda:InvokeAsync Resource: arn:aws:lambda:${self:custom.region}:*:function:${self:service}-${opt:stage}-* - Effect: "Allow" Action: - "ses:SendEmail" - "ses:SendEmailRaw" Resource: "arn:aws:ses:eu-west-1:01234567891234:identity/[email protected]"
I have created a user in the AWS console with accessonlyto the Lambda service.My question is, using the serverless framework, in myserverless.yaml, is it possible to add S3 Full access to my user and possibly any other service? Thank you.handler.js'use strict'; const aws = require('aws-sdk'); const s3 = new aws.S3({ apiVersion: '2006-03-01' }); module.exports.helloWorld = (event, context, callback) => { const params = {}; s3.listBuckets(params, function(err, data) { if (err) console.log(err, err.stack); else console.log(data); }); const response = { statusCode: 200, message: JSON.stringify({message: 'Success!'}) }; callback(null, response); };serverless.yamlprovider: name: aws runtime: nodejs8.10 region: eu-blah-1 iamRoleStatements: - Effect: "Allow" Action: - "s3:ListBucket" - "s3:PutObject" - "s3:GetObject" Resource: "arn:aws:s3:::examplebucket/*" functions: helloWorld: handler: handler.helloWorld events: - http: path: hello-world method: get cors: true
How to add AWS user permissions using serverless?
To take @Michael - sqlbot's answer and get it all in one place, here is how this code would work:

import boto3

# create the session and resource objects
boto3_session = boto3.Session(profile_name='some_profile_you_configured')
ec2_resource = boto3_session.resource('ec2')

# create the instance
instance = ec2_resource.create_instances(ImageId='ami-a0cfeed8',
                                         MinCount=1,
                                         MaxCount=1,
                                         InstanceType='t2.micro',
                                         SecurityGroups=['some_security_group'],
                                         KeyName='some_key')

# use the boto3 waiter
instance[0].wait_until_running()

# reload the instance object so the public IP is populated
instance[0].reload()

public_ip = instance[0].public_ip_address
print(public_ip)

boto3 Instance Documentation - reload()
I'm trying to get the public ip address of my ec2 instance after it has been created and is running using the following code:instance = ec2_resource.create_instances(ImageId='ami-a0cfeed8', MinCount=1, MaxCount=1, InstanceType='t2.micro', SecurityGroups= . ['some_security_group'], KeyName='some_key') instance[0].wait_until_running() print(instance[0].public_ip_address)But even though the public ip is visible in the aws console, the value that gets printed by the above code isNone. If I try to print the value after all the status checks for the instance are complete, then it prints just fine.Why does this happen?How to know that the status checks for the instance are still in progress i.e. it is in the initializing state?
Public ip address of ec2 instance is None while the instance is initializing
I was struggling with what appears to be the same issue for a while. After checking my permissions in AWS and going through a variety of other troubleshooting steps, I think my issue was resolved through changing the directory permissions for a number of different files in the build path and repo that I was trying to build. I had inadvertently cloned them and copied them over as "root", such that, even with sudo permissions, maven was failing to read my Amazon credentials.Another step you can take to narrow down the issue is to determine where the fault lies. I accomplished this by downloading theAWS python sample, which will attempt to create and add to a bucket in your s3. If this works, that means that the problem is with maven, and your credentials can be accessed by other applications just fine, and that you have the correct permissions as far as s3 is concerned. If the python sample also doesn't work, it is easier to get into and debug where exactly the issue is (in my opinion).
I am trying to pull down a maven project that uses AWS. I have an aws IAM user with a key and password. I ran aws configure and configured the ~/.aws/credentials file in the default account. However, when I run maven clean install, I get the following error:Failed to execute goal on project project-name: Could not resolve dependencies for project com.project:project-name:war:1.0: Failed to collect dependencies at com.project:commons-java:jar:1.0.0: Failed to read artifact descriptor for com.project:commons-java:jar:1.0.0: Could not transfer artifact com.project:commons-java:pom:1.0.0 from/to project-maven-repo-releases (s3://project-maven-repo/releases): Unable to load AWS credentials from any provider in the chainClearly, maven cannot load the dependancies from s3. I have confirmed that my IAM user has s3 permissions. And, even though s3 is "regionless", I have supplied the correct region to the IAM account. I also have tried exporting the AWS variables to no avail. Any idea what could be the problem?Thanks!
Maven Unable to load AWS credentials from any provider in the chain
I found this frustrating as well. awscli get-rule does not return the ARN. get-web-acl does, however, so I used the pattern from that, and it worked when passing it to the ResourceARN field for the other rule endpoints. I can't say whether it will work for use in IAM policies, but this is the format that worked for me: arn:aws:waf-regional:<your-region>:<your-account-id>:rule/<rule-id>
I'm playing around with writing IAM policies for an AWS WAF regional resource. I've created a rule for which I'm trying to see if I can write an IAM policy. That's where I realized that IAM policies require ARNs and not just resource Ids.I used the GetRule API to see if that returns the ARN of the rule and it doesn't. It only returns the ID. I checked the AWS docs now:1. https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html 2. https://docs.aws.amazon.com/waf/latest/developerguide/waf-api-permissions-ref.htmlThe ARN pattern is a little confusing, the first document points out the pattern to bearn:aws:waf-regional::account-id:resource-type/resource-id, but the example below has a specific region in there.Same happens with the second document for writing IAM policies, WAF regional does seem to have a region in the ARN. Now where can I get the ARN for this resource? And which document should I be referring as the source of truth?Thanks!
Trying to find the ARN pattern for AWS WAF regional
For what you are trying to accomplish, NLB is the wrong load balancer.NLB is a layer 4 load balancer. This means that the IP address that you see (at the EC2 instance) is the IP address of the client and not the IP address of the load balancer. With NLB you must allow the client's IP address in your security group.You want a layer 7 load balancer to implement what you want to do (block other systems in your VPC from accessing your EC2 instances directly). This means ALB or the classic ELB.
I created an internal network load balancer (NLB) to connect to EC2 instances on a private subnet. I want to restrict access to the EC2 instances only from the network load balancer. I used these instructionshttps://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#target-security-groupsbut it did not work.Basically, I added the IP address of the network interface of the NLB to the security group with my specific port (eg: 8080 10.4.2.9/32) allowed but that did not work. When i switched to all allow (eg: 8080 0.0.0.0/32) it worked, but i do not want other instances to have access to the ec2 instance.Any ideas on why this is not working? Thanks
Configuring internal network load balancer with EC2 instance in private VPC
You can try the describe-log-groups command. It is available on the CLI, and must also be there on the API. To get the names of the log groups you can go with:
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --log-group-name-prefix '/aws/appsync/[name-of-the-resource]'
Output will look like this:
["/aws/appsync/[name-of-your-resource]"]
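The same lookup is available through the API, for example via boto3; the prefix here just follows the /aws/appsync/... naming shown above and is only illustrative:

import boto3

logs = boto3.client('logs')

def log_group_names(prefix):
    names, kwargs = [], {'logGroupNamePrefix': prefix}
    while True:
        response = logs.describe_log_groups(**kwargs)
        names.extend(group['logGroupName'] for group in response['logGroups'])
        token = response.get('nextToken')
        if not token:
            return names
        kwargs['nextToken'] = token

print(log_group_names('/aws/appsync/'))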
I'm creating a logs aggregator lambda to send Cloudwatch logs to a private log analysis service. Given the number of resources used by my employer, it was decided to create asubscriptionlambda that handles log group subscription to the aggregator.The solution works fine, but it requires to manually search a resource's log group via amazon console and then invoke the subscription lambda with it.My question:Is there a way to, given a resource arn, find which log group is mapped to it? Since I'm using Cloudformation to create resources it is easy to export a resource's arn.UPDATETo present an example:Let's say I have the following arn:arn:aws:appsync:<REGION>:<ACCOUNTID>apis/z3pihpr4gfbzhflthkyjjh6yvuwhich is an Appsync GraphQL API.What I want it a method (using te API or some automated solution) to get the Cloudwatch log group of that resource.
Find Cloudwatch log group for a given resource
This turns out to be a bug in DMS. It occurs only during ongoing replication, not during full load. During replication from Aurora MySQL to Redshift the boolean is cast to varchar, resulting in the error above.
I'm trying to enable replication with DMS, using as source an Aurora mySQL instance and as destination a Redshift instance. The replication fails on boolean columns. I have declared the boolean column as BIT(1) on the mySQL instance. According to the documentation boolean columns in mySQL should be defined as BIT:https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.DataTypesIf I remove the boolean column it works. I also tried to define the column as Boolean. That did not work either.This is the error I'm getting:2018-08-26T16:59:19 [TARGET_APPLY ]E: RetCode: SQL_ERROR SqlState: 42804 NativeError: 30 Message: [Amazon][Amazon Redshift] (30) Error occurred while trying to execute a query: [SQLState 42804] ERROR: column "state" is of type boolean but expression is of type character varying, HINT: You will need to rewrite or cast the expression. [1022502] (ar_odbc_stmt.c:4428)
AWS DMS Issue with boolean column
The problem lay in the data format, as suspected. In my case all I had to do was send the data as a JSON-serialized string array and use ContentType = application/json, because the Python function running on the endpoint, which is responsible for passing the data to the predictor, was only accepting JSON strings. Another way to solve this issue is to modify the Python function responsible for input handling so that it accepts all content types and reshapes the data into a form the predictor will understand. Example of working code for my case:

var data = new string[] { "this movie was extremely good .", "the plot was very boring ." };
var serializedData = JsonConvert.SerializeObject(data);

var credentials = new Amazon.Runtime.BasicAWSCredentials("","");
var awsClient = new AmazonSageMakerRuntimeClient(credentials, RegionEndpoint.EUCentral1);

var request = new Amazon.SageMakerRuntime.Model.InvokeEndpointRequest
{
    EndpointName = "endpoint",
    ContentType = "application/json",
    Body = new MemoryStream(Encoding.ASCII.GetBytes(serializedData)),
};

var response = awsClient.InvokeEndpoint(request);
var predictions = Encoding.UTF8.GetString(response.Body.ToArray());
I am trying to send a request on a model on sagemaker using .NET. The code I am using is:var data = File.ReadAllBytes(@"C:\path\file.csv"); var credentials = new Amazon.Runtime.BasicAWSCredentials("",""); var awsClient = new AmazonSageMakerRuntimeClient(credentials, RegionEndpoint.EUCentral1); var request = new Amazon.SageMakerRuntime.Model.InvokeEndpointRequest { EndpointName = "EndpointName", ContentType = "text/csv", Body = new MemoryStream(data), }; var response = awsClient.InvokeEndpoint(request); var predictions = Encoding.UTF8.GetString(response.Body.ToArray());the error that I am getting onawsClient.InvokeEndpoint(request)is:Amazon.SageMakerRuntime.Model.ModelErrorException: 'The service returned an error with Error Code ModelError and HTTP Body: {"ErrorCode":"INTERNAL_FAILURE_FROM_MODEL","LogStreamArn":"arn:aws:logs:eu-central-1:xxxxxxxx:log-group:/aws/sagemaker/Endpoints/myEndpoint","Message":"Received server error (500) from model with message \"\". See "https:// url_to_logs_on_amazon" in account xxxxxxxxxxx for more information.","OriginalMessage":"","OriginalStatusCode":500}'the url that the error message suggests for more information does not help at all.I believe that it is a data format issue but I was not able to find a solution.Does anyone has encountered this behavior before?
AWS sagemaker invokeEndpoint model internal error
I don't believe there is any feature to set a schedule or introduce a time delay for AWS IoT Jobs. The autoStart flag refers to an offline device coming online, and what behaviour happens then. You can implement this yourself by having a process running which knows to create the jobs at the correct time.
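As a sketch of such a process, a scheduled Lambda (a CloudWatch Events/EventBridge cron rule set to the desired start time) can call the IoT CreateJob API when it fires. Every name, ARN and document location below is a placeholder:

import boto3

iot = boto3.client("iot")

def lambda_handler(event, context):
    # Triggered by a CloudWatch Events / EventBridge schedule at the desired start time.
    # jobId, target thing ARN, and the S3 job document URL are all placeholders.
    iot.create_job(
        jobId="firmware-update-2018-09-01",
        targets=["arn:aws:iot:us-east-1:123456789012:thing/my-device"],
        documentSource="https://s3.amazonaws.com/my-bucket/job-document.json",
        targetSelection="SNAPSHOT",
    )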
I want to run a device update using AWS IoT jobs. Can I set a startTime? I saw in their documentation https://www.npmjs.com/package/aws-iot-device-sdk#examples: "autoStart: If set to true then agent will execute launch command when agent starts up." But they didn't mention anything about how to schedule the start at a specific time.
How to schedule an IoT job in AWS to run at a specific time?
You can use AWS Mobile Hub to serve as your project in AWS. You can use this service to create User Files, which does exactly what you are describing for the users, and it provides tutorials on how to accomplish this. I would download the AWS Mobile CLI and follow the steps to create your Mobile Hub project from within your Ionic project root folder. From there you can start adding backend features to your app such as S3 for user files, pictures, etc. I would then head over to AWS Amplify and go through their setup tutorial for the Ionic framework; then you can use Amplify's easy-to-use functions right in your components to get and set data. I just finished a mobile app with these and I can say it was a very easy process once I went through the documentation.
I will be developing an app in the Ionic framework, and I will use AWS S3 to store pictures and documents. Which database is the better choice, Amazon DynamoDB or MongoDB? And what is the best way to connect AWS S3 with the database? Finally, I want the app to work both offline and online.
AWS with ionic framework
When logging in to Kibana I got the following message:

com.amazonaws.services.cognitoidp.model.NotAuthorizedException: Refresh Token has expired (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: NotAuthorizedException; Request ID: ...)

In this case the err branch would be called:

if (err) {
    console.log("error: " + err);
    callback.callbackWithParam(null);
}

So the handling for the expiration of the refresh token needs to be done there. However, I settled on redirecting the user to the login page in every case except when session.isValid() is true. Hope this helps someone out there :)
When retrieving the ID token via getSession, amazon-cognito-identity-js automatically retrieves a new access token with its refresh token if the access token has expired. However, I want to implement correct handling for when the refresh token is also expired, but it's hard to test because the minimum expiration time for the refresh token is 1 day. It would be nice to know either:

There is any other way I can properly test what happens when the access and refresh tokens are expired (so I can test redirection to the login page)

Which code path is called, or how I can catch the case where the refresh token is expired

Code:

getIdToken(callback: Callback): void {
    if (callback == null) {
        throw("callback is null");
    }
    if (this.getCurrentUser() != null) {
        this.getCurrentUser().getSession(function (err, session) {
            if (err) {
                console.log("error: " + err);
                callback.callbackWithParam(null);
            } else {
                if (session.isValid()) {
                    console.log("returning id token");
                    callback.callbackWithParam(session.getIdToken().getJwtToken());
                } else {
                    console.log("got the id token, but the session isn't valid");
                }
            }
        });
    } else callback.callbackWithParam(null);
}

My guess is that "got the id token, but the session isn't valid" will be logged, since when the refresh token is valid it automatically refreshes the access token and the session becomes valid again.
amazon-cognito-identity-js refresh token expiration handling
One useful tool I use daily is this: https://github.com/atward/aws-profile/blob/master/aws-profile It makes assuming a role so much easier! After you set up your access key in .aws/credentials and your .aws/config, you can do something like:

AWS_PROFILE=your-profile aws-profile [python x.py]

The part in [] can be substituted with anything that needs AWS credentials, e.g. terraform plan. Essentially, this utility simply puts your AWS credentials into OS environment variables. Then in your boto script, you don't need to worry about setting aws_access_key_id, etc.
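If you would rather stay inside Python than wrap the script, a profile can also be selected directly in boto3, which avoids hard-coded keys entirely. A sketch, where "my-profile" is a placeholder for whatever you configured in ~/.aws/credentials (either a plain key pair or a role_arn/source_profile entry that boto3 will assume for you):

import json
import boto3

# "my-profile" is a placeholder profile name from ~/.aws/credentials / ~/.aws/config.
session = boto3.Session(profile_name="my-profile")
comprehend = session.client("comprehend", region_name="us-east-1")

text = "It is raining today in Seattle"
print(json.dumps(comprehend.detect_entities(Text=text, LanguageCode="en"),
                 sort_keys=True, indent=4))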
I want to access the AWS Comprehend API from a Python script, but I'm not getting any leads on how to remove this error. One thing I know is that I have to get a session security token.

try:
    client = boto3.client(service_name='comprehend', region_name='us-east-1',
                          aws_access_key_id='KEY ID',
                          aws_secret_access_key='ACCESS KEY')
    text = "It is raining today in Seattle"
    print('Calling DetectEntities')
    print(json.dumps(client.detect_entities(Text=text, LanguageCode='en'), sort_keys=True, indent=4))
    print('End of DetectEntities\n')
except ClientError as e:
    print(e)

Error: An error occurred (UnrecognizedClientException) when calling the DetectEntities operation: The security token included in the request is invalid.
How do I access the security token for Python SDK boto3?
Yes, you can use S3 object lifecycle rules to delete objects having a given tag. Based on my experience, lifecycle rules based on time are approximate, so it's possible that you need to wait longer. I have also found that complex lifecycle rules can be tricky -- my advice is to start with a simple test in a test bucket and work your way up. There's decent documentation here on AWS.
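For reference, a tag-filtered expiration rule can be put in place with the CLI along these lines (the bucket name and rule ID are placeholders):

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-test-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-autodelete-tagged-objects",
        "Status": "Enabled",
        "Filter": { "Tag": { "Key": "AutoDelete", "Value": "True" } },
        "Expiration": { "Days": 1 }
      }
    ]
  }'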
I need object-level auto-deletion after some time in my S3 bucket, but only for some objects. I want to accomplish this by having a lifecycle rule that auto-deletes objects with a certain (Tag, Value) pair. For example, I am adding the tag pair (AutoDelete, True) to objects I want to delete, and I have a lifecycle rule that deletes such objects after 1 day. I ran some experiments to see if objects are getting deleted using this technique, but so far my object has not been deleted. (It may get deleted soon??) If anyone has experience with this technique, please let me know if this does not work, because so far my object has not been deleted, even though it is past its expiration date.
Does using tags for object-level deletion in AWS S3 work?
The underlying S3 service API has no method for fetching listings along with object metadata and/or tags, so none of the SDKs implement such functionality, either.
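Since there is no combined call, the usual workaround is to list the objects first and then fetch the tags per key. A sketch of that pattern in Python/boto3 follows; the Node.js v2 SDK equivalents are listObjectsV2 and getObjectTagging, and the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder bucket name

objects_with_tags = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # One extra request per object -- the service API offers no shortcut here.
        tags = s3.get_object_tagging(Bucket=bucket, Key=obj["Key"])["TagSet"]
        objects_with_tags.append({"Key": obj["Key"], "Tags": tags})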
I am using listObjectsV2 to list all the objects from an AWS S3 bucket, but that list does not contain tags and metadata. I have gone through the documentation and learned that metadata details can be retrieved separately by fetching objects one by one. Is there any way to get the tags and metadata of the files in an S3 bucket in one call? Note: I am using AWS-SDK (Node.js) version 2.x.
How to get a list of all objects in an S3 bucket, including tags, in a single request
AWS changed the CLI not so long ago. It looks like this now:

get-login [--registry-ids <value> [<value>...]] [--include-email | --no-include-email]

So simply replace -e none with --no-include-email. See the corresponding documentation here.
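For example, with the region from the question the call would look like this, and wrapping it in $() runs the generated docker login directly:

aws ecr get-login --no-include-email --region us-east-2
# or execute the generated login command in one step:
$(aws ecr get-login --no-include-email --region us-east-2)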
When I generate the docker login command for my AWS ECR with the following command:

aws ecr get-login --region us-east-2

I get an output like:

docker login -u AWS -p [bigbass] -e none https://xxxx.dkr.ecr.us-east-2.amazonaws.com

The problem is the -e flag, which throws an error:

unknown shorthand flag: 'e' in -e See 'docker login --help'.

I first thought the problem was a misconfigured aws configure, as I was using none as the "Default output format" option. After that I fixed the format option inside aws configure, but it still happens.
AWS ecr get-login generates docker login command with an unknown flag
Originally it was not possible to specify those properties from CloudFormation. As seen in the developer guide for ECS, you have to specify containerName via the serviceRegistries parameter, but when we look at the CloudFormation ServiceRegistry documentation we can't find those options:

{ "Port" : Integer, "RegistryArn" : String }

As per this new feature, you can now specify the containerName and containerPort in the ServiceRegistry.
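With that feature available, the relevant part of the template could look roughly like the sketch below. The container name and port are placeholders and must match a container in your task definition, which is what the host/bridge network mode error is asking for:

Service:
  Type: AWS::ECS::Service
  Properties:
    LaunchType: EC2
    ServiceRegistries:
      - RegistryArn: !GetAtt ServiceDiscovery.Arn
        ContainerName: ping-container   # placeholder; must match the task definition
        ContainerPort: 8080             # placeholder; must match the task definition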
I have the following lines in my stack template:

Service:
  Type: AWS::ECS::Service
  DependsOn: ListenerRule
  Properties:
    LaunchType: EC2
    ServiceRegistries:
      - RegistryArn: {"Fn::GetAtt": [ServiceDiscovery, Arn]}
...
ServiceDiscovery:
  Type: "AWS::ServiceDiscovery::Service"
  Properties:
    Description: Service discovery registry
    DnsConfig:
      DnsRecords: [{"Type": "SRV", "TTL": 100}]
      NamespaceId:
        Fn::ImportValue: PrivateDNS
    Name: ping-service

The task definition network mode is "host". When I push the template I see the following error:

When specifying 'host' or 'bridge' for networkMode, values for 'containerName' and 'containerPort' must be specified from the task definition

However, if the registry properties 'containerName' and 'containerPort' are set, then it throws another error:

Encountered unsupported property containerName

How can a service discovery registry be created with CloudFormation?
AWS ECS CloudFormation unable to create service with service discovery registry
Amazon Redshift does not keep track of "last time a table was used". However, you could search through STL_QUERY and STL_QUERYTEXT to extract table names used in queries. Be careful: these tables rotate after a period of time, so they do not contain a complete history.
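A rough starting point for such a search is below; the table name is a placeholder, and you would join STL_QUERYTEXT on the query column if you need the full SQL text of statements longer than the 4,000 characters kept in querytxt:

-- Most recent queries that mention the table (placeholder name my_schema.my_table)
SELECT q.query,
       q.userid,
       q.database,
       q.starttime,
       TRIM(q.querytxt) AS querytxt
FROM stl_query q
WHERE q.querytxt ILIKE '%my_schema.my_table%'
ORDER BY q.starttime DESC
LIMIT 20;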
I have a bunch of tables in a database spread across multiple schemas. I am sure that most of the tables are never used (meaning they were created as backups). I need to find out when each table was last used (date and time), when it was created, and its size. Please help.
How to find when a table was last used in Redshift
Yes, API Gateway request validation currently supports only JSON payloads; there is no built-in validation for XML request bodies.
I am trying to validate an incoming XML payload via API Gateway. To be specific, I actually don't even care about the schema; I just want to make sure that the body is not empty (and maybe that it is valid XML, if I can get that functionality). I see a variety of posts from years ago stating that XML input validation is not yet supported in API Gateway. Can somebody confirm whether this is still the case? To provide a specific example, I have a model like this:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Test Schema",
  "type": "object",
  "minProperties": 1,
  "properties": {
    "account_id": { "type": "string", "pattern": "[A-Za-z]{6}[0-9]{6}" }
  },
  "required": ["account_id"]
}

If I add request body validation using this model for content type "application/json" all is well, but if I do the same for content type "application/xml" no validation is performed.
AWS Api Gateway - Validate incoming XML payload
You can export your CloudWatch Logs to S3. In short:

1. Create an S3 bucket.
2. Allow the CloudWatch Logs principal (e.g. logs.us-west-2.amazonaws.com) to access it.
3. Create a CloudWatch Logs export task from the log group to the S3 bucket.

Exporting CloudWatch metrics to S3 is currently not supported. You could create your own tool that dumps this data to S3, e.g. by using get-metric-statistics, or use an existing tool, like this one.
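The export task in step 3 can be created from the CLI roughly like this; the log group, bucket and prefix are placeholders, and --from/--to are epoch timestamps in milliseconds:

aws logs create-export-task \
  --task-name "lambda-timings-export" \
  --log-group-name "/aws/lambda/my-function" \
  --from 1530403200000 \
  --to 1533081600000 \
  --destination "my-export-bucket" \
  --destination-prefix "lambda-logs"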
I'm currently working on a dissertation project that involves comparing performance times between different serverless providers. In order to do this, I need to collect data on execution times. Is there a way to easily gather execution times for Lambda functions and export them in bulk (to a spreadsheet, for example)? I've looked at CloudWatch Metrics, Logs and X-Ray traces and I can't find any option to export the performance data. The alternative is sifting through each execution in X-Ray or Logs and writing the execution time down manually into a spreadsheet, which would be crazy for hundreds of executions. Azure, for example, lets you export the execution data to a spreadsheet. Any help is really appreciated.
Gathering AWS Lambda Execution Data
I had a similar issue; however, it was fixed after raising a GitHub issue.
For local development with AWS services, I am using localstack. Right now I want to work with SNS, and when I publish a message the subscriber doesn't receive the 'metadata' (like MessageId, TopicArn, Timestamp). I have taken the following steps:

Started the docker container:
docker run --net=host -it -p 4567-4578:4567-4578 -p 8080:8080 atlassianlabs/localstack

Created an SNS topic:
aws --endpoint-url=http://localhost:4575 sns create-topic --name test-topic

Subscribed to the topic:
aws --endpoint-url=http://localhost:4575 sns subscribe --topic-arn arn:aws:sns:eu-west-1:123456789012:test-topic --protocol http --notification-endpoint http://localhost.dev.local/path

And finally, I published a message:
aws --endpoint-url=http://localhost:4575 sns publish --topic-arn arn:aws:sns:eu-west-1:123456789012:test-topic --message "the message"

The subscriber received the message successfully, but 'MessageId', 'Timestamp' and 'TopicArn' were missing.

Actual result:
{"Message": "the message", "Type": "Notification"}

Expected result:
{"TopicArn": "arn:aws:sns:eu-central-1:123456789012:test-topic", "MessageId": "af3a73ef-b0b2-4f78-acb1-1dee52d002d2", "Message": "the message", "Type": "Notification", "Timestamp": "2018-07-19T16:04:28.857Z"}

What am I doing wrong? And how do I ensure that the message does get this information?
MessageId missing in SNS notification from localstack
You have created a certificate for a specific domain, say 'example.com', but you are not using this domain when accessing the ALB. Since there is a mismatch between the domain/hostname you are using ('XXXXXX.us-west-2.elb.amazonaws.com') and the certificate's domain ('example.com'), your HTTP client shows you an error. Create a DNS entry

example.com CNAME XXXXXX.us-west-2.elb.amazonaws.com

and access the load balancer using example.com as the hostname.
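If the zone happens to be hosted in Route 53, the record can be created with something like the following; the hosted zone ID is a placeholder, and note that if example.com is the apex of the zone, Route 53 will not accept a CNAME there and you would use an alias record pointing at the ALB instead:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "XXXXXX.us-west-2.elb.amazonaws.com"}]
      }
    }]
  }'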
We got a certificate from ACM for our domain, say example.com. I deployed this on the application load balancer and created an HTTPS listener that forwards to my target group. The target group is a set of EC2 instances in an ASG. Now the issue is that when I access my LB URL over HTTPS I get the SSL_ERROR_BAD_CERT_DOMAIN error with the description

XXXXXX.us-west-2.elb.amazonaws.com uses an invalid security certificate. The certificate is only valid for example.com

I know this is probably the expected behavior, but in this case, how do I apply an ACM certificate for my domain on the application load balancer? Thanks.
ACM certificate - SSL_ERROR_BAD_CERT_DOMAIN
Type extensions aren't currently supported, but the team is aware of the issue. There is currently no ETA on when this will be supported, but thanks for suggesting this as it helps prioritize work!
Am I able to extend a query type by using AWS CloudFormation to provision AppSync, as below, so that the schema can be modularized and distributed across different yml files?

Schema:
  Type: AWS::AppSync::GraphQLSchema
  Properties:
    ApiId: xxxxxxxxxxxxxxxxx
    Definition: |
      extend type Query {
        you: You!
      }
      type You {
        name: String!
      }
aws cloudformation appsync extend query?
The AWS Command-Line Interface (CLI) aws s3 cp command simply sends the copy request to Amazon S3. The data is transferred between the Amazon S3 buckets without being downloaded to your computer. Therefore, the size and bandwidth of the computer issuing the command are not related to the speed of the data transfer. It is likely that the aws s3 cp command is only copying a small number of files simultaneously. You could increase the speed by setting the max_concurrent_requests parameter to a higher value:

aws configure set default.s3.max_concurrent_requests 20

See: AWS CLI S3 Configuration — AWS CLI Command Reference, and Getting the Most Out of the Amazon S3 CLI | AWS Partner Network (APN) Blog.
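If a single copy still isn't fast enough, another common approach is to split the bucket by prefix and run several copies in parallel, since each aws s3 cp process gets its own pool of concurrent requests. The bucket and prefix names below are placeholders:

# raise per-process concurrency first
aws configure set default.s3.max_concurrent_requests 20

# then run one recursive copy per top-level prefix, in parallel
aws s3 cp s3://source-bucket/prefix-a/ s3://dest-bucket/prefix-a/ --recursive &
aws s3 cp s3://source-bucket/prefix-b/ s3://dest-bucket/prefix-b/ --recursive &
aws s3 cp s3://source-bucket/prefix-c/ s3://dest-bucket/prefix-c/ --recursive &
wait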
I need to copy some buckets from one account to another. I got all the permissions, so I started transferring the data via the CLI (cp command). I am operating on a c4.large. The problem is that there is quite a lot of data (9 TB) and it goes really slowly; in 20 minutes I transferred about 20 GB. I checked the internet speed: the download is 3000 Mbit/s and the upload is 500 Mbit/s. How can I speed it up?
How to speed up copying files between two accounts