Columns: Response, Instruction, Prompt (each record below lists these three fields in this order).
This AWS blog post seems to weigh the options well. Automated backups are limited to a single AWS Region, while manual snapshots and read replicas are supported across multiple Regions. Having a cross-Region read replica would give you the best RPO and RTO, since you can promote the replica to an independent instance during a disaster. Alternatively, Amazon Aurora Backtrack seems to offer a similar option to having a read replica, but I have no personal experience with this feature, so I can't say how effective it is in improving RTO and RPO.
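For illustration, here is a minimal boto3 sketch of the cross-Region replica approach; the instance identifiers, Regions, and KMS key are assumptions, and the destination-Region KMS key is only needed because the source instance is encrypted:

import boto3

# Create the replica by calling RDS in the *destination* Region (assumed us-west-2).
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-dr-replica",                                    # hypothetical name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:mydb",   # hypothetical source ARN
    KmsKeyId="alias/my-dr-key",        # a key that exists in the destination Region (assumption)
    SourceRegion="us-east-1",          # lets boto3 build the required pre-signed URL
)

# During a disaster, promote the replica to a standalone, writable instance.
rds_dr.promote_read_replica(DBInstanceIdentifier="mydb-dr-replica")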
We are running AWS RDS PostgreSQL with daily automatic snapshots, encrypted by an AWS-managed KMS key. My objective is to minimize risk and data loss in case the main AWS account (running RDS) is compromised or the RDS instance is deleted or damaged in some way. What we've implemented so far: RDS snapshots are shared with a different (backup) account, periodically copied to the backup account, and re-encrypted with a KMS key from the backup account, to make the copies local and independent from the main AWS account. I'm wondering if there are better ways to minimize recovery time objective and recovery point objective in case of a disaster event?
AWS RDS disaster recovery using cross-account
You don't want a PutItem with a ConditionExpression for this case. You need to use the UpdateItem API. UpdateItem is an "upsert", so if the primary key does not exist, the API will cause a new item to be inserted. This API accepts an update expression which can be used to set, modify, and/or remove one or more fields from the item. Update expressions have a few special functions available to them, and in order to prevent overwriting existing data, you should use if_not_exists(path, operand). If path does not exist, DynamoDB will put the result of operand at path. If path does exist, it will be unchanged. The update expression for your case would look something like this (note that if_not_exists must appear on the right-hand side of an assignment): SET date_created = if_not_exists(date_created, :right_now), other_field_1 = :value_1, other_field_2 = :value_2
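The question uses Node.js, but as a minimal sketch here is the same call via Python/boto3; the table name, key, and the attribute names other than date_created are assumptions:

import boto3
from datetime import datetime, timezone

table = boto3.resource("dynamodb").Table("MyTable")   # hypothetical table name
table.update_item(
    Key={"pk": "item-123"},                           # hypothetical key
    UpdateExpression=(
        "SET date_created = if_not_exists(date_created, :right_now), "
        "other_field_1 = :value_1"
    ),
    ExpressionAttributeValues={
        ":right_now": datetime.now(timezone.utc).isoformat(),
        ":value_1": "some value",
    },
)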
So I'm trying to put an Item into a DDB table (nodejs). If the Item has a 'date_created' attribute, I want to update all fields in that Item except for the 'date_created'.I've looked at conditional expressions and from what I understand it's pretty binary. If condition == true then proceed, if not, then don't. What I'm looking for is to do the put no matter what but don't update 'date_created' if it exists.Is this possible? Am I even approaching this with the right mindset?
DynamoDB: How to update an item except for one attribute if the attribute already exists
You can use SQL to connect in whatever way you wish. When you create a dataset, choose "Use custom SQL" and you will be presented with a SQL editor window, where you can then use SQL to JOIN, UNION, or whatever to your heart's content :)
I'm using AWS Quicksight for an Analytics dashboard and I have multiple databases that have the same tables. I then set a Data Source for each database. Now, I've been trying to create a Data Set for a table, let's say "products" table, which should be a UNION of all the "products" tables of all the databases.So far, I've only been able to combine data from those databases using the UI to do a JOIN. However, for the Analysis I'm trying to create, I'll be needing to do a UNION. How do you do a UNION from multiple different databases in AWS Quicksight? Thanks.
AWS Quicksight - MySQL UNION from multiple databases
ElasticSearch support has now been implemented as of Moto 2.2.20. Other unimplemented services/features can be mocked like so:

import boto3
import botocore
from unittest.mock import patch
from moto import mock_s3

orig = botocore.client.BaseClient._make_api_call

def mock_make_api_call(self, operation_name, kwarg):
    if operation_name == 'UnsupportedOperation':
        # Make whatever changes you expect to happen during this operation
        return {
            "expected": "response",
            "for this": "operation"
        }
    # If we don't want to patch the API call, call the original API
    return orig(self, operation_name, kwarg)

# add supported services as appropriate
@mock_s3
def test_unsupported_feature():
    with patch('botocore.client.BaseClient._make_api_call', new=mock_make_api_call):
        function_under_test()

This allows you to completely customize the boto3 behaviour, on a per-method basis. Taken from the Moto documentation here: http://docs.getmoto.org/en/latest/docs/services/patching_other_services.html
I'm using moto to mock out AWS services to write my test cases, and the supported use cases are fine:

@mock_sts
def test_check_aws_profile(self):
    session = boto3.Session(profile_name='foo')
    client = session.client('sts')
    client.get_caller_identity().get('Account')

But there are a few services like es that aren't supported at all. How do I begin to mock this? I've tried this alternative mocking approach for testing, but I'm not sure if there's a better approach:

def test_with_some_domains(self, mocker):
    mocked_boto = mocker.patch('mymodule.boto3')
    mocked_session = mocked_boto.Session()
    mocked_client.list_domain_names.return_value = ['foo-bar-baz']
    mymodule.function_call()

mymodule.py

def function_call():
    session = boto3.Session(profile_name=my_profile)
    client = session.client('es', some_region)
    domain_names = client.list_domain_names()
How do you mock an aws service that's not mocked by moto?
Just make sure you are using the correct Regions. Here is the list of Regions you can use the ASK trigger with: US East (N. Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo). You cannot use other Regions to trigger ASK. Also, I think there is a limit to the size of data you can send in session attributes; it should not exceed 24 KB.
I am developing a skill for Alexa using Node.js running in AWS Lambda functions. The skill works great in my location (Europe), but my client from the USA gets errors. The Node.js code uses session attributes, and that is where I believe it fails, since the intents that don't use session attributes are triggered in the USA but the rest of the intents fail. I thought it could be because my AWS Lambda function is in Europe, so I created/duplicated one in the USA through my AWS account portal and configured it in the Alexa developer console. Unfortunately it keeps failing. Also, I changed the USA Lambda function to the default Region, but it still fails in the USA and works great for me. We tested the skill with both an Echo device and the Alexa developer console Test page. I would really appreciate some advice if someone knows a workaround or has had this issue before. The Node.js code of the Lambda functions is confidential and works great, so surely it is something related to the Regions.
My Alexa Skill works in Europe but not in the USA
You shouldn't include the port with the host name, but specify it with the -p option. So instead of -h XX.XXXXXXXX.us-east-1.rds.amazonaws.com:5432, use -h XX.XXXXXXXX.us-east-1.rds.amazonaws.com -p 5432
I am trying to export the db dump from a postgres aws-rds instance through the Ubuntu terminal using the command below, but it's throwing an error.

pg_dump -h XX.XXXXXXXX.us-east-1.rds.amazonaws.com:5432 -Fc -o -U XXUser XXDbname > output.dump
pg_dump: [archiver (db)] connection to database "XXDbname" failed: could not translate host name "XX.XXXXXXXX.us-east-1.rds.amazonaws.com:5432" to address: Name or service not known

I also tried to find the IP address using the SQL query select inet_server_addr(), and when I run the pg_dump command using this IP it throws a connection timeout. Please suggest whether there is a way to export the dump from an rds-postgres instance while having only db user access.

Update - tried with the -p port too, but still the same error:

pg_dump -h XX.31.X.X -p 5432 -Fc -o -U XXuser XXdb > XXdump.dump
pg_dump: [archiver (db)] connection to database "XXdb" failed: could not connect to server: Connection timed out
Is the server running on host "XX.31.X.X" and accepting TCP/IP connections on port 5432?
postgres db dump from aws-rds instance
I realize this answer is a bit late; I ran into a similar issue myself. According to this, you might have better luck being explicit about your Python executable and using the --python-installation flag. Try something like:

python scripts/ebcli_installer.py --python-installation /path/to/some/python/on/your/computer

or, to be extra explicit:

/path/to/your/exact/python scripts/ebcli_installer.py --python-installation /path/to/some/python/on/your/computer

This is part of the "Advanced Use" section on the EB CLI GitHub.
I'm on Mac OSX (Catalina) trying to install the AWS Elastic Beanstalk CLI.

>>> python --version
Python 2.7.16
>>> which python
/usr/bin/python
>>> python3 --version
Python 3.7.5
>>> which python3
/usr/local/bin/python3

What I've tried

Using Brew:

>>> brew uninstall awsebcli
>>> brew install awsebcli
>>> eb --version
-bash: /Users/<user>/.local/bin/eb: /Users/<user>/projects/hello-world-flask/venv/bin/python3: bad interpreter: No such file or directory

Now the funny thing is that hello-world-flask is just a toy example I have in one of my directories, but I have no idea why the EB CLI is trying to use that venv, or why it says that it doesn't exist.

>>> ls /Users/<user>/projects/hello-world-flask/venv/bin/python3
/Users/<user>/projects/hello-world-flask/venv/bin/python3

Using Pip3:

>>> brew uninstall awsebcli
>>> pip3 install awsebcli
...
Successfully installed awsebcli-3.16.0
>>> eb --version
-bash: /Users/<user>/.local/bin/eb: /Users/<user>/projects/hello-world-flask/venv/bin/python3: bad interpreter: No such file or directory

The Question

I'm assuming the EB CLI is just supposed to execute Python 3.x. How do I fix this and make the EB CLI use the correct version of Python?
AWS Elastic Beanstalk CLI Using Wrong Python Version
I am not sure about BitBucket, but natively on AWS you can push the logs from the CodeDeploy agent to CloudWatch Logs using the CloudWatch Logs agent [1]. Once in CloudWatch Logs, you will create a Metric Filter to alarm when some specific text appears in the log entries [2]. CodeDeploy agent log file locations are:

LINUX
* /opt/codedeploy-agent/deployment-root/deployment-group-ID/deployment-ID/logs/scripts.log
* /var/log/aws/codedeploy-agent/codedeploy-agent.log
* /tmp/codedeploy-agent.update.log

WINDOWS
* C:\ProgramData\Amazon\CodeDeploy\log\codedeploy-agent-log.txt
* C:\ProgramData\Amazon\CodeDeploy\deployment-group-ID\deployment-ID\logs\scripts.log
* C:\ProgramData\Amazon\CodeDeployUpdater\log\codedeploy-agent.updater.log

References:
[1] Quick Start: Enable Your Amazon EC2 Instances Running Windows Server 2016 to Send Logs to CloudWatch Logs Using the CloudWatch Logs Agent - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartWindows2016.html
[2] https://theithollow.com/2017/12/11/use-amazon-cloudwatch-logs-metric-filters-send-alerts/
I want to fetch CodeDeploy logs from my Amazon EC2 instance when a script fails during deployment, and then show the logs in BitBucket Pipelines. How can I do that? Is there any API available for fetching the logs from CodeDeploy?
How to fetch AWS CodeDeploy logs and show them in a BitBucket pipeline
In my case the problem was that the key was escaped, so I just needed to make sure it was written without the %20 (that replaced spaces).

INCORRECT
const params = {
  Bucket: bucketName,
  Key: "55bca7-40a-7708-8457-04666cd45f16-Screen%20Shot%202022-05-18%20at%2017.13.49.png", //key,
};

CORRECT
const params = {
  Bucket: bucketName,
  Key: "55bca7-40a-7708-8457-04666cd45f16-Screen Shot 2022-05-18 at 17.13.49.png", //key,
};

I believe you also need to pass the region and signature version:

const s3 = new aws.S3({
  region,
  accessKeyId,
  secretAccessKey,
  signatureVersion: "v4",
})
I'm trying to delete an object from S3 and I can't make it work. This is what I'm doing:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  accessKeyId: ID, //My accessKeyId
  secretAccessKey: SECRET //My secretAccessKey
});

var params = {
  Bucket: process.env.S3_BUCKET_NAME, //'myBucket'
  Key: file //'places-images/06850015-3d55-427b-a2f3-b8c2a56a42d8madametussauds.jpg'
}

s3.deleteObject(params, (err, data) => {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
})

The data object returned is always empty and the file does not go away from S3. This bucket is not versioned, btw. I tried the examples from here and the AWS docs but they don't seem to work. It's driving me crazy because it seems pretty straightforward! Any help will be greatly appreciated!
Cannot delete object from S3 (Node.js)
The issue came from an insufficient memory limit, which was set to the default 128 MB. When I increased it to 256 MB, the connection worked as expected and it retrieved the page content. So it turns out AWS kills the Lambda when it reaches the maximum memory limit without any message in CloudWatch (you do get a message if it gets killed by timeout, though).
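For reference, a minimal boto3 sketch for raising the memory limit; the function name is a placeholder and the same setting can of course be changed in the console:

import boto3

lambda_client = boto3.client("lambda")
# Raise the memory limit from the default 128 MB to 256 MB (function name is hypothetical).
lambda_client.update_function_configuration(
    FunctionName="my-url-fetcher",
    MemorySize=256,
)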
I'm trying to fetch the content of a webpage from AWS Lambda but it fails silently; the CloudWatch logs just contain the following lines (timeout is set to 10 secs, so that's why duration is 10000 ms):

START RequestId: f101849f-1219-411b-8875-8944a76de937 Version: $LATEST
END RequestId: f101849f-1219-411b-8875-8944a76de937
REPORT RequestId: f101849f-1219-411b-8875-8944a76de937 Duration: 10008.39 ms

The code I'm running just takes the URL and reads the content. It works fine from the local environment but it doesn't when testing on AWS Lambda:

public class TestHandler implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object o, Context context) {
        try {
            Connection con = HttpConnection.connect(new URL("https://www.google.com"));
            con.timeout(40000);
            return con.get().getAllElements().toString();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}

Any AWS restriction that I'm not aware of when dealing with requests?
Silent error when fetching URLs from AWS Lambda
I am not sure about the AWS console, but with the AWS CLI we can do this from the terminal using this AWS document, e.g.:

aws kinesis list-stream-consumers --stream-arn arn:aws:kinesis:<region_name>:<account_id>:stream/<stream_name>
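The same call is available in boto3 if you prefer a script; this is a minimal sketch and the stream ARN is a placeholder. Note that this lists registered (enhanced fan-out) consumers; standard polling consumers such as a Lambda event source mapping won't appear here.

import boto3

kinesis = boto3.client("kinesis")
resp = kinesis.list_stream_consumers(
    StreamARN="arn:aws:kinesis:<region_name>:<account_id>:stream/<stream_name>"
)
for consumer in resp["Consumers"]:
    print(consumer["ConsumerName"], consumer["ConsumerStatus"])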
Is there any way I can view all the consumers(lambda, firehose, etc.) of the kinesis stream in AWS console page?
Way to see all consumers of a Kinesis Stream in AWS console
If you are creating the RestHighLevelClient yourself (or control the creation of this object), you can use the constructor that accepts a RestClientBuilder. Use the RestClient.builder() method to create a RestClientBuilder with a custom SSLContext. The following is from the Elasticsearch source code:

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "https"))
    .setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
                HttpAsyncClientBuilder httpClientBuilder) {
            return httpClientBuilder.setSSLContext(sslContext);
        }
    });

In your case you need to create an SSLContext that trusts all hosts:

SSLContext context = SSLContext.getInstance("SSL");
context.init(null, new TrustManager[] {
    new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {}
        public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        public X509Certificate[] getAcceptedIssuers() { return null; }
    }
}, null);

The above is completely untested but may get you started. Feel free to update this answer with more details if it works for you.
Am hoping to use localstack to simulate Elasticsearch/Kinesis/Dynamo. Am running into trouble with my Elastic code wanting HTTPS endpoints. Testing via Java 11/IntelliJ. In all cases I am hitting this error:

Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I've tried:
* Starting localstack with the USE_SSL environment variable set. curl commands (e.g. "curl -k https://localhost:port") for Elastic work but Java does not.
* Using localstack annotations ("@RunWith(LocalstackDockerTestRunner.class)") and passing in "USE_SSL" as a param - no go.
* Telling Java to ignore "bad" certs via System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true"); - no go.
* Passing the cmdline param -Dcom.amazonaws.sdk.disableCertChecking - no go.

I feel like what I'm looking for is doable... just can't seem to find the right combination of settings.
Convincing java to talk to "localstack" via SSL
I would check the target group's health check, since it is waiting for a replacement task to become healthy. Is your current deployment of ECS targets HEALTHY? If they are not, the ALB will be trying to bounce these containers to try to refresh them until a health check passes. Also, does your CodeDeploy have access to deploy to ECR?
As the title suggests, the blue/green deployment for ECS never finishes because the Install lifecycle event never finishes and times out.

The appspec file:

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "WordpressContainer"
          ContainerPort: 80

The taskdef file:

{
  "executionRoleArn": "arn:aws:iam::336636872471:role/WordpressPipelineExecutionRole",
  "containerDefinitions": [
    {
      "name": "WordpressContainer",
      "image": "<IMAGE1_NAME>",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "wordpress"
}

I am pushing a bare-bones WordPress Docker image to ECR, which triggers a pipeline, but it gets stuck on CodeDeploy. Any ideas what is happening? How am I even supposed to debug that?

P.S. it timed out in 60 minutes with the message: The deployment timed out while waiting for the replacement task set to become healthy. This time out period is 60 minutes.
AWS Blue/Green CodeDeploy to ECS Install lifecycle event times out
They do not. I received this answer from AWS support today:

To start with, currently you need to have a user in the RDS database instance which is being used as a DMS endpoint [+]. IAM authentication to connect to RDS instances being used as DMS endpoints is currently not supported. In regards to the above mentioned documentation [+], you can follow the steps to use the master user or a non-master account for the PostgreSQL DB instance as the user account for the PostgreSQL source endpoint for AWS DMS. With this said, I have raised a feature request with our internal team to check the feasibility of using IAM authentication to connect RDS instances being used as DMS endpoints. At the moment there is no ETA on when this will be implemented.

[+] https://repost.aws/tags/TAsibBK6ZeQYihN9as4S_psg?forumID=60
[+] https://aws.amazon.com/blogs/database/tag/dms/
My team is currently experimenting with IAM database authentication for our RDS MySQL Aurora clusters. We also use DMS to migrate data between DBs. However, it doesn't look like DMS supports IAM authentication. Is there any support for DMS endpoints with IAM DB authentication, or is this not the correct pattern? We tried setting the password to the token directly, but the min password length for DMS endpoints is 128, so it's not an option.
Do DMS endpoints support RDS IAM authentication?
This depends on the type of AWS service. Unfortunately, for "best effort" events you should consider writing a custom reliability layer on top of these events. You can find the list of services and their reliability levels here: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html
I read one of the docs on AWS CloudWatch Events and it states: "Note that you cannot write a program that depends on the order or existence of notification events, as they might be out of sequence or missing. Events are emitted on a best effort basis." Is this true for AWS CloudWatch Events in general, i.e. is it a best-effort service rather than a guaranteed one for all services on AWS?
Cloudwatch Events Reliability
You need to provide credentials to AWS; to make it look less explicit you can pass a config to the boto3 client:

from botocore.client import Config

s3 = boto3.client('s3',
    config=Config(signature_version='s3v4'),
    endpoint_url='https://console.wasabisys.com',
    aws_access_key_id='<4XR7DSJX1MYFYXETUGBA>',
    aws_secret_access_key='<8aD49ac2cJsuWr7crjRTAN0jqH4JzyV6uQwhJyw1>')
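To complete the picture, a presigned URL generated from that client would be built like the sketch below (the bucket is the one from the question; the key is illustrative). Note that a presigned URL always embeds the access key ID as a query parameter by design, since the access key ID is not a secret.

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'testing-usman', 'Key': 'some/object.txt'},  # key is illustrative
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)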
I am generating a pre-signed URL on AWS S3 using Python. After generating it, the URL shows my access key. How can I generate the URL without showing my access key?

https://console.wasabisys.com/testing-usman/?AWSAccessKeyId=

This is the code I am using:

s3 = boto3.client('s3',
    endpoint_url='https://console.wasabisys.com',
    aws_access_key_id='<4XR7DSJX1MYFYXETUGBA>',
    aws_secret_access_key='<8aD49ac2cJsuWr7crjRTAN0jqH4JzyV6uQwhJyw1>')

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'testing-usman',
        'Key': '.'
    }
)
print(url)

I need the URL without showing my access key.
how can I hide my access key in pre-signed url by aws s3 using python
This is a known issue, discussed here: link. I would say there is no solution for this as of now. I have tried everything, but due to the hosted UI it does not return that error message. In my case I'm getting the error:

AuthException{message=Sign-in with web UI failed, cause=com.amazonaws.mobileconnectors.cognitoauth.exceptions.AuthServiceException: invalid_request, recoverySuggestion=See attached exception for more details}.

It is reporting invalid_request instead of the Lambda error message.
I'm using the hosted UI for user signup and login. I have a pre-signup trigger which denies users based on some criteria. How can I send the error message back to the caller or callback URL? Btw, I've tried passing the error back as stated here, but it doesn't work. I'm guessing it's because I'm using the hosted UI. Thanks, Simon
Aws Cognito Hosted UI. Return error message from Triggers
You can use CredentialProviderChain:

const awsManageStore = new AWS.SSM({
  credentialProvider: new AWS.CredentialProviderChain(AWS.CredentialProviderChain.defaultProviders)
});
I'm writing a Node.js app that uses the AWS SDK. The Java documentation describes a very convenient concept called the default credential provider chain. I could not find the same concept in the Node.js API documentation. I'm hoping that Node/JavaScript has this as an undocumented feature. Does the JavaScript API provide a default credential provider chain, and if so, how do I use it?
Is there a node.js default credential provider chain?
Seems like the terraform provider doesn't support this field. I'll try to suggest a PR adding it.
I am new to Terraform and AWS. I have a requirement for provisioning ElastiCache Redis with cluster mode disabled. I have gone through the documentation of the aws_elasticache_replication_group resource, and it specifies primary_endpoint_address as the address of the endpoint for the primary node in the replication group, if cluster mode is disabled. And according to the AWS docs: "For Redis (cluster mode disabled) clusters, use the Primary Endpoint for all write operations. Use the Reader Endpoint to evenly split incoming connections to the endpoint between all read replicas. Use the individual Node Endpoints for read operations (In the API/CLI these are referred to as Read Endpoints)." My question is: how can we get the reader endpoint address from aws_elasticache_replication_group?
How to infer reader endpoint address from aws_elasticache_replication_group with cluster mode disabled
You can attach encrypt/decrypt permission to EMR_EC2_DefaultRole:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "kms:*",
      "Resource": [ ... ]
    }
  ]
}

Specify your keys in Resource and attach the policy to the role.
I have the below situation for my EMR; can someone please guide me on how to configure it? The EMR cluster performs multiple operations across the data pipeline:

* EMR writes to S3BUCKET1 with KMSKEY1
* EMR writes to S3BUCKET2 with KMSKEY2

How do I configure the above in EMR? The only options I am aware of to configure EMR are emrfs-site.xml and /etc/hadoop/conf.empty/core-site.xml. These have the tag fs.s3.serverSideEncryption.kms.keyId. How do I achieve my requirement with the above KMS keys? I need to switch between KMS keys for different bucket writes.
AWS EMR encrypt S3 bucket using KMS
You need to add the -n option to ssh when running it in the background, to avoid it reading from stdin.
I'm attempting to create an ssh tunnel when deploying an application to AWS Beanstalk. I want to put the tunnel as a background process that is always connected on application deploy. The script is hanging forever on the deployment and I can't see why.

"/home/ec2-user/eclair-ssh-tunnel.sh":
  mode: "000500" # u+rx
  owner: root
  group: root
  content: |
    cd /root
    eval $(ssh-agent -s)
    DISPLAY=":0.0" SSH_ASKPASS="./askpass_script" ssh-add eclair-test-key </dev/null
    # we want this command to keep running in the background
    # so we add & at the end
    nohup ssh -L 48682:localhost:8080 ubuntu@[host...] -N &

and here is the output I'm getting from /var/log/eb-activity.log:

[2019-06-14T14:53:23.268Z] INFO [15615] - [Application update suredbits-api-root-0.37.0-testnet-ssh-tunnel-fix-port-9@30/AppDeployStage1/AppDeployPostHook/01_eclair-ssh-tunnel.sh] : Starting activity...

The ssh tunnel is spawned, and I can find it by doing:

[ec2-user@ip-172-31-25-154 ~]$ ps aux | grep 48682
root 16047 0.0 0.0 175560 6704 ? S 14:53 0:00 ssh -L 48682:localhost:8080 [email protected] -N

If I kill that process, the deployment continues as expected, which indicates that the bug is in the tunnel script. I can't seem to find out where though.
ssh tunnel script hangs forever on beanstalk deployment
Natively, CodePipeline does not give any detailed information on failure apart from the final error message (and even that is mostly visible only on the specific stage), and it is not possible to see that error via the payload that is passed to the event. This is a limitation of CodePipeline.
I am trying to set up my project deployment using AWS CodePipeline, and I would like to get an email notification with the CodePipeline logs when my deployment fails, so that I don't have to log in to the AWS account every time to see the logs. I searched through various blogs, documentation and examples, but it didn't help. The below JSON is what I used to create the AWS CloudWatch rule:

{
  "detail-type": [
    "CodePipeline Stage Execution State Change",
    "CodePipeline Action Execution State Change",
    "CodePipeline Pipeline Execution State Change"
  ],
  "source": [
    "aws.codepipeline"
  ],
  "detail": {
    "pipeline": [
      "ui-pipeline"
    ],
    "state": [
      "FAILED"
    ]
  }
}

The email I am getting contains this JSON:

{
  "version":"0",
  "id":"xxx-493f-de1d-94b7-xxx",
  "detail-type":"CodePipeline Stage Execution State Change",
  "source":"aws.codepipeline",
  "account":"xxxx",
  "time":"2019-06-13T05:50:17Z",
  "region":"ap-south-x",
  "resources":[
    "arn:aws:codepipeline:ap-south-1:xxx:ui-pipeline"
  ],
  "detail":{
    "pipeline":"ui-pipeline",
    "execution-id":"xxx-fbcf-40f7-xxx-xxxx",
    "stage":"Deploy",
    "state":"FAILED",
    "version":1.0
  }
}

I want the logs of AWS CodePipeline as well.
Can we send failed logs of AWS CodePipeline through AWS Simple Notification Service?
Add the cross-origin resource sharing (CORS) policy to your S3 bucket:

[
  {
    "AllowedHeaders": [ "*" ],
    "AllowedMethods": [ "GET", "HEAD" ],
    "AllowedOrigins": [ "*" ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
I am new to AWS S3 buckets. When I try to upload a file to an S3 bucket I get the error: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://xxxxxxx.com. Can anyone please help me resolve this issue?

CORS Configuration

<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Getting error Cross-Origin Request Blocked in s3 bucket file uploading
There are a lot of ways to get data into Redshift, but even inside Amazon there's only one way of getting data out of Redshift: UNLOAD to S3 (https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html). On the other hand, Microsoft appears to provide a connector for migrating: https://learn.microsoft.com/en-us/azure/data-factory/connector-amazon-redshift
I want to migrate from Amazon Redshift to Microsoft Azure. Is there an easy way of doing this copy?
Is there an easy way of migrating from Amazon Redshift to Microsoft Azure Data Warehouse?
I had the wrong function_name for the resource "aws_lambda_event_source_mapping". I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream based on the weight!

From the AWS docs: "Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version." https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html
I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as the trigger.

How the event source is set up:

resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  function_name     = "${aws_lambda_function.test_lambda.arn}"
  starting_position = "LATEST"
}

How the alias is created:

resource "aws_lambda_alias" "test_lambda_alias" {
  count            = "${var.create_alias ? 1 : 0}"
  depends_on       = [ "aws_lambda_function.test_lambda" ]
  name             = "test_alias"
  description      = "alias for my test lambda"
  function_name    = "${aws_lambda_function.test_lambda.arn}"
  function_version = "${var.current_running_version}"
  routing_config   = {
    additional_version_weights = "${map(
      "${aws_lambda_function.test_lambda.version}", "0.5"
    )}"
  }
}

The Lambda works with the DynamoDB stream as a trigger. The alias for the Lambda is successfully created. The alias is using the correct version. The alias is using the correct weight. The alias is NOT using the DynamoDB stream as the event source.
How to set up a lambda alias with the same event source mapping as the LATEST/Unqualified lambda function in terraform
I had the same problem. The issue was because our zipped code in S3 passed the 51kb limit. My zipped code is 62kb. I deleted some files, tested it again, and it worked.
I'm trying to set up a deployment process as follows: Travis-CI.com grabs the codebase (NodeJS), builds and tests it, uploads it to S3 as a zip and then kicks off a CodeDeploy deployment (ECS). Here is my .travis.yml:

language: node_js
node_js:
  - '12'
before_deploy:
  - zip -rq latest *
  - mkdir -p upload
  - mv latest.zip upload/latest.zip
deploy:
  - provider: s3
    bucket: "myBucket"
    access_key_id:
      secure: keystuff
    secret_access_key:
      secure: keystuff
    local_dir: upload
    skip_cleanup: true
    on:
      branch: develop
  - provider: codedeploy
    bucket: "myBucket"
    key: latest.zip
    bundle_type: zip
    application: "myApp"
    deployment_group: "myDeploymentGroup"
    region: "us-east-1"
    access_key_id:
      secure: keystuff
    secret_access_key:
      secure: keystuff
    on:
      branch: develop

My appspec.yml (truth be told, I'm not sure what should go in here):

version: 0.0
os: linux

The upload to S3 succeeds, but the deployment task fails with the following error:

The revision size is too large. Its maximum size is 51200B.

I see this error under CodeDeploy > Deployments > DeploymentID. Not sure what I'm doing wrong here - any insight?
CodeDeploy Error: "The revision size is too large. Its maximum size is 51200B."
It looks like a key condition expression does not support NOT begins_with(x). This might be because the result set is not contiguous (it's the items before x, aggregated with those after x). Some possible solutions are:
* make the gameScoreId a non-key attribute (or replicate it into a new non-key attribute), then you can query on the userId and filter on the gameScoreId (you cannot filter on key attributes), or
* scan the table, in which case you can apply the filter expression that you want to use (obviously you will have performance concerns with very large tables).
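As a minimal boto3 sketch of the first option, assuming you have copied gameScoreId into a non-key attribute (here called gameScoreIdCopy, a hypothetical name) so it can be used in a filter expression:

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table name
resp = table.query(
    KeyConditionExpression=Key("userId").eq("user-123"),
    # NOT begins_with(...) is allowed in a filter expression on a non-key attribute;
    # the ~ operator negates a condition.
    FilterExpression=~Attr("gameScoreIdCopy").begins_with("score_")
    & ~Attr("gameScoreIdCopy").begins_with("level_"),
)
items = resp["Items"]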
I am trying to run a DynamoDB query that says I want items that do not begin with a particular value. I cannot seem to find a way to do this. I have tried the following 4 ways of evaluating, and none of them work. Each one gives me an invalid operator error. My KeyConditionExpression(s) that I have tried look like this:

!begins_with(gameScoreId, :scoreScore) AND !begins_with(gameScoreId, :scoreLevel) AND userId = :userId

<>begins_with(gameScoreId, :scoreScore) AND <>begins_with(gameScoreId, :scoreLevel) AND userId = :userId

NOT begins_with(gameScoreId, :scoreScore) AND NOT begins_with(gameScoreId, :scoreLevel) AND userId = :userId

begins_with(gameScoreId, :scoreScore) = false AND begins_with(gameScoreId, :scoreLevel) = false AND userId = :userId

If I remove the NOT operators, I get this error: KeyConditionExpressions must only contain one condition per key. Is there a way to do this in DynamoDB?
DynamoDB: Does not begin with
Try to remove and redeploy the Lambda. Also, make sure it has permissions to write to CloudWatch.
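If you prefer not to redeploy, a minimal boto3 sketch to recreate the log group by hand is below; Lambda normally recreates it on the next invocation only if its execution role has the logs:CreateLogGroup, logs:CreateLogStream and logs:PutLogEvents permissions, and the function name here is a placeholder:

import boto3

logs = boto3.client("logs")
# Lambda writes to a log group named /aws/lambda/<function-name>.
logs.create_log_group(logGroupName="/aws/lambda/my-function")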
I accidentally deleted a Lambda log group in CloudWatch. Now my Lambda fails and I do not see the log group reappear in CloudWatch. Is it supposed to be recreated automatically? How can I fix the situation? I tried recreating the log group manually but it didn't receive any logs.
AWS Lambda log group not recreated after deletion
Both Amazon SNS and Amazon Pinpoint support sending push notifications to various devices (e.g. Android, iOS, etc.). The major difference between Amazon SNS and Amazon Pinpoint is that with Amazon SNS you have to set up your application to manage each message's audience, content, and delivery schedule. On the other hand, with Amazon Pinpoint you do not have to code these features; most of them are already built in. With Amazon Pinpoint, you can collect data about your app usage, create highly targeted segments and send full campaigns (either immediate or scheduled), plus many more features.
For the purpose of sending push notifications from the backend, if we need a Push Notification Platform, could you please suggest which of these is intended for that purpose – Amazon SNS or Pinpoint?
Amazon Pinpoint API vs AWS Simple Notification Services
labs.vocareum manages the keys for your AWS Educate account. It is not possible to do this, since you will be locked out before you can reschedule your session. Unfortunately, there is currently no way other than the current method.
I have an AWS Educate Starter Account, and I want to be able to generate my credentials (aws_access_key_id, aws_secret_access_key, aws_session_token) automatically from my code. Currently, the way I do it is:
1) Log in with my university email and password at labs.vocareum.com
2) Click on Account Details and copy and paste the credentials into ~/.aws/credentials for the AWS CLI
3) In my Python code, use boto3 to interact with S3
But I would like to do everything in my Python script, without logging in every time and copying the credentials, since they are temporary credentials (they expire every hour). The type of account doesn't allow me to create an IAM user either. This is a similar question, but it is 2 years old and doesn't have an answer on how to do it without logging in. Is there any way to do it?
AWS Educate Starter Account obtain credentials in Python with boto3
Yes, the cost does vary across S3 Regions, and data transfer across Regions is billed as well. You can choose any AWS Region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or EU (Frankfurt) Regions. It varies for CloudFront too, based on regional data transfer. You can find pricing based on Regions here: https://aws.amazon.com/cloudfront/pricing/
Can the cost vary for using CloudFront and an S3 bucket at each location?
Does cost vary for each edge location using CloudFront or an S3 bucket?
As discussed in the comments, the React project is hosted through Expo, which uses AWS in the background; that explains why the error is about an object on CloudFront. Additionally, the object is accessible directly through the link, which means the issue is not related to AWS but rather to the React project itself or to an issue with the deployment through Expo.
I apologize for not providing that much detail, but there is nothing I can do about it. I have a React Native app hooked up with Sentry, a crash analytics tool, and I have been getting this crash report lately with no trace to its cause:

Error: Could not download from 'https://d1wp6m56sqw74a.cloudfront.net/~assets/a9df2c73b9dd467f9205fdc02ab3828f'

The error seems to be coming from a React Native library, as indicated here by Sentry. The link in the error points to one of many images stored locally inside the project bundle files. I have done some searching and found out that this error is related to AWS. So the question is: what in the world are my local images doing in AWS?!
Why are my users getting this AWS-related error?
This option is not available yet. Per the documentation (https://docs.aws.amazon.com/firehose/latest/dev/firehose-tagging.html):

"Tagging Delivery Streams Using the Amazon Kinesis Data Firehose API - You can specify tags when you invoke CreateDeliveryStream to create a new delivery stream. For existing delivery streams, you can add, list, and remove tags using the following three operations: TagDeliveryStream, ListTagsForDeliveryStream, UntagDeliveryStream"

Currently, you can add tags using the API (https://docs.aws.amazon.com/firehose/latest/APIReference/API_TagDeliveryStream.html) or the CLI (https://docs.aws.amazon.com/cli/latest/reference/firehose/tag-delivery-stream.html).
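For example, a minimal boto3 sketch of tagging an existing delivery stream (the stream name and tag values are placeholders):

import boto3

firehose = boto3.client("firehose")
firehose.tag_delivery_stream(
    DeliveryStreamName="my-delivery-stream",          # hypothetical stream name
    Tags=[{"Key": "Environment", "Value": "dev"}],
)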
I am creating a Kinesis Firehose delivery stream via a CloudFormation template and need to add tags to it. How do I add a tag to a Kinesis Firehose delivery stream through a CloudFormation template?
How to add tag to a Kinesis Firehose Delivery stream through cloudformation template?
You will need the following grant for the owner of the procedure, probably from your DBA:

Grant execute on rdsadmin.RDSADMIN_S3_TASKS to <procedure_Owner>;
Today, I followed the instructions given to perform Oracle RDS integration with S3, to import files from an S3 bucket into a database directory. I was able to perform all the steps and can see the files imported from my S3 bucket in the DATA_PUMP_DIR directory on the RDS instance. When I run the query

SELECT filename FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by mtime;

I get the output listing the files I imported. Now I am planning to read these files in a PL/SQL block, and the issue arises here. When I run something like this:

DECLARE
BEGIN
  FOR fn in (SELECT * FROM table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime)
  LOOP
    dbms_output.put_line('File name is ' || fn.filename);
  END LOOP;
END;

I can see the output in the dbms output window. However, when I try to call this inside a procedure like the following:

CREATE OR REPLACE PROCEDURE test1
IS
BEGIN
  FOR fn in (SELECT * FROM table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime)
  LOOP
    dbms_output.put_line('File name is ' || fn.filename);
  END LOOP;
END test1;

I receive the error:

3/14 PL/SQL: SQL Statement ignored
3/43 PL/SQL: ORA-01031: insufficient privileges

I searched online for this error and couldn't get any leads. I tried writing the procedure with invoker rights (CREATE OR REPLACE PROCEDURE test1 AUTHID CURRENT_USER IS) and it still gave me the same error. Can someone please throw light on this?
RDSADMIN.RDS_FILE_UTIL.LISTDIR works from block and not in procedure
Have you tried sam package before deploying? You might be using old code in which the cron is wrong... Try:

sam package --s3-bucket your-bucket-name --output-template-file packaged.yaml
We're trying to have a Lambda on AWS that is scheduled to run every Monday at 9:45 am. Our cron expression is of the form: cron(45 9 ? * MON *). The AWS docs explicitly state that there are six required fields for crons (you can ignore seconds) - see the documentation here: https://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html

However, deploying the Lambda results in an error:

An error occurred: ConsumerEventsRuleSchedule13 - Parameter ScheduleExpression is not valid. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: ValidationException)

I've managed to fix it by adding seconds to the expression: cron(0 45 9 ? * MON *). However, I'm still confused about why the original expression was invalid. The AWS documentation (linked above) even provides a working expression for a task running every work day (Monday - Friday): cron(0 18 ? * MON-FRI *), which looks just like what we tried to use originally, minus the weekday range (since we want one specific weekday). Any clues?
Why is this cron expression incorrect on AWS?
OK, I did it. It is really bad but it works. I used both boto3 and the aws-cli:

import subprocess
import boto3

folders = []
with open('folders_list.txt', 'r', newline='') as f:
    for line in f:
        line = line.rstrip()
        folders.append(line)

def download(bucket_name):
    s3_client = boto3.client("s3")
    result = s3_client.list_objects(Bucket=bucket_name, Prefix="my_path/{}/".format(folder), Delimiter="/")
    subfolders = []
    for i in result['CommonPrefixes']:
        subfolders.append(int(i['Prefix'].split('{}/'.format(folder), 1)[1][:-1]))
    subprocess.run(['aws', 's3', 'cp',
                    's3://my_bucket/my_path/{0}/{1}'.format(folder, max(subfolders)),
                    'C:\\Users\\it_is_me\\my_local_folder\\{}.'.format(folder),
                    '--recursive'])

for folder in folders:
    download('my_bucket')
I have a list of folder names in a txt file like:

folder_B
folder_C

There is a path in an S3 bucket where I have folders like:

folder_A
folder_B
folder_C
folder_D

Each of these folders has subfolders like:

0
1
2
3

For every folder in the text file, I have to find the folder in S3 and download the content of its subfolder with the highest number only. Doing this with Python/boto3 seems to be complicated. Is there a simple way to do this with the AWS command line?
Download from Amazon S3, AWS CLI or Boto3?
The two options to achieve CDC in Glue are: 1. use an audit column in the source database and pass it in the SQL used to extract the data; 2. if the data is no more than a few hundred thousand records, extract the full data set and compare it using Spark SQL.
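As an illustration of the first option, here is a minimal PySpark sketch that pulls only rows changed since the last run; the JDBC settings, the table name, the audit column updated_at, and how the watermark value is persisted are all assumptions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

last_run = "2019-01-01 00:00:00"  # in a real job, load this watermark from e.g. DynamoDB or SSM

changed = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://source-host:3306/mydb")  # hypothetical connection settings
    .option("user", "etl_user")
    .option("password", "etl_password")
    .option("query", "SELECT * FROM orders WHERE updated_at > '{}'".format(last_run))
    .load()
)
changed.write.mode("append").parquet("s3://my-bucket/cdc/orders/")  # hypothetical target path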
We are comparing Change Data Capture (CDC) capabilities for AWS Glue to SnapLogic and Informatica. AWS Glue has the ability to detect changes in the data structure. I am looking for specific examples of how to detect changes in data (i.e. modified data or new data). Has someone used AWS Glue to pull in only new/modified records? If so, how?
How to support CDC with AWS Glue
You can run your code locally using the serverless-offline plugin. Supply your JDWP agent string via the environment variable JAVA_TOOL_OPTIONS. Be sure to set suspend=y to ensure you can connect your debugger before the function finishes executing. There are lots of ways of doing this; I suggest using the serverless-dotenv plugin. For example:

cat >> .serverless/.env <<EOF
JAVA_TOOL_OPTIONS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:3888
EOF
sls offline --stage local --debug=*

Using JAVA_TOOL_OPTIONS should work in AWS too, according to the docs... but your mileage may vary with tunneling etc.
I am working on a project using the serverless framework with code written in Java and deployed on AWS. I am able to deploy the lambda functions to AWS and I am also able to use the "serverless invoke local" command to run them locally.What I haven't figured out is how to attach to and debug this locally running function. I don't see any promising command line options, and the java invocation didn't make use of the $JAVA_OPTS environment variable to configure it there.
debugging java functions via serverless-framework invoke local
I think this sort of thing is better suited to either the build stage of your Docker image, using RUN commands (which is good for installing things), or to being wrapped up in a single script that is added to your Docker image using the COPY command in the Dockerfile.
I am running my first Batch job through the console, using amazon/linux as my image. The Docker image has its command set as CMD "/bin/bash". How do I pass a script through the command override during job submission? When I pass the command as yum -y install unzip aws-cli --aws s3 ls I get an error; I also tried using && with no luck. How do I combine multiple statements and pass them via the command override to run them in sequence? Appreciate someone's help.
AWS Batch - Container override - How to pass a sequence of statements
The delay is caused by multiple actions that need to happen when launching a training job, including provisioning the instances, downloading the algorithm's Docker image, and downloading your dataset. The SageMaker team is continuously improving the platform to reduce this latency. Meanwhile, if you are running your training job with deep learning frameworks, you can leverage the local mode feature to run your training job on your notebook instance while you are testing it. After that, you can launch the training job on a remote cluster to train your model against the large dataset. To enable local mode, you simply specify the instance type to be "local" when you launch a training job from the notebook instance. More details about local mode can be found here: https://github.com/aws/sagemaker-python-sdk#sagemaker-python-sdk-overview
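A minimal sketch of local mode with the SageMaker Python SDK is below; the entry-point script, framework versions, and data path are assumptions, and parameter names differ slightly between SDK v1 and v2 (v2 names shown):

import sagemaker
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",                  # hypothetical training script
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="local",                   # run in the notebook instance instead of a remote cluster
    framework_version="2.4.1",               # illustrative version pair
    py_version="py37",
)
# Local files avoid the S3 download wait while iterating; switch to an s3:// URI for real training.
estimator.fit("file:///home/ec2-user/SageMaker/data")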
When running a non-local instance on SageMaker like ml.p3.2xlarge, I can't use a local file:// URI and must use s3://. This makes sense, since it is a new instance. However, when using the S3 URI for a 100 GB dataset, tf_estimator.fit('s3://bucket/path/to/my/data'), so that I can use a larger non-local training instance, I have to wait around 10 minutes for the data to be downloaded from S3 to the instance. It seems to be an issue even on instances with a quoted 10 gigabit/second or faster connection. Is there a way around this wait time, which seems like it would become a severe problem with larger datasets?
Is there a way to bypass downloading training data on SageMaker?
DynamoDB needs a global secondary index. The @key transform (or directive) can be added to your schema.graphql file as a sibling to the already present @model directive (see the Key Directive Documentation). Regarding the definition below:

directive @key(fields: [String!]!, name: String, queryField: String) on OBJECT

The important thing is that the first element of the "fields" argument is the partition (or hash) key. Every subsequent element is a sort key. Together, the sort keys will determine the order of the returned results. (Doing so will cause amplify push to create an entirely new GraphQL field on the Query type. The value of queryField will be the new field name. The name argument is the global secondary index name.)
I'm writing a Facebook-wall-like function for my web app with amplify-cli and Vue, and I need to do a simple server-side order-by/sort in my query. It seems impossible... I have tried the standard GraphQL way of adding a sort, and it does not work.

The query generated by amplify-cli:

query ListWallposts(
  $filter: ModelWallpostFilterInput
  $limit: Int
  $nextToken: String
) {
  listWallposts(filter: $filter, limit: $limit, nextToken: $nextToken) {
    items {
      id
      content
      createdAt
      comments {
        nextToken
      }
      user {
        id
        firstname
        lastname
      }
    }
    nextToken
  }
}

My addition:

query ListWallposts(
  $filter: ModelWallpostFilterInput
  $limit: Int
  $nextToken: String
) {
  listWallposts(filter: $filter, limit: $limit, nextToken: $nextToken, sort: {
    field: createdAt,
    order: ASC
  }) {
    items {
      id
      content
      createdAt
      comments {
        nextToken
      }
      user {
        id
        firstname
        lastname
      }
    }
    nextToken
  }
}

I cannot add a primary sort key to DynamoDB after amplify-cli table creation. I have spent days trying to figure this simple thing out... Any help would be very welcome.
How to serverside order/sort query results in amplify-cli
The AWS console displays only S3 and Lambda because, at the moment, only those two services are supported for logging data events (e.g. PutObject). DeleteTable is a management event, and it is listed in the documentation you posted. If you configure your trail to log all management events, all AWS services you use, including DynamoDB, will log these management events. Just create a new trail and include all management events. Then, in your trail's event history, you will find events such as DeleteTable.
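For completeness, a minimal boto3 sketch of creating such a trail with management events enabled; the trail name and bucket are placeholders, and the bucket must already carry a CloudTrail bucket policy:

import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="management-events-trail",          # hypothetical trail name
    S3BucketName="my-cloudtrail-bucket",     # hypothetical bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,
)
# Log all management events (read and write); DynamoDB calls like DeleteTable fall in this category.
cloudtrail.put_event_selectors(
    TrailName="management-events-trail",
    EventSelectors=[{"ReadWriteType": "All", "IncludeManagementEvents": True}],
)
cloudtrail.start_logging(Name="management-events-trail")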
AWS indicates that there is support for using CloudTrail to track events in DynamoDB, in the link here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/logging-using-cloudtrail.html. However, in the instructions there is no option to pick DynamoDB anywhere (only S3 and Lambda options are available), so I am looking for any instructions on how to track DynamoDB events. Specifically, I want to know when a table has been deleted. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html Has anyone had any luck? Thanks!
How To Create a CloudTrail for DynamoDb in AWS?
I don't know for sure if this will work, but you probably want to change that {key} path parameter to {proxy+}, which turns the path parameter into a /* wildcard. See https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-settings-method-request.html. Good luck!
I am having trouble accessing S3 via API Gateway. I am using the following template:

/s3:
  get:
    produces:
      - application/json
    parameters:
      - name: "key"
        in: "query"
        required: false
        type: "string"
    responses:
      "200":
        description: 200 response
    x-amazon-apigateway-integration:
      credentials:
        Fn::GetAtt:
          - ApiRole
          - Arn
      requestParameters:
        - integration.request.path.key: "method.request.querystring.key"
      uri: "arn:aws:apigateway:eu-west-1:s3:path/{key}"
      consumes:
        - application/json
      produces:
        - application/json
      responses:
        default:
          statusCode: '200'
      passthroughBehavior: when_no_match
      httpMethod: GET
      type: aws

But when I test it, I get an error:

Execution failed due to configuration error: Illegal character in path at index 35: https://s3-eu-west-1.amazonaws.com/{key}
Thu Dec 13 22:46:03 UTC 2018 : Method completed with status: 500

Probably my query string is not overridden in the integration request, but I can't figure out how to do it right.
Execution failed due to configuration error: Illegal character in path in API Gateway
I just saw that I left this question unanswered. So, here's what I did to fix the issue. Shortly after posting this question, I tried the AWS CLI Windows installer once more and it worked. I still don't know why the initial installation didn't work, but I am able to upload to S3 via the CLI without Python encoding errors.
I'm having the exact same issue as another unanswered post, but I'm willing to give whatever code/setup is needed to get the question answered properly. Like in the post I mentioned above, I am also trying to deploy files to S3 with the AWS CLI and I receive the same error:

upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna

I have the newest version of Python and the AWS CLI. I can get the Python shell to import encodings.idna, but the AWS CLI process boots its own shell to run the commands, I assume. Which may mean that I need to somehow inject the import statement into the AWS CLI process. I've tried to edit the aws.cmd programs (one in /bin and one in /scripts), but nearly every change stopped the program from working properly. I'm not sure what to post that can help determine what my issue is, so please let me know.
Windows Python & AWS CLI Unknown Encoding idna
You can download the complete file via the AWS console. But if you want to download it via a script, I recommend this method: https://linuxtut.com/en/fdabc8bc82d7183a05f3/

Please update the variable values in the script:

profile = "default"
instance_id = "database-1"
region = "ap-northeast-1"
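If you prefer plain boto3, a minimal sketch of paging through a log file is below (the instance identifier and log file name are placeholders), using the Marker/AdditionalDataPending fields to keep requesting portions until the whole file has been read:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
marker = "0"
with open("full.log", "w") as out:
    while True:
        resp = rds.download_db_log_file_portion(
            DBInstanceIdentifier="my-instance",                    # hypothetical instance id
            LogFileName="error/postgresql.log.2023-01-01-00",      # hypothetical log file name
            Marker=marker,
        )
        out.write(resp.get("LogFileData", ""))
        if not resp["AdditionalDataPending"]:
            break
        marker = resp["Marker"]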
I am unable to download complete logs using the AWS CLI for a postgres RDS instance:

aws rds download-db-log-file-portion \
  --db-instance-identifier $INSTANCE_ID \
  --starting-token 0 --output text \
  --max-items 99999999 \
  --log-file-name error/postgresql.log.$CDATE-$CHOUR > DB_$INSTANCE_ID-$CDATE-$CHOUR.log

The log file I see in the console is ~10 GB, but using the CLI I always get a log file of just ~100 MB.

Ref - https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html

The AWS docs say: In order to download the entire file, you need the --starting-token 0 parameter:

aws rds download-db-log-file-portion --db-instance-identifier test-instance \
  --log-file-name log.txt --starting-token 0 --output text > full.txt

Can someone please suggest.
AWS RDS download full log file (access & error)
I agree with John Rotenstein that more information is needed to provide an answer. I would suggest you take simple data points and a simple table. Here is a step-by-step solution; I hope that by doing this, you will be able to resolve your issue.

Assume this is your table structure. Here I'm using most of the data types to prove my point.

create table sales(
  salesid integer,
  commission decimal(8,2),
  saledate date,
  description varchar(255),
  created_at timestamp default sysdate,
  updated_at timestamp);

Just to make it simple, here is your data file residing in S3. Content in CSV (sales-example.txt):

salesid,commission,saledate,description,created_at,updated_at
1|3.55|2018-12-10|Test description|2018-05-17 23:54:51|2018-05-17 23:54:51
2|6.55|2018-01-01|Test description|2018-05-17 23:54:51|2018-05-17 23:54:51
4|7.55|2018-02-10|Test description|2018-05-17 23:54:51|2018-05-17 23:54:51
5|3.55||Test description|2018-05-17 23:54:51|2018-05-17 23:54:51
7|3.50|2018-10-10|Test description|2018-05-17 23:54:51|2018-05-17 23:54:51

Run the following two commands using the psql terminal or any SQL connector. Make sure to run the second command as well.

copy sales(salesid,commission,saledate,description,created_at,updated_at) from 's3://example-bucket/foo/bar/sales-example.txt' credentials 'aws_access_key_id=************;aws_secret_access_key=***********' IGNOREHEADER 1;
commit;

I hope this helps you in debugging your issue.
I've been trying to load data into Redshift for the last couple of days with no success. I have provided the correct IAM role to the cluster, I have given it access to S3, and I am using the COPY command with either the AWS credentials or the IAM role, and so far no success. What can be the reason for this? It has come to the point that I don't have many options left. The code is pretty basic, nothing fancy there:

copy test_schema.test
from 's3://company.test/tmp/append.csv.gz'
iam_role 'arn:aws:iam::<rolenumber>/RedshiftCopyUnload'
delimiter ',' gzip;

I didn't put any error messages because there are none. The code simply hangs, and I have left it running for well over 40 minutes with no results. If I go into the Queries section in Redshift I don't see anything abnormal. I am using Aginity and SQL Workbench to run the queries. I also tried to manually insert queries in Redshift and that works. COPY and UNLOAD do not work, and even though I have created roles with access to S3 and associated them with the cluster, I still get this problem. Thoughts?

EDIT: A solution has been found. Basically it was a connectivity problem within our VPC. A VPC endpoint had to be created and associated with the subnet used by Redshift.
Copying data from S3 to Redshift hangs
If you are creating the instances via a CloudFormation template, you can parameterize the tags and use the same parameters in your CloudWatch alarm resource.
I am trying to add a CloudWatch alarm over multiple instances in my AWS account based on the instance tags. For example, I have 4 instances running with tags Name=DEV, APP=WebServer. I am new to AWS CloudFormation templates, so I am not sure how to add the tags in the CloudWatch alarm's dimensions property. Can I attach a single alarm to multiple instances by referring to them by tags when I create the instances? Here's a snippet from my template:

CPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: CPUtilization
    AlarmActions: "SNS TOPIC ARN"
    MetricName: CPUUtilization
    Namespace: AWS/EC2
    Statistic: Average
    Period: '60'
    EvaluationPeriods: '2'
    Threshold: '80'
    ComparisonOperator: GreaterThanThreshold
    Dimensions:
How to pass tags of existing resources to cloud watch alarm in CFT in AWS?
I noticed this error coming up in my AWS Glue job as well, and I found something that could be helpful from AWS: "This WARN message is not so special, and does not mean job failure or any errors directly. I guess there should be other cause. I would recommend you to enable continuous logging, and check both driver/executor logs to see if there are any suspicious behavior. If you enable job bookmark, please try disabling it and see how it goes without bookmark." (https://forums.aws.amazon.com/thread.jspa?messageID=927547)

I had disabled bookmarks from the beginning. What I found is that my Glue job writing data to S3 got a memory exception, so what I did was repartition the data:

MyDynamicFrame.coalesce(100).write.partitionBy("month").mode("overwrite").parquet("s3://"+bucket+"/"+path+"/out_data")

So if you have write operations, I recommend checking how you are writing to S3.
I am running a test job on AWS. I am reading CSV data from S3 bucket, running a GLUE ETL job on it and storing the same data on Amazon Redshift. GLUE job is just reading the data from S3 and storing in Redshift without any modification. The job runs fine and I get the desired result in Redshift but it returns an error which I am unable to understand.Here is the error log:18/11/14 09:17:31 WARN YarnClient: The GET request failed for the URL http://169.254.76.1:8088/ws/v1/cluster/apps/application_1542186720539_0001 com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.conn.HttpHostConnectException: Connect to 169.254.76.1:8088 [/169.254.76.1] failed: Connection refused (Connection refused)It is a WARN rather than error but I want to understand what is causing the WARN. I tried to search for the IP that is indicated in the WARN but I am not able to find the machine with the mentioned IP.
AWS Glue job runs correct but returns a connection refused error
The reason that your lambda is failing is that to use listObjects, your lambda function needs the IAM permission s3:ListBucket, which works against a single bucket (no object wildcard is needed) (docs). i.e. you should set your lambda's IAM policy to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
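With that policy attached to the Lambda's execution role (in addition to any s3:GetObject permission you already have), a call like the following should then succeed. This is just a minimal sketch; the bucket name and prefix are placeholders:

import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # with s3:ListBucket granted on the bucket ARN itself, listing now works
    resp = s3.list_objects(Bucket='my-bucket', Prefix='some/folder/', MaxKeys=10)
    for obj in resp.get('Contents', []):
        print(obj['Key'], obj['Size'])
    return len(resp.get('Contents', []))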
I have an s3 bucket with a bunch of files that I want to access from my lambda (both lambda and s3 bucket created by the same account):

def list_all():
    s3 = boto3.client('s3')
    bucket = 'my-bucket'
    resp = s3.list_objects(Bucket=bucket, MaxKeys=10)
    print("s3.list_objects returns", resp)

This gives an error like so:

{ "errorMessage": "An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied", "errorType": "ClientError", "stackTrace": [ [ "/var/task/lambda_function.py", 41, "lambda_handler", "list_all()" ], ...

My bucket settings are shown like this on aws:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

I had two questions:

1) What do I set my Action field to be so that I can list all files in any folder from my lambda using boto3?

2) What should I set my Principal to be so that only my AWS account (e.g. when I run my lambda) can access the bucket?
listing all objects in an S3 bucket using boto3
I think your solution may be a little too complex by utilizing the new Headers() constructor. Try this:

fetch('api/public/libraries/sign-out-discourse', {
  method: "POST",
  headers: {
    'Authorization': jwtToken
  }
});
I want to add Cognito authorization to my API request so that the API Gateway can pass the information on to my Lambdas. I have read in other threads that I should add the id token as authorization header, so that is what I have tried so far.I have tried the following:fetch('api/public/libraries/sign-out-discourse', { method: 'POST', headers: new Headers([ // I get the idToken from CognitoUser.getSession => getIdToken() ['Authorization', idToken], ]), })I get the error message{"message":"'Object]' not a valid key=value pair (missing equal-sign) in Authorization header: '[object Object]'."}I have tried the following:fetch('api/public/libraries/sign-out-discourse', { method: 'POST', headers: new Headers([ // I get the idToken from CognitoUser.getSession => getIdToken().getJwtToken() ['Authorization', jwtToken], ]), })I get the error message:{"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=eyJraWQiOiJOemhFe..."}What is the right way to pass authorization information to the api?
How to add Cognito authorization to API request? So that Lambdas can access it
You can use DynamoDB Streams and write a Lambda function that captures changes and adds documents to CloudSearch, flattening them at that point, instead of keeping an additional DynamoDB table.

For example, within my Lambda function I keep the list of nested fields (within a "body" parent in this case) and flatten them under their field names. In the case of duplicate sub-field names you can prepend the parent name to create a new field, such as "body-name", as the key.

... misc. setup ...
headers = { "Content-Type": "application/json" }
indexed_fields = ['app', 'name', 'activity']  # fields to flatten

def handler(event, context):  # lambda handler called at each update
    document = {}  # document to be uploaded to cloudsearch
    document['id'] = ...  # your uid, from the dynamo update record likely
    document['type'] = 'add'
    all_fields = {}
    # flatten/pull out info you want indexed
    for record in event['Records']:
        body = record['dynamodb']['NewImage']['body']['M']
        for key in indexed_fields:
            all_fields[key] = body[key]['S']
    document['fields'] = all_fields
    # post update to cloudsearch endpoint
    r = requests.post(url, auth=awsauth, json=document, headers=headers)
I'm trying to set up AWS' Cloudsearch with a DynamoDB table. My data structure is something like this:

{
  "name": "John Smith",
  "phone": "0123 456 789",
  "business": {
    "name": "Johnny's Cool Co",
    "id": "12345",
    "type": "contractor",
    "suburb": "Sydney"
  },
  "profession": {
    "name": "Plumber",
    "id": "20"
  },
  "email": "[email protected]",
  "id": "354684354-4b32-53e3-8949846-211384"
}

Importing this data from DynamoDB -> Cloudsearch is a breeze, however I want to be able to index on some of these nested object parameters (like business.name, profession.name etc).

Cloudsearch is pulling in some of the nested objects like suburb, but it seems like it's impossible for it to differentiate between the name in the root of the object and the name within the business and profession objects.

Questions:

1. How do I make these nested parameters searchable? Can I index on business.name or something?

2. If #1 is not possible, can I somehow send my data through a transforming function before it gets to Cloudsearch? This way I could flatten all of my objects and give the fields unique names like businessName and professionName.

EDIT: My solution at the moment is to have a separate DynamoDB table which replicates our users table, but stores it in a CloudSearch-friendly format. However, I don't like this solution at all, so any other ideas are totally welcome!
AWS: Transforming data from DynamoDB before it's sent to Cloudsearch
My guess: the security group that was applied was "launch-wizard-2", which by default sets exclusion rules. You need to associate that EC2 instance with one of the two security groups listed in your second screenshot to allow TCP connections on port 22 from your inbound IP range, OR you could modify launch-wizard-2 to incorporate the relevant rules to allow the SSH connection.
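If you'd rather add the missing rule from code than from the console, a boto3 call along these lines should do it. This is only a sketch: the group ID and source CIDR are placeholders, and you should prefer your own IP over 0.0.0.0/0:

import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',   # the group attached to the instance, e.g. launch-wizard-2
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '203.0.113.10/32', 'Description': 'SSH from my workstation'}],
    }],
)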
Another bad day. I have all the configuration in place for my EC2 instance. Until yesterday I was able to connect to it via ssh on my Mac, but I don't know why it is not connecting now. The configuration is as below:

Security Group:

I'm following the same steps as usual, and I'm in the same directory where mypleaks-inst.pem is kept.
EC2 is not responding for ssh connection
I'm assuming you're going to put the command under UserData. Scripts entered as user data are executed as the root user, so do not use the sudo command in the script. Remember that any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script. Also, because the script is not run interactively, you cannot include commands that require user feedback (such as yum update without the -y flag). Here's the full documentation discussing the topic.
I am baking an image on top of Amazon linux image. I need to run a service as ec2-user. Is it possible to run a launch script of any kind as user other than root?
AWS EC2 cloud-init script run as ec2-user
Maybe you can create a CloudWatch Logs Insights query, something like this:

fields ispresent(execution_arn) as isRes
| filter isRes
| filter type in ["ExecutionStarted", "ExecutionSucceeded", "ExecutionFailed", "ExecutionAborted", "ExecutionTimedOut"]
| stats latest(type) as status,
        earliest(event_timestamp) as starttime,
        latest(event_timestamp) as endtime,
        endtime - starttime as duration
  by execution_arn
| sort duration desc

You will have to enable CloudWatch logging for the state machine: https://docs.aws.amazon.com/step-functions/latest/dg/cw-logs.html
I have a use case to alert in case of SLA miss. My application emits metric on startTime (M1) & endTime (M2). If my job completes, I will be able to know SLA misses by doing metric math like (M2-M1) and having alerting on this.But if my job is stuck, I still want to get alerted by computing (currentTime-M1) (may be on scheduled basis). Is this possible with AWS CloudWatch? Non-AWS based approaches & solutions are also welcome!!
Compute current timestamp in CW metric math
In your last block, try making the print line come before result[dnsIpAddress] = "FAILURE". My guess is that either there is more code than what is shown here, or the line before the print statement causes a different exception.
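A minimal, self-contained sketch of what I mean (the raised error just stands in for the resolver call failing):

result = {}
dnsIpAddress = "192.0.2.53"  # placeholder, mirroring the loop variable
try:
    raise RuntimeError("simulating a resolver failure")
except Exception:
    print("caught general exception")   # printing first means the message still appears even if the next line throws
    result[dnsIpAddress] = "FAILURE"
print(result)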
While trying to implement aDNSRequest, I also needed to do some exception handling and noticed something weird. The following code is able to catch DNS requesttimeoutsdef lambda_handler(event, context): hostname = "google.de" dnsIpAddresses = event['dnsIpAddresses'] dnsResolver = dns.resolver.Resolver() dnsResolver.lifetime = 1.0 result = {} for dnsIpAddress in dnsIpAddresses: dnsResolver.nameservers = [dnsIpAddress] try: myAnswers = dnsResolver.query(hostname, "A") print(myAnswers) result[dnsIpAddress] = "SUCCESS" except dns.resolver.Timeout: print("caught Timeout exception") result[dnsIpAddress] = "FAILURE" except dns.exception.DNSException: print("caught DNSException exception") result[dnsIpAddress] = "FAILURE" except: result[dnsIpAddress] = "FAILURE" print("caught general exception") return resultNow, if I removed the Timeout block, and assuming that a Timeout would occur, on a DNSException the messagecaught DNSException exceptionwill never be shown.Now, if I removed the DNSException block, and assuming that a Timeout would occur, the messagecaught general exceptionwill never be shown.But the Timeout extends the DNSException and the DNSException extends Exception. I had the expectation that at least the general expect block should work.What am I missing?
Exception handling in aws-lambda functions
Although Lambda does not let you edit retried messages in any way before they get sent to the DLQ, we can indirectly add a few attributes, like below, to the message that can explain why it failed.

This only works for specific cases, mainly for asynchronous, non-stream-based invocations, which basically means Lambda's native async retries or SNS triggers work, but SQS-based retries, for instance, don't. The other condition is that the exception returned/thrown must be an Error or an extension of the Error prototype for Node lambdas.

Something like:

exports.handler = async (event, context, cb) => {
  class CustomError extends Error {
    constructor(message) {
      super(message);
      this.name = "Some Lambda Error";
      this.message = message;
    }
  };

  let error = new CustomError("Something went wrong");
  cb(error);
  // or just simply cb(new Error("Something went wrong"));
};
I have a lambda trigger on an SQS queue which is configured with a DLQ. When my lambda fails, the original message from the queue will be redirected to the DLQ. Now I want to add more information to this original message (like why there was an error etc). I know that I can't modify the original message, but I saw that a message can have the additional message attributes RequestID, ErrorCode, ErrorMessage. How can I use/set them up from my lambda function (NodeJS)?
Add message attributes from lambda back to SQS DLQ
Have you checked out the section "The steps the Lambda function takes" in the article you mentioned, Using static IP addresses for Application Load Balancers?

You can get the IPs to whitelist from the S3 bucket as well as from the CloudWatch Logs stream. You can even automate updating the security group's inbound and outbound rules, either by extending that same Lambda function or by creating your own, using AWS SDK calls like authorize_security_group_ingress() and revoke_security_group_ingress() in a Lambda function triggered on object upload (the new IP list) to S3.
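A rough sketch of that Lambda follows. The bucket, key, group ID and port are placeholders, the object is assumed to hold one CIDR per line, and note that authorize_security_group_ingress errors on duplicate rules, so real code should diff against the existing rules first:

import boto3

S3_BUCKET = 'my-nlb-ip-bucket'                  # placeholder
S3_KEY = 'nlb-ip-list.txt'                      # placeholder, one CIDR per line
SECURITY_GROUP_ID = 'sg-0123456789abcdef0'      # placeholder

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    ec2 = boto3.client('ec2')
    body = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)['Body'].read().decode()
    cidrs = [line.strip() for line in body.splitlines() if line.strip()]
    # naive version: fails with InvalidPermission.Duplicate if a rule already exists
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 443,
            'ToPort': 443,
            'IpRanges': [{'CidrIp': cidr} for cidr in cidrs],
        }],
    )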
I've implemented this solution provided by AWS: Using static IP addresses for Application Load Balancers, but I came across a problem. I need to whitelist some static IPs, and since this solution requires the targets to communicate with IPs instead of instances, client IP preservation is not done on the NLB, as mentioned here: Target Groups for Your Network Load Balancers. So I can't really do a whitelist on either the Security Groups or the NACLs. Does anyone have a solution to this problem while maintaining this architecture?
AWS NLB to ALB IP Whitelisting
Lambda does not support this feature natively and, while you could conceivably build it with some combination of DynamoDB atomic counters and DynamoDB streams triggers, you should almost certainly use Step Functions. You're trying to coordinate the components of a distributed application composed of microservices, and that's precisely what Step Functions is designed to do.
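For completeness, the atomic-counter idea would look roughly like this. It is purely a sketch: the table, attribute and function names are made up, it assumes a counter item per group was seeded with the group size when the L1s were started, and since your functions run in a VPC without internet access they would need VPC endpoints for DynamoDB and Lambda:

import json
import boto3

dynamodb = boto3.client('dynamodb')
lambda_client = boto3.client('lambda')

def on_l1_complete(group_id):
    # atomically decrement the number of outstanding L1 executions for this group
    resp = dynamodb.update_item(
        TableName='GroupCounters',                     # assumed table with primary key "GroupId"
        Key={'GroupId': {'S': group_id}},
        UpdateExpression='SET Remaining = Remaining - :one',
        ExpressionAttributeValues={':one': {'N': '1'}},
        ReturnValues='UPDATED_NEW',
    )
    if resp['Attributes']['Remaining']['N'] == '0':
        # this invocation was the last one in the group, so fire L2 exactly once
        lambda_client.invoke(
            FunctionName='L2',
            InvocationType='Event',
            Payload=json.dumps({'group': group_id}),
        )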
Is there a way to trigger one AWS Lambda after the successful completion of n parallel running Lambdas?

Let's call the parallel lambdas L1 and the final lambda L2.

Some previously running task triggers n L1 lambdas, all running on group-based triggers. Suppose there are 5 L1s for group 1 and 7 L1s for group 2. My aim is to trigger L2 when any of the above groups completes its execution. If all group 1 lambdas complete successfully, there should be only one L2 trigger for group 1, and the same for group 2. In short, I am looking for a grouped trigger for L2.

Please note: both lambdas are running in a VPC, I am using SNS to connect them together, and I do not want to use a monitoring task.

Please consider the scenario where 3 out of 5 lambdas are already done, and 4 and 5 complete at the same time; which one of them will trigger L2?

Important: Internet access is blocked in the VPC.
Trigger one AWS Lambda after the completion of n Running AWS Lambda
OData at its core is just REST relying on web standards, and as such will be supported by any web-standards-compliant tech stack, so it will work with AWS API Gateway and Lambdas.

However, you have to ensure that you can pass custom headers and query parameters to your function, which used to be a bit tricky. It used to be the case that you had to pass headers inside the request body, as lambdas only had visibility of the request body: see this AWS technical documentation. However, since Sept 2017 you can set up a lambda with proxy integration, which will proxy request and response headers to and from your lambda verbatim.

HTH.
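With proxy integration enabled, the OData-ish bits of a request (e.g. $filter, $top) arrive intact in the event. A minimal Python handler, just to illustrate what the function sees (nothing here is tied to a real OData library):

import json

def lambda_handler(event, context):
    # with Lambda proxy integration, API Gateway passes these through verbatim
    headers = event.get('headers') or {}
    query = event.get('queryStringParameters') or {}   # e.g. $filter, $top, $select
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({
            'received_query': query,
            'odata_version_header': headers.get('OData-Version'),
        }),
    }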
I want to use Odata as a query builder in my api's hosted on aws lamda and exposed using AWS api gateway. On reading several aws documentation, I found that people have faced several issues with this earlier. Can someone please tell me about whether it's supported and if not what can be an alternative for the same?Thanks in advance!
Does AWS API Gateway support OData?
On the server side you can check the X-Forwarded-Proto header (the original request protocol), and if it has the value http you can send a redirect (HTTP 302) to the same URL with the https protocol. (With an ALB, an Application Load Balancer, you may be able to specify a set of rules instead, so it might be possible to do the redirect there.)
I have configured the load balancer to route the request to two of Ec2 Instance running a NodeJs server. I need to direct the request coming from both http (port 80) and https (port 443) to http (port 80) of the EC2 instances in NodeJs. I have uploaded the ssl certificate to AWS and configured the load balancer to use ssl certificate. The problem is the request coming from http port doesn't automatically route to https. It has to be a server side script or snipped which I need to write in server.js which should be routing the http to https, i tried to do it and it run into endless redirection. So questions -Is there any guide to do this from AWS ?If not then how one can achieve this, any pointers or suggestions would be greatly appreciated.
AWS Loadbalancer Proxy for Nodejs
As it turns out, my test was incorrect and so I misunderstood how create_receipt_rule behaves when the After parameter is omitted. When the After parameter is omitted, the new rule is added as the first rule, not the last as I thought. So the answer to this question is that there is no need to pass an explicit null value; just omitting the After parameter achieves the goal.
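For reference, the working call ends up being just this (a sketch with placeholder names); leaving After out entirely puts the new rule at the top of the rule set:

import boto3

ses = boto3.client('ses')
ses.create_receipt_rule(
    RuleSetName='my-ruleset',
    # no "After" argument at all: the new rule is inserted at the beginning of the rule list
    Rule={
        'Name': 'my-new-first-rule',
        'Enabled': True,
        'Recipients': ['example.com'],
        'Actions': [{'S3Action': {'BucketName': 'my-inbound-mail-bucket'}}],
    },
)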
Is there a way to make a boto3 client pass a parameter with an explicit null value (as opposed to omitting the parameter entirely)?

I'm trying to use create_receipt_rule on the boto3 SES client to add a new receipt rule that I want to be the first rule. The AWS API docs (also pulled through to the boto docs above) say this should be achieved by passing the After parameter with a null value:

After (string) -- The name of an existing rule after which the new rule will be placed. If this parameter is null, the new rule will be inserted at the beginning of the rule list

From testing I've found that this must be explicitly passed as null. Simply omitting the After parameter results in the rule being added at the end of the list. I thought I'd be able to pass an explicit value by having the After parameter present with an explicit None value. However, this fails boto3's parameter validation, i.e.

client.create_receipt_rule(
    RuleSetName='my-ruleset',
    After=None,
    Rule={
        ...
    },
)

results in the following error:

Parameter validation failed: Invalid type for parameter After, value: None, type: <type 'NoneType'>, valid types: <type 'basestring'>

I also tried passing the string 'null', but that looks for a rule called null to put the rule after, rather than putting the rule at the start. Is there a way to pass an explicit null value to a parameter via the boto3 client?
How to pass an explicit Null value using Boto3?
You need to set the keepAliveTimeout="xxxx" in your tomcat connector settings to avoid tearing down idle connections.
We are seeing 504 errors in our ELB logs, however there are no corresponding errors in the application logs. Have increased the idle timeout on ELB and can see that no requests are taking more time than that. Going through aws documentation found that we need to configure keep-alive time at ec2 instances to be equal or more than idle timeout to keep the connection open between elb and backend server. Couldn't find any way to configure keep-alive time between elb and backend server. Any suggestion to do that would be helpfulWe are using tomcat-ebs for backend servers.
How do I configure keep-alive time between elb and server?
The context portion of your policy document can contain only String, Boolean or Numeric values. Arrays and Objects are illegal. The documentation states: "The returned values are all stringified. Notice that you cannot set a JSON object or array as a valid value of any key in the context map." Source: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-lambda-authorizer-output.html
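So anything non-scalar has to be serialized before it goes into context. A sketch of what a (Python) authorizer might return, with illustrative values:

import json

def lambda_handler(event, context):
    roles = ['admin', 'editor']                     # an array: not allowed in context as-is
    return {
        'principalId': 'user-123',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': 'Allow',
                'Resource': event['methodArn'],
            }],
        },
        'context': {
            'userId': 'user-123',                   # strings are fine
            'isAdmin': True,                        # booleans and numbers are fine
            'roles': json.dumps(roles),             # arrays/objects must be stringified first
        },
    }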
I have an AWS API Gateway that uses a custom authorizer, and if the request is authorized, it triggers another lambda function. Since yesterday, whenever I call the API, I get an error saying{ "message": null }and a 500 Internal Server Error. In the response headers it saysx-amzn-ErrorType →AuthorizerConfigurationException. I can see in the logs that the authorizer is called and returns a valid policy, and that the other lambda function is not triggered. I have not (knowingly) changed the authorizer. Can anyone give me a hint what might be wrong here? I have readthisquestion but there the mistake was that the returned policy was wrongly formatted, while I didn't change my authorizer and it worked before.
AWS API Gateway with custom authorizer returns AuthorizerConfigurationException
You can go to the attributes section inside the user pool, where you can choose from the default ones: address, birthdate, email, family_name, gender, given_name, locale, middle_name, name, nickname, phone_number, picture, preferred_username, profile, timezone, updated_at, website. Alternatively, you can create your own custom fields. Here you can read more about your question: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html
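Once you've added a custom attribute (say custom:company) to the pool, the extra signup fields are just passed along as user attributes at registration time. For illustration, this is roughly what it looks like with boto3; the client ID and values are placeholders, and in an Ionic app you would pass the same attribute names through Amplify or the Cognito JS SDK instead:

import boto3

cognito = boto3.client('cognito-idp')
cognito.sign_up(
    ClientId='your-app-client-id',                    # placeholder app client id
    Username='user@example.com',
    Password='Sup3r-secret-password!',
    UserAttributes=[
        {'Name': 'email', 'Value': 'user@example.com'},
        {'Name': 'custom:company', 'Value': 'ACME'},  # custom attributes use the "custom:" prefix
    ],
)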
I created an Ionic mobile application using the Ionic AWS starter template. I wish to add new fields to the signup page for registration, but I am not able to add additional fields to the Cognito user pool. Is there any way I can achieve this?
How to add additional fields to AWS Cognito signup
This issue has resolved itself now, I suspect due to an update in Glue or the associated infrastructure. The connectivity issue was occurring from within the PySpark REPL, and not on the actual Dev Endpoint instance itself.

Anyway, for anyone else troubleshooting similar network connectivity issues with Glue, here is a list of possible causes:

- The Dev Endpoint needs to be in a 'public' subnet
- DHCP options need to have the default setting
- Security groups, security groups, security groups
- The subnet should be associated with an S3 endpoint
- ...
Do Glue jobs have internet access? Using this test job:

def have_internet():
    conn = httplib.HTTPConnection("www.google.com", timeout=5)
    try:
        conn.request("HEAD", "/")
        conn.close()
        logger.warn('ok')
    except:
        conn.close()
        logger.warn('no ok')

have_internet()

It appears they do not... Also, within a properly configured Glue dev endpoint, there is no internet access. By properly configured, I mean within a public subnet (internet gateway), with an S3 endpoint and internet gateway, a working 'connection', and security groups. But still no internet access...

I want internet access to be able to interrogate an on-prem database, save to S3, and run another job to transform and load to RDS... Can I use Glue for the extract?
Internet access within AWS Glue job
Maybe this solution is a little overkill for your environment, but you could set up a CloudFront distribution. You should put your Elastic Beanstalk URL as the origin and your https://example.com under CNAME. Then you can decide: if you want flexible SSL you can specify communication with the origin as HTTP only, or if you want end-to-end encryption you can specify HTTPS only (I think this would be the way to go in your particular case, since you have configured your elastic load balancer to forward all requests from port 443 to port 80). Then, under Behavior, you can select the option to redirect HTTP to HTTPS, and every request to http://example.com will be automatically redirected by CloudFront to https://example.com.

I hope this helps.
Steps I have taken:Enabled 80 HTTP -> 80 HTTP and 443 HTTPS -> 80 HTTP on my load balancer in Elastic BeanstalkAliased my Route53 hosted zone for both www and apex A records to my load balancerSet up the SSL certificateUsed the default ASP.NET React Template with HTTPS RedirectionAdded<RuntimeIdentifier>win-x64</RuntimeIdentifier>in .csproj since EB doesn't use 2.1 yet.Deployed with Visual Studio AWS ToolsWhat works:https://www.example.comworkshttps://example.comworksWhat doesn't work:http://example.comwon't redirect tohttps://example.comhttp://www.example.comwon't redirect tohttps://www.example.comI know in the past you had to write custom extension methods to get this to work with AWS LBs. Does anyone have a working example using the standard templates?
ASP.NET Core 2.1 HTTPS Redirection behind AWS Load Balancer?
It looks like you have to use the Server Side Authentication Flow. For server-side apps, user pool authentication is similar to that for client-side apps, except:

The server-side app calls the AdminInitiateAuth API (instead of InitiateAuth). This method requires AWS admin credentials and returns the authentication parameters.

Once it has the authentication parameters, the app calls the AdminRespondToAuthChallenge API (instead of RespondToAuthChallenge), which also requires AWS admin credentials.

AdminInitiateAuth returns, among other things, the device key.
I've implemented in my backend Cognito with Signup and Login, MFA activation and inactivation, but now I want to implement the remember devices, to reduce SMS confirmation.For that, I've adjusted the InitiateAuth Function to the following code:$client->initiateAuth([ 'AuthFlow' => 'USER_SRP_AUTH', // REQUIRED 'AuthParameters' => [ "USERNAME" => $email, "PASSWORD" => $password, "SRP_A" => $bigA, ], 'ClientId' => $this->getClientId(), // REQUIRED ]);This function runs properly, and returns the code in following image:https://i.gyazo.com/a439e48e2de85a094f56ed4cfee10f83.pngThen, I continue generating SRP Values, and call in the function respondToAuthChallenge, with the following code:$client->respondToAuthChallenge([ 'ChallengeName' => 'DEVICE_SRP_AUTH', 'ChallengeResponses' => [ 'USERNAME' => $username, 'SRP_A' => $bigA, ], 'ClientId' => $this->getClientId(), ]);Yet, It returns me an error saying: 'Missing required parameter DEVICE_KEY'.If I put a DEVICE_KEY key inside ChallengeResponses it starts returning me the error 'Device does not exist.'I've searched a lot and cannot find a way to generate the DEVICE_KEY. I've tried with unique ID and sending it in both initiateAuthand respondToAuthChallenge but the error is the same.Any clue how can I do it? I Believe that SRP code is not 100% yet, as still understanding the concept, yet, cannot understand the DEVICE_KEY part.Thanks
Handling SRP Auth and Generating Device Key (PHP - Server side)
Try using the rds_task_status stored procedure to see if any errors occurred during the native backup: exec msdb.dbo.rds_task_status @db_name='aa144bgo6mn8srl'. This will produce a table of sync statuses. Do you see a lifecycle of Completed when you run this query?
So I followed the AWS documentation to perform a native RDS backup using MS SQL Server. My goal is to be able to download the .bak file. The config seems to be correct, and I was able to execute the backup stored procedure. I also created the option group and have the S3 bucket linked to it. But when I go to the S3 bucket, the .bak file is not there, even though the stored procedure completed successfully.
Where to find the .bak file after RDS native backup
I will reiterate the advice I would give to anyone running into errors while using an insecure, outdated Ruby version like 2.4.3: update to Ruby 3+ (or at least 2.7) and the corresponding compatible Bundler version.
I am trying to deploy Ruby on Rails application on AWS Elastic Beanstalk. I am getting following error -ERROR: [Instance: <Instance ID>] Command failed on instance. Return code: 18 Output: (TRUNCATED)...e ']' + bundle install Don't run Bundler as root. Bundler can ask for sudo if it is needed, and installing your bundle as root will break this application for all non-root users on this machine. Your Ruby version is 2.4.3, but your Gemfile specified 2.3.3. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/10_bundle_install.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI. INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1]. ERROR: Unsuccessful command execution on instance id(s) '<Instance ID>'. Aborting the operation. ERROR: Failed to deploy application.After I got this error, I had deleted ruby version number from Gemfile but still I was getting this error.After that I deployed app in a new Beanstalk environment without any mention of Ruby version in Gemfile. But I am still getting the same error.
Ruby Version Error while deploying app on AWS Elastic Beanstalk
You can configure rules based on the Referrer or Origin. Use a regex-based rule, for example *.domain.com, and set the action to Block.
I am trying to use AWS WAF to block requests with certain URL patterns. I am using the string matching filter, but it is not blocking the requests; I must be doing it incorrectly.

Here is what I am trying to block: https://xxx.domain.com/

A good URL would be: https://xxx.domain.com/something/something

The URL with nothing after the .com slash is never used in this example and is only hit by malicious traffic. How do I use WAF to block these requests?
How to use AWS WAF to block certain URLs
According to MDN, "WebSockets are an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server." That means that every websocket is a persistent connection to a specific server. In order to emit events to sockets that are connected to a different server you need an adapter; for example, if you are using Socket.io you can take a look at Socket.IO Redis.

On the other hand, to balance the load you can use an Application Load Balancer (ALB), which supports websockets and containerized applications in ECS.
I'm just starting to understand and solve this problem.Node.js's Socket.io or Go's Gorilla websocket solutions have a connection pool per instance. So within each instance I can say "send message to client xxx".However when I attempt to scale horizontally (by spawning additional instances), each instance has its own client connection pool, so trying to send a message to a specific client which is connected to another instance fails. I assume that's because the current instance doesn't have access to that instance's connection/memory pool.I understand that ECS automatically scales docker containers horizontally, if I were to spin up a WS server project on an ECS and have it scale another instance to the service, does AWS handle talk between socket server instances magically - or must I handle that?
Load balancing websockets on ECS?
If you wish to get everything, you can do the following. Let's say you are hitting the route /objects/{what}?human=you&[email protected]:

@app.route('/objects/{what}', methods=['GET'])
def myobject(what):
    everything = app.current_request.to_dict()
    print("look at me: {}".format(everything))

For more information see: Request from the Chalice docs.
I'm using Chalice to build a fairly straightforward API on AWS Lambda & API Gateway.I need a way to get access to the raw query string (i.efoo=bar&abc=123). When accessing theapp.current_request.query_paramsdictionary, it's already been processed, such that any empty parameters (foo=&bar=) have been stripped out.Unfortunately I'm working with a third-party API that sends a signed hash value in the query string, based off the raw query string. I can't verify it without the original, unaltered query string. Is there any way to access it other thancurrent_request.query_params?
How to access the raw query string (or full URL) in a Chalice (AWS Lambda/API Gateway) app?
First make sure your identity pool and user pool are set up for Google authentication. Then note that federatedSignIn ends with a capital "I". And finally, just change your second parameter in the call to federatedSignIn as follows:

Amplify.Auth.federatedSignIn('google', { token: googleResponse.id_token, expires_at: googleResponse.expires_at }, { email, name })
  ...
I'm trying to use AWS Amplify to support email / password and Google authentication. Now, I want to store the details from Google into my user pool in AWS. I don't understand the flow here - there are many blog posts I read but most of them are just confusing.Here's what I tried to do:// gapi and Amplify included googleSigninCallback(googleUser => { const googleResponse = googleUser.getAuthResponse(); const profile = googleUser.getBasicProfile(); const name = profile.getName(); const email = profile.getEmail(); Amplify.Auth.federatedSignin('google', googleResponse, {email, name}) .then(response => { console.log(response); }) // is always null .catch(err => console.log(err)); });In DevTools I have the following error in the request in Network Tab:{"__type":"NotAuthorizedException","message":"Unauthenticated access is not supported for this identity pool."}Why should I enable unauthenticated access to this pool? I don't want to.Am I doing this right? Is it even possible or is it a good practice to store Google User details into the AWS User Pool? If it's not a good practice, then what is?Also, if I want to ask user for further details not provided by Google in the app and store them, how to do it if we can't store the user in User Pool?
Using AWS Amplify to authenticate Google Sign In - federatedSignin returns null?
This is a really broad question that basically reduces to "It works on Cloud9. How can I make it work somewhere else in the cloud?". I'm assuming this will eventually be marked closed for this reason, but here's my short answer.

At the moment, there is no "migrate" button in Cloud9 that allows you to port over your environment into a different service. Because of this, you'll need to capture key details about your operating system, dependencies, and runtime requirements and use those to search out a suitable service in AWS that meets those requirements.

Because you're describing a non-production application with (I imagine) forgiving technical requirements, I would start by asking your employer which runtimes they prefer and go from there. Good luck!
I'm a student trying to share a Ruby application I created over the summer. Is there a way to convert an application created in an IDE (C9) to a server such as AWS? I want to be able to share my application with employers at any time. But I'm not sure what type of service to use with AWS. I've explored Lamda and web hosting but I'm not sure that they're the right services. Any insight would be helpful.
Sharing an IDE Ruby app with employers
As mentioned in this forum post, Restore Snapshot doesn't allow --vpc-security-group-ids:

"You can grant access to IPs by adding rules to your VPC security group. First, you can call describe-clusters to determine the VPC security groups your cluster is using. If the cluster is not associated with any VPC security group, you can call modify-cluster and specify the VPC security group IDs you want to use. After you have associated a VPC security group with the Redshift cluster, you can modify the security group in the VPC console to allow access from certain IPs (see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html#SecurityGroupRules). Note that in order for the Redshift cluster to be accessible from outside of the VPC, the cluster needs to be set to be publicly-accessible."
When I try to create a security group I get this error:VPC-by-Default customers cannot use cluster security groups (Service: AmazonRedshift; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 7afbb99f-1f1d-11e8-9bf0-1fe6c55b7cfc) (Service: AmazonRedshift; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 7afbb99f-1f1d-11e8-9bf0-1fe6c55b7cfc)What does this mean and how do I solve it?
Error when I create a security group in AWS
Since 12/2018, Amazon RDS supports publishing logs from RDS for PostgreSQL databases to Amazon CloudWatch Logs. Supported logs include PostgreSQL system logs and upgrade logs. [1]

[1] https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-supports-postgresql-logfiles-publish-to-amazon-cloudwatch-logs/
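Enabling the export can be done from the console or with a one-off API call; with boto3 it looks roughly like this (the instance identifier is a placeholder):

import boto3

rds = boto3.client('rds')
rds.modify_db_instance(
    DBInstanceIdentifier='my-postgres-instance',      # placeholder
    CloudwatchLogsExportConfiguration={
        'EnableLogTypes': ['postgresql', 'upgrade'],  # the log types RDS PostgreSQL can export
    },
    ApplyImmediately=True,
)

Once the logs are in CloudWatch Logs, a metric filter or subscription filter on ERROR lines can drive the notification side (e.g. to a chat service).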
I'm new to RDS; previously I have administered non-cloud databases. It's common to monitor the database error log and watch its text for errors, but when it comes to RDS PostgreSQL there is no native service that monitors the log files. (I know RDS MySQL/MariaDB now has functionality to publish logs to CloudWatch Logs, but RDS PostgreSQL still cannot do it.)

I guess the basic scenario, if we want to monitor RDS log files within AWS services, is to create a Lambda function that downloads the error log files periodically and saves them to an S3 bucket, then parses them and, if an error message is found, notifies some chat service (like Slack). But that is not real-time and is going to make a lot of API calls.

I'm wondering how people deal with monitoring the log files.
How does everyone monitor the RDS PostgreSQL error log?
1) Let's assume your public IP is 54.74.67.112.

1A) Expose port 3000 in the security group.
1B) In the address bar of your browser use: 54.74.67.112:3000

2) To use the AWS DNS, as you basically are doing, these are the steps:

2A) In the security group you have to expose port 80, not 3000. Then, in the address bar of your browser use: https://ec2-54-74-67-112.eu-west-1.compute.amazonaws.com/
2B) Inside the EC2 shell, you have to run your node application on port 80 with a command like this: PORT=80 node myapp.js

3) Methods 1 and 2 are just for development. If you are using this for production, you should use the AWS service named Route 53.
I've developed a simple Node.js/Socket.IO server running on an EC2 instance on port 3000. There is a load balancer set up for that instance, and an elastic IP pointing to it too. I've added TCP port 3000 to the port configuration of the load balancer (in Listeners, where I already have HTTPS set up for port 443; I tried to do the same for port 3000 using the secure TCP protocol) as well as to the security groups of the instance (with source 0.0.0.0/0). However, when I try to reach https://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:3000 I get the following error:

An error occurred during a connection to ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:3000. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG

(The same happens when I try the Elastic IP or the load balancer host name.)

Can you tell me what else I should do in order to allow HTTPS connections to port 3000?

Thanks
Martin
How to setup HTTPS to custom port (3000 in my case) on EC2 instance?
Changing the ECS task definition to use awsvpc network mode instead of bridge mode resolved this issue for us.
I've created a Network Load Balancer for use with ECS Fargate. When I try to connect to the load balancer (using either the ELB domain name or it's IP addresses) it won't connect. I don't even see the connection in the flow logs. The machine I'm using to connect to the LB can reach the instances fine, but when I try to hit it through the load balancer I don't get a TCP connection. The security group on the Fargate containers allows anything from anywhere. The load balancer shows the instances health checks as "Healthy" but I still can't get to them.
Can't connect to AWS Network Load Balancer with Fargate
Check out this project on GitHub. It's not on a Raspberry Pi but it should be helpful. For Greengrass to work on an rPi you would have to switch to the devicemapper storage driver, which is not the default with the latest Docker engine (overlay2 is the default).
I have an AWS Greengrass Core setup in a docker container. Everything seems to check out fine, but the greengrass daemon fails to start - error is: Greengrass deamonxxfailed to start Failed to create overlay fs for container nosysRootfs operation not permittedI had the same core setup as a non-Docker container, so the certs and config.json file should be correct.
aws greengrass core running in docker container on rPi3
A bootstrap action is the solution. Write a shell script that pip installs all your required packages and pass it in the bootstrap option. It will be executed on all nodes when you create the cluster. Just keep in mind that if the bootstrap takes too long (an hour or so?), it will fail.
I am currently runningspark-submitjobs on an AWS EMR cluster. I started running into python package issues where a module is not found in during imports.One obvious solution would be to go into each individual node and install my dependencies. I would like to avoid this if possible. Another solution I can do is write a bootstrap script and create a new cluster.Last solution that seems to work is I can alsopip installmy dependencies and zip them and pass them through thespark-submitjob through--py-files. Though that may start becoming cumbersome as my requirements increase.Any other suggestions or easy fixes I may be overlooking?
Module not found in AWS EMR slave nodes
The feature has not landed in botocore yet. It's unfortunate that usage examples appeared in the AWS docs already, when the service definitions have not been released to PyPI yet. Watch PR 1356 for merging.
I'm trying to use the recent Amazon Transcribe service with:

transcribe = boto3.client('transcribe')

and I get the following error:

botocore.exceptions.UnknownServiceError: Unknown service: 'transcribe'. Valid service names are: ...

I've tried upgrading boto3 and botocore using:

pip install botocore --upgrade
pip install boto3 --upgrade
Boto3 does not support transcribe service [duplicate]
Yes, it is, however you won't be able to use it as a normal partition. Partitioning is usually used to reduce the amount of data read by each query and therefore improve query performance; this is why most of the time people choose partition keys like dt=2019-11-05. What is your actual goal here? You can achieve the same with bucketing: just create as many buckets as the number of ranges you want to have.
Suppose there is an external table in AWS Athena containing a column 'Id' which is an integer along with numerous other columns.Is there a way for partitioning this table on 'Id' column by range?For example, create partition in the following manner:0 >= Id < 10 10 >= Id < 20 20 >= Id < 30 30 >= Id < 40and so on..This could be useful when the amount of data for one value of Id is not large enough. We could then keep the data corresponding to a range in one bucket and reduce the partitioning overhead.
How can range partitioning be created in AWS Athena?
Look in the Apache Hive documentation for the details of each type of Serializer/Deserializer, e.g. for the OpenCSVSerde: https://hive.apache.org/javadocs/r2.1.1/api/org/apache/hadoop/hive/serde2/OpenCSVSerde.html

Based on my rudimentary understanding of Java, I think you can set four parameters: LOG, SEPARATORCHAR, QUOTECHAR and ESCAPECHAR.

From the AWS docs for Athena we have this tip: "Enter appropriate values for separatorChar, quoteChar, and escapeChar. The separatorChar value is a comma (,), the quoteChar value is double quotes ("), and the escapeChar value is the backslash (\)."

So it appears that you are supposed to use lowerCamelCase versions of the Java fields, although I've never seen that convention documented in the Glue docs.
Here is a link to the description of the SerDeInfo parameter. They define Parameters as a map, but what keys and values do they expect? There are some examples like:

"SerdeInfo": {
  "SerializationLibrary": "org.apache.hadoop.hive.serde2.OpenCSVSerde",
  "Parameters": {
    "field.delim": ",",
    "serialization.format": "1"
  }
},

But what is the full list?
What is the full parameters list for SerDeInfo in aws glue?
I was dealing with the same problem; I had to add the files required for my application to run to the buildspec.yml. When it was failing, my buildspec.yml file had just the code below in the artifacts section:

artifacts:
  files:
    - app.js
    - package.json

I fixed it by adding config/*:

artifacts:
  files:
    - app.js
    - package.json
    - config/*

If you're having problems with some packages you can also add:

    - node_modules/**/*
var _config = require('./config/config.js');
var _config2 = _interopRequireDefault(_config);

Cannot find module './config/config.js'

But this file exists. It works fine on localhost, but throws this error on AWS Elastic Beanstalk.
AWS Beanstalk Error: Cannot find module
As Amit mentioned, DynamoDB doesn't provide a mechanism to enable TTL on a column. That said, I can see how this could be built on top of DynamoDB. I haven't tried this, so it is simply theory, but if one wanted to TTL a column:

Imagine two tables: (a) your current table in which you wish to TTL a column (let's call this the Original table) and (b) a new table your TTL logic will be based off (let's call this the TTL table).

1. Create the TTL table, to trigger the column-based TTLs off, and associate a Lambda with it.
2. Tie a Lambda trigger to the Original table, and on every event perform a PUT on the TTL table with the following data: (a) the primary key from the Original table, (b) the TTL time, and (c) the attribute to remove from the Original table.
3. When the TTL expires on the TTL table, you can trigger the removal of the column from the Original table, based on the primary key and the attribute.

It would seem possible to build, but I would recommend testing it thoroughly. HTH
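A sketch of that last step, i.e. the Lambda attached to the TTL table's stream. All table and attribute names here are invented, it assumes the stream is configured with OLD_IMAGE so the expired item's data is available, and bear in mind TTL deletions can lag the expiry time, so this gives eventual rather than exact expiry:

import boto3

dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    for record in event['Records']:
        # TTL deletions show up on the stream as REMOVE events
        if record['eventName'] != 'REMOVE':
            continue
        old = record['dynamodb']['OldImage']
        original_key = old['OriginalPk']            # primary key of the row in the Original table (already in attribute-value form)
        attribute_name = old['AttributeName']['S']  # which column to expire
        dynamodb.update_item(
            TableName='OriginalTable',
            Key={'OriginalPk': original_key},
            UpdateExpression='REMOVE #attr',
            ExpressionAttributeNames={'#attr': attribute_name},
        )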
After reading DynamoDB(DDB) doc, I know there is only builtin TTL for item/row. Then I wonder if there is a convenient way to implement TTL for column? When a TTL expires, the value in the column of an item (if any) will be eliminated. The general use case is that some field of a row is of limited durability. TTL ensures robustness of a program's business logic.I learnt that there is DDB trigger using AWS lambda but the limitation is that AWS lambda only deals with DDB stream (of item update). If an item sits silently in the table for a long time, the column of interest in the item will not be erased after an expected TTL. Please correct me if I miss something.I know that one can add a job to poll DDB periodically. I don't like the idea as that is a significant burden for DDB as the job needs to scan DDB (index) and the scan could be time consuming depending on the size of a DDB table of interest and is a source of data contention.Regards,Jun
How to implement TTL on column in DynamoDB?
Have you tried this? First, you need to open the HTTPS port (443). To do that, go to https://console.aws.amazon.com/ec2/ and click on the Security Groups link on the left, then create a new security group that also allows HTTPS. Then just update the security group of the running instance, or create a new instance using that group. After these steps, your EC2 work is finished and it's all an application problem.

Credit to: https://stackoverflow.com/a/6253484/8131036
I have an application running on an AWS EC2 instance with the domain's nameservers on AWS as well. I have an A record with the public IP.I've create a secure certificate with ACM and also created an ELB Load Balancer. My domain still doesn't show the HTTPS in front of it.Can anyone provide some help? Many thanks
Adding a secure HTTPS certificate to AWS EC2 Instance
Based on your description of the use case, you may first use the Cognito Android Auth SDK to get authenticated and store the tokens. Then you may use the Cognito Android CUP SDK to call getSessionInBackground.

Also a quick tip: in the CUP Android SDK, the Advanced Security, Adaptive Auth, and new MFA support are available from version 2.6.9. In the Cognito Auth Android SDK, the Advanced Security support is available from version 2.6.9; the Adaptive Auth and new MFA support will be available through Springboard in the supported regions.
Having trouble understanding the authorization flow of FB users and AWS Cognito User Pools. I have followedthis guid.facebook login app has my redirect urihttps://<cognitoname>.auth.us-east-1.amazoncognito.com/oauth2/idpresponseaws cognito has my facebook appid and secretTwo issues:1) I'm expecting when my android app authenticates with fb (via login button), the fb server sends something to my userpool adding that user. that is not happening. I dont see a method in theCognitoUserobject to do this on my end with the loginResult from fb. No user is getting created in the userpool upon fb auth.2) Assuming a fb user were to be created in my pool, how would I call getSessionInBackground without the password? It does not look like the android Congito Classes have a way to handle this.Also, i am able to log in a fb user to a federated identity but i dont think that is what i want unless its part of the user pool process.
AWS Cognito User Pool and Facebook Integration
I don't believe so. You will likely have to query one by one.

"INDEXES - The response includes the aggregate ConsumedCapacity for the operation, together with ConsumedCapacity for each table and secondary index that was accessed. Note that some operations, such as GetItem and BatchGetItem, do not access any indexes at all. In these cases, specifying INDEXES will only return ConsumedCapacity information for table(s)."

Source: https://docs.aws.amazon.com/cli/latest/reference/dynamodb/batch-get-item.html
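So in practice it's one Query per key against the index. Sketched here with boto3 for brevity (table and index names are placeholders); with the DocumentClient it's the same query call per key:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('Table-1')

def get_by_name_and_age(pairs):
    items = []
    for name, age in pairs:
        resp = table.query(
            IndexName='name-age-index',   # the GSI on name + age
            KeyConditionExpression=Key('name').eq(name) & Key('age').eq(age),
        )
        items.extend(resp['Items'])
    return items

print(get_by_name_and_age([('abc', 18), ('xyz', 21)]))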
I am using AWS.DynamoDB.DocumentClient in a Node.js program to fetch items from multiple DynamoDB tables. To keep the code simple, I chose to use the BatchGetItem/BatchGet method. The challenge is that I need to fetch items based on a Global Secondary Index (e.g. name+age) rather than the primary key defined when creating the table. I went through BatchGetItem/BatchGet but don't see any parameter for using a Global Secondary Index. I ran some testing with the following code:

var params = {
  RequestItems: {
    'Table-1': {
      Keys: [
        {
          name: 'abc',
          age: 18,
        },
      ]
    }
  }
};

var docClient = new AWS.DynamoDB.DocumentClient();
docClient.batchGet(params, function(err, data) {
  if (err) console.log(err);
  else console.log(data);
});

And got the following error:

> ValidationException: The provided key element does not match the schema

Does it mean BatchGetItem/BatchGet can't use a Global Secondary Index, and I have to read from the tables one by one?
Does the batchGet method of AWS.DynamoDB.DocumentClient support a Global Secondary Index?
As mentioned in the Boto3 bug tracker, this might happen if you have updated boto3 without updating botocore. So I suggest updating botocore and retrying:

pip install botocore --upgrade

or in some different way, depending on how you installed botocore in the first place.
Good day. AWS recently released an API for retrieving billing information. It is available in all AWS SDKs (C#, Python, PHP). I wrote a Lambda function to update my database table with the current cost of all my linked accounts, but my Lambda function doesn't work; it shows the following error:

"Unknown service: 'ce'. Valid service names are: acm, apigateway, application-autoscaling, appstream, athena, autoscaling, etc."

My lambda code is:

import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    client1 = boto3.client(
        'ce',
        aws_access_key_id=accesskey,
        aws_secret_access_key=secretkey)
    # [referral link for client creation][1]
    response = client1.get_cost_and_usage(
        TimePeriod={
            'Start': startdate,
            'End': enddate
        },
        Granularity='MONTHLY',
        Metrics=[
            'BlendedCost',
        ],
        GroupBy=[
            {
                'Type': 'DIMENSION',
                'Key': 'LINKED_ACCOUNT'
            },
        ],
    )
    print response
cost explorer in python-boto3 unknown service 'ce'
Yes, you can use a combination of rclone, WinFsp and NSSM to mount the bucket as a drive; read my answer here: Mount s3 bucket in ec2 windows instance
I would like to use S3 bucket as a real-time file store and I wanted to upload/download large files to S3 from my Windows server 2016 frequently. Is there any option to mount an S3 bucket with windows EC2 instance without using third party paid tools
How to map a AWS S3 bucket as a mapped drive (Network Drive) in windows server 2016
This happened to me, and the problem was an inconsistency between the input record format and the DB table. Try checking the AWS docs for the COPY command to make sure the COPY command parameters are defined properly.
I am implementing an AWS Kinesis Firehose data stream and facing an issue with data delivery from S3 to Redshift. Can you please help me and let me know what is missing?

An internal error occurred when attempting to deliver data. Delivery will be retried; if the error persists, it will be reported to AWS for resolution. InternalError 2
An internal error occurred when attempting to deliver data in AWS Firehose data stream
1 - You can use SNS to send HTTP requests to web application endpoints.
2 - You can use AWS IoT to send notifications over WebSockets if you're looking for "real time" front-end updates.

This may help you get started with AWS IoT: http://gettechtalent.com/blog/tutorial-real-time-frontend-updates-with-react-serverless-and-websockets-on-aws-iot.html

SNS: http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html
I have a web application and I want to send push notifications using AWS. I don't understand which AWS service I could use to send push notifications to web applications. It looks like the AWS SNS service can't do that, but I can't find any examples. Please tell me what service to use.
AWS Push Notifications service for web applications
"Add Header Key Pair": these aren't raw headers, they're cookies. Although I don't use Postman, it sounds like this is your issue. Based on what you've said, you wouldn't add them like this:

[CloudFront-Key-Pair-Id, APKEXAMPLEQQ]

Instead it should look more like this:

[Cookie, CloudFront-Key-Pair-Id=APKEXAMPLEQQ]
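In other words, all three signed-cookie values go into a single Cookie request header. A curl illustration (the cookie values are truncated placeholders, not real policy and signature strings):

curl "https://example.cloudfront.net/movies/nature.mp4" \
  -H "Cookie: CloudFront-Policy=eyJTdGF0ZW1lbnQi...; CloudFront-Signature=wT6W...; CloudFront-Key-Pair-Id=APKEXAMPLEQQ"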
I have code that returns the CloudFront signed cookie values:

CookiesForCustomPolicy signedCookiesUrl = AmazonCloudFrontCookieSigner.GetCookiesForCustomPolicy(
    "https://example.cloudfront.net/movies/nature.mp4",
    new StreamReader(File.OpenRead(Path.Combine(AppContext.BaseDirectory, "pk-2.pem"))),
    "APKEXAMPLEKEYID",
    DateTime.Now.AddDays(10),
    DateTime.Now,
    null);

I use the returned values to request the object, however it returns the <Error> <Code>MissingKey</Code> <Message> Missing Key-Pair-Id query parameter or cookie value </Message> </Error> response.

I tested this through the Postman tool by putting the headers, and with a direct request through the Chrome browser, and I still get the same error. I have used the correct CloudFront key pair and the correct resource URL. My objects are private and CloudFront has access to them. Is there anything else I need to do to get this working?
AWS Cloudfront returning Missing Key-Pair-Id query parameter or cookie value
What I recall from being smacked by this same problem a while ago: when using a Lambda or HTTP proxy integration, you need to specify at a minimum the Access-Control-Allow-Origin header in your Lambda's response. You may have to specify additional headers (I don't have any code handy at the moment). I do recall this message being somewhat misleading, and that testing from the management console works because it's not really using CORS, because of how the console executes the tests.

Have a look at the last section in: Enabling CORS for a REST API resource

exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        headers: {
            "Access-Control-Allow-Headers" : "Content-Type",
            "Access-Control-Allow-Origin": "https://www.example.com",
            "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
        },
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

You need to replace Access-Control-Allow-Origin with the host name used by your client app. As a quick test use *, but don't go to production with that, as it defeats the purpose of CORS.
I have a problem with AWS API Gateway. I'm developing a web application with Angular 4 (using TypeScript), but when I invoke the PUT method from the frontend, the following error message appears:

Method PUT is not allowed by Access-Control-Allow-Methods in preflight response.

It's very strange, because in the AWS console the PUT method works perfectly (I did many tests directly from the API Gateway console with a stage after the deploy, and everything works well). If I go to "Actions/Enable CORS", all methods have the check, including the PUT method, and I can't tell exactly what the problem with API Gateway is.

Why do I get this error if everything seems fine in API Gateway? Is there a way to change these CORS settings?
"Enable CORS" in AWS API Gateway resource?
I don't know if this is still relevant for you, but you do need to configure the EC2MetadataCredentials, as it is not in the default ProviderChain (search for new AWS.CredentialProviderChain([ in node_loader.js in the SDK). It seems you might have an old version of aws-sdk, as this code works for me:

import AWS from 'aws-sdk';
...
AWS.config.credentials = new AWS.EC2MetadataCredentials();
Using the Node SDK for AWS, I'm trying to use the credentials and permissions given by the IAM role that is attached to the EC2 instance my Node application is running on.

According to the SDK documentation, that can be done using the EC2MetadataCredentials class to assign the configuration properties for the SDK.

In the file where I'm using the SDK to access a DynamoDB instance, I have the configuration code:

import AWS from 'aws-sdk'

AWS.config.region = 'us-east-1'
AWS.config.credentials = new AWS.EC2MetadataCredentials({
  httpOptions: { timeout: 5000 },
  maxRetries: 10,
  retryDelayOptions: { base: 200 }
})

const dynamodb = new AWS.DynamoDB({
  endpoint: 'https://dynamodb.us-east-1.amazonaws.com',
  apiVersion: '2012-08-10'
})

However, when I try to visit the web application I always get an error saying:

Uncaught TypeError: d.default.EC2MetadataCredentials is not a constructor
Uncaught TypeError: _awsSdk2.default.EC2MetadataCredentials is not a constructor

Even though that is the exact usage from the documentation! Is there something small that I'm missing?

Update: Removing the credentials and region definitions from the file results in another error that says:

Error: Missing region|credentials in config
AWS EC2 IAM Role Credentials
It appears you would like to set the TTL attribute when sending OTP messages, but currently Amazon SNS does not support setting TTL for any of the following:

SMS
SQS
HTTP
email

TTL is only applicable when sending mobile push notifications using any of the following platforms:

APNS
APNS_Sandbox
FCM
ADM
Baidu
WNS
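If you do deliver the OTP as a mobile push notification rather than SMS, TTL is set per platform through a message attribute on the publish call. A boto3 sketch for illustration only (the endpoint ARN is made up, and I'm assuming the documented AWS.SNS.MOBILE.*.TTL attribute names, which are the same from the Node SDK):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# TTL in seconds for an APNS platform endpoint, passed as a message attribute
sns.publish(
    TargetArn="arn:aws:sns:us-east-1:123456789012:endpoint/APNS/my-app/example-endpoint-id",
    Message="Your OTP is 123456",
    MessageAttributes={
        "AWS.SNS.MOBILE.APNS.TTL": {
            "DataType": "String",
            "StringValue": "60",
        }
    },
)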
I'm writing code to send an OTP message. My current parameters and publish method look as follows:

params = {
  Message: otpMessage,
  MessageStructure: 'string',
  PhoneNumber: contactNo
};

sns.publish(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});

How do I set the TTL attribute?
How to set the TTL attribute in the publish method of SNS in NodeJS?
There is a meta-argument called lifecycle for that:

resource "aws_s3_bucket" "MyPreciousBucket" {
  lifecycle {
    prevent_destroy = true
  }
}

Note, however, that prevent_destroy makes the whole terraform destroy operation fail if it would remove this resource; it does not let you destroy everything else while skipping this one.
I'd like to retain CloudWatch logs after I spin down a bunch of resources created using Terraform, which includes the CloudWatch log group. Is there a way to tell terraform destroy to spare some resources?

I suppose I could manually remove the CloudWatch resources from tfstate before calling destroy, but that doesn't seem like the right approach.
Retaining resources after terraform destroy
Try setting a rule for the specific port in your security group:

Type: Custom TCP Rule
Protocol: TCP
Port Range: 5432
Source: 0.0.0.0/0
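The equivalent AWS CLI call would look roughly like this (the security group ID is a placeholder, and in practice you would usually restrict the source to the EC2 instance's security group or IP instead of 0.0.0.0/0):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 0.0.0.0/0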
I've been trying to resolve this issue for the past 3 hours now. I have an EC2 instance that is running a Tomcat application. I launched it from Eclipse using the "Deploy to Elastic Beanstalk" option of the AWS plugin. I also have a PostgreSQL RDS instance. I am able to connect to the database from localhost, but my EC2 instance can't connect. I've fixed my inbound rules in the RDS security group to allow all kinds of traffic in. Still no luck. Please help!

Here's the screenshot of my RDS security group in the AWS console (image omitted).

EDIT: Here are my VPC and subnet info on the EC2 instance, the same info on my RDS instance, and my security group inbound rules on EC2 (screenshots omitted).
Can't Connect to PostgreSQL RDS from EC2 But works fine from localhost
Recently (10-Nov-2022) AWS launched a new service called EventBridge Scheduler, and I've already added a detailed answer here; please have a look. You don't need to handle anything manually, because EventBridge Scheduler is serverless.
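As a rough illustration of creating a schedule with boto3 (the schedule name, target Lambda ARN and IAM role ARN are all hypothetical, and the role must allow Scheduler to invoke the target):

import boto3

scheduler = boto3.client("scheduler", region_name="us-east-1")

# "Every 2 days" rule from the question; cron expressions like "cron(0 3 ? * FRI *)" also work
scheduler.create_schedule(
    Name="purge-outdated-documents",
    ScheduleExpression="rate(2 days)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:purge-documents",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
        "Input": '{"action": "purge"}',
    },
)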
Greetings everyone,

We are using AWS as PaaS and we have a couple of microservices deployed there. We got some new requirements to use some sort of cron jobs and schedulers. For example, we have the following scenarios:

A user can set rules for when an event must happen. For example, he wants to remove some outdated documents every Friday, or once a week, or every 2 days.
A user can configure creating copies of some objects every day until date A.

I used to use Quartz (http://www.quartz-scheduler.org) and it is the first idea that comes to mind. I think we can use it on AWS, since it has RDS (with PostgreSQL, for instance). But I would like to know what other options I could use instead of Quartz + RDS? Maybe AWS has something out of the box that can do the same?

What do you think about http://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html in my case?

Thank you for your time and advice :)
AWS and Quartz for running scheduled tasks
I think you'll have to request the "profile" or other scopes in order to get that. I recently created a related framework, and it has an example of retrieving the user profile information: https://github.com/Thywis/MultiAccountOauth
Folks, I am working on AWS Cognito Facebook and Google Plus login for my iOS app. I am able to sign in with both FB and G+ credentials, and I get an Identity ID as a response. Is there any way to get user details such as username, email and mobile number? I have tried with user pools, but I got a null value when I use [self.pool currentUser].username.

I also tried the following:

AWSCognito *syncClient = [AWSCognito defaultCognito];
AWSCognitoDataset *dataset = [syncClient openOrCreateDataset:@"myDataSet"];
NSString *userName = [dataset stringForKey:@"name"];

But I'm still getting null as the userName. If possible, can you please let me know the right way to do this?
AWS Cognito Facebook and Google Plus User login credentials