Response | Instruction | Prompt
---|---|---
I've managed to get it working. I had to write a simple bash script that iterates through all the objects in the bucket's prefix which are GLACIER or DEEP_ARCHIVE, depending on the case. So there are two components to this. First, you need a file with all the objects:

aws s3api list-objects-v2 --bucket someBucket --prefix some/prefix/within/the/bucket/ --query "Contents[?StorageClass== 'GLACIER']" --output text | awk '{print $2}' > somefile.txt

The list-objects-v2 call lists all the objects in the prefix, and the awk '{print $2}' command makes sure the resulting file is iterable and contains just the object keys. Finally, iterate through the file restoring the objects:

for i in $(cat somefile.txt);
do
#echo "Sending request for object: $i"
aws s3api restore-object --bucket $BUCKET --key $i --restore-request Days=$DAYS
#echo "Request sent for object: $i"
done

You can uncomment the echo commands to make the execution more verbose, but it's unnecessary for the most part. (A Python version of the same loop is sketched below.) | I have plenty of objects in AWS S3 Glacier, I'm trying to restore some of them which are on the same prefix (aka folder). However I can't find a way to restore them all at once; it might be worth mentioning that some of the elements in this prefix are prefixes themselves, which I also want to restore. | How to restore multiple files from a prefix in AWS
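For anyone who prefers boto3 over shelling out to the CLI, here is a rough Python equivalent of the loop above. The bucket, prefix and restore window are placeholders, not values you can use as-is.

```python
import boto3

s3 = boto3.client("s3")
bucket = "someBucket"
prefix = "some/prefix/within/the/bucket/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        # Only objects in GLACIER or DEEP_ARCHIVE need a restore request
        if obj.get("StorageClass") in ("GLACIER", "DEEP_ARCHIVE"):
            s3.restore_object(
                Bucket=bucket,
                Key=obj["Key"],
                RestoreRequest={"Days": 7},  # assumption: 7-day restore window
            )
```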
Look at the logs. Your event.Id value is "NaN", which means "not a number". Also, event.name is "undefined". So your problem is occurring here:

exports.handler = function(event, ctx, callback) {

Your event object is not populated with the values you are expecting. The payload should be proper JSON and look something like:

{
"id": "6",
"name": "Test Name"
}

To achieve this, in your POST from your front-end code you could use something like:

data: JSON.stringify({"id": $('#Id').val(), "name": $('#name').val()})

Make sure that $('#Id').val() and $('#name').val() actually have proper values. | I am trying to enter the data in AWS DynamoDB through an AWS Lambda function using an AWS HTTP API. FYI, the data type of the parameter (Id) originally in DynamoDB is Number, but it is taken as String while parsing the JSON data, so I have written "Number" beside the "Id" parameter in order to convert it to a Number. When I am trying to run this Lambda function I am getting this error. Please help, thanks!

Lambda function:

payload: { "Id": $input.json('$.Id') "name": $input.json('$.name')

console.log('starting function');
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'us-east-1'});
exports.handler = function(event, ctx, callback) {
var params = {
Item: {
Id: Number(event.Id),
name: event.name
},
TableName: 'Test'
};
console.log(params)
docClient.put(params, function(err, data) {
if(err) {
callback(err, null);
} else{
callback(null, data);
}
});
}Error log: | Facing this error "ValidationException: The parameter cannot be converted to a numeric value: NaN" |
Option 1: You can rename aws-exports.js to aws-exports.ts. There is a drawback: Amplify would re-generate the JavaScript version (aws-exports.js) each time, and you would have to manually rename it. Not ideal, and there is an open issue on this: github.com/aws-amplify/amplify-cli/issues/304

Option 2: You can also create an aws-exports.d.ts file on the same level as aws-exports with the following content:

declare const awsmobile: Record<string, any>
export default awsmobile;

See more here. | I am trying to integrate my project with AWS (Amazon Web Services).
After I initialized AWS globally, I decided to create a project called ELab using the following command:

amplify init

and also the following command to configure it:

npm install aws-amplify @aws-amplify/ui-react

Then I added this to my App.tsx file:

import Amplify from 'aws-amplify'
import config from './aws-exports' // error the /aws-exports.js was not generated
Amplify.configure(config)

After all this configuration, AWS fails to generate the aws-exports.js configuration file. Is there any way to resolve this issue? Thank you in advance. | How to fix Cannot find module './aws-export' or its corresponding type declarations.ts (2307)
At a high level, you will need to expose the backend application via a K8s service. You'd then expose this service via an ingress object (see here for the details and how to configure it). Front-end pods will automatically be able to reach this service endpoint if you point them to it. It is likely that you will want to do the same thing to expose your front-end service (via an ingress). Usually an architecture like this is deployed into a single cluster, and in that case you'd only need one ingress for the front-end; the back-end would be reachable through standard in-cluster discovery of the back-end service. But because you are doing this across clusters, you have to expose the back-end service via an ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details). | I have two EKS clusters in a VPC. Cluster A is running in the public subnet of the VPC [the frontend application is deployed here]. Cluster B is running in the private subnet of the VPC [the backend application is deployed here]. I would like to establish networking between these two clusters such that the pods from cluster A are able to communicate with pods from cluster B. | AWS EKS - Multi-cluster communication
Based on the comments: one can't upload 100 MB files into a Lambda function directly. The reason is that the max data payloads for Lambda are 6 MB (synchronous) and 256 KB (asynchronous). The alternatives are to upload the file to S3 instead (you can set up S3 events to trigger your Lambda for each newly uploaded file; a handler for that is sketched below), or to use something other than Lambda (EC2, Fargate or Beanstalk) to be able to upload 100 MB files directly. | The AWS ALB limits size to 100 MB. Part of my API response is a rendered README.md from the GitHub API. I don't control the rendering, but it turns out ALB will not return a response with the text of this rendered README.md file from GitHub. When I move the response to directly hitting an Nginx LB from a VM, there is no issue. I looked at the content, and yes, the README content is fairly large it seems. I don't know if/how it exceeds 100 MB, but it seems to be the problem. Regardless, I cannot control how GitHub handles the rendering. Is there any workaround for this 100 MB limit? If not, can I use Nginx as a reverse load balancer for Lambda functions? Otherwise, I'll need to go back to regular VMs and do a weighted DNS among non-autoscaling VMs since I still can't use a load balancer for VMs, Fargate, or anything. | AWS load balancer size limits
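A minimal sketch of the S3-event alternative mentioned in the answer above: the client uploads the large file to S3, and the Lambda below is invoked by the ObjectCreated notification instead of receiving the file in its payload. The bucket and the processing step are assumptions.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked by the S3 ObjectCreated notification, so the 100 MB file
    # never has to travel through the Lambda invocation payload.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ... process the file here (subject to memory/timeout limits) ...
```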
You might have this solved by now, but I would guess the problem is that you're not returning the __typename in your code, so AppSync does not know which type within the union is being returned. It should be returning, for instance:
{
__typename: 'Stream',
pk: 1,
starts_at: '2022-01-07'
} | I have two types, Stream and Video, and a listResources query that returns a mixed list of streams and videos:

type Stream {
pk: String!
starts_at: String!
}
type Video {
pk: String!
created_at: String!
}
union SearchResult = Stream | Video
type Query {
listResources(code: String!): [SearchResult]
}

and below an example of calling the query:

query {
listResources(code: "234") {
... on Stream {
pk
starts_at
}
... on Video {
pk
created_at
}
}
}
}

For some reason, even though the query should be formed correctly according to the AppSync and GraphQL docs (https://graphql.org/learn/schema/#union-types, https://docs.aws.amazon.com/appsync/latest/devguide/interfaces-and-unions.html), the query throws a 400 error. I already checked the lambda locally and in CloudWatch; the lambda returns data correctly. If I change the return type of the listResources query to AWSJSON, the data gets returned properly, which confirms proper functionality of the lambda. The error must be either the query return type or the schema definition. Where might the culprit be? | Appsync query union return type throws 400
(Thanks to @JohnRotenstein for the answer in a comment to my question.) There is a separate command called describe-db-cluster-snapshots that operates very similarly and outputs results for clusters, obviously, like Aurora. The only way to get the full list as seen in the Console is to combine this output with describe-db-snapshots. | I can see 77 "System" snapshots in us-east-1 on the website / AWS Console. When I run the following:

aws rds describe-db-snapshots --region us-east-1 --include-shared --include-public --no-paginate --output text

... I get 35. I tried this in AWS CloudShell as well as locally with the access/secret from https://console.aws.amazon.com/iam/home?region=us-east-1#/security_credentials so this should be running with maximum (my) privileges. I think it's excluding Aurora snapshots because the only engine value I see is postgres and not aurora-postgresql. I am going crazy trying to figure out why I can't see everything with the CLI ... any thoughts, pointers, RTFM's? UPDATE: I added --filters "Name=engine,Values=aurora-postgresql" and sure enough the output is blank, whereas --filters "Name=engine,Values=postgres" shows the 30+ entries for non-Aurora. So why are Aurora snapshots being excluded? | Why does AWS CLI rds describe-db-snapshots not include Aurora snapshots?
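The same combination can be done programmatically; a small boto3 sketch (the region is an assumption):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Instance-level snapshots (engine "postgres", "mysql", ...)
instance_snaps = rds.describe_db_snapshots(
    IncludeShared=True, IncludePublic=True
)["DBSnapshots"]

# Cluster-level snapshots; Aurora engines only show up here
cluster_snaps = rds.describe_db_cluster_snapshots(
    IncludeShared=True, IncludePublic=True
)["DBClusterSnapshots"]

print(len(instance_snaps) + len(cluster_snaps))
```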
The AWS Labs project contains the official documentation for running JanusGraph on AWS DynamoDB. However, a DynamoDB backend is not officially supported by JanusGraph, hence it is counted as one of the "3rd party storage adapters for JanusGraph" according to the JanusGraph home page. Note that this links to the AWS Labs GitHub repo. The AWS documentation refers to that repo as well. This being the case, it is safe to assume that this solution is no longer being actively maintained. Issues raised in the AWS Labs repo regarding Tinkerpop 3.3.x or 3.4.x compatibility and JanusGraph 0.4.x and 0.3.x compatibility (much less the current 0.5.x or 0.6.x) received no response. It's probably best to use a different JanusGraph backend if at all possible. | I want to deploy the latest JanusGraph version on AWS with DynamoDB as backend storage.
I went through the JanusGraph documentation and I didn't find any setting for DynamoDB in the backend storage settings.
I found one piece of documentation, https://bricaud.github.io/personal-blog/janusgraph-running-on-aws-with-dynamodb/, but there they are using the amazonlabs repo, which uses an old JanusGraph version (i.e. JanusGraph 0.2). Any help is appreciated. | How to deploy latest JanusGraph version on AWS with DynamoDB
I tried to reproduce the issue, and it fails when you run yum with python3 instead of python2:

python3 /usr/bin/yum install java-11-amazon-corretto-headless
File "/usr/bin/yum", line 30
except KeyboardInterrupt, e:
^
SyntaxError: invalid syntax

You should use python2, not python3, for yum:

python2 /usr/bin/yum install java-11-amazon-corretto-headless

It seems that in your instance the default python version was changed to python3. | Machine details:

Cloud: AWS
OS: Linux ip-10-196-64-140.eu-west-1.compute.internal 4.14.209-160.335.amzn2.x86_64 #1 SMP Wed Dec 2 23:31:46 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Error details:

[ec2-user@ip-<hostip> ~]$ sudo yum install java-11-amazon-corretto-headless
File "/bin/yum", line 30
except KeyboardInterrupt, e:
^
SyntaxError: invalid syntax | sudo yum install <package-name> is giving weird error on AWS Linux ec2 (SyntaxError: invalid syntax) |
"I tried with CLI and it still ask me Access Key ID and Secret Access Key."

For the CLI you have to use --no-sign-request for credentials to be skipped. This will only work if the objects and/or your bucket are public. CLI S3 commands, such as cp, require an S3 URL, not an S3 ARN:

s3://bucket-name

You can create it yourself from the ARN, as the bucket name will be in the ARN. In your case it would be ci****a.open:

s3://ci****a.open

So you can try the following to copy everything to the current working folder:

aws s3 cp s3://ci****a.open . --recursive --no-sign-request | I was given this info:

AWS S3 Bucket
ci****a.open

Amazon Resource Name (ARN): arn:aws:s3:::ci****a.open

AWS Region: US West (Oregon) us-west-2

How am I supposed to download the folder without an Access Key ID and Secret Access Key?
I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key.
I usually use S3 Browser, but it also asks for an Access Key ID and Secret Access Key. | How to download from s3 bucket with ARN
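A boto3 variant of the unsigned download, for anyone scripting this instead of using the CLI (the bucket name is kept masked as in the question, so substitute the real one):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# No access key / secret needed as long as the bucket and objects are public
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

bucket = "ci****a.open"  # masked bucket name from the question
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    s3.download_file(bucket, obj["Key"], obj["Key"].split("/")[-1])
```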
I think the Lambda service does not have IPv6 (dual-stack) endpoints. Only some services in some regions support IPv6, such as S3 or EC2. Even if they support IPv6, you have to make extra settings to use this, as explained in the linked docs for S3 and EC2. | I have problems using AWS CLI from an IPv6 address on Ubuntu 20.04.2 LTS.
Simple commands like aws lambda get-account-settings run idle indefinitely.
When switching to an IPv4 address, everything works fine. Same behavior when using Python's boto3 library. Any ideas? | Why are AWS CLI not working from behind a IPv6 address?
"I know that we can add an event source mapping, but that is not what I am looking for."

Event source mapping for SQS, DynamoDB and Kinesis is exactly what you are looking for. What you are showing in the screenshot is just the AWS Console representation of the event source mapping for the 3 services. | I am working with Terraform, and I was wondering if there was a way to add triggers to a Lambda function (e.g. a trigger when an S3 object is created). I know that we can add an event source mapping, but that is not what I am looking for. Basically I want to know if I can include the trigger functionality that is shown in the image below in my code. (Image shows the Lambda console with the "Add trigger" functionality highlighted.) | Is there any way I can add triggers to a lambda function in Terraform?
"why Case 2 is OK"

VPC interface endpoints have VPC scope, not subnet scope. This explains why cases 2, 1 and 4 work. Because of that, case 3 should also work; thus, the question is why case 3 did not work. Possible reasons are that in your tests you made some configuration mistake (wrong security group, for example), put the Lambda in the wrong VPC, or did not enable Private DNS for the endpoint. Thus I would recommend double-checking all the configurations for Case 3 and re-running the experiment. | I have a lambda function within a VPC that rotates an RDS password.
When I test the lambda function with a Secrets Manager VPC endpoint as follows:

Case 1. Lambda in public subnet - VPC endpoint attached to public subnet => Rotation is OK
Case 2. Lambda in private subnet - VPC endpoint attached to public subnet => Rotation is OK, although CloudWatch has one error
Case 3. Lambda in public subnet - VPC endpoint attached to private subnet => Rotation fails because of a Lambda timeout
Case 4. Lambda in private subnet - VPC endpoint attached to private subnet => Rotation is OK

I know I should not put the lambda function into a public subnet, but I want to know how a lambda function within a subnet works with a VPC endpoint. Can anyone explain why Case 2 is OK although the lambda and VPC endpoint are in different subnets? | How does Lambda within subnet access VPC endpoint?
Try changing your lambda to read the query string like below:

var params = {
Image: {
S3Object: {
Bucket: process.env.UploadBucket,
Name: event['queryStringParameters'].Image
}
},
}; | I am trying to read the query parameters passed from a frontend application in my lambda function. Below is the code of the frontend application:

method: 'GET',
url: API_ENDPOINT + '/detect',
params: {
Image: this.imageKey
}

Below is my lambda code to read the query variable:

var params = {
Image: {
S3Object: {
Bucket: process.env.UploadBucket,
Name: event.Image
}
},
};

Where am I going wrong? | AWS Lambda How to read the query parameters
Unless you are doing something special, you should probably keep it simple and run across AZs. This is a good blog post that goes into some detail about the pros, cons and reasons. As for the "at least two subnets" requirement, that is because of how the EKS control plane works: the EKS control plane is managed by AWS and it surfaces to the user by means of two ENIs that connect the API server to your own VPC. For HA reasons there are two, connected to two separate subnets. However, because of how VPC networking works, ALL subnets can communicate with each other, so regardless of where you deploy your worker nodes they will ALL be able to connect to the control plane. In other words, these are two orthogonal things (i.e. the two control plane subnets and the subnets for the worker nodes). | When you create a managed EKS nodegroup you must specify subnets; does that mean that if I specify subnets from different AZs, nodes from this group will be scaled across the subnets' AZs? Or should I create a separate node group with a single subnet for every AZ? What's the correct way to get a cluster with multi-AZ nodes? Also, when I create an EKS cluster it says you must specify at least two subnets from different AZs, but what if I want to create a single-AZ cluster? What's the point of having subnets from two AZs in this case? | Correct way to get multi-AZ cluster on EKS
I am not 100% sure if it is possible to tell Glue to keep the column, but in the meantime you could use this workaround:

projectedEvents = projectedEvents.withColumn("type_partition", projectedEvents["type"])
glue_context.write_dynamic_frame.from_options(
frame=projectedEvents,
connection_options={"path": "$outpath", "partitionKeys": ["type_partition"]},
format="parquet"
) | Does anyone know whether it's possible to tell the Glue writer to keep the column you're partitioning on in the actual dataframe? https://aws.amazon.com/blogs/big-data/work-with-partitioned-data-in-aws-glue/

Here, $outpath is a placeholder for the base output path in S3. The
partitionKeys parameter can also be specified in Python in the
connection_options dict:

glue_context.write_dynamic_frame.from_options(
frame = projectedEvents,
connection_options = {"path": "$outpath", "partitionKeys": ["type"]},
format = "parquet")When you execute this write, the type field is removed from the
individual records and is encoded in the directory structure.I would like to keep thetypefield in the individual record. | AWS Glue: Keep partitioned column as value in row after writing |
Generally, data from S3 is returned as a buffer. The file contents are part of the Body param in the response. You might be calling toString on the root object. You need to use .toString() on the Body param to make it a string. Here is some sample code that might work for your use case:

// Note: I am not using a different style of import
const AWS = require("aws-sdk")
const s3 = new AWS.S3()
const Bucket = "my-bucket"
async getObject(key){
const data = await s3.getObject({ Bucket, key}).promise()
if (data.Body) {return data.Body.toString("utf-8") } else { return undefined}
}

To return this in Express, you can add this to your route and pass the final data back once you have it: res.end(data). Consuming it in React should be the same as taking values from any other REST API. | I have a need to retrieve individual files from Node.js with Express.js. For that, I have installed aws-sdk, as well as @aws-sdk/client-s3. I am able to successfully fetch the file by using this simple endpoint:

const app = express(),
{ S3Client, GetObjectCommand } = require('@aws-sdk/client-s3'),
s3 = new S3Client({ region: process.env.AWS_REGION });
app.get('/file/:filePath', async (req,res) => {
const path_to_file = req.params.filePath;
try {
const data = await s3.send(new GetObjectCommand({ Bucket: process.env.AWS_BUCKET, Key: path_to_file }));
console.log("Success", data);
} catch (err) {
console.log("Error", err);
}
});

...but I have no idea how to return the data correctly to the React.js frontend so that the file can be further downloaded. I tried to look up the documentation, but it's looking too messy for me, and I can't even work out what the function returns. The .toString() method didn't help because it simply returns "[object Object]" and nothing else. On React, I am using a library, file-saver, which works with blobs and provides them for download using a filename defined by the user. Node v15.8.0, React v16.4.0, @aws-sdk/client-s3 v3.9.0, file-saver v2.0.5. Thanks for your tips and advice! Any help is highly appreciated. | File retrieval from AWS S3 to Node.js server, and then to React client
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit

Take a backup of all files and folders and delete everything from the above location.

This solution applies only if you can run commands like "aws s3 ls" and get the results successfully, but you get the error "The provided token has expired" while running the same from the .NET API libraries. | I am facing this weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I try to run the command "aws s3 ls" and can see all the S3 buckets. Similarly I can run any AWS command to view objects and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on other colleagues' computers. We all have access to AWS through the same role. There are no users in the IAM console.
Here is the sample code, but I don't think there is anything wrong with the code, because it works fine on other users' computers.

var _ssmClient = new AmazonSimpleSystemsManagementClient();
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
Name = "/KEY1/KEY2",
WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();

Any idea why running commands through the CLI works and API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file? | AWS .Net API - The provided token has expired
I put together a bit of code for this: https://gist.github.com/kbanman/0aa36ffe415cdc6c44293bc3ddb6448e

The idea is to upload a part to S3 whenever we receive a chunk of data in the stream, and then finalize the upload when the stream is finished. Complicating things is S3's minimum part size of 5 MB on all but the last part in the series. This means that we need to buffer data until we can form those 5 MB chunks. I accomplished this using a transformer that adds back-pressure on the content stream between each chunk upload. Parallelization is also made difficult by the fact that S3 insists on receiving the parts in order (despite asking for the parts to be numbered). | I am trying to upload files to an S3 bucket using the Node.js aws-sdk V3. I know I am supposed to be using the commands CreateMultipartUploadCommand, UploadPartCommand and so forth, but I can't find any working example of a full multipart upload. Can anyone share any code samples? Thanks in advance | aws-sdk Multipart Upload to s3 with node.js
Changing some of the attributes in DynamoDB is not permitted, for example changing the partition key or adding a Local Secondary Index. When such a change occurs, Terraform needs to replace the resource, and to replace it, it will try to delete and re-create the resource. During this process, if the table already exists, it will fail. The only options are to delete the stack, manually delete the DynamoDB table and let the template create it again, or rename the table. The documentation says it will force a new resource:

hash_key - (Required, Forces new resource) The attribute to use as the hash (partition) key. | I have a DynamoDB table created with this Terraform:

resource "aws_dynamodb_table" "materials_table" {
name = "materials"
hash_key = "MATERIAL"
billing_mode = "PROVISIONED"
read_capacity = 5
write_capacity = 5
attribute {
name = "MATERIAL"
type = "S"
}
}

The table was successfully populated (with 4 records, as noted in this post) but in order to solve the problem (in that post) I have added a field PK and set that as the hash_key field, with this:

resource "aws_dynamodb_table" "materials_table" {
name = "materials"
hash_key = "PK"
billing_mode = "PROVISIONED"
read_capacity = 5
write_capacity = 5
attribute {
name = "PK"
type = "S"
}
}

This has caused the following error when running terraform apply:

Error: error creating DynamoDB Table: ResourceInUseException: Table already exists: materials

What do I need to do in the .tf to get the change accepted? | Changing hash_key with Terraform causes Table already exists error
Your wording is a bit confusing. It says that you want to "start" an instance (which suggests that the instance already exists), but then it says that it wants to "terminate" an instance (which would permanently remove it). I am going to assume that you actually intend to "stop" the instance so that it can be used again.

You can put a shell script in the /var/lib/cloud/scripts/per-boot/ directory. This script will then be executed every time the instance starts. When the instance has finished processing, it can call sudo shutdown now -h to turn off the instance. (Alternatively, it can tell EC2 to stop the instance, but using shutdown is easier.) A rough sketch of such a worker script is shown below this entry.

For details, see: Auto-Stop EC2 instances when they finish a task - DEV Community | I have a python script which takes a video and converts it to a series of small panoramas. Now, there's an S3 bucket where a video will be uploaded (mp4). I need this file to be sent to the EC2 instance whenever it is uploaded.
This is the flow:

1. Upload the video file to S3.
2. This should trigger the EC2 instance to start.
3. Once it is running, I want the file to be copied to a particular directory inside the instance.
4. After this, I want the py file (panorama.py) to start running and read the video file from the directory, process it and then generate output images.
5. These output images need to be uploaded to a new bucket or the same bucket which was initially used.
6. The instance should terminate after this.

What I have done so far is, I have created a lambda function that is triggered whenever an object is added to that bucket. It stores the name of the file and the path. I had read that I now need to use an SQS queue, pass this name and path metadata to the queue, and use SQS to trigger the instance. And then, I need to run a script in the instance which pulls the metadata from the SQS queue and then uses that to copy the file (mp4) from the bucket to the instance.
How do I do this?
I am new to AWS and hence do not know much about SQS or how to transfer metadata and automatically trigger an instance, etc. | How to start an ec2 instance using sqs and trigger a python script inside the instance
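A rough sketch of the per-boot worker described in the answer above, assuming the Lambda puts a JSON message with the bucket and key on the queue. The queue URL, local paths and message shape are hypothetical.

```python
import json
import subprocess
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"  # placeholder

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    job = json.loads(msg["Body"])  # assumed shape: {"bucket": "...", "key": "..."}
    s3.download_file(job["bucket"], job["key"], "/home/ec2-user/input.mp4")
    subprocess.run(
        ["python3", "/home/ec2-user/panorama.py", "/home/ec2-user/input.mp4"],
        check=True,
    )
    # upload the generated images, then acknowledge the message and power off
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    subprocess.run(["sudo", "shutdown", "now", "-h"])
```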
As pointed out by Marcin in the comments, more information can be found in:ECS console -> Cluster associated with the Batch Job -> Tasks | Any ideas what can trigger this status? | AWS Batch job fails with Status Task failed to start |
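The same stopped/failure reason can also be retrieved programmatically with boto3 (the job ID is a placeholder):

```python
import boto3

batch = boto3.client("batch")

job = batch.describe_jobs(jobs=["<your-job-id>"])["jobs"][0]
print(job.get("statusReason"))  # e.g. "Task failed to start"
for attempt in job.get("attempts", []):
    # the underlying ECS/container error, same as shown in the ECS console
    print(attempt.get("container", {}).get("reason"))
```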
The aws php client documentation states: "The SDK uses the getenv() function to look for the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables." => it uses getenv(), not $_ENV. But the Symfony Dotenv component (by default) just populates $_ENV and doesn't call putenv, therefore your settings in .env files are not accessible by getenv(). Here are some options:

1. Call (new Dotenv())->usePutenv(true) (but as Symfony states: beware that putenv() is not thread safe, that's why this setting defaults to false).
2. Call putenv() manually, exclusively for the AWS settings.
3. Wrap the AWS client in your own Symfony service and inject the settings from .env. | In a Symfony 4.3 application using symfony/dotenv 4.3.11 and aws/aws-sdk-php 3.173.13: I'd like to authenticate the AWS SDK using credentials provided via environment variables, and I'd like to use the Dotenv component to provide those environment variables. This should be possible: setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables is one way to automatically authenticate with the AWS SDK, and Dotenv should turn your configuration into environment variables. However, when I set these variables in my .env.local or .env files, I get the following error:

Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata service.

This does not work:

.env.local:
AWS_ACCESS_KEY_ID=XXX
AWS_SECRET_ACCESS_KEY=XXXXXX

$ ./bin/console command-that-uses-aws-sdk

This works:

$ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXXXXX ./bin/console command-that-uses-aws-sdk

Debug info: I made a Symfony command that outputs the environment variables in $_ENV. With AWS_ACCESS_KEY_ID/SECRET in .env.local, sure enough it appears as an environment variable:
[SYMFONY_DOTENV_VARS] => MEQ_ENV,APP_ENV,APP_SECRET,DATABASE_URL,AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY,AWS_REGION,AWS_ACCOUNT
[AWS_ACCESS_KEY_ID] => XXX
[AWS_SECRET_ACCESS_KEY] => XXXXXX
... | Provide the AWS PHP SDK with credentials via Symfony DotEnv |
Amazon S3 cannot compress your data. You would need to write a program to run on an Amazon EC2 instance that would download the objects, compress them, and upload the files back to S3 (a minimal sketch is shown after this entry). An alternative is to use storage classes: if the data is infrequently accessed, use S3 Standard - Infrequent Access; this is available immediately and is cheaper as long as data is accessed less than once per month. Glacier is substantially cheaper but takes some time to restore (speed of restore is related to cost). | We have lots of files in S3 (>1B), I'd like to compress those to reduce storage costs.
What would be a simple and efficient way to do this?
Thank you, Alex | Compress billions of files in S3 bucket
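A minimal boto3 sketch of the download/compress/upload loop described in the answer; the bucket name is an assumption, and for billions of objects you would want to parallelise this (or look at S3 Batch Operations) rather than run it single-threaded.

```python
import gzip
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # assumption

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith(".gz"):
            continue  # already compressed
        data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket=bucket, Key=key + ".gz", Body=gzip.compress(data))
        s3.delete_object(Bucket=bucket, Key=key)
```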
Yes, you can deploy both using the same domain name. APIs should be deployed using api.domain.com and websites can be deployed using domain.com. For that, you need to purchase an SSL certificate with domain and subdomain (e.g. https://example.com and https://api.example.com) support and do the following:

1. Configure the certificate in AWS ACM
2. Deploy your website in the S3 bucket with CloudFront
3. Deploy APIs in EC2 with the support of a load balancer (ELB)
4. Configure Route53 and define two routes, i.e. create records with the 'A' record type in Route53 with the ELB address and the CloudFront address.

See sample deployment architecture | We have to deploy RESTful web services (APIs) and static pages in the AWS environment.
Currently, our Webservice is hosted in EC2 instance with one ELB and Route53. Also, the static pages are deployed in the S3 bucket. The Webservice and Website, both should be in the same domain.When the user calls "www.domain.com/" it should be routed to the S3 server. However the API calls (www.domain.com/api/**) should be routed to EC2 through ELB. Is there any way to
route API calls to ELB and website access calls to S3 using Route53?
or What is the best approach to resolve this? | How to deploy a website and webservice in AWS using same domain name |
The following resource is incorrect:

arn:aws:iam::197709948620:instance/*

instance is ec2, not iam. It should be:

arn:aws:ec2::197709948620:instance/* | I am getting the following error: "IAM resource path must either be "*" or start with user/, federated-user/, role/, group/, instance-profile/, mfa/, server-certificate/, policy/, sms-mfa/, saml-provider/, oidc-provider/, report/, access-report/." Please help me out here.
Here is my code.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:StartSession"
],
"Resource": [
"arn:aws:iam::197709948620:instance/*"
],
"Condition": {
"StringLike": {
"ssm:resourceTag/Finance": [
"Web Server"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"ssm:TerminateSession"
],
"Resource": [
"arn:aws:ssm:*:*:session/${aws:username}-*"
]
}
]
} | Getting error while creating the policy IAM resource path must either be |
- match:
- uri:
prefix: /blog
name: blog.mydomain.com
rewrite:
authority: blog.mydomain.com
uri: /blog
route:
- destination:
host: blog.mydomain.com

Add the above rule in the virtual service, then create this service entry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: blog
spec:
hosts:
- blog.mydomain.com
location: MESH_EXTERNAL
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS | I have an Istio VirtualService with a match and a route and redirect URL defined as follows:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-pro
spec:
hosts:
- "*"
gateways:
- my-gateway
http:
- match:
- uri:
prefix: /events
route:
- destination:
host: event-service
port:
number: 8000
- match:
- uri:
prefix: /blog
redirect:
uri: /
authority: blog.mydomain.com
- route:
- destination:
host: default-service
port:
number: 8000

This VirtualService works as follows:

1. If the request is www.mydomain.com/events it will forward to event-service.
2. If the request is www.mydomain.com/blog it will redirect the host to blog.mydomain.com.
3. If the request is www.mydomain.com/anyother it will forward to default-service.

In case no. 2 I am redirecting www.mydomain.com/blog to the blog.mydomain.com page because my blog page is hosted on that domain. Now my problem is that while redirecting the URL, the browser URL changes to blog.mydomain.com. I want it to remain the same, www.mydomain.com/blog, but the content of blog.mydomain.com should be displayed on the screen. | istio: VirtualService url rewriting or forwarding
How DynamoDB works is explained in the excellent AWS presentation: AWS re:Invent 2018: Amazon DynamoDB Under the Hood: How We Built a Hyper-Scale Database. The relevant part to your question is at minute 6:46, where they talk about storage leader nodes. So when you put or update the same item, your requests will go to a single, specific storage leader node responsible for the partition where the item exists. This means that all your concurrent updates will end up on the single node. The node probably (not explicitly stated) will be able to queue the requests, presumably in a similar way as for global tables discussed at time 51:58, which is "last writer wins" based on timestamp. There are other questions discussing similar topics, e.g. here. | I have a dynamodb table called events.

Table schema is:

partition_key : <user_id>
sort_key : <month>
attributes: [<list of user events>]

I opened 3 terminals and ran the update_item command at the same time for the same partition_key and sort_key.

Question: How does DynamoDB work in this case? Will DynamoDB follow an approach like FIFO? Or will DynamoDB perform the update_item operations in parallel for the same partition key and sort key? Can someone tell me how DynamoDB works? | Dynamodb update item at a same time
Queues should be a List of String. This means that instead of:

Queues: !Ref SQSQueue

you should have:

Queues:
- !Ref SQSQueue

or shorter: Queues: [!Ref SQSQueue] | I am trying to create an SQS queue and its associated access policy using CloudFormation. I tried a few iterations but it keeps giving me this error:

Value of property Queues must be of type List of String

Below is my template. Can anyone help me pinpoint the issue in this?

SQSQueue:
Type: "AWS::SQS::Queue"
Properties:
DelaySeconds: "0"
MaximumMessageSize: "262144"
MessageRetentionPeriod: "10800"
ReceiveMessageWaitTimeSeconds: "0"
VisibilityTimeout: "30"
QueueName: "ScanQueueItems"
DocSQSSNSPolicy:
Type: AWS::SQS::QueuePolicy
Properties:
PolicyDocument:
Id: MessageToSQSPolicy
Statement:
Effect: Allow
Principal: "*"
Action:
- SQS:SendMessage
Resource: !GetAtt SQSQueue.Arn
Queues: !Ref SQSQueue | Error in creating SQS Queue and its access policy through Cloudformation |
Terraform's documentation for this isn't very clear. The format for target is integrations/integration-id. In your case, use "integrations/${aws_apigatewayv2_integration.mrw-int-get.id}" | I am creating a Terraform script for Amazon API Gateway Version 2, using the HTTP protocol type. I am not able to figure out how to link the gateway route with the integration. I have tried using the "target" attribute in "aws_apigatewayv2_route" but it's not working. Below is the code I have written for it.

resource "aws_apigatewayv2_api" "mrw-api" {
name = "mrw-http-api"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_vpc_link" "mrw-link" {
name = "mrw-link"
security_group_ids = [data.aws_security_group.mrw-sg.id]
subnet_ids = [data.aws_subnet.mrw-subnet.id, data.aws_subnet.mrw-subnet2.id]
}
resource "aws_apigatewayv2_route" "healthcheck" {
api_id = aws_apigatewayv2_api.mrw-api.id
route_key = "GET /"
target = aws_apigatewayv2_integration.mrw-int-get.id
}
resource "aws_apigatewayv2_integration" "mrw-int-get" {
api_id = aws_apigatewayv2_api.mrw-api.id
integration_type = "HTTP_PROXY"
connection_type = "VPC_LINK"
connection_id = aws_apigatewayv2_vpc_link.mrw-link.id
integration_uri = aws_lb_listener.mrw-lb-listener.arn
integration_method = "GET"
tls_config {
server_name_to_verify = var.tls_server_name
}
}

Can anyone help with how to link the route with the integration? | How to link "aws_apigatewayv2_route" with "aws_apigatewayv2_integration"?
Based on the comments: the issue was identified by going to the CloudFormation console and checking the Events tab of the EB stack that failed to deploy. The error found was:

You cannot provide subnets from multiple locales

Based on this it was inferred that there were too many subnets used for the EB environment. Removal of the extra subnets was the solution to the problem. (The same stack events can also be pulled programmatically; see the sketch below.) | Creating an Elastic Beanstalk application on AWS failed with GRAY color in health and the errors below; I couldn't move forward.

WARN: Environment health has been set to RED

ERROR: Cannot update ELB target group when there is no ELB in the group resources

ERROR: Creating security group named: awseb-e-securitygroupname-stack-AWSEBSecurityGroup-THEIDOFYOURSECURTYGROUP failed Reason: Resource creation cancelled

ERROR: Stack named 'awseb-e-somename-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBV2LoadBalancer, AWSEBSecurityGroup]

ERROR: Creating load balancer failed Reason: You cannot provide subnets from multiple locales. (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: ValidationError; Request ID:[the id]; Proxy:null) | Creating Elastic Beanstalk Application failed
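The same CloudFormation events can be pulled programmatically; a small boto3 sketch, with the Beanstalk-generated stack name as a placeholder:

```python
import boto3

cfn = boto3.client("cloudformation")

events = cfn.describe_stack_events(StackName="awseb-e-somename-stack")["StackEvents"]
for e in events:
    if e["ResourceStatus"].endswith("FAILED"):
        print(e["LogicalResourceId"], e.get("ResourceStatusReason"))
```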
It looks like you maybe just need to pass in authMode as well in your API.graphql call. In the "SSR Support for AWS Amplify" blog post, inside the section titled "Making an authenticated API request in getServerSideProps", you'll see a code sample that looks like the following (note the addition of authMode below, which I don't see in your code sample above):

const { API } = withSSRContext(context)
let movieData
try {
movieData = await API.graphql({
query: listMovies,
authMode: "AMAZON_COGNITO_USER_POOLS"
});
console.log('movieData: ', movieData)
} catch (err) {
console.log("error fetching movies: ", err)
}
return {
props: {
movies: movieData ? movieData.data.listMovies.items : null
}
}
} | I'm trying to make a user-authenticated GraphQL request on the server side in getServerSideProps using AWS Amplify and Next JS. Users in my AWS database can only access the data if they are the owner of the document. The error I get from this is "No current user"... (which is being logged on the server).
The problem is that I need the user available in getServerSideProps, so I can make the authenticated request happen. Here is the code I currently have.

index.tsx:

import Amplify, { API, graphqlOperation, withSSRContext } from "aws-amplify";
import config from "../aws-exports";
Amplify.configure({ ...config, ssr: true });
function Index({ bankAccounts }: { bankAccounts: BankAccount[] }) {
return (
...stuff...
);
}

index.tsx (getServerSideProps):

export const getServerSideProps: GetServerSideProps = async (context) => {
const { API } = withSSRContext(context);
const result = (await API.graphql({
query: listBankAccounts,
})) as {
data: ListBankAccountsQuery;
errors: any[];
};
if (!result.errors) {
return {
props: {
bankAccounts: result.data.listBankAccounts.items,
},
};
}
return {
props: {
bankAccounts: [],
},
};
};

I would greatly appreciate any help or advice you could offer! | No current user - getServerSideProps Next JS and AWS Amplify
I would suggest repartitioning the table by dt only (yyyy-MM-dd) instead of year, month, day. This is simple and partition pruning will work, though queries using a year-only filter like where year > '2020' should be rewritten as dt > '2020-01-01' and so on. Also, BTW, in Hive partition pruning works fine with queries like this:

where concat(year, '-', month, '-', day) >= '2018-03-07'
and
concat(year, '-', month, '-', day) <= '2020-03-06'

I can't check whether the same works in Presto or not, but it is worth trying. You can use the || operator instead of concat(). | We have large datasets partitioned in S3 like s3://bucket/year=YYYY/month=MM/day=DD/file.csv. What would be the best way to query the data in Athena from different years and take advantage of the partitioning? Here's what I tried for data from 2018-03-07 to 2020-03-06:

Query 1 - running for 2min 45s before I cancel

SELECT dt, col1, col2
FROM mytable
WHERE year BETWEEN '2018' AND '2020'
AND dt BETWEEN '2018-03-07' AND '2020-03-06'
ORDER BY dt

Query 2 - runs for about 2 min. However I don't think it would be efficient if the period were, for example, from 2005 to 2020.

SELECT dt, col1, col2
FROM mytable
WHERE (year = '2018' AND month >= '03' AND dt >= '2018-03-07')
OR year = '2019' OR (year = '2020' AND month <= '03' AND dt <= '2020-03-06')
ORDER BY dt | AWS Athena - Query data from different years in partitions |
An option is to put in place a temporary SCP on the AWS account to deny all actions for the role session of the user, as shown below:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "UserRestrictions",
"Effect": "Deny",
"Action": "*",
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"aws:userId": [
"AROAEXAMPLEROLEID:[email protected]"
]
}
}
}
]}

After a day or so (or the max role duration) you could remove the SCP. This is useful if you only have a single role session, but in the scenario of an AWS SSO user, the user probably has access to multiple roles across multiple AWS accounts. Rather than adding multiple SCPs you could add an SCP higher up in the organizational hierarchy that denies actions for all role sessions for the user, as shown below:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "UserRestrictions",
"Effect": "Deny",
"Action": "*",
"Resource": [
"*"
],
"Condition": {
"StringLike": {
"aws:userId": [
"*:[email protected]"
]
}
}
}
]} | I'm currently managing an AWS SSO solution, using it with Azure AD. For our use case, we need to be able to revoke the access/session of a user. In Azure AD it's pretty simple: go to the user, block them, revoke their sessions. It's done; the user needs to re-log but won't be able to do so. In AWS SSO it looks a bit harder; I can't seem to find a way to instantly revoke a session. I can disable the user's access, but once they have a session, even deleting the user/group from AWS SSO will not terminate the session. This causes quite a problem as it is not compliant with my security standards. Any ideas? Thanks, people | How to revoke a user session when using AWS SSO?
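If you want to apply the temporary SCP programmatically rather than through the console, a boto3 sketch might look like the following; the policy name, OU/account target ID and the userId condition value are assumptions.

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "UserRestrictions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": ["*"],
        "Condition": {"StringLike": {"aws:userId": ["*:[email protected]"]}},
    }],
}

policy = org.create_policy(
    Name="deny-revoked-sso-user",          # hypothetical name
    Description="Temporarily deny all actions for a revoked SSO user",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",           # OU or account ID, placeholder
)
```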
There are two aspects here: knowing which environment to use, and restricting the environment that can be used.

If the Production and Test systems are in separate AWS accounts, then they can access parameters with the same key, since the values will be different. If the systems are in the same AWS account, then the code will need to know its own environment so that it requests the correct parameter (via a different key). This information could be provided in a configuration file or, if both instances are identical, a tag could be added to the instance. The code could check the tag on its own instance, and then retrieve the appropriate parameters (a sketch of this pattern follows this entry). This could be further enforced by restricting access to the parameters by assigning different IAM permissions to each instance. This way, the IAM role controls which parameters they can access. The code could attempt to retrieve both parameters and then use whichever one was successfully returned, or the policy could simply be used to ensure that the right parameters are accessed. | We're trying to move some keys and secrets from .env to AWS Parameter Store for better security.
We have two EC2 instances, one for production env and another one for staging env.
Each instance has keys/values defined in a .env file below the public folder, so it's hidden from public access. The keys in the .env file are identical between the production and staging environments; just the values are different.
So the code to load these values was the same between production and staging.
Now that we're trying to move these keys/values to AWS Parameter Store, and since Parameter Store is account-level scope, is there a way to assign different values based on the EC2 instance? e.g.

secret = getSecretFromEnv('MY_KEY'); // different values are loaded depending on EC2 instance

has become (what we're trying to avoid doing)

if prod {
secret = getSecureParameterFromAws('MY_PROD_KEY');
} else {
secret = getSecureParameterFromAws('MY_STAGING_KEY');
} | AWS Parameter Store: Different keys for different environments |
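A minimal boto3 sketch of the "different key per environment" pattern described in the answer. How the environment name is obtained (environment variable, config file or instance tag) and the parameter naming scheme are assumptions.

```python
import os
import boto3

ssm = boto3.client("ssm")

# "APP_ENV" could equally come from a config file or an EC2 instance tag
env = os.environ.get("APP_ENV", "staging")   # "production" on the prod instance

param = ssm.get_parameter(Name=f"/{env}/MY_KEY", WithDecryption=True)
secret = param["Parameter"]["Value"]
```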
In order to use the split function, first install serverless-plugin-utils:

npm install -D serverless-plugin-utils
- serverless-plugin-utilsThe split function is now available. If I have a following custom variable sectioncustom:
my_var: 'value1,value2,value3,value4'and I want to split it into an array like belowvalues:
- value1
- value2
- value3
- value4thensplitfunction can be used from serverless utils pluginvalues:
${split(${self:custom.my_var}, ',')} | I'm using serverless framework to deploy an API on AWS. I have the following in myserverless.ymlfile:custom:
vpcSettings:
private:
securityGroupIds:
private:
fn::split:
delimiter: ','
value: ${env:VPC_SG_ID}

VPC_SG_ID contains the following string: sg-1111111111,sg-222222222,sg-3333333333

However, when deploying the application, I get the following error:

An error occurred: MyLambdaFunction - Value of property SecurityGroupIds must be of type List of String.

If I hardcode the SGs list, it's working without any issue:

custom:
vpcSettings:
private:
securityGroupIds:
private:
- "sg-1111111111"
- "sg-2222222222"
- "sg-3333333333"Why the fn::split function is not returning a list of strings?Edit:The following configuration results in the same errorcustom:
vpcSettings:
private:
securityGroupIds:
private:
Fn::Split:
- ','
- ${env:VPC_SG_ID} | Using fn::split in Serverless yaml configuration not working |
Django-storages uses boto3 for uploads, as can be seen in its source code here:

obj.upload_fileobj(content, ExtraArgs=params)

which is the upload_fileobj method in boto3: "Upload a file-like object to this bucket. The file-like object must be in binary mode. This is a managed transfer which will perform a multipart upload in multiple threads if necessary."

Looking at the save method in django-storages, it further explains: "Save new content to the file specified by name. The content should be a proper File object or any Python file-like object, ready to be read from the beginning."

So to answer your question, the file must go through your server first so that you can create a file-like object for it. It does not necessarily have to be stored on a hard drive; the file data can be in memory. It also does not have to be fully stored; it should be possible to stream it as it is being uploaded to your app, but I'm not certain whether this works in Django. To sum up, all uploaded files must go through your server, taking its processing power and bandwidth (see the sketch below this entry). | I am using the django-storages library for uploading my files to S3. I want to understand how this file upload works: does the file upload to the S3 folder directly, or does it upload to my server and then go to S3? How does the file upload work in django-storages? If I upload multiple files, will they use the bandwidth of my server, or will they be uploaded directly to S3 and not slow down my server? Thank you | Django Storages [file upload to AWS S3]
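A small boto3 sketch of what the save() path boils down to; the bucket and key names are assumptions.

```python
import io
import boto3

bucket = boto3.resource("s3").Bucket("my-bucket")  # assumption

# Roughly what django-storages does on save(): the bytes stream through
# your server process, but never need to be written to local disk.
content = io.BytesIO(b"uploaded file data")
bucket.Object("uploads/report.pdf").upload_fileobj(content)
```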
This is simply no; it's impossible. EFS file systems are always created within a customer VPC, so Lambda functions using the EFS file system must all reside in the same VPC, as stated here (https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications).
Lambda should be placed within the same VPC where the EFS is created. There might be different reasons you didn't want to place your Lambda function in a VPC: very slow initialization (creating the ENI and attaching the Lambda to it takes significantly longer), additional configuration to place it in the VPC, etc. One mitigation is to use the provisioned concurrency feature of Lambda (it comes with more cost).
In this way, you can get multiple Lambda instances ready to use at any time by keeping them warm. Cheers | I want to connect AWS EFS to my AWS Lambda function, without connecting the Lambda function to a VPC. Is it possible to do this? | Is it possible to connect an AWS Lambda function without a VPC connection to AWS EFS?
You should try quoting the filter pattern. From the docs: "Metric filter terms that include characters other than alphanumeric or underscore must be placed inside double quotes ("")." Thus the FilterPattern could be:

FilterPattern: '"abc_found: True"'

You may try different ways of escaping double quotes in CloudFormation if this does not work as expected. | I'm trying to create a metric filter in a CloudWatch template which includes a colon, e.g.:

TotalLocationFound:
Type: AWS::Logs::MetricFilter
Properties:
FilterPattern: "abc_found: True"
LogGroupName: "/aws/lambda/blah"
MetricTransformations:
-
MetricValue: "1"
MetricNamespace: "ProductionClient"
MetricName: "TotalAbcFound"It seems to take issue with the filter pattern. I can use that same pattern from the console but when I deploy using CloudWatch command line I get this error:Invalid metric filter pattern (Service: AWSLogs; Status Code: 400; Error Code: InvalidParameterExceptionPlaying with it seems to point to the issue being the :Thanks | CloudWatch Template - Metric Filter with a colon |
There is only one way to get this done: a migration Lambda trigger. In short:

1. Create the new Cognito user pool.
2. Create the migration Lambda.
3. Add this Lambda as a trigger for login and/or forgotten password.
4. Point users at the new Cognito pool.

Upon login, Cognito will check locally, and if the user is not found, it will use the trigger to check the other Cognito pool. If authentication succeeds, the old pool returns the user's attributes, and the new pool stores them together with the password the user just entered (a sketch of such a trigger is shown below). More info here: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-migrate-user.html | Is it possible to export the AWS Cognito users with passwords from one pool and import them into another pool?
Possible way: I know we can ask users to reset their password, but I just wanted to know if there is any other way apart from this. | Export AWS Cognito Users with password
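A stripped-down sketch of such a user-migration trigger in Python; the old pool/client IDs are placeholders and the attribute mapping is an assumption.

```python
import boto3

cognito = boto3.client("cognito-idp")
OLD_POOL_ID = "us-east-1_OLDPOOL"    # placeholder
OLD_CLIENT_ID = "old-app-client-id"  # placeholder; client must allow ADMIN_NO_SRP_AUTH

def handler(event, context):
    if event["triggerSource"] == "UserMigration_Authentication":
        # Validate the submitted password against the old pool; raises on failure
        cognito.admin_initiate_auth(
            UserPoolId=OLD_POOL_ID,
            ClientId=OLD_CLIENT_ID,
            AuthFlow="ADMIN_NO_SRP_AUTH",
            AuthParameters={
                "USERNAME": event["userName"],
                "PASSWORD": event["request"]["password"],
            },
        )
        event["response"]["userAttributes"] = {
            "email": event["userName"],   # assumes email is the username
            "email_verified": "true",
        }
        event["response"]["finalUserStatus"] = "CONFIRMED"
        event["response"]["messageAction"] = "SUPPRESS"
    return event
```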
S3 doesn't natively support this. If you upload the same file over and over again, a new version is added. Depending on your use case, if using the AWS CLI, you could add the --size-only flag when using the aws s3 sync command. According to the docs, adding this option "makes the size of each key the only criteria used to decide whether to sync from source to destination", so it will only copy files to S3 if the size of the file has changed. This may or may not work for your use case, since it only factors in the size, so be sure to take that into consideration (a checksum-based variant is sketched below). | I am trying to upload a large number of files to an AWS S3 bucket. I will also need to enable file versioning to have a backup in case some files get accidentally overwritten. However, with AWS S3 versioning currently enabled, when I upload the exact same file that is already there, AWS stores both versions of the exact same file. This is an issue, as I will be uploading the same file multiple times, and in that case I would like versioning not to be used, to prevent excess data charges from storing multiple versions of the same object. However, if a change is made to the file, then I would like AWS versioning to be in use. Is there a way to configure AWS S3 bucket versioning such that duplicate uploaded files are ignored but changed or new files have versioning activated? (If it helps, the script that I am using to do this uses Python and the AWS CLI.) | Not storing duplicate files with AWS Versioning
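If size alone is too coarse, here is a boto3 sketch of a checksum-based check before uploading (assumption: single-part uploads, where the S3 ETag equals the object's MD5; multipart ETags are not plain MD5s).

```python
import hashlib
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def upload_if_changed(bucket, key, path):
    with open(path, "rb") as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    try:
        remote_etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    except ClientError:
        remote_etag = None  # object does not exist yet
    if local_md5 != remote_etag:
        s3.upload_file(path, bucket, key)
```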
It does not work because you need to provide a type constraint for your complex variable if you want to pass it through environment variables:

variable "list_of_jobs" {
type = list(string)
default = ["myjob1","myjob2","myjob3"]
} | I have aglue jobinaws.
I made a loop.Invariables.tfvariable "list_of_jobs" {
type = list(string)
default = ["myjob1","myjob2","myjob3"]
}

In glue.tf:

resource "aws_glue_job" "this" {
for_each = toset(var.list_of_jobs)
name = each.value
role_arn = var.role_arn
command {
name = "pythonshell"
python_version = 3
script_location = "s3://mybucket/${each.value}/run.py"
}
}

In main.tf:

variable "region" {}
variable "list_of_jobs" {}
module "my_glue" {
source = "../terraform-glue"
region = var.region
list_of_jobs = var.list_of_jobs
}

This loop works fine, and I have 3 glue jobs after the execution of terraform apply.
The problem, when I am trying to make:export TF_VAR_list_of_jobs='["myjob1","myjob2","myjob3"]'In this case, when I am makingterraform apply, I am receiving this:Error: Invalid function argument
on ../terraform-glue/glue.tf line 2, in resource "aws_glue_job" "this":
2: for_each = toset(var.list_of_jobs)
|----------------
| var.list_of_jobsis "[\"myjob1\",\"myjob2\",\"myjob3\"]"
Invalid value for "v" parameter: cannot convert string to set of any single
type.

Input variables do not work either; only a variable from variables.tf works. Could you help me please? I have been trying to resolve this all night. | Terraform does not work input and environment variables
You can use GitLab auto-scaling with EC2 instances: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/.
If you have no EC2 instance that you can use as a runner manager, create a t4g.nano (cheap) and configure it according to the instructions in the link. Based on your runner configuration, the EC2 instance will die after either IdleTime (seconds) or MaxBuilds (number):

...
[runners.machine]
IdleCount = 1
IdleTime = 1800
MaxBuilds = 100
MachineDriver = "amazonec2"
...
]
[[runners.machine.autoscaling]]
Periods = ["* * 9-17 * * mon-fri *"]
IdleCount = 50
IdleTime = 3600
Timezone = "UTC"
[[runners.machine.autoscaling]]
Periods = ["* * * * * sat,sun *"]
IdleCount = 5
IdleTime = 60
Timezone = "UTC"
...

In around 2-3 minutes your pipeline will spin up the EC2 instance. Remember to use the proper tag for your runner manager; it will spin up that instance for you and run your job in it. Below is the start of a pipeline inside an EC2 instance for a job with a kaniko image. The instance will be terminated according to the schedule assigned in your config.toml (above): from Monday to Friday from 9am till 5pm after 3600 seconds, from Saturday to Sunday after 60 seconds, and at any other time after 1800 seconds. | I have a scenario where I need to spin up a new EC2 instance, deploy a docker image inside the EC2 and run some tests. After all the tests have been executed I need to remove the EC2 instance. How can I do this using GitLab CI/CD? I am pretty new to this; does anyone know if this is something achievable using GitLab? | How to spin up a new EC2 instance within a gitlab CI/CD pipeline
You have to rename the table to extract the fields from the external schema:

SELECT
a.content.my_boolean,
a.content.my_string,
a.content.my_struct.value
FROM schema.tableA a;

I had the same issue with my data. I really don't know why it needs this cast, but it works. If you need to access elements of an array you have to explode it like:

SELECT member.<your-field>,
FROM schema.tableA a, a.content.members as member;Reference | I have a table inAWS Glue, and the crawler has defined one field as array.
The content is inS3files that have ajsonformat.
The table isTableA, and the field ismembers.There are a lot of other fields such as strings, booleans, doubles, and even structs.I am able to query them all using a simpel query such as:SELECT
content.my_boolean,
content.my_string,
content.my_struct.value
FROM schema.tableA;The issue is when I addcontent.membersinto the query.
The error I get is:[Amazon](500310) Invalid operation: schema "content" does not exist.Contentexists because i am able to select other fiels from the main key in the json (content).
Probably is something related with how to perform the query agains array field inSpectrum.Any idea? | How to query an array field (AWS Glue)? |
Probably you have to do it manually. There are open issues on GitHub for that problematic dependency which are still not resolved: "Dependency between subnets and LBs/VPC Endpoints not detected" and "endpoint service NLB change". | After making some changes to the endpoint service, for example adding a new tag, Terraform attempts to delete the network load balancer first when running terraform apply, and it doesn't succeed since the NLB is associated with the endpoint service. The endpoint service should be the first to get deleted, so the network load balancer can be deleted later. Is there a way to set which should get deleted first?

module.Tester_vpc.data.aws_instances.webservers: Refreshing state...
Error: Error deleting LB: ResourceInUse: Load balancer 'arn:aws:elasticloadbalancing:ap-south-1:123456:loadbalancer/net/myNLB/123456' cannot be deleted because it is currently associated with another service
status code: 400, request id: 25944b2d-49c7-1234-a32c-faeb6e2e7c7f
Here are the NLB resources.
resource "aws_vpc_endpoint_service" "nlb_service" {
count = var.create_lb ? 1 : 0
acceptance_required = false
network_load_balancer_arns = [aws_lb.myNLB[0].arn]
}
resource "aws_vpc_endpoint" "service_consumer" {
count = var.create_lb ? 1 : 0
vpc_id = data.aws_vpc.vpc_id.id
subnet_ids = data.aws_subnet_ids.private_subnet_ids.ids
security_group_ids = [data.aws_security_group.sG_myVPC.id]
vpc_endpoint_type = "Interface"
private_dns_enabled = false
service_name = aws_vpc_endpoint_service.nlb_service[0].service_name
tags = {
Name = "tester_service" # When adding a tag, NLB attempts to get deleted first and fails.
}
} | Terraform - could not delete loadbalancer when its part of a endpoint service |
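Illustrating the manual workaround mentioned in the answer above: a minimal boto3 sketch of deleting the endpoint service before the NLB. The IDs are placeholders, and after deleting resources outside Terraform you would still need to reconcile the Terraform state (e.g. with terraform state rm).

import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Delete the VPC endpoint service first, since it references the NLB ...
ec2.delete_vpc_endpoint_service_configurations(
    ServiceIds=["vpce-svc-0123456789abcdef0"]   # placeholder service id
)

# ... and only then delete (or let Terraform recreate) the load balancer.
elbv2.delete_load_balancer(
    LoadBalancerArn="arn:aws:elasticloadbalancing:ap-south-1:123456789012:loadbalancer/net/myNLB/0123456789abcdef"  # placeholder ARN
)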
You can use the S3 TransferUtility API of the AWS Mobile SDK: https://github.com/awslabs/aws-sdk-android-samples/blob/main/S3TransferUtilitySample/S3TransferUtilityTutorial.md | I am running a scheduled python script on an EC2 instance which creates and exports a JSON file to my S3 bucket for a backend. I am hitting a brick wall attempting to download the JSON file for use in an Android application. The AWS Mobile SDK documentation suggests the mobile SDK is deprecated in favor of AWS Amplify, but Amplify seems to be overkill for this simple backend connection, and attempts to integrate it into my project are proving nightmarish. Is there a simple way to download from S3 using an Android HTTPS request library, or should I be using a different resource for backend storage entirely? Or is the correct route to continue working to use the Amplify suite to make the connection? | Method to download object from AWS S3 for use in Android app?
The issue is that the route in the route table was for CIDR range 0.0.0.0/16, which actually only covers outbound routes between 0.0.0.0 and 0.0.255.255. The correct route is 0.0.0.0/0, which covers all IPv4 addresses; the route table can then route all outbound traffic to this route, assuming there are no more specific routes (a minimal boto3 sketch of swapping the route follows this entry). For future reference, a great tool to use is cidr.xyz. | I created an AWS EC2 instance where my EC2 instance is in the correct VPC and subnet. Below is some evidence (security group, inbound, outbound, NACL, and route tables respectively). N.B. NACLs have everything open for now, but I do accept that it needs to be cleaned up to have more aggressive control. Question - what is it that I am doing wrong? Also, these are what the AWS docs suggest too, so what's missing? Thanks for the answers in advance. | AWS EC2 Instance - Connection timed out BUT SG exists
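A minimal boto3 sketch of the fix described above - removing the 0.0.0.0/16 route and adding a 0.0.0.0/0 default route. The route table and internet gateway IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder
IGW_ID = "igw-0123456789abcdef0"          # placeholder

# Remove the overly narrow route ...
ec2.delete_route(RouteTableId=ROUTE_TABLE_ID, DestinationCidrBlock="0.0.0.0/16")

# ... and add the catch-all default route via the internet gateway.
ec2.create_route(RouteTableId=ROUTE_TABLE_ID,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=IGW_ID)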
Your use of:
kms_key_id = "data.aws_kms_key.rds_key.arn"
will result in kms_key_id being literally the string "data.aws_kms_key.rds_key.arn". It should be either (tf 0.12+):
kms_key_id = data.aws_kms_key.rds_key.arn
or for tf 0.11:
kms_key_id = "${data.aws_kms_key.rds_key.arn}" | Have the code below:
data "aws_kms_key" "rds_key" {
key_id = "alias/rds_cluster_enryption_key"
}
And I want to use this key to encrypt the RDS instance:
resource "aws_rds_cluster" "tf-aws-rds-1" {
cluster_identifier = "aurora-cluster-1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
database_name = "cupday"
master_username = "administrator"
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
storage_encrypted = true
kms_key_id = "data.aws_kms_key.rds_key.arn"
}
However, I'm getting an error like below:
Error: "kms_key_id" (data.aws_kms_key.rds_key.id) is an invalid ARN: arn: invalid prefix
on main.tf line 42, in resource "aws_rds_cluster" "tf-aws-rds-1":
42: kms_key_id = "data.aws_kms_key.rds_key.id"
Error: "kms_key_id" (data.aws_kms_key.rds_key.arn) is an invalid ARN: arn: invalid prefix
on main.tf line 42, in resource "aws_rds_cluster" "tf-aws-rds-1":
42: kms_key_id = "data.aws_kms_key.rds_key.arn"
How on earth should I refer to them? I do not want to disclose my account id in kms_key_id. | Invalid arn error for terraform code with kms data resource
I had an identical problem, and here's how I solved it. By default, Dash automatically serves all of the files included in the ./assets folder. Conversely, a webserver on AWS Elastic Beanstalk expects your CSS and other files to be placed in the ./static folder. You may create a configuration file for AWS Elastic Beanstalk to set configuration options for your static files; I took another approach: change your resources' folder name from assets to static, and update your Python code when you declare the Dash app:
app = dash.Dash(name=__name__,
title="My App",
assets_folder ="static",
assets_url_path="static")
Deploy your app as usual - AWS Elastic Beanstalk will take the resources and serve them from the static folder. My final app has the following structure:
static/
favicon.ico
logo.png
style.css
application.py # by default, AWS Elastic Beanstalk expects this app name
requirements.txt
References: Adding Your Own CSS and JavaScript to Dash Apps; Using the Elastic Beanstalk Python platform with static files. | I was wondering if anyone would be able to advise me on how to upload a Dash app to Elastic Beanstalk. I understand from various people's blogs that I need to ensure:
application = app.server
if __name__ == '__main__':
application.run_server(debug=True,port=8080)
Then I need to freeze the requirements using pip freeze > requirements.txt, zip all the files and upload to AWS Elastic Beanstalk. In all the explanations they have very simple applications with only a single file that have no "assets" folders containing CSS files and no connected databases. I was wondering what the process is when you have an assets folder containing various images and CSS folders of styles etc. I have also connected an RDS database already set up using the URI with SQLAlchemy. The app works perfectly when it is on my local machine. I tried zipping every individual file but it has not worked and I am getting quite desperate. I understand that Dash looks for the "assets" folder. Structure found below. Thank you very much for your help in advance. If anyone could highlight the exact steps I need to do I would very much appreciate it. I am very new to this. Regards | Deploy Dash with assets folder to AWS Elastic Beanstalk
Normally, to communicate with CloudWatch (CW), for example when using the CloudWatch agent, your instance must be able to connect to a CW public endpoint. For CW, the endpoints are listed here. These are regular HTTP/HTTPS public endpoints, which means that your instance generally requires an internet connection; without it, it will not be able to reach the internet and the endpoints. This requires your instance to be in a public subnet or to use a NAT gateway. Internet access is often not desired due to enhanced security requirements. This is where VPC endpoints come into play: they enable resources in a private VPC or subnet (i.e. without any internet access) to connect privately to CW or other services (e.g., S3, Lambda). Is a VPC with an internet connection required to send data to CloudWatch? Yes, unless you use a VPC interface endpoint in your VPC (a minimal boto3 sketch of creating one follows this entry). | I have read the AWS documentation on the connection between the resources of a VPC and CloudWatch but I have not really understood what the objective is. Does this secure the data transport between the VPC and CloudWatch?
Or is it because the internet is required for communication between an instance in a VPC and CloudWatch? Is a VPC with an internet connection required to send data to CloudWatch? | Use CloudWatch with VPC Endpoints (PrivateLink)
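A minimal boto3 sketch of the interface endpoint mentioned in the answer above; all IDs and the region are placeholders for your own VPC resources.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-1.monitoring",  # CloudWatch service name
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)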
First, check that localstack is configured to run sts. In docker-compose this was just the SERVICES environment variable:
services:
local-aws:
image: localstack/localstack
environment:
EDGE_PORT: 4566
SERVICES: secretsmanager, sts
Then make sure that you set the sts endpoint as well as the service you require:
provider "aws" {
region = "eu-west-2"
endpoints {
sts = "http://local-aws:4566"
secretsmanager = "http://local-aws:4566"
}
} | I'm having trouble getting a Terraform AWS provider to talk to localstack. Whatever I try I just get the same error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: dc96c65d-84a7-4e64-947d-833195464538
This error suggests that the provider is making contact with an HTTP server but the credentials are being rejected (as per any 403). You might imagine the problem is that I'm feeding in the wrong credentials (through environment variables). However, the hostname local-aws exists in my /etc/hosts file, but blahblahblah does not. If I swap the endpoint to point to http://blahblahblah:4566 I still get the same 403. So I think the problem is that the provider isn't using my local endpoint. I can't work out why.
secret_id = aws_secretsmanager_secret.foo.id
secret_string = "bar"
}
resource "aws_secretsmanager_secret" "foo" {
name = "rabbitmq_battery_emulator"
}
provider "aws" {
region = "eu-west-2"
endpoints {
secretsmanager = "http://local-aws:4566"
}
} | Terraform AWS not accessing localstack |
AWS Route 53 can be used with any hosting provider; it is, after all, a DNS service. Unless you have explicit reasons not to, you can keep all of the configuration in Route 53. For this configuration you would simply perform the following: create a record pointing to the root domain (an A record) that resolves to a public IP provided by the external hosting provider; create an alias record for www.example.com that resolves to your root domain record; and create your wildcard record *.example.com to point to the default record value. If you do want to split your DNS provider as well, then you would, as you've identified, configure the registrar of your hostname to use the name servers of the target servers, then create the NS records for domains in this configuration to resolve to the name servers from other services (such as Route 53). | I have a domain and a hosted zone in AWS Route53: mydomain.com
I have configured several subdomains with a CNAME to point to resources hosted on AWS. The main website mydomain.com I want to host on GoDaddy (or another external hosting service). How should I configure that? I am thinking: in Route53 I update the NS record for mydomain.com to point to the GoDaddy nameservers; in Route53 I add an NS record for *.mydomain.com to point to the AWS nameservers; in Route53 I add an NS record for www.mydomain.com to point to the GoDaddy nameservers. What should I do? | Domain in Route53 point root to external host, keep subdomains in AWS
CloudFormer is no longer available in the console's sample templates: "The beta for the CloudFormer template creation tool has ended." "We are not planning to enhance CloudFormer in its current form. We recommend using https://former2.com/, an open-source tool contributed by Ian McKay." https://github.com/iann0036/former2#security | I would like to create a CloudFormation YAML template from the existing resources and am facing some issues. I read in blogs that we can have the CloudFormer template available like below; however, I don't see this when creating the stack from the AWS dashboard. Apparently, it is supposed to be in the tools section, but they deleted the template and created a new way for the template. Am I missing anything here? | AWS select the CloudFormer template
There is built-in integration between Amazon EC2 Auto Scaling and Elastic Load Balancers. The Auto Scaling system knows how to modify Target Groups to add/remove instances. However, it has no knowledge of your on-premises load balancer. You could use Amazon EC2 Auto Scaling lifecycle hooks to trigger additional code (that you write) to add/remove the instances to your own load balancer, but that would require an AWS Lambda function to communicate with the on-premises load balancer to update the configuration (a minimal sketch of such a function follows this entry). | I have an on-premises load balancer that I wish to use to distribute traffic to EC2 instances in an Auto Scaling group (ASG). When AWS's Elastic Load Balancer (ELB) registers an ASG as a target, during a scale-out, new instances are automatically registered to the ELB to route traffic to. Can the same functionality be achieved in any way with an on-prem load balancer? | Can an on-premises load balancer be used to connect to AWS EC2 Autoscaling group?
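A minimal sketch of the Lambda function mentioned in the answer above, assuming the lifecycle hook events are delivered via EventBridge; register_with_onprem_lb is a hypothetical placeholder for whatever API your on-premises load balancer exposes.

import boto3

autoscaling = boto3.client("autoscaling")

def register_with_onprem_lb(instance_id):
    # Placeholder: call your on-premises load balancer's own API here.
    pass

def handler(event, context):
    detail = event["detail"]                      # lifecycle hook event payload
    register_with_onprem_lb(detail["EC2InstanceId"])

    # Let the Auto Scaling group continue once the instance is registered.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )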
Unfortunately you cannot move your resources from one region to another; you will be limited to recreating resources and migrating any data into the region that you're looking to use. Best practice for managing and maintaining your infrastructure is to use infrastructure as code; if you already have this in place it should be as simple as deploying it into the new region you intend to use. Otherwise this will need to be done from scratch; whilst you don't need infrastructure as code, I would definitely recommend this approach in the event that you need to do any similar tasks in the future (such as another migration or a disaster recovery scenario). Be aware that certain services are also global (such as IAM and CloudFront) so these will not need any such migration. For migrating data there are a few services that assist in delivering this: AWS Data Pipeline and AWS Database Migration Service (a small boto3 sketch of copying an AMI into the new region follows this entry). | I have been using AWS for nearly 2 years with most of my resources (EC2, ELB, RDS, ElastiCache) deployed into Ireland (eu-west-1), mostly due to latency. In January, AWS opened a new region -- Cape Town (af-south-1) -- which has less latency.
I wonder how I can easily move all of my resources across to the new region? Thanks | How to migrate AWS resources across regions?
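As a small illustration of the "recreate in the new region" approach from the answer above, a boto3 sketch that copies an AMI of an existing instance into af-south-1 (the image ID is a placeholder); you would then launch replacement instances from the copied image.

import boto3

# copy_image is called in the *destination* region.
ec2_af = boto3.client("ec2", region_name="af-south-1")

copy = ec2_af.copy_image(
    Name="web-server-af-south-1",
    SourceImageId="ami-0123456789abcdef0",  # placeholder AMI in eu-west-1
    SourceRegion="eu-west-1",
)
print(copy["ImageId"])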
I stumbled onto this from searching around for my own problems. I know this is old, but I thought I should post some info for others that come across this. If you follow through to the ELB access log definition of target_processing_time:
The total time elapsed (in seconds, with millisecond precision)
from the time the load balancer sent the request to a target
until the target started to send the response headers.
This is a pretty dumb statistic for Amazon to offer, and isn't really that useful. If your web app is unbuffered and spits out response headers right away, this value will be consistently low and have little bearing on your real performance. A better method is to track your own delivery times within your app and publish custom metrics for them (a minimal boto3 sketch follows this entry). | I have a Tomcat application deployed on EC2, behind an application load balancer. The load balancer has an alarm set for TargetResponseTime, which looks like this. The metric says the response time cannot be greater than 1 ms - but when I look at the chart, there are multiple instances of the target response times going over 2000 ms continuously for 5 minutes. And yet, the alarm only goes off sometimes and not all the time. Why is the alarm not going off almost all the time? Can someone explain target response time better than the explanation given in the AWS documentation? Is 1 ms a realistic target response time to measure if I sometimes see the values going to 5000 ms? I see the message in pic 1 "Value will be converted to match cloudwatch metric units". And yet in CloudWatch, the alarm generated is of unhealthy host with no mapping to the 1 ms value | How does the TargetResponseTime AWS Cloudwatch metric work?
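A minimal boto3 sketch of publishing your own delivery-time metric, as suggested in the answer above; the namespace and metric name are arbitrary placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_delivery_time(elapsed_ms):
    # Record how long the app actually took to deliver the full response.
    cloudwatch.put_metric_data(
        Namespace="MyApp",                 # placeholder namespace
        MetricData=[{
            "MetricName": "DeliveryTime",
            "Value": elapsed_ms,
            "Unit": "Milliseconds",
        }],
    )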
You can try this way:
import AWS from "aws-sdk"
const docClient = new AWS.DynamoDB.DocumentClient({ region: config.region, accessKeyId: config.accessKeyId, secretAccessKey: config.secretAccessKey });
If I'm not mistaken, you will get the "ResourceNotFoundException" error when your DynamoDB table does not exist. Check your AWS credentials, please. | I'm fairly new to using AWS and aws-cli. I'm trying to update a DynamoDB table by writing a Node script using the AWS SDK. I have created a shared credential file that has all the credentials from two of my AWS accounts, and now I'm having trouble configuring the relevant credentials for the script that I'm trying to run to update the DB. Therefore I used the aws.config.update() method to update the configurations, but it still doesn't do the job, hence I get the "ResourceNotFoundException" when I run the code. Here's my code.
const aws = require("aws-sdk");
aws.config.update({
accessKeyId: "xxxxxxxx",
accessSecretKey: "xxxx",
region: "ap-south-1",
});
const docClient = new aws.DynamoDB.DocumentClient();
async function update() {
try {
var params = {
TableName: "employee",
Key: {
ID: "1",
},
UpdateExpression: "set EmployeeName =:fullName",
ExpressionAttributeValues: {
":fullName": "test 3",
},
};
var result = docClient.update(params, function (err, data) {
if (err) console.log(err);
else console.log(data);
});
console.log(result);
} catch (error) {
console.error(error);
}
}
update();
Please help me find a solution to set up the relevant configurations and the reason behind aws.config.update() not working for me. | aws.config.update() not updating the AWS credentials
(In hindsight the solution was obvious, but wasn't to me on my first day in the AWS SageMaker world): a memory error means you need to increase the size of your notebook instance. In this case, sizing up the on-demand notebook instance from ml.tx.xlarge (2 vCPU, 8 GiB) to ml.tx.2xlarge (4 vCPU, 16 GiB) worked. See Amazon SageMaker Pricing for notebook instance CPU/memory specifications. In an earlier attempt to fix the problem, we had increased the volume size, but that is for storage of data and didn't help with memory (see "Customize your notebook volume size, up to 16 TB, with Amazon SageMaker" for more details on storage volume); so we were able to decrease the volume size from 50 GB EBS to 10 GB EBS. Memory can be monitored by opening up a terminal using the Jupyter interface and typing the Linux command free. To load the pickled dataframe, I simply used the solution from @kindjacket in this post: "How to load a pickle file from S3 to use in AWS Lambda?", which was as follows:
import pickle
import boto3
s3 = boto3.resource('s3')
my_pickle = pickle.loads(s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read()) | I have attempted the code from the many posts on how to load a pickle file (1.9 GB) from an S3 bucket, but none seem to work for our notebook instance on AWS SageMaker. Notebook size is 50 GB. Some of the methods attempted:
Method 1
import io
import boto3
client = boto3.client('s3')
bytes_buffer = io.BytesIO()
client.download_fileobj(Bucket=my_bucket, Key=my_key_path, Fileobj=bytes_buffer)
bytes_io.seek(0)
byte_value = pickle.load(bytes_io)
This gives:
Method 2: This actually gets me something back with no error:
client = boto3.client('s3')
bytes_buffer = io.BytesIO()
client.download_fileobj(Bucket=my_bucket, Key=my_key_path, Fileobj=bytes_buffer)
byte_value = bytes_buffer.getvalue()
import sys
sys.getsizeof(byte_value)/(1024**3)
This returns 1.93, but how do I convert the byte_value into the pickled object? I tried this:
pickled_data = pickle.loads(byte_value)
But the kernel "crashed" - it went idle and I lost all variables. | Load Python Pickle File from S3 Bucket to Sagemaker Notebook
Just because the data is replicated multiple times within an AZ, it doesn't mean the durability is 100%. If you consider the gp2 volume type, its durability is 99.8% - 99.9%, which might not be the worst, but compared to S3, which has 99.999999999% durability, it is lacking a bit. RAID 1 is indeed safer from the perspective of data durability. AWS is not telling you that this is the correct option; instead, it is one of the options if you need any. You can as well create EBS snapshots and/or use another data replication strategy (or you might be OK with the above-mentioned numbers, in which case you don't need to do anything else beyond provisioning the EBS volume in this regard). | I do not understand why AWS still encourages RAID configuration on EBS. I thought the volumes are replicated multiple times within a single AZ. Optionally they can also be replicated to another AZ. On GCP you don't need to do this RAID configuration; the documentation explicitly discourages this. Quote: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html "Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume. A RAID 1 array offers a "mirror" of your data for extra redundancy. Before you perform this procedure, you need to decide how large your RAID array should be and how many IOPS you want to provision." Note, I am talking about safety/fault-tolerance in this question. I am not talking about performance, but at least the GCP documentation discourages striping disks together even for performance reasons: Quote: https://cloud.google.com/compute/docs/disks "Compute Engine optimizes performance and scaling on persistent disks automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS" | Why does RAID configuration on AWS EBS provide extra safety?
I haven't worked with AWS, but in my opinion performance testing of serverless applications should be done pretty much the same way as the traditional way with your own physical servers. Despite the name serverless, physical servers are still used (though they are managed by AWS). So I would approach this task with the following steps: send backend metrics (response time, request count and so on) to some metrics system (Graphite, Prometheus, etc.); build a dashboard in this metrics system (ideally you should see request count and response time per instance and the count of instances); take a load testing tool (JMeter, Gatling or whatever) and start your load test scenario. During and after the test you will see how many requests your app is processing, its response times and how the instance count changes depending on concurrent requests. In such a case you are agnostic of AWS management tools (but AWS probably has some management dashboard and afterwards it would be good to compare their results). | In traditional performance automation testing:
There is an application server where all the request hits are received. So in this case we have the server configuration (CPU, RAM etc.) with us to perform load testing (of, let's say, 5k concurrent users) using JMeter or any load test tool and check server performance. In the case of AWS serverless, there is no server - so to speak - all servers are managed by AWS. So code only resides in Lambdas and AWS decides at run time how to perform load balancing in case there are high volumes on the servers. So now we have a web app hosted on AWS using the Serverless Framework and we want to measure its performance for 5K concurrent users. With no server backend information, the only option here is to rely on frontend or browser based response times - should this suffice?
Is there a better way to check performance of serverless applications? | Performance testing for serverless applications in AWS |
The issue isn't that SSM is waiting for your background command to finish; it's that your command isn't actually put into the background because your nohup command waits for input. To fix, replace "nohup whatever_your_command_is &" with "nohup whatever_your_command_is < /dev/null 2> /dev/null > /dev/null &". | I am trying to sync an S3 bucket which takes close to 3 hours to completely sync.
sync-bucket.sh:
nohup aws s3 sync "$source_bucket/$folder/" "s3://$destination_bucket/" \
--profile abc --acl bucket-owner-full-control --sse "aws:kms" \
--sse-kms-key-id "$KEY_ARN" > /var/log/$folder.log 2>&1 &
echo "Successfully triggered the sync job"
I was hoping to trigger the sync job using AWS SSM send-command, something like below:
trigger.sh:
COMMAND=$(aws ssm send-command --document-name "AWS-RunShellScript" \
--targets "Key=instanceids,Values=${RECOVERY}" \
--parameters '{"executionTimeout":["10800"],"commands":["/opt/scripts/sync-bucket.sh"]}' \
--output-s3-bucket-name "some-bucket" \
--timeout-seconds 10800 \
| jq -r '.Command.CommandId')
My observation is that SSM waits for this background job to finish before marking the execution as 'Success'. Is there a way we could perhaps just trigger the background job and have SSM finish the execution without having to wait for the background job to finish? Or is there a better way of doing this? I am basically trying to automate the process here and am happy to let the job run in the background on demand without having to log in to the instance and run the command manually. Thanks for your time. | Can we run command as background process through AWS SSM?
I do believe the answer is 3: you should be able to add a third-party provider for the Cognito user pool and then use the Cognito authorizer for the gateway - https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html. However, if you don't need Cognito user pools, a simpler option seems to be a Lambda authorizer, as you can use an existing library for JWT verification and don't need to bother with Cognito (a minimal sketch of such an authorizer follows this entry). BTW, in case you can use an AWS API Gateway HTTP API - it supports JWT authorization out of the box - https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html | I want to protect my AWS API Gateway with Okta. The APIs should respond only if the request contains an Okta access token in the header (Authorization). We cannot use IAM authorization for this. So I planned to use one of the following authorizer types: Lambda, or Cognito (I checked this link and I understood we can use Okta as an identity provider in a Cognito user pool). Please confirm which of the following is correct: we can use only Lambda and not Cognito in this case; we can use only Cognito and not Lambda in this case; we can use either Lambda or Cognito in this case. | Which AWS API Gateway Authorizer Type should I use to protect my APIs with Okta? Lambda/Cognito?
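A minimal sketch of the Lambda authorizer option from the answer above, using the PyJWT library. The verification key and audience are assumed to come from environment variables; in practice you would fetch and cache Okta's JWKS keys.

import os
import jwt  # PyJWT, packaged with the function

def handler(event, context):
    token = event["authorizationToken"].replace("Bearer ", "")
    # Raises an exception (and thus denies access) if the token is invalid.
    claims = jwt.decode(token,
                        os.environ["OKTA_PUBLIC_KEY"],   # assumed env var
                        algorithms=["RS256"],
                        audience=os.environ["OKTA_AUDIENCE"])
    # Return a minimal allow policy for the invoked method.
    return {
        "principalId": claims["sub"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }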
CloudFront is the easiest and cheapest way to add SSL termination, because AWS will handle it all for you through its integration with Certificate Manager. If you add an ELB, you have to run it 24/7 and it will double the cost of a single-instance server. If you want to support SSL termination on the server itself, you're going to have to do that yourself (using your web container, such as Apache, Nginx, Tomcat or whatever you're running); it's not easy to set up. Even if you don't need caching, CloudFront is going to be worth it just for handling your certificate (which is as simple as selecting the certificate from a drop-down). | My instance is a single instance, no load balancer. I cannot seem to add a load balancer to my existing app instance. Other recommendations regarding Elastic Load Balancer are obsolete - there seems to be no such service in AWS. I do not need caching or edge delivery - my application is entirely transactional APIs, so I probably don't need CloudFront. I have a domain name and a name server (external to AWS). I have a certificate (generated in Certificate Manager). How do I enable HTTPS for my Elastic Beanstalk Java application? | How do I enable HTTPS for my Elastic Beanstalk Java application?
There are two ways to add a region to a global table. In the old way - which was the usual way until November 2019 - you would need to create the same table yourself, and indeed you would also need to create the same indexes yourself in the other region too. You would then use UpdateGlobalTable. Quoting this operation's documentation: "If global secondary indexes are specified, then the following conditions must also be met: The global secondary indexes must have the same name. The global secondary indexes must have the same hash key and sort key (if present). The global secondary indexes must have the same provisioned and maximum write capacity units." The new (November 2019) way to replicate to another region is to use UpdateTable with the ReplicaUpdates parameter. This way does not require you to create the table manually in the other region. Amazon did not seem to document how that table is created, and whether the same indexes are also created on it, but given the above information, I don't see any reason why it wouldn't create the same indexes, just as was always the requirement. Of course, the best thing for you to do is to just try it, and report back your findings :-) | I have a GSI defined (in the usw2 region) on a global table that is configured to replicate automatically to use2. I have a GSI defined in usw2 for my table - will the index be replicated automatically? Or do I need to create that manually in the other region too? | Are GSI on Global table of dynamodb replicated automatically?
In your aws_s3_bucket_policy, instead of
bucket = aws_s3_bucket.this.id[count.index]
it should be
bucket = aws_s3_bucket.this[count.index].id
assuming that everything else is correct, e.g. data.aws_iam_policy_document.this.json is valid. | I am trying to create multiple S3 buckets, each with different bucket settings. I am looking for the syntax on how to refer to the bucket IDs of the dynamically created buckets in other bucket resource blocks.
New to terraform. looking for sample code or terrraform document for this syntaxBel0w is sample code for creating bucket from list namesresource "aws_s3_bucket" "this" {
count=length(var.bucket_names)
bucket = var.bucket_names[count.index]
acl="private"
versioning {
enabled = var.bucket_versioning
}
}In this code i want to refer the dynamically created bucket id's and assign their bucket policy settings. Need the syntax . not sure if this correctresource "aws_s3_bucket_policy" "this" {
count=length(var.bucket_names)
bucket = aws_s3_bucket.this.id[count.index]
policy = data.aws_iam_policy_document.this.json
} | Terraform multiple s3 bucket creation |
My first attempt would be to provision an instance in us-east-1 with an io-type EBS volume of the required size. From what I see there is about 14 GB of data from 2018 and 15 GB from 2019, thus an instance with 40-50 GB should be enough. Or, as pointed out in the comments, you can have two instances, one for the 2018 files and the second for the 2019 files; this way you can download the two sets in parallel. Then you attach an IAM role to the instance which allows S3 access. With this, you execute your aws s3 sync command on the instance; the traffic between S3 and your instance should be much faster than to your local workstation. Once you have all the files, you zip them and then download the zip file. Zip should help a lot, as the IRS files are text-based XMLs. Alternatively, maybe you could just process the files on the instance itself, without the need to download them to your local workstation. General recommendations on speeding up transfer between S3 and instances are listed in the AWS blog: "How can I improve the transfer speeds for copying data between my S3 bucket and EC2 instance?" (a boto3 sketch of a parallel download follows this entry). | I've been trying to download these files all summer from the IRS AWS bucket, but it is so excruciatingly slow. Despite having a decent internet connection, the files start downloading at about 60 kbps and get progressively slower over time. That being said, there are literally millions of files, but each file is very small, approx 10-50 KB. The code I use to download the bucket is:
aws s3 sync s3://irs-form-990/ ./ --exclude "*" --include "2018*" --include "2019*"
Is there a better way to do this? Here is also a link to the bucket itself. | How to speed up download of millions of files from AWS S3
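A boto3 sketch of the "run the copy from an instance" idea above, raising concurrency for the many small files; it assumes the object keys start with the year, as the --include patterns in the question suggest.

import boto3
from boto3.s3.transfer import TransferConfig
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3")
config = TransferConfig(max_concurrency=20)

# Collect the keys for one year via the paginator.
paginator = s3.get_paginator("list_objects_v2")
keys = [obj["Key"]
        for page in paginator.paginate(Bucket="irs-form-990", Prefix="2018")
        for obj in page.get("Contents", [])]

def fetch(key):
    s3.download_file("irs-form-990", key, key.replace("/", "_"), Config=config)

# Parallelise across objects as well as within each transfer.
with ThreadPoolExecutor(max_workers=32) as pool:
    pool.map(fetch, keys)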
Start by following the guides for configuring ActiveStorage and S3. Then set up the attachment on your model:
class Kitteh < ApplicationRecord
has_one_attached :photo
end
With ActiveStorage you can directly attach files to records by passing an IO object:
photos = Rails.root.join('path/to/the/images', '*.{jpg,gif,png}')
100.times do |n|
path = photos.sample
File.open(path) do |file|
Kitteh.new(name: "Kitteh #{n}") do |k|
k.photo.attach(
io: file,
filename: path.basename
)
end.save!
end
end
This example creates 100 records with a random image selected from a directory on your hard drive and will upload it to the storage you have configured. | For a school project, I'm working on a Rails app which "sells" pics of kittens. I picked 10 pictures of cats online; they are currently on my computer. I'm using PostgreSQL for the DB. I have a class/model Item which represents the kitten photos. What I'm looking for is a way, when generating fake data through seeds.rb loops, to attach a kitten photo to each Item class/model, which will then be stored in an AWS S3 bucket that is already created (it's called catz-temple). I have my two access and secret S3 keys in a .env file, and I have already modified my storage.yml file like so:
amazon:
service: S3
access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
region: eu-central-1
bucket: catz-temple
I found out there was a gem called aws-sdk-ruby, but I just can't work out what approach I should take on this topic. For now, I just put my bucket in public access and take each bucket photo's URL, but there's no API and no secure approach to this... Thank you all | Rails Active Storage & AWS S3 : How to attach image to model through seeds.rb and then store it in S3 private bucket?
Assuming your CodeBuild (CB) has permissions to sts:AssumeRole, in your buildspec.yml you have to explicitly assume the role in Acc B. There are two ways in which you can do this: "manually" call assume-role in your buildspec.yml - the call will return a set of temporary credentials which can then be used to execute AWS CLI commands in Acc B from your CB (a boto3 sketch of this flow follows this entry) - or set up AWS CLI credentials files as shown here or here in your CB container for assuming the roles. In both cases the CB service role needs sts:AssumeRole permissions. | I have my CodeBuild build sitting on Account A and S3 buckets on Account B. I tried to set up a trusted IAM STS role on Account B and a policy on Account A to include the Account B IAM role, and attached this policy to my CodeBuild service role. But still, my CodeBuild shows buckets on S3. Am I doing or configuring something wrong here?
Role with trust relation on Account B
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::Account:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
policy on Account A
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::Account B:role/testcli"
}
]
}
CodeBuild BuildSpec.yml
version: 0.2
env:
variables:
TF_VERSION: "0.12.28"
phases:
install:
commands:
# install required binary
- echo test
pre_build:
commands:
- echo print s3 buckets
- aws s3 ls
post_build:
commands:
- echo test1 | AWS codebuild to list out s3 buckets of other account |
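The same assume-role flow described in the answer above, shown with boto3 instead of the CLI; the role ARN is a placeholder for the testcli role in Account B.

import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/testcli",   # placeholder account id
    RoleSessionName="codebuild-cross-account",
)["Credentials"]

# Use the temporary credentials to talk to Account B's S3.
s3_b = boto3.client("s3",
                    aws_access_key_id=creds["AccessKeyId"],
                    aws_secret_access_key=creds["SecretAccessKey"],
                    aws_session_token=creds["SessionToken"])
print([b["Name"] for b in s3_b.list_buckets()["Buckets"]])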
The KUBECONFIG env var supports multiple files, separated by a colon (on Linux/macOS):
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
This should be enough to see all of them in kubectx. You can even merge all configs into one file:
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
kubectl config view --flatten > ~/.kube/config | I now need to use multiple clusters. Currently what I did is simply put all the kubeconfig files
under the .kube folder and each time update the config file with the cluster which I need, e.g. mv config cluserone, vi config, insert the new kubeconfig into the config file and start working with the new cluster.
Let's say inside /Users/i033346/.kube I've got all the kubeconfig files one by one. Is there a way to use them as contexts without creating a new file which contains all of them? I also tried to use kubectx; however, when I use export KUBECONFIG=/Users/i033346/.kube/trial and export KUBECONFIG=/Users/i033346/.kube/prod and use kubectx, I always get the last one and don't get a list of the defined contexts. Any idea? | Using kubeconfig contexts simply
I just talked to an AWS Professional and was told this is not a possible architecture. | I have 10 different APIs across two AWS regions (us-east-1, ca-central-1). Using base path mapping, us-east-1.example.com is serving 5 APIs in US and ca-central-1.example.com is serving the other 5 APIs (API Gateway). Although the backend is running the same code, it was part of the requirement from clients. Our clients are public universities and they want to have their own servers in their own country.
For example, the current setup is using Custom Name & base path from API Gateway.
American universities:
us-east-1.example.com/harvard
us-east-1.example.com/stanford
us-east-1.example.com/mit
Canadian universities:
ca-central-1.example.com/ubc
ca-central-1.example.com/bcit
ca-central-1.example.com/waterloo
Is there a way to combine them into a single custom domain using Route 53, like the following?
api.example.com/harvard
api.example.com/ubc | Multiple API Gateways from different regions with the same Custom Name and different base path |
Taken from the AWS docs: "10.0.0.2: Reserved by AWS. The IP address of the DNS server is the base of the VPC network range plus two. For VPCs with multiple CIDR blocks, the IP address of the DNS server is located in the primary CIDR. We also reserve the base of each subnet range plus two for all CIDR blocks in the VPC. For more information, see Amazon DNS server." https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html | I'm using Route-53 as a DNS management service.
I have a problem that I'm not really sure how to solve, and I've come here to seek ideas. I have a partner who wants an IP address of the DNS server, so that they can integrate their on-prem DNS server with what I'm using (Route-53). This is not possible, as Route-53 doesn't give an IP address for accessing the DNS servers, because it's a managed service. How can I get an IP address for the Route-53 DNS servers that my integrating partner can use to integrate the DNS server from their end with mine (Route-53)?
Athena always stores query results on S3. QuickSight probably just uses a different bucket. There should be queries from QuickSight in the query history (possibly in a work group that is not the primary), if you look at the query execution of one of these you should be able to figure out where the output is stored (e.g.aws athena get-query-execution --region AWS_REGION --query-execution-id IDand look forOutputLocationin the result). | I've checked AWS FAQ, and other resources however cannot find an answer to it. I can contact AWS for technical support however I do not have permission.I've checked S3 that stores query results from Athena however it does not seem to have query results from queries using Athena via QuickSight.Is there somewhere else Athena via QuickSight stores there query results?thanks! | Are result of queries using AWS Athena through AWS QuickSight stored in S3? |
I think the reason is due to how you store it. I verified the use of aws_secretsmanager_secret_version using my own sandbox account and it works. However, I stored it as plain text, not JSON. Then I successfully used it as follows for an instance:
resource "aws_instance" "public" {
ami = "ami-02354e95b39ca8dec"
instance_type = "t2.micro"
key_name = "key-pair-name"
security_groups = [aws_security_group.ec2_sg.name]
provisioner "remote-exec" {
connection {
type = "ssh"
user = "ec2-user"
private_key = data.aws_secretsmanager_secret_version.example.secret_string
host = "${self.public_ip}"
}
inline = [
"ls -la"
]
}
depends_on = [aws_key_pair.key]
} | I need to store a private key in AWS, because when I create an EC2 instance I need to use this private key to authenticate in the provisioner "remote-exec", and I don't want to save it in the repo. Is it a good idea to save a private key in Secrets Manager and then consume it? And if so, how do I save the private key in Secrets Manager and then retrieve it in Terraform with aws_secretsmanager_secret_version? In my case, if I validate from a file() it works, but if I validate from a string, it fails.
connection {
host = self.private_ip
type = "ssh"
user = "ec2-user"
#private_key = file("${path.module}/key") <-- Is working
private_key = jsondecode(data.aws_secretsmanager_secret_version.secret_terraform.secret_string)["ec2_key"] <-- not working. Error: Failed to read ssh private key: no key found
} | How to get private key from secret manager? |
You can use rasterio to access sub-windows of the image (I'm assuming that the AWS credentials are set up for use with boto3 and you have the necessary permissions):
import boto3
from matplotlib.pyplot import imshow
import rasterio as rio
from rasterio.session import AWSSession
from rasterio.windows import Window
# create AWS session object
aws_session = AWSSession(boto3.Session(), requester_pays=True)
with rio.Env(aws_session):
with rio.open("s3://sentinel-s2-l1c/tiles/7/W/FR/2018/3/31/0/B8A.jp2") as src:
profile = src.profile
win = Window(0, 0, 1024, 1024)
arr = src.read(1, window=win)
imshow(arr)
print(arr.shape)
(1024, 1024)
Explanation: if the AWS credentials are properly configured for boto3, you can create an AWSSession object based on a boto3.Session(). This will set the necessary credentials for the S3 access. Add the flag requester_pays=True so you can read from the requester-pays bucket. The AWSSession object can be passed into a rasterio.Env context, so rasterio (and more importantly the underlying gdal functions) has access to the credentials. Using rasterio.windows.Window I'm reading the arbitrary sub-window (0, 0, 1024, 1024) into memory, but you could also define a window using coordinates, as explained in the documentation. From there you can process the array or save it to disk. | I am using the following code for downloading a full-size Sentinel file from S3:
import boto3
s3_client = boto3.Session().client('s3')
response = s3_client.get_object(Bucket='sentinel-s2-l1c',
Key='tiles/7/W/FR/2018/3/31/0/B8A.jp2',
RequestPayer='requester')
response_content = response['Body'].read()
with open('./B8A.jp2', 'wb') as file:
file.write(response_content)
But I don't want to download the full-size image. Is there any way to download the image based on latMax, longMin, latMin and longMax? I was using the command below, but it stopped working since the data was made requester-pays on S3:
gdal_translate --config CPL_TMPDIR temp -projwin_srs "EPSG:4326" -projwin 23.55 80.32 23.22 80.44 /vsicurl/http://sentinel-s2-l1c.s3-website.eu-central-1.amazonaws.com/tiles/43/R/EQ/2020/7/26/0/B02.jp2 /TestScript/B02.jp2
Is there any way to achieve this using Python boto? | Download Sentinel file from S3 using Python boto3
I recently did this but in a different way. I used the NuGet package AwsSignatureVersion4 and an IAM user with appropriate permissions for the Elasticsearch service. Basically, use the ImmutableCredentials and just do what you need to do via REST calls and the C# HttpClient. I find it easier than using the .NET Elasticsearch library, and I can then copy/paste back and forth from Kibana.
var credentials = new ImmutableCredentials("access_key", "secret_key", null);
HttpContent httpContent = new StringContent(JsonConvert.SerializeObject(someObjOrQuery), Encoding.UTF8);
httpContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
var resp = httpClient.PostAsync(es_url,
httpContent,
regionName: "us-east-1",
serviceName: "es",
credentials: credentials).GetAwaiter().GetResult();
if(resp.IsSuccessStatusCode)
{
//Good to go
}
else
{
//this gets what ES sent back
var content = response.Content.ReadAsStringAsync();
dynamic respJson = JObject.Parse(content.Result());
//Now you can access stuff by dot and it's dynamic respJson.something
} | I am new to Amazon Web Services.
I configured a domain to use Elasticsearch in the AWS (Amazon Web Services) console and configured usage of HTTP requests.
I went through the documentation on creating an Elasticsearch client from https://www.elastic.co/guide/en/elasticsearch/client/net-api/1.x/security.html
var response = client.RootNodeInfo(c => c
.RequestConfiguration(rc => rc
.BasicAuthentication("UserName", "Password")
));
This works fine for me (the response is 200).
But when I try to configure authentication credentials like this and pass the config to the client constructor, I need to have a "cloudId", which I didn't find in AWS. Where should I search for it, or what do I have to do? My client code:
BasicAuthenticationCredentials credentials = new BasicAuthenticationCredentials("UserName", "Password");
var config = new ConnectionSettings("cloudId???", credentials);
var client = new ElasticClient(config);
var response = client.Ping(); | How to configure client? AWS Elasticsearch request C# |
If you want the inverse of the exists call, you need to check both whether it has data and whether it is null. I haven't found a way to do this in one call, like the doesn't-exist call from the docs:
{ $.SomeOtherObject NOT EXISTS }
However, if you do both of the following, it evaluates to the inverse of the above:
{ $.SomeOtherObject = * EXISTS || $.SomeOtherObject IS NULL }
That should get you all instances where SomeOtherObject exists in your context, be it null or set to a value. | This is a terraform setup:
resource "aws_cloudwatch_log_metric_filter" "account-query-time" {
name = "${local.name_prefix}-AccountQueryTime"
pattern = "{ $.message = \"AccountQuery_70fed564\" }"
log_group_name = yaddayadda
metric_transformation {
name = "AccountQueryTime"
namespace = "Accounts"
value = "$.time"
}
}
Logs have JSON objects like {"message": "AccountQuery_70fed564", "time": 320, "units": "microseconds"}. The relevant portion is pattern = "{ $.message = \"AccountQuery_70fed564\" }". Can I make sure the field exists as part of the pattern? Will the filter only pick fields that are valid for its metric transform? I'd like to make sure that this metric will not add in messages that are missing the time field. The docs have NOT EXISTS but don't have EXISTS by itself. Tried: && $.time, && $.time EXISTS, && !!$.time. Is it not necessary to check for the existence of a field? It seems safer to confirm that it is there before you go and add it to your metric... | How do you check for the existence of a field in an Cloudwatch JSON filter?
After you set up mongodb you usually would enable it, so that it starts after each reboot:
sudo systemctl enable mongod | I am using AWS and installed Ubuntu Server on AWS EC2.
I installed mongodb on it.
When installing mongodb, it is working well.
But if I reboot the server and try to connect to mongod in the terminal, I get the following error:
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:362:17
@(connect):2:6
exception: connect failed
exiting with code 1So whenever reboot server, i have to execute the following command and then mongo is working again.
sudo systemctl stop mongod
sudo rm /var/lib/mongodb/mongod.lock
sudo mongod --repair --dbpath /var/lib/mongodb
sudo mongod --fork --logpath /var/lib/mongodb/mongodb.log --dbpath /var/lib/mongodb
sudo systemctl start mongodI think whenever reboot server, executing above command does not make sense.
If anyone have experienced, please help me.
Thanks. | Mongodb connection error whenever rebooting server |
Yes, this is a permissions issue. Your app runs under the webapp user, while /var/log is owned by root, thus you can't write to it. The proper way of adding your log files so that they are recognized by EB is through config files. Specifically, assuming Amazon Linux 2, you can create .ebextensions/mylogfiles.config with the content of:
files:
"/opt/elasticbeanstalk/config/private/logtasks/bundle/myapplogs.conf":
mode: "000644"
owner: root
group: root
content: |
/var/app/current/log/*.log
Obviously, /var/app/current/log/*.log would point to the location where your app stores its log files; /var/app/current is the home folder of your app. | I have been looking for an easy way to view debug statements on Beanstalk as I develop. I thought I could simply log to a file on Beanstalk. In my application.properties file I set logging.file.path=/var/log and that did not produce any results, even on my local machine. I am not sure if it's a permission issue or what, but locally I set the path to my home directory and then I saw the file, spring.log, appear. With Beanstalk I tried /var/log, var/log/tomcat, /home/webapp/, ./, ~, and various other values. Nothing worked. I even tried what was suggested here with no luck: https://medium.com/vividcode/logging-practice-for-elastic-beanstalk-java-apps-308ed7c4d63f If logging to a file is not a good idea, what are the alternatives? I have Googled a lot about this issue and all the answers are not very clear. | Spring Boot on AWS Elastic Beanstalk and logging to a file
The redis-benchmark tool is included in the Redis installation. What you can do is connect to your Redis cluster from EC2; this tutorial shows the steps to connect to it from your EC2 instance. Then you may connect to your instance like this:
redis-cli -h mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com -p 6379
Just like connecting to your cluster, you may use
redis-benchmark -h mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com -p 6379
to benchmark your cluster. It will print something like this:
====== mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com ======
100000 requests completed in 1.83 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.36% <= 1 milliseconds
99.83% <= 2 milliseconds
99.92% <= 3 milliseconds
99.95% <= 4 milliseconds
99.96% <= 5 milliseconds
99.97% <= 6 milliseconds
99.99% <= 7 milliseconds
100.00% <= 7 milliseconds
54585.15 requests per second | I want to test the Redis performance running in AWS ElastiCache.
I have tried the redis-benchmark tool to test it on my local machine.
I need to do the same test on ElastiCache, but I believe there is no terminal access or redis-benchmark utility tool. How can the redis-benchmark test be done for Redis in AWS ElastiCache? Is there any other way to test the performance of Redis in ElastiCache? | How to test performance of redis in AWS Elasticache?
Your application on EB executes on an EB platform. There are many platform versions.
The versions get updated from time to time by AWS. The error you are getting means that there is a new platform version, while you are trying to deploy to the old one instead. It's not specified what platform you use in your question, but the list of all current and previous platform versions is here: Platform history. To check your platform you can do the following:
eb status | grep Platform
Example output:
Platform: arn:aws:elasticbeanstalk:us-east-1::platform/Python 3.7 running on 64bit Amazon Linux 2/3.0.3
Or run eb config. Example output:
ApplicationName: myenv
EnvironmentName: my-new-env
PlatformArn: arn:aws:elasticbeanstalk:us-east-1::platform/Python 3.7 running on 64bit Amazon Linux 2/3.0.3Thus its recommended to change your platform to the current one.To select a default platform:eb platform selectand follow the prompts. | I got this error while I'm trying to deploy my project to AWS Elastic Beanstalk using CLI. This is the error I got:Does anyone know what this error is? Thanks!$ eb deploy
Alert: The platform version that your environment is using isn't recommended. There's a recommended version in the same platform branch.
Uploading: [########------------------------------------------] 17% 2020-07-21 14:36:42,196 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:36:59,354 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:37:20,237 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:37:49,380 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:38:52,916 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:38:53,199 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:39:20,861 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:39:47,738 (ERROR) ebcli.lib.aws : Botocore Error
2020-07-21 14:41:48,657 (ERROR) ebcli.lib.aws : Botocore Error | AWS Elastic Beanstalk deploy using CLI having Botocore Error |
You can use the Amazon S3 Object Lock feature. It can help you prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. Note: there are two modes, Governance Mode and Compliance Mode; in your case you should probably use Governance Mode. The difference is that in Governance Mode, users can't overwrite or delete an object version or alter its lock settings unless they have special permissions. With Governance Mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the object if necessary. In Compliance Mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked in Compliance Mode, its retention mode can't be changed, and its retention period can't be shortened (a minimal boto3 sketch follows this entry). | I have an S3 bucket and have granted a customer access to upload files to it. Is there a way to ensure that object names are unique in the S3 bucket without enforcing this requirement on the customer who uploads the file? I see S3 has versioning capabilities, but objects with the same name may not be versions of the same object but rather totally different objects whose names matched unintentionally. | How to enforce unique file names when uploading to Amazon S3
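A minimal boto3 sketch of uploading an object under Governance-mode Object Lock, as described in the answer above; the bucket must have been created with Object Lock enabled, and the names are placeholders.

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-upload-bucket",              # placeholder, Object Lock enabled
    Key="customer/report.csv",
    Body=b"example content",
    ObjectLockMode="GOVERNANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)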
You can start the Lambda invoke endpoint in the following way (official docs):
sam local start-lambda
Now you can point your AWS resource client to port 3001 and trigger the functions locally. For example, if you are doing this in Python, it can be achieved in the following way with boto3:
lambda_client = boto3.client('lambda',
region_name="<localhost>",
endpoint_url="<http://127.0.0.1:3001>",
use_ssl=False,
verify=False)
# Invoke the function
lambda_client.invoke(FunctionName=<function_name>,
Payload=<lambda_payload>) | I am trying to create an AWS SAM app with multiple AWS serverless functions. The app has one template.yaml file which has resources for two different serverless Lambda functions, for instance "Consumer Lambda" and "Worker Lambda". Consumer gets triggered at a rate of 5 minutes. The consumer uses the boto3 library to trigger the worker Lambda function. This code works when the worker Lambda is deployed on AWS. But I want to test both functions locally, with sam local invoke "Consumer" which invokes "Worker" also locally. Here's a screenshot of the YAML file. I am using PyCharm to run the project. There is an option to run only one function at a time, which then creates only one folder in the build folder. I have to test whether Consumer is able to invoke Worker locally in PyCharm before deployment. I think there is some way to do it but I'm not sure how. I did some extensive searching but it didn't yield anything. Any help is appreciated. Thanks in advance | Invoke AWS SAM local function from another SAM local function
You can return Base64-encoded data from your Lambda function with appropriate headers. Here is the updated Lambda function:
import base64
import boto3
s3 = boto3.client('s3')
def lambda_handler(event, context):
bucket = 'mybucket'
key = 'myimage.gif'
image_bytes = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
# We will now convert this image to Base64 string
image_base64 = base64.b64encode(image_bytes)
return {'statusCode': 200,
# Providing API Gateway the headers for the response
'headers': {'Content-Type': 'image/gif'},
# The image in a Base64 encoded string
'body': image_base64,
'isBase64Encoded': True}
For further details and a step-by-step guide, you can refer to this official blog. | I am a beginner, so I am hoping to get some help here. I want to create a Lambda function (written in Python) that is able to read an image stored in S3 and then return the image as a binary file (e.g. a byte array). The Lambda function is triggered by an API Gateway. Right now, I have set up the API Gateway to trigger the Lambda function and it can return a hello message. I also have a gif image stored in an S3 bucket.
import json
import boto3
s3 = boto.client('s3')
def lambda_handler(event, context):
# TODO implement
bucket = 'mybucket'
key = 'myimage.gif'
s3.get_object(Bucket=bucket, Key=key)['Body'].read()
return {
"statusCode": 200,
"body": json.dumps('Hello from AWS Lambda!!')
}
I really have no idea how to continue. Can anyone advise? Thanks in advance! | How to return byte array from AWS Lambda API gateway?
To add new packages to your Lambda layer, you would need to deploy anew versionof the Layer containing the original packages as well as the new packages you wanted to add.You can get the contents of a layer version by runningget-layer-versionand copying the contents from theContent.Locationvalue.Alternatively you would create a new Lambda layer and package these other packages into that. | I have a layer in my work AWS account which contains many python libraries like pandas, numpy, sqlalchemy, etc.
It has a folder structure of:
-> LayerName:
-> python
-> pandas
-> numpy
......I want to add my custom package also to this layer. How do I do that? | Adding packages to an existing layer in AWS lambda |
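To make the layer answer above concrete, here is a hedged boto3 sketch of fetching an existing layer version and publishing a new version that bundles both the original packages and your own. The layer name, version number, zip path, and runtime list are placeholders.

import boto3

lam = boto3.client('lambda')

# Content.Location is a presigned URL you can download to get the current packages.
current = lam.get_layer_version(LayerName='LayerName', VersionNumber=1)
print(current['Content']['Location'])

# After unpacking that zip, adding your package under python/, and re-zipping it:
with open('layer_with_my_package.zip', 'rb') as f:
    new_version = lam.publish_layer_version(
        LayerName='LayerName',
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.8'],
    )
print(new_version['Version'])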
How can I find the most recent data in my stream?How would you define the most recent data? Last 10 entries? Last entry? Or data that is not yet in the shard? The question may sound silly but the answer makes a difference.The option -LATEST- that you are using is going to set the head of the iterator right after the last entry which means that unless new data arrives after the iterator has been created, there will be nothing to read.If by the most recent data you mean some records that are already in the shard then you can't useLATEST. The easy option is to useTRIM_HORIZON.Or even easier would be to subscribe lambda function to that stream that will automatically be invoked whenever a new record is put into the stream (with the record being passed to that lambda function as payload), which might be preferable if you need to handle events in near-real time. | I have problems implementing dynamodbstreams. We want to get records of changes right at the time the dynamodb table is changed.We've used the java example fromhttps://docs.aws.amazon.com/en_en/amazondynamodb/latest/developerguide/Streams.LowLevel.Walkthrough.htmland translated it for our c++ project. Instead ofShardIteratorType.TRIM_HORIZONwe useShardIteratorType.LATEST). Also I am currently testing with an existing table and do not know how many records to expect.Most of the time when iterating over the shards I retrieve from Aws::DynamoDBStreams::DynamoDBStreamsClient and the Aws::DynamoDBStreams::Model::DescribeStreamRequest I do not see any records. For testing I change entries in the dynamodb table through the aws console. But sometimes (and I do not know why) there are records and it works as expected.I am sure that I misunderstand the concept of streams and especially of shards and records. My thinking is that I need to find a way to find the most recent shard and to find the most recent data in that shard.Isn't this what ShardIteratorType.LATEST would do? How can I find the most recent data in my stream?I appreciate all of your thoughts and am curious about what happens to my first stackoverflow post ever.Best
David | I do not see any records in my dynamodb stream |
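The stream answer above is language-agnostic; as an illustration, here is a hedged boto3 sketch of reading records that are already in the shards by using TRIM_HORIZON instead of LATEST. The stream ARN is a placeholder, and a production reader would also follow child shards and keep paging with NextShardIterator.

import boto3

streams = boto3.client('dynamodbstreams')
stream_arn = 'arn:aws:dynamodb:region:account:table/MyTable/stream/label'  # placeholder

description = streams.describe_stream(StreamArn=stream_arn)['StreamDescription']
for shard in description['Shards']:
    # TRIM_HORIZON starts at the oldest untrimmed record, so existing records are returned;
    # LATEST would only return records written after the iterator was created.
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard['ShardId'],
        ShardIteratorType='TRIM_HORIZON',
    )['ShardIterator']
    for record in streams.get_records(ShardIterator=iterator)['Records']:
        print(record['eventName'], record['dynamodb'].get('Keys'))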
This is of course popular, although you are responsible for installing any software on the instance for hosting, including any necessary web hosting software (such as Nginx). You have said you're experienced with Angular, so feel free to take a look at this tutorial which covers the basics. For the frontend, to put an object you can use the JavaScript SDK with either putObject or upload. For credentials of a frontend-only application you would configure a Cognito user pool to generate temporary credentials and then use them with the SDK. | I am trying to host a static website on an EC2 instance which takes a file as an input from the user and stores it in a bucket in S3.
How can I achieve it? Can anybody help me with the steps? | how to upload files to s3 through a static site hosted on EC2? [closed]
This sounds like a security group issue causing the timeout.Elasticache clusters are always private so if you're using a public ip address, this will need to be updated to be the private ip address range of your instance/subnet/VPC.An Elasticache cluster is a resource in your VPC, therefore network transit needs to be allowed for the cluster to be accessible.More information is available in theAccessing Your Clusterpage.Additional ConfigurationThis is the issue. I also needed to delete the existing Redis instance and create another one without AUTH token enabled. | I am working on a Laravel application. I using Redis and I am using AWS ElasticCache service for that. I am trying to connect to the Redis from my Laravel application. But it is timing out. This is what I have done.I installed the Predis library by running the following command.composer require predis/predisThen I created a Redis instance in the ElastiCache service console enabling AUTH setting my password token.Then I set the variables in the .env files.CACHE_DRIVER=redis
REDIS_CLIENT=predis
REDIS_HOST=master.laravelredistest.8sm3xo.euw1.cache.amazonaws.com
REDIS_PASSWORD=mypassword
REDIS_PORT=6379When I run the code to connect to the Redis, I got the following error.Operation timed out [tcp://master.laravelredistest.8sm3xo.euw1.cache.amazonaws.com:6379]What is missing with my configuration and how can I fix it?I also updated the security group of Redis to allow the EC2 instance's security group in the inbound rules as follows:I am getting this error this time:I edited the SG of Redis to add the following inbound rule too.The security groups are in the same VPC too as you can see in the screenshot: | Laravel: connecting the AWS ElasticCache Redis is timing out |
I know that there is an .elasticextension folder but that does not allow us to add custom commands to be run on deployment.Not sure what do you mean that you can't run commands in.ebextensionsduring deployment. But the extensions are commonly used for running commands or scripts when you are deploying your app. There are special sections for that:commands: You can use the commands key to execute commands on the EC2 instance. The commands runbeforethe application and web server are set up and the application version file is extracted.container_commands: You can use the container_commands key to execute commands that affect your application source code. Container commands runafterthe application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.There are alsoplatform hookson Amazon Linux 2 to further fine tune the deployment of your applications.Finally, if all of them are not suited, you could create dedicated build step inCodePiplelinefor you application. Thededicated stepcould be used to create fully deployment version of your application for EB with minimal amount of work to do at EB instances. | I am working deploying a Laravel application to the AWS ElasticBeanstalk. I configured the CLI and I could deploy the application to an ElasticBeanstalk environment running the command. This is what I have done so far.I created an ElasticBeanstalk application and an environment in it.Then I initialised the application for deployment using "eb init" and deployed it using "eb deploy". But I would like to add some additional commands to be run during the deployment. For example, I might run "gulp build" or other commands. Where and how can I figure it? I know that there is an .elasticextension folder but that does not allow us to add custom commands to be run on deployment. | AWS ElasticBeanstalk configuring or running additional commands on deployment |
You can use the below template:
"DNS": {
"Type": "AWS::Route53::RecordSet",
"Properties": {
"HostedZoneId": "Z058101PST6709",
"Name": {
"Ref": "AlternateDomainNames"
},
"ResourceRecords": [{ "Fn::GetAtt": ["myDistribution", "DomainName"] }],
"TTL": "900",
"Type": "CNAME"
}
}
I should raise that, as you're using Route 53, you should take advantage of using Alias records instead of CNAME records for your CloudFront distribution. This could be done via the below.
{
"Type": "AWS::Route53::RecordSetGroup",
"Properties": {
"HostedZoneId": "Z058101PST6709",
"RecordSets": [{
"Name": {
"Ref": "AlternateDomainNames"
},
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z2FDTNDATAQYW2",
"DNSName": { "Fn::GetAtt": ["myDistribution", "DomainName"] }
}
}]
}
} | "DNS": {
"Type": "AWS::Route53::RecordSet",
"Properties": {
"HostedZoneId" : "Z058101PST6709",
"RecordSets" : [{
"Name" : {
"Ref": "AlternateDomainNames"
},
"Type" : "CNAME",
"TTL" : "900",
"ResourceRecords" : {
"Ref": "myDistribution"
},
"Weight" : "140"
}]
}
}
Hi Team, I am going to create a Route 53 record with CloudFront; please find the CloudFormation code, with which I am getting an error while creating a stack. Basically I want to create a CNAME record by using the CloudFront domain name. Please help me out with this. | Use Route53 template in cloudformation with Cloudfront
Take a look at using django-storages to save your uploads. I use S3 for storing uploads of a django/docker/EB deployment, and include Django settings that look something like this (I keep them in settings/deployment.py):
if 'AWS_ACCESS_KEY_ID' in os.environ:
    # Use Amazon S3 for storage for uploaded media files
    # Keep them private by default
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    # Amazon S3 settings.
    AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
    AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
    AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]
    AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME", None)
    AWS_S3_SIGNATURE_VERSION = 's3v4'
    AWS_AUTO_CREATE_BUCKET = False
    AWS_HEADERS = {"Cache-Control": "public, max-age=86400"}
    AWS_S3_FILE_OVERWRITE = False
    AWS_DEFAULT_ACL = 'private'
    AWS_QUERYSTRING_AUTH = True
    AWS_QUERYSTRING_EXPIRE = 600
    AWS_S3_SECURE_URLS = True
    AWS_REDUCED_REDUNDANCY = False
    AWS_IS_GZIPPED = False
    MEDIA_ROOT = '/'
    MEDIA_URL = 'https://s3.{}.amazonaws.com/{}/'.format(
        AWS_S3_REGION_NAME, AWS_STORAGE_BUCKET_NAME)
USING_AWS = True | I have created a backend django app using AWS Beanstalk, and a frontend reactjs app deployed using cloudfront (plus S3)I have a model in backend that doesclass EnhancedUser(AbstractUser):
# some other attributes
picture = models.ImageField(blank=True)my settings.py hasMEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '<my_elastic_beanstalk_domain>/media/'Since I'm using cloudfront, if i just set theMEDIA_URLto/media/, it would just append/media/to my cloudfront url, so I have to hardcode it to my backend urland then, following the django docs, I added the static part to my urls.pyurlpatterns = [
path('admin/', admin.site.urls),
# some other urls
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)Note that django doc does mention we can't use absolute url forMEDIA_URL, but I have no alternative solution at the momentWhen I upload my image, it doesn't get stored in the right place, but I cannot open it with the url. It returns a 404 saying the img's url is not part of urls listMy question is:How do I set it up so I can display the imageSince the images will be updated through users/admins, these will be stored in the EC2 instance created in beanstalk, so every time I deploy, I think they will be wiped. How do I prevent this? | How to make django uploaded images to display in CloudFront frontend + Beanstalk Backend |
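As a usage note for the django-storages answer above: once DEFAULT_FILE_STORAGE points at S3, ImageField uploads no longer touch the EC2 disk, so they survive Elastic Beanstalk deployments, and .url returns a link the React frontend can render. A small hedged sketch (the import path and object lookup are hypothetical):

from django.core.files.base import ContentFile
from myapp.models import EnhancedUser  # hypothetical import path

user = EnhancedUser.objects.get(pk=1)
# The file is written to the S3 bucket by django-storages, not to the instance.
user.picture.save('avatar.gif', ContentFile(b'...gif bytes...'))
# With AWS_QUERYSTRING_AUTH = True this is a time-limited signed URL.
print(user.picture.url)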
From thedocumentationthe DocumentClient is just an abstract class to make it easier to implement.The document client simplifies working with items in Amazon DynamoDB by abstracting away the notion of attribute values. This abstraction annotates native JavaScript types supplied as input parameters, as well as converts annotated response data to native JavaScript types.You're free to choose whichever method you want, however by using theDocumentClientclass you would be having less control over the processing or manipulation of your data. | I am learning DynamoDB and AWS serverless stack. I see that a lot of tutorials suggest using AWS.DynamoDB.DocumentClient. For example, to create an item:const dynamodb = new AWS.DynamoDB.DocumentClient();and thentry {
await dynamodb
.put({
TableName: process.env.DISHES_TABLE_NAME,
Item: dish,
})
.promise();
} catch (error) {
console.error(error);
throw new createError.InternalServerError(error);
}
But the doc says that put "Creates a new item, or replaces an old item with a new item by
delegating to AWS.DynamoDB.putItem()". I am confused why not use AWS.DynamoDB.putItem in the first place or when to use which one. Thank you! | AWS.DynamoDB vs. AWS.DynamoDB.DocumentClient - which one to use when?
You can use the{proxy+}path to act as a catch all for API Gateway.By creating theproxy resourceanything that matches the prefix will automatically use that resource, if you add it to the root resource then it will process all other requests that do not match a specific URL pattern.You could also use variables in your path resource names, for your user method for example the path would end up being/user/{userId}. This is thepreferablesolution as it is still being specific to the request type.More information is availablehere. | I'm running a Flask-application on AWS-Lambda based on this tutorial:https://andrewgriffithsonline.com/blog/180412-deploy-flask-api-any-serverless-cloud-platform/#create-flask-appMy problem now is that this setup works absolutely fine for the defined home-path ("/"), but whenever I call e.g. "/user/7" the API-Gatway returns 403, since it doesn't know the route, although it is defined in the Flask-Lambda.Is there a possibility to setup API-Gateway in a way to pass the whole request through to the Lambda, regardless of whatever path the request has? | Pass dynamic path via API-Gateway to AWS-Lambda |
Had same issue, I don't want a pipeline launch on pipeline creation (which is the default beahviour).Best solution I fount is :Create an EventBridge rule which catch the pipelineExecution on
pipeline creationStop the pipeline execution from the lambda triggeredRule looks like this :{
"source": ["aws.codepipeline"],
"detail-type": ["CodePipeline Pipeline Execution State Change"],
"detail": {
"state": ["STARTED"],
"execution-trigger": {
"trigger-type": ["CreatePipeline"]
}
}
}It works fine | If you create a CodePipeline via CloudFormation. It starts it automatically, that can be a problem because the pipeline can rewrite the same stack...Is there any way to disable this behaviour?Thanks. | Is there any way to stop AWS from starting CodePipeline automatically if I deploy it via CloudFormation? |
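To complete the CodePipeline answer above, here is a hedged sketch of the Lambda function the EventBridge rule could trigger to stop the execution that starts on pipeline creation. The detail field names follow the "CodePipeline Pipeline Execution State Change" event; the abandon flag and reason are assumptions to adjust.

import boto3

codepipeline = boto3.client('codepipeline')

def handler(event, context):
    detail = event['detail']
    # The state-change event carries the pipeline name and the execution id.
    codepipeline.stop_pipeline_execution(
        pipelineName=detail['pipeline'],
        pipelineExecutionId=detail['execution-id'],
        abandon=True,
        reason='Execution auto-started on pipeline creation; stopping it.',
    )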
When you enable MFA, the SDK does not automatically know how to work with it. Your regular IAM user's API and secret keys are no longer enough. Instead you need to use temporary credentials created only for your MFA session. To make MFA work with boto3 you have to explicitly call get_session_token: MFA-enabled IAM users would need to call GetSessionToken and submit an MFA code that is associated with their MFA device. Using the temporary security credentials that are returned from the call, IAM users can then make programmatic calls to API operations that require MFA authentication. Using get_session_token you can call the STS service, which is going to provide you with temporary credentials based on your MFA details:
sts = boto3.client('sts')
mfa_response = sts.get_session_token(
DurationSeconds=123,
SerialNumber='string',
TokenCode='string'
)
The call will return the credentials in mfa_response, which you can use to create a new boto3 session. For example:
mfa_session = boto3.session.Session(
    aws_access_key_id=mfa_response['Credentials']['AccessKeyId'],
    aws_secret_access_key=mfa_response['Credentials']['SecretAccessKey'],
    aws_session_token=mfa_response['Credentials']['SessionToken'])
dynamo = mfa_session.resource('dynamodb', ...)
# and the rest of the code | Previously when I did not set MFA to login to AWS console I've connected to dynamodb bydynamo = boto3.resource('dynamodb',
region_name='ap-northeast-2',
endpoint_url='http://dynamodb.ap-northeast-2.amazonaws.com')
table = dynamo.Table('tablename')and querying to that table was perfectly fine.response = table.query(
KeyConditionExpression =Key("user_id").eq(123123)
)After I've set MFA for additional security to login to AWS console and now when I execute above code I get:ClientError: An error occurred (UnrecognizedClientException) when calling the Query operation: The security token included in the request is invalid.I use tunnel for RDB, is there something like that I could use for connecting to dynamodb or is there a permission I need in order to access dynamodb? | Access aws dynamodb using boto3 when MFA has been set. Getting ClientError |
The parameters you described above are used for the following:
NeptuneBulkloadIAMRoleArn - This is an IAM role set up to run the loader command. Instructions for setting this up are found here.
NeptuneClusterEndpoint - This is the endpoint of your Neptune database; it will be accessible either from the console or the CLI.
NeptuneLambdaIAMRoleArn - This allows you to pass in your own role the Lambda should use; if not specified, the CloudFormation stack should make one for you. | Hello, I am planning to run the CloudFormation stack that is preconfigured by AWS here.
It prompts me to fill out NeptuneBulkloadIAMRoleArn, NeptuneClusterEndpoint, and NeptuneLambdaIAMRoleArn, but I don't know what to fill in there; can you help me out? | Confusing parameter for cloudFormation script
AWS actually restricts access to this port for security reasons. The suggestion is try using another port if you can (for example SES works over port 587 as well).You can however request that this restriction is removed, to do this you will need to do thefollowing steps:First, create a corresponding DNS A record:If you're using Amazon Route 53 as your DNS service, either create a new resource record set that includes an A record, or update your existing resource record set to include a new A record.If you're using a service other than Amazon Route 53, ask your DNS provider to create an A record for you.Then, request AWS to remove the port 25 restriction on your instance:Sign in with your AWS account, and open the Request to Remove Email Sending Limitations form.In the Use Case Description field, provide a description of your use case.(Optional) Provide the AWS-owned Elastic IP addresses that you use to send outbound emails as well as any reverse DNS records that AWS needs to associate with the Elastic IP addresses. With this information, AWS can reduce the occurrences of emails sent from the
Elastic IP addresses being marked as spam.Choose Submit. | I tried to unblock port 25 on my ec2 instance so I could send emails and I was asked to provide this:A statement of the security measures and mechanisms you will be implementing to avoid being implicated in the sending of unwanted mail (Spam)What does this mean, like what is an example of those security measures? I have no idea what I'm supposed to respond to with that. All I plan on doing is sending emails to verify email accounts and change passwords for user accounts on my website. | "A statement of the security measures and mechanisms you will be implementing" AWS (Unblock port 25) |
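As a supplement to the port 25 answer above, if you can switch ports, sending through the SES SMTP interface on port 587 avoids the restriction entirely. A hedged sketch follows; the SMTP endpoint region, credentials, and addresses are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['Subject'] = 'Verify your account'
msg['From'] = 'no-reply@example.com'
msg['To'] = 'user@example.com'
msg.set_content('Click the link below to verify your email address...')

# Port 587 with STARTTLS is not throttled the way port 25 is on EC2.
with smtplib.SMTP('email-smtp.us-east-1.amazonaws.com', 587) as server:
    server.starttls()
    server.login('SES_SMTP_USERNAME', 'SES_SMTP_PASSWORD')
    server.send_message(msg)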
You will have to first configure your API Gateway API to have an HTTP integration. Add the URL of your external API in the integration config and then use mapping templates to add the API key to the headers sent in the request to the external API. If you are going to have a static value for the API key header, then it should be pretty straightforward following the above doc. If you plan to get it from the client, then you will have to map the incoming value through the mapping template and then send it to the external API. (A hedged boto3 sketch of the static-header case is shown after this question.) | Is it possible to connect an AWS API Gateway REST API to an external API? I guess the answer is yes since I was able to do this to a simple flask endpoint through a public IP on an EC2 instance. However, when I try to do this to an external public API endpoint I cannot figure out how to send the API Key for the remote API. The documentation really does not talk about this use case. Searching the web also did not provide any answers. | AWS API Gateway to External REST API
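Supplementing the answer above, here is a hedged boto3 sketch of the static-header case. It wires the method to the external endpoint and injects the API key via integration request parameters (an alternative to a body mapping template); the REST API id, resource id, URL, and key value are placeholders.

import boto3

apigw = boto3.client('apigateway')

apigw.put_integration(
    restApiId='abc123',                 # placeholder
    resourceId='xyz789',                # placeholder
    httpMethod='GET',
    type='HTTP_PROXY',                  # use 'HTTP' if you need response mapping templates
    integrationHttpMethod='GET',
    uri='https://api.example.com/v1/items',
    requestParameters={
        # Static values must be wrapped in single quotes.
        'integration.request.header.x-api-key': "'my-external-api-key'",
    },
)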
Unfortunately at this time this is the only way; the AWS documentation states the following: "To close an account, you must be signed in as the AWS account root user of the account. If you sign in to an account with an AWS Identity and Access Management (IAM) user or role, you can't close the account." This will primarily be for security reasons, the exact same reason why IAM users cannot access billing (unless the root user allows it). | There are various ways to automate account creation in AWS Organizations, but what about delete/close? So far it looks like I have to log in with the child account's root account to be able to close the account. Is there any way to close a member account using the parent root? It seems like this should be possible. | If I create an AWS account in AWS organizations, can I delete it from my root account which owns the organization?
Including this in CDK is tricky, because you'll have to resolve this count when you've completed your stack.Why don't you just runcdk synth | jq '.Resources | length'to get the number of resources? | I'm trying to deploy a quite big stack using CDK TypeScript. When I rancdk deploy myApp, I gotTemplate format error: Number of resources, 257, is greater than maximum allowed, 200error.After some research, I think I know what's the issue. From thedocs:The keyword here is "nested stacks" and that's what I'm planning to do now. From my reading, I'm clear what I'm suppose to do in terms of creating parent and nested stacks.The question here is, is there any available TS function in CDK library that I can call to tell me how many resources that a stack will generate?Or, in simpler words, I want the TS function to tell me "with all these declarations defined in this stack, you will create xxx number of resources under this stack".Even better, a TS function that can list all resources that will be generated under the stack.I'm looking for something like this:new cdk.CfnOutput(this, "TotalResources", {
value: ___, // <-- any idea what's available for this one?
});With this helper function, it will help me with my refactoring and nested stacks planning work. | How to get total number of resources under a CloudFormation stack? |
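Expanding on the CDK answer above, the same count can be done without jq by reading the synthesized template out of cdk.out after running cdk synth; the template file name below is an assumption based on the stack name.

import json

# CDK writes <StackName>.template.json into cdk.out after `cdk synth`.
with open('cdk.out/MyStack.template.json') as f:
    template = json.load(f)

resources = template.get('Resources', {})
print(f'{len(resources)} resources in the stack')

# Optionally list them by logical id and type.
for logical_id, resource in resources.items():
    print(logical_id, resource['Type'])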
This is to filter the AMIs to those that are owned by a specific AWS account. In this case the filter will only find images owned by the account id 099720109477 that are named ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*. Below is the relevant part from the documentation: "Filters the images by their owner. You may specify one or more AWS account IDs, "self" (which will use the account whose credentials you are using to run Packer), or an AWS owner alias: for example, amazon, aws-marketplace, or microsoft. This option is required for security reasons." | I am new to Packer and exploring a few things on it; while using it something like this came up:
"builders": [
{
"type": "amazon-ebs",
"profile" : "sumanthdev",
"region": "us-east-1",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
"root-device-type": "ebs"
},
"owners": ["099720109477"],
"most_recent": true
},
I want to know what "owners": ["099720109477"] stands for.
I know it takes an account id as input, but which one? The account id where it is going to create the AMI, or? | what does "Owner" field in packer "source_ami_filter" work on?