Columns: Response (string, length 8–2k) · Instruction (string, length 18–2k) · Prompt (string, length 14–160)
I solved this by adding this line to postgresql.conf:

listen_addresses = '*'

The file is located at /etc/postgresql/9.5/main/postgresql.conf.

Then I added these lines to pg_hba.conf:

# IPv4 local connections:
host    all    all    0.0.0.0/0       md5
host    all    all    127.0.0.1/32    md5

And restarted the postgres service using:

sudo service postgresql restart
I am trying to connect to my postgres database which is installed on an AWS EC2 instance. I have installed pgadmin3 on my local Ubuntu machine, and when I try to connect to postgres I get this error:

could not connect to server: Connection refused. Is the server running on host "myip" and accepting TCP/IP connections on port 5432?

On AWS I have opened port 5432. I edited my postgresql.conf and added:

listen_addresses = '*'

and inside pg_hba.conf I added this:

host all all 192.168.1.0/24 md5

But now I am getting this error:

FATAL: no pg_hba.conf entry for host "myip", user "postgres", database "postgres", SSL on
FATAL: no pg_hba.conf entry for host "myip", user "postgres", database "postgres", SSL off
Can't connect to Postgres using pgadmin
Your compute environment will terminate if it is idle near the end of an AWS billing hour. Inside the Compute Environment Parameters documentation for AWS Batch there is a definition of State. A compute environment in the Enabled state can accept jobs from the queue. Once the compute environment is Disabled and idle, toward the end of an AWS billing hour the compute environment is scaled in (which will terminate your EC2 instance).
I am running a simple Java HelloWorld program using a docker container in AWS Batch. I have created a managed Compute Environment with the following values:

Minimum vCPUs: 0
Desired vCPUs: 0
Maximum vCPUs: 256
Instance types: optimal

On submitting the job, it executes successfully: the job is submitted to the queue, the scheduler provisions the EC2 instance (with the aws-ecs agent container and the java helloworld container specified in the Job Definition), and the job completes with its logs in a CloudWatch stream. My issue is that after the job has succeeded, the compute environment (EC2 instance) provisioned by the scheduler keeps on running instead of terminating. Please suggest if I am missing anything.
Terminate EC2 instance after AWS Batch is succeeded
Change the Name:

Name = "SchedulingEngine" // <----- It should be unique for each execution
I am calling a step function from a Lambda function, in a loop. However I am getting an ExecutionAlreadyExistsException. What am I doing wrong here?

[Fact]
public async void ActualSchedulingEngineStepFunctionCallTest()
{
    var amazonStepFunctionsConfig = new AmazonStepFunctionsConfig { RegionEndpoint = RegionEndpoint.USWest2 };
    using (var amazonStepFunctionsClient = new AmazonStepFunctionsClient(awsAccessKeyId, awsSecretAccessKey, amazonStepFunctionsConfig))
    {
        var input = new Input
        {
            ID = "24232323232323232",
            Status = 1,
            Type = "Interim"
        };
        var jsonData1 = JsonConvert.SerializeObject(input);
        var startExecutionRequest = new StartExecutionRequest
        {
            Input = jsonData1,
            Name = "SchedulingEngine",
            StateMachineArn = "arn:aws:states:us-west-2:<SomeNumber>:stateMachine:SchedulingEngine"
        };
        var taskStartExecutionResponse = await amazonStepFunctionsClient.StartExecutionAsync(startExecutionRequest);
        Assert.Equal(HttpStatusCode.OK, taskStartExecutionResponse.HttpStatusCode);
    }
}

Stack trace:

Amazon.StepFunctions.Model.ExecutionAlreadyExistsException : Execution Already Exists: 'arn:aws:states:us-west-2:<SomeNumber>:execution:SchedulingEngine:SchedulingEngine'
---- Amazon.Runtime.Internal.HttpErrorResponseException : Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.
Call AWS Step Functions from .Net
If you create a pipeline with CloudWatch Events as the option to automatically start the pipeline (you pick this option during the source step), then CodePipeline tries to create the CloudWatch event and rule along with the corresponding role and policy. It's not possible to manually create that CloudWatch service role and assign it during pipeline creation, because this happens in the background and there is no option to customize this step. This is the step which results in the "Could not create IAM role" error (if the user creating the pipeline does not have permission to create IAM roles).

Solution: choose AWS CodeCommit periodic checks as the option to automate the pipeline, and you will not face this issue.
I'm trying to set up a pipeline following this AWS tutorial (here). Everything was going well until I got to the end of step 5. The error message I'm getting is simply "could not create IAM role", but the role was successfully created when I checked it in the IAM console. I've canceled the wizard and tried it again a few times, even leaving it overnight in case something was stuck in the cache, but it's still returning the same error message. Has anybody else come up against this?
AWS Codepipeline wizard "Could not create IAM role"
The solution is to use conn.autocommit(True) once, or conn.commit() after each SELECT query. With this option there will be a commit after each SELECT query; otherwise, subsequent SELECTs will return the same result. This seems to be Bug #42197, related to the query cache and auto-commit in MySQL. The status is "won't fix"! There is also an issue in pymysql, but it is closed. In a few months this should be irrelevant, because MySQL 8.0 is dropping the query cache.
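A minimal sketch of the fix, assuming a pymysql connection named conn; the host, credentials and table are placeholders, not the asker's actual values:

import pymysql

# Hypothetical connection details; substitute your own RDS endpoint and credentials.
conn = pymysql.connect(host='mydb.example.rds.amazonaws.com',
                       user='user', password='secret', db='DB')
conn.autocommit(True)  # option 1: commit automatically after every statement

def lambda_handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT col FROM DB.table ORDER BY create_time DESC LIMIT 1")
        row = cur.fetchone()
    # option 2: if autocommit is off, end the read transaction explicitly
    # conn.commit()
    return row[0] if row else None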
Simplifying my program: I have a MySQL RDS DB, and I want to develop a Lambda function to get the last value inserted in a specific column. I have the following code in a Lambda function, based on this AWS tutorial:

# Connection to DB outside the handler, per AWS recommendation
def lambda_handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT column FROM DB.table ORDER BY create_time DESC LIMIT 1;")
        row = cur.fetchone()
    return row[0]

I am using pymysql. Basically, on the first call (after saving the lambda function, for example) it works as expected and returns the last value in the table. However, any other call during a short interval (some minutes) continues to return the same value, independent of any DB changes. Saving or waiting for some minutes leads to the correct result. Is it possible that I'm unintentionally caching the result?
Why is an AWS Lambda python call to a MySQL RDS being cached?
It depends. It varies from case to case, since the responses come from the cloud providers (AWS, Azure). For example: if you create a VPC in Terraform, it will create a new VPC with a new VPC ID (Terraform doesn't let you specify the VPC ID when creating one), so it won't affect your existing resources. If you write a Route53 record in Terraform, it could overwrite existing Route53 entries. But if you import existing resources into Terraform state, it will map them to Terraform resources; in that case, destroying the resource will remove the actual cloud resource. Hope I understood your question and answered it.
Just a quick question: does anyone know if Terraform will wipe out existing resources on AWS? For example, if I already have an existing VPC with resources, or S3/EFS storage, will Terraform ignore these resources when I run it with my configuration files to deploy, say, another VPC? Or, since Terraform is looking for a desired state, will it wipe anything existing? I'm hoping that, unless you specifically import existing resources, Terraform will just leave them alone? Thanks
Terraform - Will it wipe existing AWS resources?
The documentation states for num_cache_nodes that you can only specify one instance for Redis. In order to create a clustered Redis setup you need to create an elasticache_replication_group; you can find the Terraform documentation for that here.
I am not sure if I am doing something wrong here, but the documentation does not say anything. I am trying to deploy a Redis cluster and this is the error that I am getting:

aws_elasticache_cluster.cluster: engine "redis" does not support num_cache_nodes > 1
Redis support in terraform
You can use result.getRefreshToken().getToken() for that. The success callback takes a CognitoUserSession object (i.e. result) as a parameter, which exposes the getRefreshToken method to retrieve the refresh token.

Refer to this link for the Cognito JavaScript SDK documentation: https://github.com/aws/aws-amplify/tree/master/packages/amazon-cognito-identity-js

Not sure if I clearly understand your second question, but Use case 32 in the above link might help you in dealing with it.
When successfully logged in to the Cognito user pool, I can retrieve the access token and id token from the callback function as:

onSuccess: function (result) {
    var accesstoken = result.getAccessToken().getJwtToken()
    var idToken = result.idToken.jwtToken
}

But how can I retrieve the refresh token? And how can I get a new token using this refresh token? I did not find any clear answers.
Using Amazon Cognito Refresh Token to get new token in javascript
After speaking to AWS developer support I found that it's not possible to link a Cognito Identity back to a user in a Cognito User Pool. Hence, if you need to know which user your backend is executing code on behalf of (in a Lambda, perhaps), you have the following options:

1. Send user info inside the request. Even if the Lambda invocation is authenticated with a Cognito Identity, and the Lambda has access to the identity in its context, if you want user info you need to send it yourself. For example, send the ID token in the request, validate it server side, and extract user info from it.
2. Use Cognito Sync to create a dataset for your Cognito Identities and store a bit of user info inside the dataset.
Is it possible to find out which user (within a user pool) a given Cognito identity belongs to, in the AWS Console or programmatically? In a Cognito Identity Pool, identities look like <region>:<guid>. When those identities come from a Cognito User Pool, then in the AWS Console we can click on the identity and get access to some information. That information is limited to DateCreated and LinkedLogin = cognito-idp.<region>.amazonaws.com/<userpool_id>, which only tells you this identity comes from a Cognito User Pool and which pool, but that is far from actually useful. Can we actually tell which user within the user pool?
Finding user associated with a Cognito Identity
When the message payload should be handled as raw binary data (rather than a JSON object), you can use the * operator to refer to it in a SELECT clause. Do this in your rule:

SELECT encode(*, 'base64') AS data, timestamp() AS ts FROM 'a/b'

This way the lambda will get invoked.

https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-select.html
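As an illustration of what the Lambda then receives, here is a minimal Python handler sketch; the 'data' and 'ts' keys match the aliases used in the SELECT above and are assumptions if your rule names them differently:

import base64

def lambda_handler(event, context):
    # 'data' holds the base64-encoded raw payload produced by encode(*, 'base64').
    raw_bytes = base64.b64decode(event['data'])
    print('received %d bytes at %s' % (len(raw_bytes), event['ts']))
    # ... deserialize raw_bytes with your own binary format here ...
    return {'bytes': len(raw_bytes)}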
I have configured a lambda for an IoT Rule. The MQTT topic will receive binary data, and on arrival of data the rule should invoke the lambda. The lambda gets invoked when I post normal JSON data, but if I post any binary data, the lambda does not get invoked. At the same time, I am able to consume the binary data posted to MQTT through my standalone consumer and deserialise it successfully. So what am I missing here?
AWS Lambda not getting invoked when binary data is posted to AWS IoT?
No, there is no automatic "stop after free trial" feature. Your AWS account is a full account, with all capabilities active. The AWS billing system, however, will not charge for services consumed within the free usage tier. You will receive emails when it is forecast that you will exceed the free tier, and you can Monitor Your AWS Free Tier Usage in the AWS management console.
I am trying out AWS features, and I intend to only try the "free trials" first. Is there a safeguard so that I don't end up being charged for crossing the free limit, like a switch such as "Stop after free trial"? This is for my entire AWS account, not limited to any one service.
Amazon AWS: Stop service after free limits
Do something like this. Note you may want to limit which ECR repos are accessible.

resource "aws_instance" "test" {
  ...
}

resource "aws_launch_configuration" "ecs_cluster" {
  ...
  iam_instance_profile = "${aws_iam_instance_profile.test.id}"
}

resource "aws_iam_role" "test" {
  name               = "test_role"
  assume_role_policy = "..."
}

resource "aws_iam_instance_profile" "test" {
  name = "ec2-instance-profile"
  role = "${aws_iam_role.test.name}"
}

resource "aws_iam_role_policy_attachment" "test" {
  role       = "${aws_iam_role.test.name}"
  policy_arn = "${aws_iam_policy.test.arn}"
}

resource "aws_iam_policy" "test" {
  name        = "ec2-instance-pulls-from-ecr"
  description = "EC2 instance can pull from ECR"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
I'm trying to attach an IAM role to EC2 instances (not ECS) so they can pull images from ECR.
How to attach IAM roles to EC2 instances so they can pull a specific image from ECR in Terraform
If your IAM roles are set up correctly, then you need to download the file to the SageMaker instance first and then work on it. Here's how:

# Import roles
import sagemaker
import boto3
role = sagemaker.get_execution_role()

# Download the file locally
s3 = boto3.resource('s3')
s3.Bucket(bucket).download_file('your_training_s3_file.rec', 'training.rec')

# Access it locally
train = mx.io.ImageRecordIter(
    path_imgrec='training.rec',
    ......
)
I've uploaded my own Jupyter notebook to SageMaker, and am trying to create an iterator for my training/validation data which is in S3, as follows:

train = mx.io.ImageRecordIter(
    path_imgrec = 's3://bucket-name/train.rec'
    ......
)

I receive the following exception:

MXNetError: [04:33:32] src/io/s3_filesys.cc:899: Need to set enviroment variable AWS_SECRET_ACCESS_KEY to use S3

I've checked that the IAM role attached to this notebook instance has S3 access. Any clues on what might be needed to fix this?
Training data in S3 in AWS Sagemaker
You can make both ideas work (a single account with multiple environments, or multiple accounts with one environment per account) and both have advantages and disadvantages. If you run multiple environments in the same account:

- your AWS account limits are more easily reached
- a runaway dev script could impact production's ability to scale up
- loss of credentials endangers all of your environments
- developers could accidentally damage production

I think it's also simpler to separate production costs from other costs if you use multiple accounts and consolidated billing. Setting up cross-account access is simple, if you need it.
What are the drawbacks of deploying 3 environments (DEV, QA, and Production) under the same AWS account, in different VPC IP tables.To me it makes sense, if the same team will need to manage 3 different environments.I've heard people saying that one should use separate accounts for development and production, but does that mean to use completely different environments and that they should have different console login links?Please advise. Thanks!!
What's the drawback of using the same AWS account for different environments with different VPCs?
You need to install a desktop environment (X) and xrdp on the Linux instance to gain remote desktop access.
Lightsail allows RDP on their Windows instances, can I do the same with the Linux variant? Would like to run Selenium etc.
Does Amazon Lightsail have remote desktop (GUI) with their Linux offering?
I found the answer. The version of the AWS Java SDK I was using wasn't recent enough to have the method. Here is how to do it:

Bucket bucket = amazonS3Client.createBucket( bucketName );

ServerSideEncryptionRule serverSideEncryptionRule = new ServerSideEncryptionRule();
ServerSideEncryptionByDefault serverSideEncryptionByDefault = new ServerSideEncryptionByDefault();
serverSideEncryptionByDefault.setKMSMasterKeyID( "xxxxxxxxx-xxx-xxxxx-xxxx-xxxxx-xxxx-xxxxxxx" );
serverSideEncryptionByDefault.setSSEAlgorithm( SSEAlgorithm.KMS.getAlgorithm() );
serverSideEncryptionRule.setApplyServerSideEncryptionByDefault( serverSideEncryptionByDefault );

SetBucketEncryptionRequest setBucketEncryptionRequest = new SetBucketEncryptionRequest();
setBucketEncryptionRequest.setBucketName( bucket.getName() );

ServerSideEncryptionConfiguration serverSideEncryptionConfiguration = new ServerSideEncryptionConfiguration();
ArrayList< ServerSideEncryptionRule > serverSideEncryptionRules = new ArrayList<>();
serverSideEncryptionRules.add( serverSideEncryptionRule );
serverSideEncryptionConfiguration.setRules( serverSideEncryptionRules );
setBucketEncryptionRequest.setServerSideEncryptionConfiguration( serverSideEncryptionConfiguration );

amazonS3Client.setBucketEncryption( setBucketEncryptionRequest );
I've been unsuccessful in locating a call that would allow me to create a KMS-encrypted bucket in S3 (using the Java AWS SDK). Does such a method exist? And if so, where can I find examples/documentation?
Create an S3 bucket with KMS encryption?
It is possible that you have no EC2 instances running in the region you are querying. Make sure the region parameter references a region in which you do have EC2 instances.
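To check which regions actually contain instances, here is a hedged Python (boto3) sketch; it simply lists instance IDs per region rather than relying on one hard-coded region:

import boto3

# Start from any region just to enumerate the region list.
ec2_client = boto3.client('ec2', region_name='us-east-1')
regions = [r['RegionName'] for r in ec2_client.describe_regions()['Regions']]

for region in regions:
    ec2 = boto3.client('ec2', region_name=region)
    reservations = ec2.describe_instances()['Reservations']
    instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]
    if instance_ids:
        print(region, instance_ids)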
I am using the AWS API to make a direct HTTP request to AWS EC2. Whenever I try to do DescribeInstances I get this:

<DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
    <requestId>133ec145-9495-41bc-ba15-bc54bf6d4d5b</requestId>
    <reservationSet/>
</DescribeInstancesResponse>

If I try to do StartInstances, it says non-existing ID. But DescribeRegions works fine. Where am I going wrong?
aws describe instances showing empty list
You pay for the provisioned capacity. If, for example, you request 400 WCUs, then Amazon needs to reserve capacity to make sure you will be able to use all those WCUs. So even if you don't write anything, you still need to pay Amazon for the reservation they have to make. This is the capacity you have to pay for beyond what you got for free in the free tier. This is also the reason why you should choose your provisioned capacity carefully, even if you use auto scaling; constantly monitoring your usage is key to using AWS. I think you have quite some capacity provisioned for the table, and that's where I would start looking. If you are sure a table will not be used for a prolonged time, I would dial down the provisioned capacity, even with auto scaling enabled. That being said, it might be a good idea to check out auto scaling and see if it could have helped you here.
I've been working on a new project and have put some data in Amazon DynamoDB. The project was kind of on hold last month, and I was surprised to see such high costs for a DB that was essentially almost untouched for the whole month. Here are the bill details. What does "per hour for units of write capacity beyond the free tier" mean? Thanks.
Confused about Amazon DynamoDB, what is per hour for units of write capacity?
An S3 bucket is an S3 bucket. It doesn't matter which AWS account it is in. If you have permission to access the bucket then you can access it.Simply provide the name of the S3 bucket (it must be in the same region in this specific case) and make sure the credentials you are using are allowed access to the S3 bucket.
When writing the AWS CloudFormation template to create a Lambda function, the 'Code' field is required. I found the documentation here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html

The document says you can specify the source of your Lambda function as a zip file in an S3 bucket, and for the S3Bucket field it says "You can specify a bucket from another AWS account as long as the Lambda function and the bucket are in the same region." If you put a bucket name in the S3Bucket field, it will try to find the bucket in the same AWS account. So my question is: how can I specify a bucket from another AWS account? A YAML code snippet I created for the CFT:

MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    Role: !GetAtt LambdaRole.Arn
    FunctionName: 'MyLambda'
    MemorySize: 1024
    Timeout: 30
    Code:
      S3Bucket: 'my-bucket'
      S3Key: 'my-key'
AWS CloudFormation: How to specify a bucket from another AWS account for Lambda code?
Using DynamoDB's server-side encryption option is sufficient. You do not need to pre-encrypt the data before sending it to DynamoDB for encryption. The data also needs to be encrypted in transit to DynamoDB, of course. Note that while HIPAA itself requires encryption at rest, AWS additionally requires that you store the data in an AWS HIPAA-eligible service (which DynamoDB is). You must additionally execute an AWS BAA, and then you may use any AWS service (even those not on the HIPAA-eligible list) in an account designated as a HIPAA Account, but you may only process, store and transmit PHI data using the HIPAA-eligible services.

Update November 2018: all DynamoDB tables are encrypted at rest.
We develop a PWA based on AWS that needs HIPAA compliance. The AWS Architecture Whitepaper says that PHI needs to be encrypted before it is stored in DynamoDB. Now AWS has released Encryption at Rest in some DynamoDB regions. Is it still required to encrypt PHI myself when I enable encryption at the DynamoDB level in order to be HIPAA compliant?
DynamoDB encryption for HIPAA compliance
There is no way to do this out of the box using CloudFormation. However, this thread suggests a workaround involving a Lambda-backed custom resource.
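To make the workaround concrete, here is a minimal Python sketch of a Lambda-backed custom resource handler; it assumes the cfnresponse helper module that is available to inline (ZipFile) custom-resource Lambdas, and the property names (ProviderName, ClientId) are hypothetical examples passed in from the template:

import cfnresponse  # helper module available to inline (ZipFile) custom-resource Lambdas

def handler(event, context):
    try:
        if event['RequestType'] in ('Create', 'Update'):
            # Compute what the template cannot express natively, e.g. a dynamic
            # role-mapping key built from other resources passed in as properties.
            props = event['ResourceProperties']
            data = {'RoleMappingKey': '%s:%s' % (props['ProviderName'], props['ClientId'])}
        else:
            data = {}
        cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})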
I have this role attachment resource that, as is, deploys just fine:

CognitoIdentityPoolRoleAttachment:
  DependsOn: [ CognitoIdentityPool, CognitoIdentityPoolAuthRole, CognitoIdentityPoolUnauthRole ]
  Type: "AWS::Cognito::IdentityPoolRoleAttachment"
  Properties:
    IdentityPoolId: !Ref CognitoIdentityPool
    RoleMappings:
      'cognito-idp.us-west-2.amazonaws.com/us-west-2_naEXQTLxD:44rd7mu8dncna2kqimd74f7u98':
        Type: Token
        AmbiguousRoleResolution: AuthenticatedRole
    Roles:
      unauthenticated: !GetAtt CognitoIdentityPoolUnauthRole.Arn
      authenticated: !GetAtt CognitoIdentityPoolAuthRole.Arn

However, as you can see, the key under RoleMappings is actually my CognitoUserPool ProviderName appended to my Cognito User Pool Client ID, so I need this key to be dynamic. Looking over the docs, however, I can't find a way to actually use intrinsic functions on an object key. When I try this:

RoleMappings:
  !Sub '${CognitoUserPool.ProviderName}:${CognitoUserPoolClient}':
    Type: Token
    AmbiguousRoleResolution: AuthenticatedRole

I get an invalid template error. Is there a special syntax I'm missing that allows you to use functions to define keys instead of properties? Or am I going to have to do this some other way?
AWS CloudFormation - Any way to use an intrinsic function as an object key?
Doh, I was using boto3.client('elb'), which doesn't return ARN, instead of boto3.client('elbv2'), which does. I hope someone will find this useful...
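For illustration, a short boto3 sketch of the elbv2 lookup; the load balancer name here is a placeholder:

import boto3

elbv2 = boto3.client('elbv2')  # 'elbv2' covers Application/Network Load Balancers

# describe_load_balancers accepts a Names filter; 'my-load-balancer' is hypothetical.
response = elbv2.describe_load_balancers(Names=['my-load-balancer'])
arn = response['LoadBalancers'][0]['LoadBalancerArn']
print(arn)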
Given a load balancer name, is it possible to find its ARN with the AWS API? The closest I see is the describe_load_balancers function, but its output doesn't include the ARN. Am I missing something simple? Full context: the script adds "DDoSAttackBitsPerSecond" metrics to an AWS dashboard, and the metric description includes the ARN of the load balancer as one of its dimensions. Any suggestions appreciated, Mike
How to find AWS Load Balancer ARN from its name with python api?
There are several storage backends in Vault, and only some of them support HA, like Consul. However, if a backend doesn't support HA it doesn't mean that it can't be used at all. So, if you need to run multiple Vault instances, each one independent from the other ones, you should be able to use S3 as a storage backend. But if you need HA you need to use Consul, or any other backend that supports HA. Hope this helps.
I have 3 Availability Zones in my AWS VPC and I would like to run Vault to connect to S3. I would like to run 3 Vault servers (one for each zone) all of them syncing to the same S3 bucket. Is this HA scenario for Vault possible?I read that Vault doesn't support HA using S3 as the backend and might need to use Consul (which runs 3 servers by default). A bit confused about this. All I want is to run multiple Vault servers all storing/reading secrets from the same S3 bucket.Thanks for your inputs.Abdul
Multiple Hashicorp Vault servers in different AZs in AWS
Try this:

aws dynamodb query --table-name name-of-table --key-condition-expression 'id = :idval' --expression-attribute-values '{":idval":{"S":"91"}}'

You have to use value substitution on attribute values. You can optionally use attribute name substitution. Note I've assumed your id attribute is of type String; change it to "N" if it's a number.
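The same query expressed with boto3, as a hedged sketch (table name and key type copied from the CLI example above; adjust if yours differ):

import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.query(
    TableName='name-of-table',
    KeyConditionExpression='id = :idval',
    ExpressionAttributeValues={':idval': {'S': '91'}},  # use {'N': '91'} for a Number key
)
print(response['Items'])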
I'm trying to use the query expression and getting (ATTEMPT EDIT - 1 PASS):

An error occurred (ValidationException) when calling the Query operation: Invalid KeyConditionExpression: Syntax error; token: "{", near: "{""

The query looks like:

aws dynamodb query --table-name name-of-table --key-condition-expression 'id=:91'

I've tried '"id"="91"', ':id=":91"', etc. id is the partition key, so this is also the required attribute.
AWS KeyConditionExpression dynamodb query
I've been working with AWS SES recently. I'm pretty sure you can send 50 destinations in a single call to SES even though your max send rate is 14 per second; I was able to. Definition of Max Send Rate: "The maximum number of emails that Amazon SES can accept from your account per second. You can exceed this limit for short bursts, but not for a sustained period of time."

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/manage-sending-limits.html?icmpid=docs_ses_console

Hope this helps.
I have two questions regarding AWS SES SendBulkTemplatedEmail:1) Does anyone know about any step-by-step tutorial in .Net? I have only seen examples using the CLI, and I am fumbling with the API to try to make it work.2) Each SendBulkTemplatedEmail request can contain 50 destinations (recipients). My daily send quota is 100,000 emails, and my rate is 14 per second. Does that mean I must send in 14 destinations max per call, and then sleep the thread for a second before sending in the next call with 14 recipients? Or can I send in the full 50 per call? If I do 50, do I still have to sleep the thread? Or will AWS manage this and queue the messages?
AWS SES SendBulkTemplatedEmail, example and what happens if quota is exceeded?
Unfortunately, AWS doesn't provide a way to dynamically update the template as per the requirement. I have solved a similar problem using Mustache templates with the Java library Handlebars. Using this library you can generate the template on the fly based on the requirements. Hope this helps.
I have been using nested stacks in CloudFormation for several months and they are very useful. So I thought I should spend some time making each nested stack reusable to other teams in the org. I saw the use case of AWS::Include in several places, like here and here, and it makes good sense to me. One approach I have in mind is one snippet for each resource, like an AWS::EC2::Subnet or AWS::EC2::InternetGateway, which can be included zero or more times into a vpc.json template, which itself can be used as a nested stack in a larger application. The snippet does not take any parameters, but can reference a parameter that exists in the parent template. At first glance this doesn't seem enough to me. Consider this example:

"PublicSubnet": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "VpcId": {"Ref": "VPC"},
    "AvailabilityZone": { "Fn::Select" : [ "0", { "Fn::GetAZs" : {"Ref": "AWS::Region"} }] },
    "CidrBlock": { "Fn::FindInMap": ["AZSubnetMap", { "Fn::Select" : [ "0", { "Fn::GetAZs" : {"Ref": "AWS::Region"} }]}, "PublicSubnet"]},
    "MapPublicIpOnLaunch": "true",
    "Tags": [..]
  }
}

How can I avoid hard coding that "0" for the AZ in a Subnet snippet, for example?
Writing reusable CloudFormation snippets with AWS::Include and Nested Stacks
You can do either. The better choice is to use weighted resource records.An Amazon Load Balancer has more than one IP address. DNS queries will usually return two IP addresses. If you create a single record with more than one entry, it is very likely that only the first IP address of the first load balancer will be used by clients. By using a weighted record, you will be able to balance traffic to the load balancers.
I see an option for weighted routing in Route53 for Alias records i.e. Alias + Alias Option.I am confused if I am supposed to create two identical A + Alias Option records or If I am supposed to enter two load balancer DNS records into the Alias Record in the AWS Console.Basically I am trying to do weighted routing between two load balancers.
Can I use multiple LoadBalancers in Route53 Alias records?
If you're connecting via CodeCommit, you can split strings to get more useful values such as Account Id and Repo Name using:

echo "Region = ${AWS_REGION}"
echo "Account Id = $(echo $CODEBUILD_BUILD_ARN | cut -f5 -d ':')"
echo "Repo Name = $(echo $CODEBUILD_SOURCE_VERSION | cut -f2 -d '/')"
echo "Commit Id = ${CODEBUILD_RESOLVED_SOURCE_VERSION}"

Which outputs:

Region = us-west-2
Account Id = 0123456789
Repo Name = my-app
Commit Id = a46218c9160f932f2a91748a449b3f9818964642
I created a pipeline that takes the code from a CodeCommit repository, builds it via CodeBuild, and pushes the code to an S3 bucket. For my CodeBuild I'm using an AWS managed image, aws/codebuild/nodejs:7.0.0. If I start my build manually via the CodeBuild console and specify the repository, I get the repository URL when I run the following command in the buildspec:

- printf ${CODEBUILD_SOURCE_REPO_URL}

But if the CodeBuild is triggered automatically by a push to a repository, CODEBUILD_SOURCE_REPO_URL returns nothing.
AWS CodeBuild default environment variables.
When you say "which instance", do you mean the instance id, instance name, instance private IP or instance public IP? Query the instance metadata server:

curl 169.254.169.254/latest/meta-data/instance-id

If you want the instance tags:

aws ec2 describe-tags --filters "Name=resource-id,Values=instance_id"

or

aws ec2 describe-tags --filters "Name=resource-id,Values=`curl 169.254.169.254/latest/meta-data/instance-id`"

For the instance's private IP:

curl 169.254.169.254/latest/meta-data/local-ipv4

For all available values:

curl 169.254.169.254/latest/meta-data/
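Since the question also mentions boto3, here is a hedged Python sketch doing the same lookup (instance id from the metadata service, then the Name tag via describe_tags); it assumes the instance role allows ec2:DescribeTags:

import urllib.request
import boto3

# Read the instance id from the metadata service, then look up its Name tag.
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id', timeout=2).read().decode()

ec2 = boto3.client('ec2')
tags = ec2.describe_tags(
    Filters=[{'Name': 'resource-id', 'Values': [instance_id]},
             {'Name': 'key', 'Values': ['Name']}])['Tags']
print(instance_id, tags[0]['Value'] if tags else '(no Name tag)')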
Is there an easy way to determine (i.e., through boto3 or aws-cli) which instance I'm on, from an SSH session in that instance?
How to find out name of current instance from command-line?
In my experience, Fire TV and Fire Stick will pull from your mobile icons when you side-load a build. You won't see the actual TV app icon until the app is available after submission.I haven't tried it, but there could be workarounds, like maybe replacing the square mobile icons with the rectangular TV app icons, but it sounds to me like you've already done something similar, and it's forcing it to be square. If that workaround works, I'm not sure it's worth the risk of forgetting to change it back to a square.
The app launcher icon is wrongly displayed on an Amazon TV Stick device on the home screen. The image is correctly displayed in the Recent category, but in the Apps & Games category, instead of being rectangular it is shrunk to a square, as shown in this image: https://ibb.co/bYGiym

I have an Amazon TV Stick with FireOS 5.2.6.1. For testing I am using an Android app with just an empty activity, and in the drawable directory I placed an image of 1280x720px, as specified in https://developer.amazon.com/docs/app-submission/asset-guidelines.html#firetvassets

I read that there might be different behavior between apps in development and those uploaded to the Appstore. Is this correct? Are resources actually loaded from some Amazon web service rather than referenced from the manifest file, and is it possible that this icon will be correctly shown only after submitting it to the Appstore? Why does FireOS shrink the app icon on the home screen to a square or 4:3 format? So, is this some problem with Amazon and FireOS and installing applications locally, or am I just not setting things up correctly? If it is up to me, what should I do locally in my environment so that the icon is displayed in the correct rectangular shape in all categories?
Launcher app icon on Amazon TV Stick
Unfortunately, I don't think you can. Here is what AWS says in their docs: "To be able to undelete a deleted object, you must have had versioning enabled on the bucket that contains the object before the object was deleted."
Unfortunately, this morning I accidentally deleted a number of images from my S3 account, and I need to restore them. I have read about versioning, however this was not enabled on the bucket at the time of deletion (I have now enabled).Is there any way of restoring these files either manually, or via Amazon directly?Thanks, Pete
Restoring Amazon S3 files that are not versioned
Put simply, no. Here is what I just tried:

1. Created a topic
2. Subscribed one of my phone numbers
3. Published a message with a 3600s TTL (got the message right away)
4. Subscribed my second phone
5. Published another message with a 3600s TTL

Both phones got the second message. The first message was not sent to the second phone (even though I subscribed it well within the first message's TTL, but after its publication).
I know that generally SNS is a pub-sub mechanism, which "duplicates" a message to all the consumers that subscribed to the published topic. Nevertheless, I saw a field "TTL" in the SNS API, which defines the expiration of the message (in seconds since the message was created). I was wondering: if I publish a message to topic T with an expiration of 5 minutes, and after 2 minutes a consumer subscribes to topic T, will the consumer get the message?
Amazon SNS - is it possible to subscribe after publication and still get the message?
DynamoDB can consume up to 300 seconds of unused throughput as burst capacity. The maximum item size in DynamoDB is 400KB, and 1 RCU gives you a strongly consistent read of up to 4KB. Let's say you want to read an item that is 400KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds. Because of burst capacity there will always be a time you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1. Imagine starting the table with that 400KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to get the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a throttling exception. So no, DynamoDB will not increase request latency to meet your RCU provision; it either returns your results as fast as possible, or throws an exception.

EDIT: By the way, I should mention that all AWS DynamoDB SDKs handle throttling exceptions for you. If you try to read an item but get denied because you don't have enough throughput available, the SDK backs off and tries again. So unless your table really is under-provisioned, you shouldn't have to worry about handling throttling exceptions.
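The arithmetic in the example above, written out as a rough Python illustration (not an exact model of DynamoDB internals):

item_size_kb = 400
rcu_per_read = -(-item_size_kb // 4)        # 4KB per RCU (strongly consistent) -> 100 RCUs
provisioned_rcu_per_sec = 1
burst_window_sec = 300                      # roughly 300s of unused capacity can accumulate

seconds_to_save_up = rcu_per_read / provisioned_rcu_per_sec   # ~100s of idle time per read
max_banked_rcu = provisioned_rcu_per_sec * burst_window_sec   # at most ~300 RCUs banked
print(seconds_to_save_up, max_banked_rcu)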
There's something I can't understand about AWS DynamoDB throughput. Let's consider strongly consistent reads. Now, I understand that in this case, 1 unit of capacity means I can read up to 4KB per second. It's the "per second" bit that slightly confuses me. If you know exactly how quickly you want to read data, then you can set the units appropriately. But what if you're not too fussy about the read time? Say I have only 1 read unit assigned to my table and I try to read an item which is more than 4KB. Surely that just means that my read is going to take more than 1 second? That would be fine, but the documentation talks about requests failing. How can AWS determine that I used too many units when I didn't request that the data be read within a particular time? Maybe I am missing something obvious. Can someone help clear this up?
AWS DynamoDB throughput
This is not possible. An AMI is merely an image of a disk. AWS can (usually) detect the operating system of the AMI (e.g. Windows, Linux) but it has no knowledge of the software actually installed on the AMI. In general, any instance type can be used for any AMI. The exception to this is the virtualization type: some Linux AMIs might only run on PV (Paravirtualization) or HVM (Hardware Virtual Machine). If you are launching an instance from an AMI provided by AWS, the EC2 Management Console is smart enough to ensure that the correct instance type is selected for the given virtualization type. However, if you (or somebody else) created the AMI, there is no way to know the type of virtualization and therefore no way to know which instances would support it. These days the default is HVM, which is supported by all modern instance types (but not m1, for example).
I know this has been asked before, but I have yet to find a workaround or solution for getting the list of possible instance types for a given Amazon AMI. I'm using the .NET SDK. Has anyone been able to figure out a way to do this?
List all possible instance types for a specific AMI?
I'm not sure what you mean when you ask "by IP or HTTP?". The Google Maps API supports IP whitelisting so that you can limit the IP addresses that can use your API key to send requests. The problem you'll run into when trying to whitelist the IP address associated with your Lambda function is that you can't predict the IP; it'll be somewhere in the IP space of AWS Lambda. One option to consider is to set up a VPC with a NAT gateway, assign an Elastic IP to the NAT gateway, and route traffic from private subnets through the NAT device. Then you can configure the Lambda function to run inside a private subnet of the VPC. All outbound traffic from the Lambda function to the internet (and Google Maps) will route via the NAT gateway and hence come from a static IP (the Elastic IP you assigned to the NAT). Configure that in your Google Maps IP whitelisting.
We're setting up an API key for using the Google Maps API from an endpoint in an AWS Lambda function, and we are thinking about restricting it to the services used from the AWS backend. By IP or HTTP? Since AWS uses load balancers and such, we're considering a proxy or something like that. Any ideas? Thanks! :D
How to restrict Google Cloud API to AWS Lambda Endpoint?
From your comment:

"I am trying to find the maximum size/length of idempotent_id so far in my single table."

In order to do this without any auxiliary data, you will need to perform a full table scan and get the result attributes you care about from each item. You can use a ProjectionExpression to reduce the amount of data retrieved. Alternatively, you could store the length in another attribute and create a GSI on it, which would give you the ability to query that index in order. Another option would be to use something like DynamoDB Streams to listen to events and keep track of the max size in a different storage medium.
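A hedged sketch of the full-scan option in Python (boto3); the table name is a placeholder and the key is assumed to be a String named idempotent_id, as in the question:

import boto3

dynamodb = boto3.client('dynamodb')

# Paginate a full scan, projecting only the key attribute, and track the longest value seen.
max_len = 0
paginator = dynamodb.get_paginator('scan')
for page in paginator.paginate(TableName='my-table',
                               ProjectionExpression='idempotent_id'):
    for item in page['Items']:
        max_len = max(max_len, len(item['idempotent_id']['S']))
print('longest idempotent_id so far:', max_len)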
I have a use case to place constraints on the key size in my application. I tried to find the max length of the partition key so far in my DynamoDB table. This will help me to know my data before placing any internal constraints on the data that I am using as a partition key in DynamoDB. Example: let's say here is my table with a partition key (idempotent_id). I want to know the max length of the partition keys so far (in this case 7):

idempotent_id
1234
12
1234567
12345

I tried using the DynamoDB console from my AWS account. I looked at the query and scan APIs of DynamoDB, but nothing seems a good fit. Maybe this is something we can't find using DynamoDB? Or maybe I am searching wrongly? Any help would be appreciated.
Maximum Partition key length of my data in Dynamo DB
This is a quick snippet of how to create a schedule-based trigger. Notice how you can have multiple jobs (the soft limit is 10 per trigger) run by the trigger:

# Initialize glue client
import boto3
client = boto3.client('glue')

# Create trigger 'body'
trigger = dict(
    Name='trigger_name',
    Description='My trigger description',
    Type='SCHEDULED',
    Actions=[
        dict(JobName='first_job_name_to_be_triggered'),
        dict(JobName='second_job_name_to_be_triggered')
    ],
    Schedule='cron(0 8 * * ? *)'  # Every day at 8am UTC
)

# Create the trigger
client.create_trigger(**trigger)

# After the trigger is created, you want to activate it
client.start_trigger(Name=trigger['Name'])

If you wanted the trigger to run the job after some other jobs succeed, you would define the trigger like this:

trigger = dict(
    Name='trigger_name',
    Description='My trigger description',
    Type='CONDITIONAL',
    Actions=[dict(JobName='job_name_to_be_triggered')],
    Predicate=dict(
        Logical='AND',
        Conditions=[
            dict(
                JobName='first_job_required_to_succeed',
                LogicalOperator='EQUALS',
                State='SUCCEEDED'
            ),
            dict(
                JobName='second_job_required_to_succeed',
                LogicalOperator='EQUALS',
                State='SUCCEEDED'
            ),
        ]
    )
)

Hope this helps.
I have a table which contains a few schedules for various jobs. I want to process the records and create triggers via the AWS Glue API: http://docs.aws.amazon.com/glue/latest/dg/aws-glue-api.html

The above link is the documentation for AWS Glue. Can anyone provide a code snippet on how to use the API? I have searched long enough on the net and haven't found any documentation that provides a code snippet. I am looking for a code snippet for the following API call: CreateTrigger Action (Python: create_trigger). Any help would be great.
AWS Glue create Triggers via API
No, policies are not meant to be used this way, but you can solve the problem like this:

1. Remove public access to the S3 bucket
2. Create a web application (maybe a simple HTML page hosted on S3, backed by a Lambda function) to let the users select their files to upload to S3 and provide some tags
3. After your custom validations pass, call the aws-sdk API to upload the files to S3
Can we write an IAM policy to restrict the creation of S3 buckets to only when tags are present? I.e., a user should be able to create an S3 bucket only if certain tags are present.
IAM policy for S3 buckets
I suggest you remove the AWS credentials from the instance/AMI. Your user data script will be supplied with temporary credentials when needed by the AWS metadata server. See: IAM Roles for Amazon EC2.

1. Clear/delete AWS credentials configurations from your instance and create an AMI
2. Create a policy that has the minimum privileges to run your script
3. Create an IAM role and attach the policy you just created
4. Attach the IAM role when you launch the instance (very important)
5. Have your user data script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using a credentials file
I am trying to get some files from S3 on startup in an EC2 instance by using a user data script and the command /usr/bin/aws s3 cp ... The log tells me that permission was denied, and I believe it is because aws cli finds no credentials when executing the user data script. Running the command with sudo after the instance has started works fine. I have run aws configure both with sudo and without. I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is more convenient for me to change the user data than to create a new AMI every time the script changes. If possible, I would also like to avoid writing the credentials into the script. How can I configure awscli so that the credentials are used when running a user data script?
User Data script to call aws cli
To perform this for a single file, you could use the AWS S3 CLI API:

aws s3api get-object --bucket YOUR_BUCKET --key YOUR_FOLDER/KEY --range 100 TARGET_FILE_NAME

Note the use of s3api instead of s3. The range parameter here determines which range of bytes of your file you'd like to download. Two weeks ago, however, AWS announced Amazon S3 Select:

"Amazon S3 Select is now available in Preview. S3 Select is a new Amazon S3 capability designed to pull out only the data you need from an object, dramatically improving the performance and reducing the cost of applications that need to access data in S3."

It's currently in preview, but you can opt in for it and once approved should be able to use this API. Click here to find out more.

EDIT: You can use the sync operation of the S3 service to download the contents of a bucket. A possible solution would be:

aws s3 sync s3://bucketname .

To include or exclude certain files, use the flags --include "*.json" and --exclude "*test.json" respectively.
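The same partial download can be done with boto3; a hedged sketch, with bucket, key, and output file name as placeholders:

import boto3

s3 = boto3.client('s3')

# Download only the first ~200 MB of a single object using an HTTP Range header.
resp = s3.get_object(Bucket='YOUR_BUCKET',
                     Key='YOUR_FOLDER/KEY',
                     Range='bytes=0-209715199')
with open('partial_download', 'wb') as f:
    f.write(resp['Body'].read())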
Here is what I have tried: aws s3 cp s3://bucket name/folder ~/downloads --range bytes=0-200000000. I get an unknown options range error.Please provide me some guidance. I have about 1 TB of data, but I would like to do a partial download of about 100 MB of sequential, numbered files to my external hard drive. Thank you.
How to download partial s3 aws cli using the terminal? Its a 1 TB json/xml file, I only want 100-200MB of sequential, numbered files
We need to create a sparse index, for example on a Number attribute with value 0/1. In my case it is "isActive = 1" for not-deleted items. Then we query or scan with that IndexName. To soft-delete an item, we remove the "isActive" attribute. See DynamoDB Scan and Query with Index, and the official best practice "Take Advantage of Sparse Indexes", where our case is described. To remove the attribute, use this example:

const params = {
  TableName: this.TABLE,
  Key: { _id: id },
  UpdateExpression: 'REMOVE isActive',
  ReturnValues: 'ALL_NEW'
}

return dynamodb.update(params).promise()
  .then((data) => {
    if (data) {
      return data.Attributes
    }
    return null
  })
Hello stackoverflow community. I am trying to work out a soft-delete solution for DynamoDB. If you have thought about the same problem and found a solution, please share it in a comment. It involves thinking about: listing items (isDeleted: false or 0), and using a limit on the results.
DynamoDB Table Modeling. Soft delete solution
You have to select the individual class and choose the "Amazon Web Services" option. There you can choose to upload or run the Lambda function. Please refer to this link: https://github.com/aws/aws-toolkit-eclipse/releases/tag/v201709262229
My symptom is that everything except "Deploy serverless project" is missing, as in this image. I installed all of the AWS toolkit and Eclipse ADT, and my Eclipse version is Oxygen. I want to use "Upload function to AWS Lambda...", but it isn't shown. How can I fix it?
Eclipse AWS "Upload function to AWS Lambda..." is missing
A possible solution is to install aws-cli on the Jenkins instance and set it up with aws configure, ideally granting it specific permissions to upload to a specific S3 bucket through IAM policies. Next up is actually uploading your data: after Jenkins is done processing/building/compiling your code, run the following command from the directory you'd like to sync to S3:

aws s3 sync . s3://YOUR_BUCKET_NAME

If you'd like it to exclude certain directories, add --exclude "folder_name/**"

As it states in the aws-cli docs: "Recursively copies new and updated files from the source directory to the destination." It will not delete files that are absent in the source directory but exist in the bucket. If you really want this behaviour, you could blow away the bucket contents before each upload, but this would result in downtime, so perhaps consider a CloudFront setup that caches your bucket contents; you can clear that cache from Jenkins after your aws s3 sync has concluded successfully. To invalidate a CloudFront cache, see the docs.
Background: I have a repository, say example.com, with master and develop branches. My master branch pushes changes to /efs/prod through a Jenkins build, and my develop branch pushes changes to /efs/qa through a Jenkins build. Now I have set up a folder, say /s3, in the develop branch to push changes to Amazon S3; for this I have used the "Publish Artifacts to S3 Bucket" plugin in Jenkins.

Requirement: I am able to move the Bitbucket uploads to AWS S3, but when someone deletes a file in the Bitbucket repo I am unable to sync that deletion to AWS S3. I have followed the source below to move the repo changes to S3: http://www.devops-share.com/upload-builds-from-jenkins-to-s3/

Could someone let me know how I could make Jenkins sync the Bitbucket repo with S3?
How to sync repo in bitbucket to S3 with jenkins
Are you creating IAM user policies or S3 bucket policies? I will assume S3 bucket policies for this answer. S3 buckets can only have one policy applied at a time. This S3 bucket policy will grant anonymous access to read (get) and write (put) objects for two buckets. Note, the anonymous users will not be able to list objects.

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AddPerm",
      "Effect":"Allow",
      "Principal": "*",
      "Action":[
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource":[
        "arn:aws:s3:::bucket-a/*",
        "arn:aws:s3:::bucket-b/*"
      ]
    }
  ]
}

See Specifying Permissions in a Policy.
What is advisable when one needs an IAM policy for more than two buckets? Bucket names are pretty different from one another.Combine all the access and put it in one IAM policy?Create n number of polices for n buckets?If 1 is the answer, can someone help with an example of read/write permissions on two buckets 1. bucket-a 2. bucket-b
AWS IAM policy for more than one bucket
The best option I found was to create an AWS CloudFront distribution and configure it like Karan describes in his answer, but with some additions:

1. Create a certificate from AWS Certificate Manager and approve it.
2. Create a CloudFront distribution with the Origin Domain Name set to your Heroku URL, such as myapp.herokuapp.com, and the custom SSL certificate set to the one you created in AWS Certificate Manager.
3. While creating the distribution, make sure that you set the TTL to 0, else all the responses will be cached. If you don't complete this step you will probably get an error like this: "This distribution is not configured to allow the HTTP request method that was used for the request. The distribution supports only cachable requests."
4. Follow the guide I mention in the question: https://devcenter.heroku.com/articles/route-53
I need to set an Amazon domain as a custom domain for a Heroku app. I found this tutorial, https://devcenter.heroku.com/articles/route-53, but it doesn't work if the app needs HTTPS requests. The first idea was to set up the SSL certificate in Heroku, but the Amazon SSL domain manager doesn't allow downloading the certificate, so the SSL needs to be managed by AWS. What is the best way to add an Amazon SSL certificate to a Heroku app?
How to configure an Amazon Route 53 domain with SSL for a Heroku app
Please note that Reservations contain Instances. When multiple instances are launched via one command (e.g. launching two identical instances in the console), both instances are part of a single Reservation. Your code is counting the number of Reservations, but you are actually expecting the count to include the number of instances in all Reservations.

Solution: loop through the Reservations and add up the number of instances in each Reservation.
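The same counting logic, sketched in Python with boto3 for illustration (the question uses the Node SDK, but the fix is identical; the ARN and region are copied from the question and are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='ap-northeast-1')
resp = ec2.describe_instances(Filters=[
    {'Name': 'iam-instance-profile.arn',
     'Values': ['arn:aws:iam::123456789123:instance-profile/The_Name_of_My_IAM_Role']},
    {'Name': 'instance-state-name', 'Values': ['running']},
])

# Count instances, not reservations: each reservation can hold several instances.
instance_count = sum(len(r['Instances']) for r in resp['Reservations'])
print(instance_count)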
I'm using aws-sdk to list all the running EC2 instances whose IAM role is The_Name_of_My_IAM_Role.

const AWS = require('aws-sdk')

let credentials = new AWS.SharedIniFileCredentials({ profile: 'my_profile' })
AWS.config.credentials = credentials
AWS.config.update({ region: 'ap-northeast-1' })

const ec2 = new AWS.EC2()

let params = {
  Filters: [
    {
      Name: 'iam-instance-profile.arn',
      Values: [`arn:aws:iam::123456789123:instance-profile/The_Name_of_My_IAM_Role`]
    },
    {
      Name: 'instance-state-name',
      Values: ['running']
    }
  ]
}

ec2.describeInstances(params, (err, data) => {
  if (err) {
    console.log(`describeInstances error: ${err}`)
  } else {
    console.log(`data.Reservations.length: ${data.Reservations.length}`)
  }
})

I expect the code to return 6 EC2 instances, but it returns only 4 of them. The problem doesn't occur if I type the command

aws ec2 describe-instances --filters "Name=iam-instance-profile.arn,Values=arn:aws:iam::123456789123:instance-profile/The_Name_of_IAM_Role" "Name=instance-state-name,Values=running"

in my terminal. I mean, the aws ec2 describe-instances command returns all 6 EC2 instances. I set the following environment variables before running that command:

export AWS_DEFAULT_REGION=ap-northeast-1
export AWS_DEFAULT_PROFILE=my_profile

I also have my_profile defined in the ~/.aws/credentials file. What might be wrong with my node.js code? Or is this a bug in aws-sdk?
aws sdk ec2.describeInstances not listing all EC2 instances
It is currently not possible to set this anywhere in the AWS console. You can execute this to turn it on:

use performance_schema;
update setup_consumers set enabled='yes' WHERE name = 'events_statements_history';

However, this must be re-done every time the Aurora instance is restarted.

MySQL > show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
1 row in set (0.00 sec)

use performance_schema;

MySQL [performance_schema]> select * from setup_consumers WHERE name = 'events_statements_history';
+---------------------------+---------+
| NAME                      | ENABLED |
+---------------------------+---------+
| events_statements_history | NO      |
+---------------------------+---------+
1 row in set (0.01 sec)

MySQL [performance_schema]> update setup_consumers set enabled='yes' WHERE name = 'events_statements_history';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MySQL [performance_schema]> select * from setup_consumers WHERE name = 'events_statements_history';
+---------------------------+---------+
| NAME                      | ENABLED |
+---------------------------+---------+
| events_statements_history | YES     |
+---------------------------+---------+
1 row in set (0.00 sec)
I'm working with an Aurora instance. I've updated my parameter group to turn on the performance schema and restarted the instance. However, events_statements_history is turned off. Am I missing something, or how can this be achieved?

select * from setup_consumers where name like 'events%statement%';
+--------------------------------+---------+
| NAME                           | ENABLED |
+--------------------------------+---------+
| events_statements_current      | YES     |
| events_statements_history      | NO      |
| events_statements_history_long | NO      |
+--------------------------------+---------+
AWS Aurora performance schema, how to enable events_statements_history
You cannot migrate from CLB to ALB using any tools that I am aware of. You can create the ALB to run in parallel with the CLB. Once you are confident that the ALB is working correctly with your WAF configuration, change the Route 53 records. Wait a few days and then delete the CLB; this allows the DNS servers around the world to catch up with the new DNS settings. The one area where you will have problems running the new load balancer in parallel is if you are using SSL offload on it. This will require the DNS switch-over so that the DNS name matches the SSL certificate. For this, I usually add a "test.mydomain.com" record to verify that SSL is working.
I am looking for integrating WAF in my existing server setup, since I have Classic Load Balancer (with EC2 instances) which does not support WAF I need to migrate to Application Load Balancer.Is it possible to migrate the existing Classic Load Balancer to Application Load Balancer without changing the DNS (A Record)?
AWS Migrate Classic Load Balancer to Application Load Balancer
Bachman, I found this 3rd-party draw.io plugin to create and export ASL: https://github.com/sakazuki/step-functions-draw.io. I have not used it myself, so I cannot speak to its quality/correctness. It looks like a pretty nice tool though!

Update - here is a YouTube video of it in action: https://www.youtube.com/watch?v=NrMcFdTdhhU

Hope this helps!
When creating a state machine with Step Functions, we use Amazon States Language (ASL). A visual workflow is rendered showing the state machine. Is there any way to create the state machine visually to begin with — creating the states with something like drag and drop and then filling in the details for, let's say, the specific Lambda that needs to be invoked? I see that AWS does not provide this feature, and I couldn't find a third party that does; I'm wondering if there's something I didn't find.
AWS step functions - Any way to create the state machine graphically?
The Python library you are looking for is the AWS SDK for Python, also called Boto3. This library is pre-loaded in the AWS Lambda environment; all you have to do is add import boto3 to your Lambda function. I believe you will need to use the CloudWatchEvents client and either call delete_rule() or remove_targets(), depending on exactly what you want to do.
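A minimal sketch of what that could look like inside the Lambda handler; the rule name and target id are hypothetical and must match how the CloudWatch Events trigger was created:

import boto3

events = boto3.client('events')

def remove_schedule(rule_name, target_id):
    # Targets must be removed before the rule itself can be deleted.
    events.remove_targets(Rule=rule_name, Ids=[target_id])
    events.delete_rule(Name=rule_name)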
I have a Lambda function, and a CloudWatch Events rule is a trigger on it. At the end of the Lambda function I need to delete that trigger (CloudWatch event) on the Lambda function programmatically, using Python. How can I do that? Is there any Python library to do that?
Delete trigger on a AWS Lambda function in python
Mobile apps would have to set the HTTP Referer header in the requests they make for anything they attempt to load from that bucket.
Amazon uses this example to create a policy for restricting domain access:{ "Version":"2012-10-17", "Id":"http referer policy example", "Statement":[ { "Sid":"Allow get requests originating from www.example.com and example.com.", "Effect":"Allow", "Principal":"*", "Action":"s3:GetObject", "Resource":"arn:aws:s3:::examplebucket/*", "Condition":{ "StringLike":{"aws:Referer":["http://www.example.com/*","http://example.com/*"]} } } ] }This is great and works perfectly fine for web apps, but there is no clarification on how this behaves in mobile apps. I'd like to be able to have these images display in our mobile apps but I'm not sure how this would need to work.Does anyone have any experience here or can point me in the right direction? I haven't been successful in searching the docs and doing some google searches. I'll continue looking as well.
Amazon S3 Restricting Access to a Specific HTTP Referrer on Mobile Apps
Sorry, but you are taking the wrong approach. What you need is to add a passphrase to the key, not encrypt the key with Ansible Vault: openssl rsa -aes256 -in ssh_key.pem -out encrypted_ssh_key.pem (note that without a cipher option such as -aes256, openssl writes the key out unencrypted). Give it a passphrase and provide that passphrase every time you run the playbook (or use an agent such as ssh-agent, which caches the passphrase for you): ansible-playbook ansible_playbook -i inventory/ec2.py \ -e ansible_ssh_user=ubuntu \ -e ansible_user=ubuntu \ --private-key=encrypted_ssh_key.pem
Is there any way to encrypt --private-key with ansible-vault and use it encrypted with Ansible Playbook ansible-playbook command (or inside Playbook)?I tried this but it didn't worked:$ ansible-vault create encrypted_ssh_key.pem --vault-password-file vault_password_file(pasted my SSH private key into it)$ ansible-playbook ansible_playbook -i inventory/ec2.py \ -e ansible_ssh_user=ubuntu \ -e ansible_user=ubuntu \ --private-key=encrypted_ssh_key.pem \ --vault-password-file vault_password_fileIt's always asking me for a passphrase and even after I enter it (the one from vault_password_file) it doesn't accept it. I can login to EC2 instance without any problems by using that private key.
Encrypting Ansible Playbook .pem private key with ansible-vault
Once the Lambda function is attached to your VPC it loses its default internet access, and Rekognition is a public endpoint, so you'll need to give the Lambda's subnets a route to the internet via a NAT gateway (or NAT instance) in a public subnet. AWS has documentation for it, and there's a bit more discussion in another StackOverflow question.
I am working on a lambda function that needs to accessRDS,S3andRekognitionservices from AWS.I gaveS3andRekognitionpermissions via theAmazonS3FullAccessand theAmazonRekognitionFullAccesspolicies respectively and it worked fineThe thing is that I could not access myAurorainstance insideRDSbecause it's inside a VPCI changed my lambda network configurations so it would be able to access the VPC, and theAuroraconnection worked as expected, but then the connection toRekognitionstopped working, whenever I invokedetectLabelsfor example it just hangs.Am I missing some permission?
How do I invoke AWS Rekognition from a Lambda within a VPC
You will not receive Lambda notifications for objects moved from S3 to Glacier via Lifecycle rules. When an S3 object is transitioned to Glacier, the object is not removed from S3. Instead, its storage class is simply changed from Standard/RR/IA to "Glacier", and there is no notification type for storage class changes. Also, the AWS documentation states: You will not receive event notifications from automatic deletes from lifecycle policies or from failed operations. Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-event-types-and-destinations
I am working on a POC where I have setup a Lifecycle rule on S3 to move objects to glacier after certain no of days (if objects have specified tag). Rule is working fine for me, objects are getting moved to glacier by lifecycle rule and storage type is change to Glacier from Standard. (so far so good).As I need to restrict user to use that file (archived file) from my application, I am looking for a way to get notification (either through SQS) or invoke Lambda function (to call my application REST endpoint) when object is actually moved to glacier.I have checked S3 supported event notification types here(http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-event-types) but it doesn't have any for storage change or object being moved to glacier.Let me know if there is any way to configure this or any other approach I can use to achieve this behavior.Regards.
AWS Lambda for objects moved to glacier
Have you enabled an HTTP endpoint for the Gremlin service? The documentation explains: While the default behavior for Gremlin Server is to provide a WebSocket-based connection, it can also be configured to support a plain HTTP web service. The HTTP endpoint provides a communication protocol familiar to most developers, with wide support across programming languages, tools and libraries. If so, you can use an ELB HTTP health check with a target like this: HTTP:8182/?gremlin=100-1 With a properly configured service, this query will return a 200 HTTP status code, which will indicate to the ELB that the service is healthy.
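If you want to set that health check target on the Classic Load Balancer from code rather than the console, a boto3 sketch could look like the following; the load balancer name, intervals and thresholds are placeholders.

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.configure_health_check(
    LoadBalancerName="my-gremlin-elb",             # placeholder ELB name
    HealthCheck={
        "Target": "HTTP:8182/?gremlin=100-1",      # simple query that should return 200
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)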
Is there any HTTP/TCP endpoint for a gremlin-server health check? Currently, we are using the default TCP port but it doesn't seem to indicate the gremlin-server's health.We noticed that gremlin-server crashed and was not running but the health check kept passing. We are using AWS Classic Load Balancer.
Gremlin-server health check endpoint for AWS ELB
After some trial and error I found this:import jenkins.model.* import com.cloudbees.plugins.credentials.* import com.cloudbees.plugins.credentials.impl.* import com.cloudbees.plugins.credentials.domains.* import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey import com.cloudbees.jenkins.plugins.awscredentials.AWSCredentialsImpl import org.jenkinsci.plugins.plaincredentials.StringCredentials def changePassword = { id, accessKey, secKey -> def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials( com.cloudbees.jenkins.plugins.awscredentials.AWSCredentialsImpl.class, Jenkins.instance ) def c = creds.findResult { it.id == id ? it : null } if ( c ) { println "found credential ${c.id} for accessKey ${c.accessKey}" def credentials_store = Jenkins.instance.getExtensionList( 'com.cloudbees.plugins.credentials.SystemCredentialsProvider' )[0].getStore() def result = credentials_store.updateCredentials( com.cloudbees.plugins.credentials.domains.Domain.global(), c, new AWSCredentialsImpl(c.scope, id, accessKey, secKey, c.description) ) if (result) { println "password changed for ${accessKey}" } else { println "failed to change password for ${accessKey}" } } else { println "could not find credential for ${accessKey}" } }
In my process regularly I get temporary AWS cred and in my Jenkins file I need to update a specific Jenkins Aws crednetial. How can I update it? The reason that I need is that Jenkins docker methodwithRegistryrequires credential id and I have to update this credential whenever I get new AWS key to be able to use it.
Update aws credential in jenkins file with a groovy script
You have no way of knowing the current working directory when you execute the cd command, so specify the full path: cd /home/centos/testing Try this: #!/bin/bash mkdir /home/centos/testing cd /home/centos/testing wget https://validlink
I launch an centos AMI I created, and try to add user data as a file which looks like this:#!/bin/bash mkdir /home/centos/testing cd testing wget https://validlinkSo simply, on launch, the user data creates a folder calledtestingand downloads thisvalidURLwhich I will not put as it links to my data - however it is valid and accessible.When I launch the instance, the foldertestingis created successfully, however there is no file inside the directory.When I ssh into the instance, and run thewgetcommand as asudo, the file is downloaded successfully inside thetestingfolder.Why does the file not get downloaded on the ec2 launch through user data?
WGET seems not to work with user data on AWS EC2 launch
May be you have not setup for s3cmd package in your AWS server. So please check all below setup. So I think you have helpful points:Setup :- 1 On CentOS/RHEL: # yum install s3cmd On Ubuntu/Debian: $ sudo apt-get install s3cmd On SUSE Linux Enterprise Server 11: # zypper addrepo http://s3tools.org/repo/SLE_11/s3tools.repo # zypper install s3cmd Setup :- 2 Install Latest s3cmd using Source $ wget http://ufpr.dl.sourceforge.net/project/s3tools/s3cmd/1.6.1/s3cmd-1.6.1.tar.gz $ tar xzf s3cmd-1.6.1.tar.gz Now install it using below command with source files. $ cd s3cmd-1.6.1 $ sudo python setup.py install Configure s3cmd Environment # s3cmd --configureEnter new values or accept defaults in brackets with Enter.Refer to user manual for detailed description of all options.For more details Please check below link :-https://tecadmin.net/install-s3cmd-manage-amazon-s3-buckets/#Batch file check :- File name sqlbackup.sh#!/bin/bash SQLDUMP="$(date +'%Y%m%d%H%M').sql.gz" SQLDUMPPATH="/backupdb/$SQLDUMP" mysqldump -pPASSWORD -u root -h HOST.amazonaws.com database_name | gzip -9 > $SQLDUMPPATH s3cmd put $SQLDUMPPATH s3://S3NAME/dbbackup/$SQLDUMP echo "Removing the backup file $SQLDUMP" rm $SQLDUMPPATH echo "WooHoo! All done"
I want to create database backup on daily bases using cron job.I have created one batch file for database backup. Below is batch file code.#!/bin/bash SQLDUMP="$(date +'%Y%m%d%H%M').sql.gz" echo "Creating backup of database to $SQLDUMP" mysqldump --host 'myhost.com' -u 'root' -p 'password' --databases 'test' | gzip -9 > $SQLDUMP echo "Dump Zipped up" echo "Uploading zipped dump to the Amazon S3 bucket…" s3cmd put $BACKUPNAME s3://example.com/dbbackup/$BACKUPNAME echo "Removing the backup file $SQLDUMP" rm $BACKUPNAME echo "Done"But database backup not storage on S3.File Path : var/app/current/app/sqlbackup.shSet for 5 hrs in Crontab : * 5 * * * /bin/sh /var/app/current/app/sqlbackup.sh
AWS Database Backup RDS to S3 By Crontab (Cron Job)
Your permissions are correct.Athena's context is not currently shared across regions. Ensure that the users are viewing Athena from the same region as the root account. When they login to AWS, they may be initially placed in another region.
My IAM users can't see the Athena tables I've created a long time ago using the root account.Their group has the following permissions:AmazonS3FullAccessAmazonAthenaFullAccessThey only see thesampledbdatabases, which is unfortunate, because they need the one we actually use. The documentation is not clear on how to make the databases accessible to everyone. How do I achieve that?
IAM users can't see Athena tables
I get around it by using the -load option to load the PuTTY session configuration where I placed the proxy. Example: pscp -load "my aws" test.txt 172.93.184.11:/work/
Need to pscp java war file to aws. however, we are not able to do as connection needs a proxy and there is no option to pass proxy along with pscp command. we tried explicitly setting http_proxy on the cmd. still, it didn't work. Has anyone found any solution?
pscp files to aws with proxy
Got it sorted out. It was giving a 404 or raising ResourceNotFoundException because the endpoint was incorrect. The Iot constructor would have to look like this; the endpoint should be just iot.us-east-1.amazonaws.com (the control-plane endpoint, not the account-specific data endpoint). var iot = new AWS.Iot({ endpoint: "iot.us-east-1.amazonaws.com", region: "us-east-1", accessKeyId: "XXXXXXXXXX", secretAccessKey: "XXXXXXXXXX" });
I'm trying to get details of the registered things and create new things. I get ResourceNotFoundException for both of them.var AWS = require('aws-sdk'); var iot = new AWS.Iot({ endpoint: "https://XXXXXXXXXX.iot.us-east-1.amazonaws.com", region: "us-east-1", accessKeyId: "XXXXXXXXXX", secretAccessKey: "XXXXXXXXXX" }); var params = { thingName: 'D02', attributePayload: { attributes: { 'Org': 'Org2' }, merge: false }, thingTypeName: 'thing1' }; iot.createThing(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response }); iot.listThings({}, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response });As for the credentials, I created a new user in IAM. Set Programmatic access as Access Type and attached AWSIoTFullAccess permission.Is there anything wrong here? What could be the reason for this?
AWS IOT node sdk gives ResourceNotFoundException for listThings and createThing
For those looking for an answer, the solution is to use a RetryPolicy with a BackoffStrategy. A backoff strategy gradually increases the amount of time between connection attempts. http://docs.aws.amazon.com/general/latest/gr/api-retries.html Furthermore, if you use a backoff strategy you need to upload from an input stream that supports mark/reset, so the SDK can rewind the data and retry. https://github.com/awsdocs/aws-java-developer-guide/blob/master/doc_source/best-practices.rst
I'm currently working on some code that uploads multi-part objects to S3, and I am running into this error:Caused by: com.amazonaws.ResetException: Failed to reset the request input stream; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)Originally the readLimit was set to 5MB. I had changed the code so that the ReadLimit on the input stream would be the Object Size rounded up to the nearest 5MB (With a 5GB cap since thats the AWS limit). This seemed to fix the issue but now the same error is showing up in new places.Does anyone have any suggestions for what value to set the readLimit at for the most reliability?Any help would be appreciated,ThanksTed
AWS ResetException - Failed to reset the request input stream
I hope this answer will solve your problemaws s3 ls s3://your-bucket/ --recursive | sort -k1 | sort -k2 | head -n -30 | awk '{$1=$2=$3=""; print $0}' | sed 's/^[ \t]*//' | while read -r line ; do echo "Removing \"${line}\""; aws s3 rm "s3://your-bucket/${line}"; doneFor more details :https://stackoverflow.com/a/49373909/16885246
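If you would rather avoid parsing aws s3 ls output, a boto3 sketch along these lines keeps only the newest N objects; the bucket name, prefix, and N are placeholders, and it assumes the LastModified timestamp is a good enough proxy for "latest".

import boto3

BUCKET = "your-bucket"      # placeholder
PREFIX = "logs/"            # placeholder; use "" for the whole bucket
KEEP = 20

s3 = boto3.client("s3")

# Collect every object under the prefix (handles pagination).
objects = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    objects.extend(page.get("Contents", []))

# Newest first, then delete everything after the first KEEP entries.
objects.sort(key=lambda o: o["LastModified"], reverse=True)
to_delete = [{"Key": o["Key"]} for o in objects[KEEP:]]

# delete_objects accepts at most 1000 keys per call.
for i in range(0, len(to_delete), 1000):
    s3.delete_objects(Bucket=BUCKET, Delete={"Objects": to_delete[i:i + 1000]})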
I am using S3 bucket to store my web application log files. Now I need to know is there any option available, to keep the latest 20 files only, regardless when they are created. I can't use S3 auto expiry option as I always need the latest 20 files inside my bucket.
How to keep only Latest "N" number of files/objects in S3 bucket periodically using bash script or any other methods
You can achieve your purpose with AWS Cognito with the newly introduced user groups feature which allows you to assume different IAM roles to groups of users.For the implementation if you go with AWS serverless stack you can use API Gateway IAM authorizer and pass through the role to Lambda to execute code with assumed role permissions. Another approach is to have different API endpoints to provide different privileges for AWS Management Access where you can authorize access through API Gateway using assumed role IAM policies(Policy to authorize API Gateway resource access). Here you can assign a different IAM role for Lambda.
Isit possible to login as an IAM user from Cognito? I am creating a tool that does AWS management functions and I want users to login as their IAM users ideally. Is this possible?2 alternatives I am considering is:App will have its own IAM credentials and perform actions on behalf of app users. App will implement ACLs to determine who can do what (but this is implementing what IAM already does)Users will login via Cognito and inherit IAM roles, but its still having 2 "IAM users" (1 Cognito + 1 IAM user) for 1 "real" userOf these 2 which is better and is there a better way?
Can I login as an IAM user from Cognito?
From describe_instances: spot_instance_request_id - The ID of the Spot instance request. If spot_instance_request_id is not empty, then it is a Spot instance. There is no way to check if the instance is reserved. AWS doesn't mark any instance as reserved; your bill varies depending on your instance reservations and instance usage. for instance in instances: if instance.spot_instance_request_id: print instance.instance_id, 'is a SPOT instance' else: print instance.instance_id, 'is not a SPOT instance'
Maybe I'm blind, but I'm not seeing the metadata indicating if the instance is spot, on demand or reserved.import boto3 ec2 = boto3.resource('ec2') instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]) for instance in instances: print instance.(?)
AWS Boto describe instances spot vs on demand vs reserved
Check this: http://boto3.readthedocs.io/en/latest/reference/services/cognito-identity.html#CognitoIdentity.Client.describe_identity You get the logins back if it is an authenticated user. I have never used it, but I think that is the way to know. EDIT: From OP: yes, this is correct - the important thing is that the absence of the key "Logins" in the returned value means that the user is unauthenticated. res = client2.describe_identity( IdentityId=context.identity.cognito_identity_id ) if ('Logins' not in res.keys()): return True else: return False
I have AWS Lambda functions working fine with Cognito authenticated users.I am now trying to get unauthenticated Cognito users going.I cannot find any way at the back end to determine if the current user that called the Lambda function is authenticated or unauthenticated.The identifying information that I have about the user is their Cognito IdentityId but how can I use that to find out of unauthenticated?I'm using Python boto3.6 in Lambda.
AWS Cognito - how to determine if unauthenticated user?
Edit the file /etc/motd. motd stands for Message Of The Day. MOTD(5) - Linux Programmer's Manual NAME motd - message of the day DESCRIPTION The contents of /etc/motd are displayed by login(1) after a successful login but just before it executes the login shell. The abbreviation "motd" stands for "message of the day", and this file has been traditionally used for exactly that (it requires much less disk space than mail to all users).
All,I am using Ubuntu OS in my AWS EC2 instance. My previous developer has created some custom messages once we SSHed into the Instance (Attached). But I would like to change it. Googled extensively, but no luck. Can someone help?Text I want to change is "Live 1A"
Ubuntu How to change welcome message
Something like this should work:aws apigateway update-stage --stage-name <stage> --rest-api-id <rest-api-id> --patch-operations "op=replace,path=/deploymentId,value=<deployment-id>"
As stated in the question's title, I want to change deployment version of current stage in AWS API Gateway.It can be easily achieved via the web console, but I cannot figure out how to make it via cli/sdk. Could anybody kindly tell me whether it is possible or not? If it is, which API or command could I use?Thanks in advance.
How to change API Gateway's deployment version of current stage with aws cli or sdk?
A DAX cluster runs within your VPC. To connect from your laptop to the DAX cluster, you need to VPN into your VPC:http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
Getting Error while accessing DAX AWS from localhost clientError:EVERE: caught exception during cluster refresh: java.io.IOException: failed to configure cluster endpoints from hosts: [daxcluster*:8111] java.io.IOException: failed to configure cluster endpoints from hosts:Sample test codepublic static String clientEndPoint = "*.amazonaws.com:8111"; DynamoDB getDynamoDBClient() { System.out.println("Creating a DynamoDB client"); AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withRegion(Regions.US_EAST_1).build(); return new DynamoDB(client); } static DynamoDB getDaxClient(String daxEndpoint) { ClientConfig daxConfig = new ClientConfig().withEndpoints(daxEndpoint); daxConfig.setRegion(Regions.US_EAST_1.getName()); AmazonDaxClient client = new ClusterDaxClient(daxConfig); DynamoDB docClient = new DynamoDB(client); return docClient; } public static void main(String args[]) { DynamoDB client = getDaxClient(clientEndPoint); Table table = client.getTable("dev.Users"); Item fa = table.getItem(new GetItemSpec().withPrimaryKey("userid", "[email protected]")); System.out.println(fa); }
Error while accessing DAX aws from localhost client
gsutil verifies MD5 checksums on objects copied between cloud providers, so if the recursive copy command completes successfully (shell return code 0), you should have copied everything successfully. Note that gsutil isn't able to compare checksums for S3 objects larger than 5 GiB (which have a non-MD5 checksum that gsutil doesn't support), and will print a warning for cases it encounters.
I am migrating my data from Amazon-S3 to Google-Cloud Storage. I have copied my data usinggsutil:$ gsutil cp -R s3://my_bucket/* gs://my_bucketWhat I want to do next is to check if all the files in S3 is properly exist in Google Storage.At the moment all I did is to do print file list in file and then do simple Unixdiffbut that doesn't really check the file integrity.What's the good way to check that?
How to perform files integrity check between Amazon-S3 and Google Cloud Storage
Hopefully this helps someone 2+ years later... I solved this with a little help of jq. I'm on a Mac, so that's a simple brew install jq. My goal was to use a default file of parameters, but I wanted to pass my GitHub OAuth token as a secret this one time. To the point above about storing secrets in other / better places: that's ideal, but I believe it can be overkill for some situations; mine for example was just lab based work. aws cloudformation create-stack --stack-name "codepipeline-test" --template-body file://codepipeline-test.yml --parameters $(cat codepipeline-test-params.json | jq -r '.[] | "ParameterKey=" + .ParameterKey + ",ParameterValue=" + .ParameterValue') ParameterKey="GitHubOAuthToken",ParameterValue="1234567890826xxxxxxxxxx753dde68858ac2169" --tags '[{"Key": "Name","Value": "codedepipeline-test"}, {"Key": "Owner","Value": "username"}]' --capabilities CAPABILITY_NAMED_IAM FYI, in the CloudFormation template I define the GitHub OAuth parameter as a secret (hidden in the GUI) as follows: GitHubOAuthToken: Description: A valid access token for GitHub that has admin access to the GitHub Repo you specify Type: String NoEcho: true MinLength: 40 MaxLength: 40 AllowedPattern: '[a-z0-9]*'
I've got a question around using parameters in Cloudformation and more generally best practices around using secrets in Clouformation.I have a template that defines our CI servers in an autoscaling group. We could in theory stand up many of these stacks. The templates are stored in source control along with parameters.json files use to specify the details of the stack (e.g. instance type, autoscaling conditions etc.). One of those parameters is a token that allows the CI server to interact with our CI provider, I don't want to store the token in source control. I want someone to be prompted for it or be forced to pass it when creating or updating the stack.Ideally what I'm imagining is something like this, but obviously this is invalidaws cloudformation create-stack --stack-name <name> --template-body file://<template> --parameters file://<parameters-file.json> TokenParameter=xxxyyyzzzDoes anyone have any suggestions?Many Thanks
Cloudformation: Passing parameters from a file and on the command line
gs:// is a way of referencing a particular GCS object, used by the gsutil command-line utility and a small number of Google Cloud APIs. For example, a Cloud Vision API request might have a section like this: "source":{ "imageUri": "gs://bucket_name/path_to_image_object" } Or a gsutil command might look like this: $> gsutil cp resume.pdf gs://my_bucket/ There is no single mapping between gs:// and https://. It is simply a way to specify a particular GCS object. An example HTTP mapping from a gs:// URI might be something like https://storage.googleapis.com/bucket_name/object_name.
I see URIs for google cloud storage with the gs:// scheme and URIs for aws S3 with the s3:// scheme. How are these schemes translated into http and what do they mean?How do these schemes know which region to point to and how are they expanded into full http bucket names?
gs:// s3:// http:// equivalent
You should address this using AWS Route 53 routing policies. Route 53 has 5 different routing policies, and you can use one of the following two in this case. Geolocation routing policy – use when you want to route traffic based on the location of your users. Latency routing policy – use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. Since you are looking at latency-based traffic allocation, you should, as the name suggests, use the Latency routing policy. For more information about routing policies please refer to this link. To replicate the EC2 instance to a different region: create a snapshot of your EBS volume; copy the EBS snapshot to the London region; if you are using a custom AMI, copy the AMI to the London region as well; then launch a new EC2 instance from the copied snapshot/AMI in the London region.
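Once the London copy is serving traffic, latency-based routing is just two records with the same name but different Region/SetIdentifier values. A hedged boto3 sketch, with the hosted zone ID, record name, and IP addresses as placeholders:

import boto3

route53 = boto3.client("route53")

def latency_record(region, ip):
    # One record per region; Route 53 answers queries with the lowest-latency one.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": region,
            "Region": region,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        latency_record("ap-southeast-1", "203.0.113.10"),   # Singapore instance
        latency_record("eu-west-2", "203.0.113.20"),        # London instance
    ]},
)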
Our current server of web application is deployed in Singapore region but as we're going to launch our services in Europe so we want to replicate our ec2 instance in London region so any traffic coming from that region will be served from that instance which will give us low latency. How we can achieve that?
Multi region ec2 instance replication
There is no way to do this.Most significant is the fact that S3 is not aware of the existence of pre-signed URLs. When you generate a pre-signed URL, no interaction occurs with the service. That's all done in your local code. The service validates the signed URL when the request arrives.And, of course, an infinite number of pre-signed URLs can be generated for each object... so, for most applications, this wouldn't be all that useful of a feature.A lifecycle policy on your file-sharing bucket, to remove objects after a fixed period of time, would probably be the most straightforward solution. This has a granularity of 1 day and a margin of error of +1/-0 days, since, policies are only evaluated daily. (An object created today where the lifecycle policy is delete after 1 day will not be deleted tonight, it will be deleted tomorrow night.)http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
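A lifecycle rule like that can also be applied from code. A minimal boto3 sketch, assuming the shared objects live under a dedicated prefix; the bucket name, prefix, and the 1-day expiration are placeholders chosen to roughly match the pre-signed URL validity.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-file-sharing-bucket",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-shared-files",
            "Filter": {"Prefix": "shared/"},   # only objects handed out via pre-signed URLs
            "Status": "Enabled",
            "Expiration": {"Days": 1},         # roughly matches a ~1 day URL validity
        }]
    },
)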
I was wondering if within AWS there was a way to have an S3 object be automatically deleted once the pre-signed url that has been generated to access the given object from the outside looses its validity...? More specifically I am not particularly looking at anything fancy like Lambdas (although I guess this would be one approach?)Bottom line is: is there a possibility to assign a 'lifetime' value to an S3 object to which a pre-signed URL has been generated?Cheers
AWS delete object when pre-signed URL loses its validity?
You can't have S3 post extra parameters to your Lambda function. What you can do is create a DynamoDB table that maps S3 buckets to scripts, or S3 prefixes to scripts, or something of the sort. Then your Lambda function can lookup that mapping before executing your script.
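A sketch (in Python) of what that lookup could look like inside the Lambda handler, assuming a hypothetical DynamoDB table named script-mappings keyed by bucket name and a bundled binary shipped with the function package:

import subprocess
import boto3

table = boto3.resource("dynamodb").Table("script-mappings")  # hypothetical table

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Look up which script handles objects from this bucket.
        item = table.get_item(Key={"bucket": bucket}).get("Item")
        if not item:
            print(f"No script mapped for bucket {bucket}, skipping {key}")
            continue

        # Run the bundled binary with the mapped script against the new object.
        subprocess.run(["./mybinary", item["script"], bucket, key], check=True)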
I am creating an AWS Lambda function that is triggered for each PUT on an S3 bucket. A separate Java application creates the S3 bucket, sets up the trigger to the Lambda on Put, and PUTs a set of files into the bucket. The Lambda function executes a compiled binary, it passes to the binary a script, which acts on the new S3 object.All of this is working fine.My problem is that I have a set of close to 100 different scripts, and am regularly developing new scripts. The ZIP for the Lambda contains all the scripts. Scripts correspond to different types of files, so when I run the Java application, I want to specify WHICH script in the Lambda function to use. I'm trying to avoid having to create a new Lambda for each script, since each one effectively does the exact same thing but for the name of the script.When you INVOKE a Lambda, you can put parameters into the context. But my Lambda is triggered, so most of what I react to is in the event. I can't figure out how to communicate this simple parameter to the Lambda efficiently as I set up the S3 bucket and the event trigger.How can I do this?
How to Specify Additional Parameter to AWS Lambda Function Triggered by S3
The error message: Unknown parameter in input: "stack_name_or_id", must be one of: StackName, NextToken clearly says you are passing an invalid parameter name: stack_name_or_id. In Boto3 describe_stacks, the expected parameter is StackName: response = client.describe_stacks( StackName='string', NextToken='string' ) For a running stack, you can pass the stack name or the stack ID. But for deleted stacks, you have to pass the stack ID. client.describe_stacks(StackName='mystack') {u'Stacks': [{u'StackId': 'arn:aws:cloudformation:us-east-1:....... 'content-type': 'text/xml', 'date': 'Thu, 22 Jun 2017 14:54:46 GMT'}}}
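For the "wait until the stack is ready" part of the question, a boto3 waiter is usually simpler than a hand-rolled polling loop. A sketch, with the stack name and template path as placeholders:

import boto3

client = boto3.client("cloudformation")
stack_name = "my-stack"   # placeholder

client.create_stack(StackName=stack_name, TemplateBody=open("template.yml").read())

# Polls on our behalf until the stack reaches CREATE_COMPLETE (raises on failure).
client.get_waiter("stack_create_complete").wait(StackName=stack_name)

status = client.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]
print(status)   # e.g. CREATE_COMPLETE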
Is it possible to get the status of a CloudFormation stack? If so, how?I'm creating a stack with:client = boto3.client('cloudformation',) response = client.create_stack( StackName=stackname, ... )I can see in the CloudFormation web UI that the stack successfully creates.I've tried to get the status with:print(client.describe_stacks(stack_name_or_id=hostname))But that throws exception:botocore.exceptions.ParamValidationError: Parameter validation failed: Unknown parameter in input: "stack_name_or_id", must be one of: StackName, NextTokenSo I tried to wait while the stack deploys and catch the exception with:while True: time.sleep(5) try: print(client.describe_stacks(stack_name_or_id=stackname)) except botocore.exceptions.ParamValidationError: passBut I get no response at all; theprintstatement never gets called.
Boto3 CloudFormation Status
Use the following Boto3 to get the current instance name.Warning: No exception handling is included.import boto3 import os def get_inst_name(): # Get Instance ID from EC2 Instance Metadata inst_id = os.popen("curl -s http://169.254.169.254/latest/meta-data/instance-id").read() # Get EC2 instance object ec2 = boto3.resource('ec2') inst = ec2.Instance(inst_id) # Obtain tags associated with the EC2 instance object for tags in inst.tags: if tags["Key"] == 'Name': #print tags["Value"] return tags["Value"]Get the instance id from metadata serverUse the instance id to query Boto3 resource
Let's say that my script is running on an EC2 instance namedec2-test1and it communicates with an S3 bucket named:s3-bucket-test1, but when the script is ran onec2-test2it's able to identify the EC2 instance it is currently running on and change the directory to references3-bucket-test2. Is there a way to do that? I know that for internal paths you can useos.path.dirname(os.path.realpath(__file__))but was wondering if there is a way to do something like that for EC2 instance name in Python?
Is there a way for a Python script to "recognize" which EC2 instance it is running on?
The clients for all the AWS services use HTTP calls under the hood. So irrespective of which service you are working with, DynamoDB in case of this question, the answer is the same, thatAWS recommends using a single clientinstance for multiple requests.Here is an AWS forums questionhttps://forums.aws.amazon.com/thread.jspa?messageID=247661(might need sign in), that discusses this. To quote from the answer.Each instance of one of the SDK clients (ex: AmazonS3Client) creates its own client object for sending HTTP requests, which can be relatively expensive since it manages resources like HTTP connection pools.You'll get more efficient use of your resources by reusing the same S3 client object.As you put more load through the client, you might also take a quick look at the configuration options available through the ClientConfiguration classIf you need more control on how the clients should behave, like whether they should use a common http connections pool or a separate one, then you can use the approaches described herehttps://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/creating-clients.htmlSo, to answer the question, what would be faster, creating separate clients for each requests would be wasteful on resources like connection pools and hence wont be faster. To get things to work faster one should look at client tuning options as detailed herehttps://aws.amazon.com/blogs/developer/tuning-the-aws-sdk-for-java-to-improve-resiliency/
We have a server application that read/write a lot and in parallel in DynamoDB.Today we inject a newDynamoDBwith newAmazonDynamoDBClientfor every injection point (CDI dependent scope). Mostly, a new request in our app, injects aDynamoDBinstance.I knowDynamoDBis thread-safe and I can change it's scope to@ApplicationScoped, but the requests to the DynamoDB endpoint will be serial, killing the performance of my application? Or even having a single instance ofDynamoDBit can handle simultanious requests to AWS DynamoDB endpoint?Thanks
AWS Java SDK - What's faster? A single instance of DynamoDB client (@ApplicationScoped) or creating a new one for every request?
Yes. Each Lambda invocation will get records from one table. Refer to Using AWS Lambda with Amazon DynamoDB. The following is an extract from that web page: The event your Lambda function receives is the table update information AWS Lambda reads from your stream. When you configure event source mapping, the batch size you specify is the maximum number of records that you want your Lambda function to receive per invocation.
If I have a Lambda function that has multiple DynamoDB Stream triggers, is it guaranteed that each Lambda invocation only contains records from one table?
Multiple DynamoDB triggers for Lambda - Separate invocation per table?
If you are creating the client with credentials passed at the top level, like below: $s3 = new Aws\S3\S3Client([ 'version' => 'latest', 'region' => 'us-east-1', 'key' => "AKIAJAAAXXYASASASASDSUAG66MA", 'secret' => "8sZyAAAAXUSuUK3FJSDFSDS&D*SDSJFSFShjssa7Fx+GS9" ]); then change it so they are nested under a 'credentials' array, like this: $s3 = new Aws\S3\S3Client([ 'version' => 'latest', 'region' => 'us-east-1', 'credentials' => array( 'key' => "AKIAJAAAXXYASASASASDSUAG66MA", 'secret' => "8sZyAAAAXUSuUK3FJSDFSDS&D*SDSJFSFShjssa7Fx+GS9" ) ]);
In general, AWS S3 works fine in my web. However, I keep getting randomly these errors when downloading:Error retrieving credentials from the instance profile metadata server. (cURL error 28: Operation timed out after {>1000} milliseconds with 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html))Why? How can I prevent these errors from happening?I am using AWS SDK PHP v3.
AWS S3 cURL timed out
To answer my own question: on each ECS container instance the ECS agent runs a local introspection daemon, and you can query its metadata with the commands below. $ curl http://localhost:51678/v1/tasks | python -mjson.tool $ curl -s http://localhost:51678/v1/metadata | python -mjson.tool { "Cluster": "application-1", "ContainerInstanceArn": "arn:aws:ecs:us-east-2:1234567890:container-instance/ee4d3451d-2de3-4180-b1c6-023ed6e8c343", "Version": "Amazon ECS Agent - v1.14.1 (467c3d7)" } This is useful if the instance needs to deregister itself from the ECS cluster, for example when you use spot instances in the cluster. Refer: ECS agent introspection
From the running aws ecs instances, can I get the detail about its cluster name and container instance ID?In EC2, I can run below curl command to get its instance id, do I have similar command in ECS?curl http://169.254.169.254/latest/meta-data/instance-id
Are there any way to get ecs instance metadata?
Here is what worked for me: Lex sends the request as the LexEvent class type and expects the response as the LexResponse class type. So I changed my parameter from string to LexEvent and the return type from string to LexResponse. public LexResponse FunctionHandler(LexEvent lexEvent, ILambdaContext context) { //Your logic goes here. IIntentProcessor process; switch (lexEvent.CurrentIntent.Name) { case "BookHotel": process = new BookHotelIntentProcessor(); break; case "BookCar": process = new BookCarIntentProcessor(); break; case "Greetings": process = new GreetingIntentProcessor(); break; case "Help": process = new HelpIntentProcessor(); break; default: throw new Exception($"Intent with name {lexEvent.CurrentIntent.Name} not supported"); } return process.Process(lexEvent, context);// This is my custom logic to return LexResponse } But I'm not sure about the root cause of the issue.
I'm new to AWS. I'm build chatbot using aws lex and aws lambda c#. I'm using sample aws lambda C# programnamespace AWSLambda4 { public class Function { /// <summary> /// A simple function that takes a string and does a ToUpper /// </summary> /// <param name="input"></param> /// <param name="context"></param> /// <returns></returns> public string FunctionHandler(string input, ILambdaContext context) { try { return input?.ToUpper(); } catch (Exception e) { return "sorry i could not process your request due to " + e.Message; } } } }I created a slot in aws lex to map first parameterinput. But i'm always getting this errorAn error has occurred: Received error response from Lambda: UnhandledIn Chrome network tab i could seeError- 424 Failed Dependencywhich is related to authentication.Please help how to troubleshoot AWS lambda C# error which is used by aws lex. I came across cloudwatch but I'm not sure about that.Thanks!
How to troubleshoot this AWS lambda error - An error has occurred: Received error response from Lambda: Unhandled?
As far as I can tell, which hosted zone is active, meaning that its record sets are returned for queries to the domain, depends on the name servers registered with the domain. So, in order to make my second zone active I have to update the domain's name servers, in Route 53, to correspond to those of the desired hosted zone.
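If you want to do that switch from code, one approach (a sketch, assuming the domain is registered in Route 53 and both IDs are placeholders) is to read the desired zone's delegation set and push those name servers onto the domain registration:

import boto3

route53 = boto3.client("route53")
domains = boto3.client("route53domains", region_name="us-east-1")  # Route 53 Domains lives in us-east-1

DESIRED_ZONE_ID = "Z2EXAMPLE"   # placeholder: the Terraform-created hosted zone
DOMAIN_NAME = "example.com"     # placeholder

# Name servers assigned to the hosted zone you want to make "active".
name_servers = route53.get_hosted_zone(Id=DESIRED_ZONE_ID)["DelegationSet"]["NameServers"]

# Point the domain registration at that zone's name servers.
domains.update_domain_nameservers(
    DomainName=DOMAIN_NAME,
    Nameservers=[{"Name": ns} for ns in name_servers],
)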
I have twopublic hosted zonesinAmazon Route 53for the same domain name (which has Route 53 as registrar), for the reason that Route 53 automatically created one when I registered the domain name and that the second one was created byTerraform.As far as I can tell, DNS record sets in the second zone aren't applied, i.e. they're not returned for queries to the domain. Do I have to delete the first zone in order for record sets in the second zone to be active?
How does Route 53 connect multiple public hosted zones to one domain name?
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling-update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword (maybe you are looking for something like this rather than a blue-green deployment): - name: test play hosts: webservers serial: 1 ansible-serial-link Also, your playbook is not a blue-green deployment; I suggest you read about it a little. A blue/green deployment is a software deployment strategy that relies on two identical production configurations that alternate between active and inactive. One environment is referred to as blue, and the duplicate environment is dubbed green. The two environments, blue and green, can each handle the entire production workload and are used in an alternating manner rather than as a primary and secondary space. One environment is live and the other is idle at any given time. When a new software release is ready, the team deploys this release to the idle environment, where it is thoroughly tested. Once the new release has been vetted, the team makes the idle environment active, typically by adjusting a router configuration to redirect application traffic. This leaves the alternate environment idle.
I am thinking to use Ansible to manage my AWS infrastructure; I have (2 servers with auto scaling).I will deploy usingansible-playbook -i hosts deploy-plats.yml --limit spring-bootHere mydeploy-plats.yml--- - hosts: bastion:apache:spring-boot vars: remote_user: ec2-user tasks: - name: Copies the .jar to the Spring Boot boxes copy: dest=~/ src=~/dev/plats/target/plats.jar mode=0777 - name: Restarts the plats service service: name=plats state=restarted enabled=yes become: yes become_user: rootand I am wondering if using this technology will be a Blue-green deployment or the servers will be restarted at the same time, producing a downtime
Blue-Green Deployment in AWS with Ansible
"Do I need a single load balancer, or one per 'task / subdomain'?" You can have a single Application Load Balancer and three target groups for the API, Site and Web App. Then you can do rule-based routing in the load balancer listener, as shown in the following screenshot. Ref: http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html You can then map your domains www.domain.com and app.domain.com to the load balancer. "How do I handle the ports to go from a set source port, to one of any number of destination ports (if I end up having multiple containers running the same task)" When you create services for your task definitions in ECS you can configure load balancing using the target groups you created, and ECS registers each container's dynamically assigned host port with its target group. Ref: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html (Check the section "Configuring Your Service to Use a Load Balancer")
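For reference, the host-based rules in that listener can also be created from code. A boto3 sketch; the listener ARN, hostnames, target group ARNs, and priorities are all placeholders for your own resources.

import boto3

elbv2 = boto3.client("elbv2")
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:acct:listener/app/my-alb/xxxx/yyyy"   # placeholder

rules = [
    ("www.domain.com", "arn:aws:elasticloadbalancing:region:acct:targetgroup/site/zzzz", 10),
    ("app.domain.com", "arn:aws:elasticloadbalancing:region:acct:targetgroup/webapp/wwww", 20),
]

for host, target_group_arn, priority in rules:
    # Forward requests whose Host header matches to the right service's target group.
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "host-header", "Values": [host]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )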
So I am trying to get my AWS setup working with DNS.I have 2 instances (currently). I have 4 task definitions. 3 of these need to run on port 80/443, however all on separate subdomains.Currently if I stop/start a task, it can end up on either of my instances. This causes issues with the subdomain DNS potentially being pointed in the wrong places.I imagine I need to setup some kind of load balancer to point the DNS at, but unsure how to get that to route through to the correct tasks.So my questions:Do I need a single load balancer, or one per 'task / subdomain'?How do I handle the ports to go from a set source port, to one of any number of destination ports (if I end up having multiple containers running the same task)Am I over complicating this massively, or is there a simpler way to achieve this?
AWS ECS handling DNS subdomains across multiple instances
It sounds as though what you're looking to do can be done withS3 pre-signed urls.This would require you to:Set bucket/file ACL toprivate, which will prevent anyone with "the right url" from accessing the files.Allow your application to generate short-term links to files, which your users will then be able to use for access.Instead of directing your user directly to the file being requested the workflow would be something like the following:Direct your user to a route in your application which validates their right to access the desired file.Once the user has been authenticated, generated a pre-signed URL for the file requested,Return a 302 redirect to that pre-signed URL
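A minimal boto3 sketch of step 2, generating a short-lived pre-signed URL after your own authorization check has passed; the bucket name, per-user key layout, and the 5-minute expiry are placeholders.

import boto3

s3 = boto3.client("s3")

def presigned_url_for(user_id: str, filename: str) -> str:
    # Assumes objects are stored under a per-user prefix, e.g. "user-123/report.pdf".
    key = f"{user_id}/{filename}"
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-user-files", "Key": key},   # placeholder bucket
        ExpiresIn=300,   # URL is valid for 5 minutes
    )

# After authenticating the user in your app, redirect (HTTP 302) to this URL.
print(presigned_url_for("user-123", "report.pdf"))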
I'm working on an application and we'd like to host our users' files to AWS S3.Initial thinking is to create a new bucket for every new user (bucket name could be the user's unique code). How can we control access to the user's bucket files using for example his username or email ? S3 provides different levels of access based on a AWS accounts.We need to make sure that the authenticated user have access to only his bucket files. What would be the best way to achieve this kind of scenario (with S3 or a similar service) and prevent anyone with the right url to access other users files ?
Control access to AWS S3 files from my application
My answer was the following: in my csproj I had the following line: <DotNetCliToolReference Include="Amazon.Lambda.Tools " Version="1.5.0" /> Notice there is a stray space after "Tools". There were no complaints from VS, though, so it was super hard to spot, and it only exists because you have to edit the csproj manually when adding DotNetCliTools.
I am trying to take an existing .net core API project and run it as a lambda function (Which should be possible).I have installed the VS 2017 SDK for AWS. While following tutorials, I am supposed to be able to right click my project and select deploy to AWS Lambda. The only option I have is "Publish To Elastic Beanstalk"However, when I create a brand new empty function in Visual Studio (New Project). I do have the ability to Publish To LambdaBut I can't seem to figure out the difference between the projects. Every nuget/tooling reference between the two projects is identical when it comes to AWS Packages.
Unable To Publish To AWS Lambda From Visual Studio 2017
No, there isn't. Detaching the root volume requires that the instance be stopped, and spot instances can't be stopped. "To preserve the data, I am remounting a previously existing volume as the root volume of my current instance." That's not really the correct thing to be doing. Spot instances are inherently ephemeral, and reusing the root volume is not an intended action. Using Elastic File System, if it is available in your region, is ideal for this. An EFS filesystem mounts into your hierarchy wherever you need it, someplace like /srv/data for example, and it can also be mounted simultaneously on multiple instances. S3 can also be used, though your code has to be written with this in mind. Alternately, save your work on an EBS volume that isn't the root volume if you want to move volumes around.
I am using a spot instance to do some work and to preserve the data, I am remounting a previously existing volume as the root volume of my current instance. So the root volume that the instance started with is no longer in use and I wanted to remove it to save costs.I have unmounted the previous root volume and tried detaching it from the cli with and without the --force param and both ends in failure with this error :An error occurred (IncorrectState) when calling the DetachVolume operation: Unable to detach root volumeI realize that this would be because aws mounts the initial root in xv/s da1 and aws prevents me from detaching it.Are there any steps that I can follow to detach the unmounted root volume ?
Detach root volume of ec2 instance
Did you by any chance neglect to open the inbound port (i.e. 1433) in your AWS security group to the IP from which the connection is made? Open it to 0.0.0.0/0 if you elect to allow all sources (be cautioned about the security implications, though).
Unable to connect SQL Server to AWS, any suggestions? Remote connection is on, and the DB instance and password are correct.
unable to connect sql server to AWS RDS ,error 258?
Couple of clarifying things...first, Elastic Beanstalk is abbreviated EB, as EBS stands for Elastic Block Store. Second, EB instances are wholly separate from RDS instances, so you'll need to "clone" the RDS instance separately. And finally, the concept of restoring RDS snapshots is a little different than in other RDBMS systems - restoring a snapshot creates an entirely new RDS instance. There is no way to replace the data in-place.So, I would recommend that you restore the snapshot and then point your cloned EB instance at the new RDS instance by setting the RDS_HOSTNAME environment variable to the new endpoint.
When I clone an environment in Elastic Beanstalk, the content of the RDS database on the environment doesn't come onto the clone. Is there a good way to get this behavior?I have a snapshot of the original RDS database but I can't restore it to the exising environment. Also, in the EBS environment, I can't specify a new RDS database for that environment to use.
How to copy over RDS to an Elastic Beanstalk clone
The solution I found is to create a null resource and then include the following provisioner after running my script.provisioner "remote-exec" { inline = [ "sudo shutdown -h now", ] }
I have a terraform script that, afterterraform apply, successfully launches an AWS spot instance and then runs a bash script. After the script finishes running and the creation is complete, I have been manually destroying the spot instance withterraform destroy. This is inconvenient, because I either have to watch my email for a CloudWatch alert or periodically check-in on the progress of the script. Ideally, I would be able to automatically destroy the AWS resources I created automatically. Does anyone know how I should go about doing this? Am I using the wrong AWS resources, i.e. should I be using ECS?
How do I use terraform and AWS to run a script and then terminate or destroy the resources?
There are various ways, other than API Gateway, to invoke Lambda functions. The one most relevant to your use case would be the Invoke API. You can find the official documentation here and, in case you are using Boto, the Boto library's documentation here. Also, as mentioned in a comment on the question, you can assign an IAM role to the EC2 instances that allows them to Invoke the Lambda function.
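A minimal boto3 sketch of calling Invoke from the EC2 instance, assuming the instance profile grants lambda:InvokeFunction on the target function; the function name and payload are placeholders. InvocationType='Event' makes the call asynchronous, which suits the "launch heavy processing in parallel" use case.

import json
import boto3

lambda_client = boto3.client("lambda")   # credentials come from the EC2 instance profile

response = lambda_client.invoke(
    FunctionName="heavy-processing",      # placeholder function name
    InvocationType="Event",               # async: returns immediately with a 202
    Payload=json.dumps({"job_id": 42, "input_key": "data/chunk-001.csv"}),
)
print(response["StatusCode"])   # 202 for async invocations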
I am struggling with understanding how I can easily invoke my lambda function from an EC2 instance within a VPC.I think I have a quite common problem but strangely enough I didn't found anything specific for this "pattern".I have a Python application in an EC2 instance and I would like to launch heavy processing functions in parallel using Lambda functions and keep the EC2 quite light-weight.Ideally, the Lambda function could be invoked only from within the VPC (only from my EC2 instances).My understanding is that I have to create an API gateway (or add an API endpoint to the Lambda function) but I don't understand how to invoke this function from the EC2 (I am trying to use HTTP requests without success) nor how to set permissions.I used a trigger in the function to set-up the API gateway and I am using the corresponding link for requests.
How to allow invoking an AWS Lambda function only from EC2 instances inside a VPC
I was using the SDK version below, which caused the problem: compile 'com.amazonaws:aws-android-sdk-s3:2.2.13' Use the SDK version below instead and it will work fine: compile 'com.amazonaws:aws-android-sdk-s3:2.4.0'
I have used the code below for uploading an image to the s3 bucket using Android.AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(context.getString(R.string.accesskey),context.getString(R.string.secretkey))); PutObjectResult result = s3client.putObject(new PutObjectRequest(bucketName, amazonfilepath, file).withAccessControlList(acl));When I create bucket in Asia Pacific (Singapore) regions then it works fine. But I have to change regions of bucket to Asia Pacific (Mumbai) then it throws below error.Amazon s3 image uploading:com.amazonaws.services.s3.model.AmazonS3Exception: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: Amazon S3; Status Code: 400; Error Code: InvalidRequest; Request ID: 85D0346520B14007), S3 Extended Request ID: gC67avqeowqS4+X+2qkBxzLxfj9ABVV22zWgPcm/rZBKC0RIso201+eMsvBsqdnH+8n0V9RI0J8=
Authorization issue when uploading image to the s3 bucket using Android
You need to open port 8080 in the instance's AWS security group. Check this to see how to do it: https://hayato-iriumi.net/2019/08/22/how-to-install-jenkins-on-aws/ After that you're good to go using the public IP of the EC2 instance and port 8080 :)
Closed.This question isnot about programming or software development. It is not currently accepting answers.This question does not appear to be abouta specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic onanother Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.Closed7 months ago.Improve this questionI've tried setting up jenkins on some free-tier ec2 instances (I've tried ubuntu based and Amazon's own AMI) and despite jenkins running properly I can't access it on port 8080 using browser no matter what I try.I've check that jenkins is running both by running sudo service jenkins status and by running curl localhost:8080 once I'm ssh'd in.I've tried all the answers to similar questions on the web but nothing has worked. I've set up my security group exactly as all guides online state for installing jenkins on ec2 , I even went really overboard and made it overly open - nothing! Note - I've not tried setting it up behind apache/nginx but I'm feeling I shouldn't have to out of the principle that it should work on 8080UPDATE - I caved and tried to setup Nginx using this linkhttps://markunsworth.com/2012/02/11/setting-up-a-jenkins-build-server-on-ec2/. I get the standard nginx welcome screen - it doesn't serve up jenkins for meMy overly open security groupAny help would be hugely appreciated
Can't access Jenkins running on ec2-instance [closed]
I know this is a bit old, but we had the same issue today; however, it only seemed to happen to one company's users. After a little digging I discovered their computers' clocks were off by about 8 minutes, which caused the signed request to be treated as expired or invalid. Simply setting the computers to the correct time fixed the issue; alternatively, as we did, you can get the difference between the server's time and the local machine's time and account for that difference when sending the request.
I keep receiving a 403 when trying to connect via Websocket to AWS IoT. I have a Cognito federated pool setup, which connects fine and returns credentials. It's after that step when I update the websocket credentials that I start getting 403's.I've done the following steps:I've setup IoT and have a certificate and policy setup.I created a Cognito Federated Identity Pool that allows unauthenticated users.The unauthenticated role has full access to IoT (policy below)Here's the unauthenticated role policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "mobileanalytics:PutEvents", "cognito-sync:*", "iot:*" ], "Resource": [ "*" ] } ] }Any ideas? Am I missing a step?
AWS IoT websocket connection returns a 403
Problem solved. It turns out that the Access Key ID and Secret Access Key can be found at: https://console.aws.amazon.com/iam/home#/security_credential
Recently I was trying to upload an app to aws but an error occurred:ERROR: The current user does not have the correct permissions. Reason: Operation Denied. The security token included in the request is invalid. You have not yet set up your credentials or your credentials are incorrect You must provide your credentials. (aws-access-id): (aws-secret-key): ERROR: Operation Denied. The security token included in the request is invalid.I was wondering where to get aws-access-id and aws-secret-key for this step in order to upload the app successfully.
Where to get aws-access-id and aws-secret-key for uploading amazon web service application?
I don't know if this is the correct solution, but giving the ListAllMyBuckets permission worked for me. I just added another statement along with the previous one. { "Sid": "Stmt1490288788", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::bucket-name/*" ] }{ "Sid": "Stmt1490289746001", "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": [ "arn:aws:s3:::*" ] } So this policy lists all the buckets, but only allows put/delete/get access to the specific bucket. I'm still wondering what the relation is between the rename/copy actions and the list-all-buckets permission.
Created an IAM user, with S3 full access (S3:*) on a specific ARN (only one bucket). Upload and delete works, but not able to rename or copy/paste.Here is my IAM policy.{ "Sid": "Stmt1490288788", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::bucket-name/*" ] }
Renaming object from in aws s3 console, with IAM user
The directory to which your Elastic Beanstalk application is deployed should be considered read-only after the deployment is complete. If you need to write files at run time, you should use a writable directory such as /tmp.
I created an application on my local machine(mac os x) that generates a report and saves it to a folder in my web project folder. When I deploy the app., this folder should get published as well. Granted, no one should have access to this folder but the app. When I run the code to generate the report works fine on my local machine, but it seems to hang on elastic beanstalk. What do I need to do to make this work on the elastic beanstalk environment?In a nutshell, I am using phantomjs to convert a dynamic webpage into a pdf file that gets emailed to the appropriate parties involved. Here is the code that generates the file:page.viewportSize = { width: 2000, height: 800 }; //page.paperSize = { format: 'Letter', orientation: 'landscape', margin: '1cm' }; page.paperSize = { width: '1280px', height: '800px', margin: '0px' }; page.settings.localToRemoteUrlAccessEnabled = true; page.settings.loadImages = true; page.settings.javascriptEnabled = true; page.open("http://example.com/report/" + args[1], function start(status) { if (status === 'fail'){ phantom.exit(1); return; } //page.render('/dev/stdout', { format: 'pdf' }); page.render(fs.workingDirectory + '/tmp/' + args[3], { format: 'pdf' }); phantom.exit(); return; });
Elastic Beanstalk: How to write files to a folder in my node.js project folder
Assuming you have the AWS SDK installed in your project using composer; specifically...composer require aws/aws-sdk-phpYes you can, using the stream wrapper like this:require "vendor/autoload.php"; $aws_file = 's3://bucketname/foldername/your_file_name.pdf'; //the folder is optional if you have one within your bucket try { $s3->registerStreamWrapper(); $mpdf->Output($aws_file, \Mpdf\Output\Destination::FILE); } catch (S3Exception $e) { $data['error'] = $e->getMessage(); //show the error as a JSON callback that you can use for troubleshooting echo json_encode($data); exit(); }You might have to add write permissions to your web server as follows (using Apache server on Ubuntu AWS EC2):sudo chown -R www-data /var/www/html/vendor/mpdf/mpdf/src/Config/tmp sudo chmod -R 755 /var/www/html/vendor/mpdf/mpdf/src/Config/tmpThen edit the ConfigVariables.php file found at:\vendor\mpdf\mpdf\src\ConfigChange:'tempDir' => __DIR__ . '/../../tmp',To:'tempDir' => __DIR__ . '/tmp',Then create an empty folder named 'tmp' in that same directory. Then upload with joy.
Can I upload mpdf file to s3 server after generating.$file_name = $pdf->Output(time().'_'.'E-Prescription.pdf','F');
How to upload mpdf file after generating to s3 bucket in php
To Schedule a run, you need to do the following steps:[One time setup]Callaws devicefarm create-projectto create a project for all your testsCallaws devicefarm create-uploadfor your application under testUpload your application to the pre-signed URL returned bycreate-uploadCallaws devicefarm create-uploadfor your test scriptsUpload your test scripts to the pre-signed URL returned bycreate-uploadAfter your uploads have been processed by Device Farm, callaws devicefarm schedule-runNormally when you see"An error occurred (ArgumentException) when calling the ScheduleRun operation: Missing or unprocessed resources.", it means you forgot step 3 or step 5. You can upload your application to the pre-signed URL usingcurl. You can check whether your upload has been successfully processed by callingaws devicefarm get-upload.Here is an example blog post which uses the AWS CLI to schedule a run:Get started with the AWS Device Farm CLI
I am using this command for executing from CLI -aws devicefarm schedule-run --project-arn "project-arm value" --app-arn "app-arm value" --device-pool-arn "device-pool-arm value" --name "Automated_script" --test '{"type":"APPIUM_JAVA_TESTNG","testPackageArn":"testPackageArn value"}'But getting this errorAn error occurred (ArgumentException) when calling the ScheduleRun operation: Missing or unprocessed resources.
I am getting an issue while scheduling a run in AWS Device Farm from CLI