Response | Instruction | Prompt
---|---|---
You need to use the SchemaAttributes property. Standard attributes must be named according to the OpenID Connect specification (http://openid.net/specs/openid-connect-core-1_0.html#StandardClaims); see also http://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html. By doing this they will map appropriately, and you can then set other properties on the attribute, such as Required. It appears the docs need to be updated to make this clearer: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cognito-userpool-schemaattribute.html
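For illustration only (the answer above is about CloudFormation, not the SDK): the same standard-attribute naming applies when the pool is created programmatically, e.g. with boto3. The pool name below is a placeholder.
# Illustrative sketch: "birthdate" matches the OIDC standard claim name,
# so Cognito treats it as the standard attribute and validates it as such.
import boto3

cognito_idp = boto3.client('cognito-idp')
response = cognito_idp.create_user_pool(
    PoolName='example-pool',  # placeholder name
    Schema=[
        {
            'Name': 'birthdate',            # standard attribute name
            'AttributeDataType': 'String',
            'Required': True,
            'Mutable': True,
        },
    ],
)
print(response['UserPool']['Id'])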
|
I am trying to code my Cognito user pools as CloudFormation templates. I am running into one open question however: how do I define standard attributes? Will Cognito know that birthdate in Schema is meant to be the standard birthdate, and validate it as such? How is the attribute Schema defined in the CloudFormation template mapped to the standard, non-custom attributes like email, birthdate, ...? More details: AWS Cognito separates standard attributes from custom attributes. This separation is, amongst other things, important because standard attributes are validated for their format: email and birthdate, for example, accept only an AWS-defined, specific format. Thus my question: how does AWS Cognito map the CloudFormation Schema-defined attributes to standard AWS Cognito attributes? Does it at all, and if so, by identity of the attribute name? Also see: AWS Cognito Cloudformation Schema. Example created by someone: Cloudformation example
|
AWS Cognito Userpool via cloudformation file
|
EB has a specific folder from which scripts are executed pre-deploy. I created a .config file in my .ebextensions directory with the bash commands I wanted executed pre-deploy. It creates a file in /opt/elasticbeanstalk/hooks/appdeploy/pre/ that gets run. 001_script.config:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/001_oracle.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # bash commands you want executed here
      ...
      ...
|
I'm deploying a Rails application to AWS. One of the gems has a dependency and needs certain files installed on the server before bundle install is run during deployment. In my .ebextensions file I have the following. 01-oracle_sdk:
sources:
/usr/lib: https://s3-us-west-2.amazonaws.com/xyz/instantclient-sdk-linux.x64-12.2.0.1.0.zip
02-oracle-basic:
sources:
/usr/lib: https://s3-us-west-2.amazonaws.com/xyz/instantclient-basic-linux.x64-12.2.0.1.0.zip
03-oracle_sql_plus:
sources:
/usr/lib: https://s3-us-west-2.amazonaws.com/xyz/instantclient-sqlplus-linux.x64-12.2.0.1.0.zip
04-container_commands:
00_oracle_dir:
command: "export LD_LIBRARY_PATH=/usr/lib/instantclient_12_1"
From what I can tell, none of this is getting run pre-deploy. It fails when it tries to install the gem because that directory is not there. When I SSH into the instance, LD_LIBRARY_PATH is not set and none of the zip files were downloaded and unzipped by the source command. 1) Is my syntax incorrect? 2) How do I get these commands to execute PRE deploy/bundle install?
|
Pre-Deploy Script on ElasticBeanstalk
|
Getting a perfect setup for a SPA or static page on CloudFront is not trivial. In short, you will need to use (at least) an origin request lambda function setting for your CF distro. You need to handle a few edge cases like: hashes; redirects to URLs with trailing slashes (which you mentioned); forwarding of query parameters when redirecting. For a quick starting function you can check this article https://tinyendian.com/articles/better-origin-response-function-for-cloudfront-hosted-static-pages explaining the actual code that you can copy from here: https://gist.github.com/karolmajta/6aad229b415be43f5e0ec519b144c26e. Of course it is likely that as your app changes, you will need to modify this snippet here and there to match your needs.
|
I have a static SPA page that is using S3 as its origin with CloudFront. If I visit www.domain.com/page, I will get the CloudFront path prefixed bucket-directory/prod/page/, which is expected. Is it possible to capture the path in AWS Lambda and append the trailing slash to a request, so it becomes www.domain.com/page > [Lambda] > www.domain.com/page/? I've been looking at and trying the following resources to little avail: http://blog.rowanudell.com/redirects-in-serverless/ and http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html
|
CloudFront redirect request with Lambda to trailing slash
|
The issue is here: with_items: ec2.instances. It should be: with_items: '{{ ec2.instances }}'. ec2 is a variable referencing a dictionary, so you will need to reference it with the proper syntax.
|
Why does this task (from Best way to launch aws ec2 instances with ansible):
- name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
local_action: lineinfile
dest="./hosts"
regexp={{ item.public_ip }}
insertafter="[webserver]" line={{ item.public_ip }}
with_items: ec2.instances
create this error?
TASK [Add the newly created EC2 instance(s) to the local host group (located inside the directory)] ********************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'\n\nThe error appears to have been in '/Users/snowcrash/ansible-ec2/ec2_launch.yml': line 55, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)\n ^ here\n"}
|
ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'
|
You can use either Apache or Nginx to deploy a Django app. If you are planning to use Nginx, first install Nginx on the server and add the Django configuration to the Nginx configuration. You can follow this as a good guide.
|
I have a Django server running on an AWS Ubuntu machine. Through SSH, I start the server on port 8000. But when I close the SSH window, the server stops and I can't access it through the URL. What I want is for the server to run all the time once it is started. How do I go about it? Thanks.
|
Running Django Python Server in AWS
|
This simple code does the same thing without a lot of datetime conversion:
import boto3
from datetime import date
client = boto3.client('iam')
username = "<YOUR-USERNAME>"
res = client.list_access_keys(UserName=username)
accesskeydate = res['AccessKeyMetadata'][0]['CreateDate'].date()
currentdate = date.today()
active_days = currentdate - accesskeydate
print (active_days.days)
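If you need the age of every key the user has, and want to run it as the Lambda the question mentions, here is a hedged variation of the same approach (the username is still a placeholder):
# Reports the age in days of every access key for the user, as a Lambda handler.
import boto3
from datetime import date

iam = boto3.client('iam')

def handler(event, context):
    username = "<YOUR-USERNAME>"  # placeholder
    ages = {}
    for key in iam.list_access_keys(UserName=username)['AccessKeyMetadata']:
        ages[key['AccessKeyId']] = (date.today() - key['CreateDate'].date()).days
    print(ages)
    return ages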
|
I am trying to figure out a way to get a user's access key age through an AWS Lambda function using Python 3.6 and Boto 3. My issue is that I can't seem to find the right API call to use, if any exists, for this purpose. The two closest that I can find are list_access_keys, which I can use to find the creation date of the key, and get_access_key_last_used, which can give me the day the key was last used. However neither of them, nor any others I can find, simply give the access key age like is shown in the AWS IAM console users view. Does a way exist to get simply the access key age?
|
Getting access key age AWS Boto3
|
Here is what we ended up doing:
Origin EC2 only allows HTTP (port 80)
ELB only allows HTTPS (port 443) and targets the EC2 via HTTP (port 80)
EC2 Security Group restricts HTTP access to the ELB's security group
Created Route53 DNS entry for origin-blabla.example.com as an alias to the ELB
CloudFront distribution redirects HTTP -> HTTPS
CloudFront has origin-blabla.example.com as its origin
CloudFront origin has custom HTTP header
Both CloudFront and ELB have a *.example.com TLS Certificate (I also could have used separate certs for specific domain names)
URL Rewrite blocks/redirects all requests that don't have one of the following: a) the above-mentioned custom HTTP header or b) UserAgent that matches ^ELB-HealthChecker$
So now all requests come to CloudFront via HTTPS (if they come as HTTP they are redirected to HTTPS), which connects to ELB via HTTPS, which in turn gets the data from EC2 via HTTP. This cannot be circumvented (unless someone is desperate enough to guess the origin DNS and brute force the custom HTTP header and add it to their browser request - and I'm not sure what they're really gaining by that), so we can rest assured that a) all requests are secure, b) there is only one domain name that can be used to access our system, and c) we don't have to worry about certificates on the server.
|
Our application is currently running on EC2 instances, requiring HTTPS (and redirecting HTTP to HTTPS). We are now considering serving all requests via CloudFront and enforcing HTTPS through CloudFront. Our thought is that once we do that we would then block HTTP/HTTPS requests not coming from CloudFront and relax the HTTPS requirement. This way all requests to CloudFront would be via HTTPS, but CloudFront would retrieve the data from the EC2 origin via HTTP. This way we would a) reduce some server overhead since the server doesn't have to do the TLS encryption and b) eliminate the need to manage certificates for the EC2 instances.Are there any security concerns with this or other reasons not to do this?
|
Using HTTP between CloudFront and EC2 for HTTPS site
|
Couldn't answer as much in the comments so I'll try here. The architecture you linked to is pretty common. The two biggest downfalls are that you're going to be billed for Lambda usage even if there is nothing to do, and your data may be delayed by the length of the polling interval, which is a minimum of 1 minute. Neither of these things may matter in your problem though. SQS could be used as a temporary store for data in the event of a DynamoDB failure. But what exactly are you going to do if it fails? What if SQS fails and loses your messages? What if Lambda fails and never runs your code? DynamoDB is a hosted service just like SQS and Lambda - Amazon is going to work very hard to keep it running just like their other services. Trying to architect around every possible failure scenario will mean you never deliver code. I'd focus on the simplest architecture you can and put some trust in the services you're paying for.
|
I am building one service which would use data that comes from another source (service). So, I am thinking of using the following pipeline: Other service ----> SNS Topic ----> SQS ----> AWS Lambda ----> DynamoDB. What the above flow says is that the other service will push data to an SNS topic to which an SQS queue is a subscriber. AWS Lambda will then have a trigger on this SQS queue, listening to the messages in SQS and pushing them to DynamoDB. Although it looks okay to do this, I am now wondering whether I really need SQS or not. Can I avoid using it? Instead of using SQS, AWS Lambda could directly have a trigger on SNS. I am just thinking of one case if I don't use AWS SQS: how will it handle the scenario where DynamoDB fails? I think with only SNS I would lose some messages during the time my DynamoDB is in a failed state, but if I have SQS, then those messages would be stored in the SQS queue. Please let me know if my understanding is correct. Thanks a lot for your help.
|
AWS SQS Required or not
|
I would recommend using the Serverless Framework to improve developer productivity. A few of the practices we follow:
Keep all the infrastructure changes in the Serverless Framework generated CloudFormation stack template.
Create different API Gateway stages for each developer.
Utilize Serverless plugins, e.g. Serverless Offline, Serverless DynamoDB Local, etc.
Use a NodeJS proxy if you plan to set up a hybrid development environment, e.g. use the Serverless Offline plugin emulating API Gateway and Lambda locally, with S3 and Cognito in AWS.
Use a task runner like Gulp to automate starting web servers, deployment, etc.
Use environment variables to store environment specifics.
Apart from this, it's better to use a separate AWS account for production. You can configure AWS Organizations to simplify managing multiple accounts.
|
We are a team of 5 developers and need some guidance on the best way to develop on AWS, specifically using AWS Lambda, API Gateway, DynamoDB, and Cognito. We are looking for the best practices for development. How can 5 developers develop without stepping on each other's toes? Is it better to have individual accounts and use CloudFormation templates that can be used by each developer? Or use the Serverless Framework and use a different environment for each developer? It looks like Serverless provides the ability to deploy to various environments, but I believe the intent for different environments is for CI/CD, where the same code can be moved through various SDLC stages or specific code can be pushed to a specific environment.
|
AWS Lambda + API Gateway Development Best Practices
|
EDIT: THE BUG HAS BEEN FIXED, so please delete the lines below if you added them to your buildspec file.
Before terraform init, add these lines:
export AWS_ACCESS_KEY_ID=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.AccessKeyId'`
export AWS_SECRET_ACCESS_KEY=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.SecretAccessKey'`
export AWS_SESSION_TOKEN=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.Token'`
|
Running terraform deploy in codebuild with the following buildspec.yml.
Seems terraform isn't picking up the IAM permissions provided by the codebuild role.
We're using terraform's remote state (the state file is stored in S3). When terraform attempts to contact the S3 bucket containing the state file, it dies asking for the terraform provider to be configured:
Downloading modules (if any)...
Get: file:///tmp/src486521661/src/common/byu-aws-accounts-tf
Get: file:///tmp/src486521661/src/common/base-aws-account-
...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Here's the buildspec.yml:
version: 0.1
phases:
install:
commands:
- cd common && git clone https://[email protected]/aws-account-tools/acs.git
- export TerraformVersion=0.9.3 && cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${TerraformVersion}/terraform_${TerraformVersion}_linux_amd64.zip && unzip terraform.zip && mv terraform /usr/bin
build:
commands:
- cd accounts/00/dev-stack-oit-byu && terraform init && terraform plan && echo terraform apply
|
AWS Codebuild terraform provider
|
You can get the globally configured credentials from aws.config.credentials.
Get the accessKeyId:
var accessKeyId = aws.config.credentials.accessKeyId;
Get the secretAccessKey:
var secretAccessKey = aws.config.credentials.secretAccessKey;
|
I am accessing the AWS SDK and its services like this in my code:
var aws = require('aws-sdk');
const s3 = new aws.S3();
I want to see what credentials are being picked up when I initialise the S3 object. I tried the following ways and clearly I am unable to figure out from the documentation how to use the methods and classes properly.
var credo = aws.config.Credentials().get();
var credo = aws.config.Credentials;
var credo = aws.config.credentials;
var credo = aws.Credentials().get();
var credo = aws.Credentials();
var credo = aws.Credentials;
Can someone tell me the right way to get this data? I am not finding the AWS documentation easy to understand for this part. Edit: I am able to update credentials in code using aws.config.update({accessKeyId: 'xxx', secretAccessKey: 'yyy', sessionToken: 'zzz'}). I want to see what these values are when I don't set them like this. Process environment variables are not set. I have the credentials file set up correctly.
|
AWS-SDK for NodeJS: how to get credentials being used in program
|
First, define the output in your ec2 module:
output "instance_ids" {
value = ["${aws_instance.web.*.id}"]
}
Note: the resource name web is an example. Please specify the actual resource name in the module.
Next, declare the list variable in your elb module:
variable "instances" {
type = "list"
}
Finally, pass the output of the ec2 module to the elb module:
module "instances" {
source = "../../../../modules/ec2"
ami = "ami...."
number_of_instances = 2
instance_type = "t2.micro"
}
module "elb" {
source = "../../../../modules/elb"
name = "some elb"
instances = ["${module.instances.instance_ids}"]
}
|
I'm creating 2 instances in 1 module and I now need to attach those 2 instances to an ELB which is created using another module (same file) - is this possible without manually specifying them?
module "instances" {
source = "../../../../modules/ec2"
ami = "ami...."
number_of_instances = 2
instance_type = "t2.micro"
}
module "elb" {
source = "../../../../modules//elb"
name = "some elb"
instances = ["???"] #something like ["${module.ec2.instances.id}"]?
}
|
terraform aws pass instance list from 1 module to another
|
I gave up trying to pass the data through the context. However, I was able to pass the data through the Payload param:
client.invoke(
FunctionName='LambdaWorker',
InvocationType='Event',
LogType='None',
Payload=json.dumps(payload)
)
And then to read it from the event parameter inside the invoked lambda:
ctx = json.dumps(event)
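For completeness, a minimal sketch (hypothetical, not from the original answer) of what the receiving LambdaWorker handler can look like when the data arrives via Payload:
# With InvocationType='Event' and the data passed as Payload, the JSON shows up
# in the invoked function as `event`, already parsed into a dict.
import json

def handler(event, context):
    print(json.dumps(event))  # log the payload that was passed in
    return event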
|
I'm trying to get two basic lambdas working using the Python 2.7 runtime for SQS message processing. One lambda reads from SQS and invokes another lambda, passing data to it via the context. I'm able to invoke the other lambda, but the user context is empty in it. This is the code of my SQS reader lambda:
import boto3
import base64
import json
import logging
messageDict = {'queue_url': 'queue_url',
'receipt_handle': 'receipt_handle',
'body': 'messageBody'}
ctx = {
'custom': messageDict,
'client': 'SQS_READER_LAMBDA',
'env': {'test': 'test'},
}
payload = json.dumps(ctx)
payloadBase64 = base64.b64encode(payload)
client = boto3.client('lambda')
client.invoke(
FunctionName='LambdaWorker',
InvocationType='Event',
LogType='None',
ClientContext=payloadBase64,
Payload=payload
)
And this is how I'm trying to inspect and print the contents of the context variable inside the invoked lambda, so I can check the logs in CloudWatch:
memberList = inspect.getmembers(context)
for a in memberList:
logging.error(a)
The problem is nothing works and CloudWatch shows the user context is empty: ('client_context', None). I've tried example1, example2, example3, example4. Any ideas?
|
How to invoke another lambda async and pass context to it?
|
Most likely it's a Python version and AWS CLI version mismatch issue. Post your AWS CLI version and Python version:
python -V
aws --version
Install awscli with pip only so that it gets the proper Python version:
pip install awscli
Ref: github.com/aws/aws-cli/issues/2403
|
I have two EC2 instances running in a custom VPC, one running Ubuntu 16.04 and the other running Amazon Linux 2017.03. I have also assigned an IAM role that allows read and write access to all S3 buckets. However, when I try to run the copy command to copy a file from the instance to the S3 bucket, it fails on the Ubuntu server. The command I run on both servers is: aws s3 cp /myfolder/myfile.txt s3://mybucket/backups/. It gives the following error on Ubuntu: upload failed: ../../myfolder/myfile.txt to s3://mybucket/backups/myfile.txt seek() takes 2 positional arguments but 3 were given. Everything else works, for example, downloading a file from the bucket to the server through the copy command. There is no problem with the VPC settings, nor with the IAM role or the security group, since the same applies to the other server running Amazon Linux. PS: Running the copy command with the --dryrun switch gives no error on the Ubuntu server.
|
AWS CLI: copy command fails when copying from instance to bucket
|
This is caused by a bug in argparse that was fixed in Python 3.4. The awscli is written in Python and it uses the argparse module to parse the command line. It also uses the action="version" feature of argparse to simplify version printing. This prints the version string to stderr prior to Python 3.4 and prints to stdout in Python 3.4+.
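For illustration, a tiny standalone script that reproduces the behavior described (the program name and version string are made up): run it under Python 2.7 and the version string lands on stderr; under 3.4+ it lands on stdout.
# Demonstrates argparse's action="version" output stream difference.
import argparse

parser = argparse.ArgumentParser(prog='demo')
parser.add_argument('--version', action='version', version='demo 1.0')
parser.parse_args(['--version'])  # prints "demo 1.0" and exits via SystemExit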
|
Why does the standard check aws --version print the expected output to stderr, not stdout?
$ aws --version 2>err.log
$ cat err.log
aws-cli/1.11.65 Python/2.7.13 Darwin/16.5.0 botocore/1.5.28
$ aws --version > out.log
aws-cli/1.11.65 Python/2.7.13 Darwin/16.5.0 botocore/1.5.28
$ cat out.log
$
It would make sense to write the result to stdout if the command completed successfully. Other commands like aws ec2 describe-images or aws ec2 describe-instances write their output to stdout correctly. Checked on CentOS and MacOS.
|
Why aws --version writes to stderr?
|
I finally found a workaround. There is an example in the AWS forums (https://forums.aws.amazon.com/message.jspa?messageID=728870) but the code is in Kotlin. I ported it to Java and, after making some tests, I finally validated my JWT signature:
byte[] decodedModulus = Base64.getUrlDecoder().decode(yourModulus);
byte[] decodedExponent = Base64.getUrlDecoder().decode(yourExponent);
BigInteger modulus = new BigInteger(1, decodedModulus);
BigInteger exponent = new BigInteger(1, decodedExponent);
RSAPublicKeySpec publicKeySpec = new RSAPublicKeySpec(modulus, exponent);
KeyFactory keyFactory;
keyFactory = KeyFactory.getInstance("RSA");
PublicKey publicKey = keyFactory.generatePublic(publicKeySpec);
JWSVerifier verifier = new RSASSAVerifier((RSAPublicKey) publicKey);
Boolean verify = parsedToken.verify(verifier);
Hope it helps anyone with the same trouble.
|
I am using Cognito in Amazon to authenticate my mobile users. Once they complete the login, Cognito provides a set of tokens; I am using the ID token in my backend. I have followed the steps in the section "Using ID Tokens and Access Tokens in your Web APIs" at https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html and I am stuck on step 6. As far as I have seen, I get the modulus and the exponent from Amazon as Strings, and I must build a PublicKey with those to validate the JWT signature. I don't know how to build the PublicKey using these two String parameters.
|
Verify AWS id Token on Java
|
You won't be able to extract EBS details from aws_instance since it's the AWS side that provides an EBS volume to the resource. But you can define an EBS data source with a filter:
data "aws_ebs_volume" "ebs_volume" {
most_recent = true
filter {
name = "attachment.instance-id"
values = ["${aws_instance.DCOS-master3.id}"]
}
}
output "ebs_volume_id" {
value = "${data.aws_ebs_volume.ebs_volume.id}"
}
You can refer to the EBS filters here: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html
|
I create instances with a default CentOS 7 AMI. This AMI automatically creates a volume and attaches it to the instance. Is it possible to read that volume's ID using terraform? I create the instance using the following code:
resource "aws_instance" "DCOS-master3" {
ami = "${var.aws_centos_ami}"
availability_zone = "eu-west-1b"
instance_type = "t2.medium"
key_name = "${var.aws_key_name}"
security_groups = ["${aws_security_group.bastion.id}"]
associate_public_ip_address = true
private_ip = "10.0.0.13"
source_dest_check = false
subnet_id = "${aws_subnet.eu-west-1b-public.id}"
tags {
Name = "master3"
}
}
|
Terraform: How to read the volume ID of one instance?
|
You don't do anything differently when using an S3 VPC endpoint after you configure it. Nothing in your code changes. When a subnet in a VPC is associated with a route table that is configured for a VPC endpoint, only one thing happens: all of the public IP addresses for S3 in your region are routed to the VPC endpoint instead of following the default route. That's it. The prefix list pl-xxxxxxxx in your route table represents a list of all the public subnets associated with S3 in your region. The list is automatically maintained by the AWS infrastructure. When an instance sends traffic to S3, it does a DNS lookup to find an IP address for the bucket. When it connects to that IP address, if the route table for the instance's subnet includes an entry for that prefix list using the VPC endpoint for S3, it connects to S3 over the endpoint. "All instances in subnets associated with the specified route tables automatically use the endpoint to access the service; subnets that are not associated with the specified route tables do not use the endpoint to access the service." http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html#vpc-endpoints-routing
|
What is the way to get an S3 object using the VPC endpoint for S3 in JAVA? Shall I use a simple http-client? Or is there a way to do this using AmazonS3ClientBuilder?
|
How do I use S3 VPC endpoint in Java?
|
Will its simplest form, passing only Bucket, Key and SourceFile, attend my goals? The answer is yes, it will serve your purpose, but if you use metadata you have more control over your object. According to the AWS documentation about object metadata, there are two kinds of metadata:
System metadata: Metadata such as object creation date, Last-Modified, and Content-Length are system controlled, where only Amazon S3 can modify the value.
User-defined metadata: You can set/modify optional information as a name-value (key-value) pair when you send a PUT or POST request to create the object, and you can retrieve it later as well.
Use case: If you have your bucket configured as a website, sometimes you might want to redirect a page request to another page or an external URL. In this case, a web page is an object in your bucket. Amazon S3 stores the page redirect value as system metadata whose value you control. When you create objects, you can configure the values of these system metadata items or update the values.
For more info about object metadata: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
|
In the AWS SDK for PHP v3, the putObject method can receive many parameters like ContentType, ContentEncoding, etc. In its simplest form, I can put an object using only Bucket, Key and SourceFile:
$result = $s3->putObject(array(
'Bucket' => $bucket,
'Key' => $keyname,
'SourceFile' => $filepath
));
Considering that my app is going to insert photos and they must stay in S3 until I order them to be deleted, what are the pros and cons of setting metadata on them, like ContentType, ContentEncoding and others? Will its simplest form, passing only Bucket, Key and SourceFile, attend my goals?
|
Why should I use metadata when putting objects in AWS S3?
|
You need to set your S3 bucket as a Static Website (it's an option in S3 to set your bucket as such). The domain name will then change to something like http://static.my-company.com.s3-website-us-east-1.amazonaws.com, which is what you will want to set the CNAME record to in Route 53. Note that you cannot set a CNAME record to a directory like you are currently doing; it has to be a resolvable domain.
|
I have a file in my AWS bucket: https://s3.amazonaws.com/static.my-company.com/media/my-file.jpg. Currently I'm accessing that file in my index.html file at www.my-company.com. I have my-company.com as a hosted zone in AWS Route 53. But in that index.html, I don't want it to be obvious that this asset is in an AWS bucket. I want it to appear as if it is hosted on my own domain's servers. So instead of https://s3.amazonaws.com/static.my-company.com/media/my-file.jpg, I want it to be addressed as https://static.my-company.com/media/my-file.jpg. How can I do that? I tried inserting a CNAME record in Route 53 that would point static.my-company.com to s3.amazonaws.com/static.my-company.com, but that didn't work. That's what Jay Godse recommended here.
|
How can I assign a subdomain to my AWS S3 Bucket?
|
.NET Core provides the HMACSHA256 class, which should be exactly what you need:
static byte[] HmacSHA256(String data, byte[] key)
{
HMACSHA256 hashAlgorithm = new HMACSHA256(key);
return hashAlgorithm.ComputeHash(Encoding.UTF8.GetBytes(data));
}
|
I need to convert the following .NET code to .NET Core:
static byte[] HmacSHA256(String data, byte[] key)
{
String algorithm = "HmacSHA256";
KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
kha.Key = key;
return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}
The above snippet is used for Amazon AWS key signing and is taken from here. I'm using System.Security.Cryptography.Primitives 4.3.0 and the KeyedHashAlgorithm.Create method doesn't exist. Looking at the GitHub source I can see that the Create method is there now, but it's not supported:
public static new KeyedHashAlgorithm Create(string algName)
{
throw new PlatformNotSupportedException();
}
The question is: what is my alternative to KeyedHashAlgorithm.Create(string algName) in .NET Core?
|
KeyedHashAlgorithm in .net core
|
To let your IAM user assume a role of a specific name across multiple accounts, white-list all the required role ARNs explicitly. That's the secure way to do it:
"Resource": [
"arn:aws:iam::AWS-ACCOUNT-ID1:role/MyRole",
"arn:aws:iam::AWS-ACCOUNT-ID2:role/MyRole",
"arn:aws:iam::AWS-ACCOUNT-ID3:role/MyRole"
]
Here's the complete policy (AWS-ACCOUNT-ID is a 12-digit number without hyphens):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"arn:aws:iam::AWS-ACCOUNT-ID1:role/MyRole",
"arn:aws:iam::AWS-ACCOUNT-ID2:role/MyRole",
"arn:aws:iam::AWS-ACCOUNT-ID3:role/MyRole"
]
}
]
}
However, looking at your attempt at using a wildcard in place of the account ID, I want to emphasize that the following is possible but puts your company at security risk. It violates the principle of least privilege.
Access to assume any role in any AWS account (INSECURE):
"Resource": [
"arn:aws:iam::*"
]
Access to assume 'MyRole' in any AWS account (INSECURE):
"Resource": [
"arn:aws:iam::*:role/MyRole"
]With this wildcard access, IAM user can assume 'MyRole' (or any role) on behalf of your company in any third-party AWS account.
|
Which condition can I apply to limit an IAM user to only assume roles with a specific name? This user has a trust relationship on multiple AWS accounts, which all contain a role named "MyRole". So I want a condition like: Assumed Role ARN ~= arn:aws:iam::[0-9]*:role/MyRole. Thanks
|
How can restrict an IAM user to assume Cross Account roles with a specific name
|
Waiters have the configuration parameters 'delay' and 'max_attempts', like this:
waiter = rds_client.get_waiter('db_instance_available')
print( "waiter delay: " + str(waiter.config.delay) )
See waiter.py on GitHub.
|
I started migrating my code to boto 3 and one nice addition I noticed is the waiters. I want to create a snapshot from a DB instance and I want to check for its availability before I resume with my code. My approach is the following:
# Notice: Step : Check snapshot availability [1st account - Oregon]
print "--- Check snapshot availability [1st account - Oregon] ---"
new_snap = client1.describe_db_snapshots(DBSnapshotIdentifier=new_snapshot_name)['DBSnapshots'][0]
# print pprint.pprint(new_snap) #debug
waiter = client1.get_waiter('db_snapshot_completed')
print "Manual snapshot is -pending-"
sleep(60)
waiter.wait(
DBSnapshotIdentifier = new_snapshot_name,
IncludeShared = True,
IncludePublic = False
)
print "OK. Manual snapshot is -available-",but the documentation says that it polls the status every 15 seconds for 40 times. That is 10 minutes. Yet, a rather big DB will need more than that .How could I use the waiter to alleviate for that?
|
How to use boto3 waiters to take snapshot from big RDS instances
|
The problem is not putting images in version control. The problem is putting images in version control alongside your code. If you want to host a separate repo for images only, knock yourself out, but do not include them in a repo with actual code. The great aspect of a site like GitHub is that it makes software collaboration easier. I can: fork a project, make changes, commit, open a pull request (http://hub.github.com#contributor). Adding images to a code repo makes software collaboration harder. Anyone who wants to clone your repo is going to have to deal with the extra size, unless you put the images on a different branch, in which case they can do git clone --single-branch. Images do not really belong in version control. Version control is great because you can do line or word diffs for each change, to see how the code changes over time. You are never going to diff an image. For code, Git is a much better option than AWS. For images, you should be asking yourself: what does Git do for images better than AWS? The answer is nothing, really, other than allowing you to put everything together. It is tempting, but I would really avoid doing this.
|
I'm working on a Jekyll project hosted on GitHub Pages, and wondering what the most advisable way to host images might be. Right now, all the site's images are hosted on AWS, which is fine, but adds a (slight) amount of complexity. The total number of images hangs around 200+, and probably won't go over 500 for the foreseeable future. I'd been advised by peers to NOT host images on GitHub Pages, but haven't gotten any concrete answers as to why one shouldn't.
|
Best image hosting solution for static site?
|
Ok, so I found it. Here it is possible to manage all the AWS IoT things: http://docs.aws.amazon.com/iot/latest/apireference/API_Operations.html
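For reference, a hedged boto3 sketch of driving that registry API from a script or another service (the thing names and count below are placeholders, and error handling is omitted):
# Bulk-register things through the AWS IoT registry API.
import boto3

iot = boto3.client('iot')

for i in range(1000):
    iot.create_thing(thingName='sensor-{0}'.format(i))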
|
I am wondering how one creates many "things" in the AWS IoT solution via an API without using the AWS web interface, since that is not realistic in case I want thousands or millions of things. I guess you could write a script utilizing the "aws" client described here (http://docs.aws.amazon.com/iot/latest/developerguide/thing-registry.html) but that's not optimal if I want to control it from another service. I assumed there would be a RESTish API to do this, but it doesn't seem like it if I read the docs: "You use the AWS IoT console or the AWS CLI to interact with the registry." Anyone who created thousands/millions of things - how did you interact with AWS IoT?
|
AWS IoT create things automatically
|
From here. Something like this should work to allow User1 to only access User1's folder:
{
"Version":"2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::*"]
},
{
"Sid": "AllowRootAndHomeListingOfCompanyBucket",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::my-company"],
"Condition":{"StringEquals":{"s3:prefix":["","/"],"s3:delimiter":["/"]}}
},
{
"Sid": "AllowListingOfUserFolder",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::my-company"],
"Condition":{"StringLike":{"s3:prefix":["user1/*"]}}
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::user1/*"]
}
]
}
Apply that as User1's policy, and they should only be able to access the user1/ folder. The "s3:prefix":["","/"] ... part can probably be changed, but I'm not familiar enough with the policy language to know how. If you substitute user2 for user1 in User2's policy, User2 should only be able to access the user2/ folder, and so on.
|
I have two users, User1 and User2, that each have an IAM account in AWS. I have an S3 bucket "external_bucket.frommycompany.com". In that bucket is a folder for each user account, "User1" and "User2". I want to grant R/W access for User1 to the User1 folder only and R/W access for User2 to the User2 folder only. I don't want them to be able to see each other's folders in the root directory of external_bucket.frommycompany.com. Is there a way to set up their IAM policies such that this is possible? My goal is to enable our users to connect to the S3 bucket from an S3 browser app like CloudBerry so they can upload and download files to their folders only. Any advice on the best design for this is welcome.
|
How to set up S3 Policies for multiple IAM users such that each individual only has access to their personal bucket folder?
|
You must include the absolute path of the directory where your project resides (where you have the composer.json file for the dependencies). Replace composer update with composer update -d /var/www/laravel and it will work like a charm.
|
I have set up the CodeDeploy service on AWS and it's working great, but what I want is to run the composer update command after deploying. I have defined the composer update command in the AfterInstall hook, but it doesn't seem to work. Here's my appspec.yml:
version: 0.0
os: linux
files:
- source: /
destination: /var/www/laravel/
hooks:
AfterInstall:
- location: hooks/after-install.sh
runas: root
and here's the after-install.sh file code:
#!/bin/bash
php /var/www/laravel/artisan clear-compiled
php /var/www/laravel/artisan optimize
php /var/www/laravel/artisan view:clear
php /var/www/laravel/artisan cache:clear
chown -R ubuntu:www-data /var/www/laravel
sudo find /var/www/laravel -type d -exec chmod 755 {} +
sudo find /var/www/laravel -type f -exec chmod 644 {} +
chmod -R 777 /var/www/laravel/storage
composer update
All other commands work except composer update; any help is appreciated. Thanks
|
How to run composer update command after code deploy on aws
|
In API Gateway, major versions should be represented by separate APIs. You can use the custom domain feature to map base paths to each API (i.e. myapi.com/v1 => API 1, myapi.com/v2 => API 2). You can also make use of the import/export functionality to manage changes between APIs.Using separate accounts per environment is actually a suggested best-practice. I would suggest taking a good look at CloudFormation to manage your workflow - a single CloudFormation template would work well across multiple accounts.
|
I am having trouble implementing a viable versioning scenario with API Gateway + Lambda. My requirement is to have major versioning at the API level but minor versioning at the service level. My environments are also spread across accounts, so staging is not an option for environment propagation. Has anyone had success implementing API management with AWS API Gateway?
|
API Versioning with AWS API Gateway
|
There might be a better way, but I've just tested this one and it works well.
Note: If you use EC2-Classic you can use the aws ec2 release-address --public-ip <x.x.x.x> command to release an Elastic IP; otherwise you must use aws ec2 release-address --allocation-id.
EC2-Classic:
aws ec2 describe-addresses --query 'Addresses[].[PublicIp,AssociationId]' --output text | \
awk '$2 == "None" { print $1 }' | \
xargs -I {} aws ec2 release-address --public-ip {}
EC2-VPC:
aws ec2 describe-addresses --query 'Addresses[].[AllocationId,AssociationId]' --output text | \
awk '$2 == "None" { print $1 }' | \
xargs -I {} aws ec2 release-address --allocation-id {}
What those commands are doing:
List all Elastic IP information and query only the AssociationId field along with either the PublicIp or AllocationId.
Keep only the records where the AssociationId field is None and print either the PublicIp or AllocationId value.
Pass this value to the release-address command to actually release it.
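Since the question mentions trying boto, here is a rough boto3 equivalent of the same idea (treat it as a sketch and double-check before releasing addresses in production, since it releases every address that has no association):
# Release every Elastic IP that is not associated with anything.
import boto3

ec2 = boto3.client('ec2')

for addr in ec2.describe_addresses()['Addresses']:
    if 'AssociationId' not in addr:            # unused address
        if addr.get('Domain') == 'vpc':
            ec2.release_address(AllocationId=addr['AllocationId'])
        else:                                   # EC2-Classic
            ec2.release_address(PublicIp=addr['PublicIp'])
        print('released', addr.get('PublicIp'))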
|
Please suggest a way or script to release unused EC2 Elastic IPs. I tried with boto and aws-cli but I'm unable to complete it. Anyone who could help? I thought of aws ec2 release-address --public-ip <x.x.x.x> but I'm wondering how I can loop this by feeding it the unused Elastic IPs. Thanks
|
Automated way to release Unused Elastic IPS
|
Federating with Cognito Identity is free, so you will not be charged for the unauthenticated use case you mentioned above. See the last line of the Cognito Identity section of the Cognito pricing doc. If you are using Cognito User Pools, which enables you to create your own directory and allows you to manage username and password based login of your users, you will be charged based on MAUs. No matter how many times the same user logs in or opens and closes the app in a given calendar month, it will be counted as one MAU.
|
The AWS Cognito docs state that it's pointless to store the ID that is generated for an access request by an unauthenticated user. Their pricing states that charges are based on Monthly Active Users, i.e. active identities received via the credentialsProvider.getIdentityId() call. So if I were to implement it in an app or website, and the user closes the app or website and revisits at a later point, a new ID would be generated and assigned - and that count will be added to the total MAU count? For example: if the same user opens/closes the app/website 200 times a day, will it incur 200 MAUs?
|
Will aws Cognito count every getid call made for unauthenticated users as a MAU?
|
You need to provide the following two parameters: aws_access_key_id and aws_secret_access_key. Even though they are described as optional parameters, there is one comment in the code that makes it clear: "aws_access_key_id and aws_secret_access_key are currently needed for this plugin to work right. Subsequent versions will have the credential resolution logic as follows: ..."
|
So I spun up a 2-instance Amazon Elasticsearch cluster. I have installed the logstash-output-amazon_es plugin. This is my logstash configuration file:
input {
file {
path => "/Users/user/Desktop/user/logs/*"
}
}
filter {
grok {
match => {
"message" => '%{COMMONAPACHELOG} %{QS}%{QS}'
}
}
date {
match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
locale => en
}
useragent {
source => "agent"
target => "useragent"
}
}
output {
amazon_es {
hosts => ["foo.us-east-1.es.amazonaws.com"]
region => "us-east-1"
index => "apache_elk_example"
template => "./apache_template.json"
template_name => "apache_elk_example"
template_overwrite => true
}
}
Now I am running this from my terminal:
/usr/local/opt/logstash/bin/logstash -f apache_logstash.conf
I get the error:
Failed to install template: undefined method `credentials' for nil:NilClass {:level=>:error}
I think I have got something completely wrong. Basically I just want to feed some dummy log inputs to my Amazon Elasticsearch cluster through logstash. How should I proceed?
Edit: Storage type is Instance and the access policy is set to accessible to all.
Edit:
output {
elasticsearch {
hosts => ["foo.us-east-1.es.amazonaws.com"]
ssl => true
index => "apache_elk_example"
template => "./apache_template.json"
template_name => "apache_elk_example"
template_overwrite => true
}
}
|
Stream data to amazon elasticsearch using logstash?
|
Given that you have this type of lambda function:
exports.handler = function(event, context) {
var data={"test":"data"};
context.done( null,
( !!event.cb && event.cb.length > 0 )
? event.cb.replace( /[^a-z0-9_]/i, '' ) + '(' + JSON.stringify( data ) + ')'
: data
);
};
When you give it an event like:
{
"cb": "callback"
}
It will give this output: "callback({\"test\":\"data\"})"
So far, so good. Now you come to API Gateway and in the Integration Response part you write this: $util.parseJson($input.json('$'))
Then you will get callback({"test":"data"}) as output when you invoke the API Gateway endpoint.
|
I'm trying to return JSONP as in callbackname(data.stringified):
callback( null,
( !!event.cb && event.cb.length > 0 )
? event.cb.replace( /[^a-z0-9_]/i, '' ) + '(' + JSON.stringify( data ) + ')'
: data
);
My quick and dirty way now returns the data, and if ?cb=test is given it returns: "test({\"valid\":false,\"data\":false})". Is there any way to get rid of the quotes and escape characters?
The API should work with and without callback set.
|
Return JSONP via AWS Lambda/API Gateway
|
# This worked for me
import urllib.parse
encodedStr = 'My+name+is+Tarak'
print(urllib.parse.unquote_plus(encodedStr))  # prints: My name is Tarak
|
I have a Python lambda script that shrinks images as they are uploaded to S3. When the uploaded filename contains non-ASCII characters (Hebrew in my case), I cannot get the object (Forbidden, as if the file doesn't exist). Here's (some of) my code:
s3_client = boto3.client('s3')
def handler(event, context):
for record in event['Records']:
bucket = record['s3']['bucket']['name']
key = record['s3']['object']['key']
s3_client.download_file(bucket, key, "/tmp/somefile")
This raises An error occurred (403) when calling the HeadObject operation: Forbidden: ClientError. I also see in the log that the key contains characters like %D7%92. Following the web I also tried to unquote the key according to some sources (http://blog.rackspace.com/the-devnull-s3-bucket-hacking-with-aws-lambda-and-python/) like so, with no luck:
key = urllib.unquote_plus(record['s3']['object']['key'])
Same error, although this time the log states that I'm trying to retrieve a key with characters like this: פ×קס×. Note that this script is verified to work on English keys, and the tests were done on keys with no spaces.
|
Key given by lambda S3 event cannot be used when containing non-ASCII characters
|
You should not mutate the default service configuration. Instead, each service client provides the following class methods:
+ register[ServiceClientName]WithConfiguration:forKey:
+ [ServiceClientName]ForKey:
For example, for AWSS3TransferUtility, they are:
+ registerS3TransferUtilityWithConfiguration:forKey:
+ S3TransferUtilityForKey:
In this way, you can pass a different service configuration for each service client at runtime. By following this pattern, you can avoid the unintentionally "polluted" default service configuration bugs that can be very difficult to debug.
|
I've integrated the AWS iOS SDK (v2.3.6) into my application. It works fine and well, except that I've noticed that defaultServiceManager has a disclaimer: "You should use this singleton method instead of creating an instance of the service manager". I ordinarily wouldn't have an issue with this, except its defaultServiceConfiguration is immutable: "This property can be set only once, and any subsequent setters are ignored." I have a requirement that a service configuration (i.e. identityPoolId + region) be able to change at runtime. What are the possible ways around this? I'd love to be able to just reset the service configuration at any point, but that's unlikely given what the documentation says.
|
AWS iOS SDK AWSServiceManager multiple service configurations
|
Lambda keeps a worker active for a period of time and will (as you have noticed) remove that worker after a period of inactivity. The following is a copy of a set of suggestions posted on our forums:
1. Keep your Lambda function "warm". If it's invoked infrequently you will incur an overhead "cold start" cost as Lambda needs to allocate resources to serve your request. See this post for more details (a minimal keep-warm sketch is shown below).
2. Invoke your Lambda function with resource-based permissions as opposed to role-based. This is to avoid the overhead of API Gateway needing to make an assumeRole() request to STS. Resource-based invocation is the default if you set it up in the console.
3. If appropriate, consider turning on caching for your API.
4. Is your API doing any transformations of the request or response via mapping templates? This will obviously incur overhead linear with the complexity of the transformation.
A note: #1 should really only be used as a last resort, assuming none of the other options work for you.
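A minimal keep-warm sketch for suggestion #1, shown in Python for consistency with other examples here even though the question uses Node.js. It assumes a CloudWatch Events scheduled rule that invokes the function with a payload like {"warmer": true}; that field name is made up.
# Exit early on scheduled "warmer" pings so the container stays warm cheaply.
import json

def handler(event, context):
    if event.get('warmer'):
        return {'warmed': True}
    # ... normal API Gateway request handling goes here ...
    return {'statusCode': 200, 'body': json.dumps({'ok': True})}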
|
I've been tinkering with nodejs code in AWS Lambda, called by some API Gateway endpoints. I've noticed that after a certain amount of time passes without any API Gateway calls, the next API Gateway request will time out. I'll get the standard Lambda error message saying the function timed out. However, subsequent HTTP requests to trigger my Lambda work fine.Superficially, it looks like something is going into "idle" mode and needs to be charged up before the API Gateway-Lambda request can work properly. I've considered setting up a wget cron to keep things non-idle, but is there a real fix and how can I better understand what's happening?
|
AWS Lambda and API Gateway - goes idle; needs to "wake up"/no response on first request?
|
Amazon S3 does not have extensive rewriting capabilities. You can specify a default document to serve when a directory reference is requested. For example, you can specify that the default document is index.html. That way, if / is requested, it will serve up /index.html. But this is a per-bucket setting, so you cannot have different rules for different folders. You could modify your Jekyll configuration to generate subpages/entry1/index.html from subpages/entry1.html. This way, your URLs will continue to work.
|
I have a Jekyll-generated static HTML page that I use as my homepage. Currently I am trying to migrate it from a traditional hosting service to AWS S3. So far I have managed to publish all of my files to a bucket and enable website hosting, but when it comes to browsing, the page is broken. Basically subpages/ is not rewritten into subpages/index.html and subpages/entry1 is not rewritten into subpages/entry1.html. Earlier I used an .htaccess config like this one:
Options +FollowSymlinks
RewriteEngine On
RewriteRule ^(.*subpages/[^.]+)/?$ $1.html
RewriteRule ^(.*subpages2/[^.]+)/?$ $1.html
to rewrite it as intended. How could such behavior be recreated with S3 routing rules? The documentation presents a rather limited set of config options in this regard and does not give examples of how such a scenario could be achieved.
|
AWS S3 routing rules for appending .html or index.html
|
Solved! It appears that the tutorial is outdated. I needed to update wercker.yml to work with Wercker v2. To do this, I changed box: wercker/ruby to box: ruby.
|
I have a Jekyll blog which I'm trying to push to an AWS S3 bucket. I have followed this tutorial. The build keeps failing. Wercker gives me the following error message:
Build failed on master: setup environment: GET https://registry.hub.docker.com/v1/repositories/wercker/ruby/images returned 404
It then displays my wercker.yml file:
box: wercker/ruby
no-response-timeout: 10
build:
steps:
- bundle-install
- script:
name: Run Jekyll doctor
code: bundle exec jekyll doctor
- script:
name: Build Jekyll site
code: bundle exec jekyll build --trace
deploy:
steps:
- s3sync:
key_id: $AWS_ACCESS_KEY_ID
key_secret: $AWS_SECRET_ACCESS_KEY
bucket_url: $AWS_BUCKET_URL
source_dir: _site/
opts: --acl-public --add-header=Cache-Control:max-age=3600
I'm out of my depth here. Google is only returning other Wercker pages with the same error message. What is causing the error? What steps do I need to take to fix this? Here's a link to the error page itself. Any help would be appreciated! Thanks.
|
Wercker: Build failing on 'set up environment'. Why?
|
Looks like ConfigurationEndpoint.Address is only supported for Memcached clusters, not for Redis. Please see this relevant discussion in the AWS forums. Also, the AWS Auto Discovery docs (still) state: "Note: Auto Discovery is only available for cache clusters running the Memcached engine. Redis cache clusters are single node clusters, thus there is no need to identify and track all the nodes in a Redis cluster." Looks like your best solution is to query the individual endpoint(s) in use, in order to determine the addresses to connect to, using AWS::CloudFormation::Init as suggested in the AWS forums thread.
UPDATE: As @slimdrive pointed out below, this IS now possible, through AWS::ElastiCache::CacheCluster. Please read further below for more details.
|
I need to create a CNAME record for an ElastiCache cluster. However, I built a Redis cluster and there is only one node. As far as I can tell there is no ConfigurationEndpoint.Address for a Redis cluster. Is there any way to change the DNS name for the node in the cluster, and how do I do it? Currently the template looks like:
"ElastiCahceDNSRecord" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to ElastiCache",
"RecordSets" : [{
"Name" : "elche01.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "myelasticache", "ConfigurationEndpoint.Address" ]
}
]
}]
}}
|
change ElastiCache node DNS record in cloud formation template
|
If the instance is EBS-based, you can do the following:
1. Get a correct copy of the authorized_keys file ready. Get it off another one of your instances, or reconstruct it from whole cloth, or grab it off a snapshot, or use a new pem file, or whatever.
2. Stop the instance you can't reach (do not terminate it). This step is unavoidable. If you can't stop the instance because it's running something important, you're SOL.
3. Detach the root volume from the stopped instance. It should be something like /dev/sda1. Be sure to give it a name so you can find it in your volume list.
4. Attach it to a different instance at another mount point, say /dev/sdp.
5. Mount the volume into a tmpdir on that instance, say with mkdir /tmp/myrootvol && mount /dev/xvdp /tmp/myrootvol. Note the device name will vary based on your version of Linux (if you're using Linux at all). Much older versions will use different nomenclature.
6. At this point, you've got a filesystem, a root volume, mounted at /tmp/myrootvol. Fix the authorized_keys file, then unmount the device, and detach the volume.
7. Reattach the volume to the original instance at /dev/sda1 or whatever device name it was originally attached at.
8. Start that instance back up.
There you go. You'll have an accessible EC2 instance. But wow, that was a pain, wasn't it?
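The stop/detach/attach steps can also be scripted; below is a rough boto3 sketch with placeholder IDs (the mount and the authorized_keys edit still happen by hand on the rescue instance):
# Script the EC2-side steps of the recovery procedure above.
import boto3

ec2 = boto3.client('ec2')
broken_instance = 'i-0123456789abcdef0'   # placeholder
rescue_instance = 'i-0fedcba9876543210'   # placeholder
root_volume = 'vol-0123456789abcdef0'     # placeholder: root volume of broken instance

ec2.stop_instances(InstanceIds=[broken_instance])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[broken_instance])

ec2.detach_volume(VolumeId=root_volume, InstanceId=broken_instance)
ec2.get_waiter('volume_available').wait(VolumeIds=[root_volume])

# Attach to the rescue instance, fix authorized_keys by hand, unmount,
# then detach again and reattach to the original instance at /dev/sda1.
ec2.attach_volume(VolumeId=root_volume, InstanceId=rescue_instance, Device='/dev/sdp')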
|
I accidentally overwrote the entries in .ssh/authorized_keys.
Now I am no longer able to connect to my EC2 instance using my .pem file.
I tried to generate a new .pem file, hoping that process will add entries to .ssh/authorized_keys, but it didn't. I tried to read the documentation, but it is slightly confusing for me.
Someone who can give a simplified explanation/instructions on this would be much appreciated. Unfortunately, there are no active SSH sessions. :(
|
Amazon EC2: How to restore ~/.ssh/authorized_keys file?
|
Do you actually have a file named "tickets" that you are trying to import? It sort of looks like you are trying to pass in a table name. Based on the documentation, I think you need to rename Data_2014_1.csv to tickets.csv and then run the following command:
"C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqlimport.exe" -h myhostname.amazonaws.com -P 3306 -u admin -pmypassword --local --fields-terminated-by=, --lines-terminated-by="\r\n" ticketsdb tickets.csv
|
I have 4 csv files that I want to import into my AWS MySQL database. I am trying to use the following command from a Windows machine:
"C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqlimport.exe" -h myhostname.amazonaws.com -P 3306 -u admin -pmypassword --local --fields-terminated-by=, --lines-terminated-by="\r\n" ticketsdb tickets "Data_2014_1.csv"
The response that I get is this:
mysqlimport: [Warning] Using a password on the command line interface can be insecure.
mysqlimport: Error: 2, File 'tickets' not found (Errcode: 2 - No such file or directory), when using table: tickets
|
import csv file to mysql remote server using mysqlimport
|
If you're using a single node and have SQL access to the cluster (e.g. via psql), you can run:
select
sum(capacity)/1024 as capacity_gbytes,
sum(used)/1024 as used_gbytes,
(sum(capacity) - sum(used))/1024 as free_gbytes
from
stv_partitions where part_begin=0;
This article has more: https://www.flydata.com/blog/querying-free-disk-space-on-redshift/
|
A question from a Redshift newbie: I copy data using AWS Data Pipeline but it FAILED and the log said:
"ERROR: Disk Full Detail:
----------------------------------------------- error: Disk Full code: 1016 context: node: 0 query: 2070045 location: fdisk_api.cpp:343
process: query0_49 [pid=15048]"
I'd like to know how we could check whether Redshift is really disk full via the CLI or web console. Any comments or hints would be appreciated.
|
How to verify that Redshift are really DISK FULL?
|
You have to download the file to the server where PHP is running first. S3 uploads are only for local files - which is why $_FILES["files"]["tmp_name"] works - it's a file that's local to the PHP server.
|
Here is my code, which works for form uploads (via $_FILES) (I'm omitting that part of the code because it is irrelevant):
$file = "http://i.imgur.com/QLQjDpT.jpg";
$s3 = S3Client::factory(array(
'region' => $region,
'version' => $version
));
try {
$content_type = "image/" . $ext;
$to_send = array();
$to_send["SourceFile"] = $file;
$to_send["Bucket"] = $bucket;
$to_send["Key"] = $file_path;
$to_send["ACL"] = 'public-read';
$to_send["ContentType"] = $content_type;
// Upload a file.
$result = $s3->putObject($to_send);
As I said, this works if $file is a $_FILES["files"]["tmp_name"] but fails if $file is a valid image URL, with: Uncaught exception 'Aws\Exception\CouldNotCreateChecksumException' with message 'A sha256 checksum could not be calculated for the provided upload body, because it was not seekable. To prevent this error you can either 1) include the ContentMD5 or ContentSHA256 parameters with your request, 2) use a seekable stream for the body, or 3) wrap the non-seekable stream in a GuzzleHttp\Psr7\CachingStream object. You should be careful though and remember that the CachingStream utilizes PHP temp streams. This means that the stream will be temporarily stored on the local disk.' Does anyone know why this happens? What might be off? Tyvm for your help!
|
Can't upload image to S3 bucket using direct url of image
|
The reason is that S3 actually has a flat structure. There are no folders, but it recognizes the forward slashes and groups objects having the same prefix under the same "folder". So in your example "/BF/MUSIC" would be just another object, not an empty folder.

In Amazon S3, buckets and objects are the primary resources, where
objects are stored in buckets. Amazon S3 has a flat structure with no
hierarchy like you would see in a typical file system. However, for
the sake of organizational simplicity, the Amazon S3 console supports
the folder concept as a means of grouping objects. Amazon S3 does this
by using key name prefixes for objects.

Source: AWS Documentation: Working with Folders
|
I use the ListObjects function without any delimiters and as a result I have something that looks like:

/BF
/BF/FTP
/BF/MUSIC/LIBRARY/AUDITION/BEAKING%20EARLY.MP3
/BF/VIDEO/
/BF/VIDEO/Example
/BF/VIDEO/Example/test.mp4

The problem is in the music folder. Why doesn't ListObjects return an S3Object with key "/BF/MUSIC"? There are many S3Objects with the same problem. Why is that happening?
|
AWS S3 - ListObjects returns incomplete directory listings
|
TL; DR: There can only be one Worker per Shard. Any additional Workers will sit idle.If you have a Kinesis stream with two shards, and you run an app on a single instance that leverages the KCL, the app will run two workers in separate threads-- one Worker per Shard (per thread).If you run two instances, your app will run a single Worker on each instance in a thread-- two instances, one worker each; one Kinesis stream, two shards.Each worker takes out a lease against a shard in a stream so no other worker of the same app can read the same shard. The Worker stores the lease information in Dynamo DB so other Workers can read it.If you were to run 3 instances in this scenario, one of the instances would sit around waiting for a Worker on one of the other instances to lose its lease. Once one of the other Workers loses its lease, the third Worker could pick up the stream and begin processing.
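If you want to see the lease assignments yourself, note that the KCL stores them in a DynamoDB table named after your KCL application name. As a rough sketch (the table name and region below are placeholders for your own values), you can inspect that table with the AWS CLI:

aws dynamodb scan --table-name myKclApplicationName --region us-east-1

Each item should show, among other attributes, a leaseKey (the shard) and a leaseOwner (the worker currently holding the lease).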
|
The AWS Kinesis stream documentation mentions:

Typically, when you use the KCL, you should ensure that the number of instances does not exceed the number of shards

What would be the consequence if the number of instances exceeds the number of shards? I plan on running one worker per web server (separate thread). So I want to know whether it is required to check and compare the number of shards and running workers when a new web server instance is started. Or can one just start another worker without any side effect if the number of workers exceeds the number of shards.
|
What happens if the number of workers is > number of shards when using KCL with AWS Kinesis streams?
|
I found the solution. The problem was that in the POST request you need to send your body as a string and not a JSON object, and that string needs to be formatted correctly, i.e. '{"key1": "val1","key2": 22,"key3": 15,"key4": "val4"}', like so:

function post() {
$.ajax({
url: "https://myapi.us-east-1.amazonaws.com/myStage/myPath",
type: "POST",
data: '{"key1": "val1","key2": 22,"key3": 15,"key4": "val4"}',
mozSystem: true,
dataType: "text", //set to"json" usually
success: function (result) {
switch (result) {
case true:
processResponse(result);
break;
default:
resultDiv.html(result);
}
},
error: function (xhr, ajaxOptions, thrownError) {
alert('error ' + xhr.status + ' ' + thrownError);
alert(thrownError);
}
});
};
|
I have a simple POST method connected to a Lambda function in my AWS API Gateway. When I perform a test (via the API Gateway console) everything works fine and the test gets the results I am looking for. It's simple - post a JSON, get back a JSON.

However, after deploying the API and then sending the same JSON used in the test (via HTTP POST), I am getting 'Could not parse request body into json'.

Does anyone know what I may be doing wrong?

Note: I am not looking to use models, I just want to pass through the JSON. I believe that when Amazon writes things like 'input passthrough' they mean that the input can pass through to the Lambda function.

Here are images of my API Gateway setup (screenshots of the Method Request, Integration Request, Method Response, and Integration Response).
|
AWS API Gateway - Error when calling API method - 'Could not parse request body into json'
|
You need to change Tags[?Key==`Name`].Value[] to Tags[?Key==`Name`].Value[] | [0]; I think it's because Tags[?Key==`Name`].Value[] returns an array, which the text output format doesn't know how to put on a single line, and piping to [0] extracts the (single) element out for you. So your full query should be:

aws ec2 describe-instances --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value[] | [0], Placement.AvailabilityZone,InstanceType,State.Name]' --output text
|
How do I use the AWS CLI to list all instances with name, state, instance size and AZ on the same line? I got close with this:

aws ec2 describe-instances --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value[], Placement.AvailabilityZone,InstanceType,State.Name]' --output text

But that outputs the instance name below the rest. I want to keep them on the same line so I can copy to a spreadsheet.
|
How do I use AWS CLI to list all instances with name, state, instance size and AZ in the same line
|
Add this to your application.rb or to the config file for each environment:

config.paperclip_defaults = {
:storage => :s3,
:s3_host_name => 's3-eu-central-1.amazonaws.com',
:s3_credentials => {
:bucket => 'your bucket',
:access_key_id => 'your access-key-id',
:secret_access_key => 'your secret-access-key'
},
:url =>':s3_domain_url',
:path => '/:class/:attachment/:id_partition/:style/:filename'
}

You can then remove the :url and :path config from your model.
|
I uploaded images to Amazon S3, but they're not displaying. I get this error:

<Error>
<Code>PermanentRedirect</Code>
<Message>
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
</Message>
<Bucket>cb123</Bucket>
<Endpoint>cb123.s3.amazonaws.com</Endpoint>
<RequestId>870BC2E8570EF4E7</RequestId>
<HostId>
yuBkeXxftr7O9Ib0SasFTq8Hlvgc7hkhx9VMr+VwRL74qSDgJ9rqMgEU9noRIQe/
</HostId>
</Error>

Here are my settings:

has_attached_file :image, styles: { medium: "400x400#", small: "250x250#", :url =>':s3_domain_url', :path => '/:class/:attachment/:id_partition/:style/:filename' }

Thank you!
|
Rails paperclip doesn't display uploaded images from Amazon S3
|
Amazon's permissions model divides API Gateway permissions into two services:

Amazon API Gateway - permissions for clients; currently the only action is execute-api:invoke.
Manage - API Gateway - admin permissions for configuring the API Gateway, which has CRUD actions fitting the apigateway:* spec.

The policy you have applies to the Manage API Gateway service, so the simulation should work if you select that.

This same separation is visible in the regular IAM policy wizard, where "Manage - API Gateway" sorts to the bottom of the service list where you can't see it.
|
The AWS IAM policy docs (shown here) indicate that the following policy gives a role full access to hit the API Gateway:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"apigateway:*"
],
"Resource": [
"*"
]
}
]
}

When simulating that policy with API Gateway as the target, the policy denies access. This seems like a direct contradiction of the provided documentation.
|
AWS API Gateway IAM Policy Role in Docs Fails in Simulation
|
You can achieve what you want with a custom filter plugin. Create a directory in the root of your playbook called filter_plugins and create a file in there called make_rules.py with the following contents:

def make_rules(hosts, ports, proto):
return [{"proto": proto,
"from_port": port,
"to_port": port,
"cidr_ip": host} for host in hosts for port in map(int, ports.split(","))]
class FilterModule(object):
def filters(self):
return {'make_rules': make_rules}

Then you can do this:

- hosts: localhost
gather_facts: False
vars:
ip_addresses:
- 1.2.3.4/32
- 2.3.4.5/32
tasks:
- ec2_group:
name: security-group-name
description: Security group description
vpc_id: vpc-1234567
region: us-east-1
profile: profile-name
purge_rules: true
rules: "{{ ip_addresses | make_rules('123', 'tcp') }}"

Taken from: https://gist.github.com/viesti/1febe79938c09cc29501
|
I was trying to use Ansible to add IP addresses to an AWS security group. I came up with a task syntax that looks like this:

- hosts: localhost
gather_facts: False
vars:
ip_addresses:
- 1.2.3.4/32
- 2.3.4.5/32
tasks:
- ec2_group:
name: security-group-name
description: Security group description
vpc_id: vpc-1234567
region: us-east-1
profile: profile-name
purge_rules: false
rules:
- proto: tcp
from_port: 123
to_port: 123
cidr_ip: "{{ item }}"
with_items: ip_addresses

This does not do exactly what I was looking for, as it basically runs the ec2_group task multiple times instead of just looping over the rules.

This also does not work if I set purge_rules to true, as it will then purge all existing rules on each iteration, effectively removing all but the last IP address on the list.

I'm wondering if there is something similar to with_items that I can apply to the rules attribute to provide it a list of IP addresses while calling the ec2_group task only once?
|
Adding and removing multiple IP address to AWS security group using Ansible
|
I believe you're asking the same thing as Amazon Cloudsearch: Filter if exists. To summarize the options from there:

Add a new boolean field called 'has_target_date'
Set a default target_date (e.g. 1/1/1970) to mean that it doesn't exist
The hack: (range field=target_date [0,})

Any of those options should work with the QUERY_GOES_HERE portion of your question.
|
Is there a CloudSearch structured query to return results that do not have a value within a field? For example, I have a field called target_date that does not always have a value, and I want to return all results with no target_date. This field is not zero'd out or set to a default; it doesn't exist at all for items without the date.

There is another case too. I need to return all results after a target_date AND include any results without an existing date. The structured query I am using is target_date:['2000-03-03T00:00:00Z',}. The query to find non-existing dates should work with an and operator, like:

(and target_date:['2000-03-03T00:00:00Z',} [QUERY_GOES_HERE])
|
CloudSearch - Return results when a field does not exist
|
If you're asking this question you have probably taken AWS as far as you can go with the provided code sample. I have found most of the async upload functionality provided by AWS to be more theoretical, or better suited for limited use cases, than production ready for the mainstream - especially for end users with all those browsers and operating systems :)

I would recommend rethinking the design of your program: create your own C# upload turnstile and keep the AWS SDK upload functions running as a background process (or sysadmin function) so that AWS servers are handling only your server's time.
|
I am currently building an application in C# that makes use of the AWS SDK for uploading files to S3. However, I have some users who are getting the "Request time too skewed" error when the application tries to upload a file.

I understand the problem is that the user's clock is out of sync, but it is difficult to expect a user to change this, so I was wondering: is there any way to get this error not to occur (any .NET functionality to get accurate time with NTP or the like)?

Below is the current code I am using to upload files.

var _s3Config = new AmazonS3Config { ServiceURL = "https://s3-eu-west-1.amazonaws.com" };
var _awsCredentials = new SessionAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken);
var s3Client = new AmazonS3Client(_awsCredentials, _s3Config);
var putRequest = new PutObjectRequest
{
BucketName = "my.bucket.name",
Key = "/path/to/file.txt",
FilePath = "/path/to/local/file.txt"
};
putRequest.StreamTransferProgress += OnUploadProgress;
var response = await s3Client.PutObjectAsync(putRequest);
|
S3 Request time too skewed
|
As Eric mentioned, Lambda currently doesn't offer a REST endpoint to run the function and return its result, but it may in the future.

Right now, your best bet would be to use a library like lambdaws, which wraps the function deployment and execution for you and handles returning results via an SQS queue. If you'd like more control by rolling your own solution, the process is straightforward:

Create an SQS queue
Have your Lambda function write its result to this queue
In your client, poll the queue for a result (see the sketch below)
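As a rough sketch of that last polling step (the queue URL below is a placeholder), a client could long-poll the result queue with the AWS CLI:

aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/lambda-results --max-number-of-messages 1 --wait-time-seconds 20
aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/lambda-results --receipt-handle <receipt-handle-from-previous-call>

The same receive/delete pattern applies if you poll with the SDK from Node.js instead of the CLI.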
|
I am using the test function from the AWS console:

console.log('Loading event');
exports.handler = function(event, context) {
console.log('value1 = ' + event.key1);
console.log('value2 = ' + event.key2);
console.log('value3 = ' + event.key3);
context.done(null, 'Hello World'); // SUCCESS with message
};

And calling it in Node.js as follows:

var params = {
FunctionName: 'MY_FUNCTION_NAME', /* required */
InvokeArgs: JSON.stringify({
"key1": "value1",
"key2": "value2",
"key3": "value3"
})
};
lambda.invokeAsync(params, function(err, data) {
if (err) {
// an error occurred
console.log(err, err.stack);
return cb(err);
}
// successful response
console.log(data);
});

and everything works fine:

//Console Output
{ Status: 202 }

But I was expecting to receive the message from context.done(null, 'Message') as well... Any idea how to get the message?
|
AWS Lambda get context message
|
Both approaches can work. Here's why I would pick the Reports API:

Reports are more scalable. I believe MWS reports can return an unlimited number of records. ListOrders can only return a maximum of 100 orders. You can get more using ListOrdersByNextToken, but that brings throttling into the problem and it is not clear whether you're just paging by an offset (which could cause lost/duplicate orders) or whether it is a snapshot.
You can acknowledge reports and filter on unacknowledged reports. Orders can be acknowledged too, but I don't think there is a way of filtering ListOrders based on acknowledgement status.
Reports can be scheduled to auto-generate on an interval, as often as every 15 minutes. This means that it may not be as many calls as you think: really, it's only three every interval: one to list unacknowledged order reports, one to pull the report you want and one to acknowledge it.
|
I'm trying to integrate the orders from Amazon Marketplace into our system. I did that before with Magento and thought this should be as easy as that, but somehow I got stuck. I downloaded the Java APIs from Amazon and started playing around with the examples. So far so good - I was able to get them running.

But playing with the Reports API and the Orders API, I started to wonder which one to use if I only want to get the unshipped orders to put them into our system.

1. Doing this with the Reports API seems very complicated and involves a lot of calls to MWS. This is documented by Amazon here.
2. Using the Orders API seems pretty straightforward. I only have to create a ListOrdersRequest, define what type of orders I want, and finally get them via a ListOrders call.

So my question is: what is the reason to choose the Reports API over the Orders API? It seems like Amazon is recommending the Reports API, but I really do not understand why this should be so complicated. Why should I get reports when I can get the orders directly?
|
Amazon-MWS: Difference between Reports and Order lists
|
Here is a small sample, if you need one, of the _POST_PRODUCT_PRICING_DATA_ feed type on MWS (here it's for amazon.co.uk):

<?xml version="1.0" encoding="utf-8"?>
<AmazonEnvelope xsi:noNamespaceSchemaLocation="amzn-envelope.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Header>
<DocumentVersion>1.01</DocumentVersion>
<MerchantIdentifier>YOUR_ID</MerchantIdentifier>
</Header>
<MessageType>Price</MessageType>
<Message>
<MessageID>1</MessageID>
<Price>
<SKU>YOUR_SKU</SKU>
<StandardPrice currency="GBP">30.75</StandardPrice>
<MinimumSellerAllowedPrice currency="GBP">20</MinimumSellerAllowedPrice>
<MaximumSellerAllowedPrice currency="GBP">40</MaximumSellerAllowedPrice>
</Price>
</Message>
</AmazonEnvelope>
|
Is it possible to set the minimum-seller-allowed-price and maximum-seller-allowed-price of products via flat file AND submit it as a feed via the MWS API?

Sellers will have to specify a min and max price for all items from 15th Jan 2015 onwards, viz:

"With effect from January 14, 2015, you will not be able to use the Seller Central preferences to select a blanket "opt-out" from all potential low and high-pricing error rules. The aim is to reduce price error risks to sellers and avoid potentially negative buyer experiences. Instead, you will need to set a minimum and maximum allowed selling price for each product in your inventory. If you do not chose pricing limits for each product, Amazon's default potential pricing error rules will apply to your products...."

So, from reading "https://sellercentral-europe.amazon.com/gp/help/201141430", this implies that it can be done via a "Price & Quantity Inventory" file. However, the solution that I'm after needs to be done via the MWS API.

For normal price feeds, I'd set the feed type to _POST_PRODUCT_PRICING_DATA_ too.

I don't think that you can set the min and max prices via XML, as the price feed XSD does not contain a definition for these fields (not that I can find anyway).

Sai.
|
How to set min and max prices for products using Amazon MWS API
|
You can configure an SNS topic which will get a message when there is an upload to the S3 bucket. Then subscribe all the SQS queues to that SNS topic. See this.
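For example (the topic and queue ARNs below are placeholders), subscribing each queue to the topic with the AWS CLI might look like:

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:s3-uploads --protocol sqs --notification-endpoint arn:aws:sqs:us-east-1:123456789012:queue-one
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:s3-uploads --protocol sqs --notification-endpoint arn:aws:sqs:us-east-1:123456789012:queue-two

You will also need a policy on each queue that allows the SNS topic to send messages to it.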
|
I want to be notified when a file is uploaded to my S3 bucket. I know I can have an SQS message or an SNS notification. What I need is a message sent to multiple SQS queues. Is it possible?
|
s3 - file uploaded - message in multiple sqs queues
|
// Set the AWS credentials provider to use Facebook's auth token
let credentialProvider = AWSCognitoCredentialsProvider(
regionType: CognitoRegionType,
identityPoolId: CognitoIdentityPoolId)
let logins: NSDictionary = NSDictionary(dictionary:
["graph.facebook.com" : self.fbToken])
credentialProvider.logins = logins as [NSObject : AnyObject]
credentialProvider.refresh()
let configuration = AWSServiceConfiguration(
region: DefaultServiceRegionType,
credentialsProvider: credentialProvider)
AWSServiceManager.defaultServiceManager().defaultServiceConfiguration = configuration

Where self.fbToken is the Facebook token, and CognitoRegionType, CognitoIdentityPoolId, and DefaultServiceRegionType are all defined constants.
|
I am trying to create a sample iOS application that lists an S3 bucket after logging in with Facebook using Amazon Cognito. Unfortunately I cannot find any examples in Swift for Cognito authentication through Facebook. The one provided in the example doesn't take care of the authentication part. Can anybody provide sample code for this? It's too bad that Amazon doesn't even provide a good example in Swift covering the major services.
|
Example for cognito login using SWIFT & Facebook
|
This higher-level Node S3 library has a downloadDir function which syncs a remote S3 bucket with a local directory: https://github.com/andrewrk/node-s3-client#clientdownloaddirparams
|
Currently I can iterate over all files in a bucket and download them one by one, using the Node.js SDK. I need to download all files from an S3 bucket at once, then remove them from the bucket.
|
AWS S3 Node.js SDK: Need to Download all files from one bucket at once
|
As to #1, if you create an AMI (Amazon Machine Image), you can have everything you want pre-installed on a 'hibernating' image that you can use as a basis for the spot instance you start: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances-getting-started.html

For #2, you can be notified when a spot instance terminates using SNS:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-autoscaling-notifications.html
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html

BTW: You can be notified that the instance was terminated, but only after it terminates. You can't get notified that an instance is about to be shut down and gracefully save the state - you need to engineer your solution to be OK with unexpected shutdowns.

No matter how high you bid, there is always a risk that your Spot
Instance will be interrupted. We strongly recommend against bidding
above the On-Demand price or using Spot for applications that cannot
tolerate interruptions.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-protect-interruptions.html
|
I have a long batch job that I'd like to run on AWS EC2 Spot Instances to save money. However, I can't find the answer to two seemingly critical questions:

When a new instance is created, I need to upload the code onto it, configure it, and run the code. How does that get done for Spot Instances, which are created automatically and unattended?
When an instance is stopped, I would prefer having some type of notification, so that the state could be saved. (This is not critical, as the batch job will run fine if terminated suddenly - but a clean shutdown is preferred.)

What is the standard way to deploy spot instances? Is there a way to do manual setup, turn it into a spot instance, and then let it hibernate until the spot price is available?
|
EC2 Spot instances: How to start tasks, how to stop them?
|
I wouldn't recommend using your home computer as a web server. Here are the steps it takes to get a Java web app up and running, exposed to the internet:

Buy a domain name from a registrar
Find a hosting provider that gives you some sort of Linux VM (CentOS, Debian, RHEL, etc). Lowendbox has some cheap ones. AWS is more expensive but you'll get the same thing
Modify the DNS for wherever you registered to point to the IP address of the VM you just rented
ssh to your VM and install Java, as well as all the dependencies for your application server (Tomcat, JBoss, Netty, etc) via the command line
Most of these servers run at port 8080 by default, so you will need to find a way to reroute requests from 80 to 8080 (do not run your server directly on 80). It is best to let Apache run on 80 and forward the requests to 8080, depending on the server that you run (see the example after this list)
Deploy your application
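For the port-rerouting step, one common approach on a Linux VM (assuming iptables is available; this is just a sketch, not the only option) is a NAT redirect:

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

This lets the application server keep listening on 8080 as an unprivileged user while incoming traffic on port 80 still reaches it.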
|
I am a newbie to Java web services and need help understanding how to host a web service on a web server. I successfully created a web service and I am pointing to "localhost" in my home network to hit the service and get the response. Now I want to push the service out to the internet so that the web service becomes public and clients can start using it. But I am not sure about hosting and how that process happens. Though I searched online, I was not able to get a clear step-by-step guide. Could someone here help me please? Thanks.

Here are the details: any heads up on Amazon Web Services or converting my home computer into a server would be very useful!
|
Hosting java webservice on Live server
|
You can, but the nodes in the other regions are read replicas.

You can create one or more replicas of a given source [MySQL] DB Instance within an AWS Region or across AWS Regions and serve high-volume application read traffic from multiple copies of your data [...]

Multi-AZ Deployments and Read Replicas use different underlying replication technologies suited to their respective purposes. However, you can use them together for reliable, scalable production deployments.

Read replicas have limitations, as you might imagine. You can't write to the replicas (obviously), and there's replication lag, which might lead to data loss if the source database goes down (which is where Multi-AZ helps you). The RDS FAQ has some discussion.
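If you want to script it, creating a cross-region read replica can be done with the AWS CLI along these lines (the identifiers and source ARN are placeholders, and the exact options may vary by CLI version):

aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica-eu --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydb --region eu-west-1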
|
Is it possible to have an RDS MySQL Multi-AZ database instance that spans across two regions?
|
AWS RDS Multi-AZ across regions?
|
You won't get a CloudFront URL from S3; it's a different service. If you're using putObject then you already know the file path (the value specified in Key).

Just return the CloudFront domain in front of the file path, e.g.:

$filePath = '/path/file.jpg';
$client->putObject(array(
'Bucket' => 'mybucket',
'Key' => $filePath,
'SourceFile' => $fileSource,
'ACL' => 'public-read'
));
return 'https://d111111111ck.cloudfront.net' . $filePath;
|
I'm uploading a file to an S3 bucket for which I've created a CloudFront distribution. I'm using the Aws\S3\S3Client class.

After uploading with putObject, the response object and the getObjectUrl method both return the object's URL as https://s3-eu-west-1.amazonaws.com/mybucket/path/myfile.jpg. I am trying to get the CloudFront URL, which would be something like https://d111111111ck.cloudfront.net/path/myfile.jpg.

Is there any way to get this URL directly, or do I have to build it from my distribution hostname and file path?
|
How do I return Cloudfront url after S3 upload?
|
Try this rule instead:

RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteCond %{SERVER_NAME} ^(www\.)?(.*)$ [NC]
RewriteRule ^/?(.*)$ https://%2/$1 [L,R=301]
|
I currently have this in place to redirect all http traffic to https:

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !=https
RewriteRule ^/(.*)$ https://%{SERVER_NAME}/$1 [R=301,L]

The above is working OK. Now I'm trying to add a rewrite condition to force all https://www traffic to https without the www.

Please note that this is an AWS Elastic Beanstalk environment running Apache behind an Elastic Load Balancer.

EDIT: Working code:

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !=https [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteCond %{SERVER_NAME} ^(www\.)?(.*)$ [NC]
RewriteRule ^/?(.*)$ https://%2/$1 [L,R=301]
|
Redirect all www and http ==>> non www https on AWS Elastic Beanstalk with Load Balancer
|
I realise that this is an old thread, but in case anyone comes across this as I did, check out this thread on the AWS forums for Elastic Beanstalk: https://forums.aws.amazon.com/thread.jspa?messageID=395052#395052

It explains how settings set in the .elasticbeanstalk/optionsettings file are set using the API in such a way that they can't be changed later, unlike those set in the .ebextensions/*.config files.

Also, in an incredibly annoying move, the optionsettings file will often contain settings which you want to set in the .config file, yet it is automatically re-created when running eb start, and there's very little that can be done about it. This makes the eb command line tools close to impossible to use if you want to change something like the WSGIPath.
|
While trying to set up an Elastic Beanstalk worker application using the command line tools (eb tools), my configuration file (optionsettings.MyApp-env) gets overwritten when I start/update/stop the environment.

These are the steps to reproduce:

Using the CLI tools' command eb init I created a new application in Elastic Beanstalk.
The config file in the .elasticbeanstalk folder had the following line: OptionSettingFile=/Users/doron/projects/workers/my-worker/.elasticbeanstalk/optionsettings.MyWorkerName-dev
After running eb start for the first time, that file was created with some values.
I went ahead and changed its contents according to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html so it would be configured as I want (environment parameters, autoscaling server count, etc.).
To apply the changes I've tried the following: update the existing environment with eb update, or terminate the existing environment with eb stop and build it from scratch with eb start.
In both cases the optionsettings file gets changed after running the command (update or start).

The new content of the file looks more like the vanilla version I got after calling the first eb start, with all the configuration parameters that I added removed completely.

Is there another way of configuring the environment (not the software on the machine, but the configuration that exists in the console - instance type, regions, autoscaling, rotating updates, etc.)?
Elastic Beanstalk optionsettings file keep getting overwritten with default parameters
|
You cannot currently receive replies to SMS messages with Amazon SNS. We would like to provide expanded SMS support (more AWS regions, more geographic coverage, more functionality), but unfortunately don’t have timing to share.
|
I am trying to send a message with this sample code using the SNS API:

BasicAWSCredentials cr = new BasicAWSCredentials("MYACCESSKEYS","mySecretKeys");
AmazonSimpleNotificationService sns = new AmazonSimpleNotificationServiceClient(cr);
string topicArn = sns.CreateTopic(new CreateTopicRequest
{
Name = "ses-bounces-topic",
}).CreateTopicResult.TopicArn;
sns.SetTopicAttributes(new SetTopicAttributesRequest
{
TopicArn = topicArn,
AttributeName = "MyName",
AttributeValue = "Sample Notifications"
});
sns.Subscribe(new SubscribeRequest
{
TopicArn = topicArn,
Protocol = "SMS",
Endpoint = "my-mobile-number"
});
ListSubscriptionsByTopicResult ls = sns.ListSubscriptionsByTopic(new ListSubscriptionsByTopicRequest
{
TopicArn = topicArn
}).ListSubscriptionsByTopicResult;
sns.Publish(new PublishRequest {
TopicArn=topicArn,
Subject="MySms",
Message="Testing Message"
});

This code is working fine to send a message to my mobile. I have successfully sent a message to an SMS-enabled mobile device. Is there any way to get the user's reply if he/she sends one back? Please guide me if we can get the user's reply using any API request.

Thanks in advance!!
|
How can I get the reply to an SMS that was sent to a mobile device?
|
You need to ensure that the server is running on 0.0.0.0 if you need it to be reachable by addressing any IP of the instance.
|
I have a web server running on an Ubuntu Amazon EC2 instance at port 3000.15.0.0.10is the private ip of this EC2 instance.After I ssh into this instance and run the following commandcurl localhost:3000/index.html, it returns me the html source of my index.html page.But when I runcurl 15.0.0.10:3000/index.html, it says :curl: (7) couldn't connect to hostWhy is this happening ?What can I do to make the second command also return the content?
|
Curl amazon EC2 instance
|
I believe you have two options to handle situations where new EC2 instances are being created automatically for you:

Either create a custom AMI for your EC2 instances, or
Customize your AWS EB environment.

Amazon issues a notice about using custom AMIs:

"After you are running on your own custom AMI, you will no longer receive any
automated updates to the operating system, software stack, or the AWS Elastic
Beanstalk host manager."

Personally, I've stuck to using configuration files. It takes a bit of tinkering, but once I got it working it operated pretty well.

Good luck!
|
I'm trying to figure out how to install and use GeoIP libraries on AWS (Elastic Beanstalk).
As far as I know EB has an "ephemeral filesystem", but I could store the GeoCity binary in S3… but what about the MaxMind C libraries? Has anyone configured EB to use MaxMind's API?

(My stack is based on Python/Django)
|
MaxMind GeoIP libraries and database on Amazon Elastic Beanstalk
|
I think a better test would be to avoid the initial costs/latency incurred in starting up the JVM and loading the classes. Something like:

public class TestDynamoDBMain {
public static void main(String[] args) {
try {
AWSCredentials credentials = new PropertiesCredentials(new File("aws_credentials.properties"));
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
DynamoDBMapper mapper = new DynamoDBMapper(client);
// Warm up
for (int i=0; i < 10; i++) {
testrun(mapper, false);
}
// Time it
for (int i=0; i < 10; i++) {
testrun(mapper, true);
}
} catch (Exception e) {
e.printStackTrace();
}
}
private static void testrun(DynamoDBMapper mapper, boolean timed) {
long start = System.nanoTime();
Model model = mapper.load(Model.class, "hashkey1", "rangekey1");
if (timed)
System.out.println(
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)
+ " (ms) to load Model");
}
}

Furthermore, you may consider enabling the default metrics of the AWS SDK for Java to see the fine-grained time allocation in Amazon CloudWatch. For more details, see: http://java.awsblog.com/post/Tx1O0S3I51OTZWT/Taste-of-JMX-Using-the-AWS-SDK-for-Java

Hope this helps.
|
I've been testing out DynamoDB as a potential option for a scalable and steady-throughput database for a site that will be hit pretty frequently and requires a very fast response time (< 50ms). I'm seeing pretty slow responses (both locally and on an EC2 instance) for the following code:

public static void main(String[] args) {
try {
AWSCredentials credentials = new PropertiesCredentials(new File("aws_credentials.properties"));
long start = System.currentTimeMillis();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
System.out.println((System.currentTimeMillis() - start) + " (ms) to connect");
DynamoDBMapper mapper = new DynamoDBMapper(client);
start = System.currentTimeMillis();
Model model = mapper.load(Model.class, "hashkey1", "rangekey1");
System.out.println((System.currentTimeMillis() - start) + " (ms) to load Model");
} catch (Exception e) {
e.printStackTrace();
}
}

The connection to the DB alone takes about 800 (ms) on average and the loading using the mapper takes an additional 200 (ms). According to Amazon's page about DynamoDB we should expect "Average service-side latencies...typically single-digit milliseconds." I wouldn't expect the full round-trip HTTP request to add that much overhead. Are these expected numbers even on an EC2 instance?
|
How to improve speed of DynamoDB requests
|
I had the same problem with running a Ruby script (ruby script.rb). I replaced ruby with its full path (/sources/ruby-2.0.0-p195/ruby) and it worked.

In your case, replace "aws" with its full path. To find it:

find / -name "aws"
|
I have a crontab that fires a PHP script that runs the AWS CLI command "aws ec2 create-snapshot".

When I run the script via the command line the PHP script completes successfully, with the aws command returning a JSON string to PHP. But when I set up a crontab to run the PHP script, the aws command doesn't return anything.

The crontab is running as the same user as when I run the PHP script on the command line myself, so I am a bit stumped.
|
aws-cli 1.2.10 cron script fails
|
We have a whenever cookbook in our repo that you would be more than welcome to use: https://github.com/freerunningtech/frt-opsworks-cookbooks. I assume you're familiar with adding custom cookbooks to your OpsWorks stacks.

We generally run it on its own layer that also includes the Rails cookbooks required for application deployment (while not being the app server):

Configure: rails::configure
Deploy: deploy::rails whenever
Undeploy: deploy::rails-undeploy

However, we usually also deploy this instance as an application server, meaning we do end up serving requests from the box we're using for whenever as well.

There is one "gotcha", in that you must set your path in the env at the top of the schedule.rb like this:

env :PATH, ENV['PATH']
|
Does anyone have experience/success using the whenever gem on AWS OpsWorks? Is there a good recipe? Can I put that recipe on a separate layer and associate one instance with that additional layer? Or is there a better way to do it? Thanks!!!

EDIT: We ended up doing it a bit differently... Code: can't really post the real code, but it's like this:

in deploy/before_migrate.rb:

[:schedule].each do |config_name|
Chef::Log.info("Processing config for #{config_name}")
begin
template "#{release_path}/config/#{config_name}.rb" do |_config_file|
variables(
config_name => node[:deploy][:APP_NAME][:config_files][config_name]
)
local true
source "#{release_path}/config/#{config_name}.rb.erb"
end
rescue => e
Chef::Log.error e
raise "Error processing config for #{config_name}: #{e}"
end
end

in deploy/after_restart.rb:

execute 'start whenever' do
cwd release_path
user node[:deploy][:APP_NAME][:user] || 'deploy'
command 'bundle exec whenever -i APP_NAME'
end

in config/schedule.rb.erb:

<% schedule = @schedule || {} %>
set :job_template, "bash -l -c 'export PATH=/usr/local/bin:${PATH} && :job'"
job_type :runner, 'cd :path && bundle exec rails runner -e :environment ":task" :output'
job_type :five_runner, 'cd :path && timeout 300 bundle exec rails runner -e :environment ":task" :output'
set :output, 'log/five_minute_job.log'
every 5.minutes, at: <%= schedule[:five_minute_job_minute] || 0 %> do
five_runner 'Model.method'
end
|
Whenever gem on aws opsworks
|
I can't think of any reason why the SDK code would cause your CPU to go so high. My first guess would be some sort of garbage collection issue. When you upload your data, are you passing in a File object to AmazonS3.putObject, or some sort of stream (including FileInputStream)? Streams can be a little tricky to deal with, since they aren't guaranteed to be repeatable and you have to explicitly provide the Content-Length in the ObjectMetadata as part of your upload; otherwise the SDK has to buffer your upload in memory to calculate the total length. That'd be the very first thing I'd recommend checking out.

On a side note, you should check out the TransferManager API in the SDK. It gives you a nice simple interface for uploading and downloading files to/from Amazon S3, and has several optimizations built in.

If that still doesn't turn up a clue, then I'd recommend making a dead simple repro case for this. Write a single class file that simply uploads a random File to the same S3 key, and leave that running for the same duration as your application code. If you're able to reproduce the problem in that simple setup, then we can take a look at the code and help get it debugged, but with all the other variables involved in your full application code, we can't do much more than guess at what could be happening.
|
I'm currently working on a server app (JEE) and running into problems uploading files to AWS S3. I'm using the Java SDK (S3client.putObject) to upload these files. When the server starts, everything happens as expected. Files are generated on the server (an EC2 instance) and uploaded to S3 in a few seconds. But after some days, the performance degrades a lot. Files that usually took 5 or 6 seconds to be uploaded now need 10 to 30 minutes (yes, minutes). I profiled the app and the culprit here is the section that does the upload using the AWS Java SDK. Strangely the CPU utilization goes near 100% and stays there for minutes. As this is basically an IO operation, I don't understand why it may need so many CPU cycles to run.
Has anyone ever experienced this behavior? Any tips on where to look?

PS: file size goes from 1 to 50 MB. Thanks a lot!

Updates:
The EC2 instance that creates the files and uploads them to S3 is an m1.large.
I'm using AWS SDK version 1.6.4.
|
Upload files to AWS S3 takes a lot of CPU
|
Check if the following hack works - add "HttpErrorCodeReturnedEquals=404" to the condition. I am expecting the following sequence of events to happen:

Access "www.example.com/index.html"
Get a 404 because the page doesn't exist
Your prefix condition along with the 404 matches a redirect rule
So "/blog" is added as a prefix
The "www.example.com/blog/index.html" page is returned

Reading the documentation I did not find a way to specify a negative condition like "KeyPrefixNotEquals" to avoid the recursion.
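A sketch of what the modified rule might look like (adapted from the rules in the question; not tested):

<RoutingRules>
<RoutingRule>
<Condition>
<HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
<ReplaceKeyPrefixWith>blog/</ReplaceKeyPrefixWith>
</Redirect>
</RoutingRule>
</RoutingRules>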
|
I'm using an S3 bucket to host a static site mydomain.com. Originally the blog content layout was

index.html
posts/article.html

Now I keep all the blog content inside a blog directory:

blog/index.html
blog/posts/article.html

I have enabled website hosting on the mydomain.com bucket. I would like to use S3's custom redirection rules to redirect old URLs which lacked the 'blog' prefix. For example, mydomain.com/index.html should redirect to mydomain.com/blog/index.html.

I've tried

<RoutingRules>
<RoutingRule>
<Condition>
<KeyPrefixEquals>/</KeyPrefixEquals>
</Condition>
<Redirect>
<ReplaceKeyPrefixWith>blog/</ReplaceKeyPrefixWith>
</Redirect>
</RoutingRule>
</RoutingRules>

and

<RoutingRules>
<RoutingRule>
<Condition>
<KeyPrefixEquals>mydomain.com/</KeyPrefixEquals>
</Condition>
<Redirect>
<ReplaceKeyPrefixWith>mydomain.com/blog</ReplaceKeyPrefixWith>
</Redirect>
</RoutingRule>
</RoutingRules>

but the first results in a redirection loop (not surprising) and the second does not work.
|
S3 Redirect base domain to key prefix folder
|
I just ran into the same issue today. After some debugging I figured out SES was being instantiated with the wrong server (I'm using EU whereas US is the default).
server: "email.eu-west-1.amazonaws.com",
access_key_id: PLEASE_REMOVE_YOUR_CREDENTIALS_FROM_QUESTION,
secret_access_key: PLEASE_REMOVE_YOUR_CREDENTIALS_FROM_QUESTION

did the trick for me.
|
I'm using Amazon AWS SES to send the common confirmation emails when a user gets registered. I have my email and domain verified, but Rails doesn't send the message. I have installed the aws-ses gem and it works, because I've done some tries from the Rails console. But when it has to send automatically, I get:

I, [2013-11-13T12:36:21.953813 #3262] INFO -- : Completed 500 Internal Server Error in 1623ms
F, [2013-11-13T12:36:21.958860 #3262] FATAL -- :
AWS::SES::ResponseError (MessageRejected - Email address is not verified.):

My amazon_ses.rb looks like:

ActionMailer::Base.add_delivery_method :ses, AWS::SES::Base,
access_key_id: 'ACCESS_KEY_ID',
secret_access_key: 'SECRET_ACCESS_KEY'

And my production.rb:

config.action_mailer.default_url_options = { :host => 'ismuser.com' }
config.action_mailer.delivery_method = :ses

I'm just guessing the problem is that I have not defined the source email (the email verified in SES), but I don't know where I should define it. Help?
|
AWS SES and Rails. My app doesn't send the mails
|
The instance will already have an IP address in that range allocated. Use something like 'dig' to look up the IP address of the endpoint from inside the VPC and you will get back an IP address from your VPC subnet.
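For example, from an EC2 instance inside the VPC (using the endpoint from the question, without the port):

dig +short someDatabase-db-small.abcd1234.us-east-1.rds.amazonaws.com

Run from inside the VPC, this should return an address from your 10.0.0.0/24 subnet.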
|
I have an RDS instance started in a DB Subnet Group in my VPC. This instance has an endpoint of the form someDatabase-db-small.abcd1234.us-east-1.rds.amazonaws.com:3306.

How does one allocate to this instance an IP address in the VPC subnet 10.0.0.0/24?
How to allocate IP address in VPC to RDS instance?
|
In a fit of inspiration just after posting, I added the following to ~/.ssh/config:

Host someServer
Hostname 1.2.3.4
User ubuntu
IdentityFile ~/.ssh/pem/Me.pem

And I simply cloned the Git repo as such:

git clone ssh://someServer/opt/git/someRepo.git
|
I keep a Git server on Amazon EC2, and in order to push or pull to it I need to run ssh-add ~/.ssh/pem/Me.pem. Is there any way to add this .pem file to the Git config such that I won't have to run ssh-add each time? I'm thinking of a configuration file in a similar vein to ~/.ssh/config which lets users configure just such an option (IdentityFile ~/.ssh/pem/Me.pem).
|
Include .pem for git pull / push
|
There is a fairly new tool open-sourced by Netflix called Ice which allows you to visualize the billing details as retrieved via the AWS reports generated into your S3 buckets.

You might also want to check the answers over at Server Fault to a similar question.
|
My CIO is asking me for a monthly "per instance" breakdown of EC2 charges, as some of our EC2 instances are run on behalf of specific customers. Does anyone know how to accomplish this?

I can use Java, Python, or the AWS command line tools if necessary, but a reporting tool or service is preferable.
|
AWS EC2 billed hours per instance in a given time period [closed]
|
You should be able to use createVolume to create the item. That looks to return a CreateVolumeResult, which has a Volume object inside. You would then take the Volume returned from the createVolume call and attachVolume with a matching AttachVolumeRequest. This is all done after you create one of the AWS AmazonEC2Client objects; documentation is all pulled from here.

The workflow of the code would probably look like this (note: pseudo-code values - the credentials, size, availability zone, instance ID, and device name below are placeholders - and there may be a few more pieces to hook in, but the workflow should look something like this):

AWSCredentials credentials = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
AmazonEC2Client client = new AmazonEC2Client(credentials);
// Create a volume (size in GiB, availability zone)
CreateVolumeRequest request = new CreateVolumeRequest(10, "us-east-1a");
CreateVolumeResult volumeResult = client.createVolume(request);
// Attach it to a running instance as a device
AttachVolumeRequest attachRequest = new AttachVolumeRequest(volumeResult.getVolume().getVolumeId(), "i-12345678", "/dev/sdf");
client.attachVolume(attachRequest);
|
I am trying to find a way to create a new EBS volume and attach it to a running instance programmatically through the AWS Java SDK. I see ways to do this with command line tools and with REST-based calls, but no way through the SDK proper.
|
Amazon AWS creating EBS(Elastic block storage) through Java API
|
Most of the services of Amazon Web Services are scoped to a single region. The only exceptions are Route 53 (DNS), IAM, and CloudFront (CDN). The reason is that you want to control the location of your data, mainly for regulatory reasons. Many times your data can't leave the US, or Europe, or any other region.
It is possible to create high availability for your services within a single region with availability zones. This is how highly available services such as DynamoDB or S3 provide such functionality: by replicating the data between availability zones, but within a single region.
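You can see this region scoping directly with the AWS CLI; the same account will list different DynamoDB tables depending on the region you query:

aws dynamodb list-tables --region us-east-1
aws dynamodb list-tables --region eu-west-1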
|
I had created a simple table in dynamo called userId, I could view it in the AWS console and query it through some java on my local machine. This morning, however, I could no longer see the table in the dynamo dashboard but I could still query it through the java. The dashboard showed no tables at all (I only had one, the missing 'userId'). I then just created a new table using the dashboard, called it userId and populated it. However, now when I run my java to query it, the code is returning the items from the missing 'userId' table, not this new one! Any ideas what is going on?Ok, that's strange. I thought dynamo tables were not specified by region but I noticed once I created this new version of 'userId' it was viewable under the eu-west region but then I could see the different (previously missing!) 'userId' table in the us-east region. They both had the same table name but contained different items. I didn't think this was possible?
|
DynamoDB Table Missing?
|
In my opinion, it's an understandable trade-off DynamoDB made. To be highly available and redundant, they need to replicate data. To get super-low latency, they allowed inconsistent reads. I'm not sure of their internal implementation, but I would guess that the higher this 64KB cap is, the longer your inconsistent reads might be out of date with the actual current state of the item. And in a super low-latency system, milliseconds may matter.

This pushes the problem of an inconsistent Query returning chunk 1 and 2 (but not 3, yet) to the client side.

As per the question comments, if you want to store larger data, I recommend storing it in S3 and referring to the S3 location from an attribute on an item in DynamoDB.
|
I had a use case where I wanted to store objects larger than 64KB in DynamoDB. It looks like this is relatively easy to accomplish if you implement a kind of "paging" functionality, where you partition the objects into smaller chunks and store them as multiple values for the key.

This got me thinking, however: why did Amazon not implement this in their SDK? Is it somehow a bad idea to store objects bigger than 64KB? If so, what is the "correct" infrastructure to use?
|
Why does AWS Dynamo SDK do not provide means to store objects larger than 64kb?
|
The suggested solution from the AWS Developer Forums:

It turns out that cron has the "from" address hard-coded in the source (q.v. "do_command.c" in the cron source), so one does not have influence over what cron transmits to sendmail (which in our case is symlinked to "/usr/bin/msmtp").

However, due to the magic of Linux, we do have the ability to alter the stream of text that goes into sendmail. The way I worked around this cron limitation was to move the "msmtp" binary to "msmtp.bin" and then create "/usr/bin/msmtp" as a shell script:

#! /bin/bash
sed -e 's/root .Cron Daemon./[email protected]/' | /usr/bin/msmtp.bin "$@"

This is also, AFAIK, the only means one has to set the "debug" flag to msmtp when used in a "global" setting (such as cron, or other cases where sendmail is invoked with arguments you don't control).

While the script above is rather simplistic, you can also conditionally alter the text by checking the input arguments for the magic "-FCronDaemon", which is also hard coded in the cron binary. I would be stunned if any other program calls sendmail with "-FCronDaemon".
|
I have a crontab that includes a [email protected]. My server uses msmtp to forward the email to Amazon Simple Email Service. My problem is that output from cron commands never arrives in my mailbox. This is what the msmtp log says:

Mar 06 14:26:02 host=email-smtp.us-east-1.amazonaws.com tls=on auth=on user=MY.SES.USER [email protected] [email protected] smtpstatus=554 smtpmsg='554 Transaction failed: User name is missing: ?Cron Daemon ?.' errormsg='the server did not accept the mail' exitcode=EX_UNAVAILABLE

What do I need to do in order to make Amazon SES accept the cron emails?
|
Crontab email through msmtp -> Amazon SES
|
The type of file depends on the file, obviously. Have a look at this: http://en.wikipedia.org/wiki/Internet_media_type

If you know exactly what your file is, then assign one of these to the response (not mandatory though). You should also add the length of the file to the response (if possible, i.e. if it is not a stream). And if you want it to be downloadable as an attachment, then add a Content-Disposition header. So all in all you only need to add this:

var filename = "myfile.txt";
res.set({
"Content-Disposition": 'attachment; filename="'+filename+'"',
"Content-Type": "text/plain",
"Content-Length": data.Body.length
});

NOTE: I'm using Express 3.x.

EDIT: Actually Express is smart enough to count the content length for you, so you don't have to add the Content-Length header.
|
I am trying to send a file's content to the client in my request, but the only documentation Express has is its download function, which requires a physical file; the file I am trying to send comes from S3, so all I have is the filename and content.

How do I go about sending the content of the file and the appropriate headers for content type and filename, along with the file's content?

For example:

files.find({_id: id}, function(e, o) {
client.getObject({Bucket: config.bucket, Key: o.key}, function(error, data) {
res.send(data.Body);
});
});
|
Nodejs Express Send File
|
How about this? You need to run it as:

sudo umount /dev/xvdf
|
I have created an EBS drive, attached it to the instance and created a file system using mkfs.ext3.
Now I want to unmount and delete the drive. I've tried many things but nothing seems to work. Although I am able to detach the drive from the instance and delete it using the EC2 console,
when I check the partitions using df -hk it is still showing the drive.

[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 1075740 7097356 14% /
tmpfs 304368 0 304368 0% /dev/shm
/dev/xvdf 30963708 176196 29214648 1% /media/newdrive

And moreover, when I try to use any other command like "fdisk -l", or try to browse the drive's folders, the PuTTY session hangs.
|
Amazon EC2: Unable to unmount and remove EBS drive file system
|
In my opinion, it is almost always best to store and return the fewest fields possible — preferably just the ID, unless you explicitly need a feature such as highlighting.

Storing a lot of data in your index can have a negative impact on your search performance as your index grows. There is no data that loads faster than no data. Plus, looking up objects by their IDs should be a very cheap operation in your primary data store of choice.

Most importantly, if your application is using an ORM to interact with its data store, then the sheer utility of reusing all your domain modeling consistently throughout your application would be hard to overstate.

Returning values straight from your search engine can be useful. But, short of using the search engine as a primary data store, I would need a very compelling reason to fragment my domain logic by foregoing an ORM.
I have read that it is best practice to only return an ID when querying for results, and then populate metadata from the database. Is this true? I am worried about performance.
|
Amazon Cloudsearch (or Solr, ElasticSearch) best practice for result contents?
|
So I created a new bucket in the US (the last one was in Ireland) and everything works smoothly now.
|
I'm using the standalone PHP S3 class: http://undesigned.org.za/2007/10/22/amazon-s3-php-class

I've tried all the ready-made tutorials, downloaded the source, and changed the corresponding variables (set my bucket, access_key, access_secret). I'm getting the following error whenever I try to upload any file:

Warning: S3::putObject(): [417] Unexpected HTTP status in C:\Users\Jad\Dropbox\www\test\S3.php on line 312

Note: my bucket already exists and I even allowed all permissions to the user "everyone" (temporarily, to get it to work, but it's still not working).
|
Amazon S3 putObject() - not working - PHP
|
I found the answer to my problem. I think I made an error when adding the lib directory to my build path.

Here's the right way to do it: right click the Project -> select Properties -> Java Build Path -> Libraries and click Add JARs. Then select the JARs added to the lib directory. Thanks.
|
I'm setting up an Android-based Amazon AWS SimpleDB client in Eclipse (just started). I'm getting an error on the line:

import com.amazonaws.services.simpledb.AmazonSimpleDBClient;

that says "The import com.amazonaws cannot be resolved."

I've already installed the AWS SimpleDB jar file in the lib directory of my project, and added the lib directory to the build path of my project.
|
import com.amazonaws cannot be resolved
|
The AWS SDK for Ruby (aws-sdk gem) supports enumerating region names:

require 'aws-sdk'
ec2 = AWS::EC2.new(:access_key_id => '...', :secret_access_key => '...')
ec2.regions.map(&:name)
=> ["eu-west-1", "sa-east-1", "us-east-1", "ap-northeast-1", "us-west-2", "us-west-1", "ap-southeast-1"]You can also use a client interface to the DescribeRegions call:ec2.client.describe_regions
=> { :region_info=>[
{:region_name=>"eu-west-1", :region_endpoint=>"ec2.eu-west-1.amazonaws.com"},
{:region_name=>"sa-east-1", :region_endpoint=>"ec2.sa-east-1.amazonaws.com"},
{:region_name=>"us-east-1", :region_endpoint=>"ec2.us-east-1.amazonaws.com"},
{:region_name=>"ap-northeast-1", :region_endpoint=>"ec2.ap-northeast-1.amazonaws.com"},
{:region_name=>"us-west-2", :region_endpoint=>"ec2.us-west-2.amazonaws.com"},
{:region_name=>"us-west-1", :region_endpoint=>"ec2.us-west-1.amazonaws.com"},
{:region_name=>"ap-southeast-1", :region_endpoint=>"ec2.ap-southeast-1.amazonaws.com"}
],
:request_id=>"04458cac-bdf2-4847-bf1f-c7ea65813777"
}

You can view the gem docs here: http://docs.amazonwebservices.com/AWSRubySDK/latest/frames.html
|
I am developing a Rails application for AWS and would like to create a drop-down menu of region names, like "us-east-1", etc. If someone has already created a gem to get them, I want to use it. Does anyone know such a gem or a useful API?
|
Get AWS region names with Ruby
|
After more research and posting on the AWS forum I got a solution, although not a full understanding of what happened under the hood. Thought I would post this as an answer if that's okay.

Turns out there is a bug in AMI version 2.0, which of course was the version I was trying to use. (I had switched to 2.0 because I wanted Hadoop 0.20 to be the default.) The bug in AMI version 2.0 prevents mounting of instance storage on 32-bit instances, which is what the c1.mediums launch as.

By specifying on the CLI tool that the AMI version should use "latest", the problem was fixed and each c1.medium launched with the appropriate 350GB of storage. For example:

./elastic-mapreduce --create --name "Job" --ami-version "latest" --other-options

More information about using AMIs and "latest" can be found here. Currently "latest" is set to AMI 2.0.4. AMI 2.0.5 is the most recent release but looks like it is also still a little buggy.
|
I'm using Amazon EMR and I'm able to run most jobs fine. I'm running into a problem when I start loading and generating more data within the EMR cluster. The cluster runs out of storage space.Each data node is a c1.medium instance. According to the linkshereandhereeach data node should come with 350GB of instance storage. Through the ElasticMapReduce Slave security group I've been able to verify in my AWS Console that the c1.medium data nodes are running and are instance stores.When I run hadoop dfsadmin -report on the namenode, each data node has about ~10GB of storage. This is further verified by running df -hhadoop@domU-xx-xx-xx-xx-xx:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 2.6G 6.8G 28% /
tmpfs 859M 0 859M 0% /lib/init/rw
udev 10M 52K 10M 1% /dev
tmpfs 859M 4.0K 859M 1% /dev/shmHow can I configure my data nodes to launch with the full 350GB storage? Is there a way to do this using a bootstrap action?
|
Amazon EMR: Configuring storage on data nodes
|
According tosome past research, the s3cmd GET operation is about 5 times slower than wget. Keep in mind that s3cmd is a utility designed to retrieve files from your S3 storage. It doesn't issue a plain anonymous HTTP GET the way wget does; it goes through the authenticated S3 API.The only time I can see using the s3cmd utility is for cases where you're retrieving files you cannot otherwise retrieve using standard HTTP GET methods, like when the files on S3 don't have public read permissions or you're doing maintenance on your S3 buckets.Based on your question, I'm assuming you're trying to use this utility in a production system; however, that doesn't appear to have been the intention or goal of the utility.For more details, check out theperformance testing spreadsheet.As far as costs go, I'm not an expert on Amazon pricing, but I believe they bill based on actual data transferred, so a 1GB file would cost the same regardless of whether you downloaded it quickly or slowly. It's like asking which is heavier, ten pounds of bricks or ten pounds of feathers.
|
I can download a file from S3 using either of the following methods.s3cmd get s3://bucket_name/DB/company_data/abc.txt
wget http://bucket_name.s3.amazonaws.com/DB/company_data/abc.txtMy question is :1) Which one is faster?
2) Which one is cheaper?
|
What is the most efficient S3 GET request method?
|
Create a rule to open port 3000 in the security group associated with your EC2 instance.
It can be done through the command line tools or through the web console, which is more straightforward. If you didn't specify a security group when creating the instance it will be the "default" security group.A decent walkthrough for the consoleAmazon documentationRightscale explanation of different firewall situations
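If you would rather script it than click through the console, a boto3 sketch along these lines should work (the security group ID below is a placeholder for whichever group is attached to your instance):
import boto3

ec2 = boto3.client('ec2')
# Allow inbound TCP traffic on port 3000 from anywhere; narrow the CIDR in production
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpProtocol='tcp',
    FromPort=3000,
    ToPort=3000,
    CidrIp='0.0.0.0/0'
)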
|
I'm trying to run a basic node.js server,var http = require('http');
http.createServer(function(req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('hello world!\n');
}).listen(3000, '0.0.0.0', function() {
console.log('Server running on port 3000');
});However when I run it and go tohttp://x.x.x.x:3000/the page doesn't load.I tried the answer onthis questionbut that didn't work either. And changing the host to127.0.0.1or the server ip or emitting it doesn't fix it either.I've also followedthis guidethat says to proxy requests with haproxy. But that did not work either.Is there something in the security tab I have to enable/disable?Edit: The problem was I was using the wrong IP. The IP changes when the instance is restarted.
|
I can't access my node.js server on my AWS EC2 isntance from the outside
|
There is no such thing as folders in Amazon S3. It is a "flat" file system. The closest you can get to folders is adding prefixes likefoo/bar/filename.txtto your file names.
Even though several S3 tools will show you stuff as if they were contained inside folders, this concept does not exist on S3.Please see this related thread:Amazon s3 Folders Problem
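To illustrate, "downloading a folder" really just means downloading every object whose key starts with a given prefix. A rough Python/boto3 sketch of the idea (the bucket name and prefix are made up; the same loop can be written with the SDK you are using):
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
# List every object under the "folder" prefix and download each one
for page in paginator.paginate(Bucket='my-bucket', Prefix='foo/bar/'):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if not key.endswith('/'):          # skip zero-byte "folder" placeholder objects
            s3.download_file('my-bucket', key, key.split('/')[-1])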
|
Below codes can get 1 single file from AWS 3 but, what about a folder?var _key:int=Account.lessons[dl_i].id;
var dest:String = Conf.Dir+_key;
var request:GetObjectRequest = new
GetObjectRequest().WithBucketName(Conf.bucketName).WithKey(_key+"");
var response:GetObjectResponse = client.GetObject(request);
response.WriteResponseStreamToFile(dest);
|
AWS S3 SDK Get the folder instead of a file
|
"Retired" means that a reserved instance purchase is no longer in effect.Usually this would be because the term expired (1 year, 3 years, etc). However, according tothis thread, it looks like it could also mean that there was a problem processing payment.Either way, retired instances are no longer usable.
|
I can see from billing that we purchased 4 reserved EC2 instances in 2 batches of 2 earlier this year.We are currently using 2 EC2 instances.In the list of purchased reserved instances, I can see 2 listed as active, and 2 listed as retired. Can you tell me what "retired" means and if they are still usable?Thanks
|
Amazon EC2 reserved and retired instances
|
There's business risk and technical risk.Business risk is that you might have to move hosts later for some external reason. VPSes, EC2, etc. require more upfront investment, but keep you independent. Tools likeChefcan help with the configuration effort.Technical risk is that your application may not be easily implemented on the platform. Since most VPS options allow you to install arbitrary software, they minimize this, again at the cost of more configuration effort on your part. AFAIK, the largest constraint GAE enforces on you is that it's difficult to do long-running background tasks. (Working without JOINs and other aspects of de-normalized data requires a different way of thinking, but this approach is fairly common in web applications no matter where they run once the SQL database is larger than a single host can support.)If you can live with both these risks, GAE would appear to save you a substantial amount of effort. If you cannot live with these risks, you should tailor your own environment.As an aside, I find S3 to be worth it no matter your environment. It's far simpler than ensuring your local server static file storage is reliably backed up, and you never have to worry about capacity. It's best if you use it for data that is uploaded but rarely overwritten or deleted (think Facebook photo albums).
|
I have my first app, not that big, but it is the first step. (next big one on the way)Now if I want to put it on my own Linode VPS, I have to configure mod_python or mod_wsgi, as well as memcache, Nginx, MySQL or PostgreSQL, etc. to make it work. If I put it on GAE, all I have to do is convert the models to use GAE's API.What I like about GAE is scaling. (if they can really do it)Then I'd only worry about developing my apps and doing SEO work on them instead of worrying about load sharing/balancing, cache, DB/IO redundancy, etc.I don't want to do any porting later on. (I have to decide now and stick with it)So, if you have any experience with this, what do you recommend:1- Use VPS(s) for everything
2- Use VPS(s) plus Amazon S3
3- Use VPS(s) plus Amazon S3 & SimpleDB
4- Use GAEAlso: Would I be able to get away with not having JOIN rights when using the BigTable?Note: I don't have any spatial need now, but for a location table I might need that later on.I'd like to know what do you think!
|
Django -- I have a small app ready, Should I go on private VPS or Google App Engine?
|
You should be able to determine the Sales Rank by querying for the SalesRank response group when doing an ItemLookup with the Amazon Associates Web Service.Example query:http://ecs.amazonaws.com/onca/xml?
Service=AWSECommerceService&
AWSAccessKeyId=[AWS Access Key ID]&
Operation=ItemLookup&
ItemId=0976925524&
ResponseGroup=SalesRank&
Version=2008-08-19Response:<Item>
<ASIN>0976925524</ASIN>
<SalesRank>68</SalesRank>
</Item>See the documentation here:http://docs.amazonwebservices.com/AWSECommerceService/2008-08-19/DG/index.html?RG_SalesRank.html
|
I've seen several products that will track the sales rank of an item on Amazon. Does Amazon have any web-services published that I can use to get the sales rank of a particular item?I've looked through the AWS and didn't see anything of that nature.
|
How can I track the sales rank of an item on Amazon programmatically?
|
I've been having the exact same issue trying to find any information regarding how to get a seller's ID through the SP-API.As far as I can see, the only way to programmatically get the seller's ID is during the app authorization process via OAuth:https://developer-docs.amazon.com/sp-api/docs/selling-partner-appstore-authorization-workflowIt mentions in Step 4 that the redirect after authorization includesselling_partner_idas a query parameter. That is the seller account's ID.Unfortunately, I cannot find any way to retrieve the seller ID after the fact. There is a GitHub issue regarding this missing feature but, unsurprisingly for Amazon, it has not gotten very far:https://github.com/amzn/selling-partner-api-docs/issues/492
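As a sketch, capturing the ID at your redirect endpoint is just a matter of reading the query string. The URL and values below are entirely made up; only the parameter names come from the authorization workflow docs:
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect received at your redirect_uri after the seller authorizes the app (step 4)
redirect_url = 'https://example.com/callback?state=abc123&selling_partner_id=A1B2C3D4E5&spapi_oauth_code=xyz'
params = parse_qs(urlparse(redirect_url).query)
seller_id = params['selling_partner_id'][0]
print(seller_id)  # persist this alongside the refresh token so you still have it later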
|
I want to update the listing quantity on Amazon through the patchListingsItem API, but it requires a SellerId. I can't find any endpoint to get the seller ID (I only have a refresh token) of sellers that are authorized with our website.https://developer-docs.amazon.com/sp-api/docs/listings-items-api-v2021-08-01-reference#patchlistingsitemI have tried this endpoint but can't get the seller ID:https://developer-docs.amazon.com/sp-api/docs/sellers-api-v1-reference
|
is there any endpoint in amazon sp api to get seller Id?
|
There isn't a native solution for Step Functions to achieve this yet.
Another workaround would be to use a simple Lambda function to convert the type.Edit: A native workaround is to use the intrinsic functionStates.StringToJson, e.g."NumberValue.$": "States.StringToJson($.numberAsString)"
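For the Lambda workaround, the function can be tiny; a minimal sketch (the field names are just examples, not anything your state machine requires):
def lambda_handler(event, context):
    # Expect input like {"numberAsString": "42"}
    value = event['numberAsString']
    # Return it as a real number; use float() instead if decimals are possible
    return {'numberValue': int(value)}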
|
Is there a way in an aws step function to convert a string to a number?I have a query param that is passed from api gateway to the step function. It is a string with a numeric value. I would like to cast the string to a number in the step function state language, but so far no luck.
|
Step functions convert string to number
|
An option would be to convert the timestamp and then filter by subtracting an hour from the current time.Assuming the value in timestamp is milliseconds since epoch, you can usefrom_unixtime. Based on your sample value, to see how that works:select from_unixtime(1650578683860/1000e0)which gives the result:2022-04-21 22:04:43.860Then you can use DATE_ADD to subtract an hour from CURRENT_TIMESTAMP, so the WHERE clause would be something like:WHERE from_unixtime("timestamp"/1000e0) >= DATE_ADD('hour', -1, CURRENT_TIMESTAMP)
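Plugging that filter back into the awswrangler call from your question would look roughly like this (same placeholder table and database names as in the question):
import awswrangler as wr
import boto3

session = boto3.Session()
sql = '''SELECT *
         FROM data_table
         WHERE from_unixtime("timestamp" / 1000e0) >= DATE_ADD('hour', -1, CURRENT_TIMESTAMP)'''
df = wr.athena.read_sql_query(sql,
                              database="database",
                              keep_files=False,
                              boto3_session=session).sort_values('timestamp')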
|
I have some data rows in AWS Athena table and I am trying to get the data from the last 1 hour. I am using awswrangler, I will post my snippet below. Basically, instead of querying all data and then filtering out only the last 1 hour with Python, I would like to do that in the Athena SQL query so that I get a faster response (and thus execution time of the program). My code is:import awswrangler as wr
import boto3
session=boto3.Session()
df = wr.athena.read_sql_query(f"""SELECT *
FROM data_table""",
database="database",
keep_files = False,
boto3_session = session).sort_values('timestamp')My progress:
I can get the current timestamp with"SELECT CURRENT_TIMESTAMP", but this will return the timestamp in a date format. In order to get the last 1 hour, my idea is to convert 1 hour to milliseconds as well, and subtract it from the milliseconds of current timestamp and apply it as a filter.NOTE!timestampin the table is in milliseconds.
|
Amazon Athena get data from the past one hour
|
Looking in theUser Guide, you can see:Resource: which secrets they can access. See Secrets Manager resources.The wildcard character (*) has a different meaning depending on what you attach the policy to:In a policy attached to a secret, * means the policy applies to this secret.In a policy attached to an identity, * means the policy applies to all resources, including secrets, in the account.So when the policy is attached to a secret, specifying the secret's ARN is effectively no different from using *, but it is when you attach the policy to an identity that the Resource element becomes more useful. Then you can give different identities different action permissions on various secrets.
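For example, attaching such a policy to a secret can be done through the Secrets Manager API; a hedged boto3 sketch (the secret name is a placeholder, and the principal ARN is taken from your example):
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"},
        "Resource": "*"  # attached to the secret, so "*" refers only to this secret
    }]
}
client = boto3.client('secretsmanager')
client.put_resource_policy(SecretId='my-secret-name', ResourcePolicy=json.dumps(policy))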
|
I am trying to understand resource-based policies in IAM.I understand: they are attached to a resource like S3, KMS, Secrets Manager, etc.My question is: what is the significance ofResourcein a resource-based policy?For example, a permission policy for AWS Secrets Manager(https://aws.amazon.com/premiumsupport/knowledge-center/secrets-manager-resource-policy/){
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "secretsmanager:*",
"Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"},
"Resource": "*"
}
]
}Here the Resource is *, or it can be the ARN of the secret. (Is there any other value allowed in this case?) For S3 I can think of the root bucket or other prefixes.So my question is: what is the use case for Resource here? Please let me know if I am reading it wrong.Thanks in advance.
|
Significance of resource in Resource Based Policy
|
If you use theboto3S3 client, you can get a list of the top-level folders (common prefixes):import boto3
s3 = boto3.client('s3')
result = s3.list_objects(Bucket=BUCKET_NAME, Delimiter='/')
for prefix in result.get('CommonPrefixes', list()):
print(prefix.get('Prefix', ''))
|
I want to list all of the folders from my S3 bucket. In my S3 bucket I have folders, for example: SF_test_01, SF_test_02, SF_test_03, etc. I usedfor my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)but it's returning all paths with folder and file names. Is there any way to do it? I want only the folder names. I should add that I'm using an AWS Lambda function and I have imported boto3.
|
List all of the folder name from S3 bucket
|
Please try increasing the timeout to the desired value in the AWS Lambda function's configuration settings; the default is only 3 seconds.
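If you prefer doing it programmatically rather than in the console, a boto3 sketch (the function name is a placeholder):
import boto3

client = boto3.client('lambda')
# Raise the timeout from the 3-second default to, say, 60 seconds
client.update_function_configuration(FunctionName='my-function', Timeout=60)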
|
I am trying to execute my Lambda function in AWS; it's giving me a timeout error even though my code is correct. Any help is appreciated.
|
AWS lambda task timeout after 3 seconds
|
I'd rather suggest putting them in thesamconfig.tomlfile, like:[default.deploy.parameters]
stack_name = "your-application"
s3_bucket = "your-s3-for-cloudformation-stuff"
s3_prefix = "your-folder-name"
...
tags = "Name=\"test-stack\" az:zone=\"infra\""That will propagate tags down to all the resources of the stack (including CloudFormation stack itself)
|
I'm trying to add tags to a stack while using SAM deploy, but they're not showing up.
The SAM CLI command can be found below.sam deploy --stack-name test-stack --resolve-image-repos --capabilities CAPABILITY_NAMED_IAM --no-fail-on-empty-changeset --template-file ./.aws-sam/build/template.yaml --resolve-s3 --tags Name=test-stack az:zone=infraTags are not being created in the CloudFormation stack despite passing the --tags parameter to the CLI.Can someone tell me how to add stack-level tags to the CloudFormation stack that is deployed using the SAM CLI?
|
How to add stack-level tags to cloudformation stack deployed using SAM
|
In your sample code, when you use:credentials: rds.Credentials.fromGeneratedSecret('myname')CDK will automatically create anAWS Secrets Managersecret and store the generated password there. You can view that password byretrieving the secret.If you want to specify your own password, you can useCredentials.fromPassword. The documentation recommends not having the password directly exposed in your CDK code for security reasons. You can also useCredentials.fromSecretto read the value from an AWS Secrets Manager secret you have created separately.
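For instance, once the stack is deployed you can read the generated credentials with a small boto3 sketch (the secret name/ARN is a placeholder for whatever CDK generated for your instance):
import boto3, json

client = boto3.client('secretsmanager')
secret = client.get_secret_value(SecretId='my-rds-credentials-secret')
# RDS-generated secrets are JSON documents that typically include "username" and "password"
print(json.loads(secret['SecretString'])['password'])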
|
I made theRDSinstance with this script usingcdk.It makes the usermynamethe admin; however, where can I get the password?Also, is there any way to set a default password in the script?const dbInstance = new rds.DatabaseInstance(this, 'Instance', {
engine: rds.DatabaseInstanceEngine.mysql({
version: rds.MysqlEngineVersion.VER_8_0_19,
}),
vpc,
instanceIdentifier:`st-${targetEnv}-rds`,
vpcSubnets: {
subnetType: ec2.SubnetType.PUBLIC,
},
instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
publiclyAccessible: true,
removalPolicy: cdk.RemovalPolicy.DESTROY,
databaseName:`stybot${targetEnv}`,
credentials: rds.Credentials.fromGeneratedSecret('myname')
});
|
Is there a way to set the default password while making RDS by cdk
|