Response | Instruction | Prompt
---|---|---|
DynamoDB paginates the results: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#Pagination

DynamoDB paginates the results from Query and Scan operations. With
pagination, Query and Scan results are divided into distinct pieces;
an application can process the first page of results, then the second
page, and so on. The data returned from a Query or Scan operation is
limited to 1 MB; this means that if the result set exceeds 1 MB of
data, you'll need to perform another Query or Scan operation to
retrieve the next 1 MB of data.

If you query or scan for specific attributes that match values that
amount to more than 1 MB of data, you'll need to perform another Query
or Scan request for the next 1 MB of data. To do this, take the
LastEvaluatedKey value from the previous request and use that value
as the ExclusiveStartKey in the next request. This approach lets
you progressively query or scan for new data in 1 MB increments.
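The question uses the Java SDK, but the pagination pattern is the same in every SDK. Here is a minimal boto3 sketch of the loop, with a hypothetical table, index and key condition used purely for illustration:

    import boto3

    dynamodb = boto3.client('dynamodb')

    kwargs = {
        'TableName': 'my_table',                    # hypothetical table name
        'IndexName': 'my_secondary_index',          # hypothetical index name
        'KeyConditionExpression': 'gsi_pk = :v',    # hypothetical key condition
        'ExpressionAttributeValues': {':v': {'S': 'some-value'}},
    }
    items = []
    while True:
        response = dynamodb.query(**kwargs)
        items.extend(response['Items'])
        if 'LastEvaluatedKey' not in response:
            break
        # Feed LastEvaluatedKey back in as ExclusiveStartKey to fetch the next page.
        kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']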
|
I'm working with DynamoDB using the Java SDK. The case here is that I have a secondary index which, when queried, might contain 1000+ records in the returned result. I'm not sure if DynamoDB returns the result in a paginated form or all records at once? Thanks.
|
Does AWS DynamoDB API pose a limit to number of records returned in a secondary index query?
|
I finally used SNS for this purpose. Each time I need to send an alert, I call a Lambda function and supply it with the list of subscribers and the message for that asset. The Lambda will go ahead, create a new topic, add the subscribers to it, publish the message to it and, when everything is done, remove the topic. Works great.
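A minimal Python sketch of that flow, assuming the subscriber phone numbers and the message arrive in the Lambda event and that SMS subscriptions are used; the topic name is a placeholder:

    import boto3

    sns = boto3.client('sns')

    def send_alert(event, context):
        # Create a throwaway topic, fan the message out, then clean up.
        topic_arn = sns.create_topic(Name='asset-alert-temp')['TopicArn']
        try:
            for phone_number in event['subscribers']:
                sns.subscribe(TopicArn=topic_arn, Protocol='sms', Endpoint=phone_number)
            sns.publish(TopicArn=topic_arn, Message=event['message'])
        finally:
            sns.delete_topic(TopicArn=topic_arn)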
|
We have got a strange requirement and we would like to send SMS to our clients based on the assets they are monitoring. Each asset can have 100s of subscribers and there are 1000s of assets, so obviously we cannot create one SNS topic per asset. We have the assets and their lists of subscribers in an RDS instance on AWS. Is there any way with SNS to make the list of its subscribers dynamic, so that each time we publish a message to it we also supply the list of subscribers this message should be sent to? What are my other options, or another AWS service? Lambda maybe? Please advise. Thanks.
|
AWS SNS dynamic subscriptions
|
CloudFront does not support the use of IAM credentials for generating signed URLs, nor does it use the signing algorithms common to other AWS services. The process is, however, fully documented. CloudFront has its own method for accessing private objects in S3 on behalf of your users -- the origin access identity -- and will use this mechanism transparently when presented with a signed URL or signed cookies, generated using a keypair associated with a trusted signer. See Serving Private Content through CloudFront for descriptions of the mechanisms and configuration walk-throughs.
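The question is about the C# SDK's SignUrlCanned, but as an illustration, here is a hedged Python sketch of generating a canned-policy signed URL with botocore's CloudFrontSigner; the key-pair ID, key file and URL are placeholders, and the key pair must belong to a trusted signer on the distribution:

    import datetime

    import rsa  # third-party 'rsa' package, assumed to be installed
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        with open('cloudfront-private-key.pem', 'rb') as f:   # placeholder key file
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, 'SHA-1')         # CloudFront expects SHA-1 RSA signatures

    signer = CloudFrontSigner('APKAEXAMPLEKEYID', rsa_signer)  # placeholder key-pair ID
    url = signer.generate_presigned_url(
        'https://d111111abcdef8.cloudfront.net/private/object.jpg',  # placeholder URL
        date_less_than=datetime.datetime(2017, 1, 1),
    )
    print(url)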
|
We are using CloudFront for serving an S3 resource and it is restricted.
In C#, while creating the presigned URL using "AmazonCloudFrontUrlSigner.SignUrlCanned", it asks only for the CloudFront private key generated using the root credentials and doesn't ask for IAM user credentials. In the distribution behavior, I can see there is an option to specify the "Trusted Signers", but I am not able to understand where it is being used. Any info on this would be great. Also, is there a way to generate a presigned CloudFront URL using IAM user credentials?
|
Amazon cloudfront. Usage of Trusted Signers
|
One option is to use Cognito user pools as a provider for your Cognito identity pool, which would then give you credentials to access Dynamo. If you're using multiple providers, the approach that would probably make the most sense is to use the Cognito identity pool's generated id (identity id) as the key in Dynamo, as the user may only log in with some public provider, not user pools. This identity id is constant once a login is linked, with one exception: if two authenticated identities merge in an identity pool, the resulting identity id could be either. Since Dynamo works the way it does, this would mean you would have to catch merge events, then grab all entries from Dynamo that exist for the old key, delete them, and re-insert them with the new key. This is admittedly not the smoothest scenario, and we'll take this as a feature request to make this use case a bit easier. Storing data against a user isn't really designed with Cognito federated identities (identity pools) in mind, only user pools. It really makes the most sense when user pools is more of a stand-alone entity, not as much in the use case you described.
|
If I have a Cognito User Pool and a Cognito Identity Pool and I have application-specific data, is there anything considered best practice in how to join these together? For example, suppose I have application data stored in DynamoDB, perhaps SMS/text messages sent by a user. Also suppose a text message (as stored in the DB) looks like this:

{
"account_id": "a-uuid-for-the-account",
"message_body": "Hello world",
"message_subject": "Greetings!",
"date_sent": "...",
"message_id": "..."
}

Would I then join this to the User Pool or the Identity Pool? For example, a separate accounts table might have records resembling this:

{
"account_id": "a-uuid-for-the-account",
"user_pool_username": "MrBloggs"
"addresses": [ "123 Springfield Road", "Blahsville" ]
}

I can see disadvantages of joining to the User Pool, as you may introduce other IDPs and this would then fail. So perhaps you would use the ID of an Identity Pool 'identity'? Lastly, this question makes me wonder what the point of the 'attributes' is that you might store against the User Pool users (in the user pool itself)? Taking the postal address example used above, if this were stored in the User Pool then you'd have to store addresses for users in other IDPs separately, duplicating effort and complicating the software. Thanks!
|
How to join application data with Cognito User Pool + Cognito Identity Pool users?
|
You can't do this directly with any of the officially released Ansible modules, but - as with anything that Ansible doesn't directly support - you can just shell out instead. So if you wanted to enable VPC flow logs you could use the AWS CLI's create-flow-logs command:

- name: enable vpc flow logs
  local_action: shell aws ec2 create-flow-logs --resource-type VPC --resource-ids {{ vpc_id }} --traffic-type ALL --log-group-name {{ vpc_flow_log_group_name }} --deliver-logs-permission-arn {{ vpc_flow_log_iam_role_arn }}

In addition to that, if Ansible supports hooks or custom resource triggering, you can also enable VPC flow logs through CloudFormation or, even better, using the CDK. For more information, refer to the official documentation.
|
When creating an AWS VPC with Ansible, how do I enable VPC Flow Logs?
|
Enable AWS VPC Flow Logs with Ansible
|
You are not using the session that boto3.session.Session() returns. Instead you are using the same default session every time. You can develop from the following code snippet:

import boto3

for acct in accounts:
    # Build a dedicated session per profile and create the client from it.
    session = boto3.Session(profile_name=acct)
    iam = session.client('iam')
    for user in usernames:
        iam.create_user(UserName=user)
|
I can't seem to find a really good way to initiate multiple sessions with boto3. If I have 10 accounts and want to, let's say, make a new IAM user, I can't seem to change the boto3.session.Session with new calls. So, example code:

for user in usernames:
for acct in accounts:
boto3.session.Session(profile_name=acct)
print 'trying account: %s' % acct
try:
uname = IAM.create_user(UserName=user)
uname
print uname
print row_template % header
print row_template % tuple(['-' * len(h) for h in header])
print row_template % (user, acct)
except botocore.exceptions.ClientError as e:
            print e

However, it will only create a session for the default session and will not change it. I can't seem to find a way to close the session either. Any help would be greatly appreciated.
|
looping over multiple aws profiles with boto3
|
When you add VPC configuration to a Lambda function, it can only access resources in that VPC. If a Lambda function needs to access both VPC resources and the public Internet, the VPC needs to have a Network Address Translation (NAT) instance inside it. So for that Lambda function to reach AWS service endpoints such as EC2 and CloudWatch, it needs an Internet connection through the NAT instance.

AWS Lambda uses the VPC information you provide to set up ENIs that allow your Lambda function to access VPC resources. Each ENI is assigned a private IP address from the IP address range within the subnets you specify, but is not assigned any public IP address. Therefore, if your Lambda function requires Internet access (for example, to access AWS services that don't have VPC endpoints, such as Amazon CloudWatch), you can configure a NAT instance inside your VPC or you can use the Amazon VPC NAT gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. You cannot use an Internet gateway attached to your VPC, since that requires the ENI to have public IP addresses.
|
I'm invoking the following Lambda function to describe instance information:

'use strict'
var aws = require('aws-sdk');
exports.handler = function(event, context) {
var instanceID = JSON.parse(event.Records[0].Sns.Message).Trigger.Dimensions[0].value;
aws.config.region = 'us-east-1';
var ec2 = new aws.EC2;
var params = {InstanceIds: [instanceID]};
ec2.describeInstances(params, function(e, data) {
if (e)
console.log(e, e.stack);
else
console.log(data);
    });
};

In CloudWatch Logs I can see that the function runs until the end, but it doesn't log anything inside the ec2.describeInstances callback:

END RequestId: xxxxxxxxxxxxxx
REPORT RequestId: xxxxxxxxxxxxxx Duration: xx ms Billed Duration: xx ms Memory Size: xx MB Max Memory Used: xx MB

My Lambda function has VPC access and an IAM role of AdministratorAccess (full access). For some reason, it can't run the ec2.describeInstances method. What is wrong and how can I fix it?
|
Can't run ec2 method in AWS Lambda Function
|
The question is among the most seen on SO for AWS: you can install an FTP server on any EC2 instance type. There's no limit on EBS and you can always increase the storage if you need to, so the best rule is: start low and increase when needed. The only point to mention is that network performance comes with the instance type, so if you care about speed a t2.nano (low network performance) might not be sufficient.
|
My company is looking for a solution for file sharing via FTP - currently, we share one server for client/admin FTP file sharing and serving multiple sites, and are looking to split off our roles so that we have one server dedicated to FTP and one for serving websites. I have tried to find a good solution with AWS, but cannot find any detailed information regarding EBS and EC2 servers, and whether an EC2 package will be able to handle FTP storage. For example, a t2.nano instance seems ideal with 1 CPU and minimal RAM, but I see no information regarding EBS storage limits. We need around 500 GiB at most, and will have transfers happening daily in the neighborhood of 1 GiB in and out. We don't need to run a database or HTTP server. We may run services for file cleanup in the background weekly.

EDIT: I mis-worded the question, which was founded on a fundamental lack of understanding of AWS EC2 and EBS which I now grasp. I know EC2 can run FTP services; the question was more about a cost-effective solution with dynamic storage. Thanks for the input!
|
Best AWS setup for a dedicated FTP server?
|
Do not invoke function_2 from function_1. Write the two functions to be able to complete their respective tasks independently. In order to control the flow of execution, you should use AWS Step Functions, which lets you coordinate between your various Lambda functions. If you want to persist information, use another storage service (like S3 or DynamoDB) to store information that function_2 can use. Then let Step Functions direct the traffic. But check whether this service exists in the region you are deploying your functions to. Here is a quick guide on AWS Step Functions: http://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
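As a rough illustration of the Step Functions approach, here is a hedged boto3 sketch that chains the two functions with a minimal state machine; the account ID, function ARNs and role ARN are placeholders:

    import json
    import boto3

    sfn = boto3.client('stepfunctions')

    definition = {
        "StartAt": "Function1",
        "States": {
            "Function1": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:function_1",
                "Next": "Function2"
            },
            "Function2": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:function_2",
                "End": True
            }
        }
    }

    machine = sfn.create_state_machine(
        name='component-a-flow',                                       # placeholder name
        definition=json.dumps(definition),
        roleArn='arn:aws:iam::123456789012:role/StepFunctionsRole',    # placeholder role
    )
    sfn.start_execution(stateMachineArn=machine['stateMachineArn'],
                        input=json.dumps({'key': 'value'}))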
|
In Serverless, I have the following folder structure:

/component_a/function_1/function_1.js (get)
/component_a/function_2/function_2.js (get)

What is the best way for function_1 to call function_2 using the Serverless Framework?
|
What is the best practice to call an api gateway method from another method?
|
No, auto file name generation isn't supported in S3.
|
I'm configuring the Amazon API Gateway as a proxy for an S3 bucket. Ideally, I'd like the client to be able to POST a file to a bucket, have S3 assign it a file name, and then return that name in the response. I don't want to give the client the ability to specify the file name. Is this possible? The documentation for setting up the proxy doesn't mention POST at all, and other POST examples I've found still require the client to specify the key name.
|
Can an S3 bucket generate its own object key names?
|
You'll need:

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
|
I am following this tutorial, but am unable to install the AWS EB client:

sudo pip install awsebcli

Traceback (most recent call last):
  File "/usr/local/bin/pip", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/site-packages/pip/__init__.py", line 215, in main
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib64/python2.7/locale.py", line 579, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

How do I fix this "unsupported locale setting" issue? I have searched around quite a bit but can't find anything specific to my issue. Note that I already have Python 2.7.10 on my AWS EC2 instance. Also, my region is Asia Pacific.
|
Unsupported locale setting for AWS EB client
|
Yes, it is possible but not straightforward:

1. Get the list of stacks in your account/profile.
2. Loop through the list and create a list of stack names.
3. Get the stack resources (describe_stack_resources) for each stack (name).
4. Locate the resource where resource['LogicalResourceId'] == 'Ec2Instance'.
5. Get the inst_id from that resource.
6. Once you have the inst_id, you can get its attributes, including private_ip, using boto3.resource('ec2').

I have coded this and use it regularly. AWS may throttle your CF calls, if it is called too often.
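A hedged boto3 sketch of those steps; the 'Ec2Instance' logical ID comes from the list above, and the default region and credentials are assumed:

    import boto3

    cfn = boto3.client('cloudformation')
    ec2 = boto3.resource('ec2')

    private_ips = {}
    for page in cfn.get_paginator('describe_stacks').paginate():
        for stack in page['Stacks']:
            resources = cfn.describe_stack_resources(StackName=stack['StackName'])
            for resource in resources['StackResources']:
                if resource['LogicalResourceId'] == 'Ec2Instance':
                    instance = ec2.Instance(resource['PhysicalResourceId'])
                    private_ips[stack['StackName']] = instance.private_ip_address

    print(private_ips)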
|
I need to discover the private IP address of every host in an AWS CloudFormation stack. The CloudFormation API (see http://boto3.readthedocs.io/en/latest/reference/services/cloudformation.html) doesn't seem to have any direct support for extracting nodes given a stack ID. Is it even possible?
|
boto3: How to get the IP addresses of CloudFormation stack instances?
|
There are a very large number of ways to authenticate between the client and API Gateway. There is no "best" way. To authenticate between API Gateway and the back-end servers, you would use client-side SSL authentication as described here: http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html
|
I have a Python microservice which I would love to connect to AWS API Gateway. The problem is that I have researched ways to make both secure, but haven't really come to a conclusion. I came across a site saying I should use SSL certificates to only enable requests from API Gateway. Can someone enlighten me on what's the best practice for authentication between the client and API Gateway, and between API Gateway and the API itself?
|
Amazon (AWS) API Gateway - Authentication
|
You could use the recently introduced $input.body variable in your mapping template:

{
    "body" : "$input.body"
}

You should maybe also check out this discussion of the problem. To receive the body in your Python function, just do:

def my_handler(event, context):
    body = event['body']

But if the sole purpose of the function is to upload the file to S3, you could also do this directly with API Gateway:

1. Go to the Integration Request settings of your method.
2. Under Integration Type, click "show advanced".
3. Select AWS Service Proxy.
4. Select S3 as the AWS Service and fill in the necessary information.
|
I am trying to send a file from the client side and receive it through AWS API Gateway in my Lambda function, which will then put this file in an S3 bucket. I have used the following as the default parameter template in API Gateway:

{"image" : $input.params('MediaUrl0')}

How will I receive it in Python, which looks like:

def read_upload_toS3(event, context):
s3 = boto3.resource('s3')
|
How do I pass multipart-form data to AWS Lambda
|
The "Missing Authentication Token" error can be interpreted as eitherEnabling AWS_IAM authentication for your method and making a request to it without signing it withSigV4, orHitting a non-existent path in your API.For 1, if you use the generated SDK the signing is done for you.For 2, if you're making raw http requests make sure you're making requests to/<stage>/s3/{key}BTW, the path override for s3 puts needs to be{bucket}/{key}, not just{key}. You may need to create a two-level hierarchy with bucket as the parent, or just hardcode the bucket name in the path override if it will always be the same. See:http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
|
I have been reading about creating an API which can be used to upload objects directly to S3. I have followed the guides from Amazon with little success. I am currently getting the following error:

{"message":"Missing Authentication Token"}

My API call configuration: the role ARN assigned is not in the image, but has been set up and assigned.
|
AWS API Gateway as Service proxy for S3 upload
|
Multiple websites can be hosted on one instance, given that the instance is large enough to handle all the traffic from all the different websites. Here are two main reasons you would use more than one EC2 instance:

Load: A single instance would not be able to handle the load. In this case you would want to start up multiple servers and place them behind a load balancer so that the load can be shared across them. You might also want to split out each site into separate clusters of EC2 servers to further distribute the load.

Fault tolerance: If you don't design your system with the expectation that an EC2 instance can and will disappear at some point, then you will eventually have a very unpleasant surprise. With your site running on multiple servers, spread out across multiple availability zones, if a server or even an entire AZ goes down your site will stay up.
|
Sorry if there is an obvious answer to this, but I'm currently in the process of setting up a new company from which I'll be hosting client websites. Rather than use an external hosting company, I'd like to take full control of this through EC2. Can multiple websites be hosted on a single instance, or will each new site require its own instance? Many thanks,
L
|
AWS - EC2: Why would I need more than one instance? [closed]
|
It is possible. Save the image (PutObject) in an S3 bucket. This is called the push model, where a PutObject in S3 triggers a Lambda execution. The S3 object name (key) is passed to the Lambda function. The Lambda, when invoked, downloads the image file, resizes it and uploads the resized image to a different bucket in S3. AWS has detailed documentation and an example for your use case. Check "Using AWS Lambda with Amazon S3" and "Tutorial: Using AWS Lambda with Amazon S3".
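A hedged Python sketch of that push model, assuming Pillow is bundled in the deployment package and using a placeholder destination bucket:

    import os
    from urllib.parse import unquote_plus

    import boto3
    from PIL import Image   # Pillow, assumed to be packaged with the function

    s3 = boto3.client('s3')
    TARGET_BUCKET = 'my-resized-images'   # placeholder destination bucket

    def handler(event, context):
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            key = unquote_plus(record['s3']['object']['key'])
            local_path = os.path.join('/tmp', os.path.basename(key))

            s3.download_file(bucket, key, local_path)
            image = Image.open(local_path)
            image.thumbnail((800, 800))        # resize in place, keeping the aspect ratio
            image.save(local_path)
            s3.upload_file(local_path, TARGET_BUCKET, key)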
|
I would like to know if there is a way to pass an image file from the client and send it to an AWS Lambda function. I ask this because I have to save the image file in an S3 bucket, but I want to rename and compress the file in the Lambda function before uploading it. If it's not possible, give me your suggestion.
|
Pass an image file to AWS lambda
|
It can be found here: https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.xsd (see also https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingSOAPOperations.html).
|
An application of mine, interacting with the Amazon S3 server using the REST API, performed a "Delete Multiple" operation against the server and encountered an error response:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>MalformedXML</Code>
<Message>The XML you provided was not well-formed or did not validate against our published schema</Message>
<RequestId>6FA...D61</RequestId>
<HostId>E5G...uhg=</HostId>
</Error>

Quoting the Amazon documentation:

This happens when the user sends malformed XML (XML that doesn't
conform to the published XSD) for the configuration. The error message
is, "The XML you provided was not well-formed or did not validate
against our published schema."

Some of my app's deletion keys contain encoded characters that may be causing a problem. I would therefore like to see Amazon's published schema (XSD) file itself, running it through a validator to determine the problem. Where can I find the Amazon XSD file?
|
Where can I find Amazon's published S3 XSD (XML Schema Definition)?
|
This code is working fine for me:

# Imports and client setup added for completeness; the client mirrors the one in the question.
import json
import logging
import boto3

logger = logging.getLogger()
client = boto3.client('iot-data', region_name='us-east-1')

def set_thing_state(thingName, state):
    # Change topic, qos and payload
    payload = json.dumps({'state': { 'desired': { 'property': state } }})
    logger.info("IOT update, thingName:"+thingName+", payload:"+payload)
    #payload = {'state': { 'desired': { 'property': state } }}
    response = client.update_thing_shadow(
        thingName = thingName,
        payload = payload
    )
    logger.info("IOT response: " + str(response))
    logger.info("Body:"+response['payload'].read())

def get_thing_state(thingName):
    response = client.get_thing_shadow(thingName=thingName)
    streamingBody = response["payload"]
    jsonState = json.loads(streamingBody.read())
    print jsonState
    #print jsonState["state"]["reported"]

Good luck
|
According to the boto3 documentation here: https://boto3.readthedocs.org/en/latest/reference/services/iot-data.html#client, the update_thing_shadow method takes the thingName and a JSON payload as parameters. Currently my code reads:

client = boto3.client('iot-data', region_name='us-east-1')
data = {"state" : { "desired" : { "switch" : "on" }}}
mypayload = json.dumps(data)
response = client.update_thing_shadow(
thingName = 'MyDevice',
payload = b'mypayload'
)

When I use the command line there's no problem, but I can't seem to get it right from within the Lambda function. I've called it with numerous versions of code (json.JSONEncoder, bytearray(), etc.) without any luck. The errors range from syntax errors to "(ForbiddenException) when calling the UpdateThingShadow operation: Bad Request: ClientError". Has anyone had success calling this or a similar method from within an AWS Lambda function? Thanks.
|
AWS Lambda function - can't call update thing shadow
|
The Auth0 example still uses a Lambda function to validate the JWT on each request. API Gateway isn't going to validate JSON Web Tokens automatically; you have to provide a Lambda function to do that. I would look into using the new API Gateway Custom Authorizers feature. This way you can have a single Lambda function that is responsible for validating the JWT for each request. This keeps your authentication code encapsulated in a single function instead of duplicated in every single Lambda function. It also allows you to do authentication in Lambda while the actual API endpoint may be pointing to something other than Lambda.
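A hedged Python sketch of what such a custom authorizer might look like; it assumes PyJWT is bundled with the function, a shared signing secret, and the TOKEN authorizer event shape:

    import jwt   # PyJWT, assumed to be packaged with the function

    SECRET = 'replace-with-your-signing-secret'   # placeholder

    def handler(event, context):
        token = event['authorizationToken'].replace('Bearer ', '')
        claims = jwt.decode(token, SECRET, algorithms=['HS256'])   # raises if invalid or expired

        # Returning an Allow policy lets API Gateway invoke the downstream integration.
        return {
            'principalId': claims['sub'],
            'policyDocument': {
                'Version': '2012-10-17',
                'Statement': [{
                    'Action': 'execute-api:Invoke',
                    'Effect': 'Allow',
                    'Resource': event['methodArn'],
                }],
            },
        }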
|
I'm trying to implement the following scheme: an Angular SPA on the frontend and API Gateway + Lambda functions on the backend. Since my app requires authentication, there will be an /auth endpoint checking user credentials (users are stored in DynamoDB) and returning auth tokens (JWT) in case of success. The client will have to send auth tokens with every request to the backend, and my Lambda functions will have to validate those tokens. This scheme looks good, but I'm wondering whether it can be changed somehow so that the responsibility for checking tokens is moved out of the Lambda functions (lambdaAction in the picture above)? I have seen tutorials on using third-party services like Auth0,
with authentication taking place in API Gateway, before any Lambda functions (see the link for example), but I can't figure out how to use those services with users stored in my own DB. So, my question, in short: is it possible to use API Gateway with token-based authentication with users stored in my own database?
|
AWS Gateway API authentication with users stored in DynamoDB
|
As long as the parameters in question do not have the NoEcho property explicitly set to true (it defaults to false), you can retrieve the parameter values using the describe-stacks call from any of the various tools (e.g. the AWS API, CLI, or SDK of your choice). If NoEcho is set to true, you won't be able to retrieve those parameter values.

To run the command, you will need to either run it from an instance that's running with an IAM role / instance profile which has the correct permissions to call describe-stacks, or have the tool configured with AWS security credentials (i.e. Access Key Id and Secret Access Key) that have permission.

AWS CLI examples:

aws cloudformation describe-stacks --region <region> --stack-name <stack-name>

By default, you'll notice the parameters are embedded in a JSON response, along with a bunch of other information about the stack. To be more useful in scripting, you could use a JMESPath query to narrow down the data returned to just the parameter's value:

aws cloudformation describe-stacks --region <region> --stack-name <stack-name> --query 'Stacks[*].Parameters[?ParameterKey == `<parameter-name>`].ParameterValue' --output text
|
I have Windows user account credentials passed in as parameters in a CloudFormation template. Using SSM/EC2Config I will need to execute commands on the instances associated with this template, but since only one specific user account on Windows has been granted access to the resources I need, I need to specify these same credentials when I execute my PowerShell commands via SSM (as just running as Administrator will not have the proper access). The commands will be run later, not at instance launch. Is there any way for me to grab these credentials from CloudFormation? Or any other way to achieve this or something similar?
|
Can values passed in as parameters be retrieved from CloudFormation for other uses?
|
As far as I know there is no way to run in standalone mode on EMR unless you go back to the old ami-versions instead of using the emr-release-label. The old ami-versions will, however, cause other problems with newer versions of Spark, so I wouldn't go that way.

What you can do is launch ordinary EC2 instances with Spark instead of using EMR. If you have a local Spark installation, go to the ec2 folder and use spark-ec2 to launch the cluster, like this:

./spark-ec2 --copy-aws-credentials --key-pair=MY_KEY --identity-file=MY_PEM_FILE.pem --region=MY_PREFERED_REGION --instance-type=INSTANCE_TYPE --slaves=NUMBER_OF_SLAVES --hadoop-major-version=2 --ganglia launch NAME_OF_JOB

I suspect that you have jar files that are needed, so they have to be copied onto the cluster (copy to master first, ssh to master and copy them onto the slaves from there; ./spark-ec2/copy-dir on the master will copy a directory onto all slaves). Then restart Spark:

./spark/sbin/stop-master.sh
./spark/sbin/stop-slaves.sh
./spark/sbin/start-master.sh
./spark/sbin/start-slaves.sh

and you are ready to launch Spark in standalone mode:

./spark/bin/spark-submit --deploy-mode client ...
|
I'm able to run Spark on AWS EMR without much trouble following the documentation, but from what I see it always uses YARN instead of the standalone manager. Is there any way to easily use standalone mode instead of YARN? I don't really feel like hacking the bootstrap scripts to turn off YARN and deploy the Spark master/workers myself. I'm running into a weird YARN-related bug and I was hoping it won't happen with the standalone manager.
|
Spark standalone mode on AWS EMR
|
The best way to handle the concurrency limit is to use a Kinesis stream rather than SNS.
The number of shards will limit the number of Lambda functions invoked concurrently. Also, if it is pertinent for you, you can process several messages at once, which you can't do with SNS, where each message triggers its own invocation and can push you toward the concurrency limit.
|
My current AWS Lambda function invokes another AWS Lambda function, but I want to make sure that the invoke succeeded. After looking at the concurrent execution limits for AWS Lambda, I am trying to figure out what would happen if the concurrency limit is hit and I tried to invoke the Lambda from another Lambda.
For now, I am solving this problem by putting messages in SNS, but I would rather invoke the Lambda directly and avoid the indirection.
|
Invoking lambda from lambda: AWS Lambda concurrent execution limits
|
You are not guaranteed to get any messages in response when the queue size is small. This is a property of distributed queues. See this page for more information.
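A common way to work around this is long polling, so SQS queries more of its servers before answering. A hedged boto3 sketch, using a placeholder queue name:

    import boto3

    sqs = boto3.resource('sqs')
    queue = sqs.get_queue_by_name(QueueName='my-queue')   # placeholder queue name

    messages = queue.receive_messages(
        MaxNumberOfMessages=10,   # ask for up to 10 messages per call
        WaitTimeSeconds=10,       # long poll instead of returning immediately
    )
    for message in messages:
        print(message.body)
        message.delete()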
|
I'm using boto3, the AWS library, to connect to their SQS service. I'm trying to connect, read and write messages from a queue, but this doesn't seem to be working and the documentation isn't helping. Here's my code; can anyone spot what I'm doing wrong?

#Connect to a session
session = Session(aws_access_key_id=SQSAccessKey, aws_secret_access_key=SQSSecretKey,region_name=sqsRegion)
#Connect to a resource
sqs= session.resource('sqs')
queue = sqs.get_queue_by_name(QueueName=transactionQueue)
print(queue.url)
# Create a new message
print 'creating new message'
toWrite = 'hello world'
response = queue.send_message(MessageBody=toWrite)
print(response.get('MessageId'))
#Reading messages in queue
messages = queue.receive_messages()
print 'there are %s messages in the queue' % len(messages)
for message in messages:
    # SQS Message
    message.body
    message.delete()

After sending the new message to the queue (and printing out the message ID), I try to read the queue for messages, but it returns no new messages, as if nothing was written to it. Is there anything I'm doing wrong? Thanks!
|
Can't write message to SQS using boto3
|
Should really be a comment, but I don't have enough reputation, so... If your REST APIs are for internal use only, couldn't you simply deploy them on a different port? You could then use security groups to make that port accessible only from app2. In other words, your main app on app1 would be running, say, on port 80, and you configure your internal REST APIs to run on port 8080. Then utilize security groups to restrict access to port 8080 to app2 only.
|
We are developing an application that uses two services deployed on AWS Elastic Beanstalk, let's say app1.beanstalk.com and app2.beanstalk.com. app1 exposes some internal REST APIs (app1.beanstalk.com/intenal/reports) and we have the requirement to make them accessible only from app2. It is clear to us that we can block the requests at the application level, but we are looking to block them even before that, something like a firewall. Is there any AWS service that integrates with Beanstalk and allows us to allow requests to certain URLs, e.g. app1.beanstalk.com/intenal/*, only if the request comes from a certain security group or subnet (VPC)?
|
Allow HTTP request based on URL and security group
|
S3 regional endpoints such as s3-region.amazonaws.com do not support CORS. CORS is only supported on buckets (after you've enabled it). So you cannot call listBuckets. It would be great if AWS enabled this, but there may be compelling reasons not to. You may be able to work around this, if needed, by hard-coding bucket names in your web client (not ideal), or by maintaining a list of buckets in a readable JSON file stored in S3. Personally, I'd prefer the latter and would try to maintain the file using AWS Lambda. Or you could ask the user to supply the bucket name, of course, but they typically will not know it.
|
I am trying to list the buckets associated with an authorized user in the frontend using the AWS JS SDK. listBuckets API documentation: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listBuckets-property. The listBuckets request failed with the following error message:

https://s3-us-west-2.amazonaws.com/. Response to preflight request
doesn't pass access control check: No 'Access-Control-Allow-Origin'
header is present on the requested resource. Origin
'http://palombpramalis.local:8888' is therefore not allowed access.
The response had HTTP status code 403.

How do I configure CORS for https://s3-us-west-2.amazonaws.com/? The AWS documentation talks about configuring CORS for a specific bucket only: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html.
But this request is for listing all buckets for an authenticated user.
|
CORS error with listBuckets in AWS JS SDK
|
Did you have a sort key when creating the table? If so, then you have to specify the sort key too, as you have a composite key on the table. Having a sort key means that you could have multiple records with the same partition key; however, the combination of partition key and sort key must be unique: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#WorkingWithTables.primary.key

The sort key may also be referred to as a range key in the AWS DynamoDB documentation and the console. So your delete item would be like:

DeleteItemSpec itemSpec = new DeleteItemSpec().withPrimaryKey("cognitoId", "my_id", "sortKeyField", "sort_key_id");
DeleteItemOutcome outcome = table.deleteItem(itemSpec);
|
I've been attempting to delete an item from a table in DynamoDB through Java code, but every attempt I've made results in the same error:

com.amazonaws.AmazonServiceException: The provided key element does
not match the schema (Service: AmazonDynamoDBv2; Status Code: 400;
Error Code: ValidationException;

My current attempt is very simple and looks like this:

final DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(credentials));
Table table = dynamoDB.getTable(tableName);
DeleteItemSpec itemSpec = new DeleteItemSpec().withPrimaryKey("cognitoId", cognitoId);
table.deleteItem(itemSpec);

tableName is simply the table name, the credentials have been verified to be correct, and cognitoId is the actual ID of the item I'm trying to delete. The table in question has cognitoId as the primary key and I don't understand why the deletion isn't matching the schema. The table also has a sort key, or range key (I'm not sure which it is because the documentation is quite vague). I've been referring to the documentation here: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#WorkingWithTables.primary.key
|
Cannot delete item from DynamoDB table (java)
|
Elastic Load Balancer cannot be configured to filter out requests. If your allowed connections are based on IP address, then you can use VPC ACLs to allow only connections from certain IP addresses; all others will receive failed connections at the ELB level. If your allowed connections are not based on IP address, you can take a look at CloudFront in combination with AWS Web Application Firewall (WAF). WAF can be configured to filter at the web request level by IP address, URL, query string, headers, etc.
|
I have a Django app deployed on AWS Elastic Beanstalk. Django is configured to only serve requests that come in for a specific hostname (ALLOWED_HOSTS). If the host information in the request doesn't match, it will return a 500 response code, which is fine. But I have noticed that I get quite a lot of those, either from requests sent via the IP address or via other domain names. So, I would like to configure the setup so that the load balancer rejects the request if it doesn't have the proper hostname in the header information. Is this possible to do? I have been trying to go over the settings in the AWS Console, but cannot find any information on how to do this. I could patch the EC2 instances to reject those requests so they don't reach Django at all, but I would like to stop them as early as possible.

Flow now:
Client -> Load Balancer -> EC2 instance -> Nginx -> Django
       <-500 error- Django

What I want:
Client -> Load Balancer
       <-reject- Load Balancer
|
Can AWS Load Balancer be configured to filter out requests?
|
To add a crontab entry for the root user:

sudo crontab -e

which will open an editor. Insert the following line to restart tomcat7 at 11pm daily:

0 23 * * * /sbin/service tomcat7 restart

Update: /sbin/service tomcat7 restart no longer works as of 2022.
|
I have an instance of Tomcat running on EC2. For resourcing reasons that I don't want to get into, I'd like it to restart each evening at 11:00pm. I'm not interested in reloading or stopping the application context, as the PermGen space gets crowded until eventually the box tips over and dies. So where on an AWS Linux instance do I specify "service tomcat7 restart" and give it a cron expression?
|
Restart Tomcat Service on AWS EC2 instance, on a schedule
|
Following the discussion at https://github.com/aws/aws-sdk-js/issues/862: there seem to be inconsistencies on DynamoDB's side as to which version of TLS is negotiated with the client. To get around this, you can force the SDK to use TLS v1:

const https = require('https');
const dynamodb = new AWS.DynamoDB({
region: 'us-east-1',
httpOptions: {
agent: new https.Agent({
ciphers: 'ALL',
secureProtocol: 'TLSv1_method'
})
}
});
const dynamodbDoc = new AWS.DynamoDB.DocumentClient({
region: 'us-east-1',
service: dynamodb
});
|
I'm using dynamoDB to save the data that a web service is generating.
I sometimes (it is not consistent) get the 'EPROTO' error. I read about it and it is a protocol error, but I use the aws-sdk (JavaScript) and I don't specify any protocol-related details. This is how I initialize it:

var aws = require('aws-sdk');
var dynamoDB = new aws.DynamoDB({
accessKeyId: config.DynamoDB.accessKeyId,
secretAccessKey: config.DynamoDB.secretAccessKey,
region: config.DynamoDB.region
});

And I simply use the put API:

dynamoDB.putItem(params, function(err, dat) {
if (err) {
console.log('ERROR: Putting to dynamo failed with error: ' + err.message);
}
else {
console.log('wipi');
//passing data
}
});

params is as follows:

var params = {
TableName: config.DynamoDB.tableNames.data, //this is the table name, a string
Item: {
id: {
S: id // this is a generated uid (also a string)
},
scheme: {
S: ivd.version // this is a string of structure 'X.X.X'
},
data: {
S: JSON.stringify(data.data) // data.data is a big object - {arg1: [1, 2, 3...], arg2: '', ...}
}
}
};

I should mention that it is not even consistent for the same params object (differing only by the generated uid). Any ideas what this error means in my case and why it would occur?
|
aws DynamoDB gives "write EPROTO"
|
After you have completed your setup and testing, you need to request to be removed from sandbox mode and be granted production access.

To help protect our customers from fraud and abuse and to help you establish your trustworthiness to ISPs and email recipients, we do not immediately grant unlimited Amazon SES usage to new users. New users are initially placed in the Amazon SES sandbox.

Among the restrictions in sandbox mode: you can only send mail to the Amazon SES mailbox simulator and to verified email addresses and domains.

There are two ways to request production access, either by opening a support case or by submitting a request form, both of which are discussed at http://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html.
|
How do I send mails through Amazon Simple Email Service? I have set up my Amazon Simple Email Service account for my domain and verified it, but I can only send emails to email IDs related to my domain like [email protected]. I am unable to send emails to other domains like [email protected], [email protected], etc. Please tell me if I have to do something related to my Amazon Web Services account.

$config = Array(
'protocol' => 'mail',
'protocol' => 'smtp',
'smtp_host' => 'email-smtp.us-east-1.amazonaws.com',
'smtp_port' => 465,
'smtp_user' => ##############,
'smtp_pass' => ###########################,
'mailtype' => 'html',
'charset' => 'iso-8859-1'
);
$this->load->library('email');
$this->email->initialize($config);
$this->email->set_newline("\r\n");
$this->email->from([email protected], 'User');
$this->email->to($to);
$this->email->subject($subject);
$this->email->message($message);
|
Unable to send emails to other domain using amazon ses
|
Unfortunately, AWS doesn't have vertical auto-scaling functionality for EC2, so this cannot be achieved without shutting down the instance and relaunching it as another (bigger) instance type. Horizontal scaling is, however, quite easy to configure (launching another copy of your instance with the same instance type). As a workaround, you can create a snapshot of your instance and use it to relaunch the instance as a bigger type (using CloudWatch, as @Sri.U pointed out in the comments). The only instances which allow vertical scaling are RDS (Relational Database Service) instances.
|
I have a website on an AWS EC2 micro instance. Is it possible to configure autoscaling so that storage space goes up when needed, and also the CPUs, or not? Thank you for any help and suggestions.
|
Automatically upgrade Amazon AWS from micro to Medium
|
I figured it out and maybe it will be useful for someone else. There were in fact two problems:

1. My first field in the Redshift table was of the type INT IDENTITY(1,1) and in the CSV I had a 0 value there. After removing the first column from the CSV, even without a specified column mapping, everything was copied without a problem if...
2. ...the DELIMITER ',' commandOption was added to S3ToRedshiftCopyActivity to force using the comma. Without it, Redshift recognized the dot from the namespace (my.namespace.string) as the delimiter.
|
I'm working on a Data Pipeline. In one of the steps a CSV from S3 is consumed by a RedShift DataNode. My RedShift table has 78 columns. Checked with:

SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'my_table';

After the RedshiftCopyActivity fails, the 'stl_load_errors' table shows a "Delimiter not found" (1214) error for line number 1, for column namespace (this is the second column, varchar(255)) at position 0. The consumed CSV line looks like this:

0,my.namespace.string,2119652,458031,S,60,2015-05-02,2015-05-02 14:51:02,2015-05-02 14:51:14.0,1,Counter,1,Counter 01,91,Chaymae,0,,,,227817,1,Dine In,5788,2015-05-02 14:51:02,2015-05-02 14:51:27,17.45,0.00,0.00,17.45,,91,Chaymae,0,0.00,12,M,A,-1,13,F,0,0,2,2.50,F,1094055,Coleslaw Md Upt,8,Sonstige,900,Sides,901,Sides,0.00,0.00,0,,,0.0000,0,0,,,0.00,0.0000,0.0000,0,,,0.00,0.0000,,1,Woche Counter,127,Coleslaw Md Upt,2,2.50

After a simple replacement ("," to "\n") I have 78 lines, so it looks like the data should match... I'm stuck on that. Maybe someone knows how I can find more information about the error, or sees the solution?

EDIT

Query:

select d.query, substring(d.filename,14,20),
d.line_number as line,
substring(d.value,1,16) as value,
substring(le.err_reason,1,48) as err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
and d.query = pg_last_copy_id();

results in 0 rows.
|
AWS Data Pipeline RedShift "delimiter not found" error
|
The EC2 command line interface is an older toolset that supports just the EC2 service, one of many services on AWS. You will want to use the AWS Command Line Interface, which supports all AWS services.
|
Amazon Web Services has two different command line interfaces for managing their services without using a web browser: the EC2 Command Line Interface Tools and the AWS Command Line Interface. It appears that most of the functionality is available in both families. For new users and applications, is there any reason to use the legacy ec2-* style commands rather than the aws ec2 * commands?
|
AWS API Tools vs AWS CLI: Is there a reason to use one instead of the other?
|
GitLab doesn't implement a direct integration with AWS services, but you can work around that. You can do something like what you described and implement all the installation/distribution/auth logic, but then you aren't really getting much from CodeDeploy. What you should do depends on what you are trying to achieve.

Automatic deployment on push:
You can get automatic deployments on commit to GitLab if you bridge their webhooks with something that can authenticate to AWS. That might look like:

1. A webhook in GitLab that sends a push request to a Jenkins server you control.
2. The Jenkins server uses the Git plugin to pull the source.
3. The Jenkins server runs your build and test steps.
4. The Jenkins server uses the CodeDeploy plugin to upload the build artifacts to S3 and create a deployment.

If you want to have manual deployments, you could do the same as above but manually trigger the Jenkins build.

Deploy manually only:
Do the following when you want to deploy:

1. Use git to check out the commit you want to deploy.
2. Run your build and tests locally.
3. Execute the AWS CLI deploy push command to upload your build artifacts to S3.
4. Create a deployment in CodeDeploy using the uploaded bundle.
|
I have set up two EC2 instances in a private subnet behind a NAT.
The instances are both in an Auto Scaling group.
I want to integrate CodeDeploy with my repository on GitLab. All I can think of now is running a script in the BeforeInstall hook of the appspec.yml file. Is there another way to do this?
|
AWS CodeDeploy integration with GitLab
|
AWS Lambda now supports scheduled tasks. Since Lambda can make HTTP requests and write to DynamoDB, using Lambda should work and you don't have to worry about setting up an EC2 instance with a cron job just for that.
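A hedged sketch of what such a scheduled Lambda might look like in Python; the DynamoDB table name is a placeholder and the URL pattern is taken from the question:

    import decimal
    import json
    import time
    from urllib.request import urlopen

    import boto3

    table = boto3.resource('dynamodb').Table('stock_data')   # placeholder table name

    def handler(event, context):
        # A schedule (e.g. a CloudWatch Events rule) fires this; build the timestamped URL.
        url = time.strftime('http://open-stocks.com/api/get-data-%H:%M:%S.json')
        # DynamoDB rejects Python floats, so parse numbers as Decimal.
        data = json.loads(urlopen(url).read(), parse_float=decimal.Decimal)
        data['fetched_at'] = int(time.time())   # assumes this fits the table's key schema
        table.put_item(Item=data)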
|
I'm looking for an easy way to retrieve and store JSON in Amazon DynamoDB.
I'm getting the data via a URL and I would like to query the URL every X seconds - example:

wget http://open-stocks.com/api/get-data-10:21:33.json

The time in the URL should match the time of the request - so that's dynamic. I guess I could spin up an entire Linux server on AWS and write a Python script generating the URL, getting the data and pushing it to Amazon DynamoDB - but I would love some sort of existing service that lets me not worry about the server OS, cron jobs, etc. Any pointers to such a service, perhaps directly on AWS?
|
Automatically retrieve JSON data via URL every X second and store in Amazon DynamoDB
|
You are using AWS credentials that don't have permission to invalidate your CloudFront distribution. You should go into the AWS IAM console, look at the user you are using ("cats-kittens-beanstalk-user"?), and add the appropriate permissions to that user. Alternately, create a new user in IAM that has the appropriate permissions.

"I know AWS throws this access denied even though the user is authorized to run commands in some instances"

In the example you link, it appeared to S3 that they were trying to perform an operation on an S3 bucket that they didn't own, or one that didn't exist, so I think the permission error was perfectly appropriate in that instance. If you are completely sure that your user has the appropriate permissions, then perhaps your distribution-id or something in your .json file is incorrect, causing CloudFront to think you are trying to edit a distribution that you don't own, or one that doesn't exist.
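For completeness, a hedged sketch of granting that permission programmatically with boto3; the inline policy name is made up, and the same JSON could be pasted into the IAM console instead:

    import json

    import boto3

    iam = boto3.client('iam')
    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['cloudfront:CreateInvalidation', 'cloudfront:GetInvalidation'],
            'Resource': '*',   # '*' keeps the example simple
        }],
    }
    iam.put_user_policy(
        UserName='cats-kittens-beanstalk-user',
        PolicyName='AllowCloudFrontInvalidation',   # made-up policy name
        PolicyDocument=json.dumps(policy),
    )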
|
I am attempting to create a command that will invalidate a CloudFront distribution when pushing out new code. This is an attempt to fix the issue where new HTML that is pushed out takes up to 24 hours to appear on my web app. The idea comes from this AWS CLI command reference. Here is the command:

aws cloudfront create-invalidation --distribution-id XXXXXXXXXXXXXX --invalidation-batch file://invbatch.json

Here is the response I get when I run the command:

A client error (AccessDenied) occurred when calling the CreateInvalidation operation: User: arn:aws:iam::XXXXXXXXXXXXXX:user/cats-kittens-beanstalk-user is not authorized to perform: cloudfront:CreateInvalidation

Any idea why this might be? I know AWS throws this access denied error even though the user is authorized to run commands in some instances - see here.
|
Access Denied when calling the CreateInvalidation operation on AWS CLI
|
This isn't going to solve your problem but I can confirm that I'm having the exact same issue and I haven't been able to get Amazon to fix the problem all week. It started when I used their new spot instance request system on November 7th. I'm assuming that the system has a bug in it that causes the spot request (not just the instance) to recreate itself. I haven't had problems like this before when creating a spot request. Amazon also thought that I was using an Auto scaling group but I've never set that up either.The problem is now, trying to get Amazon to respond to this problem. I've opened multiple tickets. They have only responded one time in a week. At least they gave me a $12 refund for the time used by the spot instances so far. :-(
|
I created a spot instance request with my custom AMI (based on Amazon Linux AMI, EBS backed.) When my request was fulfilled, I have another spot instance request created by itself with amzn-ami-pv-2015.09.0.x86_64-ebs (ami-50978202) as AMI with the same bid pricing as the one I created. I didn't really pay attention at the time and since my bid was low ($0.005 per hour) and I only used it for a couple of hours, so I didn't pay attention to it that much.When I terminated the instance and cancel the request (both the one I created and the one that creates itself,) a new spot instance request with ami-50978202 keeps creating itself no matter how many times I keep canceling it and terminate the instance that was fulfilled. I thought it was because I still have the custom AMI on my account so I tried copying the AMI to another region, but the spot instance creating itself doesn't happen in that region, so I'm quite lost with what's happening here. Any help would be appreciated.
|
Amazon EC2 spot instance request creates itself
|
You're looking for describe-instance-status. This will return, among other things, both the System Status and Instance Status as displayed on the 'Status Checks' tab in the EC2 web console.

Example request for a running, healthy instance:

aws ec2 describe-instance-status --instance-ids i-abcd1234

Example output for a running, healthy instance:

{
"InstanceStatuses": [
{
"InstanceId": "i-abcd1234",
"InstanceState": {
"Code": 16,
"Name": "running"
},
"AvailabilityZone": "us-east-1a",
"SystemStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
},
"InstanceStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
}
}
]
}

If you want to review historical status checks, you can do so via CloudWatch (linked documentation) by reviewing the following EC2 metrics:

StatusCheckFailed_Instance
StatusCheckFailed_System
|
I am trying to retrieve, via the Amazon CLI tools, the 'Status Checks' information which is displayed for an EC2 instance in the console, for example 'Pending' or '2/2 checks passed'. I have used the following command:

ec2-describe-instances [instance_id ...]

However, it only returns Instance State info such as 'Running', 'Stopping', etc. I want the more granular information as displayed in the Status Checks column in the AWS Console. Does anyone know the command to retrieve this information for an instance?
|
How to get AWS Status Checks for an Instance
|
Use TIMEFORMAT 'auto' instead. It's able to import 2015-01-13T11:13:08.869941+00:00 as 2015-01-13 11:13:08.869941. I assume this method just discards the timezone information, but at least you can get the data in this way. If you have various timezones in the data, you may need to do some preprocessing to convert everything into UTC, for example. Unfortunately, I think COPY with a provided time format is rather strict and doesn't support timezone parts.
|
I have the following Redshift table:

DROP TABLE IF EXISTS "logs";
CREATE TABLE "logs" (
"source" varchar(255) DEFAULT NULL,
"method" varchar(255) DEFAULT NULL,
"path" varchar(1023) DEFAULT NULL,
"format" varchar(255) DEFAULT NULL,
"controller" varchar(255) DEFAULT NULL,
"action" varchar(255) DEFAULT NULL,
"status" integer DEFAULT NULL,
"duration" float DEFAULT NULL,
"view" float DEFAULT NULL,
"db" float DEFAULT NULL,
"ip" varchar(255)DEFAULT NULL,
"route" varchar(255) DEFAULT NULL,
"request_id" varchar(255) DEFAULT NULL,
"user" INTEGER DEFAULT NULL,
"school" varchar(255) DEFAULT NULL,
"timestamp" datetime DEFAULT NULL
);

So far so good. The only problem is that the datetime in my source file on S3 is the following: "2015-01-13T11:13:08.869941+00:00". This looks like RFC 822 (or RFC 3339 or RFC 2822). A few time formats are supported by the COPY command (see the doc: http://docs.aws.amazon.com/redshift/latest/dg/r_DATEFORMAT_and_TIMEFORMAT_strings.html), but not my RFC 822 format. I've tried the following:

TRUNCATE logs;
COPY "logs" FROM 's3://path/to/logstash_logfile.gz'
CREDENTIALS 'aws_access_key_id=THE_KEY;aws_secret_access_key=THE_SECRET'
TIMEFORMAT AS 'MM-DD-YYYYTHH:MI:SS'
JSON 's3://path/to/jsonpath.json' GZIP;

But I'm getting:

SELECT * FROM stl_load_errors;
Invalid timestamp format or value [MM-DD-YYYYTHH:MI:SS]
|
Copy a datetime with the format rfc822 into redshift
|
The live and sandbox modes are completely separate, and no transfer is possible from one to the other. You will need to implement this programmatically by storing the specs of the sandbox HIT and creating a live HIT. Another option is to use a service like TurkPrime.com, which allows you to copy HITs from sandbox to live mode.
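As an illustration of the programmatic route, here is a hedged sketch using the current boto3 MTurk client (not the CLT the question asks about); the HIT properties are placeholders, and only the endpoint changes between sandbox and production:

    import boto3

    SANDBOX = 'https://mturk-requester-sandbox.us-east-1.amazonaws.com'
    PRODUCTION = 'https://mturk-requester.us-east-1.amazonaws.com'

    def create_hit(endpoint_url, question_xml):
        client = boto3.client('mturk', region_name='us-east-1', endpoint_url=endpoint_url)
        return client.create_hit(
            Title='My task',                    # placeholder HIT properties
            Description='Do the thing',
            Reward='0.10',
            MaxAssignments=1,
            AssignmentDurationInSeconds=600,
            LifetimeInSeconds=86400,
            Question=question_xml,
        )

    # Same spec, different endpoint: test in the sandbox, then create the live HIT.
    # create_hit(SANDBOX, question_xml)
    # create_hit(PRODUCTION, question_xml)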
|
If I create a HIT in the Sandbox via MTurk's GUI, is it possible to transfer it to the production site, or do I have to re-create the HIT manually on the production site? In particular, is it possible to download the .input, .question and .properties for a HIT created via the GUI in the sandbox, in order to use them to generate the same HIT on the production site via the CLT? The obvious way seems to be using MTurk HIT layouts. However, reading the doc, I don't see how, or know whether, it is possible to do this using the CLT. The doc on HITLayoutParameter requires using CreateHIT, but this is not an available command in the CLT (it only has loadHITs). I have seen other questions ("Creating mTurk HIT from Layout with parameters using boto and python" and "Create a MTurk HIT from an existing template") about ways to do it with boto, but I am still wondering whether that's doable with the CLT.
|
Mturk: transfer HIT from Sandbox to Production site
|
Log in as the user "hadoop" (http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-connect-master-node-ssh.html). It has all the proper environment and related settings needed for things to work as expected. The error you are receiving is due to logging in as "ec2-user".
|
I created an EMR 4.0 instance in AWS with all available applications, including Spark. I did it manually, through the AWS Console. I started the cluster and SSHed to the master node when it was up. There I ran pyspark. I am getting the following error when pyspark tries to create the SparkContext:

2015-09-03 19:36:04,195 ERROR Thread-3 spark.SparkContext
(Logging.scala:logError(96)) - -ec2-user, access=WRITE,
inode="/user":hdfs:hadoop:drwxr-xr-x atorg.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)I haven't added any custom applications, nor bootstrapping and expected everything to work without errors. Not sure what's going on. Any suggestions will be greatly appreciated.
|
Error starting Spark in EMR 4.0
|
What you can do is, each time you write to SQS in your region, also write to SQS in another region. Here are my suggestions:

1. You would have to do that programmatically, and depending on your requirements you may have to implement a two-phase commit.
2. Alternatively, you use SNS to publish your messages, but configure the SNS subscriptions to write to SQS queues across regions.

Bear in mind that AWS availability zones are already geographically distributed, about 50 miles away from each other, I believe. If this was my project, I would follow option 2. Take a look at this AWS document: http://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html

Basically, what you do is, instead of publishing directly to SQS, you create an SNS topic in each region. Each of those topics has SQS subscriptions in both regions as well. When a message is published to SNS in either region, SNS will automatically publish to both SQS subscriptions.
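A hedged boto3 sketch of option 2; the queue and topic names and regions are placeholders, the queues are assumed to already exist in both regions, and their access policies must allow SNS to send messages (omitted here):

    import boto3

    REGIONS = ['us-east-1', 'us-west-2']   # placeholder regions

    # Collect the ARN of the queue in each region.
    queue_arns = []
    for region in REGIONS:
        sqs = boto3.client('sqs', region_name=region)
        queue_url = sqs.get_queue_url(QueueName='my-queue')['QueueUrl']
        attrs = sqs.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['QueueArn'])
        queue_arns.append(attrs['Attributes']['QueueArn'])

    # Create a topic per region and subscribe both queues to each topic.
    # Publishing to either topic then fans the message out to both regions.
    for region in REGIONS:
        sns = boto3.client('sns', region_name=region)
        topic_arn = sns.create_topic(Name='my-topic')['TopicArn']
        for arn in queue_arns:
            sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=arn)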
|
We are evaluating Amazon SQS for one of our applications. We will be persisting some messages in SQS for a maximum of 10 hours a day (one of our 'consumers' can only process between 7am and 9pm). It appears that persisted messages are distributed across multiple availability zones in the same geographical region. Is there a way to achieve replication across regions in SQS? I read Amazon's white paper on DR and it makes no mention of SQS.
|
Amazon SQS - Disaster Recovery
|
Here's an approach that is not ideal but that works. Always run the same steps in all environments. Write a shell script to encapsulate the commands you want to conditionally run. Have the shell script test for the existence of an environment variable and run the commands if that environment variable is set.
Here's an illustration on how to implement this:

1/ add a config file .ebextensions/worker_job.config

files:
  "/home/webapp/worker_job.sh":
    mode: "000755"
    owner: webapp
    group: webapp
    content: |
      #!/bin/sh
      if [ -z "$RUN_WORKER_JOB" ]; then
        echo 'RUN_WORKER_JOB is not set, skipping RUN_WORKER_JOB';
      else
        echo 'RUN_WORKER_JOB is set, running RUN_WORKER_JOB'
        # run useful worker commands
      fi

container_commands:
  00_worker_job:
    command: /home/webapp/worker_job.sh

2/ set the RUN_WORKER_JOB on selected environments

Using the AWS console, select the environment you want the commands to run in. Open the Software Configuration tool and set an environment variable called RUN_WORKER_JOB. Make sure that environment variable is not set in the environments you don't want your commands to run in.

Note: Set a convention and call the scripts, variables and files in a consistent way: worker_job.sh, worker_job.config, RUN_WORKER_JOB, etc...
|
I'll be running a Rails application on Elastic Beanstalk and I'll have both Web and Worker environments. The problem is that, since they both share the same code, I need to run some specific ebextensions on the worker environment (to initialize the worker process) and some specific scripts on the web environment (to initialize the app server). How can I separate the two scripts into different folders inside the .ebextensions folder and tell Elastic Beanstalk to run them according to an environment variable?

Thanks,
|
How to run conditional scripts on Elastic Beanstalk according to environment?
|
AWS support confirmed the following as of August 2, 2015:"innodb_doublewrite can't be modified in RDS MySQL instances"
|
Can anyone please help me disable "innodb_doublewrite" for my MySQL database hosted on Amazon RDS? I need this as we need to quickly update around 15 million rows.

I know there is a startup option for this: --skip-innodb_doublewrite. But how do I use it? Apart from that, the Amazon RDS parameter group does not show the "innodb_doublewrite" option for editing, and Amazon also does not allow direct editing of the my.cnf file.

I can access MySQL through my Linux server, but I don't know exactly how to use the startup option with Amazon RDS. Can anyone help me disable this option?
|
How to disable innodb_doublewrite for MySQL at Amazon RDS?
|
Okay, given the DCOS template, the LaunchConfiguration for the slaves looks like this (I've shortened it somewhat):

"MasterLaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"IamInstanceProfile": { "Ref": "MasterInstanceProfile" },
"SecurityGroups": [ ... ],
"ImageId": { ... },
"InstanceType": { ... },
"KeyName": { "Ref": "KeyName" },
"UserData": { ... }
}
}

To get started, all you need to do is add the SpotPrice property in there. The value of SpotPrice is, obviously, the maximum price you want to pay. You'll probably need to do more work around autoscaling, especially with alarms and time of day. So here's your new LaunchConfiguration with a spot price of $1.00 per hour:

"MasterLaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"IamInstanceProfile": { "Ref": "MasterInstanceProfile" },
"SecurityGroups": [ ... ],
"ImageId": { ... },
"InstanceType": { ... },
"KeyName": { "Ref": "KeyName" },
"UserData": { ... },
"SpotPrice": 1.00
}
}
|
Is it possible to change the DCOS template to use spot instances? I have looked around and there does not seem to be much information regarding this.
|
Spot Instances Support DCOS
|
Everything your Lambda function executes must be included in the deployment package you upload. That means if you want to run Java code, you can reference other Java libraries. (Likewise, if you want to run Node.js code, you can reference other Node libraries.)

Regardless of the tools you use, the resulting .zip file must have the following structure:

- All compiled class files and resource files at the root level.
- All required jars to run the code in the /lib directory.

(source)

Or you can upload a .jar file.

exiftool, on the other hand, is a Perl command-line program. I suspect that on your local machine you shell out from your Java code and run it. You cannot do that in AWS Lambda. You need to find a Java package that extracts EXIF information (I am sure there are plenty to choose from) and include it in your deployment package. You cannot install software packages on Lambda.
|
I understand that AWS Lambda runs on the application layer of an isolated environment. In many situations, functions need to use third-party tools that must first be installed on the Linux machine. For example, a media processing function uses exiftool to extract metadata from an image, so I install exiftool first. Now I want to migrate the media processing code into AWS Lambda. My question is, how can I use those tools that I normally must install on Linux? My code is written in Java, and exiftool is necessary.
|
AWS Lambda: How to use tools that must be installed first in linux?
|
I had this same problem. The problem is that EC2 instances place their private IP into their hostname file, which causes Chef to self-assign certs to the internal IP. When you do knife ssl check you'll probably get an error message that looks like this:

ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'

Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS when signing the cert.

EDIT: Chef's documentation used to have steps to correct this issue, but since the time I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12, Ubuntu 16 on an EC2 instance.

1. SSH onto your Chef server.
2. Open your hostname file with the following command: sudo vim /etc/hostname
3. Remove the line containing your internal IP, replace it with your public IP, and save the file.
4. Reboot the server with sudo reboot
5. Run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things).
6. Go back to your workstation and use knife ssl fetch followed by knife ssl check and you should be good to go.

What you could ALSO do is just complete steps 1 - 4 before you even install Chef onto the server.
|
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is listed below is the right direction for me but I'm not sure how to go about this exactly.If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal
you will have to update the certificate on the server to use the correct hostname.
|
Chef on AWS: How do you update the certificate on the server?
|
The EB CLI tells Elastic Beanstalk to use the "aws-elasticbeanstalk-ec2-role" instance profile. This will override your ebextensions.
In order to use your own profile, you can either use the "-ip" option or you can use a default saved configuration.

eb create --tier worker -ip custom-profile

If you want to do this with saved configurations instead, see this blog post.
|
tl;dr: Instance gets assumed-role instead of what I set in configuration.

I deploy a Java application in Docker into Elastic Beanstalk; I actually set a specific role with my custom policies in .ebextensions/instance.config:

- namespace: aws:autoscaling:launchconfiguration
option_name: IamInstanceProfile
value: custom-profile

When I deploy with eb init && eb create --tier worker everything is okay. Then the application tries to access stuff, which is allowed in custom-profile, but it fails with:

Exception in thread "main" com.amazonaws.AmazonServiceException: User: arn:aws:sts::***:assumed-role/aws-elasticbeanstalk-ec2-role/*** is not authorized to perform: ...

It doesn't even mention the reason why it uses an "assumed role". Interestingly, when I set the role manually in the web console and upload the zip, it works. I've tried using SingleInstance and LoadBalanced, both with the same result. I've read the docs and googled, but found nothing that would work. I've added the PassRole privilege to my console user, but I don't even know if it helps anything. The config is accepted as valid, but while the EC2 instance is created I don't get any info as to why it's not assigned the right role. I'll be thankful for your advice.

Notes: new InstanceProfileCredentialsProvider() is used in Java.
|
Assign role to instance in .ebextensions
|
Instances created in VPC public subnets will be automatically assigned a public, routable IP address and a corresponding publicly-resolvable DNS entry of the formip-<dash delimited address>.<region>.compute.amazonaws.com. Any ports allowed in the instance's security groups will be accessible over the Internet. The automatic address cannot be chosen. These public addresses are not persistent; when the instance is terminated, the IP address is lost.Elastic IP addresses, by contrast, are associated with an AWS account. They can be attached to an instance. When the instance is terminated, the elastic IP can be associated with a new instance. They are persistent until manually released.You may find the AWS docs onVPC public addressesuseful. Also note that EIPs have some small cost associated in some cases; see the section on Elastic IP Addresses in theEC2 pricing docs.
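If it helps to see the Elastic IP side of the comparison concretely, here is a small sketch in Python with boto3 (the instance ID is a placeholder): it allocates a VPC Elastic IP and attaches it to an instance, and the address stays with your account until you release it.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP for use in a VPC.
allocation = ec2.allocate_address(Domain="vpc")

# Attach it to an existing instance (hypothetical instance ID).
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)

print("Elastic IP:", allocation["PublicIp"])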
|
What is the difference between the two: having a public subnet vs. assigning an Elastic IP address to an instance in a VPC on AWS?
|
AWS public subnet vs Assigning elastic IP address to an instance of a VPC
|
I would suggest these approaches to study:

1. Elastic Beanstalk - This is AWS's simple hosting model. If you're not IT savvy you should pursue this approach.
2. EC2 with MySQL RDS - In this case you'll create Virtual Machine(s) (EC2), install Tomcat and other dependencies, and deploy your app. You'll then use RDS to store your data (which is MySQL as a service).
3. EC2 only - You'll do the same as 2. but install your own instance of MySQL. There may be AMIs offered that you can provision that will meet your application requirements.

Other reading:

- Route 53 if you're going to use AWS for your domain records
- Elastic Load Balancing if you're going to need High Availability
- Elastic Block Store if you want persistent disks across VMs
- Network Security Groups to secure your VMs (for 1. and 2.)
- Virtual Private Cloud for additional security
- CloudFormation if you want to automate provisioning

There are many articles on AWS Architecture.
|
I am working on a Servlet/JSP project and I want to host it on aws.amazon.com. I have already signed up for Amazon Web Services, and after signing in this page opens up and I have no idea what to do or which option to select. I think AWS provides a lot of customization with a lot of advanced technical options to choose from, but this is difficult for beginners who just want to get their site running. My project will use these:

- JSP/Servlets
- CSS
- MySQL
- Struts2
- Tomcat web server
|
How to upload Servlet/JSP website through Amazon Web Services?
|
I receive bounce notifications from SES for invalid domains. The difference is that the bounce is not immediate, since there is no responding mail server. SES will hold the mail and retry several times before declaring it a bounce. I receive the bounce notification 12-16 hours after the initial message was sent if the domain is invalid, usually from a misspelling.

Real bounce results:

- On 4/26 3:53 pm I sent a mail to an invalid domain ([email protected] instead of [email protected])
- On 4/27 6:17 am I received the bounce from SES.
|
I have created an emailing system using Amazon's Simple Email Service (SES) that handles bounces to invalid messages with their Notification (SNS) and Queue (SQS) services. Sending emails to valid addresses works as expected, but I am running into a problem when trying to report bounces. There are 2 bounce situations: the first one works and the second one does not.

1) Emailing a fake address at an existing ISP (e.g. [email protected] or [email protected]) - correctly bounces and sends a Notification to my Queue through SNS.
2) After emailing a fake address at a fake ISP (e.g. [email protected]), the Queue never receives a bounce from SNS.

However, the bounce is recognized on some level by AWS because it is added to the Bounce-Statistics graph in the console. I can't remove these addresses from my email list if I am never notified that the email has bounced.

After doing a lot of research, I initially thought that it was a problem with the AWS Suppression List, but I don't think that's possible since I have tried sending to email addresses that were very unlikely to have been used in the past 12 days. My other thought is that this is a soft bounce, and the system will only be updated if it continues to bounce for the next 12 hours. Any suggestions or advice would be appreciated.
|
AWS-SES: Handling Bounces for Invalid ISPs
|
Check whether your EC2 instance is inside a VPC or not.Instances inside VPC will retain their private IP addresses when stopped and restarted. But instances outside VPC (ie. EC2-Classic) will change their private IP address when stopped and restarted.Unfortunately, it's not possible to move an EC2 instance from EC2-Classic to EC2-VPC. However, in many cases, you can create an AMI image of the instance and launch a new instance from the AMI inside the VPC.
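If you go the AMI route, the flow is roughly as follows; this is a sketch in Python with boto3, and the instance ID, subnet ID and instance type are placeholders, not anything from your setup.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create an image of the existing EC2-Classic instance (placeholder ID).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="classic-migration")

# Wait until the AMI is ready before launching from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch a new instance from that AMI inside a VPC subnet (placeholder subnet).
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.micro",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)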
|
Yes, I read an article by Eric Hammond here where he mentions that the private IP would also change when restarting. A few months ago, when I first got an AWS cluster up for Hadoop, I used the internal IP to configure /etc/hosts and the internal IP wouldn't change (even when the instance is stopped, I can see the internal IP). To replicate this cluster as part of our corporate account, I created a few AMIs and used those to launch the instances. Now, the IPs are changing each time the machine is restarted. On checking the machines that did not have the IP change, there doesn't seem to be anything special about them. They are the same simple EBS-backed instances with volumes. Hmm, so what's the difference between them?
|
AWS instance private IP changing after stop/restart (did not happen before)
|
This is not a POST request; it is a GET. The API endpoint is like this:

http://sqs.us-east-1.amazonaws.com/123456789012/testQueue/
?Action=ReceiveMessage
&WaitTimeSeconds=10
&MaxNumberOfMessages=5
&VisibilityTimeout=15
&AttributeName=All;
&Version=2012-11-05
&Expires=2013-10-25T22%3A52%3A43PST
&AUTHPARAMS

I was not mentioning the request parameters, so I was not getting the result. That was the issue.
|
I have created a queue and also given the permission as (*).
The URL of my queue is https://sqs.us-west-2.amazonaws.com/123/Example

When I hit this URL it always gives the output as <UnknownOperationException/>. I checked the Chrome console and it shows me the following error. Can anybody suggest what the issue is? Or is it a bug?
|
UnknownOperationException is always returned by Amazon SQS
|
The tree and branch functionality works by listing objects in a bucket with a prefix and delimiter. The prefix specifies the current "folder" and the delimiter should be a '/' to prevent nested keys from being returned. For example, to list all of the "files" and "folders" inside the "photos/family/" folder of a bucket:

s3 = Aws::S3::Client.new
resp = s3.list_objects(bucket:'bucket-name', prefix:'photos/family/', delimiter:'/')
# the list of "files"
resp.contents.map(&:key)
#=> ['photos/family/summer_vacation.jpg', 'photos/family/parents.jpg']
# the list of "folders"
resp.common_prefixes
#=> ['photos/family/portraits/', 'photos/family/disney_land/']

The contents are the files, or leaf nodes, in a response. The common_prefixes are the directories. If you want to continue down to see the files and folders inside "photos/family/portraits/", then just call #list_objects again with a different prefix:

resp = s3.list_objects(bucket:'bucket-name', prefix:'photos/family/portraits/', delimiter:'/')
|
In version 1 of their SDK, Amazon provided some really useful methods that could be used to explore the contents of buckets using Tree, ChildCollection, LeafNode, BranchNode, etc. Unfortunately, I've had a difficult time replicating their functionality with version 2 of the SDK, which doesn't seem to include such methods. Ideally, I'd like to do something similar to the example below, which is taken from the v1 SDK.

tree = bucket.as_tree
directories = tree.children.select(&:branch?).collect(&:prefix)
#=> ['photos', 'videos']
files = tree.children.select(&:leaf?).collect(&:key)
#=> ['README.txt']Any ideas on how one might achieve this?
|
Amazon AWS: How to replicate tree/branch functionality from AWS Ruby SDK v1 in AWS Ruby SDK v2?
|
AWS CloudWatch isn't run on your instances. Its infrastructure is fully managed by Amazon and independent from your VPC. You can see it as a SaaS (Software as a Service). So you don't have to worry about that. For more information, please see: https://aws.amazon.com/cloudwatch/
|
Just wondering, does AWS CloudWatch run in the same VPC where all my applications are running? Is there any chance that AWS CloudWatch might go down and we may lose the monitoring capability? Do we need to have a monitoring mechanism to check CloudWatch's health? Thanks
|
AWS Cloudwatch Monitoring
|
Could it be that the table was created in a different region from the one the console is showing?
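One quick way to confirm is to list tables per region; the sketch below uses Python and boto3 just for illustration (the region list is only an example), but the same check works from the console's region selector or from the Java SDK.

import boto3

# See which region actually holds the table.
for region in ["us-east-1", "us-west-2", "eu-west-1"]:
    tables = boto3.client("dynamodb", region_name=region).list_tables()["TableNames"]
    print(region, tables)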
|
I have the following code written for DynamoDB table creation. I am running this with Eclipse. I have configured a Tomcat server, deployed my app on Tomcat, and opened the localhost URL.

DynamoDB dynamoDB = new DynamoDB(dynamo);
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition()
.withAttributeName("Id").withAttributeType("N"));
ArrayList<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(new KeySchemaElement().withAttributeName("Id")
.withKeyType(KeyType.HASH));
CreateTableRequest request1 = new CreateTableRequest()
.withTableName("abcdef")
.withKeySchema(keySchema)
.withAttributeDefinitions(attributeDefinitions)
.withProvisionedThroughput(new ProvisionedThroughput()
.withReadCapacityUnits(5L)
.withWriteCapacityUnits(6L));
System.out.println("Issuing CreateTable request for abcde");
Table table = dynamoDB.createTable(request1);
System.out.println("Waiting for abcde to be created...this may take a while...");
table.waitForActive();

It runs successfully. It also shows the table created successfully.
But when I open the Amazon DynamoDB console, it does not reflect the newly created table. Can anyone suggest what went wrong here? I have properly configured the secretKey and accessKey.
|
Creating table does not reflect in DynamoDB console
|
I believe there are two issues:

1. formatting in setenv.sh: you need \ to split across lines
2. the last line, $CATALINA_OPTS, tries to execute the arguments, hence "-Dcom.sun.management.jmxremote: not found"

Suggested fix:

CATALINA_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Djava.rmi.server.hostname=ec2-xx-xxx-xx-xx.ap-southeast-1.compute.amazonaws.com"
echo $CATALINA_OPTS
|
I am trying to use VisualVM on my system to monitor a Tomcat instance running on EC2. I tried the steps provided in multiple blogs about how to configure it, but still, when I try to run Tomcat it gives me the following error:

./catalina.sh: 5: /home/gvr/apache-tomcat-8.0.18/bin/setenv.sh: -Dcom.sun.management.jmxremote: not found

I added the following statement in server.xml:

<listener classname="org.apache.catalina.mbeans.JmxRemoteLifecycleListener"
rmiregistryportplatform="10001"
rmiserverportplatform="10002"
uselocalports="true" />And mysetenv.shis as followsCATALINA_OPTS="-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=ec2-xx-xxx-xx-xx.ap-southeast-1.compute.amazonaws.com"
$CATALINA_OPTS

Besides this, I have added catalina-jmx-remote.jar in Tomcat's lib directory. Could anyone please give me a hint about what is possibly going wrong? I have tried everything I found related to configuring VisualVM. I am running Tomcat 8.0.18 and Java 8 on Ubuntu.
|
Using VisualVM on tomcat 8 running on EC2
|
Figured out how to do it. The updated code for a question that shows an image is below.

#set( $image_url = "http://upload.wikimedia.org/wikipedia/commons/6/6f/Earth_Eastern_Hemisphere.jpg" )
<Question>
<QuestionIdentifier>question1</QuestionIdentifier>
<QuestionContent>
<Binary>
<MimeType>
<Type>image</Type>
<SubType>jpg</SubType>
</MimeType>
<DataURL>${image_url}</DataURL>
<AltText>Image</AltText>
</Binary>
<Text>What is this a picture of?</Text>
</QuestionContent>
<AnswerSpecification>
<SelectionAnswer>
<StyleSuggestion>radiobutton</StyleSuggestion>
<Selections>
<Selection>
<SelectionIdentifier>1a</SelectionIdentifier>
<Text>Earth</Text>
</Selection>
<Selection>
<SelectionIdentifier>1b</SelectionIdentifier>
<Text>Sun</Text>
</Selection>
</Selections>
</SelectionAnswer>
</AnswerSpecification>
</Question>
|
I am trying to add an external image URL to a Qualification Test using the Amazon Mechanical Turk command line tools. This requires editing the XML file titled "qualification.question" to include the image URL. If I wanted to insert the URL http://upload.wikimedia.org/wikipedia/commons/6/6f/Earth_Eastern_Hemisphere.jpg into the "qualification.question" code below, above the text "What is this a picture of?", how would I do this?

<Question>
<QuestionIdentifier>question1</QuestionIdentifier>
<QuestionContent>
<Text>What is this a picture of?</Text>
</QuestionContent>
<AnswerSpecification>
<SelectionAnswer>
<StyleSuggestion>radiobutton</StyleSuggestion>
<Selections>
<Selection>
<SelectionIdentifier>1a</SelectionIdentifier>
<Text>Earth</Text>
</Selection>
<Selection>
<SelectionIdentifier>1b</SelectionIdentifier>
<Text>Sun</Text>
</Selection>
</Selections>
</SelectionAnswer>
</AnswerSpecification>
</Question>
|
How to add images to a qualification test in Amazon Mechanical Turk command line tools?
|
To migrate your database, the best option is to use container_commands; they are commands that will run every time you deploy your application. There is a good example in the EBS documentation (Step 6):

container_commands:
01_syncdb:
command: "django-admin.py syncdb --noinput"
leader_only: true

The reason why you're getting an ImportError is that EBS installs your packages in a virtualenv. Before running arbitrary scripts in your application over SSH, first change to the directory containing your (latest) code with

cd /opt/python/current

then activate the virtualenv

source /opt/python/run/venv/bin/activate

and set the environment variables (that your script probably expects)

source /opt/python/current/env
|
I'm trying to run a Python script I've uploaded as part of my AWS Elastic Beanstalk application from my development machine, but can't figure out how to. I believe I've located the script correctly, but when I attempt to run it under SSH, I get an import error.

For example, I have a Flask-Migrate migration script as part of my application (pretty much the same as the example in the documentation), but after successfully SSHing to my EB instance with

> eb ssh

and locating the script with

$ sudo find / -name migrate.py

when I run it in the directory (/opt/python/current) where I located it with

$ python migrate.py db upgrade

at the SSH prompt I get

Traceback (most recent call last):
File "db_migrate.py", line 15, in <module>
from flask.ext.script import Manager
ImportError: No module named flask.ext.script

even though my requirements.txt (present along with the rest of my files in the same directory) has flask-script==2.0.5.

On Heroku I can accomplish all of this in two steps with

> heroku run bash
$ python migrate.py db upgrade

Is there equivalent functionality on AWS? How do I run a Python script that is part of an application I uploaded in an AWS SSH session? Perhaps I'm missing a step to set up the environment in which the code runs?
|
How do I run a Python script that is part of an application I uploaded in an AWS SSH session?
|
Not officially. But the incredible Mitch Garnaat has a GitHub repository with "missingcloud" bits. On that list is instance information. You can pick that out with your favorite language. Here's an example with a bit of jq (this is imperfect, maybe someone can help split these into instance:ramMB rows?):

$ curl --silent https://raw.githubusercontent.com/garnaat/missingcloud/master/aws.json | jq '[.services."Elastic Compute Cloud".instance_types|to_entries|.[]|.key,.value.ramMB]' | head -9
[
"c1.medium",
1700,
"c1.xlarge",
7000,
"c3.2xlarge",
15000,
"c3.4xlarge",
30000,
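If jq feels awkward, a few lines of Python turn the same aws.json into instance:ramMB rows (the field names below are the ones used in the jq command above; everything else is just a sketch):

import json
import urllib.request

URL = ("https://raw.githubusercontent.com/garnaat/"
       "missingcloud/master/aws.json")

with urllib.request.urlopen(URL) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Print one "instance_type: ramMB" row per instance type.
types = data["services"]["Elastic Compute Cloud"]["instance_types"]
for name, attrs in sorted(types.items()):
    print("%s: %s MB RAM" % (name, attrs.get("ramMB")))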
|
Is there a way to get the number of cores and amount of memory of an instance type from the command-line aws tool? Basically I want to access the data on http://aws.amazon.com/ec2/instance-types/ programmatically.
|
Query EC2 instance type attributes
|
My suggestion would be one of two things, either have two autoscaling groups - one for the readonly instances (i.e. the non-master), and then a second ASG for the master instance(s). Even if there is only one master instance at any time, you can still benefit by including it in its own autoscaling group by taking advantage of the ability for the ASG to detect when it has failed, and spin up a single new instance to replace it.Alternatively, leave the master instance out of the auto-scaling altogether, and just run it as a reserved instances - let the rest of the RO instances scale up and down as necessary.
|
I am using EC2 with autoscaling and load balancing to host my webapp. To guarantee consistency between the EC2 instances, I only want to allow access to the administration interface from one instance, so all write operations are executed on this instance. The other instances then periodically download copies of the changed files. So here's my question: can I have a designated "Master" instance in my autoscaling group which is slightly different (it runs a script for uploading the files that were written to)? Of course this instance should never be shut down, no matter what. All the other "Slave" instances are identical and can be created and terminated on demand. Is there some sort of configuration option for this, or can I do this with a policy?
|
AWS EC2 Autoscaling: Defining a master instance, which is never terminated
|
I had the same problem, and found that we needed two things, both of which were put into the .htaccess file:php_value upload_tmp_dir "/tmp"
php_value upload_max_filesize 10M

The upload directory must be owned by the web server account, e.g. "webapp" or "daemon", or be writable by that account. In addition, the max filesize must accommodate your uploads. In my case, the upload limit was 2M by default and my files were 4M. This resulted in an empty $_FILES array.
|
We use AWS Elastic Beanstalk to host PHP applications which include file upload facilities which aren't working. We have php.ini set the tmp_upload_dir to /tmp but it still doesn't work.We've just moved the site from another server, everything was working perfectly there, but EB doesn't seem to want to let us upload files.Here's an example of the code we are using:$imagePath = "/tmp/";
$allowedExts = array("gif", "jpeg", "jpg", "png", "GIF", "JPEG", "JPG", "PNG");
$temp = explode(".", $_FILES["img"]["name"]);
$extension = end($temp);
if ( in_array($extension, $allowedExts))
{
if ($_FILES["img"]["error"] > 0)
{
$response = array(
"status" => 'error',
"message" => 'ERROR Return Code: '. $_FILES["img"]["error"],
);
echo "Return Code: " . $_FILES["img"]["error"] . "<br>";
}
else
{
$filename = $_FILES["img"]["tmp_name"];
list($width, $height) = getimagesize( $filename );
move_uploaded_file($filename, $imagePath . $_FILES["img"]["name"]);
$response = array(
"status" => 'success',
"url" => $imagePath.$_FILES["img"]["name"],
"width" => $width,
"height" => $height
);
}
}
else
{
$response = array(
"status" => 'error',
"message" => 'something went wrong',
);
}
|
AWS Elastic Beanstalk file upload not working
|
Data Transfer OUT From us-east-1 Amazon EC2 To:

- Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, or Amazon SimpleDB in the same AWS Region: $0.00 per GB
- Amazon EC2, Amazon RDS, Amazon Redshift or Amazon ElastiCache instances, Amazon Elastic Load Balancing, or Elastic Network Interfaces in the same Availability Zone:
  - Using a private IP address: $0.00 per GB
  - Using a public or Elastic IP address: $0.01 per GB
- Amazon EC2, Amazon RDS, Amazon Redshift or Amazon ElastiCache instances, Amazon Elastic Load Balancing, or Elastic Network Interfaces in another Availability Zone or peered VPC in the same AWS Region: $0.01 per GB
- Another AWS Region or Amazon CloudFront: $0.02 per GB

Data Transfer OUT From us-east-1 Amazon EC2 To Internet:

- First 1 GB / month: $0.00 per GB
- Up to 10 TB / month: $0.12 per GB
- Next 40 TB / month: $0.09 per GB
- Next 100 TB / month: $0.07 per GB
- Next 350 TB / month: $0.05 per GB

Taken from - https://aws.amazon.com/ec2/pricing/on-demand/
|
AWS mentions in some documentation that there is a minimal outbound data transfer charge within a region (http://aws.amazon.com/pricing/), but other documentation says there is no charge (http://aws.amazon.com/ec2/faqs/). Which one is correct?
|
AWS Outbound data transfer charges
|
Get a connection:

conn = boto.ec2.connect_to_region("us-east-1")

Get your snapshots:

snaps = conn.get_all_snapshots(owner="self")

Iterate through the list and look at the start_time attribute:

snaps[0].start_time

Use dir(snaps[0]) to see all available attributes and find other things you need.
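Putting that together, one way to keep only the snapshots started before a given date looks like this (a sketch, still with boto 2; start_time is an ISO 8601 string such as '2013-12-30T08:15:30.000Z', so parsing the first 19 characters with datetime is one reasonable approach):

from datetime import datetime

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
cutoff = datetime(2014, 1, 1)

old_snaps = []
for snap in conn.get_all_snapshots(owner="self"):
    # start_time looks like '2013-12-30T08:15:30.000Z'
    started = datetime.strptime(snap.start_time[:19], "%Y-%m-%dT%H:%M:%S")
    if started < cutoff:
        old_snaps.append(snap)

print("%d snapshots started before %s" % (len(old_snaps), cutoff))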
|
I am writing a Python program to get a list of all the EBS snapshots in our account (owner=self) that were "started" (basically, created) before a certain date, then perform some other actions on that list. I don't think I can use filters in the get_all_snapshots() function because it only supports equality, not GT/LT operators. I believe AWS boto Get Snapshots in Time Period confirms this. So I suppose I have to get a list of all of them, then iterate through the list. However, the boto documentation (http://boto.readthedocs.org/en/latest/ref/ec2.html#module-boto.ec2.snapshot) isn't clear to me about exactly what methods/properties are available on the snapshot object. Any guidance here?
|
How to get list of all EBS Snapshots "started" before a certain date?
|
For anyone that runs into this issue: check your environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN). Apparently the CLI uses those before using anything in the credentials file. I'm on a Mac, so removing the offending environment variables from my ~/.profile file did the trick.
|
The AWS CLI returns me:

A client error (InvalidAccessKeyId) occurred when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.

However, I'm able to run hadoop distcp as well as s3cmd using the exact same credentials. What is the problem here?
|
Why does AWS cli give me InvalidAccessKeyId error, when Im able to use the same creds for s3cmd?
|
You need to install the mysql command line client. The info is already there in the screenshot. You need to install mysql-client-core-5.5 or mysql-client-core-5.6 depending on your RDS MySQL version. So, you may want to run something like the below to install these packages:

# apt-get install mysql-client-core-5.5
|
I am trying to configure an AWS server myself, using:

Info 1: EC2 as hosting. I selected Ubuntu 14 as the OS and installed Apache and PHP, and I have checked that PHP and HTTP work. I installed Apache and PHP using the command below:

sudo apt-get install apache2 php5 libapache2-mod-php5

Info 2: RDS as the MySQL database instance. It is working.

Now I have put an ambc.sql file on the EC2 system and am trying to import it into the MySQL RDS system. Please see the screenshot.

Issue 1: when we try to import the SQL from EC2 it shows the error "The program 'mysql' can be ..." (see the screenshot).

Issue 2: do I need to install some MySQL or PHP library which will connect EC2 with RDS and run the system?

Please help me, it is urgent. Thanks in advance.
|
Is need install mysql on AWS EC2 if i am using AWS rds as database instance?
|
I've been digging in the code for the transport app that I have been using. It seemed that it was picking up config settings from somewhere besides my Django project settings and was overriding them. A few years ago I was testing out Google Cloud Storage for a Google App Engine test project, which meant I installed the "Gsutils" package globally. Guess what? Gsutils uses Boto too! So once I found out that I could set a boto config file I started looking for that. On OSX, no ~/.boto file could be seen in the Finder or when listing the files in my home directory with ls -al. Alas, when I tried to create it with nano ~/.boto, voilà! There were heaps of settings already there from the time I used Gsutils. Once in there I disabled the #https_validate_certificates = True setting and everything works like a charm now.
|
I have an issue that is described in this ticket. I can't do collectstatic uploads with Django locally to our static.somesite.com since S3 adds s3.amazonaws.com to the URL and then invalidates their own *.s3.amazonaws.com certificate.
I have set a DNS pointer for static.somesite.com that points to the IP of the S3 service, and I have AWS_S3_SECURE_URLS = False set. Not sure how to solve it yet. This is the full error message. I understand completely why it is happening; there has to be a workaround? On our production server this works just fine. I just can't find the settings.

boto.https_connection.InvalidCertificateException:
Host static.somesite.com.s3.amazonaws.com returned an invalid certificate
(remote hostname "static.somesite.com.s3.amazonaws.com" does not match certificate)
{
'notAfter': 'Apr 9 23:59:59 2015 GMT',
'subjectAltName': (
('DNS', '*.s3.amazonaws.com'),
('DNS', 's3.amazonaws.com')),
'subject': (
(('countryName', u'US'),),
(('stateOrProvinceName', u'Washington'),),
(('localityName', u'Seattle'),),
(('organizationName', u'Amazon.com Inc.'),),
(('commonName', u'*.s3.amazonaws.com'),)
)
}
|
Django AWS S3 Invalid certificate when using bucket name "."
|
In your resource blocks, insert an asterisk between the two ":" in the arn lines, to specify all accounts, or replace it with your account number."arn:aws:ec2:us-east-1:*:instance/*"
"arn:aws:ec2:us-east-1:*:image/ami-*",
"arn:aws:ec2:us-east-1:*:subnet/*",
"arn:aws:ec2:us-east-1:*:network-interface/*",
"arn:aws:ec2:us-east-1:*:volume/*",
"arn:aws:ec2:us-east-1:*:key-pair/*",
"arn:aws:ec2:us-east-1:*:security-group/*"
|
I am using this policy to limit RunInstances to specific instance types and a specific region only. When I run the launch wizard or simulation under a test user I am getting an "implicitly denied" error. Here is the policy:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:us-east-1::instance/*"
],
"Condition": {
"StringEquals": {
"ec2:InstanceType": [
"t1.micro",
"m1.small"
]
}
}
},
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": [
"arn:aws:ec2:us-east-1::image/ami-*",
"arn:aws:ec2:us-east-1::subnet/*",
"arn:aws:ec2:us-east-1::network-interface/*",
"arn:aws:ec2:us-east-1::volume/*",
"arn:aws:ec2:us-east-1::key-pair/*",
"arn:aws:ec2:us-east-1::security-group/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:Describe*",
"ec2:CreateSecurityGroup",
"ec2:DeleteSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:CreateKeyPair"
],
"Resource": [
"*"
]
}
]
}

Could somebody point to the issue?
|
how to limit instance launch by instance type in AWS using IAM service
|
Firstly, you don't need to create an entire new instance, snap the EBS volumes of the old one, and attach the copies. If you're doing this to try to avoid service interruption, what happens when you switch the EIP from the old to the new instance? Yep - service interruption.Just stop the m1, reset it to m3, and start. There will be an outage, of course, but you'll be back in less than 5 minutes and you've saved yourself a chunk of work replicating volumes.As for EBS Optimised - do you really need that? Do you understand what it means, and what the consequences of NOT having it on the new instance are? If the answers to both are YES, then of course pick an m3 (or larger) instance type that supports it. If NO, research until you know what the feature gives you and whether you actually need it (you pay more with it active - don't spend more than you actually need to).
|
If I were to upgrade an amazon instance, I'd create a snapshot of the image and create the new instance from this image and then upgrade that instance.My question(s) is related to mongodb and the best way upgrade from a m1.large to a m3.large instance - basically m3's are cheaper and more powerful than the old m1's.I currently have mongodb running on the m1.large instance backed by 3 EBS Volumes for storage, journalling and logs (essentially the mongodb image config from the MarketPlace).When i've gone through to setup the new m3.large instance, I noticed that it's not EBS Optimized.Working with mongodb and the current config, I assume for optimal performance, it's desirable to go the EBS Optimized route - if that's the case, the best upgrade path is to go for m3.xlarge? Would I hit a big performance penalty if I went with a m3.large?And lastly....after taking a snapshot of an image (specifically an image backed with EBS Volumes), does the new image take that same config setup? I.E The new image will be backed by the same volumes?I know I can stop and start the current instance, but I want to minimise any downtime.Any help appreciated!
|
Upgrading amazon EC2 m1.large instance to m3.large with mongodb installed
|
Simply use os.makedirs(). This will create all intermediate directories if needed.

Recursive directory creation function. Like mkdir(), but makes all
intermediate-level directories needed to contain the leaf directory.
Raises an error exception if the leaf directory already exists or
cannot be created. The default mode is 0777 (octal). On some systems,
mode is ignored. Where it is used, the current umask value is first
masked out.
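For the use case in the question, a minimal sketch (assuming boto 2, where bucket is an S3 bucket object and key looks like 'a/number/of/nested/dirs/file.txt') creates the key's parent directories rather than a directory named after the file itself:

import os

def download(bucket, key):
    # Make 'a/number/of/nested/dirs' if needed, not the full file path itself.
    parent = os.path.dirname(key)
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)
    bucket.get_key(key).get_contents_to_filename(key)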
|
How do I create directories in Python given a list of files with paths that may or may not exist? I am downloading some files from S3 whose keys represent potentially deeply nested directories that may or may not exist locally. So, based on the key /a/number/of/nested/dirs/file.txt, how can I create /a/number/of/nested/dirs/ if it does not exist, and do it in a way that doesn't take forever to check for each file in the list? I am doing this because if the local parent directories do not already exist, get_contents_to_filename breaks.

My final solution using the answer:

for file_with_path in files_with_paths:
try:
if not os.path.exists(file_with_path):
os.makedirs(file_with_path)
site_object.get_contents_to_filename(file_with_path)
except:
pass
|
How do I create directories in Python given a list of files with paths that may or may not exist?
|
You can simply iterate over bucket objects and use the with_prefix method:

s3.buckets[YOUR BUCKET NAME].objects.with_prefix('videos/my_videos/college').each.collect(&:key)
#=> ["videos/my_videos/college/myfirst_day.mp4"]

OR use the as_tree method:

s3.buckets[YOUR BUCKET NAME].as_tree(prefix:'videos/my_videos/college').select(&:leaf?).collect(&:key)
#=> ['videos/my_videos/college/myfirst_day.mp4']

Obviously these are fictional since I have no access to your bucket, but take a look at ObjectCollection and Tree for more methods in the AWS SDK. There are quite a few methods available for bucket traversal; for example, Tree responds to children, which will list both LeafNodes (File) and BranchNodes (Directory). BranchNodes will then also respond to children, so you can make this recursive if needed. To get the suffix (e.g. just the filename) you could possibly patch these in:

class LeafNode
def suffix
@member.key.split(delimiter).pop
end
end
class S3Object
def suffix
@key.split("/").pop
end
endI have not fully tested these in any way but they should work for returning just the file name itself if it is nested inside a branch.
|
I have the following scenario.
Consider that in my case the AWS S3 folder structure is as follows:

- videos
  - my_videos
    - college

I have uploaded a video file, say myfirst_day.mp4, into college; the related key formed for it is "videos/my_videos/college/myfirst_day.mp4". Now I have to list all the files from the videos/my_videos/college directory.
How can I do it? For this I am using the aws-sdk gem.
|
list all the files from s3 using aws-sdk gem
|
You can publish a custom metric to AWS CloudWatch, then set up an autoscale trigger and scaling policy based on your custom metrics. Autoscaling can start the instance for you and will kill it based on your policy. You'll have to include the appropriate user data in the launch configuration to bootstrap your host. Just like user data for any EC2 instance, it could be a bash script, an Ansible playbook, or whatever your config management tool of choice is.
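For example, publishing a queue-depth custom metric from your monitoring agent could look like this in Python with boto3 (the namespace, metric name and value are made up for the sketch; CloudWatch creates the metric on first put):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Push the current depth of the high-intensity queue as a custom metric.
cloudwatch.put_metric_data(
    Namespace="MyApp/Workers",
    MetricData=[{
        "MetricName": "HighIntensityQueueDepth",
        "Value": 7,          # e.g. number of outstanding "big" jobs
        "Unit": "Count",
    }],
)

An alarm on that metric tied to a scale-up policy can then launch the big instance, and another alarm (e.g. the depth staying at zero) can scale it back down.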
|
I occasionally have really CPU-intensive tasks. They are launched into a separate high-intensity queue that is consumed by a really large machine (lots of CPUs, lots of RAM). However, this machine only has to run about one hour per day. I would like to automate deployment of this image on AWS, triggered by outstanding messages in the high-intensity queue, and then have it safely stopped once it is not busy. Something along the lines of:

1. Some agent (presumably my own software running on my monitor server) checks the queue size and determines there are x > x_threshold new jobs to be done (e.g. I want to trigger if there are 5 outstanding "big" jobs).
2. A specific AWS instance is started, registers itself with the broker (RabbitMQ) and consumes the jobs.
3. Once the worker has been idle for some t > t_idle (say, longer than 10 minutes), the machine is shut down.

Are there any tools I can use for this, to ease the automation process, or am I going to have to bootstrap everything myself?
|
Managing workers on AWS
|
In order to create an admin user for my Django app on Beanstalk, I created a custom Django command that I invoke in container_commands, so there is no need for human input at all! Moreover, I defined the user/password as environment variables so I can put my code under version control safely. The implementation of my command is something similar to this:

import os
from django.core.management.base import BaseCommand
from com.cygora.apps.users.models.User import User
class Command(BaseCommand):
def handle(self, *args, **options):
username = os.environ['SUPER_USER_NAME']
if not User.objects.filter(username=username).exists():
User.objects.create_superuser(username,
os.environ['SUPER_USER_EMAIL'],
os.environ['SUPER_USER_PASSWORD'])

then in my Beanstalk config:

container_commands:
02_create_superuser_for_django_admin:
command: "python manage.py create_cygora_superuser"
leader_only: true

PS: if you have never created a custom Django command before, all you have to do is create a management.commands package in your desired app (i.e. /your_project/your_app/management/commands/the_command.py); Django will load it automatically (and you can see it when typing python manage.py --help).
|
I've followed this youtube instruction from amazon to deploy django web app to AWS EB EC2. The website successfully ran. But I can not login to admin. The admin that came with django polls example. I recall that during the setup process, it prompt me for RDS setup and since my web app use MySQL, I had to pick RDS setup. When I setup the RDS, it did not prompt me to create a user, but only prompted me to create a password, which dutifully I did.https://www.youtube.com/watch?v=YJoOnKiSYwsSimilar instructions can be found on AWS, too.http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.htmlI've tried username 'root' and password is 'blank' which works on my local pc, but of course that would make things too simple.After that attempt failed, I did some searching in EC2 and RDS dashboard on AWS, and I found username=ebroot.So I tried username 'ebroot' and password [rds_password_from_setup], but that didn't work.I've tried many other combinations of usernames and passwords but nothing works. This might be a stupid question to ask the online community, but what do you suppose is my username and password that RDS might accept?
|
What is my user and pass? Deploy django on AWS EC2 but can not login admin
|
For eb 2.6:

Firstly, it's git aws.push (assuming you did git aws.config first). Alternatively, you can use the eb command (eb init, eb branch, eb push).

git checkout [tagname]
eb push # or git aws.push

For EB 3.1:

git checkout <tagname>
eb deploy <environment>

Also, AWS differentiates between Elastic Beanstalk CLI 2.6 and 3.1 by the former using lowercase eb and the latter using uppercase EB. At the command line they're still both run as eb.
|
I created an app on EB using the web interface (I switched to the UI approach since, using the CLI utils eb init + eb start, I was unable to configure a Postgres DB: a MySQL one is created automatically, and by setting "postgres" in the config file I was getting an exception like "you can't change dbengine type"!!)
My problem is that currently I'm unable to use git to deploy my app, and I have to create and upload a zip file using the web UI (which is a process I really hate, since I'm in a very experimental state and I deploy very often).
If I try to use git push.aws I get:

./.git/AWSDevTools/aws/dev_tools.rb:53:in `host': private method `split' called for nil:NilClass (NoMethodError)
from ./.git/AWSDevTools/aws/dev_tools.rb:112:in `signed_uri'
from .git/AWSDevTools/aws.elasticbeanstalk.push:86…

How can I use git to push and deploy a tag of my repository to EB in my current situation?
|
Deploy a git tag to Amazon Elastic Beanstalk
|
Documentation mentions that the Packages section is processed first. The order in which these are processed is as follows:

1. Packages
2. Files
3. Commands
4. Services
5. Container Commands
|
When deploying to Elastic Beanstalk, in what order are all the dependencies installed? For one part, I have all my project dependencies in requirements.txt. This includes PIL. But for PIL I need to install libjpeg and other such libraries (this is in .ebextensions/myapp.config):

packages:
yum:
libjpeg-devel: []
freetype-devel: []
zlib-devel: []
... rest of config fileOnly problem is, if pip is run first, I would have to reinstall Pillow which I do not know how to do
|
AWS Elastic Beanstalk Django - What happens first when deploying to EB, pip install -r requirements.txt or commands in configuration file
|
I have found an answer thanks to the AWS Discussion forum.
The right parameter setup is:

$param = array(
'Bucket' => $this->bucket,
'CopySource' => urlencode($this->bucket . $this->delimiter . $source_key) . '?versionId=' . $source_version_id,
'Key' => $dest_key
);
|
I'm trying to implement "restore" object by creating a copy of an older object version.I am using AWS PHP SDK 2, methodcopyObject, but I cannot find a way to specify versionID of the source object.AWS REST API documentation (ref) mentionsTo copy a different version, use the versionId subresource.but it is not mentioned in the SDK docs.I tried to add the versionID to the "CopySource" attribute, SDK docs say that it isThe name of the source bucket and key name of the source object, separated by a slash (/)but it did not work.$param = array(
'Bucket' => $this->bucket,
'CopySource' => urlencode($this->bucket . $this->delimiter . $source_key . $this->delimiter . $source_version_id),
'Key' => $dest_key
);
$result = $this->s3Client->copyObject($param);

Question: How can I specify the versionId of the source object?
|
AWS S3 CopyObject Version
|
The AWS Mobile SDKs support accessing SNS directly from the mobile device. If you're interested in seeing code demonstrating this on iOS, we included some in a sample we prepared for re:Invent 2013 called Mobile Photo Share.

The important thing to note when accessing SNS directly from the mobile device is that you'll want to restrict the credentials delivered to the device to only those services and resources you need to access. You can accomplish this via web identity federation or a token vending machine with an appropriately restricted policy.

If you want to learn more about the Mobile Photo Share app, we had two talks at re:Invent about the app and its architecture. The video and slides for those talks are available here:

- Building Cloud-Backed Mobile Apps
- Integrating Social Login Into Mobile Apps
|
I am using Amazon SNS Service for an iOS application that needs push notifications.I have figured most of the things, except for the part where I have to register my device tokens.Thisis where Amazon talks about it. It can be done manually or with the help of createPlatformEndpoint API which they obviously recommend for bulk uploads. My question is how we can directly register tokens from devices that will install the app later on. The documentation talks about a proxy server which I would want to avoid as of now. Isn't there a direct way of doing this, like where I can directly call a method and push the device token received in the application to my SNS Platform?This, is a possible duplicate except that it is in reference to Android.
|
What is the most efficient way to create end-points for an Amazon SNS service?
|
Rolling updates are not supported in Beanstalk for app version changes; they are supported only for environment changes. See the threads below. As of today, web deployments or version updates cause a brief downtime because Beanstalk updates all servers at once.

https://forums.aws.amazon.com/thread.jspa?messageID=502158
https://forums.aws.amazon.com/thread.jspa?messageID=328344
https://forums.aws.amazon.com/thread.jspa?messageID=506438

You can do something like this: https://forums.aws.amazon.com/message.jspa?messageID=258782
|
Do rolling updates control (honor the wait time between the push to instances) minor updates to application like "Changing a header field of a jsp page".I have set up the rolling update time to 1 hour and i have four instances. I am using eclipse IDE. I make a minor change on header/title bar and then click "Run on Server" in AWS EC2 APACHE TOMCAT7 (US-EAST-1). Beanstalk goes and updates all 4 instances at once. I was expecting it to wait for 1 hour each and all instances updated after 3 hours.. But it happens instantaneously..
|
Elastic Beanstalk: Rolling Updates
|
You should probably read more about the browser-based POST uploads feature of Amazon S3 in the AWS docs. Doing a POST upload to Amazon S3 requires you to send a special JSON policy doc along with your POST request and upload. The S3PostObject class in the AWS SDK for PHP is helpful for generating this policy, based on your provided options, as well as generating other form element values you need to include with your form/request.

Though I haven't tried it yet, you probably just need to swap out the S3 bucket endpoint for your CloudFront distribution endpoint to do the upload via the CloudFront edge location. Also, make sure your distribution is configured to accept POST requests. To get a more official answer to your question, I'd ask the Amazon CloudFront team on the CloudFront forum.
|
So as we all knowCloudfront now supports uploading to S3via the edge points.However, I'm not really sure how to do this? I know it'll not be fully featured (i.e. not support authorize headers and part uploads) but I'm keen to do straight uploads.I'm working in PHP, though it doesn't appear this is supported on the API yet as a method. Doing something rather simpler looks to require various authorisation milestones.Has anyone found the best way to do this yet or just some suggestions for the best way as I'm trying CURL POST and other such things to no avail.
|
Upload to S3 via Cloudfront
|
If you can see the web site from the EC2 instance, but not from other machines, there is probably one of the following things wrong:The DNS entry is not available or is wrong. Since you can RDP using that entry, this can't be the cause.Access to the correct port is being blocked by the security group or firewall. Since the instructions you referenced specifically say to make sure that both port 80 (HTTP) and 3389 (RDP) are open, and you know that is true from port 3389, this isn't likely, but is possible. Make sure that there are security group rules for both port numbers that look the same.The Windows server itself is refusing to allow outside access to port 80 on that address. This is unlikely, but not impossible, and the instructions specify that you should "disable Internet Explorer Enhanced Security Configuration", and at the end cover "Making Your WordPress Site Public". Make sure that the web server isn't configured to only respond to requests from localhost (127.0.0.1) and that there are no Windows firewall rules blocking port 80.I think that the likeliest problem is number 2, above. Perhaps you forgot to open port 80 in the security group, or typed a different port number or a different address range to open it to.
|
I have followed the steps provided by Amazon EC2. I have installed a WordPress website on the EC2 instance. My public DNS is given as ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com/
and the public IP is also given as xx-xxx-xx-xxx.

How do I view the website from any other machine?

Note: the EC2 instance is created and running now. I can view it on localhost as well as via the public DNS from within the EC2 instance using RDP (http://ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com/).
|
How to view website launched in Amazon EC2 instance?
|
Found out the answer: it should be an array of "clips", like so:

'Composition' => array(
array(
'TimeSpan' => array(
'StartTime' => '00:00:00.000',
'Duration' => '00:00:02.000'
))In my case I only needed 1 clip.More information about duration here:(Optional) Clip Start Time- (StartTime)
You can create an output file that contains an excerpt from the input file. Clip Start Time indicates the place in the input file where you want a clip to start. The format can be either HH:mm:ss.SSS (maximum value: 23:59:59.999; SSS is thousandths of a second) or sssss.SSS (maximum value: 86399.999). If you don't specify a value, Elastic Transcoder starts at the beginning of the input file.(Optional) Clip Duration(Duration)
The duration of your excerpt clip. The format can be either HH:mm:ss.SSS (maximum value: 23:59:59.999; SSS is thousandths of a second) or sssss.SSS (maximum value: 86399.999). If you don't specify a value, Elastic Transcoder clips from Clip Start Time to the end of the file.If you specify a value longer than the duration of the input file, Elastic Transcoder transcodes from Clip Start Time to the end of the file and returns a warning message.For Detailed info about aws transcoderhere
|
I'm using AWS SDK PHP. Using ->createJob( everything is fine, but when I add

'Composition' => array(
'TimeSpan' => array(
'StartTime' => '00:00:00.000',
'Duration' => '00:00:02.000'
)
)to one of the outputs, I get the following error:{"error":{"type":"Aws\ElasticTranscoder\Exception\ElasticTranscoderException","message":"Start of structure or map found where not expected.","file":"/Applications/XAMPP/xamppfiles/htdocs/breves/vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php","line":91}}I'm trying to cut the video.Any toughts?Amazon SDK API Developer Guide
|
Amazon Elastic Transcoder - Adding duration to output returning error
|
I finally got around the issue by using the --server-connect-attribute option, which is supposed to be used along with a --ssh-gateway attribute. Add --server-connect-attribute public_ip_address to the above knife ec2 server create command, which will make knife use the public_ip_address of your server.

Note: This hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
|
I am using Chef to create Amazon EC2 instances inside a VPC. I have allotted an Elastic IP to the new instance using the --associate-eip option of knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd" as it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.

bundle exec knife ec2 server create --subnet <subnet> --security-group-ids
<security_group> --associate-eip <EIP> --no-host-key-verify --ssh-key <keypair>
--ssh-user ubuntu --run-list "<role_list>"
--image ami-59590830 --flavor m1.large --availability-zone us-east-1b
--environment staging --ebs-size 10 --ebs-no-delete-on-term --template-file
<bootstrap_file> --verboseIs there any other work-around/patch to solve this issue?Thanks in advance
|
How to launch amazon ec2 instance inside vpc using chef without using a gateway machine?
|
As per the documentation, this command works on either --cidr or --source-group, so if you have multiple IP addresses then I would say the only option is to run the same command multiple times for the individual IP addresses (which would take the form of 1.1.1.1/32).

Or, you can list all the IP addresses in CIDR format (1.1.1.1/32) in a file (each IP address on a new line) and then run a for loop over it, running the above command for each iteration. E.g.:

for i in `cat ip_address_cidr.txt`; do aws ec2 revoke-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 --cidr $i; done

I have not tested the above command syntax, but that should do it so that you can revoke the rules in a single one-liner command.
|
How to remove all rules for a given port using the "aws ec2" CLI?

aws ec2 revoke-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 **--ALL-IP**
|
Clear rules of AWS security group for a particular port
|
You cannot add conditional policies to an S3 object lifecycle configuration that are based on the object's last access time, which is what your case requires. You can, however, transition objects to Glacier based on their age or on a specific date. I would like to think you could handle it in your application, but the S3 object returned does not include the last access time if you use the AWS SDK. Details here.
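If an age-based rule is acceptable instead, a rough sketch with the AWS SDK for Java might look like the following. The bucket name, the "archive/" prefix, and the 30-day threshold are assumptions, and the exact fluent method names can differ slightly between SDK versions.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Rule;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
import com.amazonaws.services.s3.model.StorageClass;

import java.util.Arrays;

public class GlacierLifecycle {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client(new BasicAWSCredentials("key", "secret"));
        // Move everything under the "archive/" prefix to Glacier 30 days after creation
        Transition toGlacier = new Transition()
                .withDays(30)
                .withStorageClass(StorageClass.Glacier);
        Rule rule = new Rule()
                .withId("move-to-glacier")
                .withPrefix("archive/")
                .withStatus(BucketLifecycleConfiguration.ENABLED)
                .withTransition(toGlacier);
        BucketLifecycleConfiguration config =
                new BucketLifecycleConfiguration().withRules(Arrays.asList(rule));
        s3.setBucketLifecycleConfiguration("my-bucket", config);
    }
}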
|
I am storing objects in S3, and I would like objects that have not been accessed in the last month to go to Glacier. After some research I don't think I can achieve this, but I hope to be wrong. When creating a lifecycle for an S3 bucket, the rule is based on the object creation date (not the last access date). Setting the storage class for the object will not help, according to http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html: "You cannot associate an object with the Glacier storage class as you upload it. You transition existing Amazon S3 objects to the Glacier storage class by using lifecycle management. For more information, see Object Lifecycle Management." Does anyone know how I can achieve this? Thanks
|
AWS storage: S3 object go to Glacier if never accessed in the last month
|
Figured it out. Go decodes percent-encoded characters back to plain text when processing a URL. I needed to use request.URL.Opaque. More info here: https://stackoverflow.com/a/17322831/733860

The issue was encoding-related. There was a %2F in my command (not displayed in my original question) that Go was converting to / when it should have been left as %2F (cURL was properly leaving it as %2F). Changing the %2F to %252F fixed the issue. It also appears that when creating a new HTTP request, Go will decode the percent-encoding back to plain text, so if you have %3D in the URL you submit to the HTTP request initializer, it will convert it to =. I thought an obvious solution would be to put %253D into the URL, but apparently there is a bug in Go that will convert %3D to = but NOT %25 to %. I had to use the opaque URL (request.URL.Opaque) to get around this.
|
I am attempting to download a track from http://freemusicarchive.org. Generally speaking, you can download a file by appending /download to the track URL, which responds with a redirect to the asset on S3. For example, try this link: http://freemusicarchive.org//music//Zola_Jesus//Live_at_WFMU_on_Scott_McDowells_Show_1709//Odessa/download

To see the redirect, put that link here: http://www.wheregoes.com/retracer.php

I am able to get the redirect location with code that looks like this:

req, err := http.NewRequest("GET", url, nil)
errHndlr(err)
transport := http.Transport{}
resp, err := transport.RoundTrip(req)
defer resp.Body.Close()
errHndlr(err)
redirect := resp.Header.Get("Location")

I have verified the redirect link works by printing it to the console and copy/pasting it into my browser, but when I call http.Get on the same URL, I get a "SignatureDoesNotMatch" error from AWS. If anyone can offer insight as to what is going wrong here, I would greatly appreciate it.
|
Golang - SignatureDoesNotMatch error from S3 when attempting GET request
|
One option is to handle termination yourself. Instead of configuring Auto Scaling to downscale your instance group, put the logic to determine if an instance needs to terminate in the instance itself. Once you decide that an instance needs to self-terminate, do whatever work you need to do before terminating, and then call the as-terminate-instance-in-auto-scaling-group command with the --decrement-desired-capacity option to terminate the instance. E.g.:

as-terminate-instance-in-auto-scaling-group --decrement-desired-capacity i-d15ea5e

See this AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=407743&tstart=0#407743.
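If the in-instance logic lives in a Java application rather than a shell script, a roughly equivalent call with the AWS SDK for Java could look like this sketch; the credentials and the instance ID are placeholders, and you would flush your last minute of data before making the call.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClient;
import com.amazonaws.services.autoscaling.model.TerminateInstanceInAutoScalingGroupRequest;

public class SelfTerminate {
    public static void main(String[] args) {
        AmazonAutoScalingClient autoScaling =
                new AmazonAutoScalingClient(new BasicAWSCredentials("key", "secret"));
        // ... flush the last minute of batched data here, before asking to be terminated ...
        TerminateInstanceInAutoScalingGroupRequest request =
                new TerminateInstanceInAutoScalingGroupRequest()
                        .withInstanceId("i-d15ea5e")
                        .withShouldDecrementDesiredCapacity(true);
        autoScaling.terminateInstanceInAutoScalingGroup(request);
    }
}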
|
I have an application that is constantly gathering data from active connections and then writing compiled/batched data at the end of every minute. I have Amazon Auto Scaling working with these servers. The problem is that when the group is scaled down, I need the servers to finish writing their last minute's worth of data before termination occurs after being removed from the ELB. Is there any way to remove the instance from the load balancer and then have a wait period of X minutes before terminating the instance? (Ideally I would wait 2-5 minutes before terminating the instance.) Any guidance would help. Thanks
|
AWS Auto Scaling - Down scale wait x minutes before server termination
|
The CORS headers do not affect the same-origin policy for iframes in Safari. You can communicate between the frames using postMessage, or you could attach a subdomain from mydomain.com to your S3 bucket and relax the same-origin policy by setting document.domain (this method only works to communicate between subdomains of the same domain, it doesn't work between different domains). You can learn more about iframe communication from this answer on StackOverflow: Ways to circumvent the same-origin policy
|
I just changed my blog from wordpress to django-zinnia. Zinnia uses a WYMeditor (https://github.com/wymeditor/wymeditor) iframe within django-admin for blog post text and content entry, and right now I can't access the iframe due to a same-origin issue. The error I'm seeing in browser console is:

Blocked a frame with origin "http://www.mydomain.com" from accessing a frame with origin "http://mybucket.s3.amazonaws.com".
Protocols, domains, and ports must match.
WYMeditor.WymClassSafari.initIframe
onload

Is there a parameter I can update in my CORS configurations for the bucket to allow the iframe to load cross-origin? I already have <AllowedOrigin>http://www.mydomain.com</AllowedOrigin> within my current CORS rules:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>http://mydomain.herokuapp.com</AllowedOrigin>
<AllowedOrigin>http://mydomain.com</AllowedOrigin>
<AllowedOrigin>http://www.mydomain.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Content-*</AllowedHeader>
<AllowedHeader>Host</AllowedHeader>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
|
Same origin Issue with iframe loaded from AWS S3
|
Once you have the Credentials, you create a BasicSessionCredentials object and pass that into the constructor for the AmazonCloudFormationClient. For example:

// Package the temporary security credentials as
// a BasicSessionCredentials object, for an Amazon S3 client object to use.
BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
sessionCredentials.getAccessKeyId(),
sessionCredentials.getSecretAccessKey(),
sessionCredentials.getSessionToken());
// The following will be part of your less trusted code. You provide temporary security
// credentials so it can send authenticated requests to AWS CloudFormation.
AmazonCloudFormationClient client = new AmazonCloudFormationClient(basicSessionCredentials);

I hope that helps!
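To obtain the temporary sessionCredentials in the first place, a minimal sketch using the SDK's STS client could look like the following; the federated user name, the policy document, and the duration are assumed values you would adapt to your setup.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetFederationTokenRequest;

public class TemporaryCredentialsExample {
    public static void main(String[] args) {
        // Long-term credentials of the trusted account or IAM user making the STS call
        AWSSecurityTokenServiceClient sts =
                new AWSSecurityTokenServiceClient(new BasicAWSCredentials("key", "secret"));
        GetFederationTokenRequest request = new GetFederationTokenRequest()
                .withName("stack-launcher")       // name of the federated (temporary) user
                .withDurationSeconds(3600)        // temporary credentials valid for 1 hour
                .withPolicy("{\"Statement\":[{\"Effect\":\"Allow\","
                        + "\"Action\":\"cloudformation:*\",\"Resource\":\"*\"}]}");
        Credentials sessionCredentials = sts.getFederationToken(request).getCredentials();
        // sessionCredentials is what feeds the BasicSessionCredentials shown above
    }
}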
|
I can get a Credential from a FederatedTokenResult by querying AWS. Now, I want that credential to be used by another service to create an application stack using the CloudFormation API - I can create the stack using my root account, which I don't want. But as I read from GetFederationToken and the Credential page, I must pass the temporary credentials to the service API in order to use them. Is it possible to use temporary users (given permissions) to launch a stack and create a new key pair? Any links or code snippets are highly appreciated. It seems the Ruby SDK provides for session tokens. How do I get it done in Java? At the moment I create the stack similar to the CloudFormation sample given with the AWS SDK, which does not use temporary credentials.
|
How to pass Session Token of temporary user that is retrieved with FederatedTokenResult to AWS - Java SDK?
|
If you want to access your Laravel app by server IP, you need to edit your httpd.conf file (usually in /etc/apache2 or /etc/httpd) and set the DocumentRoot option to the right directory:

DocumentRoot /var/www/laravel/public

and then restart Apache.
|
I'm brand new to AWS and this has got me stumped. I'm trying to install Laravel 4 on an EC2 instance running the Amazon Linux AMI. I don't have a domain for this; I'm just using the free tier and trying it out. Laravel needs the laravel/public folder as the document root, but I can't work out how to do this. I've read loads of things about the conf.d folder, the vhosts file, and the httpd.conf file, and I don't really understand how it all fits together. Can someone help me and tell me how I can set my document root so that when I visit my Elastic IP address it loads up correctly? Thanks
|
Installing Laravel on EC2 Instance
|
I've been looking at some request examples and, as far as I know, it's not REST, as it's not using resources in the URL, only parameters. Examples:

Create topic:

http://sns.us-east-1.amazonaws.com/
?Name=My-Topic
&Action=CreateTopic
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2010-03-31T12%3A00%3A00.000Z
&AWSAccessKeyId=(AWS Access Key ID)
&Signature=gfzIF53exFVdpSNb8AiwN3Lv%2FNYXh6S%2Br3yySK70oX4%3D

Add Permission:

http://sns.us-east-1.amazonaws.com/
?TopicArn=arn%3Aaws%3Asns%3Aus-east-1%3A123456789012%3AMy-Test
&ActionName.member.1=Publish
&ActionName.member.2=GetTopicAttributes
&Label=NewPermission
&AWSAccountId.member.1=987654321000
&AWSAccountId.member.2=876543210000
&Action=AddPermission
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2010-03-31T12%3A00%3A00.000Z
&AWSAccessKeyId=(AWS Access Key ID)
&Signature=k%2FAU%2FKp13pjndwJ7rr1sZszy6MZMlOhRBCHx1ZaZFiw%3D

More info: http://aws.amazon.com/en/sns/faqs/
|
Is Amazon Simple Notification Service (SNS) a RESTful web service? On reading the Amazon SNS documentation, there is nothing written about a RESTful service. Thanks
|
Amazon Simple Notification Service is RESTFUL web service?
|
There are two things at play here:

The file system path
The URL path

If you're running an Amazon Linux image, your web content should be deployed inside /var/www/html -- as is the case with just about every reasonable Linux installation. If your index page is stored at /var/www/html/index.php, then your URL will be http://123.45.678.910/index.php. If you're trying to access http://123.45.678.910/var/www/restAPI/index.php, it means that you uploaded your file to /var/www/html/var/www/restAPI/index.php. Make sense?
|
I have a web application which is currently working fine on my local machine, and I am now trying to get it to work on EC2. I transferred the index.php file into the folder /var/www and I am able to access it by visiting my Elastic IP (for example, http://123.45.678.910/). The trouble is that I also added the folder named restAPI into the folder /var/www, which in turn has several files. When I try to access restAPI/index.php by going to the URL http://123.45.678.910/var/www/restAPI/index.php, it gives me a 404 error.
|
EC2 web application folder structure
|
According to the Amazon EC2 Documentation, a security group is just a single point for firewall settings applied to a given instance:

A security group acts as a firewall that controls the traffic allowed
to reach one or more instances. When you launch an instance, you
assign it one or more security groups. You add rules to each security
group that control traffic for the instance.

In Windows Azure you have to set these rules on a per-instance or per-service basis; there is no way to define some rules and apply them automatically to all instances. But you can use PowerShell cmdlets for automating this task for your services.

Firewall rules apply mostly to PaaS: for your web/worker role services and for SQL Azure. In the case of IaaS there are two sides: your VM with a custom software firewall (depending upon your OS etc.) and the endpoints you create and manage in the Azure Portal that relay in- and outbound traffic to your VM.
|
What is the AWS security groups equivalent in Azure? If there is any in Azure, is it only for the PaaS services or also for IaaS?
|
AWS security groups equivalent in azure
|
You have to create a new object for S3 via new:

var AWS = require('aws-sdk');
AWS.config.update({region: 'eu-west-1'});
var s3 = new AWS.S3();

which should work without any problem.
|
I'm using the AWS Node.js API (aws-sdk) version 1.0.0 on Node version 0.11.2. I get an error simply constructing the API object:

var AWS = require('aws-sdk');
AWS.config.update({region: 'eu-west-1'});
var s3 = AWS.S3();

The error is:

/.../node_modules/aws-sdk/lib/service.js:25
var ServiceClass = this.loadServiceClass(config || {});
^
TypeError: Object #<Object> has no method 'loadServiceClass'
at Object.Service (/.../node_modules/aws-sdk/lib/service.js:25:29)
at Object.features.constructor [as S3] (/.../node_modules/aws-sdk/lib/util.js:405:24)
at ReadStream.<anonymous> (/.../server.js:92:22)
at ReadStream.EventEmitter.emit (events.js:97:17)
at fs.js:1492:10
at Object.oncomplete (fs.js:94:15)

I get the same error with Node 0.8.23, 0.9.12 and 0.10.5 too. I can't find any reference to this error anywhere, so it obviously doesn't happen to anyone else! What am I doing wrong?
|
AWS Node.js API error
|
I tried a different JRE (1.6) and now I could deploy my application. It is really weird; I don't know why JRE 1.7 doesn't work.
|
I wrote an AWS Java web project, and it runs on my local server. But when I follow http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Java.sdlc.html to deploy the application on Elastic Beanstalk, I only get a blank page. I also tried uploading the WAR file to Elastic Beanstalk, but still got a blank page. I SSHed into the EC2 instance and found no folders for my application. I think there should be something in a tomcat7/myapp folder. Could anyone tell me why, and how to deploy a Java app to it? I also tried the travelLog example, but still cannot get the page to display.
|
how to deploy java web application to AWS elastic beanstalk? [closed]
|
Yes, I have. However, since the other answers were written, rather than use generic Postgres drivers, you should use the customised Redshift drivers provided by Amazon. The answers you are looking for are here: http://docs.aws.amazon.com/redshift/latest/mgmt/configure-odbc-connection.html
|
Is it possible to use Amazon Redshift as the data source for an Excel pivot table? Googling this question didn't yield any obvious answers. Thanks.
|
Has anyone used Redshift to source an Excel pivot table?
|
Please use the list() method to get a list of your files, then use the get() method to get each file.

class S3 extends AmazonS3Client {
final String bucket;
S3(String u, String p, String Bucket) {
super(new BasicAWSCredentials(u, p));
bucket = Bucket;
}
String get(String k) {
try {
final S3Object f = getObject(bucket, k);
final BufferedInputStream i = new BufferedInputStream(f.getObjectContent());
final StringBuilder s = new StringBuilder();
final byte[] b = new byte[1024];
for (int n = i.read(b); n != -1; n = i.read(b)) {
s.append(new String(b, 0, n));
}
return s.toString();
} catch (Exception e) {
log("Cannot get " + bucket + "/" + k + " from S3 because " + e);
}
return null;
}
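    // Note: listObjects returns only the first page of results (up to 1000 keys).
    // For a bucket with 100,000+ objects, keep calling listNextBatchOfObjects()
    // while the returned listing isTruncated().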
String[] list(String d) {
try {
final ObjectListing l = listObjects(bucket, d);
final List<S3ObjectSummary> L = l.getObjectSummaries();
final int n = L.size();
final String[] s = new String[n];
for (int i = 0; i < n; ++i) {
final S3ObjectSummary k = L.get(i);
s[i] = k.getKey();
}
return s;
} catch (Exception e) {
log("Cannot list " + bucket + "/" + d + " on S3 because " + e);
}
return new String[]{};
}
}
|
I have a large number of files that need to be downloaded from an S3 bucket. My problem is similar to this article, except I am trying to run it in Java.

public static void main(String args[]) {
AWSCredentials myCredentials = new BasicAWSCredentials("key","secret");
TransferManager tx = new TransferManager(myCredentials);
File file = <thefile>
try{
MultipleFileDownload myDownload = tx.downloadDirectory("<bucket>", null, file);
System.out.println("Transfer: " + myDownload.getDescription());
System.out.println(" - State: " + myDownload.getState());
System.out.println(" - Progress: " + myDownload.getProgress().getBytesTransfered());
while (myDownload.isDone() == false) {
System.out.println("Transfer: " + myDownload.getDescription());
System.out.println(" - State: " + myDownload.getState());
System.out.println(" - Progress: " + myDownload.getProgress().getBytesTransfered());
try {
// Do work while we wait for our upload to complete...
Thread.sleep(500);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
} catch(Exception e){
e.printStackTrace();
}
}

This was adapted from the TransferManager class example for multiple uploads. There are well over 100,000 objects in this bucket. Any help would be great.
|
Download a Large Number of Files Using the Java SDK for Amazon S3 Bucket
|
The latest version allows you to map different branches to different environments; see Announcement: Deploy Git Branches to Multiple Elastic Beanstalk Environments:

Starting today, you can use eb and Git to deploy branches to multiple
Elastic Beanstalk environments. You can also manage and configure
multiple Elastic Beanstalk environments using eb. For example, you can
configure eb and Git to deploy your development branch to your staging
environment and deploy your release branch to your production
environment. [...]
|
I have two different environments running off of the same git repository. it looks like in the AWS console tools for git and elastic beanstalk, I can only connect one environment at a time, is there anyway to have it push to both of my environments at the same time?
|
aws.push to more than one environment
|
You just need to find out where your code is located on the server. SSH to one of the instances, and then you can use the Python interactive shell to run your Django code for debugging, use the manage.py commands for database debugging, tests, etc.

Once you have connected to the instance, it's just an OS.
|
I am new to web development. This is probably a dumb question, but I could not quite find an exact answer or tutorial that could help me. The company I am working at has its site (which is built in Python/Django) hosted on Amazon EC2. I want to know where to start with debugging this production site and checking the logs and databases that are stored there. I have the account information, but is there any way I can access all of this using a command line (like an Ubuntu shell), or a tutorial for the same?
|
How can I debug python web site on amazon EC2?
|
I think what you are looking for is exactly the OpenStack Heat project: http://wiki.openstack.org/Heat

The project is under active development.
|
Is there any technology like CloudFormation for AWS that would work on any IaaS-based cloud to do the same thing? I mean, you write it once and then it runs on any IaaS-based cloud platform like Azure, AWS, OpenStack, and so on?
|
Cloud Agnostic Tool On Any IaaS Based Cloud
|