Response | Instruction | Prompt
---|---|---|
I was looking for this functionality today and stumbled across the referenced thread. It was, coincidentally, updated today:

Hello, Thanks for your input. I have submitted a feature request on your behalf to export WAF events to S3 for long-term analysis. Best Regards, albertpataws

The lack of this feature strikes me as being almost as odd as the fact that I can't change timezones for graphs.
|
How do you route AWS Web Application Firewall (WAF) logs to an S3 bucket? Is this something I can quickly do through the AWS Console? Or would I have to use a Lambda function (invoked by a CloudWatch timer event) to query the WAF logs every n minutes?

UPDATE: I'm interested in the ACL logs (source IP, URI, matched rule, request headers, action, time, etc.).

UPDATE (05/15/2017): AWS doesn't provide an easy way to view/parse these logs. You can get a "random sample" via the get-sampled-requests command, which isn't acceptable...

Gets detailed information about a specified number of requests--a
sample--that AWS WAF randomly selects from among the first 5,000
requests that your AWS resource received during a time range that you
choose. You can specify a sample size of up to 500 requests, and you
can specify any time range in the previous three hours.

http://docs.aws.amazon.com/cli/latest/reference/waf/get-sampled-requests.html

Also, I'm not the only one experiencing this issue: https://forums.aws.amazon.com/thread.jspa?threadID=220202
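For what it's worth, a minimal sketch of pulling such a sample with boto3 (the web ACL and rule IDs are placeholders; this only retrieves the sampled data described above, not full logs):

import boto3
from datetime import datetime, timedelta, timezone

waf = boto3.client("waf")  # classic WAF; use "waf-regional" for regional ACLs
end = datetime.now(timezone.utc)
resp = waf.get_sampled_requests(
    WebAclId="YOUR_WEB_ACL_ID",   # placeholder
    RuleId="YOUR_RULE_ID",        # placeholder
    TimeWindow={"StartTime": end - timedelta(hours=1), "EndTime": end},
    MaxItems=500,                 # API maximum
)
for item in resp["SampledRequests"]:
    # each sampled request carries the action taken plus client IP, URI, headers, etc.
    print(item["Timestamp"], item["Action"], item["Request"]["ClientIP"], item["Request"]["URI"])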
|
AWS WAF - Auto Save Web Application Firewall logs in S3
|
I'm assuming 'OAuth2 server' in your question means the thing that validates tokens. You don't state whether your app is actually issuing tokens, or what type of tokens are issued.

The best option is probably subjective, but my preference has always been to use Custom Authorizers, as this then becomes a reusable component for other resources.

Swagger imports into API Gateway aside, you can manage authorization in your app if you want to; it just becomes the first thing you deal with when a new request is received. Just make sure the Authorization header is mapped in API Gateway so it heads downstream.
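For illustration only (not part of the original answer), a minimal Python Lambda token authorizer might look like the sketch below; the token check is a placeholder for real OAuth2 token validation:

def handler(event, context):
    # API Gateway passes 'authorizationToken' and 'methodArn' for TOKEN authorizers
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "valid-token-placeholder" else "Deny"  # placeholder check
    return {
        "principalId": "user",  # placeholder principal
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }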
|
I’m planning on building a user management Java API and deploying it in WildFly. The API specification will be done using Swagger. Then I will create a Docker image with WildFly + the application and create a container from that image on AWS ECS (EC2 Container Service). The next step is to import the API’s Swagger specification into AWS API Gateway and forward the requests to the created AWS ECS container.

My question: what is the best option to implement an OAuth2 server?

1) Create it in a Lambda function and use it as a Custom Authorizer in AWS API Gateway?

2) Create it in a new Java application (on the same or a new WildFly container), therefore not using AWS API Gateway’s Custom Authorizer option? Is this even possible, since the requests will be received from AWS API Gateway? I ask this because when trying to import a Swagger specification with an OAuth2 security definition, AWS API Gateway gives the following error:

Your API was not imported due to errors in the Swagger file. Unsupported security definition type 'oauth2' for 'oauth'. Ignoring.

As a side note, since all the future clients of the API will be developed by myself, I’m planning on using the Resource Owner Password Credentials Grant on my OAuth2 server.
|
AWS API Gateway + AWS ECS + OAuth2 Password Grant
|
Try to run it with -x (print commands and their arguments as they are executed) to debug, and try changing the mode to 000777:

files:
  "/home/ec2-user/myfile" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -xe
|
I've done this before a long time ago, but now it's not working... :)

I am trying to use EBExtensions in an Elastic Beanstalk application. I created a vanilla Elastic Beanstalk environment with no configuration beyond the defaults. I gave it an application version that had a directory structure like the following:

.ebextensions
  40testextension.config
app.js
other files

The important part is that I have a folder called .ebextensions at the root of my deployable artifact, which is where I believe it should be located. The 40testextension.config file inside that folder has the following contents:

files:
  "/home/ec2-user/myfile" :
    mode: "000755"
    owner: root
    group: root
    content: |
      # This is my file
      # with content

I uploaded that version when creating the environment, and the environment created successfully. But when I look for that file, it is not present. Furthermore, when I do a recursive grep for that ebextension file name in the logs at /var/log, I only get one result:

./eb-activity.log: inflating: /tmp/deployment/application/.ebextensions/40testextension.config

Having looked at the logs, it seems that the file is present when the artifact gets pulled down to the host, but the ebextension never gives any indication of running. What am I missing here? I've done this in the distant past and things have worked very nicely, but this time I can't seem to get the thing to be executed by the Beanstalk deploy lifecycle.
|
AWS Elastic Beanstalk - EB Extensions Not Working
|
I believe you might have some static variables there. In the Java environment, AWS Lambda keeps static variables in memory across multiple Lambda executions. So if you have a static map and you add entries to it with each Lambda execution, it stays in memory. This might be the case for Node.js too.
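The effect is easy to reproduce in any runtime; a minimal illustrative Python sketch (not from the original answer):

# Module-level objects survive as long as the container is reused ("warm" invocations)
cache = []

def handler(event, context):
    cache.append(bytearray(1024 * 1024))  # grows by ~1 MB on every warm invocation
    local = bytearray(1024 * 1024)        # handler-local data is freed after the handler returns
    return {"cached_mb": len(cache)}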
|
I am executing a Lambda Node.js function that has the following configuration:

1) Max memory: 512 MB
2) Timeout: 20 seconds

Memory consumption for a single execution: 100 MB. It takes around 100 MB to execute a single function.

What I noticed: when the Lambda function is executed multiple times, the memory consumed keeps increasing from 100 MB to 128 MB to 155 MB and so on... When it reaches the max memory (512 MB), execution stops and I get the following error: Process exited before completing request. When tried after a few minutes, the memory is cleaned up and it again starts from 100 MB.

Is there any way to clean up the used memory in a Lambda function? If not, is there any other way to tackle this problem?

EDIT: I am using this Lambda function to generate an image from canvas using node-canvas.
|
AWS Lambda function execution stops
|
You forgot to add var Pusher = require('pusher');
|
I am trying to use AWS Lambda to trigger a Pusher notification to the browser. The same code works fine when run locally but fails to connect to Pusher servers when run on Lambda:

exports.handler = function (event, context) {
var pusher = new Pusher({
appId: '<id>',
key: '<key>',
secret: '<secret>',
cluster : "eu"
});
console.log(pusher);
pusher.trigger('test_channel', 'my_event', {"message": "hello world"});
context.succeed('hello world');
};

Any ideas why it does not work, or how to make it work?
|
AWS Lambda and Pusher integration. How to send notification?
|
I had a similar problem with OVA images exported from VirtualBox. In my case, converting it to raw image format worked. So if you are using VirtualBox, import the OVA into it, and then convert the VDI storage drive to raw. For example:

VBoxManage clonehd "/mnt/b/VirtualBox VMs/ub.vdi" ./ub.img --format raw

Then you upload ub.img to S3, and use that in your containers.json file for the aws ec2 import-image command. For example:

[
{
"Description": "My Server OVA",
"Format": "raw",
"Url": "s3://<your-s3-bucket>/ub.img"
}
]

The command:

aws ec2 import-image --description "My server VM" --disk-containers "file:///./containers.json"

and to monitor the import process:

aws ec2 describe-import-image-tasks --import-task-ids <import-id-from-previous-command>
|
I am trying to import an OS VMDK file (OVA/OVF) of Ubuntu Server 14.04 to AWS, but I am facing the below error, even though the machine seems to have proper partitioning and volumes. This happens only in the case of Ubuntu Server, while Windows Server machines are successfully imported. I am trying to import using the ec2-api-tools only.
|
AWS ec2 import-instance error-No valid partitions. Not a valid volume.[Client error]
|
Federated users require temporary access keys, which you can grant with aws sts assume-role.
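For illustration, the equivalent call in Python with boto3 (the role ARN and session name are placeholders):

import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnly",  # placeholder role
    RoleSessionName="federated-user-session",           # placeholder session name
)
creds = resp["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken, Expiration
print(creds["AccessKeyId"], creds["Expiration"])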
|
We're using Auth0 to give (federated) users access to AWS (we've followed these instructions for setup: https://auth0.com/docs/integrations/aws#sso-with-the-aws-dashboard).

In Auth0 we've set up a simple rule system where the federated user's group membership maps to one of two different IAM roles, which gives the user either full access or read-only access (or no access at all) in the AWS console. However, I'm struggling to see how I can provide federated users with the means to get an access key id/secret linked to their account. Our wishlist is:

- The access key id/secret is unique per federated user, and as such is void if the federated user is deleted from the identity provider.

I could manually provision an IAM role per federated user and link each user to his/her "personal" IAM role, but I'd obviously prefer not to. All in all, I guess I'd like there to be a "linked" IAM user representing each federated account. So I guess my question is: how do I allow my federated users access to personal access key ids in AWS?
|
How can I supply federated users with an aws access key id/secret?
|
You can check out Nagios or Ganglia for cluster health, but you can't see the jobs running on Spark with these tools.
|
I am running a Spark cluster on AWS EMR. How do I get all the details of the jobs and executors that are running on AWS EMR without using the Spark UI? I am going to use it for monitoring and optimization.
|
monitoring spark cluster in AWS EMR without spark UI
|
I had the same problem with a Node.js Elastic Beanstalk app. However, I was able to get around it by updating the listener/certificate settings via the AWS EC2 console (https://console.aws.amazon.com/ec2/), in the Load Balancers section (under LOAD BALANCING). I was updating the certificate for a staging version of a cloned environment; this was the only way I could assign a different certificate to the staging environment. See more at http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
|
I'm a pretty new developer and deployed my first Django app via Elastic Beanstalk. I want to serve https requests and have configured my SSL certificate and have my load balancer set up correctly. When I go into EB > Configuration > Secure listener port and set it to 443, I get the following error upon saving:

LoadBalancerHTTPSPort: You have specified both the @deprecated (:default.aws:elb:loadbalancer:LoadBalancerHTTPSPort)
option as well as one in the new aws:elb:listener:443 namespace.
The :default.aws:elb:loadbalancer:LoadBalancerHTTPSPort option will be ignored.

Not sure what I'm missing, because I'm still not able to serve https requests.
|
Django Elastic Beanstalk App - Cannot Set Secure Listener Port to 443: LoadBalancerHTTPSPort
|
Your best bet for this will be to use a stack set in CloudFormation.

AWS CloudFormation StackSets extends the functionality of stacks by
enabling you to create, update, or delete stacks across multiple
accounts and regions with a single operation. Using an administrator
account, you define and manage an AWS CloudFormation template, and use
the template as the basis for provisioning stacks into selected target
accounts across specified regions.

With a stack set, you can specify the accounts and regions to which you want to deploy your Lambda. You will likely want to put the Lambda code in an S3 bucket that you can then reference from your CloudFormation template. Then it is easy (and simple) to deploy to a new region - just add that region to the stack set.
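A rough sketch of driving this from Python with boto3 (the stack set name, template URL, account ID, and regions are all placeholders):

import boto3

cfn = boto3.client("cloudformation")  # run from the administrator account
cfn.create_stack_set(
    StackSetName="my-lambda-stackset",                              # placeholder name
    TemplateURL="https://s3.amazonaws.com/my-bucket/lambda.yaml",   # placeholder template in S3
    Capabilities=["CAPABILITY_IAM"],
)
# Deploying to another region later is just another create_stack_instances call
cfn.create_stack_instances(
    StackSetName="my-lambda-stackset",
    Accounts=["123456789012"],           # placeholder account
    Regions=["us-east-1", "eu-west-1"],  # target regions
)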
|
I have deployed a Lambda in the US East region. There is a need to deploy the same Lambda in multiple regions. Is there a simple way (in the portal) to do it? Or do I have to manually create these Lambdas in every region?
|
Is it possible to deploy same aws lambda jar in multiple regions at once?
|
Explicitly set your credentials so they are the same as the CLI's, using the environment variables:

echo $ACCESS_KEY
echo $SECRET_KEY

import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
# boto3 equivalent of boto2's get_key().get_contents_as_string()
obj = client.get_object(Bucket='<bucketname>', Key='test.txt')
d = obj['Body'].read()

How boto resolves its credentials: the mechanism by which boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which boto3 searches for credentials is:

1. Passing credentials as parameters in the boto.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured

http://boto3.readthedocs.io/en/latest/guide/configuration.html#guide-configuration
|
Using this AWS CLI command (with access keys configured), I'm able to copy a key from S3 locally:

aws s3 cp s3://<bucketname>/test.txt test.txt

Using the following code in boto, I get S3ResponseError: 403 Forbidden, whether I allow boto to use configured credentials or explicitly pass it keys.

import boto
c = boto.connect_s3()
b = c.get_bucket('<bucketname>')
k = b.get_key('test.txt')
d = k.get_contents_as_string() # exception thrown here

I've seen the other SO posts about not validating the key with validate=False etc., but none of these are my issue. I get similar results when copying the key to another location in the same bucket: it succeeds with the CLI, but not with boto. I've looked at the boto source to see if it's doing anything that requires extra permissions, but nothing stands out to me. Does anyone have any suggestions? How does boto resolve its credentials?
|
Cannot read a key from S3 with boto, but can with aws cli
|
According to the JSON Schema documentation, the date-time format takes a date representation as defined by RFC 3339, section 5.6. An example looks like this: 1996-12-19T16:39:57-08:00
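For example, producing a compliant value in Python (illustrative, using the timestamp from the question):

from datetime import datetime, timezone

# RFC 3339 / ISO 8601 timestamp with an explicit UTC offset,
# e.g. '2015-10-12T10:30:00+00:00'
value = datetime(2015, 10, 12, 10, 30, 0, tzinfo=timezone.utc).isoformat()
print(value)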
|
In Amazon API Gateway, I have created a model as follows:

{
"start": {
"type": "string",
"format": "date-time"
},
"end": {
"type": "string",
"format": "date-time"
}
}

and I am returning:

{
"start": "2015-10-12 10:30:00",
"end": "2015-10-13 10:30:00"
}

but this is throwing an error in the Android SDK as follows: java.text.ParseException: Failed to parse date "2015-10-12 10:30:00"
|
What date format does Amazon Api Gateway support?
|
Your answer may be in this OpenVPN Support thread. I'm running into the same issue. From what I gather, when you're connected over VPN, public IPs and DNS names won't resolve. You can connect to other EC2 instances easily using private IPs, but the RDS instance's IP is not static, so it must be resolved using its host name. The solution apparently is to make your OpenVPN server use the Amazon DNS server, so that it can resolve the RDS instance by its host name.
|
I have an EC2 instance that runs as a VPN server. In the same VPC I have an RDS instance and another EC2 instance in a private subnet. I have devices that connect to the VPN server, and I have configured them so they can communicate with each other and with the private EC2 instance too. But I can't make them communicate with the RDS instance. I have configured the security group of the RDS to allow all inbound traffic from the security groups of both EC2 instances, and even tried to allow All Traffic from 0.0.0.0/0, and still the VPN clients can't communicate with the RDS. I see that the RDS can communicate inside the VPC but not outside it. Once upon a time, I remember and I'm sure that I was connected from my local MySQL Workbench to the RDS (3 years ago). Is there any way to make this work?
|
AWS: How to connect VPN clients to RDS (VPN server EC2 and RDS are in the same VPC)
|
It looks like the AWS Elastic Beanstalk UI now works for setting SSL as the protocol.
|
I can't switch on SSL for my load balancer in AWS Elastic Beanstalk. I add these to my configuration, as well as the certificate in the drop-down below them. When I hit add, I get this warning. The warning seems okay, as it is ignoring a deprecated value. The environment goes through its update, but when complete, the fields above are set back to off. What's wrong here? Am I doing something incorrectly? Is there an issue with the AWS Elastic Beanstalk interface?
|
AWS Elastic Beanstalk Load Balancer SSL/HTTPS not working
|
In your .htaccess file in your WordPress root directory, have you tried adding:

<IfModule mod_setenvif.c>
SetEnvIf X-Forwarded-Proto "^https$" HTTPS
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} ^http$
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</IfModule>

Also make sure that in your wp_options table you have siteurl set to https:// and home set to https://
|
I'm just starting with AWS... I have 1 EC2 instance, which is running WordPress (WordPress powered by Bitnami image from the Marketplace - https://aws.amazon.com/marketplace/pp/B007IP8BKQ). Everything works fine; I can access both the front and back-end of the WP installation running over standard HTTP. The problem starts when I connect the instance to the ELB and try to access the site via HTTPS.

I've run through the process of setting up the SSL certificate using ACM and applied the cert to the ELB. I've also got the 443 and 80 ports enabled on the security group for the EC2 instance. The port forwarding on the ELB is set up for both 443 and 80 to go to 80, and my WP config file is set up to check the HTTP_X_FORWARDED_PROTO header to stop WP getting into an infinite redirect loop. I think this is all working fine, as I can access the site via HTTPS and browse the pages. However, when I attempt to log in I am redirected to HTTP, and to solve this problem I have set the WP site URL and WP home URL to HTTPS. This then results in numerous 503 errors (503 Service Unavailable: Back-end server is at capacity). From the AWS Dashboard, I can see the instance is running just fine. I've researched quite a lot to make sure I have everything set up correctly (some threads I've come across - Link 1, Link 2) and everything seems fine (but something is obviously wrong). Any suggestions on how to get the ELB to work?
|
503 error when running WordPress on AWS EC2 ELB SSL
|
I've never deployed to Amazon (and have small experience with Spring), but I see some ways to solve it:

1. Just use Apache ZooKeeper. Create ephemeral-sequential nodes in ZooKeeper for each node. When it's the scheduled time and your node is first in that queue - remove the node from the queue, add the node again, and start your job (the Apache Curator framework has all the functionality you need to implement this in your code).
2. Use Hazelcast queues (same logic). Benefits: Hazelcast can be used in embedded mode. Minuses: I'm not sure about Hazelcast's discovery abilities on Amazon; in my experience, Hazelcast is much less stable than ZooKeeper.
3. Another way - if you have any "instance id" on each instance, you could start the job if the start time is reached AND instanceId.hashcode() % dayOfMonthNumber == 0.
|
I have a Java application which has been scheduled to run once every night at a set time. The application sends e-mails if a condition is met. All the scheduling code is in Java, and I am not using any of Amazon's features to schedule it. This application has been deployed on an EC2 instance and it sits behind an elastic load balancer. Based on the load, additional nodes can be added. My Java application gets replicated to the other nodes as well, and the nightly job executes on all instances. Is there a way by which I can make a single node execute this job? Thanks.
|
How do I run a nightly job only on one instance if it is deployed on N instances?
|
For the moment, you may have to rely on the list of public IP address ranges for AWS, allowing traffic bound for all the CIDR blocks associated with your region. Part of the design for resiliency of much of what AWS does relies on the ability of their service endpoints not to depend on static address assignments and instead to use DNS... but their service endpoints should always be on addresses associated with your region, since very few services violate their practice of strict regional separation of service infrastructure. (CloudFront, Route 53, and IAM do, maybe others, but these are provisioning endpoints, not operational ones. These provisioning-only endpoints do not need to be accessible for most applications to function normally.)
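The published ranges are available as JSON at https://ip-ranges.amazonaws.com/ip-ranges.json; a small illustrative sketch of filtering them by region (the region value is a placeholder):

import json, urllib.request

with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as resp:
    data = json.load(resp)

region = "us-east-1"  # placeholder: use your own region
cidrs = sorted({p["ip_prefix"] for p in data["prefixes"] if p["region"] == region})
print(len(cidrs), "CIDR blocks for", region)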
|
Even when I create EC2 instances in a private subnet, they must be able to send traffic to the Internet if I want to register them to an ECS cluster. I am using a NAT gateway to do this, but I still feel insecure that the instances can send private information anywhere in case of a takeover. What would be the most compact CIDR range that I can use for the instances' security group, instead of 0.0.0.0/0?
|
Securing outbound traffic rule from EC2 instances when using ECS
|
Add gem 'rack-cors' to your Gemfile and run bundle. Create a new initializer, cors.rb, and add the following code there:

Rails.application.config.middleware.insert_before 0, 'Rack::Cors' do
allow do
origins '*'
resource '*',
headers: :any,
methods: [:get]
end
end

This should fix your issue.
|
I've downloaded the rack-cors gem, followed the documentation, and tried to configure the CORS settings inside my Amazon AWS S3 bucket to accept GET requests from my site, but I still keep getting the same error in my console.

<CORSConfiguration>
<CORSRule>
<AllowedOrigin>https://[url]</AllowedOrigin>
<AllowedOrigin>http://[url]</AllowedOrigin>
<AllowedOrigin>http://localhost:3000</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

XMLHttpRequest cannot load [audio_url_from_amazon]. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin [my_website_url] is therefore not allowed access.

Any help with this would be awesome, as I've read other questions but still can't seem to get the right config working.
|
How to get CORS header for rails app to access aws s3 bucket?
|
Try to move the line if __name__ == '__main__' after application = ..., because the launcher may import your script instead of exec'ing it. In that case __name__ will not be defined as __main__. See this page for a better example.

application = connexion.App(__name__, specification_dir='./swagger/')
application.add_api('swagger.yaml', arguments={'title': 'This is a basic API fascade to the prototype development robot. The API front-ends the communication with an MQTT pub/sub topic, which uses the Amazon Web Services IoT service.'})
if __name__ == '__main__':
    application.run()
|
Pulling my hair out! I am trying to deploy a Python Flask application to AWS Elastic Beanstalk and I am getting the error: Target WSGI script '/opt/python/current/app/application.py' does not contain WSGI application 'application'. The web page is just returning a 500 Server Error. The content of my application.py is as follows:

#!/usr/bin/env python3
import connexion
if __name__ == '__main__':
    application = connexion.App(__name__, specification_dir='./swagger/')
    application.add_api('swagger.yaml', arguments={'title': 'This is a basic API fascade to the prototype development robot. The API front-ends the communication with an MQTT pub/sub topic, which uses the Amazon Web Services IoT service.'})
    application.run()

It runs fine locally, but is no good when I upload to AWS. I have changed the name from app.py to application.py and changed app = to application =, but no change. Don't know where to go next :(
|
Target WSGI script '/opt/python/current/app/application.py' does not contain WSGI application 'application'
|
$cloudSearchDomain = App::make('aws')->createClient('cloudsearchdomain', [
    'endpoint' => xxxxxxxxxxxxxxxxxxxxxxxxxxx,
]);

or

$cloudSearchDomain = Aws::createClient('cloudsearchdomain', [
    'endpoint' => xxxxxxxxxxxxxxxxxxxxxxxxxxx,
]);
|
I followed the aws-sdk-php-laravel readme.md to set up aws-sdk-php-laravel in Laravel 5.2.

In composer.json:

"require": {
"php": ">=5.5.9",
"laravel/framework": "5.2.*",
"aws/aws-sdk-php-laravel": "3.1.0"
},

then ran composer update. In config/app.php, under providers I added Aws\Laravel\AwsServiceProvider::class, under aliases I added 'Aws' => Aws\Laravel\AwsFacade::class, and then ran php artisan vendor:publish. And in one of the controllers:

<?php
namespace App\Http\Controllers;
use App\Http\Controllers\Controller;
use Aws;
$cloudSearchDomain = App::make('aws')->get('cloudsearchdomain', array('endpoint' => xxxxxxxxxxxxxxxxxxxxxxxxxxx));

I always get: Fatal error: Class 'App\App' not found. If I add use App; I get:

BadMethodCallException in Sdk.php line 178:
Unknown method: get.

But the same code works fine in Laravel 4.2. How can I fix it?
|
Aws-sdk-php-laravel get 500 error in laravel 5.2
|
Every SNS notification contains a "mail" part with the messageId (see the documentation). If you send a mail, the only response you receive is the messageId (see the documentation). So I store all the messageIds together with any information I might later need if that mail bounces or gets complaints, and if I receive a bounce I can query all that information with the given messageId.
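A small illustrative sketch in Python with boto3 (addresses are placeholders, and store_campaign_info is a hypothetical helper); the MessageId returned at send time is what the later bounce notification's mail.messageId refers to:

import json
import boto3

ses = boto3.client("ses")
resp = ses.send_email(
    Source="sender@example.com",                        # placeholder
    Destination={"ToAddresses": ["user@example.com"]},  # placeholder
    Message={
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Hello world"}},
    },
)
store_campaign_info(resp["MessageId"])  # hypothetical helper: persist messageId -> campaign data

# Later, in the SNS bounce handler:
def handle_sns_bounce(sns_message_body):
    notification = json.loads(sns_message_body)
    message_id = notification["mail"]["messageId"]  # matches the MessageId stored above
    # look up the campaign/user info stored under message_id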
|
Currently we are using Mandrill to send our emails. Whenever Mandrill detects a bounce, we get the original headers along with the bounce, or the subaccount the email was sent from. In Amazon SES we are getting the notifications through SNS, which is no problem, but besides the email address of the user we get no original information back. So we have no idea what email campaign the user bounced on, etc. Does anybody know how to handle this?
|
Amazon SES bounce/complaint handling
|
Try this command:

eb deploy

It will zip your repository, upload it to S3, and deploy it to EB. Get the CLI tool here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html
|
I am new to AWS Elastic Beanstalk. I have deployed the Parse example server using the "Deploy to AWS" button in the Parse Server Example Link. I want to update the cloud code in main.js, but I don't know how I can deploy the cloud code the way I was deploying with Parse in the terminal.
|
How to deploy cloud code on AWS hosted Parse server
|
According to the instructions in http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html, you can download the files using the following command:

scp -i key_file.pem ec2-user@ec2_IP_Address:/remote/dir/foobar.txt /local/dir
|
My current application is updating JSON files on the Amazon Elastic Beanstalk server. Is there any way to download the current files on the server or access those JSON files? I just wondered if I can access them before restructuring my server to host those files elsewhere or in a DB.
|
Download files hosted on elastic beanstalk
|
You can use the Amazon Product Advertising API to get a product's images, rank, sales rank, etc. (link). And to get order status updates you can use the Amazon MWS Subscriptions API (link). I hope this helps you.
|
1) Product - API calls: I am trying to import all my listing data into my local website, but I could not find a proper API to get all product-related information from one API call:

- Product images
- Product variations
- Product attributes

Currently I am using the "_GET_MERCHANT_LISTINGS_DATA_" report and GetMatchingProductRequest to achieve that, but I do not get all the information which I can see on the Amazon selling website. Can anyone suggest whether there is a proper API to do so?

2) Real-time order/product updates: when a new order comes into my Amazon selling account, I would like to receive a real-time order update at my given web service URL. Is that possible in Amazon? I have read the spec but did not find any relevant information on how to achieve that. Can anyone please suggest whether that is possible with the Amazon MWS API?
|
Amazon MWS API (Product and real time order update) [closed]
|
It doesn't seem to be a VPC problem, but more a problem of a package that is not in the Lambda bundle. In Python, the best way is to install your packages in your Lambda folder before compressing everything into the .zip:

$ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER

(And I don't know pypyodbc, but pymssql is working for me.)
|
My app does some HTTP requests and inserts the result into a SQL Server DB on a daily basis. SQL Server is on Amazon's RDS service, and I use the default VPC settings. When I try to use it in AWS Lambda (packaged as defined in the AWS Lambda documentation), it gives the following error:

module initialization error: 'ODBC Library is not found. Is LD_LIBRARY_PATH set?'

I use pypyodbc as the Python MSSQL module. Do I need to set up the ODBC library manually? The attached role includes the policy AWSLambdaVPCAccessExecutionRole.

Edit: I tried to use ceODBC and pyodbc; an "unable to find module" error was raised (installed in virtualenv with the ceODBC whl file, pyodbc with pip). NOTE: those two have .pyd file extensions at the root level, since they are also in the site-packages folder. I guess AWS Lambda doesn't include .pyd files while executing.

Edit 2: I followed these steps and got the same error: https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds-create-rds-mysql.html
|
Python - SQL Server AWS Lambda Integration
|
OK, so I figured it out and thought I'd put the answer up here in case it is useful to anyone else. As I suspected, the problem was the curl version. In order for the line which tells curl which TLS version to use to take effect, I needed to be on curl version 7.34 or higher:

curl_setopt($ch, CURLOPT_SSLVERSION, 6);

So how do you upgrade the curl version? Well, there was a big upgrade button on the environment's main page to upgrade the version of Linux running on my instances, so I clicked on that and it upgraded curl at the same time. I now have curl version 7.38 and it's using TLS v1.2 as I wanted.
|
I use the PHP function below to contact an outside API with curl:

function api_post($url, $data = array()) {
global $api_key;
global $password;
$data = json_encode($data);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_SSLVERSION, 6);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);
curl_setopt($ch, CURLOPT_MAXREDIRS, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
'Content-Type: application/json',
'Accept: application/json'
));
curl_setopt($ch, CURLOPT_USERPWD, $api_key . ':' . $password);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
$response = curl_exec($ch);
return $response;
}

The API that I'm communicating with is about to insist on using TLS v1.2, which is a good thing, except that for some reason my code is using version 1.0. It is fine if I do it from my local server, but on the production server (an Amazon Web Services EC2 instance on AWS Elastic Beanstalk) it is not. I guess it has something to do with my server setup, but I have no idea what or how to fix it. Here is the curl section from my phpinfo. Maybe I need to upgrade it or something? But how would I do this?
|
Using TLS v1.2 for curl on Amazon AWS
|
Not sure when this changed, but currently you can trigger TLS enforcement via Configuration Sets, as mentioned here. You configure which configuration set to use per email, for example by adding a custom header to your email when sending via SMTP:

X-SES-CONFIGURATION-SET: [NameOfConfigSet]
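A minimal sketch of the same idea using the SES API from Python/boto3 (the configuration set name and addresses are placeholders; the configuration set itself would be the one configured to require TLS):

import boto3

ses = boto3.client("ses")
ses.send_email(
    Source="sender@example.com",                        # placeholder
    Destination={"ToAddresses": ["user@example.com"]},  # placeholder
    Message={
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Sent only over TLS"}},
    },
    ConfigurationSetName="require-tls-config-set",      # placeholder config set with TLS required
)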
|
I am sending transactional emails using AWS SES and I want them to be transferred over a secure connection only. By this I mean that if the receiving server does not support TLS, I do not want the email to be sent at all. The FAQ on SES states:

Q: Does Amazon SES send email over an encrypted connection using Transport Layer Security (TLS)?
A: Yes. If the receiving mail server advertises the STARTTLS extension, Amazon SES will attempt to upgrade the connection to a TLS connection. If that fails, Amazon SES will fall back to plain text. (see here)

Is there any way to avoid sending emails over non-secure connections?
|
AWS SES : Force emails over TLS and fail if TLS not supported
|
For EC2 you need to install the AWS plugin that is provided by Elasticsearch. Once the plugin is installed, you need to generate a secret key and access key from EC2. This link should help you generate the required keys. After this you need to configure the following settings in your elasticsearch.yml:

network.host: ec2
network.publish_host: ""
discovery.type: ec2
cloud.aws.access_key: <your-access-key>
cloud.aws.secret_key: <your-secret-key>
discovery.ec2.groups: <your-security-group>
discovery.ec2.host_type: "public_ip"
discovery.ec2.ping_timeout: "10s"

This should get you up and running. Connect from your client using the IP address of your machine.
|
I have Elasticsearch running on EC2 (Fedora). I am unable to connect externally using the public IP or hostname. Elasticsearch starts correctly and I can access it locally on the machine using curl -XGET http://localhost:9200:

{
"name" : "Prodigy",
"cluster_name" : "awstutorialseries",
"version" : {
"number" : "2.1.0",
"build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
"build_timestamp" : "2015-11-18T22:40:03Z",
"build_snapshot" : false,
"lucene_version" : "5.3.1"
},
"tagline" : "You Know, for Search"
}

And I did follow all the steps explained here (elasticsearch on EC2 cannot hit public IP (timeout)), like:

- Do what TJ said in his comment, + restart the instance. I wasn't sure if this was/is necessary, but I did it for good measure.
- I made sure that the following is set in the elasticsearch.yml file: a. http.enabled: true, b. http.cors.enabled: true, c. http.cors.allow-origin: "*"
- Restarted Elasticsearch (service elasticsearch restart)

I can connect to Kibana, but I can't connect to Elasticsearch. My inbound and outbound rules are wide open for this instance. This is my elasticsearch.yml file:
|
Unable to connect to Elasticsearch through aws public IP
|
Replace the second "Resource": "*" with "Resource": "arn:aws:ec2:*:*:instance/*", so that the Deny (and its ec2:InstanceType condition) applies only to the instance resource and not to the other resources (AMI, subnet, ENI, volume, etc.) that RunInstances also touches.
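Put differently, a sketch of the corrected Deny statement (the Allow statement and the rest of the policy stay exactly as in the question):

{
  "Effect": "Deny",
  "Action": "ec2:RunInstances",
  "Resource": "arn:aws:ec2:*:*:instance/*",
  "Condition": {
    "StringNotEquals": {
      "ec2:InstanceType": [
        "t2.micro",
        "t2.small"
      ]
    }
  }
}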
|
I am trying to restrict AWS IAM users to only be able to start t2.micro and t2.small instances. I've applied the following permissions, but I am unable to start any instances with this configuration.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": "ec2:RunInstances",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"ec2:InstanceType": [
"t2.micro",
"t2.small"
]
}
}
}
]
}

I expected this to give me full permission and then deny if the type isn't t2.micro or t2.small; however, it seems to deny everything.
|
Restrict aws IAM users to certain EC2 instance types
|
It seems that when you log from Lambda it turns everything into a string. It may have something to do with adding the request time and ID to each item.
|
I have an AWS Lambda function which is logging errors. Errors are logged as such:

console.error(err);

I'm trying to create a CloudWatch filter which uses their JSON log filtering syntax:

{ $.errorType = "ValidationException" }

I can see the error in the log:

2015-11-24T20:26:02.852Z 76800706-2d78-45ed-9068-46ccccafe6af
{
"errorMessage": "1 validation error detected: Value '[]' at 'xxxxxx' failed to satisfy constraint: Member must have length greater than or equal to 1",
"errorType": "ValidationException",
"stackTrace": [
...etc...
]
}

Is there some sort of special setup or manual logging into CloudWatch required to support the JSON filter syntax? I cannot find any info in the CloudWatch docs.

Docs:
http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-logging.html
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/FilterAndPatternSyntax.html#d0e19372
|
Amazon Cloudwatch log filtering - JSON syntax
|
Put your environment variables in a file:

export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=

Save the file as ~/.vars on the remote host, and then in your playbook:

- name: Describe instances
  shell: source ~/.vars && aws ec2 describe-instances --region us-east-2

(The shell module is used here because source and && need a shell, which the command module does not provide.) For security you could delete the file after the run and copy it again in the next play.
|
The AWS CLI command tasks in Ansible playbooks work fine from the command line if AWS credentials are specified as environment variables as per the boto requirements. More info can be found here: Environment Variables.

But they fail to run in Tower because it exports another set of environment variables:

AWS_ACCESS_KEY
AWS_SECRET_KEY

In order to make them work in Tower, just add the below to the task definition:

  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"

e.g. this task:

- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1

will transform to:

- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"

NOTE: This only injects the environment variables for the specific task - not the whole playbook! So you have to amend every AWS CLI task this way.
|
How to run AWS CLI command tasks in Ansible Tower
|
I faced this same problem using my own Docker-based configuration, and the change that cleared it up for me was to add client_max_body_size 20M; at every level in the nginx.conf file for my container's nginx. However, my nginx.conf was quite a bit more elaborate than yours; I don't understand how yours can work with only an http clause. Here is what my nginx.conf looks like:

upstream myapp {
server unix:///var/run/myapp.sock;
}
client_max_body_size 20M;
server {
listen 80;
server_name mayapp.com;
# path for static files
root /usr/src/app/public;
location / {
try_files $uri @proxy;
client_max_body_size 20M;
}
location @proxy {
proxy_pass http://myapp;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
client_max_body_size 20M;
}
client_max_body_size 20M;
}
|
I have a Rails/Postgres app hosted on AWS Elastic Beanstalk. One form posting data to my app also allows users to select multiple photos, in which case the photos are directly uploaded to Amazon S3 using Carrierwave in the same request. While it works in development, it throws a '413 Request Entity Too Large' error in production. I've tried configuring my app with some of the suggestions in related Stack Overflow posts to increase the max body size of the request, but nothing seems to be working. Not sure if I should be using the container commands at all either; no idea what that's doing.

.ebextensions/01_files.config:
container_commands:
01_reload_nginx:
command: "service nginx reload"
files:
"/etc/nginx/conf.d/proxy.conf" :
mode: "000755"
owner: root
group: root
content: |
http {
client_max_body_size 20M;
}
|
AWS Elastic Beanstalk, Rails, Carrierwave- 413 request entity too large
|
See also https://stackoverflow.com/a/6927400/122441

As an example, a medium-sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second x $0.10 per million I/O).
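As a back-of-the-envelope example for the table in the question (this assumes Aurora counts read I/O in 16 KB page units and that nothing is served from the buffer cache - both assumptions worth checking against current AWS pricing docs):

# rough, illustrative cost estimate for one uncached full table scan
table_bytes = 1.5 * 1024**3          # 1.5 GB table from the question
page_bytes = 16 * 1024               # assumed I/O unit
price_per_million_io = 0.20          # $0.20 per 1 million requests (from the question)

ios = table_bytes / page_bytes       # ~98,000 I/Os
cost = ios / 1_000_000 * price_per_million_io
print(f"{ios:,.0f} I/Os, ~${cost:.4f} per uncached full scan")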
|
I have a table in MySQL at the moment: 7.3 million rows, 1.5 GB in size if I run the query from "How to get the sizes of the tables of a mysql database?". I'm trying to get a handle on what a full table scan of that in AWS Aurora would cost me. AWS lists it as: I/O Rate - $0.200 per 1 million requests. But how do I possibly translate that into "what will this cost me"?
|
AWS Aurora IOPS Cost
|
The current execution context is not aware of Python's environment preferences. All you have to do is assign the PYTHONPATH environment variable before you execute the awscli command. Example:

export PYTHONPATH=$PYTHONPATH:/home/ubuntu/.local/lib/python2.7/site-packages
# For example, list files from your bucket
aws s3 ls s3://mybucket --recursive

In order to set the correct path for PYTHONPATH you need to check where the Python packages are installed on your computer/server. The above example is from my Ubuntu 16.04 server, where Python 2.7 was installed by compiling Python's source code. Depending on how Python is installed, you should search for one of the folders site-packages or dist-packages, which contains the list of installed Python packages. Also, on another server I found that the required packages are in the following location:

export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/dist-packages
|
On a Linux server:

File "/usr/local/bin/aws", line 19, in <module>
import awscli.clidriver
ImportError: No module named awscli.clidriver

Any idea why this could have started happening?
|
AWS cli started failing randomly today...?
|
I have seen this issue as well. They know about it and they are working on a fix (it has been a while though): https://forums.aws.amazon.com/message.jspa?messageID=678379

As noted in the comments, it works using the CLI:

eb create --database --database.engine mysql --region eu-west-1

You can also use eb config to set up a default database.
|
I'm having problems recently creating a new Amazon RDS database (MySQL) and associating it with an Amazon Elastic Beanstalk environment. I have done this painlessly in the past for other environments (simply by going to the environment's Configuration tab -> Data tier -> "create a new RDS database" -> entering details as needed -> pressing "Save"). However, there is now a section at the end requiring me to "Select the subnets for RDS instances in your Availability Zone". I have to tick both of the two subnets detected (which are contained in the default VPC) because I'm required to have a subnet in at least two Availability Zones (despite having selected Single Availability Zone - if that's relevant). When I click "Apply" I get the error message:

"DBSubnets: Invalid option value: '' (Namespace: 'aws:ec2:vpc', OptionName: 'DBSubnets'): Specify the VPC ID and make sure all subnets exist."

Any ideas about what I've done wrong? I'm unsure where exactly I'm meant to specify the VPC ID, and why I even have to. Sorry if I've misunderstood something - I'm fairly new to this stuff. Thanks in advance for any help.
|
Recently unable to create RDS database (and associate with Beanstalk environment)
|
I think there is one way to get a pre-signed URL for all items: by creating an array of multiple getCommands. getCommand can handle multiple commands, and then you can use the toArray() function of Aws\CommandInterface to convert it into an array. The createPresignedRequest() function does not support multiple requests, so either you have to call it repeatedly or you need to use getObject().
|
I am trying to get a pre-signed URL for all objects in a bucket. I am using the Amazon PHP SDK version 3. What I have tried:

$client = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-west-2',
'credentials.ini' => [
'key' => $credentials['key'],
'secret' => $credentials['secret'],
],
]);
$client->listObjects(['Bucket' => $bucketName]);

The above gets me all objects in an ArrayAccess object, but it contains object URLs like https://s3-us-west-2.amazonaws.com/some-demo/one2.txt, and I don't want everyone to have access to one2.txt, so I have created a pre-signed URL with:

$cmd = $client->getCommand('GetObject', [
'Bucket' => $bucket,
'Key' => $key
]);
$request = $client->createPresignedRequest($cmd, '+20 minutes');
$presignedUrl = (string) $request->getUri();
echo $presignedUrl;

Now I am getting a URL with a token:

https://s3-us-west-2.amazonaws.com/some-demo/one2.txt?X-Amz-Content-Sha256=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJUZQHGPBTNOLEUXQ%2F20150828%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20150828T090256Z&X-Amz-SignedHeaders=Host&X-Amz-Expires=1200&X-Amz-Signature=77e52cf99c0f438d48851193dbaba0fsdfe1b4d8e604d6sdf11a22b3be45e410168ab81

which is exactly what I want. But now my question is: how do I get a pre-signed URL for all items in the bucket, rather than making one for each item one by one?
|
Get presigned url of all objects in bucket amazon aws s3
|
I can't find a way to do this. I would do something like the following:

create table viewsql as
select table_name, view_definition
from information_schema.views;

drop table [table you want to change] cascade;

do $$
declare
  rec record;
  lv_sql text;
begin
  for rec in select table_name, view_definition from viewsql loop
    -- rebuild each dependent view from the saved definition
    lv_sql := 'create view ' || rec.table_name || ' as ' || rec.view_definition;
    execute lv_sql;
  end loop;
end $$;
|
Is there any way at all to drop a table in PostgreSQL ignoring dependencies (not using CASCADE)? I'm attempting to drop and recreate a table in order to add an IDENTITY column (as there seems to be no other way to do this in AWS Redshift), but I've got views that are dependent on the table. I obviously don't want to have to temporarily modify every dependent view just so that I can drop and recreate the same table with an added column.
|
Drop table ignoring dependencies in PostgreSQL?
|
Error 405 stands for "MethodNotAllowed", where the specified method is not allowed against this resource. Since you have mentioned that the main server successfully sends the messages to SQS (you have verified it via the console), I will provide a solution to implement a worker thread. This was taken from this repository on GitHub; have a look at the worker.php file.

$queue = new Queue(QUEUE_NAME, unserialize(AWS_CREDENTIALS));
// Continuously poll queue for new messages and process them.
while (true) {
$message = $queue->receive();
if ($message) {
try {
$message->process();
$queue->delete($message);
} catch (Exception $e) {
$queue->release($message);
echo $e->getMessage();
}
} else {
// Wait 20 seconds if no jobs in queue to minimise requests to AWS API
sleep(20);
}
}
|
I'm trying to set up a Laravel 4.2 queue using AWS SQS and an EB worker environment. I'm pushing the job into the queue from another server and I want the worker environment to execute it. But so far it looks like the worker tries to execute it and, for some reason, gets a 405 error in the access log... I'm trying to get very simple test code working. On the worker environment I have a pretty much clean Laravel installation with just the queue config and this class:

class TestQueue {
public function fire($job, $data)
{
File::append(storage_path().'/sqs_push.txt', $data['date']);
$job->delete();
}
}

Now on the main server, from where I want to push, I have this:

public function getTestQueue(){
$data = ['date' => date('Y-m-d H:i:s')];
$queue = \Queue::push('TestQueue', $data);
var_dump($queue);
}

On the worker I have launched php artisan queue:listen. When I run that method, it adds the job to the SQS queue (I can see it in the SQS console) and the worker tries to execute it, but all I see are 405 errors in the access logs... Maybe I'm doing something wrong in my queue setup? Can anyone help me, please?
|
Laravel 4.2 AWS SQS queue setup using EB worker environment
|
In short:

# generate a data key
$KMSKeyS3 = New-KMSDataKey -KeyId $KMSKeySource -KeySpec AES_256 -Region "ap-southeast-2"
[byte[]]$plaintextDataKey = $KMSKeyS3.Plaintext.ToArray()
[byte[]]$encryptedDataKey = $KMSKeyS3.CiphertextBlob.ToArray()
[string]$encryptedDatakeyBase64 = $([Convert]::ToBase64String($encryptedDataKey))

See this answer to a question on PowerShell and KMS for a comprehensive answer, including tested encryption and decryption scripts and base64 conversion.
|
I am using an AWS PowerShell cmdlet, New-KMSDataKey, that creates a System.IO.MemoryStream containing an encryption key that I need to use to encrypt some files. This is the documentation for the command: http://docs.aws.amazon.com/powershell/latest/reference/items/New-KMSDataKey.html. And this is the object that is returned by that cmdlet: http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TKeyManagementServiceGenerateDataKeyResult_NET3_5.html. I am trying to get the Plaintext property. How can I access the System.IO.MemoryStream to get the key? This is my script sample:

$KMSKeyS3 = New-KMSDataKey -KeyId $KMSKeySource -KeySpec AES_256 -Region "ap-southeast-2"

This gives me:

CiphertextBlob KeyId Plaintext
-------------- ----- ---------
System.IO.MemoryStream arn:aws:kms:ap-southeast-2:<Customer>:key/<Key> System.IO.MemoryStream
|
How can I read from the memory stream of the "Plaintext" property returned by New-KMSDataKey?
|
Have you tried adding a CNAME alias for your CloudFront domain? After setting up the CNAME alias, you can set the cookies on the base domain, and then you will be able to pass your cookie. Let's add more detail in case people want to know what the next steps are, using the following example:

- You are developing on my.fancy.site.mydomain.com
- Your CloudFront CNAME alias is content.mydomain.com
- Make sure you set your CloudFront signed cookies to .mydomain.com from your app

From this point on, you are able to pass the cookie to CloudFront. One quick way to test whether your cookie is set appropriately: take the asset URL and put it in the browser directly. If the cookie is set correctly, you will be able to access the file directly. If you are using JavaScript to get the CDN assets, make sure your JS code passes the withCredentials option, or it won't work. For example, if you are using jQuery, you will need something like the following:

$.ajax({
url: a_cross_domain_url,
xhrFields: {
withCredentials: true
}
});

And if the request is successful, you should get a response header from CloudFront with "Access-Control-blah-blah". Hope this helps people who find this answer.
|
I am trying to build an app where users upload content from their browsers to an S3 bucket through CloudFront. I have enabled CORS on the S3 bucket and ensured that AllowedOrigin is set to *. I can successfully push content from a browser to the S3 bucket directly, so I know that CORS on S3 is configured correctly. Now I am trying to do the same with browser -> CloudFront -> S3. CloudFront always rejects the pre-flight OPTIONS method request with a 403 Forbidden response. I have the following options enabled on CloudFront:

- Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
- Whitelisted Headers: Access-Control-Request-Headers, Access-Control-Request-Method, Origin
- OPTIONS requests are disabled from the "Cached HTTP Methods"

CloudFront apparently now supports CORS, but has anyone got it working for an HTTP OPTIONS method request? I tried asking this on the AWS forums but got no responses.
|
AWS CloudFront CORS Support
|
Create an EC2 instance, download the files to that instance, and upload them to S3.
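As an illustration of that relay approach (the FTP host, credentials, file name, and bucket are placeholders), something like this could run on the EC2 instance so the data never touches your own machine:

import ftplib
import boto3

s3 = boto3.client("s3")
ftp = ftplib.FTP("ftp.example.com")     # placeholder FTP host
ftp.login("user", "password")           # placeholder credentials

filename = "bigfile.bin"                # placeholder file
with open(filename, "wb") as f:
    ftp.retrbinary("RETR " + filename, f.write)  # download onto the instance's disk

s3.upload_file(filename, "my-target-bucket", filename)  # placeholder bucket
ftp.quit()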
|
I want to copy 100-200 GB of files from a secure FTP server (here, an FTP server with a username and password is a "secure FTP server" for me) to an AWS S3 bucket. Obviously, I don't want to download and re-upload the files manually. So I looked into FTP2Cloud, but it is in a beta phase and the limit is only 100 MB. I've also looked into s3cmd, but I couldn't figure out a way to connect to a secure FTP server. So I'm stuck on transferring the files. Can somebody help me transfer files from FTP to S3 without explicitly downloading and uploading the data?
|
What is the best way to transfer file from FTP to AWS s3?
|
CloudFormation + Elastic Beanstalk works great.
|
I am researching ways to deploy a Ruby application to an AWS Auto Scaling group, and I'm having a hard time deciding which way is best and finding good content about it. I have looked into CodeDeploy, Elastic Beanstalk, CloudFormation, Capistrano, Chef and some others, and combinations of some of them. Personally, I didn't want to use Chef or anything that needs much maintenance time. Currently I am using Dokku on EC2, but I need to build a more scalable and elastic solution for a new project. What would be the best suggestion and study material?
|
Rails - Best deployment setup with AWS Auto Scaling
|
If you're certain your software's performance will scale up with higher-frequency cores, then you'd want to look for instance types containing the z classifier. As of October 2021 these are the M5zn and the z1d instance types.
|
We are currently doing a scientific project that requires us to do some (relatively) heavy numerical simulations. The issue is that we are required to use a Windows-based program that only supports single threading. Initially, we were under the assumption that a memory-optimized instance would be more cost-effective. Subsequently, we found out that the program doesn't utilize anything more than around 2 GB of memory. Looking at the table of compute-optimized instances on offer, does ECU refer to the overall computational performance? Or is it an indicator of performance for a single vCPU/core? Since the program doesn't run on multiple threads (and by extension multiple cores), this is an important question for us.
|
Optimal AWS EC2 instance-type for single-threaded Windows program
|
Use the Amazon Redshift JDBC driver: http://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html
|
I'm querying a Redshift cluster using JDBC. The query has a single parameter:

select * from table_name bc where bc.column_name ~ ? limit 10

I'm supplying the parameter using stmt.setString(1, "expected_value"); The query execution fails with an exception:

org.postgresql.util.PSQLException: ERROR: The pattern must be a valid UTF-8 literal character expression
Detail:
-----------------------------------------------
error: The pattern must be a valid UTF-8 literal character expression
code: 8001
context:
query: 496280
location: cgx_impl.cpp:1902
process: padbmaster [pid=4192]
-----------------------------------------------Executing a query without a parameter works fine:select * from table_name bc where bc.column_name 'expected_value' ? limit 10Any ideas what might cause the exception?
|
AWS Redshift Posix pattern matching
|
There is a difference. In Amazon EC2, Elastic Load Balancing provides a special Amazon EC2 source security group that you can use to ensure that a back-end Amazon EC2 instance receives traffic only from the ELB load balancers.
Regarding your other question: you cannot (as far as I know) point an ELB at a VPC directly.
An internet-facing ELB can only be placed in a public subnet, and from there it forwards traffic to instances in the private subnet. I suggest you read more here: Elastic Load Balancing
|
I have created a CNAME using Route 53 for an ELB (with 2 VPC instances added to it). I verified the CNAME with http://mxtoolbox.com and it looks fine. Also, nslookup -q= CNAME.MYDOMAIN shows my CNAME and address fine.My problem is that CNAME.MYDOMAIN is not loading in a web browser, whereas the same setup works for an ELB (with EC2-Classic instances) and loads in the browser.Is there any difference in CNAME setup between an ELB with VPC instances and an ELB with EC2-Classic instances?
|
CNAME for ELB (with VPC instance) is not working
|
I've encountered an issue where notebooks wouldn't save on an nbserver on an AWS EC2 instance I set up in a similar manner via a different tutorial. It turns out I had to refresh and re-login using the password, because my browser would automatically log out after a certain period. It might help if you close and re-attempt to go to the nbserver and see if it asks you to re-login. Here are a few other things you can try: copy a problematic notebook onto the server (scp) and try to open and save it, as opposed to going through a repo pull, to see if anything changes; check if the hanging "saving notebook" message appears for notebooks in certain directories; check the ipython console messages when you save a problematic notebook and see if anything there helps you pinpoint the issue.
|
I'm running a remote IPython notebook server on an EC2 instance on AWS. The instance is running Ubuntu.
Followedthistutorial to set up, and everything seems to work - I can access the notebook via https with a password and run code.However, I can't seem to save changes to the notebook - It says "saving notebook" and then nothing happens (i.e, still written 'unsaved changes' on top).Any ideas would be greatly appreciated.Edit: It's not a permissions problem, since running in sudo doesn't help.
When creating a new notebook in the remote server, I am able to save. Problem only occurs for notebooks pulled from my git repository. Also, when opening a problematic notebook, and deleting all cells until it's absolutely empty, I can sometimes (!) save the empty notebook, and sometimes (!!) I still can't.
|
Ipython notebook remote server on AWS
|
Try the following:
config/routes.rb
Rails.application.routes.draw do
  root 'home#index'
end
Remove or comment out all definitions of '/' (including get '/', match '/', etc.)
app/controllers/home_controller.rb
class HomeController < ApplicationController
  def index
    render 'index'
  end
end
app/views/home/index.html.erb
<h1>HELLO WORLD.</h1>
And make sure to delete public/index.html.
|
I put up my Rails application on AWS Elastic Beanstalk through Amazon's eb tool.
On Elastic Beanstalk, I'm using its default load balancer, and am running Ubuntu 64-bit with Ruby 2.0. I'm getting two major problems:
1) The root route isn't working. In my config/routes.rb, I tried:
root 'controller#action'
root :to => 'controller#action'
root to: 'controller#action'
and found none of them working. The server was giving me an error saying: Invalid route name, already in use: 'root' (ArgumentError). I guessed that there was some kind of clash between Rails' default root => public/index.html and my own routing in config/routes.rb? So I created public/index.html and the root URL '/' now serves public/index.html. I want to figure out a way to make it work the 'Rails' way, with the root URL routing to controller#action.
2) Static assets are not being served. In my layouts/application.html.erb file, I have the Rails default stylesheet_link_tag and javascript_include_tag helpers (with "data-turbolinks-track" => true).
However, when I fire up the Rails app on Elastic Beanstalk in the production environment, I get: http://myurl.com/javascripts/application.js 404 (Not found) and http://myurl.com/stylesheets/application.css 404 (Not found). Interestingly, assets in public/images get served correctly. Does anyone know the solutions for these problems? Thank you in advance!
========================= Edit ===========================
I'm using Amazon 64bit Linux with Passenger Standalone
|
Deploying Rails on AWS elastic beanstalk - Static Asset routing not working
|
As stated here, you should specify a subnet when creating an auto-scaling group. And though it is not stated that you have to have a default VPC for creating a launch configuration, I would suggest reading this, particularly these lines: "If your AWS account comes with a default VPC and if you want to create your Auto Scaling group in default VPC, follow the instructions in ..." So you just need to create the auto-scaling group in the desired subnet and use your launch configuration for this group.
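For illustration, a rough sketch of that last step with the same boto library the question uses; the group name and subnet ID are placeholders, and the exact constructor arguments should be checked against your boto version:
# Sketch: create the auto-scaling group in an explicit subnet of the
# target VPC, so no default VPC is required.
from boto.ec2.autoscale import AutoScalingGroup

group = AutoScalingGroup(
    group_name='my-asg',                  # placeholder
    launch_config=lc,                     # the LaunchConfiguration built above
    min_size=1,
    max_size=2,
    desired_capacity=1,
    vpc_zone_identifier='subnet-cxxxxx8', # subnet(s) in the VPC you want
)
autoscale_conn.create_auto_scaling_group(group)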
|
I keep getting this error returned from my boto create_launch_configuration() cmd wrapped in a fabric task.This is the cmd:if user_data != '':
security_groups=list('sg-d73fc5b2')
print "Trying to use this AMI [%s]" % image_ami
lc = LaunchConfiguration(
name=launch_config_name,
image_id=image_ami,
key_name=env.aws_key_name,
security_groups=security_groups,
instance_type=instance_type
)
launch_config = autoscale_conn.create_launch_configuration(lc)and this is the response<ErrorResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<Error>
<Type>Sender</Type>
<Code>ValidationError</Code>
<Message>No default VPC for this user</Message>
</Error>
<RequestId>4371fa63-e008-11e3-8554-ff532bce5053</RequestId>
</ErrorResponse>We disabled the default VPC in order to try and minimise mistakes being applied to a VPC via API calls. We have several VPC's running from the same account and it would be useful to be able to specify the VPC via boto.Has anyone any idea how I can set this default VPC on a per task basis?
|
Boto create launch configuration in different VPC with fabric and boto
|
I've had exactly this issue! The problem is that the Remove instance when < 2000000 criteria is firing when you only have one instance. If your instance goes above 2000000 and then back down below 2000000, EB will terminate it and launch another one. Disable the autoscaling action for that alarm and your problem will go away. With regards to your second issue - how long are you waiting for the newly built instance to become available? I have noticed that new instances will be added to the autoscaling group as soon as they are ready, but before EB has finished deploying your application. In your single-instance situation, that's especially bad since there are no valid servers for a few minutes.
|
I'm running into two distinct, but related, issues with a Rails app (Ruby 1.9.3) I have deployed on AWS' Elastic Beanstalk. I have the following autoscaling config applied. I believe it is the default.Environment type: Load balanced, auto scalingNumber instances: 1 - 4Scale based on Average network outAdd instance when > 6000000Remove instance when < 2000000Issue #1 - My app doesn't get very much traffic yet and only requires 1 EC2 instance (m1.medium). I get several "ElasticBeanstalk Default Scale Down alarm" emails from AWS each week. Most of the time, I check my app after receiving one, and it's fine; however, about once a month, I check my app after receiving the email and find the nginx 404 page. EB has terminated my EC2 instance - the only one running my app - and generated a new one. Why is it scaling down from 1 to 0? This has happened to me with consistency for the past 6 months. Has anyone else experienced this? Found a solution?Issue #2 - When the above situation happens, EB creates a new EC2 instance for me. But, I continue to get the nginx 404 page until I re-deploy - which is a manual task and seems to defeat the purpose ofautoscaling. Does EB require a re-deploy after autoscaling occurs? Shouldn't it automatically deploy the current/latest version of my app to the new EC2 instance(s)?Any help/advice is greatly appreciated!
|
AWS Elastic Beanstalk Rails app autoscaling issues
|
You cannot get canonical IDs via SNS, but you can get them yourself via the Google GCM REST API. See this: GCM with PHP (Google Cloud Messaging). The result will contain the canonical IDs.
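As a rough illustration of that REST call (this targets the legacy GCM HTTP endpoint the linked post describes; the API key and registration ID are placeholders):
import json
import requests

GCM_URL = "https://android.googleapis.com/gcm/send"
API_KEY = "YOUR_SERVER_API_KEY"                     # placeholder

payload = {
    "registration_ids": ["OLD_REGISTRATION_ID"],    # placeholder
    "data": {"message": "ping"},
}
resp = requests.post(
    GCM_URL,
    data=json.dumps(payload),
    headers={
        "Authorization": "key=" + API_KEY,
        "Content-Type": "application/json",
    },
)
for result in resp.json().get("results", []):
    # When GCM returns a registration_id here, it is the canonical ID
    # that should replace the one you sent.
    if "registration_id" in result:
        print("canonical id:", result["registration_id"])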
|
I was going through Google Cloud Messaging documentation and I came across this section:http://developer.android.com/google/gcm/adv.html#canonicalCanonical IDsIf later on you try to send a message using a different registration ID, GCM will process the request as usual, but it will include the canonical registration ID in the registration_id field of the response. Make sure to replace the registration ID stored in your server with this canonical ID, as eventually the ID you're using will stop working.How do we handle the Canonical Id update while using Amazon SNS Endpoint ANRs?I checked the Amazon API documentation forCreatePlatformEndpoint:http://docs.aws.amazon.com/sns/latest/api/API_CreatePlatformEndpoint.htmlandPublish:http://docs.aws.amazon.com/sns/latest/api/API_Publish.htmlPlease suggest. Thanks!
|
Amazon SNS: Does it handle Google Cloud Messaging Canonical Ids?
|
Had the exact same problem, and it turned out to be the permissions set on the files when copying them over. I was using the PHP library provided by Amazon, and when calling the copy method the permissions needed to be set to "bucket-owner-full-control". That worked for me. You are probably copying your stuff in a different manner, but maybe this is the path to follow. I started to find the solution reading another question.
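The same idea in Python, in case you are not on the PHP SDK (bucket and key names are placeholders); the key point is the ACL on the copy:
# Copy into a bucket owned by another account while granting the
# destination bucket owner full control of the new object.
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="destination-bucket",                               # placeholder
    Key="path/to/object",                                      # placeholder
    CopySource={"Bucket": "source-bucket", "Key": "path/to/object"},
    ACL="bucket-owner-full-control",
)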
|
i recently transferred a couple of s3 buckets to a different account with s3cmd from the master :(now i cant access any of the files transferred to these buckets since there is no way i can add permissions to these transferred files. when i try to add permissions to these files i get. Sorry! You were denied access to do that even when I'm the admin!no way to add permissions to files :https://i.stack.imgur.com/xOB8W.jpgI have tried to add everyone permission in the bucket itself but all in vain.I'll appreciate if anyone can help me to retrieve these files.
|
aws s3 bucket files locked out, cant add permissions
|
Answer from original poster @user1010900: Got the answer for now: "EMR currently cannot use an IAM role assigned to the EC2 instance launching the EMR job." Ref: https://forums.aws.amazon.com/thread.jspa?messageID=531826 AWS EMR does not support AWS STS now. Ref: http://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html Thanks
|
When I try to create an EMR machine with boto from an already created EC2 machine with role (having almost all authorities) it gets failed with error "Access denied checking jar: s3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar".My question is what are the general steps to follow to run a boto script with IAM role from an EC2 machine so that it can create an EMR machine?Thanks!
|
Creating EMR machine from an EC2 machine with boto (with IAM role) gets failed
|
I struggled for a bit with the same question and I think I finally got an answer:
1. Create a new SNS topic.
2. Create an AWS Lambda function that launches the deploy for you on whatever you want, using the JavaScript AWS SDK. So you can get the idea:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1'; // OpsWorks only uses this one for Stacks in any region
var opsworks = new AWS.OpsWorks();
opsworks.describeStacks({}, function(err, data) {
  if (err) { return context.fail(err); }
  console.log(data);
  context.succeed(data); // succeed with the API response (the original snippet referenced an undefined variable)
});
3. Assign the required policies to this Lambda function to allow whatever you call from the AWS API.
4. Set GitHub to send the notification to the SNS topic rather than calling OpsWorks directly.
|
How do I limit an app to just be deployed to one layer per default in AWS Opsworks?I have set up a webhook from Github to automatically deploy my app to Opsworks but the app is deployed to all my layers when it should only be deployed to one layer.
|
How to limit an app to one type of layer in AWS Opsworks?
|
Answering my own question: I found my mistake. I should be passing the URI of the S3 folder path to the FileSystem object, like below:
FileSystem fileSystem = FileSystem.get(URI.create(otherArgs[1]),conf);
FSDataOutputStream fsDataOutputStream = fileSystem.create(new Path(otherArgs[1]+"//Result.txt"));
PrintWriter writer = new PrintWriter(fsDataOutputStream);
writer.write("\n Average Delay:"+averageDelay);
writer.close();
fsDataOutputStream.close();
|
Is there any way in which I can write to a file from my Java jar to an S3 folder where my reduce files would be written ? I have tried something like:FileSystem fs = FileSystem.get(conf);
FSDataOutputStream FS = fs.create(new Path("S3 folder output path"+"//Result.txt"));
PrintWriter writer = new PrintWriter(FS);
writer.write(averageDelay.toString());
writer.close();
FS.close();Here Result.txt is the new file which I would want to write.
|
Writing to a file in S3 from jar on EMR on AWS
|
You can pass a merchant ID in the Product Advertising API's ItemSearch operation: https://docs.aws.amazon.com/AWSECommerceService/latest/DG/ItemSearch.html
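For illustration only, a sketch of such an ItemSearch call using the third-party bottlenose helper; the credentials and merchant ID are placeholders, and the MerchantId parameter should be verified against the linked ItemSearch documentation:
import bottlenose

amazon = bottlenose.Amazon(
    "AWS_ACCESS_KEY_ID",        # placeholder
    "AWS_SECRET_ACCESS_KEY",    # placeholder
    "ASSOCIATE_TAG",            # placeholder
)
# Returns raw XML; parse it with xml.etree.ElementTree or similar.
xml_response = amazon.ItemSearch(
    SearchIndex="All",
    MerchantId="A1XXXXXXXXXXXX",   # the seller/merchant ID you want (placeholder)
    ResponseGroup="ItemAttributes,Offers",
)
print(xml_response[:500])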
|
I am using searching the whole available internet resources for getting all products of any specific amazon seller/merchant. I have used almost every available Api of Amazon, MWS Reports, MWS Products, Product Advertising Api.My problem with Each Api.Product Advertising Api of Amazon does return a generic search results and no MerchantId specific results.MWS Products Api also gives results generally and not based on specif MerchantId.MWS Report Api is creating Report for only my MerchantId (The Merchant who created the AWS key, secret). and not return any results for any other merchant.Once again. I want to get all products from any Amazon Merchant Store based on Merchant Id.This is my exact competitorAmazon store Facebook Page tab appAt least tell me how they get all products of a specific Merchant.
|
How to get all products of any specific Amazon merchant/seller?
|
When I've run into this in the past it was due to a DNS issue: the new machine was not able to resolve the Chef server. For AWS/EC2, make sure the default DNS server that DHCP hands out will correctly resolve your chefserver.example.com domain name.
|
commandknife ec2 server create -r "role[test1]" -I ami-axxxxxe --flavor t1.micro -x ubuntu --ssh-key JP_Key --availability-zone us-east-1c -p 22 --tags Name=test_knife2 --iam test-role --subnet subnet-cxxxxx8 --associate-eip 5x.xx.xx.x -g sg-xxxxError:10.220.15.110 Synchronizing Cookbooks:
================================================================================
10.220.15.110 Error Syncing Cookbooks:
================================================================================
https://mychefserver.example.com/bookshelf/organization00000000000000000000000000000000/
chcksum-d7c3b4577ca3ce35e757fb4a72c895f2?&Expires=1386685120&Signature=%2BaZMqKMbCxiBS5JuuaDgGO0HSRo%3D - getaddrinfo: Name or service not known
Your chef_server_url may be misconfiguredWhen I doknife client listin server output is the instance id of the chef client
problem here is client is not able to pull the recipes from server.
|
Error in Synchronizing Cookbooks
|
Try moving the bucket option inside the s3_credentials hash:
s3_credentials: {
  :bucket => ENV['S3_BUCKET'],
:access_key_id => ENV['S3_ACCESS_KEY'],
:secret_access_key => ENV['S3_SECRET_KEY']
}
|
I have a problem with paperclip. I set it to store my attachments in s3 and I have a lot of them in original size. The problem is I need to reprocess them to have 3 diffrent sizes per image. I read in paperclip readme that #reprocess! method might be useful.This is my user class with attachment :has_attached_file :avatar, styles:
{
large: ["135x135#", :jpg],
thumb: ["50x50#", :jpg],
small: ["30x30#", :jpg]
},
default_url: '/placeholders/avatars/:style.png',
url: '/system/users/:attachment/:id_partition/:style/:filename',
storage: :s3,
bucket: ENV['S3_BUCKET'],
s3_credentials: {
:access_key_id => ENV['S3_ACCESS_KEY'],
:secret_access_key => ENV['S3_SECRET_KEY']
}
validates_attachment :avatar,
content_type: {
content_type: /^image\/(jpg|jpeg|pjpeg|png|x-png|gif)$/,
message: 'is not allowed (only images)'
},
size: {
in: 0..1.megabytes,
message: 'is too big'
}I have also set credentials to s3 in my development.rb and production.rb. When I ran reprocess! on every user.avatar object it returns true but folder structure don't change.pry(#<Importer::Mugshots>)> user.avatar.reprocess!
(0.6ms) BEGIN
(5.3ms) UPDATE "users" SET "avatar_content_type" = '', "avatar_file_size" = 30735, "avatar_updated_at" = '2013-11-19 11:10:17.486960', "avatar_file_name" = '78398594.jpg', "updated_at" = '2013-11-19 11:10:19.001503' WHERE "users"."id" = 542025
(11.6ms) COMMIT
=> trueI tried to change paperclip config to use local filesystem but it doesn't help. What it might be ?
|
Paperclip reprocess returns true but doesn't change folders structure
|
Have you added the below to the CORS config, alongside POST?
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
|
Im storing images on Amazon S3. I have finally uploaded images (following the idea suggested here:uploading image), with XMLHttpRequest, and it works very nicely.Now Im trying to delete images from s3, but when I change the method on xml.open to 'DELETE', and send the form Amazon responds with a forbidden message.
I did enable upload/delete under permissions at AWS console, and edited the CORS config to include DELETEActually, right now, to send files to S3 Im using POST request, but PUT request doesn't work either.
|
DELETE files from S3 with CORS
|
s3cmd first copies the object from source to dest, and then deletes it from the source. Apparently it is doing it right (https://github.com/s3tools/s3cmd/blob/master/S3/S3.py) and I've never had this kind of problem. Are you running the latest version of s3cmd? Have you tried another version? Is there some pattern to the files it fails to delete (i.e. larger than 1GB)?
|
I'm having a strange behavior from s3cmd.
when running mv command on multiple files in a folder (on by one), some of the files are only being copied to destination dir but not deleted from the source dir.did anyone experienced anything like that?thanks in advnaced,Oren
|
S3cmd mv command not deleting source files after copying
|
This might be a character encoding issue. I was running into a similar issue while transferring some text fields between a MySQL database and DynamoDB. Certain special characters like é were not valid UTF-8 characters. And even though they were replaced with the replacement character (�), for some reason this was throwing a fatal error. I ended up having to run a check on all the fields before they were put into the database and convert the encoding to ASCII (which was the type all the other strings were set to). Instead of the � replacing the unknown characters, a ? was used, so it wasn't a "good" fix, but it did prevent the script from crashing. This is my best guess though, as all I have to go off of is the error being similar to the one I was getting. And seeing as this question is old, I thought I would attempt to answer it in case anyone else comes across this like me.
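A tiny Python sketch of the kind of pre-insert check described above (this is only an illustration of the workaround, not the original PHP/MySQL code):
def to_ascii(text):
    # 'replace' substitutes un-encodable characters with '?', which is
    # what the workaround above relies on.
    return text.encode("ascii", "replace").decode("ascii")

print(to_ascii("café"))   # -> "caf?"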
|
I'm trying to level up files (putObject) with my app running on Plesk, Zend 1.0 + SDK 2.0 from Amazon but returning the following error:Fatal error: Uncaught exception 'Guzzle\Common\Exception\InvalidArgumentException' with message 'Invalid resource type' in /var/www/vhosts/domain/library/Amazon/Guzzle/Http/EntityBody.php:50 Stack trace: #0 /var/www/vhosts/domain/library/Amazon/Aws/Common/Client/UploadBodyListener.php(85): Guzzle\Http\EntityBody::factory(false) #1 [internal function]: Aws\Common\Client\UploadBodyListener->onCommandBeforePrepare(Object(Guzzle\Common\Event)) # ...To download the files (GetObject) works normally.
|
AWS S3 Upload Bucket - Invalid resource type
|
This will help you. You need to create an XML configuration file like this: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
|
I am at a lost on how to query my Amazon CloudSearch from a static HTML page. Although the documentation is good there are no examples beyond copying and pasting a URL in a browser.What I would like is an HTML page in S3, so no server side code allowed, to have a text field form that when the search button is clicked fires to my CloudSearch end point and returns the resultsCloudSearch responds with JSON, so will have to parse that and make a table of the results.So far I have been working with a saved JSON of the results locally and using Jquery to read the JSON file<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>JSON Sample</title>
</head>
<body>
<div id="placeholder"></div>
<script src="http://code.jquery.com/jquery-1.7.1.min.js"></script>
<script>
$.getJSON('search.json', function(data) {
var output="<ul>";
for (var i in data.hit) {
output+="<li>" + data.hit[i].id+ "</li>";
}
output+="</ul>";
document.getElementById("placeholder").innerHTML=output;
console.log(data);
});
</script>
</body>
</html>This gives me the ID for the record.But when I try and change the URL to the CloudSearch end point I get no data back. Having read and gone round in circles I believe it is because of CORS.However, Amazon's documentation just say to use an HTTP GET on the endpoint but how do I build that into my HTML page.Sorry for such a basic question
|
Amazon CloudSearch query
|
According to the documentation, you can create up to 30 databases per RDS instance: http://aws.amazon.com/rds/faqs/#2 We would need more details to debug your particular issue (the parameters used to create the RDS instance, the exact error message, etc.).
|
I created an AWS RDS MSSQL instance using Management Console but I cannot create a new database. Creating a table works fine though.Did I miss anything in the configuration? Do I need to execute a special schema?
|
Amazon AWS RDS Create database permission denied
|
That command is now deprecated and replaced by
auto scaling policies and CloudWatch alarms. There is some documentation on how to do this here: http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/US_SetUpASLBApp.html
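As a sketch of that replacement approach with boto3 (the group name and thresholds mirror the question's trigger; everything else is a placeholder), you create a scaling policy and wire a CloudWatch alarm to it:
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Policy that adds one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="AutoScale",
    PolicyName="scale-up",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=120,
)

# Alarm on average CPU > 70% that fires the policy.
cloudwatch.put_metric_alarm(
    AlarmName="AutoScale-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "AutoScale"}],
    Period=60,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
A mirror-image policy and alarm (ScalingAdjustment=-1, threshold 30, LessThanThreshold) covers the scale-down side of the old trigger.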
|
I downloaded the Auto Scaling Tool here: http://aws.amazon.com/developertools/2535 When I tried to create a trigger, the command was not found. I also exported its environment variables, but the command is still not found. I tried to find the as-create-or-update-trigger file, but there is no file like that. What's up?
root@ip:/root/tools# as-create-or-update-trigger Trigger1
--auto-scaling-group AutoScale --namespace "AWS/EC2" --measure CPUUtilization --statistic Average --dimensions
"AutoScalingGroupName=AutoScale" --units "Percent" --period 60
--lower-threshold 30 --upper-threshold 70 --lower-breach-increment"=-1" --upper-breach-increment "1" --breach-duration 120
OK-Created/Updated trigger
as-create-or-update-trigger: command not found
Does anyone have the same issue? Any solution?
|
as-create-or-update-trigger: command not found
|
If you are willing to pay, this SaaS site called bouncely seems to provide an interface and an API wrapper around SES bounces.
|
I'm currently creating an email app that is able to send emails to many users. However, I want to know whether there are bounced emails. I'm currently using Amazon SES to notify me if the email is bounced. However, I want the bounced email's data to be automatically entered into my Rails application instead of typing it manually based to the mailer daemons I get from Amazon. Is there are way to do so?
|
How to check for bounced emails in rails?
|
ScanFilter is a dictionary, where the keys are attribute names and the values indicate the conditions the items returned by the Scan call must meet. This appears to be part of the legacy v1 DynamoDB API. You gain access to more features like filter (condition) expressions in Scans, and higher-level abstractions like the ObjectMapper, if you use the v2 aws-sdk-ios SDK.
|
I want to set scanfilter while fetching of data from Dynamodb.
DynamoDBScanRequest *request = [[[DynamoDBScanRequest alloc] initWithTableName:TEST_TABLE_NAME] autorelease];can any body let me know how i can set scan filter in the above code?
|
How to set scanfilter in Amazon-Dynamodb for iOS?
|
Try using the very latest JVM (6u32 or 7u4) and see if it's still reproducible. If you are on an older version, there's at least a decent chance it's already been fixed in the latest.
|
We have an amazingly elusive jvm crash occuring on an Ubuntu server that runs on AWS.Our JVM crashes while crawling a few web pages.The crash occurs at line 308 of the "safepoint" cpp module. At the stage where a gauranteeArmed==0 statement occurs.Our sysadmin has advised that , at the time of crashing, there are a massive amount of threads created by the JVM.We have not reproduced this bug in other Linux or OSX boxes.We use the Ning library to crawl a few
Web pages.Related PostsHow do I investigate the cause of a JVM crash?JBoss / HotSpot JVM crashingIn each of these posts a "safepoint" related crash which comes from "nowhere" was observed. Most interestingly, the first above post actually exhibits a JVM crash during network related events.The cryptic nature of this bug leads me to believe that there is a bug related to thread creation and scheduling which is specific to our current version of Ubuntu with respect to the way java invokes some of its concurrency features, or some underlying library incompatibility that is highly idiosyncratic to our particular situation.My Question(s)My main question here is - what is the best method for debugging a JVM stack trace involving these "safepoints", and where can I get started learning about dealing with such errors ? There have been other questions along this line, but I have not seen a generic answer .Secondary, any insight into aws, java, networking, and how Ubuntu might behave differently in the cloud would be useful here.
|
Debugging the "safepoint" error - need theoretical OR practical to debugging JVM crashes?
|
Amazon provides a means of notifying bucket events (as seen here), but the only event currently supported is s3:ReducedRedundancyLostObject. I am afraid the only ways you can do what you want, today, are by either polling (or crawling, like you said) or modifying the clients who upload files to your bucket(s) (if you are in control of their code) in order to notify your boxes whenever stuff is uploaded/changed.
|
This question already has answers here:Closed11 years ago.Possible Duplicate:Notification of new S3 objectsGet notified when user uploads to an S3 bucket?What's the most efficient way to detect changes in Amazon S3? A number of distributed boxes need to synchronize local files with S3. Each box needs to synchronize with a portion of an S3 bucket. Sometimes files get dropped into a bucket from an external source, and so the boxes won't know about it.I could write a script that continually crawls all files on S3 and notifies the appropriate box when there is a change, but that will be slow and expensive. (There will be millions of files). I thought about enabling logging on the bucket, but it takes a long time for logs to get written, and I would like to get notified of changes fairly quickly.Any other ideas?
|
How to detect changes in Amazon S3? [duplicate]
|
What is your CPU load during the upload? If you have SSL turned on, then it could be that you are maxing out the tiny amount of CPU that your EC2 instance is allowed to consume.
|
I upload files from .NET environment using aws .net sdk.
The code runs on EC2 small instance server.
The code is very straightforward and standard.The problem is that uploading ~10Mb file takes about 10 minutes, which in my opinion is not good
File at about 7-8Mb takes about 7-8 minutes respectively.What can be done to improve this issue?
|
upload to Amazon S3 extremely slow
|
Some information is missing - what is your disk configuration? The EBS may contribute to the latency if everything is persisted to disk. Amazon has released a white paper with best practices on how to install Mongo on EC2: MongoDB on AWS. Here's its description: "This whitepaper provides an overview of general best practices that apply to all major NoSQL systems and highlights one of the popular NoSQL systems - MongoDB - and discusses how to best run it on the AWS cloud. It further examines different MongoDB configurations so you can optimize it for performance, durability, and security."
|
I have deployed mongodb 64 bit 2.x version on aws m1.large instance.I am trying to find best performance that mongo can give us on aws in-light ofhttp://www.snailinaturtleneck.com/blog/tag/mongodb/(andmongodb read/write performance and mongo hosting in the cloud)I have created one db with one collection i.e. user and inserted 100,000 records/json object (each json object size is 4KB) using random number as suffix to “user-“. Also, created index on user id.Further, I set db profiler to log slow query taking 20ms or more. I have executed java program with 10 threads. Each java class generates user id with random number and finds it in user collection in infinite loop. With such load I have observed latency in query/read up-to 60ms.I also observed that when I run less number of threads say 3 or 4 (having query load on user collection 5K per second to find users) then I see no latency or less then 2ms latency.I failed to understand why increasing load of finding user in collection is causing latency. I believe that mongo db can perform much more concurrent read then what I am trying and should not impact on performance as such.One possibility I assume that would be - mongo is having performance issues if there are large queries executed on single collection like in our case, I expect to have 10K to 20K queries per second on single collection.We would appreciate your thoughts / suggestion.
|
Latency on aws (m1.large) with MongoDB 64b 2.x
|
Ah. This is pilot error. I would explain what I'd done wrong, but I'm too embarrassed. Deleted items do not show up in wish-lists after all. Yay.
|
I've been tinkering with Yahoo Pipes and theAmazon Product Advertising API (formerly ECS) SDKto retrieve my wishlist.The problem is that although I can get all the items on my wishlist just fine, it seems to include items that I've deleted too.Has anyone else used this API and noticed this? Is there a way around it?UPDATE:Requested additional information in comments...Here is the URL I use to fetch the wishlist XML:http://webservices.amazon.co.uk/onca/xml?SubscriptionId=[my subs id]&Service=AWSECommerceService&ResponseGroup=ListItems&ProductPage=1&ProductGroup=Book&Operation=ListLookup&ListType=WishList&ListId=[my list id]And here is the relevant part of the XML response:<ListId>[my list id]</ListId>
<ListName>Wishlist</ListName>
<TotalItems>132</TotalItems>
<TotalPages>14</TotalPages>
<ListItem>
<ListItemId>EPIE5559HKT391</ListItemId>
<DateAdded>2003-11-17</DateAdded>
<QuantityDesired>1</QuantityDesired>
<QuantityReceived>0</QuantityReceived>
<Item>
<ASIN>5557205521</ASIN>
<ItemAttributes>
<Title>Horton hears a who</Title>
</ItemAttributes>
</Item>
</ListItem>
...The rest of the XML is just either more list items like that, or information about the request at the top of the response.
|
Amazon Web services - retrieving a wishlist
|
I found the solution. The issue was only with the Chrome browser. The S3 static endpoint was on HTTP and the Spring Boot backend application was on HTTPS. I created a CloudFront distribution for my S3 static website, and now, since both are on HTTPS, I am not getting the error.
|
I am calling my rest endpoint from aws s3 static website and getting below CORS errorAccess to fetch at 'http://[my-public-ip]:5225/user/1'
from origin 'http://[my-bucket].s3-website-us-east-1.amazonaws.com'
has been blocked by CORS policy: The request client is not a secure context
and the resource is in more-private address space `private`.I have below code in my springboot for allowing cors, also i tried various other combinations like allowedOrigins, allowedMethods but none of them working@Override
public void addCorsMappings( CorsRegistry registry ) {
registry.addMapping( "/**" );
}Also i tried adding cors setting in s3[
{
"AllowedHeaders": [
"Authorization"
],
"AllowedMethods": [
"GET",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [
"Access-Control-Allow-Origin"
]
}
]am i missing something?
|
CORS error while accessing endpoint from S3 static website
|
source_file is a local path, not a remote URL. Thus you can do the following: download your file from the repository using a null_resource with a local-exec provisioner, as shown here, then use your archive_file data source to archive the newly downloaded file.
|
I need to archive a directory from my gitlab repo using terraform in order to use the ZIP file in an aws lambda function.
when putting a url in the source attribute it errors:Error: error archiving file: could not archive missing file: https://*********/lambda.pymy Terraform code for archiving the file is:data "archive_file" "init" {
type = "zip"
output_path = "${path.module}/example.zip"
source_file = "https://*********/lambda.py"
}so obviously this is not the right way to do it, didn't find anything regarding this online, is there a proper way to do this using Terraform?
|
How do I archive file/dir from a gitlab repository using Terraform?
|
This issue happens because CognitoIdentityClient is expecting credentials even though they're not required.
region: environment.AWS.region,
credentials: fromCognitoIdentityPool({
client: new CognitoIdentityClient({
region: environment.AWS.region,
credentials: () => Promise.resolve({} as any), // Temporary fix for problem
}),
identityPoolId: environment.AWS.identityPoolId
})
})
|
I am trying to replace the S3 aws-sdk v2 with @aws-sdk/client-s3 v3 (the guide I am using). This was working with v2 (putObject, getObject, etc.), but with the changes, I am getting an error saying:ERROR Error: Uncaught (in promise): Error: Credentialis missing
Error: Credentialis missingThis error happens upon load, I am not even calling any of the methods, just initializing the S3 object.private readonly s3 = new S3({
region: environment.AWS.region,
credentials: fromCognitoIdentityPool({
client: new CognitoIdentityClient({ region: environment.AWS.region }),
identityPoolId: environment.AWS.identityPoolId
})
})
|
AWS S3: Credentials missing (aws-sdk v3)
|
Object Versioning: https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html. You will need to have enabled versioning on the bucket already, though. If you're going through CloudFront you can do this and create an invalidation.
|
In the event that you somehow need to roll back to the previous working version: CodeDeploy supports blue/green deploys for ECS, and that works great, but how do you manage the same sort of thing for a static frontend deployed to S3?
|
How are you handling rollback of static frontend deployed to S3?
|
Just try using the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html It works for me. I'm using CDK version 1.57.0.
|
I am trying to use the command cdk bootstrap after I have set up my virtual environment using the AWS CDK. This is the code for my application that the command above is pulling credentials from:
#!/usr/bin/env python3
from aws_cdk import core
from hello.hello_stack import MyStack
app = core.App()
MyStack(app, "hello-cdk-1", env={'account':'IDHERE','region': 'us-east-2'})
MyStack(app, "hello-cdk-2", env={'account':'IDHERE','region': 'us-west-2'})
app.synth()
Obviously I have taken the account ID out.
When using the command cdk bootstrap, here is my error output:
❌ Environment aws://ACCOUNTIDHERE/us-west-2 failed bootstrapping: Error: Need to perform AWS calls for account ACCOUNTIDHERE, but no credentials found. Tried: default credentials.
at CredentialsCache.getCredentials (/usr/local/lib/node_modules/aws-cdk/lib/api/util/sdk.ts:261:11)
at CredentialsCache.get (/usr/local/lib/node_modules/aws-cdk/lib/api/util/sdk.ts:223:25)
at SDK.cloudFormation (/usr/local/lib/node_modules/aws-cdk/lib/api/util/sdk.ts:117:20)
at Object.deployStack (/usr/local/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:56:15)
at Object.bootstrapEnvironment (/usr/local/lib/node_modules/aws-cdk/lib/api/bootstrap-environment.ts:93:10)
at /usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:270:24
at async Promise.all (index 1)
Need to perform AWS calls for account ACCOUNTIDHERE, but no credentials found. Tried: default credentials.
|
AWS CDK Python (No Credentials Found)
|
Cognito will automatically set up CloudWatch metrics for your User Pool activity. You just need to go to CloudWatch and select "Cognito" from the Services list. Try refining the time filter or setting the refresh interval so that CloudWatch regularly fetches the newest metrics. Hope this answers your question.
|
Our team is implementing a Web Application (ReactJS) that utilizes Amazon Cognito service for user sign-up, log-in, log-out.
However, instead of using Cognito's hosted UIs, we created our own login page and used amazon-cognito-identity-js sdk to implement the authentication functionality.Now, I need to be able to monitor the user activity (for example, which users logged-in from which location). I understand that this can be done by using Cognito's Advanced Security feature.I have set the user pool's Advanced Security Setting to "Audit Only". However, there are still no Cognito-related metrics showing up in Cloudwatch.I also tried to follow the instructions described in below site but to no results.https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-viewing-advanced-security-app.htmlDoes anyone have any idea as to what setting I may have missed out?
Any advice is highly appreciated.
|
Metrics for Cognito are not showing up in Cloudwatch
|
Whatever ACL value you're using in the signature needs to also be sent in the request headers, as 'x-amz-acl': '**-**-**'. Note also that an S3 PUT does not expect FormData -- it expects the body to contain only the raw bytes of the object. This isn't the cause of the error, but once you correct the signature error, you'll need to change this, too, in order to get a valid, usable upload.
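For illustration, a Python sketch of a presigned PUT where the signed parameters and the request headers line up (the question uses the JS SDK; the bucket, key and ACL here are placeholders):
import boto3
import requests

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-bucket",                        # placeholder
        "Key": "upload.zip",                          # placeholder
        "ContentType": "application/x-zip-compressed",
        "ACL": "private",                             # must match the header below
    },
    ExpiresIn=3600,
)

with open("upload.zip", "rb") as fh:
    resp = requests.put(
        url,
        data=fh,   # raw bytes of the object, no FormData wrapper
        headers={
            "Content-Type": "application/x-zip-compressed",
            "x-amz-acl": "private",
        },
    )
print(resp.status_code)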
|
I am getting a presigned url from aws and using it to request(PUT) a zip file. I get signature does not match.when getting presigned url:const params = {
Bucket: myBucket,
Key: myKey,
Expires: 60*60,
ACL: '**-**-**',
ContentType: 'application/x-zip-compressed'};when requesting:const formData = new FormData();
formData.append('file', file);
formData.append('filename', file.name);
fetch(url, {
method: 'PUT',
headers: {
'Content-Type': 'application/x-zip-compressed',
},
body: formData
})
|
s3 presigned url multipart formdata upload err:signature does not match
|
I struggled with the same problem for hours, and in the end it turned out to be the user info endpoint that was wrong. I was using the same one as you, but it should be https://openidconnect.googleapis.com/v1/userinfo. I haven't found any Google documentation saying what the value should be, but found this excellent blog post that contained a working example: https://cloudonaut.io/how-to-secure-your-devops-tools-with-alb-authentication/ (the first example uses Cognito, but the second uses OIDC and Google directly).
|
I am trying to set listener rules on an ALB. I want to add Google OAuth support to one of my servers. Here are the Google endpoints I am using. I see the Google auth page alright, but on the callback URL I'm seeing a 500 Internal Server Error. I've also set the callback URL. I'm at a loss as to what's wrong here. Any help is most appreciated! After authentication, I'm not redirecting to my application; instead I've set the ALB to show a simple text-based response.
|
AWS ALB Listener Rules - OIDC - Google Oauth
|
Just create a policy like the one below and grant it to your user; then you can keep using the same strategy locally or in Lambda. PS: I checked it here and it works like a charm! You might also check my PoC Lambda SSM project. In this project I use Serverless to develop the Lambda, and it works when invoking locally with invoke local -f hello_ssm.
Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ssm:GetParameter"
],
"Resource": [
"arn:aws:ssm:us-east-1:139486740103:parameter/my-secure-param"
],
"Effect": "Allow"
},
{
"Action": [
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:us-east-1:139486740103:key/alias/aws/ssm"
],
"Effect": "Allow"
}
]
}
|
Absolute SSM noob here, currently we use SSM in our lambda function, and to use it we simple import the SSM class and instantiate an instance, the constructor does the env var injections.from aws_ssm import SSM
ssm = SSM()While this works as expected when running on AWS Lambda, but it doesn't work well in our local computer, typically our local accounts not setup with SSM.In order to bypass the SSM and load the vars from actual existing env vars, I will have to add a switch:if not os.environ.get('NO_SSM'):
from aws_ssm import SSM
ssm = SSM()And this seems like a hack to me (especially False False condition to make it right), I am just wondering if there is a proper way to do it for local development?Just thinking again, it would have been better to reverse the situation originally to only use SSM whenUSE_SSMenv is defined:if os.environ.get('USE_SSM'):
from aws_ssm import SSM
ssm = SSM()
|
Is there a local development mode for AWS SSM?
|
You need to give access to the entire Base account in the Dev/AdminRole trust policy (an IAM group cannot be used as a principal) and then restrict who can actually assume the role with IAM policies in the Base account.
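A boto3 sketch of what that trust policy change looks like (the account ID and role name are placeholders):
import json
import boto3

# Trust policy on the Dev role: the whole Base account is the principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # Base account (placeholder)
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")  # credentials for the Dev account
iam.update_assume_role_policy(
    RoleName="AdminRole",
    PolicyDocument=json.dumps(trust_policy),
)
On the Base account side, you then attach a policy to the admins group that allows sts:AssumeRole on the Dev role's ARN, which is the "restrict access" half of the answer.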
|
I have quite a few users in an AWS Account - let's call it the Base account:IAM:
groups:
admins:
user1
user2
user3
....
user56I have created a second AWS account - let's call it the Dev account with a single Role with AdministratorAccess.IAM:
Roles:
AdminRoleI tried to add a Trust Relationship between Base/admins and Dev/AdminRole with this Trust policy:{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::************:group/admins" },
"Action": "sts:AssumeRole"
}
]
}but got the following error:An error occurred: Invalid principal in policy....How do I allow users in Base/admins to assume the Dev/AdminRole?
|
AWS Trust Assume Role from Group
|
You can set authorization only for resources and methods. For example, we have the following API structure:
/
/test
GET (1)
PUT
/test/new (2)
ANY
/example/{proxy+}
GET (3)
1) Per method: the site.com/test endpoint is authorized on the GET method; if you try to use the same key on the PUT method you catch an error.
2) Per resource: the site.com/test/new endpoint is authorized for all methods on /test/new, but if you try a GET on /test/new/new2 you catch the error.
3) Per resource (with proxy): with the site.com/example/{proxy+} endpoint you can authorize any example/* path.
|
I am trying to use AWS API Gateway to proxy requests to some REST endpoints I have running in docker containers. I set up my API Gateway method for integration type HTTP and checked 'Use HTTP Proxy integration', But this is not simply proxying my requests, it strippes out the path parameters, query string parameters and body, and makes me map them to something.Am I missing something, I don't want API gateway transforming my request I just want it to proxy it back to my internal REST endpoints.FYI I am using a swagger doc to generate the API Gateway structure (their UI is quite annoying)I read about {proxy+} endpoints which sound like what I want, but how do I define swagger docs about a certain endpoint action, or have granular apikey and authorizors on my endpoints?
|
AWS API Gateway HTTP Proxy mode
|
I found this to be the easiest way to do it:
let request = new AWS.DynamoDB({apiVersion: '2012-08-10'})
let params = {
TableName: 'YOUR_TABLE_NAME',
Key: {
'YOUR_KEY': { S: 'STRING_VALUE_TO_MATCH' }
}
}
let result = await request.getItem(params).promise().then((data) => {
return data.Item
})
// Now you can use result outside of the promise.
console.log(JSON.stringify(result))Make sure this is inside an async function and it should work for you. This isn't for a "scan" but the concept should be the same.
|
I am retrieving the data from dynamodb with aws's dbclient.scan function. I need to use the output data to retrieve data from another table. I am trying to assign the output of first db scan into variable that is outside of dbclient.scan. The problem is that I get empty variable eventhough I assigned data from dbclient.scan callback function. What should I do? Anyway, I haven't used promise and asynchronous concept. The following is the code that I wrote.var tmp = []
docClient.scan(params, (error, result) => {
if(error) { .......}
else{ var tmp1 = result.Items[0].data
tmp.push(tmp1)
}
});
console.log(tmp)//empty listWhat should I do?
Many thanks,
Sea
|
pass data from callback (aws dynamodb scan) to outside
|
I have faced a similar problem and found a workaround in the search_after API, which is not affected by that 10k-element limit and thus can be useful when you know you might have more than that and still want to render everything that is there. It allows relatively easy fetches without the hard restrictions on filters or search, at the cost of a not-too-handy pagination.
It is tricky to use, because:
- all indexes that you are searching in should be sorted by a field that is unique across all indexes;
- the "from" query parameter (of a page) should be set to 0;
- you send a "search_after" value taken from the last element of the previous page for that field.
And you won't operate on page numbers anymore, just the last element of each page and the size of the page you want to see after that element. It is not really an answer on how to jump to the last page, but it is a solution that avoids changing the result window size or limiting the results to 10k.
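A sketch of that pattern with the elasticsearch Python client (index, sort fields and query are placeholders; adjust to your client version):
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder endpoint

body = {
    "size": 100,
    "query": {"match_all": {}},                # placeholder query
    # the sort must end with a unique tiebreaker field
    "sort": [{"timestamp": "asc"}, {"doc_id": "asc"}],
}

last_sort = None
while True:
    if last_sort is not None:
        body["search_after"] = last_sort       # sort values of the last hit on the previous page
    page = es.search(index="my-index", body=body)
    hits = page["hits"]["hits"]
    if not hits:
        break
    last_sort = hits[-1]["sort"]
    # ... render/process this page of 100 documents ...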
|
I am using search query to retrieve documents from elastic search which returns me nearly 50k documents. I have a UI which renders 100 documents per page and have a button to jump to last page. Whenever I try to hit on last page I get below errorResult window is too largeI don't wish to increase theindex.max_result_window = 10000
|
How to Jump to last page in elastic search when search query returns more than 10000 documents
|
Converted the parameters to a JSON string and posted that string as a query string parameter:
Map<String, String> parameters = new LinkedHashMap<>();
List<String> productsIsbs = Arrays.asList(request.getProducts())
.stream().map(x -> x.getIsbn()).collect(Collectors.toList());
String params = new Gson().toJson(productsIsbs).replace("\"", "\'");
parameters.put("isbns", params);
GenericApiGatewayRequest apiGatewayRequest;
try {
apiGatewayRequest = new GenericApiGatewayRequestBuilder()
.withHeaders(headers)
.withHttpMethod(HttpMethodName.GET)
.withResourcePath("/bid")
.withParameters(parameters)
.build();
GenericApiGatewayApacheResponse response = client.executeGetWithHttpClient(apiGatewayRequest);
if(response.getHttpResponse().getStatusLine().getStatusCode() == 200) {
String responseJson = response.getBody();
}
} catch (GenericApiGatewayException e) { // exception thrown for any non-2xx response
System.out.println(String.format("Client exception:%s - %s", e.getStatusCode(), e.getMessage()));
e.printStackTrace();
}
|
I am calling the below AWS Gateway API from a Java console application. In Postman, it works perfectly with AWS signatures.
GET https://api.valorebooks.com/bid?isbns=["9780026840019"]
I am using the AWS SDK and this library: https://github.com/rpgreen/apigateway-generic-java-sdk The console app is passing the isbns parameter but the API is throwing HTTP 400. Source API: https://valorebooks.github.io/api/source/bid/
|
Calling AWS API Gateway, issue with GET calls [closed]
|
You'd definitely benefit from the Google Analytics custom task feature instead of custom HTML. More on this from Simo Ahava. Also, Google BigQuery is quite a popular destination for streaming hit data, since it allows many on-the-fly computations such as sessionization, and there are many ready-to-use cases for BQ.
|
So the questions has more to do with what services should i be using to have the efficient performance.Context and goal:So what i trying to do exactly is use tag manager custom HTML so after each Universal Analytics tag (event or pageview) send to my own EC2 server a HTTP request with a similar payload to what is send to Google Analytics.What i think, planned and researched so far:At this moment i have two big options,UseKinesisAWS which seems like a great idea but the problem is that it only drops the information in one redshift table and i would like to have at least 4 o 5 so i can differentiate pageviews from events etc ... My solution to this would be to divide from the server side each request to a separated stream.The other option is to use Spark + Kafka. (Here is a detail explanation)I know at some point this means im making a parallel Google Analytics with everything that implies. I still need to decide what information (im refering to which parameters as for example the source and medium) i should send, how to format it correctly, and how to process it correctly.Questions and debate points:Which options is more efficient and easiest to set up?Send this information directly from the server of the page/app or send it from the user side making it do requests as i explained before.Does anyone did something like this in the past? Any personal recommendations?
|
Google Tag Manager clickstream to Amazon
|
You can set Content-Type headers in API Gateway. Refer to this link, I hope it helps: http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
|
My API has an OPTIONS method that is implemented through a Mock integration that is set to return the required CORS headers.The problem is that it returns500 Internal Server Errorwhen there is noContent-Typeset on the OPTIONS request. It is returning proper CORS headers with a200 SuccesswhenContent-Typeis set toapplication/json.How do I fix this as the OPTIONS request is sent by the browser and I don't think I'm allowed to setContent-Typeheader on it?
|
AWS API Gateway OPTIONS Request Headers
|
I had this situation because of a memory leak.
Try to monitor your RAM.
|
I installed NodeJS v8.1.2 on an Amazon Linux Distribution on aws.I have pm2 installed that is in-charge of restarting process in cases it fails.I catch uncaught exceptions in the process and log them so the process wont restart since I use socket.io and I don't want users to get disconnected on every single exception.about two month ago after updating nodejs to v7, nodejs would restart randomly with no reason what so ever, so I decided to compile nodejs from sources using nvm, and it resolved the issue.about a week ago I updated nodejs again to v8.1.2 and today the process restarted again with no reason at all, no exception... nothing on the servers stats where too high.. no reason what so ever.what do I do?any information regarding the issue would be greatly appreciatedupdatesI checked/var/log/messagesand I noticed a segmentation fault error at the time of the restart. do I have to create a core dump to investigate the issue further?can a segmentation fault of the nodejs process can be caused because of my code ?what do I do ? :)
|
pm2 restarts nodejs process with no indications why
|
You can extract it from the request. E.g. for Lambda, extract it from the field event.requestContext.identity.cognitoAuthenticationProvider in the event parameter. See these resources for how to do it:
https://serverless-stack.com/chapters/mapping-cognito-identity-id-and-user-pool-id.html
https://github.com/aws-amplify/amplify-js/issues/390
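For illustration, a minimal Python Lambda handler doing that extraction; the provider-string format follows the resources linked above and should be double-checked for your setup:
def handler(event, context):
    identity = event["requestContext"]["identity"]

    # e.g. "cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXX,
    #       cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXX:CognitoSignIn:<sub>"
    auth_provider = identity["cognitoAuthenticationProvider"]
    user_pool_sub = auth_provider.split(":CognitoSignIn:")[-1]

    # The federated identity id is available separately:
    identity_id = identity["cognitoIdentityId"]

    return {"statusCode": 200, "body": f"{identity_id} -> {user_pool_sub}"}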
|
I have a Cognito Federated Identity with Cognito User Pool as Auth provider. It all works and I can see that any new user added in user pool creates a new federated identity. But how do I know which Federated Identity is linked to which user in the User Pool in my code - say a Lambda? When I browse the Identity pool in AWS console, it just shows the User pool id in linked logins and not the User pool sub. Do I need to store a mapping between User Pool's sub to Identity pool's Identity id ?
|
Cognito User Pool with Federated Identites - how to find linked user
|
Follow the below steps to achieve this.
1) Buy your domain, example.com.
2) Go to the Amazon S3 console and create a bucket named example.com.
3) Add your index.html file to it and provide read permissions.
4) Enable static website hosting for the bucket, using example.com in the field.
5) Go to the Route 53 part of the console and add a Type A record set (IPv4).
6) Select Yes for Alias and choose the endpoint from the drop-down; it will be something like example.com.s3-website-us-west-2.amazonaws.com.
7) Hit 'Create'.
8) Go back to Hosted Zones and click the example.com zone; on the right you will see 4 name servers that look something like this:
ns-XXXX.awsdns-54.org
ns-XXX.awsdns-15.com
ns-XXXX.awsdns-45.co.uk
ns-XXX.awsdns-27.net
9) Copy these name servers to a notepad or something.
10) The Amazon side is now configured; we just need to do the domain side, so in my case I went to iwantmyname.com.
11) Go to edit name servers, and change them to the ones you copied from step 8.
12) We're done! Just be patient as it does take some time to configure all of this. In my case it took about 15 minutes. You can ping the website or use nslookup to check up on the progress from your console:
ping example.com
nslookup example.com
Pinging is inferior to nslookup with S3, since Amazon blocks pings.
|
I have a static website hosted on aws s3 bucket. I want to set custom URL for every page in my website. like www.site.com/folder/subfolder/file.html to www.site.com/filename. What is the simplest way to do this.
|
Custom url of static website hosted on aws s3 bucket
|
As per the design in Danilo's book, if you are using the JavaScript aws-sdk, you should define your objects like this:
var creds = new AWS.CognitoIdentityCredentials({
IdentityPoolId: //hard coded value for your system//
})
AWS.config.update({
region: 'us-east-1',
credentials: creds
});
var lambda = new AWS.Lambda();then once you receive your identityId and token , you should assign them to you creds as follow :creds.params['IdentityId'] = output.identityId;
creds.params['Logins'] = {};
creds.params['Logins']['cognito-identity.amazonaws.com'] = output.token;
creds.expired = true;where output is the response from your LambdAuthLogin Lambda function.
|
I am attempting to create an iOS app in Swift that uses the following authentication service using AWS Lambda -https://github.com/danilop/LambdAuthIt uses the AWS Mobile SDK for iOS to communicate with DynamoDB and Lambda -http://docs.aws.amazon.com/mobile/sdkforios/developerguide/Here is the sample code for the website that utilizes the token returned from the Lambda login function, I imagine the Swift code will be something similar -https://github.com/danilop/LambdAuth/blob/master/www/login.html#L69Here is the cloud function that generates the token for the user -https://github.com/danilop/LambdAuth/blob/master/LambdAuthLogin/index.js#L102I have created an identity pool in AWS Cognito (Federated Identities) and I have two roles, auth and unauth. My application appears to always being the unauth role (arn:aws:sts::123123123:assumed-role/_unauth_MOBILEHUB_123123123/CognitoIdentityCredentials). My users are being stored in a dynamodb table, with a salted password.The root of the problem is that I don't know the correct Swift code to write after I receive a login token from the service to transition my user into the authenticated role (use the auth arn). I want it to be using the auth role for every service call to AWS (dynamodb, lambda, etc). I'm hoping that someone can point me in the right direction - thank you.
|
AWS Lambda/Cognito Authentication - Assuming Auth Role
|
Use lockForUpdate() instead, to prevent the selected rows from being modified or selected with another shared lock until your transaction commits.
|
I am working on Laravel 5.1 and my MySQL version is 5.5.44. My database storage engine is InnoDB. I want to lock my table while inserting data into it, as I have to handle concurrency because multiple instances of the server can spawn at the same time (because of the AWS load balancer) against a single database. I have studied MySQL locking, which explains that the storage engine must be MyISAM, MEMORY, or MERGE, and also pessimistic locking in Laravel, which offers sharedLock() and lockForUpdate(). But it is not clear to me whether they can lock the table or not. Question: How can I handle concurrency in this scenario, allowing only one insertion into the table at a time? What steps do I have to follow? Thanks
|
Locking a table while inserting in Laravel 5.1
|
From the command line, the aws-cli is the easiest way to upload to / download from S3. See: http://docs.aws.amazon.com/cli/latest/reference/s3/index.html

For programmatic access, the available SDKs are here: http://aws.amazon.com/tools/
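If you end up writing code rather than using the CLI, a minimal boto3 sketch of the simple upload/download case looks like this; the bucket name, keys, and file paths are placeholders:

import boto3

s3 = boto3.client("s3")

# Upload a local file to S3 (bucket and key names are placeholders)
s3.upload_file("local-file.txt", "my-example-bucket", "path/in/bucket/file.txt")

# Download the same object back to a local path
s3.download_file("my-example-bucket", "path/in/bucket/file.txt", "downloaded-copy.txt")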
|
Are there any tutorials or samples available for Amazon S3 on Mac OS X? I just need a sample for a simple S3 upload/download operation. I searched a lot for an API but could not find one. I found a custom one by Tom Anderson, but I cannot make it work. Is there any workaround? I am looking for tutorials or source samples.
|
Amazon S3 on Mac OS X?
|
This is now possible with an update from AWS; see here for more details: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
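Roughly, the import works by creating a change set of type IMPORT that maps each existing resource to a logical ID in your template. A hedged boto3 sketch of that flow is below; the stack, template, and bucket names are placeholders, and the template is assumed to already describe the imported resource with a DeletionPolicy of Retain:

import boto3

cfn = boto3.client("cloudformation")

# Create an IMPORT change set that maps an existing, manually created bucket
# to a logical resource in the template (all names below are placeholders).
cfn.create_change_set(
    StackName="my-existing-stack",
    ChangeSetName="import-manual-resources",
    ChangeSetType="IMPORT",
    TemplateURL="https://s3.amazonaws.com/my-templates/template-with-bucket.yaml",
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "ManuallyCreatedBucket",
            "ResourceIdentifier": {"BucketName": "my-manually-created-bucket"},
        }
    ],
)

# After reviewing the change set in the console or via describe_change_set,
# execute it to bring the resource under CloudFormation management.
cfn.execute_change_set(
    ChangeSetName="import-manual-resources",
    StackName="my-existing-stack",
)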
|
I initially created all my resources in AWS manually. Now I'm trying to use CloudFormation (SAM) templates to create all my new resources. Is there any way I can add my old (manually created) resources to this CF template and henceforth use this CF template to make modifications to those old resources?
|
How to add "manually created" resources to the CloudFormation template
|
Quote from the CodeBuild docs:

"For Security Groups, choose the security groups that AWS CodeBuild uses to allow access to resources in the VPCs."

In other words, it is the security group attached to the network interfaces that CodeBuild creates in your VPC, so your RDS security group needs to allow inbound traffic from it. Learn more about using a VPC with CodeBuild: https://docs.aws.amazon.com/codebuild/latest/userguide/vpc-support.html
|
I want to allow CodeBuild to run my database migrations. I am configuring my CodeBuild project to be in the VPC and subnet of my RDS instance. But what do I put for the security group? Is this a security group that allows/denies access to my CodeBuild? Or should I understand it as the security group I want my CodeBuild to access?
|
What should I put as SecurityGroup for CodeBuild?
|
You will have to get all the prices through the API. Unfortunately, this is only possible for your own items which are listed on Amazon. You will have to subscribe to "AnyOfferChangedNotification" notifications. Take a look at https://images-na.ssl-images-amazon.com/images/G/02/mwsportal/doc/en_US/notifications/MWSPushNotificationsApiReference.V343959826.pdf for further information.
|
Does anybody have experience using the Amazon API and know how to retrieve the main Amazon price (the one it shows on the item's page) instead of the lowest offer price? Using this product as an example at the minute: http://www.amazon.co.uk/dp/B004KXWGJQ/ref=wl_it_dp_o_pd_nS_ttl?_encoding=UTF8&colid=2EJX987PULKQU&coliid=I1N4XNG5LMMNCV

I want to get the price £7.79 but instead can only get the £6.15 price. I have tried using a number of different response groups (see the link below) but still no luck. Does Amazon not want us to use this price for some reason?

http://docs.aws.amazon.com/AWSECommerceService/latest/DG/CHAP_ResponseGroupsList.html
|
How to get the main Amazon price using the Amazon API
|
I haven't found a way to see this easily in the UI. You can see the size of the individual snapshots by going to "AWS Backup" > "My Account" > "Backup Vaults". There, click on the vault you are interested in, then click on the individual recovery points; their respective sizes are shown. Now add those values.

On the command line using awscli and jq, you can sum them for the whole vault (replace MY_XXX with your specific values):

aws backup --region MY_REGION list-recovery-points-by-backup-vault --backup-vault-name MY_BACKUP_VAULT_NAME --query 'RecoveryPoints[*].BackupSizeInBytes' --output json | jq add
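If you prefer boto3 over the CLI, a small sketch that does the same sum (vault name and region are placeholders) could look like this:

import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Sum BackupSizeInBytes across every recovery point in the vault (placeholder name).
total_bytes = 0
kwargs = {"BackupVaultName": "MY_BACKUP_VAULT_NAME"}
while True:
    resp = backup.list_recovery_points_by_backup_vault(**kwargs)
    for point in resp.get("RecoveryPoints", []):
        total_bytes += point.get("BackupSizeInBytes", 0)
    if "NextToken" not in resp:
        break
    kwargs["NextToken"] = resp["NextToken"]

print(f"Vault size: {total_bytes / (1024 ** 3):.2f} GiB")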
|
How can I know the data size of my AWS Backup vault? I can't find any instructions on viewing the data size of the vault in this document. The reason I want to know the data size of the vault is that AWS Backup is costing me too much money, so I'd like to know how much data I've stored in the vault.
|
How to view the size of an AWS backup vault?
|
When you create a GCP Compute Engine instance (the EC2 equivalent) you can declare that you want it to have a public IP address. This is an IP that you can use over the Internet to access your instance. GCP gives you two types of IP ... static (stable) or ephemeral. A static IP is yours until you explicitly release it. There is no charge for this as long as your compute engine is running. An ephemeral IP is one which is allocated to you dynamically and may change following a restart of your compute engine instance.

GCP does not (currently ... things could always change) create a DNS entry that will resolve to your IP address over the Internet. It does create a DNS entry that can be used inside your GCP VPC network to allow one compute engine to call another within the GCP environment.

If you want to reach your Compute Engine instance via a DNS name, it is your responsibility to create a DNS "A" record in your own DNS server. If you don't have a DNS server that you can use, you can obtain a domain name for a few dollars, create a GCP Cloud DNS managed zone, and add an "A" record for your compute engine to that zone.

See also: Cloud DNS, Internal DNS
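If you go the Cloud DNS route, adding the "A" record can also be scripted with the google-cloud-dns Python client; this is only a sketch, assuming the managed zone already exists and using placeholder project, zone, and IP values:

from google.cloud import dns

# Placeholder project, zone, and address values
client = dns.Client(project="my-gcp-project")
zone = client.zone("my-zone", "example.com.")  # managed zone assumed to exist

# Point www.example.com at the instance's static external IP
record_set = zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])
changes = zone.changes()
changes.add_record_set(record_set)
changes.create()  # submits the change to Cloud DNS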
|
On creating an EC2 instance on AWS, you can access it via an IP address or a domain name provided by Amazon out of the box. Is there a similar thing available for Google Cloud out of the box? I'm on a network that blocks IP addresses and wildcard DNS like xip.io, so I was curious to know about it. Also, is there a specific term for this that I'm missing?
|
Google Cloud domain name for instance (like EC2)
|
OK, here is what I can suggest: you could make the partition key of the second index "OrganizationId#processId" - the organization ID should always be known when searching, since I assume you plan to search all items within an organization that have a specific process ID. This should work out for you (on the index, not the table):

"Condition": {
    "ForAllValues:StringLike": {
        "dynamodb:LeadingKeys": "${aws:PrincipalTag/organizationId}#*"
    }
}

assuming the principal tag carries the organization ID.
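To make the idea concrete, here is a minimal boto3 sketch of writing and querying that composite key; the index name, the orgProcessId attribute, and the sample values are assumptions for illustration, not something from your table definition:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Executions")

organization_id = "org-123"   # placeholder values
process_id = "proc-456"

# Store the combined attribute that the GSI uses as its partition key.
table.put_item(
    Item={
        "OrganizationId": organization_id,
        "orgProcessId": f"{organization_id}#{process_id}",  # assumed GSI partition key
        "status": 1,
    }
)

# Query the GSI; the leading key still begins with the organization id,
# which is what the dynamodb:LeadingKeys condition matches on.
response = table.query(
    IndexName="orgProcessId-status-index",  # assumed index name
    KeyConditionExpression=Key("orgProcessId").eq(f"{organization_id}#{process_id}"),
)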
|
So, I am currently making a DynamoDB table with multiple indexes and trying to manage access control. I have a key (organizationId) that I do not want to use as my secondary index's partition or sort key, because it would be pretty much pointless query-wise.

DynamoDB table
Table name: Executions
Partition key: OrganizationId (String)

DynamoDB Secondary Index
Primary partition key: processId (String)
Primary sort key: status (Number)

Would the following IAM policy condition effectively limit access on the secondary index based on the organizationId?

"Condition": {
    "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": [
            "anOrganizationId / Variable"
        ]
    }
}
|
DynamoDB Fine-Grained Access Control and secondary indexes
|
Didn't test it, as it involves setting up a test EMR server, but here's what should work:

Step 5: ssh -i publickey.pem -L 8080:127.0.0.1:7777 HOSTNAME

Step 6: Open the Jupyter notebook in your browser using 127.0.0.1:8080
|
I am new to Spark and AWS. I am trying to install Jupyter on my Spark cluster (EMR), but in the end I am not able to open the Jupyter Notebook in my browser.

Context: I have firewall issues from the place I am working; I can't get access to the IP address of the EMR cluster I create on a day-to-day basis. I have a dedicated EC2 instance (its IP address is whitelisted) that I am using as a client to connect to the EMR cluster I create on a need basis. I have access to the IP address of the EC2 instance and ports 22 and 8080. I do not have access to the IP address of the EMR cluster.

These are the steps I am following:

1. Open PuTTY and connect to the EC2 instance.
2. Establish a connection between my EC2 instance and the EMR cluster:
ssh -i publickey.pem ec2-user@<host name of the EMR cluster>
3. Install Jupyter on the Spark cluster using the following command:
pip install jupyter
4. Connect to Spark:
PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --port=7777" pyspark --packages com.databricks:spark-csv_2.10:1.1.0 --master spark://127.0.0.1:7077 --executor-memory 6400M --driver-memory 6400M
5. Establish a tunnel to the browser:
ssh -L 0.0.0.0:8080:127.0.0.1:7777 ip-172-31-34-209 -i publickey.pem
6. Open Jupyter in the browser: http://<host name of EMR cluster>:8080

I am able to run the first 5 steps, but not able to open the Jupyter notebook in my browser.
|
Using Jupyter notebook on Spark on EMR
|
I hope this documentation will help you out; the steps are broken down and quite simple to follow: http://docs.aws.amazon.com/AmazonS3/latest/dev/walkthrough1.html

You can also use policy variables. They let you specify placeholders in a policy; when the policy is evaluated, the policy variables are replaced with values that come from the request itself. For example: ${aws:username}

Furthermore, you can also check out this Stack Overflow question (if it seems relevant): Preventing a user from even knowing about other users (folders) on AWS S3
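As a sketch of how that variable ties a user to their own folder, the per-user policy below could be attached with boto3; the bucket name, user name, and policy name are placeholders rather than a drop-in solution:

import json
import boto3

iam = boto3.client("iam")

# Restrict each user to s3://my-example-bucket/<their IAM username>/* via ${aws:username}.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-example-bucket"],
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Sid": "AllowAllActionsInOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::my-example-bucket/${aws:username}/*"],
        },
    ],
}

iam.put_user_policy(
    UserName="example-user",          # placeholder
    PolicyName="per-user-s3-folder",  # placeholder
    PolicyDocument=json.dumps(policy_document),
)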
|
So I've been trying to define a policy to restrict a group of IAM users to a particular folder in an S3 bucket, with no success. I've riffed off the policy outlined in this blog post: http://blogs.aws.amazon.com/security/post/Tx1P2T3LFXXCNB5/Writing-IAM-policies-Grant-access-to-user-specific-folders-in-an-Amazon-S3-bucke

Specifically, I'm using the following:

{
"Version":"2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::*"]
},
{
"Sid": "AllowRootAndHomeListingOfCompanyBucket",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition":{"StringEquals":{"s3:delimiter":["/"]}}
},
{
"Sid": "AllowListingOfUserFolder",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition":{"StringLike":{"s3:prefix":["myfolder"]}}
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::mybucket/myfolder/*"]
}
]
}

Unfortunately, this policy for some reason allows users to navigate not only into the specified folder but also into other folders present in the same bucket. How do I restrict users in such a way that they can only navigate into the specified folder?
|
AWS: Restricting IAM User to Specific Folder in S3 Bucket
|