Columns: Response (string, 8-2k chars), Instruction (string, 18-2k chars), Prompt (string, 14-160 chars)
Type df -hT; the output shows the filesystem type of root (/), typically xfs or ext4.

If root (/) is xfs, run the following for the 500 GiB volume:

    $ mkfs -t xfs /dev/nvme1n1

If root (/) is ext4:

    $ mkfs -t ext4 /dev/nvme1n1

Create a directory in root, say named mount:

    $ mkdir /mount

Now mount the 500 GiB volume to /mount:

    $ mount /dev/nvme1n1 /mount

It will now be mounted and can be viewed with df -hT. Also make sure to update /etc/fstab so the mount remains stable across reboots. To do that, first find the UUID of the 500 GiB EBS volume and note it down from the output:

    $ blkid /dev/nvme1n1

Now open /etc/fstab using an editor of your choice:

    $ vi /etc/fstab

There will already be an entry for /; add an entry for the new mount and save the file (replace xfs with ext4 if the filesystem is ext4):

    UUID=<Add_UUID_here_without_quotes> /mount xfs defaults 0 0

Finally, run the mount command:

    $ mount -a
Use case: Launch an AWS Cloud9 environment that has an added 500 GB EBS volume. This environment will be used extensively by developers to build and publish Docker images.

So I started an m5.large instance-based environment and attached an EBS volume of 500 GB.

Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf

This is my total storage and I do not see the 500 GB volume. On digging further, it looks like the EBS volume is attached but not at the correct mount point. (Screenshot: EC2 EBS configuration.)

Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker builds?
Question: What would be the most efficient instance type for building Docker images?
EBS volume shows no mount point
After speaking with AWS Support, the issue was the EC2 image. I was using "latest" and they suggested that it is not best practice to always use latest. I switched to an older image version and the build is working again.
I have a pipeline on AWS for my API service, which is written in .NET Core 3.1. My buildspec.yml is pretty simple: it runs dotnet restore and dotnet publish.

I get this error in the restore phase: "error NU1100: Unable to resolve" for a lot of libraries, for example:

    C:\codebuild\tmp\output\src363055303\src\ExternalClient\ExternalClient.csproj : error NU1100: Unable to resolve 'Serilog (>= 2.9.0)' for '.NETCoreApp,Version=v3.1'.

I tried to restore the project on my PC through the terminal using the same command and it works. I see in the log that the pipeline uses nuget.org: "Feeds used: https://api.nuget.org/v3/index.json".

This pipeline already worked on AWS, and I have no idea why it started to fail on restore.
AWS CodeBuild restore error NU1100: Unable to resolve for .NET Core 3.1
The following RoleArn: 'arn:aws:iam::xxxxxx:root' is not an IAM role. It seems you are trying to assume the IAM root user. The correct ARN of a role has the form:

    arn:aws:iam::account-id:role/role-name-with-path
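The question uses the JavaScript SDK; as a rough illustration of the same call with a correctly formed role ARN, here is a boto3 sketch (the role name, session name, and account ID are hypothetical placeholders):

    import boto3

    sts = boto3.client("sts")

    # Assume a role (not the account root) -- the ARN must point at an IAM role.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/my-app-role",  # hypothetical role
        RoleSessionName="awssdk",
    )

    creds = response["Credentials"]

    # Use the temporary credentials for subsequent clients.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )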
I am trying to assume an IAM role using aws-sdk like so...

    var sts = new AWS.STS();
    sts.assumeRole({
      RoleArn: 'arn:aws:iam::xxxxxx:root',
      RoleSessionName: 'awssdk'
    }, function(err, data) {
      if (err) { // an error occurred
        console.log('Cannot assume role');
        console.log(err, err.stack);
      } else { // successful response
        AWS.config.update({
          accessKeyId: data.Credentials.AccessKeyId,
          secretAccessKey: data.Credentials.SecretAccessKey,
          sessionToken: data.Credentials.SessionToken
        });
      }
    });

But I keep getting...

    InvalidClientTokenId: The security token included in the request is invalid

However, I can connect if I just use the following...

    AWS.config.update({
      accessKeyId: process.env.ID,
      secretAccessKey: process.env.SECRET
    });

Any reason why I cannot assume a role?
AWS STS Assume Role - InvalidClientTokenId: The security token included in the request is invalid
You can point at the CloudFront distribution (assets.example.com), add a new origin with the domain name www.example.com, and then add a new cache behavior with the path pattern robots.txt that uses that origin.

This setup takes a request to assets.example.com/robots.txt and forwards it to www.example.com/robots.txt. With this, you can remove the duplication.
I have a website, www.example.com. When I access the robots.txt file at www.example.com/robots.txt it shows several lines of text which were created/prepared by the SEO team.

There is a subdomain, assets.example.com, which points to CloudFront. When I access the robots.txt file through the CloudFront URL, https://assets.example.com/robots.txt, it shows the result below in the browser:

    User-agent: *
    Disallow: /

So there is a request to update the robots.txt file content in AWS CloudFront so that https://assets.example.com/robots.txt and https://www.example.com/robots.txt show the same text. I didn't find anywhere that robots.txt is placed in CloudFront.

Is it possible to update robots.txt in CloudFront? Is there any role for CloudFront here? Or do we need to update robots.txt for assets.example.com the same way it is configured for example.com?

Please help me out. I'm very confused here.
How to update/replace robots.txt file in aws cloudfront
By default, EC2 instances do not allow access to the port directly. You need to create a Custom TCP rule for port 9100 in the Inbound rules of the instance's security group if one does not exist. If you have also configured a firewall on the host, you need to allow port 9100 there too.

You can test whether remote ports are reachable or not (ref: https://stackoverflow.com/a/9463554/664229):

    nc -zvw 5 <ip> <port>
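If you prefer to add the inbound rule from code rather than the console, a minimal boto3 sketch (the security group ID and CIDR are placeholders; restrict the source range rather than opening 0.0.0.0/0):

    import boto3

    ec2 = boto3.client("ec2")

    # Open TCP 9100 (node_exporter) to a specific CIDR range.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 9100,
                "ToPort": 9100,
                "IpRanges": [
                    {"CidrIp": "203.0.113.0/24", "Description": "Prometheus scraper"}
                ],
            }
        ],
    )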
I am setting up prometheus/node_exporter on AWS EC2, with the following configuration:

    [Unit]
    Description=Node Exporter
    Wants=network-online.target
    After=network-online.target

    [Service]
    User=node_exporter
    Group=node_exporter
    Type=simple
    ExecStart=/usr/local/bin/node_exporter

    [Install]
    WantedBy=multi-user.target

I can access the metrics by using curl with localhost, something like the following:

    curl localhost:9100/metrics

I can access the metrics via the private IP address as well, for example:

    curl private_ip_address:9100/metrics

But when I try to access them via the public IP address, it's not working; I get a curl timeout:

    curl public_ip_address:9100/metrics

I tried accessing ipv4:9100 from the server itself and from my local machine; both hit the same issue. How can I make it accessible from the public IPv4 address?
Can't access node_exporter from public ip address on EC2
Log in to the Vapor dashboard and request a new certificate in the needed region; you can request more than one.
I am working with Laravel 7.x and Vapor (the latter for the first time). I have an issue where, on deployment of staging, I get the following error:

    ==> Ensuring IAM Role Exists
    ==> Ensuring Storage Exists
    ==> Ensuring Cache Table Is Configured
    ==> Ensuring Functions Exist
    ==> Updating Function Code
    ==> Updating Function Configurations
    ==> Updating Function Version
    ==> Ensuring Function Aliases Exist
    ==> Running Deployment Hooks
    ==> Ensuring Vanity Domain Certificate Exists
    ==> Ensuring Http API Is Configured
    An error occurred during deployment.
    Message: AWS: The certificate provided must be owned by the account creating the domain.

I am using the Vapor default network setup of API Gateway 2 and have registered my domain and issued the certificate for us-east-1 via the Vapor UI. I can see the certificate in AWS's Certificate Manager console for that region.

I have deleted and recreated the certificate, both via the Vapor UI & CLI and AWS, a number of times, but the error remains. Any suggestions appreciated.
Laravel Vapor deployment issue - ssl certificate ownership
Generally you would use the endpoint URL of the DB instance to connect to it. But if you want the IP, you can check it in the EC2 console -> Network Interfaces ("Network Interfaces" is at the bottom of the left-hand menu). The IP address shown there is the resolved IP address of the endpoint.

Note from the follow-up discussion: if the connection still times out, check the security group of your RDS instance and allow incoming connections from your IPs; in the asker's case, adding their laptop's IP address to the security group made the connection work.
I have made an RDS instance inside a VPC publicly accessible. Will there be a publicly accessible IP address now? Where would I find it in the AWS console?
Where to find the externally accessible IP address when accessing an RDS instance in a VPC?
If you want to trigger a Lambda after a file is uploaded to S3, you have two ways:

S3 Event notifications: this is an S3-specific feature and supports Lambda as a target, as well as SQS and SNS. You can find more info here: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

CloudTrail: CloudTrail logs pretty much all events in your account and you can react to them if you want:

1. Create a bucket.
2. Create a trail; you might want to select write-only, to reduce the amount of data that gets written.
3. Add the bucket to the trail with addS3EventSelector.
4. Add your target:

    uploadBucket.onCloudTrailWriteObject('cwEvent', {
      target: new targets.LambdaFunction()
    })

This will create a CloudWatch Event. On the first step you might need to also log it to CloudWatch Logs, I'm not sure anymore:

    const trail = new cloudtrail.Trail(this, 'CloudTrail', {
      sendToCloudWatchLogs: true,
      managementEvents: cloudtrail.ReadWriteType.WRITE_ONLY,
    });

I prefer the second version, because CloudWatch Events supports way more targets than SQS, SNS and Lambda. I used it to trigger a Step Function, for example.

Docs:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudtrail-readme.html
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-s3-readme.html#bucket-notifications
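For the first option (S3 event notifications), here is a minimal sketch in CDK for Python, assuming the CDK v1-era Python bindings that mirror the TypeScript API above; the stack name, asset path and runtime are hypothetical:

    from aws_cdk import core
    from aws_cdk import aws_lambda as _lambda
    from aws_cdk import aws_s3 as s3
    from aws_cdk import aws_s3_notifications as s3n


    class UploadStack(core.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)

            bucket = s3.Bucket(self, "UploadBucket")

            handler = _lambda.Function(
                self, "OnUpload",
                runtime=_lambda.Runtime.PYTHON_3_8,
                handler="index.handler",
                code=_lambda.Code.from_asset("lambda"),  # hypothetical asset directory
            )

            # Invoke the Lambda whenever an object is created in the bucket.
            bucket.add_event_notification(
                s3.EventType.OBJECT_CREATED,
                s3n.LambdaDestination(handler),
            )

CDK synthesizes the S3 notification configuration and the Lambda invoke permission for you, so no extra wiring is needed.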
I'm trying to build a Lambda function with an S3 trigger through a CDK deployment. Does somebody know if it is possible to add the trigger programmatically in the CDK code? I found these links:

Lookup S3 Bucket and add a trigger to invoke a lambda
With CDK, can it be triggered through a lambda to deploy the stack

but they are from a few months ago and I wanted to know if anything has changed since.
AWS CDK - add an s3 trigger to invoke a lambda
The code sadly will not work. The reason is that container commands run while your app is in the staging folder, not in the current folder:

"The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server."

You can try to use relative paths:

    container_commands:
      01addpermission:
        command: "chmod -R 755 ./storage"
      02clearcache:
        command: "php . config:cache"

The alternative is to use a postdeploy platform hook, which runs commands after your app is deployed:

"Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server."
I'm working on Elastic Beanstalk ebextensions. A storage permission-denied error occurs on every deployment and I have to type a command to resolve it. Does the code below (.ebextensions/chmod.config) prevent the error from occurring?

    container_commands:
      01addpermission:
        command: "chmod -R 755 /var/app/current/storage"
      01clearcache:
        command: "php /var/app/current config:cache"
Setting a Laravel storage directory permission via ebextensions
The options to configure this have finally been added; they are available in Beam versions after 2.26.0. The pipeline options are --s3_access_key_id and --s3_secret_access_key.

Unfortunately, the Beam 2.25.0 release and earlier don't have a good way of doing this, other than the following: in this thread a user figured out how to do it in the setup.py file that they provide to Dataflow with their pipeline.
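A minimal sketch of passing those options when constructing the pipeline, assuming Beam >= 2.26.0 with the S3 filesystem extra (apache-beam[aws]) installed; the project, bucket names and key values are placeholders:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Credentials are ordinary pipeline options, so they can also be passed on
    # the command line when launching the template.
    options = PipelineOptions([
        "--s3_access_key_id=AKIA...",          # placeholder
        "--s3_secret_access_key=...",          # placeholder
        "--runner=DataflowRunner",
        "--project=my-gcp-project",            # placeholder
        "--region=us-central1",
        "--temp_location=gs://my-bucket/tmp",  # placeholder
    ])

    with beam.Pipeline(options=options) as p:
        lines = p | "ReadFromS3" >> beam.io.ReadFromText("s3://my-aws-bucket/input/*.csv")
        _ = lines | "Log" >> beam.Map(print)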
I use the Google Cloud Dataflow implementation in Python on Google Cloud Platform. My idea is to use input from AWS S3.

Google Cloud Dataflow (which is based on Apache Beam) supports reading files from S3. However, I cannot find in the documentation the best way to pass credentials to a job. I tried adding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to environment variables within the setup.py file. It works locally, but when I package the Cloud Dataflow job as a template and trigger it to run on GCP, it sometimes works and sometimes doesn't, raising a "NoCredentialsError" exception and causing the job to fail.

Is there any coherent, best-practice solution to pass AWS credentials to a Python Google Cloud Dataflow job on GCP?
Passing AWS credentials to Google Cloud Dataflow, Python
In order for the AWS CLI to run on MobaXterm, you will need to run the following commands in MobaXterm:

    MobApt install python2-pip
    pip2 install awscli

It will take some time for MobaXterm to complete steps 1 and 2. Also, the AWS CLI runs very slowly in MobaXterm; you are better off using cmd.

This is the site that helped me run the AWS CLI on MobaXterm: https://majornetwork.net/2017/07/installing-aws-cli-on-cygwin/
I am a newbie to both AWS and MobaXterm. I am trying to use MobaXterm to manage AWS instances because it comes with bash. I am following the commands as per https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html.

When I run the command sudo ./aws/install, I get the following error:

    Unable to start 'install': There is no application associated with the given file name extension.

I did run chmod 777 to ensure that I am able to read/write/execute (please see the attached image).

I do know that I can use the Windows CLI installer on the command line. However, doing SSH to EC2 is a nightmare on Windows with all the certificates. With MobaXterm (because of bash) it is very easy, so my preference is to use MobaXterm instead of the Windows command prompt. Moreover, I don't want to install Ubuntu directly. Hence, I am looking for some guidance here and hoping that I am not missing any package. Thanks for any help.
Using AWS CLI with MobaXterm on Windows
This is generally to distribute your ECS service across multiple Availability Zones, allowing your service to maintain high availability. A subnet is bound to a single AZ, so it is assumed each subnet is in a different AZ. By splitting across multiple subnets, during an outage the load can be shifted so containers launch entirely in the other subnets (assuming they're in different AZs). This is generally encouraged for all services that support multiple Availability Zones.

More information on Amazon ECS availability best practices is available in the AWS blog.

From the follow-up comments: yes, if something happens within AZ1, the other AZ is there to guarantee access to the ECS service. If you're using Fargate this happens automatically; if you're EC2-based and not using dynamic port ranges, make sure your Auto Scaling groups are spread across the AZs too.
Why should I configure an AWS ECS service or an EC2 instance with two or more private subnets from the same VPC? What would be the benefits of doing that instead of configuring it within just one subnet? Is it because of availability? I've read the documentation, but it was not clear about this.

Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html
Why should I configure an AWS ECS Service with two or more Subnets?
You're correct: this limit counts the individual files that you're uploading. Remember that aws s3 sync also performs a ListObjects call when it executes, so this counts too (although it is a paginated request that returns 1000 objects at a time, so it typically counts as one request).

The sync command should only copy new and modified files, so only those requests should count against the free tier limit.
I have a static web site hosted on S3 in the free tier. This tier gives me "2,000 Put, Copy, Post or List Requests of Amazon S3", which I am regularly exceeding.

Given that my web site has 92 files in it when rendered using Next.js, and I keep a test and a prod version of the web site, does this mean that every time I deploy a new version it counts as 184 updates to S3?

Extra info: I do a very simple deployment: build on a local Jenkins, save a tar file to an artifacts S3 bucket, untar it locally, then use an "aws s3 sync" command to copy to my bucket.
Cost of updating Web Site hosted on AWS S3
To do this you need to apply filter patterns across the entire log group, which will query all log streams. If you're looking for a specific error phrase, you can wrap it in double quotes, such as "ERROR".

From the console:

1. Go to the CloudWatch service screen.
2. Click Log groups.
3. Click on your log group.
4. Click "Search All".
5. Enter your pattern in the "Filter events" text box.

From the CLI, use the filter-log-events command. An example is below:

    aws logs filter-log-events --start-time 1593967410000 --end-time 15945722100000 --log-group-name /aws/lambda/function-name --filter-pattern ERROR --output text

For examples of how to use more complex filter patterns, take a look at the Filter and Pattern Syntax page.

Note from the comments: this returns every event containing the word ERROR, including handled errors from otherwise successful invocations, so you may need a pattern that matches your logs more specifically.
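If you would rather do the same thing from Python, a rough boto3 equivalent (the log group name and time range are placeholders):

    import time
    import boto3

    logs = boto3.client("logs")

    now_ms = int(time.time() * 1000)

    # Search all log streams in the group for events containing "ERROR"
    # over the last 24 hours.
    paginator = logs.get_paginator("filter_log_events")
    pages = paginator.paginate(
        logGroupName="/aws/lambda/function-name",  # placeholder
        filterPattern="ERROR",
        startTime=now_ms - 24 * 60 * 60 * 1000,
        endTime=now_ms,
    )

    for page in pages:
        for event in page["events"]:
            # logStreamName tells you which invocation's stream to open.
            print(event["logStreamName"], event["message"].strip())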
I have a Lambda function that has a success rate of over 99% (which is nice). Occasionally there is an invocation which results in an error, and I would like to view the log of that one invocation. How can I find the CloudWatch log stream which contains the failed invocation?
How can I filter CloudWatch logstreams to the logstreams of failed invocations?
A global secondary index has separate capacity units, to avoid any impact on the performance of the table itself.

"A global secondary index is stored in its own partition space away from the base table and scales separately from the base table."

Performance on your table can only be impacted if the table's own capacity is depleted; the global secondary index sits in its own partition space, which can be treated as having its own boundaries. In addition, as DynamoDB uses separate read (RCU) and write (WCU) capacity units, these two actions would never have a performance impact on each other.
I have a DynamoDB table with a good partition key (PK=uuid), but there is a GSI (PK=type, SK=created) where type has only 6 unique values and created is epoch time.

My question: if I start to do a lot of reads with this GSI, will that affect the performance of the whole table? I see that the read capacity for the table and the GSI is not shared, according to the AWS docs, but what happens behind the scenes if we start to use this GSI a lot? Will DynamoDB writes be impacted?
Can DynamoDB reads on GSI with badly chosen partition key affect the read/write for the table
put_object returns an S3.Object, which in turn has the wait_until_exists method. Therefore, something along these lines should be sufficient (my verification code is below):

    import boto3

    s3 = boto3.resource('s3')

    with open('test.img', 'rb') as f:
        obj = s3.Bucket('test-ssss4444').put_object(
            Key='fileName',
            Body=f)
        obj.wait_until_exists()  # optional
        print("Uploaded to S3 successfully")

put_object is a blocking operation, so it will block your program until your file is uploaded; wait_until_exists is therefore not really needed. But if you want to make sure that the upload actually went through and the object is in S3, you can use it.
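A follow-up comment asked how to see the HTTP response fields such as ETag and the status code. The resource-level Bucket.put_object returns an S3.Object rather than the raw response; if you need those fields, one option (a sketch using the same hypothetical bucket as above) is the low-level client call, which returns the parsed response as a dict:

    import boto3

    client = boto3.client('s3')

    with open('test.img', 'rb') as f:
        response = client.put_object(
            Bucket='test-ssss4444',   # same hypothetical bucket as above
            Key='fileName',
            Body=f,
        )

    # The low-level client exposes the parsed HTTP response.
    print(response['ETag'])
    print(response['ResponseMetadata']['HTTPStatusCode'])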
Looking at the boto3 docs, I see that client.put_object has a response shown, but I don't see a way to get the response from bucket.put_object. Sample snippet:

    s3 = boto3.resource(
        's3',
        aws_access_key_id=redacted,
        aws_secret_access_key=redacted,
    )

    s3.Bucket(bucketName).put_object(Key="bucket-path/" + fileName, Body=blob, ContentMD5=md5Checksum)

    logging.info("Uploaded to S3 successfully")

How is this accomplished?
How to access response from boto3 bucket.put_object?
Please ensure the STS endpoint status for your region is Active. You can check it in IAM > Account settings > Security Token Service (STS).

If that still does not work, you may try changing the service principal in your trust policy from "amplify.amazonaws.com" to "amplify.<your-region>.amazonaws.com", which worked for me. Amplify endpoints: https://docs.aws.amazon.com/general/latest/gr/amplify.html
I was trying to deploy my application with AWS Amplify using GitHub and I got this error:

    2020-07-03T10:39:32.225Z [ERROR]: !!! Unable to assume specified IAM Role. Please ensure the selected IAM Role has sufficient permissions and the Trust Relationship is configured correctly.
    2020-07-03T10:39:32.348Z [INFO]: # Starting environment caching...
    2020-07-03T10:39:32.348Z [INFO]: # Environment caching completed
    Terminating logging...

And this is the trust policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "amplify.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
Unable to assume specified IAM Role when deploying with AWS Amplify using GitHub
OK, it was done with:

    aws cognito-idp admin-update-user-attributes --user-pool-id us-east-2_XXXX --username [email protected] --user-attributes Name="email_verified",Value="false"
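For reference, a rough boto3 equivalent of the same call (the pool ID and username are placeholders):

    import boto3

    cognito = boto3.client("cognito-idp", region_name="us-east-2")

    # Flip the email_verified attribute for a user, same as the CLI call above.
    cognito.admin_update_user_attributes(
        UserPoolId="us-east-2_XXXX",     # placeholder pool ID
        Username="user@example.com",     # placeholder username
        UserAttributes=[
            {"Name": "email_verified", "Value": "false"},
        ],
    )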
Is there a way to change the Account status of a user by CLI command? I know I can resend an email verification with:

    aws cognito-idp resend-confirmation-code --client-id 54675464564564 --username [email protected]

Is there any similar command for what I need?
AWS Cognito, change Account status by CLI
The more appropriate way would be to use an AWS Glue Python shell job, as it falls under the serverless umbrella and you'll be charged as you go. This way you will only be charged for the time your code runs, and you don't need to manage an EC2 instance for it. It is like an extended Lambda.
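As a rough sketch of the bucket-to-bucket copy logic such a job could run (bucket names and prefix are placeholders), using boto3:

    import boto3

    s3 = boto3.resource("s3")

    SOURCE_BUCKET = "source-bucket-name"       # placeholder
    DEST_BUCKET = "destination-bucket-name"    # placeholder
    PREFIX = "exports/"                        # placeholder

    dest = s3.Bucket(DEST_BUCKET)

    # Copy every object under the prefix from the source to the destination bucket.
    for obj in s3.Bucket(SOURCE_BUCKET).objects.filter(Prefix=PREFIX):
        dest.copy({"Bucket": SOURCE_BUCKET, "Key": obj.key}, obj.key)

The job can then be scheduled for Sundays with a Glue trigger or a CloudWatch Events/EventBridge cron rule.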
I have a Python script which copies files from one S3 bucket to another S3 bucket. This script needs to run every Sunday at a specific time. I was reading some articles and answers, so I tried to use AWS Lambda + CloudWatch Events. However, the script runs for at least 30 minutes, so would Lambda still be a good fit given that Lambda can run for a maximum of only 15 minutes? Is there any other way? I could create an EC2 box and run it as a cron job, but that would be expensive. Is there another standard way?
Python Script as a Cron on AWS S3 buckets
A very likely reason why your Lambda in a VPC times out is that it has no internet access, since it does not have a public IP. From the docs:

"Connect your function to private subnets to access private resources. If your function needs internet access, use NAT. Connecting a function to a public subnet does not give it internet access or a public IP address."

To rectify the issue, the following should be checked:

- Is the Lambda in a private subnet?
- Is there a NAT gateway/instance in a public subnet?
- Are the route tables correctly configured from the private subnet to the NAT device to enable internet access?

Alternatively, you can consider using (or check whether one already exists) a VPC interface endpoint for CodePipeline. The interface endpoint, if correctly set up, enables access to CodePipeline from the Lambda function without internet access. (The asker confirmed that creating an interface endpoint for CodePipeline in the VPC resolved the issue.)
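If you go the interface-endpoint route and want to create it from code rather than the console, a minimal boto3 sketch (the VPC, subnet and security group IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create an interface endpoint so Lambdas in private subnets can reach
    # CodePipeline without internet access.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",                      # placeholder
        ServiceName="com.amazonaws.us-east-1.codepipeline",
        SubnetIds=["subnet-0123456789abcdef0"],             # placeholder
        SecurityGroupIds=["sg-0123456789abcdef0"],          # placeholder
        PrivateDnsEnabled=True,
    )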
I'm trying to create a Lambda function that works with CodePipeline. The issue is that it can't send the job success info to CodePipeline. I'm using the JavaScript aws-sdk, and the putJobSuccessResult function of the AWS.CodePipeline object doesn't execute fine in production.

    const AWS = require('aws-sdk');
    const codepipeline = new AWS.CodePipeline();

    exports.config = (event, context) => {
      // Retrieve the Job ID from the Lambda action
      const jobId = event['CodePipeline.job'].id;
      return codepipeline.putJobSuccessResult({ jobId }).promise();
    };

This code works fine locally when I put in the jobId of my pipeline, but when I upload the code in the AWS console and run the pipeline, it doesn't work anymore.

Here is the IAM configuration for the Lambda, specifically the CodePipeline part:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "codepipeline:PutJobSuccessResult",
            "codepipeline:PutJobFailureResult"
          ],
          "Resource": "*",
          "Effect": "Allow"
        }
      ]
    }

Do you have any ideas about why it doesn't work in the cloud?
Lambda can't execute putJobSuccessResult for CodePipeline inside a VPC
Unfortunately, this is not possible. If you want users to choose between a few such combinations of values, you have to define your parameter as follows (an example):

    pTypeOfAccountNeeded:
      Default: "Tools,Dev"
      Type: CommaDelimitedList
      AllowedValues:
        - "Tools,Sandbox"
        - "Sandbox"
        - "Tools,Dev"
CloudFormation experts, is it possible to select multiple values from a drop-down list in a CloudFormation template? I tried something like this but it didn't work:

    pTypeOfAccountNeeded:
      Default: "Tools, Sandbox, Dev, Test (QA), Preprod, Prod"
      AllowedValues:
        - "Tools"
        - "Sandbox"
        - "Dev"
      Type: CommaDelimitedList
Selecting multiple values from DropDownList in CloudFormation Template
Figured out a decent way to go about this:

    import datetime
    import boto3

    s3 = boto3.resource('s3')

    for i in range(5):
        date = datetime.datetime(2019, 4, 29)
        date += datetime.timedelta(days=i)
        date = date.strftime("%Y-%m-%d")
        print(date)

        old_date = 'file_path/FLORIDA/DATE={}/part-00000-1691d1c6-2c49-4cbe-b454-d0165a0d7bde.c000.csv'.format(date)
        print(old_date)

        date = date.replace('-', '')
        new_date = 'file_path/FLORIDA/allocation_FLORIDA_{}.csv'.format(date)
        print(new_date)

        s3.Object('my_bucket', new_date).copy_from(CopySource='my_bucket/' + old_date)
        s3.Object('my_bucket', old_date).delete()
I saved out a pyspark dataframe to S3 with the following command:

    df.coalesce(1).write.partitionBy('DATE'
        ).format("com.databricks.spark.csv"
        ).mode('overwrite'
        ).option("header", "true"
        ).save(output_path)

Which gives me:

    file_path/FLORIDA/DATE=2019-04-29/part-00000-1691d1c6-2c49-4cbe-b454-d0165a0d7bde.c000.csv
    file_path/FLORIDA/DATE=2019-04-30/part-00000-1691d1c6-2c49-4cbe-b454-d0165a0d7bde.c000.csv
    file_path/FLORIDA/DATE=2019-05-01/part-00000-1691d1c6-2c49-4cbe-b454-d0165a0d7bde.c000.csv
    file_path/FLORIDA/DATE=2019-05-02/part-00000-1691d1c6-2c49-4cbe-b454-d0165a0d7bde.c000.csv

Is there an easy way to reformat these paths in S3 to follow this structure?

    file_path/FLORIDA/allocation_FLORIDA_20190429.csv
    file_path/FLORIDA/allocation_FLORIDA_20190430.csv
    file_path/FLORIDA/allocation_FLORIDA_20190501.csv
    file_path/FLORIDA/allocation_FLORIDA_20190502.csv

I have a few thousand of these, so if there is a programmatic way to do this, that would be much appreciated!
Rename Pyspark output files in s3
I do it by following these steps.

Accessing MongoDB using RockMongo: the MEAN instance includes RockMongo, a web-based GUI for MongoDB. However, by default it can only be accessed via connections from localhost or hosts with the IP address 127.0.0.1. Because your web browser is running on your local machine, you'll need to establish an SSH tunnel between your local machine and the Lightsail instance.

Note: step 1 below is for Mac and/or Linux users; if you're on Windows using PuTTY, please see the instructions on the Bitnami page. Once you've configured PuTTY, pick up the instructions below at step 2.

1. In your terminal, open a second window, make sure you're in the directory with your default.pem file, and create the SSH tunnel by entering the following command:

    ssh -N -L 8888:127.0.0.1:8080 -i default.pem bitnami@<instance_ip>

This command instructs your system to tunnel any requests to http://127.0.0.1:8888/ to port 8080 on your Lightsail instance. Be sure to substitute your Lightsail instance's IP address where it says <instance_ip>. There is no output from this command; the cursor will just appear below the command line and sit there.

2. In your web browser, navigate to http://127.0.0.1:8888/rockmongo/

3. Log in using the same credentials you used to access the MongoDB CLI previously. You should be presented with the RockMongo web UI.

Credit: https://github.com/mikegcoleman/todo
I created a Lightsail instance for a MEAN stack web app. In the tutorial I watched, they use RockMongo as a GUI. I created a MongoDB database and a user to access the database, but I can only use it via the SSH command line. Is there any process to connect the Lightsail MongoDB to MongoDB Compass?
Is there a process to connect AWS Lightsail MongoDB with MongoDB Compass?
This is normally done when setting up the bucket in website mode. From the docs: "To make your bucket publicly readable, you must disable block public access settings for the bucket."

What others can do depends on the bucket policy. For a website, it should only allow read-only access:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicRead",
          "Effect": "Allow",
          "Principal": "*",
          "Action": ["s3:GetObject"],
          "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
      ]
    }

As you can see, the policy allows only s3:GetObject. s3:PutObject, which would allow writing to the bucket, is not included.

From the follow-up comments: everything in AWS is private by default, so if the bucket policy is empty, nothing is allowed and your content can't be accessed anonymously; you have to explicitly allow read-only access for others to be able to view it.
I want to share some public assets (images and fonts) with anyone who has the URL (read only). I uploaded the files to a new S3 bucket.

In Permissions > Block public access I set everything to Off. In Properties > Static website hosting I configured it as "Use this bucket to host a website".

Is it risky? Can anybody write to my bucket because I set Block public access to Off?
AWS S3 : is it risky to set "Block all public access" to off?
Take a look at using Step Functions to orchestrate the entire workflow for you. Have the CloudWatch event trigger a Step Function that would do the following:

1. Preprocess the data.
2. Create predictions (if it's a batch process, why not use batch transform instead?).
3. Use a retry loop to check whether inference has completed.
4. Once it has been inferred, run the post-processing of the data and copy the results to S3.

From the follow-up comments: this is not only easier to implement than the proposed design, but also more resilient, easier to debug and a better architecture. For the preprocessing step, you would either invoke a Lambda function that runs the preprocessing, run an ECS task (which could be Fargate), or run the task on EC2; Step Functions can trigger any of these.
I am trying to understand how to implement, inside AWS SageMaker, a machine learning algorithm where the preprocessing and postprocessing are heavy tasks. The main idea is to get data from S3; each time the data changes in S3, CloudWatch triggers a Lambda function to invoke a SageMaker endpoint. The problem is that, once the algorithm is trained, before predicting on the new data I need to preprocess the data (custom NLP preprocessing). Once the algorithm has made the prediction, I need to take this prediction, post-process it and then send the post-processed data to S3. The idea I have in mind is to create a Docker image:

    ├── text_classification/ - ml scripts
    |   ├── app.py
    |   ├── config.py
    |   ├── data.py
    |   ├── models.py
    |   ├── predict.py - pre-processing data and post-processing data
    |   ├── train.py
    |   ├── utils.py

So I would do the pre-processing and the post-processing inside "predict.py". When I invoke the endpoint for prediction, that script will run. Is this correct?
AWS SageMaker data preparation
As per your comment, you would want to use AWS PrivateLink to accomplish this. By doing this, the only resource shared between the accounts is an endpoint used to access the service. This would be accessible over VPN too, which should grant you access from your on-premises network.

From the follow-up comments: when you create the endpoint you have the ability to create a private DNS name, and by accessing this name your users can reach the dashboard. A small caveat is that the application will need to sit behind a Network Load Balancer.
Background: My company has bought a SaaS product which is hosted in the vendor's AWS environment. The product has a website dashboard which is currently only accessible within the vendor's AWS environment, and access to that environment is tightly controlled by the vendor.

Right now my users are able to access that dashboard by using Amazon WorkSpaces provisioned by the vendor. However, the WorkSpaces have a limited number of accounts for my company. My company would like to make this dashboard widely accessible within the company and not restricted by the number of WorkSpaces accounts.

Question: My company has our own AWS account as well, but it is currently not linked to the vendor's VPC/AWS environment at all. Can we build something in our own AWS account (probably with a PrivateLink to the vendor's VPC?) such that my users can securely access the vendor's dashboard via our own AWS environment? If there is a possible way to do this, what are the AWS services required on both sides?

My objective is to ensure this dashboard is not exposed to the Internet and yet all my company's users can view it without having WorkSpaces credentials.
AWS - Accessing private website hosted on an external AWS account
When your UDF is called, it receives the entire context, and this context needs to be serializable. The boto client is NOT serializable, so you need to create it within your UDF call.

If you are using an object's method as the UDF, such as below, you will get the same error. To fix it, add a property for the client:

    class Foo:
        def __init__(self):
            # this will generate an error when the udf is called
            self.client = boto3.client('comprehend', region_name='us-east-1')

        # do this instead
        @property
        def client(self):
            return boto3.client('comprehend', region_name='us-east-1')

        def my_udf(self, text):
            response = self.client.detect_sentiment(Text=text, LanguageCode='pt')
            return response["SentimentScore"]["Positive"]

        def add_sentiment_column(self, df):
            detect_sentiment_udf = sqlf.udf(self.my_udf)
            return df.withColumn("Positive", detect_sentiment_udf(df.Conversa))

@johnhill2424's answer will fix the problem in your case:

    import pyspark.sql.functions as sqlf
    import boto3

    def detect_sentiment(text):
        comprehend = boto3.client('comprehend', region_name='us-east-1')
        response = comprehend.detect_sentiment(Text=text, LanguageCode='pt')
        return response["SentimentScore"]["Positive"]

    detect_sentiment_udf = sqlf.udf(detect_sentiment)

    test = df.withColumn("Positive", detect_sentiment_udf(df.Conversa))
When applying a Pyspark UDF that calls an AWS API, I get the error:

    PicklingError: Could not serialize object: TypeError: can't pickle SSLContext objects

The code is:

    import pyspark.sql.functions as sqlf
    import boto3

    comprehend = boto3.client('comprehend', region_name='us-east-1')

    def detect_sentiment(text):
        response = comprehend.detect_sentiment(Text=text, LanguageCode='pt')
        return response["SentimentScore"]["Positive"]

    detect_sentiment_udf = sqlf.udf(detect_sentiment)

    test = df.withColumn("Positive", detect_sentiment_udf(df.Conversa))

where df.Conversa contains short simple strings. Please, how can I solve this? Or what could be an alternative approach?
AWS Comprehend + Pyspark UDF = Error: can't pickle SSLContext objects
I found the answer at this link: https://forums.developer.amazon.com/questions/210684/dynamic-entities-in-python.html. A short example in Python:

    # Imports added for completeness -- these module paths assume the standard
    # ask-sdk-model package layout; verify them against your SDK version.
    from ask_sdk_model.dialog import DynamicEntitiesDirective
    from ask_sdk_model.er.dynamic import (
        Entity,
        EntityListItem,
        EntityValueAndSynonyms,
        UpdateBehavior,
    )

    entity_synonyms_1 = EntityValueAndSynonyms(value="entity 1", synonyms=["synonym 1", "synonym 2"])
    entity_1 = Entity(id="entity_id_1", name=entity_synonyms_1)

    entity_synonyms_2 = EntityValueAndSynonyms(value="entity 2", synonyms=["synonym 3", "synonym 4"])
    entity_2 = Entity(id="entity_id_2", name=entity_synonyms_2)

    replace_entity_directive = DynamicEntitiesDirective(
        update_behavior=UpdateBehavior.REPLACE,
        types=[EntityListItem(name="custom_type", values=[entity_1, entity_2])]
    )

    response_builder.add_directive(replace_entity_directive)
I am developing an Alexa skill in Python and I need to dynamically add entities to update a slot type, so the user can choose an option.

The page "Use Dynamic Entities for Customized Interactions" has documentation and examples for Node.js and Java, but no examples for Python. Looking at the documentation for the Python SDK, it was not clear to me how to do the same in Python.

I created a slot type called test and tried the following code:

    test_directive = {
        "object_type": "Dialog.UpdateDynamicEntities",
        "update_behavior": "REPLACE",
        "types": [
            {
                "name": "test",
                "values": [
                    {"id": "round-rock",
                     "name": {"value": "Round Rock Express",
                              "synonyms": ["Round Rock", "Express"]}},
                    {"id": "corpus-christi",
                     "name": {"value": "Corpus Christi Hooks",
                              "synonyms": ["Corpus Christi", "Hooks", "Corpus"]}}
                ]
            }
        ]
    }

    response_builder.add_directive(test_directive)

But I get the error:

    'dict' object has no attribute 'object_type'

What is the correct way to add dynamic entities in Python?
How to add Dynamic Entities in Alexa with Python?
Here is the appropriate answer. If you scroll down the page in the Terraform docs, it gives a list of attributes (which are exportable): https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster. You will notice the vpc_config block has an attribute cluster_security_group_id:

"vpc_config Attributes: cluster_security_group_id - Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication."

To actually gain access to this property, given that vpc_config is a list, you will need to access it as:

    aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id

If you do not specify a cluster security group, AWS will autogenerate one which contains the rules that allow the cluster and the cluster node group to communicate. Consequently, it is a common pattern to export this property like so:

    output "cluster_security_group_id" {
      value = aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id
    }
I am deploying an AWS EKS cluster using a Terraform script. Everything deploys fine, but I am stuck on an issue with the security group. I have added two ports to allow ingress traffic to my application URL.

The issue is that, after the complete deployment of the EKS cluster, there are two security groups: one which I created and another created by EKS itself. So I have to manually add the ports to the EKS-created security group to access my application's URL in the browser. How can I add my specific ports to the EKS-created security group?
Terraform AWS EKS security group issue
The issue is not the Lambda execution policy; rather, you (your IAM user) do not have permission to perform iam:AttachRolePolicy. The reason is that Lambda will add the following service-role policy to your function's execution role, regardless of the fact that you already have AmazonSNSFullAccess there:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sns:Publish",
          "Resource": "arn:aws:sns:region:xxxx:testTopic"
        }
      ]
    }

You have to add the missing permissions to the IAM user you use when logging in to the console.
I want to send a message from a Lambda function to SNS. When I try to add the destination "SNS", this error appears. What IAM policies am I missing? I have already added the AWSLambdaFullAccess and AmazonSNSFullAccess IAM policies.
Your function's execution role doesn't have permission to send result to the destination
You can run a serverless GraphQL API on Google Cloud Functions or Firebase, which is the closest thing to AWS AppSync available today on GCP.

From the follow-up comments: one drawback to note when using Cloud Functions is that GraphQL subscriptions cannot be supported due to the nature of Cloud Functions (a 2018 note which still seems to be true), though it seems you can use Google Pub/Sub to alleviate that: https://github.com/axelspringer/graphql-google-pubsub
I'm using AWS AppSync (GraphQL) for an API that is connected to Lambda and S3. Now we are planning to migrate this to Google Cloud Platform. Could someone help me understand whether there are any services/options available in Google Cloud Platform that provide something similar to AWS AppSync? Thanks.
AWS AppSync Alternative in GCP
No. To fully quote the documentation, with emphasis added: "Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons."

The goal is to allow forensics based on CloudTrail logs: if you see something assuming a role, you want to know who or what it is. As a practical example, whenever you assume a role from the AWS Console, it uses your username as the session name. If you're assuming a role as part of an application, I think it makes sense to use the application name (but beware that the session name has a 64-character limit!).
The AWS documentation says that the role session name is used to uniquely identify a session. So what happens if I have two instances of my application running which assume the role with the same session name?
Does Role Session Name in AWS assume role have to be unique?
No, it is not possible. SQS, database or Redis are just used for keeping the serialized (encoded, etc.) version of your Laravel jobs. Here is the closest you may get:

1. Forget about the sqs queue driver.
2. Implement your job in an AWS Lambda function.
3. Allow the Lambda to consume your SQS queue (policies, triggers, etc. are listed in the documentation).
4. Make a request from your Laravel app via the AWS PHP SDK or an HTTP request (Guzzle, curl) to your SQS queue and let the Lambda consume it.
5. You may use some async driver to trigger your SQS requests asynchronously.

If you want to use an SQS delay queue, the maximum delay is 15 minutes; see the documentation for details.
I wonder whether there is a way to use only a Lambda function when I dispatch a job in Laravel. What I'm using is below:

- Laravel 5.8 (PHP 7.2)
- AWS SQS
- Supervisord

In Laravel, I dispatch a job with the SQS connection, and the job lives in the Laravel project. I searched for how I can use SQS as a trigger for a Lambda function and found this document: "Using AWS Lambda with Amazon SQS". If I follow this document, I think I can run the job in Lambda, but the job will also run again in the Laravel project. I want to use only Lambda as the job worker. How can I run only the Lambda function as the job?
Using AWS SQS and Lambda for queue job in Laravel
After investigating more: uuid as a format isn't explicitly defined in the OpenAPI specification, so format validation is not always implemented consistently across systems, and the AWS validator's implementation is a little bit funky here.

The cleanest solution I have thought of is using a regex like this:

    {
      "$schema" : "http://json-schema.org/draft-04/schema#",
      "title" : "newUser",
      "type" : "object",
      "properties" : {
        "userId" : {
          "type" : "string",
          "format" : "uuid",
          "pattern": "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
        }
      }
    }
I am trying to validate requests coming into API Gateway by using the request validator to validate the body of the request. The expected JSON body has just one key, "userId", and the value should be a UUID. I have set up my model like this:

    {
      "$schema" : "http://json-schema.org/draft-04/schema#",
      "title" : "newUser",
      "type" : "object",
      "properties" : {
        "userId" : {
          "type" : "string",
          "format" : "uuid"
        }
      }
    }

After a few tests it seems to be working: it accepts a valid UUID, and all of these correctly return a bad request:

    { "userId": null }
    { "userId": "text" }
    { "userId": 123 }
    { "userId": "8327a29c-7134-4566-8b58-" }
    { "userId": "8327a29c-7134-4566-8b58-46bcf951ef6az" }

However, if you remove a few characters, or add a couple of valid hex characters to make it an invalid length, then it passes validation and the request is forwarded on. What is the correct way of validating UUIDs using the request validator in API Gateway that actually works?
AWS API Gateway UUID request validation
If you do a web search for "dask snakemake" you'll find a GitHub issue from 2017 that you might want to read through. It's certainly possible, but someone would need to write the integration. You may also want to try Dask's integration with Airflow or, perhaps a bit more modern, the Prefect library.
I have a Snakemake workflow that I've been using to train deep learning TensorFlow models. At a high level there are a few long-running jobs (model training) that can be run in parallel. I would like to run these in the cloud, and dask-cloudprovider seems like a promising option since I can leverage GPUs easily on ECS. To do this, though, would I have to rewrite my workflow using the Dask functions (maybe dask.delayed)? Or is there some way to get Snakemake to use Dask?
Would it make sense to use Snakemake and Dask together?
The cluster type to use is "Networking only": with this option, you can launch a cluster with a new VPC to use for Fargate tasks.

A CI/CD pipeline for automated deployment of your Docker images to your ECS service can be constructed using CodePipeline. A good starting point is the following tutorial: "Tutorial: Continuous Deployment with CodePipeline" for ECS.
I want to use AWS Fargate as a data preprocessor in an ML pipeline. I deployed a Docker image containing my Python script to AWS ECR, and I also created a task with this image. My questions are:

- Which cluster should I use? I don't understand the concept of a cluster very well.
- How do I deploy it in the pipeline? (Ideally it would be triggered by an S3 event and executed like docker run.)

Thank you for your answers.
How to use AWS-Fargate as a python script
You need a serverless style of application hosting, e.g., as suggested by a commenter, API Gateway backed by Lambda. If your request count is low, you may actually not pay much, thanks to the free tier for these services. There is an R runtime for Lambda here:

[1] Serverless execution of R code on AWS Lambda - https://github.com/bakdata/aws-lambda-r-runtime
I am running an R Shiny app on Fargate ECS. It is used roughly once per week by the customer, but it runs constantly, and therefore we are paying for a substantial amount of idle time. Is there a way it could be launched when there is an incoming connection and then stopped when this connection ends? Does anyone have any suggestions for this? Many thanks.
How can I reduce costs of ECS Fargate being used to run an R ShinyApp
There are several ways to accomplish what you are trying to do, but without understanding the motivation fully it is hard to say which is the best solution.

SSH tunneling is the de facto standard for accessing a resource in a private subnet behind a public bastion host. I will agree that SSH tunnels are not very convenient; fortunately, some IDEs and many apps make this as easy as the click of a button once configured. Alternatively, you can set up a client-to-site VPN to your EC2 environment, which would also provide access to the private subnet.

I would caution that anything you do which proxies or exposes the DB cluster to the outside world in a naked way, such as using iptables, Nginx, etc., should be avoided. If that is your goal, then the correct solution is simply to make the DB instance publicly exposed. But be aware that any of these solutions which do not make use of tunneling in (such as a VPN or an SSH tunnel) would be an audit finding and would open your database to various attack vectors. To mitigate that, it is recommended that your security groups restrict port 3306 to the public IPs of your corporate network.
I have a database set up (using RDS) in a private subnet, and a bastion is set up in front of it in a public subnet. The traditional way to access this database from local laptops is to set up an SSH tunnel through that bastion/jumpbox and map the database port locally. But this is not convenient for development because we need to set up that tunnel every time before we want to connect. I am looking for a way to access this database without setting up an SSH tunnel first. I have seen a case where a local laptop directly uses the bastion's IP and its port 3306 to connect to the database behind it; I have no idea how that is done. By the way, in the case I saw, they don't use port forwarding, because I didn't find any special rules in the bastion's iptables.
Direct access a database in a private subnet without SSH tunnel
I know this is an old thread, but I've recently had this problem and had to figure it out. When you have a user in your user pool that does not have email_verified set to true, the Auth.signIn method will only work with that user's username. Not only that, but the Auth.forgotPassword method won't send the code to the user's e-mail. There are possibly more interactions like these. Make sure you confirm your user's email, that's all!

If you are using a backend to create your users, you can simply do this by:

    const signUpParams = {
      UserPoolId: "YOUR_USER_POOL_ID",
      TemporaryPassword: "TEMP_PW",
      Username: "username",
      UserAttributes: [
        {
          Name: 'email',
          Value: "USER_EMAIL",
        },
        {
          Name: 'email_verified',
          Value: 'true',
        },
      ],
    };

    const { User } = await cognitoService.adminCreateUser(signUpParams).promise();
During AWS Cognito user pool setup there are two options for how to let users sign in. One of them is "Username - Users can use a username and optionally multiple alternatives to sign up and sign in", with the sub-option "Also allow sign in with verified email address".

I have selected the above, and while testing it using aws-amplify I always get:

    __type: "NotAuthorizedException"
    message: "Incorrect username or password."

if I try to sign in with the email, but it works fine when I sign in with the username. Here is my sign-in method:

    Auth.signIn(email, password)
      .then(r => {
        setLoggedIn(true);
      })
      .catch(r => {
        setLoggedIn(false);
      })
    };
AWS-Cognito: How to sign in user with email or username
S3 offers eventual consistency for DELETEs. From the S3 data consistency model:

"A process deletes an existing object and immediately tries to read it. Until the deletion is fully propagated, Amazon S3 might return the deleted data."

Here, where the deletion and the download of the same object happen concurrently, even if the deletion of the object succeeds before the download is complete, the process will still be able to download the data.
I can't seem to find how AWS S3 handles the case where someone deletes a file while another person is downloading it. Does it behave like a Unix system, where the descriptor is open and the file is downloaded without problems, or does it behave in some other way? Thanks for the help!
AWS S3 Deleting file while someone is downloading that object
It was safe. I used the following commands (note the -n is needed), and PyTorch, used with Python 3 and CUDA, continued to work:

    conda env remove -n tensorflow2_p27
    conda env remove -n tensorflow_p27
    conda env remove -n mxnet_p27
    conda env remove -n mxnet_p36
    conda env remove -n chainer_p27
    conda env remove -n chainer_p36

After each step it asked if it was okay to delete a bunch of packages. That freed up about 10 GB, which gave me the buffer I needed. (So I didn't experiment with deleting any of the others, but I'm fairly sure all the aws_neuron ones could have gone too.)
I started up "Deep Learning AMI (Ubuntu 18.04) Version 27.0"; it comes with a 90GB disk, which seemed plenty, but over 60GB of that was already used. I only need python3, pytorch, cuda.I found 30GB was inside~/anaconda3/envs:3.9G aws_neuron_mxnet_p36 2.2G aws_neuron_pytorch_p36 1.9G aws_neuron_tensorflow_p36 2.5G chainer_p27 1.2G chainer_p36 2.1G mxnet_p27 2.1G mxnet_p36 729M python2 866M python3 2.5G pytorch_p27 2.6G pytorch_p36 2.2G tensorflow2_p27 2.1G tensorflow2_p36 2.3G tensorflow_p27 1.8G tensorflow_p36 31G totalIs it safe to justrm, say, thetensorflowandmxnetdirectories?conda env listgives the same list. Is it better to do e.g.conda env remove tensorflow2_p27.Are there likely to be any side effects of removing those packages? Is there a way to make sure nothing else depends on them before removing them?
Safely delete conda files from AWS deep learning AMI
Not sure if this has been answered yet, but go to "Edit job" and remove your required connection if you have one. When you have a required connection, it keeps the job inside your VPC and does not allow external connections.
I am trying to access an external API from an AWS Glue script.
import requests
r = requests.get("https://api.github.com/users/hadley/orgs")
I'm getting a connection error stating:
ConnectionError: HTTPSConnectionPool(host='api.github.com/users/hadley/orgs', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb8ff471400>: Failed to establish a new connection: [Errno 101] Network is unreachable',))
Can anyone help? Thanks in advance!
Connection Error while calling external api from AWS Glue
I created a policy in account A for SQS and added the ARN of the queue in account B as the resource: arn:aws:sqs:Region:AccountID_B:QueueName. Then I attached that policy to a role, and the same role is attached to the EC2 instance of account A. Right-click on the queue in account B, then click on "Add a Permission". A popup will appear asking for a principal and actions. The principal is the AWS account ID that can access this queue (here we specify the account A account ID), and the actions are the set of permissions (the API-level access that is required) for that queue.
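As a sketch of the same cross-account grant done programmatically rather than through the console, the snippet below uses boto3's add_permission on the queue in account B; the queue URL, label and account IDs are placeholders.

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')

# Run with credentials from account B (the queue owner).
# Grants account A permission to send/receive on this queue.
sqs.add_permission(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/ACCOUNT_B_ID/QueueName',
    Label='AllowAccountA',
    AWSAccountIds=['ACCOUNT_A_ID'],
    Actions=['SendMessage', 'ReceiveMessage', 'GetQueueUrl'],
)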
I am running an EC2 instance in account A and have SQS queues in accounts A and B. My application runs on the EC2 instance in account A. A message listener gets the queue URL and then polls messages from queues which can be in account A or B. Here is the code sample to get the queue URL; it works fine when we get the queue URL for account A, but fails if we supply an account B SQS queue as the input parameter:
public String getQueueUrl(String queueOwnerAccountId, String region, String queueName) throws AwsException {
    try {
        AmazonSQS sqs = AmazonSQSClientBuilder.standard().withRegion(Regions.fromName(region)).build();
        GetQueueUrlRequest getQueueUrlRequest = new GetQueueUrlRequest(queueName).withQueueOwnerAWSAccountId(queueOwnerAccountId);
        GetQueueUrlResult result = sqs.getQueueUrl(getQueueUrlRequest);
        return result.getQueueUrl();
    } catch (QueueDoesNotExistException e) {
        throwAwsException("With accountId:"+queueOwnerAccountId+" ,Queue: "+queueName+" does not exists in region: "+region);
    } catch (AmazonClientException e) {
        throwAwsException("Invalid destination address:"+e.getMessage());
    }
    return null;
}
I have added a policy (the policy has ARNs for the queues of both accounts) to the IAM role in account A for both accounts' queues. Please let me know if I am missing any settings. Thanks.
Cross account SQS access with IAM roles
Yes. Amazon S3's Replication feature allows you to replicate objects at a prefix (say, folder) level from one S3 bucket to another, within the same region or across regions. From the AWS S3 Replication documentation: "The objects that you want to replicate — You can replicate all of the objects in the source bucket or a subset. You identify a subset by providing a key name prefix, one or more object tags, or both in the configuration. For example, if you configure a replication rule to replicate only objects with the key name prefix Tax/, Amazon S3 replicates objects with keys such as Tax/doc1 or Tax/doc2. But it doesn't replicate an object with the key Legal/doc3. If you specify both a prefix and one or more tags, Amazon S3 replicates only objects having the specific key prefix and tags." Refer to this guide on how to enable replication using the AWS console; step 4 talks about enabling replication at the prefix level. The same can be done via CloudFormation and the CLI as well.
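For illustration, a minimal boto3 sketch of a prefix-filtered replication rule is below; the bucket names, role ARN and prefix are assumptions, and the replication IAM role and versioning must already be in place on both buckets.

import boto3

s3 = boto3.client('s3')

# Replicate only objects under the 'Tax/' prefix from the source bucket.
s3.put_bucket_replication(
    Bucket='my-source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111122223333:role/my-replication-role',
        'Rules': [
            {
                'ID': 'ReplicateTaxPrefixOnly',
                'Priority': 1,
                'Status': 'Enabled',
                'Filter': {'Prefix': 'Tax/'},
                'DeleteMarkerReplication': {'Status': 'Disabled'},
                'Destination': {'Bucket': 'arn:aws:s3:::my-destination-bucket'},
            },
        ],
    },
)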
Does anyone know if it is possible to replicate just one folder of a bucket between 2 buckets using the AWS S3 replication feature? P.S.: I don't want to replicate the entire bucket, just one folder of the bucket. If it is possible, what configuration do I need to add to filter that folder in the replication?
Is it possible to replicate a specific S3 folder between 2 buckets?
Yes, you can create a new CloudFormation stack, or update an existing stack, with resources that are not managed by CloudFormation. This is done using Resource Import. Here is the list of resources that can be imported: Resources that support import operations.
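A rough boto3 sketch of an import change set is below, assuming a hypothetical S3 bucket and a template body that already declares it with DeletionPolicy: Retain; the stack, change set, logical ID and bucket names are placeholders.

import boto3

cfn = boto3.client('cloudformation')

# Create an IMPORT change set that brings an existing bucket under the stack.
cfn.create_change_set(
    StackName='my-existing-stack',
    ChangeSetName='import-retained-bucket',
    ChangeSetType='IMPORT',
    ResourcesToImport=[
        {
            'ResourceType': 'AWS::S3::Bucket',
            'LogicalResourceId': 'RetainedBucket',
            'ResourceIdentifier': {'BucketName': 'my-retained-bucket'},
        },
    ],
    TemplateBody=open('template-with-retained-bucket.yaml').read(),
)
# After the change set is created, review it and run execute_change_set() to apply.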
We are retaining some of the CloudFormation resources for safety reasons. This is achieved using DeletionPolicy: Retain. Is there any way to reattach those retained resources to a new/existing stack?
Reattaching retained CloudFormation resources back to a CloudFormation stack
Have you tried scheduling the Lambda execution using a cron expression? The expression cron(0 0 * * ? *) would run your Lambda every day at midnight GMT. Adjust it according to your time zone.
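To make this concrete, here is a hedged boto3 sketch that creates such a CloudWatch Events rule and points it at the function; the rule name and function ARN are placeholders, and the Lambda also needs a resource policy that allows events.amazonaws.com to invoke it.

import boto3

events = boto3.client('events')

# Fire every day at 00:00 UTC.
events.put_rule(
    Name='run-at-midnight-utc',
    ScheduleExpression='cron(0 0 * * ? *)',
    State='ENABLED',
)

events.put_targets(
    Rule='run-at-midnight-utc',
    Targets=[
        {
            'Id': 'midnight-lambda',
            'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:my-midnight-job',
        },
    ],
)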
I have a use case where I need to run a function daily at 12:00:00 am. My Python code takes 3-4 seconds to initialize and 3 seconds to execute. It would still be fine if my function is triggered within 1 second after 12:00:00 am, but I can't do that with AWS Lambda triggered by AWS CloudWatch. I have created an AWS Lambda function and it is triggered every day at 11:59 pm using AWS CloudWatch Events, because CloudWatch does not provide second-level precision. So it is wasting computing time, averaging between 15 and 45 seconds after initializing, just to sleep until 12:00:00 am. Although the price is still quite low, it just annoys me that the majority (>75%) of the computing time is used to sleep. Does anyone have a better idea?
How to schedule AWS Lambda to run exactly at 12:00:00 am (second-level precision)?
NextShardIterator will only return null when it reaches the end of a closed shard (for example, when the shard count is updated using UpdateShardCount, SplitShard or MergeShard). See https://docs.amazonaws.cn/en_us/kinesis/latest/APIReference/API_GetRecords.html#API_GetRecords_ResponseSyntax: "NextShardIterator - The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator does not return any more data." If you want to start reading the stream from a specified timestamp, the best way to do this would be to use an event source mapping with Lambda and specify the StartingPosition as AT_TIMESTAMP: https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html#SSS-CreateEventSourceMapping-request-StartingPosition
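Below is a small boto3 sketch of that approach; the stream ARN, function name and timestamp are placeholders.

import boto3
from datetime import datetime, timezone

lambda_client = boto3.client('lambda')

# Start consuming the stream from a specific point in time.
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:kinesis:us-east-1:111122223333:stream/my-stream',
    FunctionName='my-consumer-function',
    StartingPosition='AT_TIMESTAMP',
    StartingPositionTimestamp=datetime(2020, 3, 1, tzinfo=timezone.utc),
    BatchSize=100,
)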
I am trying to read records from a Kinesis stream after a particular timestamp in a Lambda function. I get the shards, shard iterators and then the data. When I get the first iterator, I get the data and keep calling the same function recursively using NextShardIterator (present in the data returned). According to the documentation, NextShardIterator will return null when there is no more data to read and it has reached $latest. But it never returns null, the function keeps getting invoked, and eventually I get a Provisioned Throughput Exceeded Exception. I also tried using MillisBehindLatest to stop reading when the value is zero, but it also fails in some cases. Is there a correct way to get the data from Kinesis based on a timestamp?
NextShardIterator never returns null when reading data from kinesis stream
After you set up the backend configuration for an S3 bucket in Terraform, run this command to push any local Terraform state to AWS S3:
terraform state push <path to your local state>
Trying to store an existing local Terraform state file in the backend state storage (S3). But running the terraform apply command throws the error "Resource already exists". Is there a method to successfully sync an existing .tfstate to AWS S3?
Storing existing local Terraform State to Amazon S3
I eventually solved this by running this:
alter session set ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION = false;
My table on Snowflake contains a field created as INT, which defaults to NUMBER(38,0) as the Snowflake data type. When I unload this table to S3 in Parquet format with the COPY command, I expect to retain the whole schema, including the precision of this field. However, the resulting Parquet has INT32 Decimal(precision=9, scale=0). In the Snowflake documentation, it is mentioned that: /* To retain the table schema in the output file, use a simple SELECT statement (e.g. SELECT * FROM cities). */ However, my query below does not keep the precision intact.
COPY INTO @staging.dl_stage/prediction/vehicle/export_date=20200226/file
FROM (
  SELECT * FROM snd_staging.PREDICTION.vehicle
)
FILE_FORMAT=(type='parquet' COMPRESSION = AUTO)
HEADER = TRUE
OVERWRITE = TRUE
SINGLE = False
MAX_FILE_SIZE=256000000;
Is it possible to force keeping the Snowflake data type precision intact?
Retaining schema when unloading Snowflake table to s3 in parquet
Turns out that this does the trick. It's all about encoding; thanks to the help of @KunLun. In my scenario, file is the multipart file (PDF) that is passed to AWS via a POST to the URL.
- The server gets a file with a byte like 0010 (this will not be interpreted right, because a standard byte has 8 bits)
- so, we encode it in base 64 (it doesn't matter what the result is)
- then decode it to get a standard byte: 0000 0010 (now this is a standard byte and it's interpreted right by AWS)
This source helped a lot as well: https://www.javaworld.com/article/3240006/base64-encoding-and-decoding-in-java-8.html?page=2
Base64.Encoder enc = Base64.getEncoder();
byte[] encbytes = enc.encode(file.getBytes());
for (int i = 0; i < encbytes.length; i++) {
    System.out.printf("%c", (char) encbytes[i]);
    if (i != 0 && i % 4 == 0)
        System.out.print(' ');
}
Base64.Decoder dec = Base64.getDecoder();
byte[] barray2 = dec.decode(encbytes);
InputStream fis = new ByteArrayInputStream(barray2);
PutObjectResult objectResult = s3client.putObject("xxx", file.getOriginalFilename(), fis, data);
I have a Spring app (running on AWS Lambda) that gets a file and uploads it to AWS S3. The Spring controller sends a MultipartFile to my method, where it's uploaded to AWS S3 through Amazon API Gateway.
public static void uploadFile(MultipartFile mpFile, String fileName) throws IOException {
    String dirPath = System.getProperty("java.io.tmpdir", "/tmp");
    File file = new File(dirPath + "/" + fileName);
    OutputStream ops = new FileOutputStream(file);
    ops.write(mpFile.getBytes());
    s3client.putObject("fakebucketname", fileName, file);
}
I try to upload a PDF file which has 2 pages with text. After the upload, the PDF file (on AWS S3) has 2 blank pages. Why is the uploaded PDF file blank? I also tried with other files (like a PNG image) and when I open it, the image I uploaded is corrupted. The only thing that worked was when I uploaded a text file.
AWS Lambda and S3 - uploaded pdf file is blank/corrupt
Assuming that the Mongo instances allow traffic from the bastion host (in their security groups) on the required ports, you can use SSH tunnelling to access the cluster/instance from your local host:
ssh -N -L <local_port_x>:<mongoDB instance ip>:<mongo_port_y> <ssh_username>@<bastion_host_ip> -i <ssh_key_path>
local_port_x: port on your local machine where you want to access the remote Mongo instance
mongoDB instance ip: IP address of the EC2 instance hosting MongoDB
mongo_port_y: port that MongoDB is listening on (seems to be 27017 from your question; please verify that you can talk to the Mongo instance from within the bastion host on this port)
bastion_host_ip: IP address of the bastion host, which should be directly reachable from your local machine
Trying to understand how this works; the documentation isn't very clear. Using the AWS quickstart-mongo, I am creating a VPC with 3 Mongo nodes and a bastion server. I can log into the bastion server via SSH and my key. Then I can copy the key to the bastion server and SSH into the primary replica node. This node is running Mongo, and rs.status() shows that all 3 nodes are running correctly. Once logged into the bastion server, I try to do curl primary-mongo-node-ip:27017, and it seems to hang. Local Computer -> Bastion Server -> Replica Node 1 / 2 / 3. I think I understand that I need to somehow connect to the bastion server, then set up SSH forwarding to primary-mongo-node-ip:27017, sec1-mongo-node-ip:27017, sec2-mongo-node-ip:27017, so that my Mongo URI connection looks like this (after SSHing into bastion-dns): mongodb://user:pass@localhost:1000,localhost:1001,localhost:1002/database. How do I do this when I can't even connect to the server on the bastion server without SSH?
AWS EC2 SSH Tunnel Bastion Server
Even on Windows, it's easiest to follow the Linux and macOS push instructions. You just need to install the AWS CLI and Docker, and set up your AWS credentials.
Install AWS CLI version 2:
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
To confirm the installation:
C:\> aws --version
aws-cli/2.2.43 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
In AWS IAM, create a user with the required role. (It's not recommended, but I simply created a user with AdministratorAccess.)
Set up your AWS credentials:
aws configure
AWS Access Key ID [None]: Access Key
AWS Secret Access Key [None]: Secret Key
Default region name [None]: us-west-2
Default output format [None]: json
Now authenticate Docker to the Amazon ECR registry:
aws ecr get-login-password | docker login --username AWS --password-stdin YOUR-REGISTRY-URL
Login Succeeded
Finally, push a Docker image:
docker build -t YOUR-BUILD-NAME .
docker tag YOUR-BUILD-NAME:latest YOUR-REGISTRY-URL/YOUR-BUILD-NAME:latest
docker push YOUR-REGISTRY-URL/YOUR-BUILD-NAME:latest
I am new to AWS and I am trying to register an image in ECR on Windows. To do that I am using PowerShell to connect to AWS. Below is my version:
PS C:\> aws --version
aws-cli/2.0.0 Python/3.7.5 Windows/10 botocore/2.0.0dev4
I used the aws configure command to log in. I went to Users -> createdUser -> Security Credentials for the access key and secret key. When I use Get-ECRLoginCommand:
PS C:\> Get-ECRLoginCommand
Get-ECRLoginCommand : The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
At line:1 char:1
+ Get-ECRLoginCommand
+ ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.PowerShe...inCommandCmdlet:GetECRLoginCommandCmdlet) [Get-ECRLoginCommand], InvalidOperationException
    + FullyQualifiedErrorId : Amazon.ECR.AmazonECRException,Amazon.PowerShell.Cmdlets.ECR.GetECRLoginCommandCmdlet
Can someone please help me register a Docker image in ECR?
AWS ECR: trying to use Get-ECRLoginCommand fails
The AWS API expects 2 URLs when updating an input, main and backup:
client.update_input(InputId=input_id, Sources=[{'Url': url}, {'Url': url}])
I'm trying to update a MediaLive input URL using boto3 in Python. The input is a URL_PULL type (HLS) and is attached to a channel, which I think is the source of my issue.
account = { all credentials and stuff }
url = 'https://mynew/supercool/hls/playlist.m3u8'

client = boto3.client("medialive",
                      aws_access_key_id=account['access_key'],
                      aws_secret_access_key=account['key_secret'],
                      region_name=account['region_name'])

input_id = 1234567
client.update_input(InputId=input_id, Sources=[{'Url': url}])
The code is working fine, but I get this error, and I don't know how to handle it:
An error occurred (BadRequestException) when calling the UpdateInput operation: You cannot change the input class of an input while it's attached to a channel. Please detach the input from the channel in order to switch its class.
Question: Which workflow should I use to update an input that is already attached to a channel?
Changing input url in MediaLive
Errors such as the one you get, nslookup: can't resolve 'kubernetes.default', indicate that you have a problem with the coredns/kube-dns add-on or the associated Services. Please check whether you followed the standard steps to debug DNS (coredns). It also seems that DNS inside busybox does not work properly. Try to use a busybox image <= 1.28.4. Change the pod configuration file:
containers:
  - name: busybox-image
    image: busybox:1.28.3
Learn more about the most common Kubernetes DNS issues: kubernetes-dns.
Kubernetes is not able to resolve DNS. Containers/pods are not able to access the Internet. I have a 2-node Kubernetes cluster on separate AWS EC2 instances (t2.medium); container networking has been done using Flannel version flannel:v0.10.0-amd64 (image), Kubernetes version 1.15.3. (Screenshots of the DNS logs, nodes and Kubernetes svc output were attached to the original post.) At times when I delete the core-dns pods, the DNS issue gets resolved for some time, but it is not consistent. Please suggest what can be done. The Flannel mapping may have something to do with this. Please let me know if any other information is also needed.
Kubernetes DNS fails to resolve most of the time but sometimes it works. What can I do to solve this?
We just needed to use a positive value for the buffering and the problem was solved. The code will buffer 555444333 bytes and then process 111222333 bytes each time. Since our files are in JSON, we can easily convert the incoming bytes to a string and then clean the strings by removing incomplete JSON parts. The final code looks like:
number_of_bytes_to_read = 111222333
number_of_bytes_to_buffer = 555444333

with open(fifo_path, "rb", buffering=number_of_bytes_to_buffer) as fifo:
    while True:
        data = fifo.read(number_of_bytes_to_read)
I am using my own algorithm and loading data in JSON format from S3. Because of the huge size of the data, I need to set up Pipe mode. I have followed the instructions given in https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/pipe_bring_your_own/train.py. As a result, I am able to set up the pipe and read data successfully. The only problem is that the FIFO pipe is not reading the specified number of bytes. For example, given the path to the S3 FIFO channel:
number_of_bytes_to_read = 555444333

with open(fifo_path, "rb", buffering=0) as fifo:
    while True:
        data = fifo.read(number_of_bytes_to_read)
The length of data should be 555444333 bytes, but it is always less, 12,123,123 bytes or so. The data in S3 looks like the following:
s3://s3-bucket/1122/part1.json
s3://s3-bucket/1122/part2.json
s3://s3-bucket/1133/part1.json
s3://s3-bucket/1133/part2.json
and so on. Is there any way to enforce the number of bytes to be read? Any suggestion will be helpful. Thanks.
aws sagemaker training pipe mode reading random number of bytes
Valuable reading: Treat your servers like cattle, not pets. You're asking for an opinion, which is not really Stack Overflow's space, but here's an opinion: don't let your EC2 instance be the storage of record for your users' data, regardless of the cost. As it happens, S3 is both extremely cheap and beyond extremely durable.
I have an AWS EC2 instance running and I'm wondering if I really need to use an S3 bucket to store files that users upload, or if I should just store the files on my EC2 instance. Which technique is safer, costs less, etc.? Any answer will be appreciated.
Should I use an s3 bucket for my files or should I just stick to my ec2 instance
Finally, I changed the implementation to listen to an SQS trigger rather than waiting for the response from an API (the API is handled by a different component and the response takes a significant amount of time). It looks like we should avoid using parallel processing tasks with Python in AWS Lambda. From the AWS docs: "The multiprocessing module that comes with Python lets you run multiple processes in parallel. Due to the Lambda execution environment not having /dev/shm (shared memory for processes) support, you can't use multiprocessing.Queue or multiprocessing.Pool." If multiprocessing has to be used, only Pipe is supported.
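For reference, here is a minimal sketch of the Pipe-based pattern that does work inside Lambda, returning a value from a child process without Queue or Pool; the URL, headers and payload are placeholders.

import json
from multiprocessing import Process, Pipe

import requests

URL = "https://example.com/hypothetical-endpoint"  # placeholder

def poll_status(conn, headers, data):
    # Child process: call the API once and send the parsed result back over the pipe.
    r = requests.post(URL, headers=headers, data=json.dumps(data))
    conn.send(r.json())
    conn.close()

def handler(event, context):
    parent_conn, child_conn = Pipe()
    p = Process(target=poll_status, args=(child_conn, {"Content-Type": "application/json"}, event))
    p.start()
    result = parent_conn.recv()  # blocks until the child sends its result
    p.join()
    return result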
I have the following code in AWS Lambda to get a response from an API until the status is complete. I have used ThreadPoolExecutor from concurrent.futures. Here is the sample code:
import requests
import json
import concurrent.futures

def copy_url(headers, data):
    collectionStatus = 'INITIATED'
    retries = 0
    print(" The data to be copied is ", data)
    while (collectionStatus != 'COMPLETED' or retries <= 50):
        r = requests.post(
            url=URL,
            headers=headers,
            data=json.dumps(data))
        final_status = r.json().get('status').pop().get('status')
        retries += 1
        print(" The collection status is", final_status)

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future = executor.submit(copy_url, headers, data)
    return_value = future.result()
I had already implemented this using regular threads in Python. However, since I wanted a return value from the thread, I tried implementing this. Though this works perfectly in PyCharm, it always throws a timeout error in AWS Lambda. Could someone please explain why this happens only in AWS Lambda? Note: I have already tried increasing the Lambda timeout value. This happens only when ThreadPoolExecutor is implemented; when I comment out that code it works fine. It also works fine with the regular Python thread implementation.
Usage of concurrent.futures.ThreadPoolExecutor throws timeout exception always in aws lambda
If you are using the ECS patterns constructs (e.g., ApplicationLoadBalancedFargateService) you can get this working by adding the following flags:
enable_ecs_managed_tags=True,
propagate_tags=aws_ecs.PropagatedTagSource.TASK_DEFINITION,
After that point my tags propagated from my CDK stack to the ECS cluster/service/task definitions and then into the underlying tasks.
We're using the AWS CDK to deploy a large part of our infrastructure, including ECS resources. I have a file that creates an ECS cluster, task definition and tasks. Per the Tag class, I'm then using Tag.add() to apply a tag to everything in the scope of the file, including all ECS resources. When I deploy the stack, the tag applies to the cluster and the task definition, but not the task. I also don't get any error messages; the tag just silently doesn't apply to the task. Applying tags directly to the task doesn't seem to be a supported workaround, so I'm stuck. Does anyone know the solution to get the task tagged?
Difficulty with propagating tags in the AWS Cloud Development Kit (CDK)
It is a SAM bug that I hope gets fixed soon as well. For now, I created a workaround. It is a bash file that gets the exported output value from another stack and passes it as an env variable during invocation, so it will be overwritten. Just substitute EXPORTED_VARIABLE_NAME and S3_BUCKET_NAME with your actual names and you are good to go. Run it like so: ./sam_local_invoke.sh
#!/bin/bash

# The exported variable name from another stack (change it to your variable name)
EXPORTED_VARIABLE_NAME=BotsDataBucketSAM

# Get it as the name of the parameter that we want to overwrite
S3_BUCKET_NAME=$(aws cloudformation list-exports --query "Exports[?Name=='$EXPORTED_VARIABLE_NAME'].Value" --output text | cat)

ENV_OVERWRITES=$(cat <<EOT
{
  "Parameters": {
    "S3_BUCKET_NAME": "$S3_BUCKET_NAME"
  }
}
EOT
)

echo ${ENV_OVERWRITES} > env.json
sam local invoke --env-vars env.json
I have a SAM template (posted here partially):
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Parameters:
  StorageStackName:
    Type: String
    Description: Name of the stack which provisions DynamoDB table and S3 bucket.
Globals:
  Function:
    Runtime: nodejs12.x
    MemorySize: 128
    Timeout: 8
    CodeUri: .
    AutoPublishAlias: latest
    Environment:
      Variables:
        SOURCE_TABLE_NAME:
          Fn::ImportValue:
            Fn::Sub: "${StorageStackName}-SourceTableName"
The command gives me a notification:
sam local start-api --debug --parameter-overrides='StorageStackName=storage-dev'
Unable to resolve property SOURCE_TABLE_NAME: OrderedDict([('Fn::ImportValue', OrderedDict([('Fn::Sub', '${StorageStackName}-SourceTableName')]))]). Leaving as is.
I tried to remove Sub (no luck):
SOURCE_TABLE_NAME:
  Fn::ImportValue: "storage-dev-SourceTableName"
The template works on the server, so Fn::ImportValue is supported. So my question is: is Fn::ImportValue supported in local invocation at all? I made sure I use the same credentials (profile) for local SAM as in the account where I have the storage-dev stack. Is there any way I can recheck it again to make sure even more?
AWS SAM local start-api cannot resolve Fn::ImportValue
You need to set the time_zone parameter in the DB parameter group for the DB instance. A full guide is here: https://aws.amazon.com/premiumsupport/knowledge-center/rds-change-time-zone/ If you need to maintain the same UTC-5 offset, I would find a time zone that matches from https://en.wikipedia.org/wiki/UTC%E2%88%9205:00 and then match it to a time_zone setting from https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.LocalTimeZone You may have to change it again, as different time zones have different daylight saving times; however, once you know this for a whole year cycle, automating the changes when needed shouldn't be too hard.
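A boto3 sketch of changing the parameter is below; the parameter group name is a placeholder, and the chosen zone value (here America/Bogota, a UTC-5 zone with no DST) should be checked against the RDS-supported time zone list before use.

import boto3

rds = boto3.client('rds')

# Apply a fixed UTC-5 zone to the custom parameter group attached to the instance.
rds.modify_db_parameter_group(
    DBParameterGroupName='my-mysql57-parameter-group',
    Parameters=[
        {
            'ParameterName': 'time_zone',
            'ParameterValue': 'America/Bogota',
            'ApplyMethod': 'immediate',
        },
    ],
)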
I have an RDS MySQL 5.7 instance whose time_zone was set to US/Eastern from the parameter group. Right now it is in EST (UTC-05:00), but I think it will change to EDT (UTC-04:00) on March 8, 2020, and I don't want that to happen. It would create a discrepancy in the data. It should always be in EST, i.e. UTC-05:00. I could not find an option to achieve this in the RDS parameter group. How do I set the time zone to EST? Note: I know that I should have created and set the time zone to UTC initially to avoid all this grief, but it's already done and a lesson learned.
How to set timezone to EST in AWS RDS
QueryExecutionContext accepts only one database as an argument. So if you want to run a query across multiple databases, you have to pass fully qualified table names (database.table) in the query.
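As a hedged sketch, the boto3 call below runs the cross-database join by fully qualifying both tables and only using the context for the default database; the output location and database names are placeholders.

import boto3

athena = boto3.client('athena')

query = """
SELECT *
FROM ((SELECT * FROM db1.tab1) AS temp1
      INNER JOIN (SELECT * FROM db2.tab2) AS temp2
      ON temp1.id = temp2.id)
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'db1'},  # default DB only; tables stay fully qualified
    ResultConfiguration={'OutputLocation': 's3://my-athena-results-bucket/prefix/'},
)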
I am using the Boto3 package in Python 3 to execute an Athena query. From the Boto3 documentation, I understand that I can specify a query execution context, i.e. a database name under which the query has to be executed. With a properly specified query execution context, we can omit the fully qualified table name (db_name.table_name) from the query and instead use just the table name. So the query SELECT * FROM db1.tab1 can be converted to SELECT * FROM tab1 with QueryExecutionContext : {'database':'db1'}. The problem: I need to run a query from Python which looks something like this:
SELECT * FROM ((SELECT * FROM db1.tab1 AS Temp1) INNER JOIN (SELECT * FROM db2.tab2 AS Temp2) ON temp1.id = temp2.id)
As we can see, the query joins tables from two different databases. If I want to omit the database names from this query, how do I specify the QueryExecutionContext?
How to set QueryExecutionContext in boto3 when the query contains joining of tables from multiple databases?
Adding the worker type to the job properties will resolve the issue. Based on the file size, please select the worker type as below:
Standard – When you choose this type, you also provide a value for Maximum capacity. Maximum capacity is the number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. The Standard worker type has a 50 GB disk and 2 executors.
G.1X – When you choose this type, you also provide a value for Number of workers. Each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
G.2X – When you choose this type, you also provide a value for Number of workers. Each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs and jobs that run ML transforms.
I am executing an AWS Glue job with the Python shell. It fails inconsistently with the error "Command failed with exit code 137" and sometimes executes perfectly fine with no changes. What does this error signify? Are there any changes we can make in the job configuration to handle this? (A screenshot of the error was attached to the original post.)
AWS GlueJob Error - Command failed with exit code 137
You cannot use a CNAME record at the apex or domain root with standard DNS services. I suggest you use a hostname for your endpoint and put the CNAME there, e.g. api.example.com. Alternatively, you can move your DNS to Route 53. Route 53 does support aliases at the root domain level, using the Alias record type. For more information on Alias records in Route 53 see https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
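If the zone is moved to Route 53, an alias at the apex could be created roughly as below; the hosted zone ID, the API Gateway target domain name and its alias hosted zone ID are placeholders that must be taken from your own setup.

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z111MYZONEID',  # your Route 53 hosted zone for myapi.com
    ChangeBatch={
        'Changes': [
            {
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'myapi.com.',
                    'Type': 'A',
                    'AliasTarget': {
                        # Values come from the API Gateway custom domain's
                        # "target domain name" and its hosted zone ID.
                        'HostedZoneId': 'Z2TARGETZONEID',
                        'DNSName': 'd-abc123.execute-api.us-east-1.amazonaws.com.',
                        'EvaluateTargetHealth': False,
                    },
                },
            },
        ],
    },
)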
I'm trying to set up a custom domain (say, myapi.com) for my API Gateway but am running into problems. The domain is currently registered on GoDaddy. So far, I've followed this tutorial and done the following:
1. Obtained a certificate for myapi.com and *.myapi.com from AWS Certificate Manager.
2. Mapped the domain myapi.com (not *.myapi.com, as I don't need it yet) to an API in the API Gateway.
3. Added a CNAME entry for the resulting "target domain name" in GoDaddy.
(Screenshots were attached to the original post.) Now here's the problem: when I do ping myapi.com I get: No address associated with hostname. I'm not sure what's causing this, so would really appreciate some help. And while we're at it, are there any other steps I need to perform before this works as expected?
Unable to map custom domain to API Gateway (from Godaddy)
The instances in SG2 need to access the instance in SG1 by using that instance's private IP address. That way the traffic stays inside the VPC and will remain associated with the instances in SG2, thus passing the Security Group rule. When you address the instance in SG1 using its public IP address the traffic leaves the VPC and goes out to the Internet and back, at which point the association with the security group SG2 is lost.
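A small boto3 sketch of the intended rule, allowing port 80 into SG1 from anything that carries SG2, is below; the group IDs are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Allow HTTP into SG1 only from instances that belong to SG2.
ec2.authorize_security_group_ingress(
    GroupId='sg-0000sg1',
    IpPermissions=[
        {
            'IpProtocol': 'tcp',
            'FromPort': 80,
            'ToPort': 80,
            'UserIdGroupPairs': [{'GroupId': 'sg-0000sg2'}],
        },
    ],
)

The callers in SG2 must then target the SG1 instance by its private IP or private DNS name, as described above, for the rule to match.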
I have an EC2 instance running an HTTP server in one security group (let's call it SG1) and a number of other EC2 instances in a second security group (SG2) which need to make requests to the first. If I allow HTTP (TCP port 80) inbound traffic for 0.0.0.0/0, there is no problem. If I replace that rule by specifying that inbound traffic is allowed from SG2, I can't access the server from anywhere (including the EC2 instances in SG2). My ACL is permissive enough in either case (allowing all traffic), and regardless, it doesn't change. I should be able to allow inbound traffic by security group ID, as indicated by the following message displayed in the console when configuring SG1: "Determines the traffic that can reach your instance. Specify a single IP address, or an IP address range in CIDR notation (for example, 203.0.113.5/32). If connecting from behind a firewall, you'll need the IP address range used by the client computers. You can specify the name or ID of another security group in the same region. To specify a security group in another AWS account (EC2-Classic only), prefix it with the account ID and a forward slash, for example: 111122223333/OtherSecurityGroup."
Why does AWS Security group not allow inbound http traffic by sg-ID
"I need users authentication management. I have read AWS Cognito is a good option." Indeed, AWS Cognito is a good option for user authentication and authorization. If you have a web app, you may as well check out the AWS Amplify framework for easier onboarding. "...if it is possible to implement only AWS Cognito." You don't need to use any other AWS services or migrate your infrastructure; your application can use Cognito independently. You can even use Cognito as a pure OAuth 2.0-based authentication and authorization service if you want to stay really independent. "...all talk about using Cognito in addition to other AWS services." Cognito can provide its users session (temporary) AWS credentials to use AWS services. You don't have to use that feature if you don't need it.
I have an app hosted on a DigitalOcean server that is only used by me. Now I would like to give access to some friends, so I need user authentication management. I have read that AWS Cognito is a good option; however, it is not clear to me whether it is possible to use only AWS Cognito in joint cooperation with other services, or whether I need to migrate everything to AWS to be able to use Cognito. I've been looking for tutorials, but all of them talk about using Cognito in addition to other AWS services. The point is that I'm using a Postgres DB, and looking at AWS prices, it is expensive for me to migrate to AWS. If it is possible to do what I would like, I'd really appreciate recommended reading. Thanks in advance.
Can I use Cognito for users authentication in an app hosted in DigitalOcean?
Since negative lookaheads are unsupported, I broke mine out into several expressions that cover all cases. WAF lets you specify multiple expressions; it uses logical OR matching, so only one of them has to match. Using the example in the question, the solution could be:
joe[^aj]
joea[^n]
joean[^n]
joej[^e]
joeje[^n]
joe matches, unless he's followed by an a or a j. Then he's suspicious, so we go on to the next rule. If that a is followed by an n, then we're still suspicious, so we go on to the next rule. We repeat that process until we've decided whether or not the entire word is joeann or joejen. My particular use case was URI matching. I wanted to throttle requests to an entire directory, except for one subdirectory (and all its subdirectories). Say we want to throttle /my/dir but not anything in /my/dir/safe. We would do it like so:
^/my/dir/?$
^/my/dir/[^s]
^/my/dir/s[^a]
^/my/dir/sa[^f]
^/my/dir/saf[^e]
^/my/dir/safe[^/]
We follow the same process of identifying each letter in sequence. "You can't start with S. Ok, you can start with S, but you can't also have an A. Ok ok, I'll let it slide, but you cannot have an F too. Ok fine, you're persistent, but..." Notice we have to include a rule for the trailing slash /. This covers the optional slash in /my/dir/safe/ and all subdirectories such as /my/dir/safe/whatever.
I am building a regexp for AWS WAF using a negative lookahead:
joe(?!(ann|jen))
However, I get back the following error from the WAF console:
WAFInvalidParameterException: Error reason: The parameter contains formatting that is not valid., field: REGEX_PATTERN_SET, parameter: joe(?!(ann|jen))
It seems like AWS WAF does not support this kind of regexp. I've found this blog: https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-regular-expressions-regex/ Is anyone else having a similar issue? Can you share how to fix it?
AWS WAF Regexp issue with lookahead
If you're OK with temporarily putting the data on an EC2 instance, you can do it in two steps:
aws s3 cp s3://path/to/mydatafile /local/path/to/mydatafile
mysql --defaults-file=/path/to/.my.cnf -e "load data local infile '/local/path/to/mydatafile' into table sampletable"
References:
StackOverflow discussion on loading data
MySQL "load data" reference
Copying from S3
Using MySQL options files
So I was searching for a solution that could let me export S3 data into Aurora Serverless. I know that the LOAD DATA request is only available for the Aurora cluster not the serverless one. I've found some documentation about the data injection from S3 to RDS MySQL but I don't know if this still applies to Amazon Aurora MySQL.
Is there a way to export data from S3 to Amazon Aurora serverless with lambda?
Ultimately, I changed my approach from using Lambda to using EC2. I deployed the whole code with libraries on an EC2 instance and then triggered it using Lambda. On EC2, it can also be deployed on Apache server to change port mapping.
I have to deploy a Deep Learning model on AWS Lambda which does object detection. It is triggered on addition of image in the S3 bucket. The issue that I'm facing is that the Lambda function code uses a lot of libraries like Tensorflow, PIL, Numpy, Matplotlib, etc. and if I try adding all of them in the function code or as layers, it exceeds the 250 MB size limit. Is there any way I can deploy the libraries zip file on S3 bucket and use them from there in the function code (written in Python 3.6) instead of directly having them as a part of the code? I can also try some entirely different approach for this.
Importing libraries in AWS Lambda function code from S3 bucket
After looking at the documentation: you are trying to use a WAFv2 rule under a classic WAF resource. Your resource type of AWS::WAF::Rule is the classic WAF rule, while the structure is that of WAFv2. I haven't used WAFv2 yet myself, but looking at the documentation, this should be about what you want in YAML format:
Description: Create WebACL example
Resources:
  ExampleWebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: ExampleWebACL
      Scope: REGIONAL
      Description: This is an example WebACL
      DefaultAction:
        Allow: {}
      Rules:
        - Name: GeoRestrictExample
          Priority: 0
          Action:
            Block: {}
          Statement:
            NotStatement:
              Statement:
                GeoMatchStatement:
                  CountryCodes:
                    - US
As of 1/13/2020, you cannot associate a resource such as an API Gateway stage with a WAFv2 ACL using CloudFormation. You can do so using the console, SDK, a custom resource, and the CLI.
I want to create a JSON-format CloudFormation template that creates an ACL and rule in WAF to allow only United States users to access the API Gateway. I have the following code so far, but it gives an error ("Encountered unsupported property Action") in AWS:
"Type": "AWS::WAF::Rule",
"Properties": {
    "Name": "APIGeoBlockRule",
    "Priority": 0,
    "Action": {
        "Block": {}
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "APIGeoBlockRule"
    },
    "Statement": {
        "NotStatement": {
            "Statement": {
                "GeoMatchStatement": {
                    "CountryCodes": [ "US" ]
                }
            }
        }
    }
}
AWS WAF: create an ACL and rule to allow only one country to access the API Gateway
Not currently but GraphQL is just the implementation medium as it gives advantages around the transport as well as type inference for performing conflict resolution and sync. Without it you would need a proprietary way at the network layer to convey the same information. Is there a reason or use case you're looking to not use GraphQL?
Is it possible to use AWS Amplify DataStore without GraphQL? I tried looking at the documentation and it has operations like save, query, etc. only via GraphQL: https://aws-amplify.github.io/docs/js/datastore
Use AWS Amplify DataStore without GraphQL?
I have seen the second method used when you wish to provide specific credentials without using the standard credentials provider chain. For example, when assuming a role, you can use the new temporary credentials to create a session, then create a client from the session. From "boto3 sessions and aws_session_token management":
import boto3

role_info = {
    'RoleArn': 'arn:aws:iam::<AWS_ACCOUNT_NUMBER>:role/<AWS_ROLE_NAME>',
    'RoleSessionName': '<SOME_SESSION_NAME>'
}

client = boto3.client('sts')
credentials = client.assume_role(**role_info)

session = boto3.session.Session(
    aws_access_key_id=credentials['Credentials']['AccessKeyId'],
    aws_secret_access_key=credentials['Credentials']['SecretAccessKey'],
    aws_session_token=credentials['Credentials']['SessionToken']
)
You could then use:
s3 = session.client('s3')
By default boto3 creates sessions whenever required. According to the documentation, it is possible and recommended to maintain your own session(s) in some scenarios. My understanding is that if I use a session created by me, I can reuse the same session across the application instead of boto3 automatically creating multiple sessions, or I can use it when I want to pass credentials from code. Has anyone ever maintained sessions on their own? If yes, what advantage did it provide apart from the one mentioned above?
secrets_manager = boto3.client('secretsmanager')

session = boto3.session.Session()
secrets_manager = session.client('secretsmanager')
Is there any advantage of using one over the other, and which one is recommended in this case? References: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/session.html
When to use boto3 sessions explicitly
A new version of htmldate makes some of the dependencies optional; regex is such a case. That should solve the problem. (FYI: I'm the main developer of the package.)
I am getting the following error:
Unable to import module '': No module named 'regex._regex'
The AWS Lambda deployment package runs just fine without the import htmldate statement (the module I want to use), which in turn requires regex. Also, the code runs fine locally. So this seems to be a problem running regex on AWS Lambda.
How can I import regex on AWS Lambda
You can do this with the following steps: open CloudFront and select the relevant distribution, go to the Error Pages tab, and create a custom error response there, mapping the error to your page and the HTTP response code you want to return.
I am using S3 static website hosting to deploy code. I want to send a 200 status code when the error document is served; currently, it is sending a 404 status code. Is it possible to customize the status code? I can't see any option here to set the HTTP status code. Thanks in advance!
Is it possible to customise HTTP status code when error document is returned in S3 static website hosting
So I had to talk to the SNS customer support team and found out that they don't have an AND operation within a String.Array message attribute. A workaround that I found was to replicate the same message attribute for the number of filters you want to provide. For the message in the question, it should have a structure like:
"fruit_found": ["Apple"],
"all_fruits_found_filter_1": ["Mango","Apple","Banana"],
"all_fruits_found_filter_2": ["Mango","Apple","Banana"]
The filter policy defined for when both Mango and Apple are found would be:
"all_fruits_found_filter_1": ["Mango"],
"all_fruits_found_filter_2": ["Apple"]
However, there is a limitation of at most 10 message attributes per SNS message. So if you are within that boundary, the above solution works fine; otherwise you would have to refer to the answer from Ali.
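A hedged boto3 sketch of publishing with the duplicated attributes and subscribing with the AND-style filter policy is below; the topic ARN, queue endpoint and attribute values are placeholders.

import json
import boto3

sns = boto3.client('sns')
topic_arn = 'arn:aws:sns:us-east-1:111122223333:fruit-topic'  # placeholder

# Publish the list twice under two attribute names so each can be filtered separately.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({'fruit_found': ['Apple']}),
    MessageAttributes={
        'all_fruits_found_filter_1': {'DataType': 'String.Array',
                                      'StringValue': '["Mango","Apple","Banana"]'},
        'all_fruits_found_filter_2': {'DataType': 'String.Array',
                                      'StringValue': '["Mango","Apple","Banana"]'},
    },
)

# Subscriber only receives messages where both Mango and Apple were found.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol='sqs',
    Endpoint='arn:aws:sqs:us-east-1:111122223333:fruit-queue',  # placeholder
    Attributes={'FilterPolicy': json.dumps({
        'all_fruits_found_filter_1': ['Mango'],
        'all_fruits_found_filter_2': ['Apple'],
    })},
)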
I want to publish a notification using SNS and I want subscribers to be able to filter on multiple message attributes. One such message attribute is going to be a String.Array. For example, the notification can have two attributes, fruit_found and all_fruits_found:
"fruit_found": ["Apple"], "all_fruits_found": ["Mango","Apple","Banana"]
There can be use cases where a subscriber might need to know if both Mango and Apple were found, and only then consume the notification, otherwise drop it. Is it possible to do so in SNS?
Does SNS allow filtering based on presence of multiple values in String.array
In order to make that section appear, I had to include the following in the template:
EmailVerificationMessage: Your verification code is {####}.
EmailVerificationSubject: Your verification code
I'm trying to create a Cognito user pool using CloudFormation. I'm using this YAML template:
UserPoolApp:
  Type: AWS::Cognito::UserPool
  Properties:
    EmailConfiguration:
      EmailSendingAccount: COGNITO_DEFAULT
    MfaConfiguration: "OFF"
    Policies:
      PasswordPolicy:
        MinimumLength: 8
        RequireLowercase: true
        RequireNumbers: true
        RequireSymbols: false
        RequireUppercase: false
        TemporaryPasswordValidityDays: 7
    Schema:
      - Name: email
        Required: true
      - Name: name
        Required: true
    UsernameAttributes:
      - email
    UserPoolName: !Ref AppUserPoolName
    VerificationMessageTemplate:
      DefaultEmailOption: CONFIRM_WITH_CODE
      EmailMessage: Your verification code is {####}.
      EmailSubject: Your verification code
But the user pool is created without the verification email message screen. Even if I remove VerificationMessageTemplate, the user pool is still created without this section. How can I solve this issue? Thanks in advance.
VerificationMessageTemplate for Cognito in Cloud Formation does not work
For development, you can use the single word invocation on Alexa console and it should work. The rules for invocation names will be applicable during the publishing process. So if you want to use a single word invocation, you need to prove that you own the brand related to that word.
I want to add a single-word invocation name for Alexa; in the documentation it's mentioned that a single word is allowed: "One-word invocation names are not allowed, unless: The invocation name is unique to your brand/intellectual property with proof of ownership established through legitimate documentation, or (German skills only) the invocation name is a compound of two or more words. In this case, the word must form an actual word in the skill's language to ensure that Alexa can recognize it." But I can't find anything in the Alexa console.
How to add single word invocation name alexa?
If you create a pipeline from the AWS CodePipeline console and choose Amazon ECR as the source provider, it will create a CloudWatch Events rule with a pattern like:
{
  "source": [ "aws.ecr" ],
  "detail": {
    "eventName": [ "PutImage" ],
    "requestParameters": {
      "repositoryName": [ "my-repo/nginx" ],
      "imageTag": [ "0.1" ]
    }
  }
}
The target of this event is the CodePipeline. You can inspect the event details in the AWS CloudWatch console. Whenever a push (PutImage) occurs on the ECR repo, the pipeline will be executed.
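The same rule could be created programmatically; the sketch below is a rough boto3 version in which the rule name, repository, tag, pipeline ARN and the IAM role that lets CloudWatch Events start the pipeline are all placeholders.

import json
import boto3

events = boto3.client('events')

pattern = {
    "source": ["aws.ecr"],
    "detail": {
        "eventName": ["PutImage"],
        "requestParameters": {
            "repositoryName": ["my-repo/nginx"],
            "imageTag": ["0.1"],
        },
    },
}

events.put_rule(Name='ecr-push-starts-pipeline', EventPattern=json.dumps(pattern))

events.put_targets(
    Rule='ecr-push-starts-pipeline',
    Targets=[
        {
            'Id': 'codepipeline',
            'Arn': 'arn:aws:codepipeline:us-east-1:111122223333:my-pipeline',
            'RoleArn': 'arn:aws:iam::111122223333:role/cwe-start-pipeline-role',
        },
    ],
)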
An AWS CodePipeline can be triggered by a commit to AWS CodeCommit. I do not see an option/way to trigger an AWS CodePipeline on a push to AWS ECR. Is there such an option?
AWS trigger a Pipeline on ECR push action?
Your AttributeType must be a capital S, like so: 'AttributeType': 'S'. This is causing your error. You also need to specify BillingMode, and probably ProvisionedThroughput if you don't go for on-demand. The code should look something like this:
table = dynamodb.create_table(
    TableName='log',
    AttributeDefinitions=[
        {
            'AttributeName': 'lastcall',
            'AttributeType': 'S'
        }
    ],
    KeySchema=[
        {
            'AttributeName': 'lastcall',  # partition key
            'KeyType': 'HASH'
        }
    ],
    BillingMode='PROVISIONED',
    ProvisionedThroughput={
        'ReadCapacityUnits': 5,
        'WriteCapacityUnits': 5
    },
)
I am trying to create a table with the following code:
table = dynamodb.create_table(
    TableName='log',
    AttributeDefinitions=[
        {
            'AttributeName': 'lastcall',
            'AttributeType': 's'
        }
    ],
    KeySchema=[
        {
            'AttributeName': 'lastcall',  # partition key
            'KeyType': 'HASH'
        }
    ]
)
I am getting the above error and am not able to figure out what could be wrong.
An error occurred (ValidationException) when calling the CreateTable operation: Member must satisfy enum value set: [B, N, S]
You can create a "keep warm" trigger on cloudwatch that calls your lambda every 5-15 minutes to keep it warm. You get a million free calls every month on lambda so it shouldn't really affect you too much. This is how libraries like zappa keep your APIs warm so it is a well established practice.You can read morehere.
I've got a Lambda that uses the AWS Java SDK. In this Lambda's handler, I've got code that looks like this:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
sqs.sendMessage( ... )
I'd expect the above lines to be pretty fast, and for most cases, this is what I'm observing. However, on cold starts, this code is taking about 20 seconds to execute. In fact, just the first line, the client builder, is taking about 10 seconds to complete. Is this the expected performance of the AWS SQS Java APIs on cold starts?
AWS SQS operations in lambda takes too long on cold start
"So my question is really, where is best to have a long running process in AWS that just sits listening for things to happen, rather than responding to a prod from something like a Lambda."
I suggest going with Fargate or an EC2-type ECS container. With Fargate you do not need to manage servers; it is something similar to Lambda but more suitable for such long-running processes.
"This seems to need to be fired by a task, rather than sit in the container on a subscription."
No, you can run Fargate in two ways:
- running as a long-running service
- firing a task based on a CloudWatch event or a schedule (perform the task and terminate)
"AWS Fargate now supports the ability to run tasks on a regular, scheduled basis and in response to CloudWatch Events. This makes it easier to launch and stop container services that you need to run only at certain times." (AWS Fargate)
"Where is best to have a long-running process in AWS that just sits listening for things to happen, rather than responding to an event from something like a Lambda?"
If your task is supposed to run for a long time, then Lambda is not for you; there is always a timeout in the case of Lambda. If you do not want to manage servers and the process is supposed to run for a long time, then Fargate is for you, and it's fine to sit and listen for events.
I am currently trying to set up a system in AWS that utilises Event Sourcing and CQRS. I've got everything working on the command side, and this is storing the events into Aurora. I've got SqlEventStore as my event sourcing store, and that has a subscription mechanism that will listen for new events and then fire a function appropriately. So far it's all set up in Lambda, but I can't have the subscription in Lambda as Lambdas aren't always running, so my first thought was running this side in Fargate and a Docker container. From my reading though, this seems to need to be fired by a task, rather than sit in the container on a subscription. So my question is really: where is the best place in AWS to have a long-running process that just sits listening for things to happen, rather than responding to a prod from something like a Lambda?
Best place in AWS for a long running subscription background service
Check which version of Python your default is set to. You can change the default if you need to use a newer version of Python. You can check your version via your CLI:
python --version
To set a user preference you can use an alias:
alias python='/usr/bin/python3.4' # or whatever your path name is
Once you have done that, re-login or source your .bashrc file with:
. ~/.bashrc
Then check your Python version again to confirm it worked.
I had installed Python and the AWS CLI on Windows 10 and it was working fine a while ago. Now when I run aws ssm start-session commands I get the following error:
ImportError: No module named awscli.clidriver
I know this happens because Python cannot find the CLI driver, usually because it is not installed (properly). In my case it was working fine, and I think another installation that included Python broke it. I think it could have been Anaconda. I have installed it again using pip3 install awscli --upgrade --user and still get the same error. So my guess is that it is happening because I have two versions of Python installed and somehow the right one is not found or not part of my path. How can I investigate and resolve this issue?
ImportError: No module named awscli.clidriver because of wrong path for python?
You need to add a PublicAccessBlockConfiguration to your template:
MyS3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: health-app-bucket
    AccessControl: PublicRead
    PublicAccessBlockConfiguration:
      BlockPublicAcls: false
      BlockPublicPolicy: false
      IgnorePublicAcls: false
      RestrictPublicBuckets: false
When pushing your objects to S3, you'll still need to put them with ACL: public-read. Note: the AccessControl: PublicRead will grant list permission on your bucket, allowing all objects to be found publicly.
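For the upload side mentioned in the note, a minimal boto3 sketch is below; the bucket name, key, body and content type are placeholders.

import boto3

s3 = boto3.client('s3')

# Upload the object with a public-read ACL so its object URL is readable by anyone.
s3.put_object(
    Bucket='health-app-bucket',
    Key='records/example.json',
    Body=b'{"status": "ok"}',
    ContentType='application/json',
    ACL='public-read',
)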
I am writing an AWS CloudFormation template to receive a file inside an S3 bucket from Kinesis Firehose. I have given public read access to the bucket (the bucket is public), but when I access the file inside the bucket using the object URL, I get "The XML file does not appear to have any style associated with it" and it says access denied. However, the object (a JSON file) is downloadable. I have given full access to the S3 bucket:
Resources:
  # Create s3 bucket
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: health-app-buckett
      AccessControl: PublicRead
  # Create Role
  S3BucketRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - s3.amazonaws.com
            Action:
              - 'sts:AssumeRole'
  # Create policy for bucket
  S3BucketPolicies:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: S3BucketPolicy
      PolicyDocument:
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref MyS3Bucket
                - /*
      Roles:
        - !Ref S3BucketRole
I want to be able to view the file using the object URL.
give public read and view access to s3 bucket objects using cloudformation template
What problem are you solving by having individual keys per user? The KMS paradigm is to use policy to grant access to a Customer Master Key (CMK). As Mark pointed out above, there is a limit on the number of keys. Have a look at this walkthrough. There is a section at the bottom about key rotation strategies that might help: "A recommended approach to manual key rotation is to use key aliases within AWS KMS. This allows users to always select the same key alias when configuring databases, while the key administrator rotates the underlying CMK. By keeping the old CMK, you allow any applications that currently use this key to still decrypt any data that was encrypted by it, as long as the CMK key policy still gives the AWSServiceRoleForRDS role permission as a Key User. It also allows for any new data to be encrypted with the new CMK."
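A small boto3 sketch of that alias-based manual rotation is below; the alias name and key description are placeholders, and the old CMK is intentionally left in place so existing ciphertext can still be decrypted.

import boto3

kms = boto3.client('kms')

# Create the replacement CMK.
new_key = kms.create_key(Description='Replacement key for column encryption')
new_key_id = new_key['KeyMetadata']['KeyId']

# Repoint the stable alias at the new CMK; applications keep using the alias.
kms.update_alias(
    AliasName='alias/my-app-column-key',
    TargetKeyId=new_key_id,
)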
I found that AWS RDS allows encrypting DB resources with AWS KMS. Because it is done inside the AWS infrastructure, the encryption key can be easily rotated automatically. It is cool, but it is only encryption at rest. I would additionally like to have some particular columns in the database encrypted, for example SSNs. I would like to store them encrypted and decrypt them for display inside my application. Moreover, I would like to have an individual key for every user. The main problem which I see will be the rotation of the key. When rotating the key for one user, I would like to do the following inside my application:
1. get the current encryption key from KMS
2. decrypt all the data from RDS encrypted with the current key
3. generate a new encryption key
4. encrypt everything again and store the data in RDS
5. store the new key in KMS
The main problem here would be to keep everything in a "transaction": to "commit" if everything went fine and to "roll back" everything if anything went wrong. I wonder if such key rotation for encryption at the column level could be done inside the AWS infrastructure automatically. Do you have any ideas about that? Maybe you know of any other, better approach for such a situation?
How to encrypt data in AWS RDS with AWS KMS on the column level?
You have the batch size set to 100, which tells Lambda to read 100 records before invoking your function. There are 2 settings related to batching:
Batch size – The number of records to read from a shard in each batch, up to 10,000. Lambda passes all of the records in the batch to the function in a single call, as long as the total size of the events doesn't exceed the payload limit for synchronous invocation (6 MB).
Batch window – Specify the maximum amount of time to gather records before invoking the function, in seconds.
Before invoking your function, Lambda continues to read records from the stream until it has gathered a full batch, or until the batch window expires. I haven't done performance testing with these 2 settings, but I would start by setting the size to 1 and the window to 0. There could be side effects from launching a large number of Lambdas, but it should give you the minimum delay possible.
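In Serverless Framework terms this corresponds to the batchSize/batchWindow settings on the stream event; a rough boto3 equivalent for tightening an existing mapping is sketched below, with the mapping UUID as a placeholder.

import boto3

lambda_client = boto3.client('lambda')

# Reduce batching so records are handed to the function as soon as they arrive.
lambda_client.update_event_source_mapping(
    UUID='11111111-2222-3333-4444-555555555555',  # placeholder mapping id
    BatchSize=1,
    MaximumBatchingWindowInSeconds=0,
)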
I've got a function triggered by messages on a Kinesis stream (serverless.yml):
functions:
  kinesis-handler:
    handler: kinesis-handler.handle
    events:
      - stream:
          type: kinesis
          arn:
            Fn::Join:
              - ':'
              - - arn
                - aws
                - kinesis
                - Ref: AWS::Region
                - Ref: AWS::AccountId
                - stream/intercom-stream
          startingPosition: LATEST
          batchSize: 100
          enabled: true
The function does get triggered eventually (2-5 seconds after the message is sent) but not immediately. Is this by design? Can I assume Kinesis data streams are not good for (near) real-time event-driven architecture? What actually triggers a Lambda when the trigger is a Kinesis stream? It looks like there's just background periodic polling every 1-2 seconds, and the Lambda is triggered if new messages are found in the stream.
kinesis data stream is not real time?
This is expected behavior. See here: "By default the SDK will only load the shared credentials file's (~/.aws/credentials) credentials values, and all other config is provided by the environment variables, SDK defaults, and user provided aws.Config values. If the AWS_SDK_LOAD_CONFIG environment variable is set, or the SharedConfigEnable option is used to create the Session, the full shared config values will be loaded. This includes credentials, region, and support for assume role. In addition the Session will load its configuration from both the shared config file (~/.aws/config) and shared credentials file (~/.aws/credentials). Both files have the same format." (Link here.) So just set the AWS_SDK_LOAD_CONFIG environment variable to have the region read from the config file.
My code:

sess = session.Must(session.NewSessionWithOptions(session.Options{
    Profile: "gms-ai",
}))

My ~/.aws/config:

[default]
output = json
region = us-east-1

[profile gms-ai]
output = json
region = us-east-2

But, for example, this is a working snippet from my deployment script:

AWS_PROFILE=gms-ai \
aws lambda update-function-code...

So it looks like the aws cli does read the region but the AWS SDK ignores it?
"MissingRegion" : could not find region configuration, but I have it in my ~/.aws.config
You can use SSH.NET for this. You can find a working example here in the edited question.
I'm not able to reach the Aurora MySql DB through an EC2 tunnel. We have an Aurora serverless DB (MySql). The problem is that I don't know how to connect to the DB locally from my machine.

I tried to add the SSH options to MySqlConnectionStringBuilder like:

MySqlConnectionStringBuilder _connectionBuilder = new MySqlConnectionStringBuilder()
{
    UserID = "admin",
    Server = "RDS endpoint in Aws",
    Port = 3306,
    SshHostName = "Ip to the Ec2",
    SshUserName = "the ec2 user",
    SshPort = 22,
    SshKeyFile = @"filepath to local .pem file",
    Database = "db name",
    Password = "db-password"
};

I tried to use both the string builder and an SshClient like:

using (var sshClient = new SshClient(_connectionBuilder.SshHostName, 22, _connectionBuilder.SshUserName, new PrivateKeyFile(_connectionBuilder.SshKeyFile)))
{
    sshClient.Connect();
    // SQL QUERY HERE
    sshClient.Disconnect();
}

The code works and connects when it is released to the Lambda instance but not on my local machine.

It works if I open a CMD window, type:

ssh -N -L 3306:{aws Db endpoint}:3306 -i {path to .pem} {user}@{ip}

and change the server to localhost.
How to Connect to Aurora serverless MySQL instance over SSH
AWS uses physical or logical database replication, whichever is appropriate for the engine. As per the official documentation:

Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
Does anyone know what AWS uses to do RDS DB instance synchronous replication? DRBD, some other low-level block device transfer, or something else? I ask because there are situations where the standby DB instance fails when a failure occurs on the master/primary DB instance.

Note: this is claimed in the RDS section of the "AWS Cloud Practitioner Essentials (Second Edition): AWS Integrated Services" digital training video.
AWS claims that RDS sync replication to standby instance protects against data loss
AWS has now introduced a lower threshold for WAF rate-based rules (100 requests in 5 minutes): https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/

Also, you can apply rate limiting on API Gateway itself: https://cloudonaut.io/customized-rate-limiting-for-api-gateway-by-path-parameter-query-parameter-and-more/

This is not IP-based but is still useful to stop unnecessary requests.
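As a minimal illustration of the second option (stage-wide API Gateway throttling rather than IP-based limits), here is a sketch using boto3 to create a usage plan; the plan name, API id and stage name are hypothetical placeholders:

import boto3

apigw = boto3.client('apigateway')

# Create a usage plan that throttles the whole stage.
# rateLimit is requests per second, burstLimit is the short-term spike allowance.
plan = apigw.create_usage_plan(
    name='basic-plan',  # assumed name
    apiStages=[{'apiId': 'a1b2c3d4e5', 'stage': 'prod'}],  # hypothetical API id/stage
    throttle={'rateLimit': 5.0, 'burstLimit': 10},
)

Clients would then need API keys attached to the plan if you want per-client limits; for per-IP limits you still need a WAF rate-based rule.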
Let's say I am running a serverless REST API in AWS. I have my REST API implemented in an AWS Lambda, and the Lambda is exposed over HTTP using an API Gateway or an Application Load Balancer. I want to protect my API from potential attackers that use it too intensively, so I want to limit the API call frequency by the caller's IP address.

I see that this can be done with AWS WAF using a rate-based rule. Reading the documentation, the minimum threshold is 2000 calls per 5 minutes, which is about 7 calls per second. This is a little too high for our standards. Furthermore, it is not possible to specify a limit per minute, hour, day, etc., so it is pretty limited.

Is there any alternative to an AWS WAF rate-based rule to achieve IP-based rate limiting?
How to apply ip based rate limiting in AWS serverless
You can try the netcat command for a variety of connectivity tests. Here is the syntax:

nc -v {host} {port}

With the -v (verbose) option, you should see output if your server socket returns something on connection.
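If you prefer scripting the check, here is a small Python sketch doing the same TCP reachability test; the host is a hypothetical NLB DNS name and the port is the listener port from the question:

import socket

def can_connect(host, port, timeout=5):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical NLB DNS name and the TCP:80 listener.
print(can_connect('my-nlb-1234567890.elb.us-east-1.amazonaws.com', 80))

Once the TCP connection works, a plain curl against http://<nlb-dns-name>/myapi/faq should exercise the full path through the target group to the Spring Boot service.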
I have created an AWS Network Load Balancer with a TCP:80 (HTTP) listener. This listener forwards requests to a Target Group called "My-TargetGroup". I have created a Task Definition that points to a Docker image of the Spring Boot service, which runs on port 8080. In ECS, when I created the ECS Service I selected "My-TargetGroup" with the listener port at 80.

I can see that my ECS Service has one Task running successfully. However, I do not know how to test whether the NLB is able to forward requests to the underlying Spring Boot service. For example, my Spring Boot API has the endpoint myapi/faq. How do I call this API through curl? Basically, I will be calling this API endpoint over http/https, so I want to test it now as a GET call over https.
How to test AWS Network Load Balancer using Curl command?
You can't. The library you use certainly does it right: download the existing file, do the edit locally, then push back the result. It's always going to be slow.

With sed, it may be possible to make it faster, assuming your existing library does it in three separate steps. But you can't send the result right back and overwrite the file before you're done reading it (at least I would suggest not doing so).

If this is a one-time process, then the slowness should not be an issue. If it's something you are likely to perform all the time, then I'd suggest you use a different type of storage; this one may not be appropriate for your app.
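To make the download-edit-upload round trip concrete, here is a rough Python sketch with boto3 that streams the object through a substitution and writes it back; the bucket, key and replacement strings are placeholders, and for multi-gigabyte files you would need enough local disk for the temporary copy:

import re
import tempfile
import boto3

s3 = boto3.client('s3')
bucket, key = 'my-bucket', 'big-file.txt'  # hypothetical names

with tempfile.TemporaryFile() as tmp:
    # Stream the object down and apply the edit line by line.
    body = s3.get_object(Bucket=bucket, Key=key)['Body']
    for line in body.iter_lines():
        tmp.write(re.sub(b'old-word', b'new-word', line) + b'\n')
    tmp.seek(0)
    # Overwrite the original object with the edited copy.
    s3.upload_fileobj(tmp, bucket, key)

This is still a full read and a full write of the object - S3 has no way to patch a byte range in place.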
I have many very large files (> 6 GB) stored in an AWS S3 bucket that need very minor edits. I can edit these files by pulling them to a server, using sed or perl to edit the keyword, and then pushing them back, but this is very time-consuming, especially for a one-word edit to a 6 or 7 GB text file.

I use a program that makes AWS S3 behave like a random-access file system, https://github.com/s3fs-fuse/s3fs-fuse, but this is unusably slow, so it isn't an option.

How can I edit these files, or use sed, via a script without the expensive and slow step of pulling from and pushing back to S3?
one line edits to files in AWS S3
I was able to solve this by changing the engine parameter. According to the official pandas documentation, these are the engine options:

engine : {‘auto’, ‘pyarrow’, ‘fastparquet’}, default ‘auto’

So by just changing to 'auto' (the default when the argument is omitted), the problem was solved:

df = pd.read_parquet('<my_s3_path.parquet>')
I have a Python script running on an AWS EC2 (on AWS Linux), and the script pulls a parquet file from S3 into a Pandas dataframe. I'm now migrating to a new AWS account and setting up a new EC2. This time, when executing the same script in a Python virtual environment, I get "Segmentation Fault" and the execution ends.

import pandas as pd
import numpy as np
import pyarrow.parquet as pq
import s3fs
import boto3
from fastparquet import write
from fastparquet import ParquetFile

print("loading...")
df = pd.read_parquet('<my_s3_path.parquet>', engine='fastparquet')

All packages were imported and all S3 and AWS configurations were set. When executing the full script I get:

loading...
Segmentation fault

As you can see, not much to work with. I've been googling for a few hours and I saw many speculations and reasons for this symptom. I'll appreciate the help here.
Segmentation Fault while reading parquet file from AWS S3 using read_parquet in Python Pandas
You can add a step (say, with a Lambda function) which checks whether the same state machine is already being executed (and in which state). If it is, the Lambda and therefore the step would fail.

Depending on what you want to achieve, you can additionally configure a Retry so that the execution continues once the old execution has finished.
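A minimal sketch of such a guard Lambda in Python, assuming boto3 and a hypothetical state machine ARN passed in via an environment variable:

import os
import boto3

sfn = boto3.client('stepfunctions')

def handler(event, context):
    # Count executions of this state machine that are still running.
    running = sfn.list_executions(
        stateMachineArn=os.environ['STATE_MACHINE_ARN'],  # assumed env var
        statusFilter='RUNNING',
    )['executions']

    # The current execution is itself RUNNING, so more than one means a concurrent run.
    if len(running) > 1:
        raise RuntimeError('Another execution is already running')

    return {'ok': True}

Pair this with a Retry on the custom error in the state definition if you want the new execution to wait rather than fail outright.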
Is there a way to prevent concurrent executions of an AWS Step Functions state machine? For example, if I start the state machine and, while that execution is not yet finished, I start it again, I would like to get an exception.
How to prevent concurrent runs of a state machine in AWS Step Functions?
If you are using a Lambda Authorizer, returning an Allow or Deny policy is what you are looking for. This essentially grants API Gateway permission to invoke the underlying target. I know it sounds weird at first glance, but that's how it works. Think of an Allow policy as a true return statement (credentials matched), whilst a Deny policy is more of a false return statement (credentials didn't match / not enough permissions based on your rules, etc.).

To get you off the ground, you can simply copy/paste the code available in the docs and adapt the authentication to your liking (the docs show an example using a header with Allow or Deny values, which is definitely not what you want; that's just meant as an example).

So, back to your questions, enumerating the answers:

1. Yes, but it's called a Lambda Authorizer instead of a Lambda Gateway.
2. Either an Allow or Deny policy for valid/invalid tokens respectively.
3. If the Lambda Authorizer responds with an Allow policy, API Gateway will then invoke the target (which can be a Lambda function, an SNS topic, an HTTP endpoint - this is likely your case - and so on). The authorizer just acts as an interceptor and decides whether to proxy the call to the target or not.
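A rough Python sketch of such a Lambda Authorizer is below; the validate_jwt helper is a placeholder for whatever token validation your .NET stack currently does (signature, issuer, expiry), and the resource ARN is taken from event['methodArn'] as in a TOKEN-type authorizer:

def validate_jwt(token):
    # Placeholder: verify signature, issuer, audience and expiry here.
    return token == 'valid-token'  # hypothetical check for the sketch

def handler(event, context):
    token = event.get('authorizationToken', '')
    effect = 'Allow' if validate_jwt(token) else 'Deny'

    return {
        'principalId': 'user',  # whatever identifies the caller
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': effect,
                'Resource': event['methodArn'],
            }],
        },
    }

With an Allow effect API Gateway forwards the request to your Dockerized API; with Deny the caller gets a 403 without your API ever being hit.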
I have my Web APIs hosted in Docker. My Angular client will send a JWT token to access any of these APIs. I wanted to use the AWS API Gateway feature to add an authorization check before calling the API the client requested. From the docs I see that we can leverage the Lambda Authorizer concept to achieve this. But then again I thought: why use a Lambda Authorizer when I can come up with a .NET Core API which can validate the user?

1. Does the Lambda Authorizer make sense for my case?
2. If it does, what should the output of the Lambda Authorizer be? A simple true/false which says whether the token is valid or not? I see that this is what the response should/might look like - how should this translate to my case?

{
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Resource": [
          "arn:aws:execute-api:us-east-1:1234567:myapiId/staging/POST/*"
        ],
        "Effect": "Allow"
      }
    ]
  },
  "principalId": "Foo"
}

3. What should happen in API Gateway after the Lambda Authorizer has executed? Who calls my actual API which the client requested?
Using AWS Lambda Authorizer in API Gateway
The .serverless folder gets re-generated every time you deploy and is just a build artifact; you can safely delete or ignore it.
Will I need to push the .serverless folder to my repo and keep it updated to allow for a frictionless sls deployment or can I discard it at any time?
Is the .serverless folder needed for trouble-free execution?
This is not possible if you use kubectl to deploy your Kubernetes manifests. However, if you write a Helm chart for your application, it is possible. Helm uses a packaging format called charts; a chart is a collection of files that describe a related set of Kubernetes resources in the form of templates. In the ingress.yaml template you can write such config using a range block and put the variable values in values.yaml.

In your case it will look something like below:

spec:
  rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .name }}.dev.cloud
    http:
      paths:
      - path: {{ default "/" .path | quote }}
        backend:
          serviceName: {{ .name }}
          servicePort: 8080
  {{- end }}

and the values.yaml will have:

ingress:
  hosts:
    - name: abc
    - name: xyz
My current ingress looks something like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: web1.dev.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: web1
          servicePort: 8080

Meaning that the first part of the host will always match the serviceName. So for every web pod I would need to repeat the above like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: web1.dev.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: web1
          servicePort: 8080
  - host: web2.dev.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: web2
          servicePort: 8080

I was just wondering if there is some support for doing the following:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: $1.dev.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: $1
          servicePort: 8080
Variables in ingress host carried over to service name
You need to return a header in the response, e.g. in Python:

return {
    "statusCode": 200,
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps(body)
}
I'm uploading an image to S3 through a Lambda, and everything works well with no errors, but the response from API Gateway is 500 Internal Server Error. I configured my API Gateway following this tutorial: Binary Support for API Integrations with Amazon API Gateway. My Lambda receives the base64Image, decodes it and successfully uploads it to S3.

This is my Lambda code:

def upload_image(event, context):
    s3 = boto3.client('s3')
    b64_image = event['base64Image']
    image = base64.b64decode(b64_image)
    try:
        with io.BytesIO(image) as buffer_image:
            buffer_image.seek(0)
            s3.upload_fileobj(buffer_image, 'MY-BUCKET', 'image')
        return {'status': True}
    except ClientError as e:
        return {'status': False, 'error': repr(e)}

This is what I'm receiving: { "message": "Internal server error" }, with a 500 status code.

Note: I'm not using Lambda proxy integration.
AWS API Gateway - Lambda - Internal Server Error
It looks like you can't run more than one AWS service using the moto standalone server. If you want, say, both the ec2 and acm services to be served by moto, run both these commands:

moto_server ec2 -p 5000 -H 0.0.0.0
moto_server acm -p 5001 -H 0.0.0.0

However, if you want multiple AWS services for testing, you could consider localstack. It claims to internally use moto and a few other open source applications, though it has a few limitations, such as the ACM service not being available and the implementation of a few AWS APIs varying slightly.
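If you go the two-port route, your test clients just need to point at the right port per service. A small Python/boto3 sketch, assuming the two moto_server processes above are running locally (note this won't share state between the two servers, which is the limitation described in the question):

import boto3

# Each moto_server instance mocks one service on its own port.
ec2 = boto3.client('ec2', region_name='us-east-1',
                   endpoint_url='http://localhost:5000',
                   aws_access_key_id='test', aws_secret_access_key='test')
acm = boto3.client('acm', region_name='us-east-1',
                   endpoint_url='http://localhost:5001',
                   aws_access_key_id='test', aws_secret_access_key='test')

print(ec2.describe_instances()['Reservations'])
print(acm.list_certificates()['CertificateSummaryList'])

From Java you would do the same by overriding the endpoint on each AWS SDK client.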
I am trying to run integration tests against AWS services, and to do this I chose moto. Because I am doing this in Java, I wanted to run moto_server and execute the tests against this mock. The problem I have is that moto_server allows only one service to be mocked, and I need a couple of them. I can launch a moto_server instance per service, but that way they will not share state (like EC2 instances or IAM roles). Is there another way I can mock more than one service with moto_server?
How to run multiple AWS services with moto_server
You have to do it in your user data. See https://forums.aws.amazon.com/thread.jspa?threadID=52601

#!/bin/bash
# configure AWS
aws configure set aws_access_key_id {MY_ACCESS_KEY}
aws configure set aws_secret_access_key {MY_SECRET_KEY}
aws configure set region {MY_REGION}

# associate Elastic IP
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ALLOCATION_ID={MY_EIP_ALLOC_ID}
aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID --allow-reassociation
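Rather than baking access keys into user data, an alternative sketch is to give the instance an IAM role with ec2:AssociateAddress and run a small Python script at boot; the region and allocation ID below are hypothetical placeholders:

import boto3
import urllib.request

# Instance metadata gives us our own instance ID.
with urllib.request.urlopen(
        'http://169.254.169.254/latest/meta-data/instance-id', timeout=2) as resp:
    instance_id = resp.read().decode()

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region
ec2.associate_address(
    InstanceId=instance_id,
    AllocationId='eipalloc-0123456789abcdef0',  # hypothetical EIP allocation ID
    AllowReassociation=True,
)

The credentials then come from the instance profile instead of aws configure, which avoids shipping long-lived keys in the launch configuration.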
What's the best approach to use the same IP for an autoscaling group without using a load balancer? I need to use a Route53 subdomain to route to an instance in the autoscaling group. For now I'm trying to associate an Elastic IP with a network interface. I have this:

resource "aws_eip" "one_vault" {
  vpc                       = true
  network_interface         = "${aws_network_interface.same.id}"
  associate_with_private_ip = "10.0.1.232"
}

resource "aws_network_interface" "same_ip" {
  subnet_id   = "subnet-567uhbnmkiu"
  private_ips = ["10.0.1.16"]
}

resource "aws_launch_configuration" "launch_config" {
  image_id = "${var.ami}"
  key_name = "${var.keyname}"
}
Elastic IP for autoscaling group terraform
It's possible that EC2Config is disabled. The empty output you are getting is likely caused by this. It could be an issue with the EC2Config service: either a misconfigured configuration file or Windows failing to boot properly.

For recovery, I'd say try the password used on the machine the AMI was created from, and if it's not a custom-made AMI, try a different one altogether. I could be more helpful if you share the AMI ID.

Additionally, if you are looking to recover data on an EBS volume of the server, you can follow this.
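Once PasswordData does come back non-empty, you can decrypt it yourself with the key pair instead of the console. A rough Python sketch, assuming boto3 and the cryptography package, with the instance ID and .pem path as placeholders (the region matches the question):

import base64
import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client('ec2', region_name='ap-southeast-2')
blob = ec2.get_password_data(InstanceId='i-0123456789abcdef0')['PasswordData']  # hypothetical ID

with open('my.pem', 'rb') as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# The Windows password is RSA-encrypted (PKCS#1 v1.5) with the key pair's public key.
print(key.decrypt(base64.b64decode(blob), padding.PKCS1v15()).decode())

This only works after EC2Config/EC2Launch has actually generated and encrypted a password, which is the part that appears broken on your instance.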
I launched a Windows EC2 instance on AWS but I can't get the password to log in. I keep getting this warning message even one day after launching the server:

Password not available yet. Please wait at least 4 minutes after launching an instance before trying to retrieve the auto-generated password. Note: Passwords are generated during the launch of Amazon Windows AMIs or custom AMIs that have been configured to enable this feature. Instances launched from a custom AMI without this feature enabled use the username and password of the AMI's parent instance.

I also tried the command line below:

$ aws --profile ie ec2 get-password-data --instance-id i-xxxxx --priv-launch-key my.pem --region ap-southeast-2

but it returns an empty password:

{
    "InstanceId": "i-xxxx",
    "PasswordData": "",
    "Timestamp": "2019-08-05T23:12:04.000Z"
}

So how can I get the password for this EC2 instance? I have tried stopping/starting the instance but it doesn't help. One possible reason is that the instance is launched from a customised AMI, but I also don't know that AMI's password. Is there a way to reset the password?
Why can't I get windows password from AWS?
1. Yes, I believe that's correct. To do client authentication over TLS, you need to provide the ARN of your private CA (set up with AWS ACM Private CA) at the time the cluster is created - and you have to use the AWS command-line tool (aws kafka create-cluster ...) to create the cluster. The UI (last time I looked) didn't have anywhere to specify that ARN.

2. I don't know - we bit the bullet and set up a private CA with ACM.

3. Nope. We're hoping that eventually AWS will integrate IAM so you can authenticate as an IAM user instead of with a client certificate, but that's not where it stands today. Today, it's client certificates only for authentication.
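For reference, here is a rough boto3 sketch of the cluster creation with the private CA ARN supplied for TLS client authentication; all names, subnets and the CA ARN are hypothetical placeholders, and the same fields map onto the aws kafka create-cluster CLI flags:

import boto3

kafka = boto3.client('kafka')

kafka.create_cluster(
    ClusterName='my-msk-cluster',            # assumed name
    KafkaVersion='2.2.1',
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        'InstanceType': 'kafka.m5.large',
        'ClientSubnets': ['subnet-aaaa1111', 'subnet-bbbb2222', 'subnet-cccc3333'],
    },
    ClientAuthentication={
        'Tls': {
            # The ACM Private CA that will sign your client certificates.
            'CertificateAuthorityArnList': [
                'arn:aws:acm-pca:eu-west-1:123456789012:certificate-authority/xxxx'
            ]
        }
    },
)

As the answer notes, client authentication has to be specified at creation time, so an existing cluster without it would have to be recreated.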
I want to be able to authenticate/authorize clients to produce/consume messages on certain topics. They would be part of our VPN (incl. AWS). As I understand the available documentation, the only option to do this is to issue client certificates and set up ACLs based on the clients' DNs? Unfortunately I was not able to use my private CA (that I've created on my Linux laptop) to create client certs. So the following questions arise:

1. Is it correct that I need to set up an AWS-hosted CA (ACM PCA)? That would result in almost twice the setup costs, incl. the minimum broker configs.
2. Could I proxy the outside world into the MSK cluster via something like the "Kafka REST proxy" from Confluent?
3. Am I missing something? Is there an easier way built into AWS?

Please enlighten me :) Thanks in advance, Marcel
AWS MSK User/Password Authentication/Authorization