So basically you can return two things from the AWS::Serverless::Function resource:

    customresourceoutput:
      Value: !GetAtt creates3bucketlambda.Arn    # ARN of the Lambda function

and

    customresourceoutput:
      Value: !Ref creates3bucketlambda           # name of the Lambda function

More details about serverless function outputs are here. If you're interested in AWS::CloudFormation::CustomResource, there is also documentation for that. You can use Fn::GetAtt on the custom resource like:

    customresourceoutput:
      Value: !GetAtt customResource.responseKeyName   # name of the key from the custom resource response
I wanted to output a value I get from a CloudFormation Custom Resource. I'm definitely returning the value, but I wasn't sure how to reference it in an output. This is my template.yml:

    Outputs:
      customresourceoutput:
        Value: !GetAtt creates3bucketlambda.myvalue
    Resources:
      creates3bucketlambda:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs6.10
          CodeUri: setups3bucket
          MemorySize: 512
          Timeout: 300
          Policies:
            - AWSLambdaBasicExecutionRole
            - AmazonS3FullAccess
      Creates3BucketLoginPage:
        Type: Custom::AppConfiguration
        Properties:
          ServiceToken: !GetAtt creates3bucketlambda.Arn
          aOrg: !Ref aOrg

The error I get is:

    Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Template error: resource creates3bucketlambda does not support attribute type myvalue in Fn::GetAtt

I'm not sure whether I should use !Sub or !Ref.
AWS CloudFormation, Output value from Custom Resource
The AWS SQS queue URL is provided in the following format (see this link): https://{REGION_ENDPOINT}/queue.|api-domain|/{YOUR_ACCOUNT_NUMBER}/{YOUR_QUEUE_NAME}. If you had given us the code you used to retrieve the URLs in Java, we could have seen what the real root cause is. What you have got as the response for your Java SDK call is an AWS endpoint (see this link). To reduce data latency in your applications, most Amazon Web Services offer a regional endpoint to make your requests. An endpoint is a URL that is the entry point for a web service. For example, https://dynamodb.us-west-2.amazonaws.com is an entry point for the Amazon DynamoDB service. You can try to retrieve the SQS URL using the Java SDK with the following code. This is for a single queue URL given the queue name (see this doc):

    AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    String queue_url = sqs.getQueueUrl(QUEUE_NAME).getQueueUrl();

You can also use the listQueues method to retrieve the queue URL list, as given in this doc. If you are still getting the SQS endpoint: if you keep on getting the SQS queue URL endpoint instead of the actual SQS queue URL, you can use the endpoint to access and manipulate the queue as you require. Have a look at this example written in Java, which will help you understand how to use the endpoint and create a workaround with it.
When using AWS CLI version aws-cli/1.15.50, Python/3.7.0, Darwin/16.7.0, botocore/1.10.49, the command aws sqs list-queues returns a list of the format https://us-west-2.queue.amazonaws.com/<a number>/<queue-name>. When I call the equivalent from the Java SDK (SDK version 1.11.344, called from Scala version 2.12.6), I get a list of the format https://sqs.us-west-2.amazonaws.com/<a number>/<queue-name>. PLEASE NOTE: the number is the same in both URLs, as are the corresponding queue names. The differences are: the CLI begins with the region (us-west-2) while the SDK begins with sqs. After the region, the CLI's domain name is .queue.amazonaws.com but the SDK has just .amazonaws.com (the SDK does not have the token queue.). I get the same results when using get-queue-url and getQueueUrl (either overload in the SDK). Messages sent using aws sqs send-message with the URL returned by the CLI are not received by the Scala program using the URL returned by the AWS Java SDK. What am I doing wrong?
SQS Queue URLs different from AWS CLI than from Java SDK
You need to update the name servers (NS records) of the Route 53 hosted zone in the new AWS account at your domain registrar (basically, where you purchased the domain). Once that is done, wait for some time; your domain will then work according to the domain settings in your new account.
I have 2 AWS accounts, say A and B. I have to move the EC2 instances and the domains & records to account B. I was able to make an AMI from account A and successfully launch it in account B. I was also able to move the domains to account B and create new records in account B. But the domain still points to the account A instance. Is there any delay in the process, and by how much? Thanks in advance.
How to make the Route53 domain point to the newly created instance in AWS
I know you don't want to use a DB, but DynamoDB can work well for this kind of thing. If you have something you can use as a good partition key, then it will still be quite performant. It will still add a very small amount of time to your Lambda run time and, of course, you will be charged for your DynamoDB capacity and data. I use this successfully to discard duplicate messages. The other thing that might be worth looking into would be ElastiCache, which has Memcached and Redis flavours. This would be faster - if performance is a particular focus - but is not persistent like DynamoDB.
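As a rough sketch of the DynamoDB approach (the table name processed_messages, key name messageId and TTL attribute are made up for illustration; TTL must be enabled on the table for entries to expire), a conditional write lets you detect duplicates atomically:

    import time
    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb")

    def is_duplicate(message_id, table_name="processed_messages", ttl_seconds=3600):
        """Return True if this message ID was already processed."""
        try:
            dynamodb.put_item(
                TableName=table_name,
                Item={
                    "messageId": {"S": message_id},
                    # TTL attribute so old entries expire automatically
                    "expiresAt": {"N": str(int(time.time()) + ttl_seconds)},
                },
                # Fails if an item with this messageId already exists
                ConditionExpression="attribute_not_exists(messageId)",
            )
            return False
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return True
            raise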
Since several of the triggers for AWS Lambda can only guarantee message delivery "at least once" (SQS and IoT with QoS=1), I wonder what's the best way to identify a duplicate message and ignore it.I can see that I currently get several duplicate messages, triggering my lambdas twice, causing noise and invalid data as a consequence.In my client, I solve it by just storing a list of message IDs that I've processed, but in the Lambdas, I have nowhere to store a state.Of course I could maintain a DB table of processed message IDs but it seems like overkill to me (and probably adds extra billed runtime to the lambdas). A simple key/value store service in memory would be enough.What other solutions are you guys using?
AWS Lambda - how to identify duplicate messages
In the Listener configuration, you are forwarding the default HTTP port 80 to port 30987 on the back-end server. So this tells me that the back-end server is listening for HTTP requests on port 30987. You then added an SSL listener on the default port 443, but you are forwarding that to port 443 on the back-end server. Do you have something on your back-end listening on port 443 in addition to 30987? The most likely fix for this is to change the SSL listener on the load balancer to forward to port 30987 on the back-end by setting that as the "Instance Port" setting.
I have an AWS LoadBalancer which was created using Kube, Kops and AWS. The protocol type for the ELB is tcp. This works fine for http requests, meaning I can access my site with http://testing.example.com. Now I tried to add SSL for this ELB using ACM (Certificate Manager). I added my domain details example.com and *.example.com by requesting a public certificate. It was created successfully and domain validation also succeeded. Then I tried to add this SSL to my ELB like below: I went to my ELB and selected the ELB, then went to the Listeners tab and added SSL to it like below, and the ELB description is like below. I cannot access https://testing.example.com; it hangs for a few minutes and nothing happens. What is going on here? Hope you can help with this.
added SSL does not work for AWS Load Balancer using ACM
Just throw an exception and make sure your function is idempotent: any Lambda function invoked asynchronously is retried twice before the event is discarded. See Dead Letter Queues in the AWS Lambda Developer Guide.
I have a Lambda function configured for receiving events whenever a file is dropped into my S3 bucket. My requirement is to configure a dead letter queue for the Lambda function, so that if any failure happens, the event should go to the DLQ. My question is: what response should be given from the Lambda function so that it will push the event to the DLQ? Example scenario: I have an event validator module within the Lambda; if validation fails, I want to move the event to the DLQ configured for my Lambda function.
How to give error response in aws lambda function when it is configured as s3 notification handler
Lex always passes the entire user's input in the request under the field inputTranscript. From the Lex-Lambda event format: inputTranscript – The text used to process the request. If the input was text, the inputTranscript field contains the text that was input by the user. If the input was an audio stream, the inputTranscript field contains the text extracted from the audio stream. This is the text that is actually processed to recognize intents and slot values. This is the format of the Lex request received by Lambda as event:

    {
      "currentIntent": {
        "name": "intent-name",
        "slots": {...},
        "slotDetails": {...},
        "confirmationStatus": "(None, Confirmed, or Denied)"
      },
      "bot": {...},
      "userId": "XXXX",
      "invocationSource": "(FulfillmentCodeHook or DialogCodeHook)",
      "outputDialogMode": "(Text or Voice)",
      "messageVersion": "1.0",
      "sessionAttributes": {...},
      "requestAttributes": {...},
      "inputTranscript": "Text of full user's input utterance"
    }

So in your Python Lambda, you can access the inputTranscript with:

    user_input = event['inputTranscript']
Consider the following scenario (U = User, L = Lex):

    U1: Hello
    L1: Hello, please give me your name to get started.
    U2: Bob
    L2: Bob, consider the following question: What colour is the sky?
    U3: The sky is usually blue but sometimes the sky is red.

The system reads a database of questions and randomly chooses one to present to the user. This is done via AWS Lambda, and the question is presented to the user in message L2. Is there any way to say that the next response from the user should be treated as their answer to the question, without defining utterances etc.? This is because the questions that the bot sends across can vary to a great degree. I need a way to pass all of block U3 back to Lambda for processing. How would I achieve this regardless of context? (I am using Python for Lambda.) Thanks
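A minimal Python handler sketch along these lines (the session-attribute key answer_text and the reply wording are made up for illustration; the dialogAction shape is the Lex V1 response format):

    def lambda_handler(event, context):
        # Full raw text of the user's last utterance, regardless of intent/slots
        user_input = event.get("inputTranscript", "")

        # Stash it in session attributes so later turns (or your grading code) can use it
        session_attributes = event.get("sessionAttributes") or {}
        session_attributes["answer_text"] = user_input

        return {
            "sessionAttributes": session_attributes,
            "dialogAction": {
                "type": "Close",
                "fulfillmentState": "Fulfilled",
                "message": {
                    "contentType": "PlainText",
                    "content": "Thanks, I recorded your answer.",
                },
            },
        }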
AWS Lex + Lambda - Intercepting all of next user response regardless of context - without defining sample utterances?
I don't have a full terraform-only solution to this. The approach I have is to run a small script to get the current desired capacity, set a variable, and then use that variable in the ASG.

    handle-desired-capacity:
        @echo "Handling current desired capacity"
        @echo "---------------------------------"
        @if [ "$(env)" == "" ]; then \
            echo "Cannot continue without an environment"; \
            exit -1; \
        fi
        $(eval DESIRED_CAPACITY := $(shell aws autoscaling describe-auto-scaling-groups --profile $(env) | jq -SMc '.AutoScalingGroups[] | select((.Tags[]|select(.Key=="Name")|.Value) | match("prod-asg-app")).DesiredCapacity'))
        @if [ "$(DESIRED_CAPACITY)" == '' ]; then \
            echo Could not determine desired capacity.; \
            exit -1; \
        fi
        @if [ "$(DESIRED_CAPACITY)" -lt 2 -o "$(DESIRED_CAPACITY)" -gt 10 ]; then \
            echo Can only deploy between 2 and 10 instances.; \
            exit -1; \
        fi
        @echo "Desired Capacity is $(DESIRED_CAPACITY)"
        @sed -i.bak 's!desired_capacity = [0-9]*!desired_capacity = $(DESIRED_CAPACITY)!g' $(env)/terraform.tfvars
        @rm -f $(env)/terraform.tfvars.bak
        @echo ""

Clearly, this is as ugly as it gets, but it does the job. I am looking to see if we can get the name of the ASG as an output from the remote state that I can then use on the next run to get the desired capacity, but I'm struggling to understand this well enough to make it useful.
I have seen multiple articles discussing blue/green deployments and they consistently involve forcing recreation of the Launch Configuration and the Auto Scaling Group. For example: https://groups.google.com/forum/#!msg/terraform-tool/7Gdhv1OAc80/iNQ93riiLwAJ This works great in general, except that the desired capacity of the ASG gets reset to the default. So if my cluster is under load, there will be a sudden drop in capacity. My question is this: is there a way to execute a Terraform blue/green deployment without a loss of capacity?
How to implement blue/green deployments in AWS with Terraform without losing capacity
Lambda writes logs to a buffer which should be flushed after the Lambda completes, but it looks like for some reason Lambda cannot flush it. I notified the AWS team about it a few weeks ago, but the issue is still not fixed.
I have made a simple AWS Lambda and deployed it with AWSLambdaFullAccess permissions. There were some logs after invocation. The next day I invoked the Lambda again multiple times; all executions were successful but I didn't see any new logs in CloudWatch. I saw some logs only after redeploying the Lambda. Here is the code:

    public string FunctionHandler(string input, ILambdaContext context)
    {
        LambdaLogger.Log(input);
        return input.ToLower();
    }
AWS lambda does not write logs in the CloudWatch
Unfortunately there is no "native" way to do it. You would need to write a bash script that loops through the changed files and calls sls deploy -s production -f <function-name> for each of them.
I have a Serverless Framework service with (say) five AWS Lambda functions using Python. Using GitHub, I have created a CodePipeline for CI/CD. When I push code changes, it deploys all the functions even if only one function is changed. I want to avoid the deployment of all functions; the CI/CD should determine the changed function and deploy only that one. The rest of the functions should not be deployed again. Moreover, is there any way to deal with such problems using AWS SAM? At this stage I have the option to switch to SAM by quitting Serverless Framework.
How to avoid deployment of all five functions in a Serverless Framework service if only one function is changed
Yes. From the AppDrag Dashboard, you open the code editor, where you will find the folders of your site. Right click the root folder and you should see the download as zip option.
I have used the AWS hosting of AppDrag for my client's project, but he now wants to take it to his own hosting service. Is there a way to export the site like in a zipped folder so I can send it to him?
How do I export my project on AppDrag?
As of November 2019 it is possible to convert existing DynamoDB tables to global tables without data loss. "Starting today, you can convert your existing DynamoDB tables to global tables with a few clicks in the AWS Management Console, or using the AWS Command Line Interface (CLI), or the Amazon DynamoDB API. Previously, only empty tables could be converted to global tables. You had to guess your regional usage of a table at the time you created it. Now you can go global, or you can extend existing global tables to additional regions at any time." https://aws.amazon.com/blogs/aws/new-convert-your-single-region-amazon-dynamodb-tables-to-global-tables/

To convert a DynamoDB table to a global table in the AWS console:
1. Navigate to your DynamoDB table in the AWS console.
2. Select the Global Tables tab.
3. If prompted to enable Streams, enable Streams. Streams are required for global tables.
4. Click Add Region to add a replica region and select it.

In my experience it has taken 10-20 minutes for a global table to be created.
AWS does not allow converting existing DynamoDB tables to Global DynamoDB Tables, so I need to write some code or find an existing tool to do it. The existing tables have a lot of data; because of this, an export-import process will take a long time, and downtime is not an option. I have an approximate plan of action for the migration:
1. Create global tables.
2. Change application logic to start writing to the global tables. When a read request comes in, first try the global tables; if there is no data, read from the normal tables.
3. Copy data from the normal tables to the global tables.
4. Change application logic again to write and read only from the global tables.
5. Remove the normal tables.
I'm wondering if someone has done a similar migration? How do you simplify reading from two tables (global and normal)? Does any plugin/lib/wrapper exist for boto, pynamodb, or another lib to do this? Or if you did the migration using another method, please share it.
How can I migrate DynamoDB tables to Global DynamoDB tables with minimum downtime?
You need to specify the java.library.path JVM property, either by modifying the JVM command-line options:

    JAVA_OPTS="$JAVA_OPTS -Djava.library.path=/var/task/lib/"
    java $JAVA_OPTS ...

or by modifying it directly in your code:

    System.setProperty("java.library.path", "/var/task/lib/");
    System.loadLibrary("vips");   // loadLibrary takes the base name, without the "lib" prefix or ".so" suffix

Also, you can use the JNA library. JNA provides functionality to auto-unpack and load native libraries from the JAR archive (resources) added to the JVM class path. It includes selecting the correct operating-system and CPU-architecture binaries.
In my Java serverless project I have to call a native library for image processing (libvips). I am using Gradle to create a zip file and send all the dependencies, including the native libraries, to the lib folder:

    task buildZip(type: Zip) {
        archiveName = "${project.name}.zip"
        from compileJava
        from processResources
        from('.') {
            include 'lib/**'
            include 'bin/**'
        }
        into('lib') {
            from configurations.runtime
        }
    }

In the generated zip file, in the lib folder, all the libraries are there (jars/native/etc.). After deploying the function through serverless deploy I am not able to load the libvips.so library using Native.loadLibrary("/var/task/lib/libvips.so", Object.class). Apparently only the Java dependencies are located in /var/task/lib/, and not the native libraries. Is there another path where AWS stores native libraries?

EDIT: Exception being thrown:

    java.lang.UnsatisfiedLinkError: Unable to load library '/var/task/lib/libvips.so': Native library (var/task/lib/libvips.so) not found in resource path ([file:/var/task/, file:/var/task/lib/aopalliance-repackaged-2.5.0-b42.jar, file:/var/task/lib/asm-all-repackaged-2.5.0-b42.jar, file:/var/task/lib/aws-java-sdk-core-1.11.336.jar, file:/var/task/lib/aws-java-sdk-kms-1.11.336.jar, file:/var/task/lib/aws-java-sdk-s3-1.11.336.jar, file:/var/task/lib/aws-lambda-java-core-1.1.0.jar,....
How to load native libraries in AWS Lambda?
I was also facing the same issue. The solution is very simple: bind your inspector to 0.0.0.0 instead of 127.0.0.1. So change your package.json scripts to something like this:

    "scripts": {
        "debug": "node --inspect=0.0.0.0:9229 ./bin/www",
        ...
    }

Reference: https://medium.com/@auchenberg/introducing-remote-debugging-of-node-js-apps-on-azure-app-service-from-vs-code-in-public-preview-9b8d83a6e1f0
I have a Node.js application hosted in an AWS EC2 instance. I ssh into the host with a .pem file for authentication. Is there any way I could debug this code in VS Code? I see there is remote debugging in VS Code; there is configuration to specify port and host, but no option to specify the pem file. How should I configure VS Code to debug?
Remote debugging in Visual Studio Code
If you are running Debian or Ubuntu, just install Icecast from the official Xiph.org repositories: https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories) It has TLS support built in. The certificate needs to be provided as a combined file, with both the public and private key in the same file. In the case of Let's Encrypt, some ACME clients can natively produce that sort of output. As you don't specify whether you control the origin server or need to relay an external server, I won't venture into further explanations; please clarify your question if you need specific aspects covered.
I need to restream several existing mp3 streams over https. I have a current stream with the URL http://cdn.stream.com/radio.mp3 and I would like to have it as https://cdn.newstream.com/radio.mp3. I have seen several solutions such as:
- rebuild my own cast with Icecast
- nginx proxy
- stunnel
- CloudFront (could be expensive)
- or a paid service: https://www.autopo.st/secure-streams/
But I couldn't find a simple tutorial with a cheap solution using AWS. Is there any way to secure an existing stream in a cheap way using AWS? Thanks,
Restream a mp3 stream over https with ssl
The simple answer is that you can't do that. You could configure a Lambda subscriber to output the messages to a log or something and then watch that from the CLI. If you want to subscribe an arbitrary client to a queue of messages, then SQS might be more suitable.
I'm looking for a way to listen arbitrarily to my SNS topic, and in parallel trigger an SNS message from my code base. Next I need to test if that message was sent correctly.

    code-that-listens-and-exits-when-it-gets-hello-world-message
    aws sns publish --topic-arn arn:aws:sns:ap-southeast-1:123456789:hello --message "Hello World!"

I find plenty of information on how to subscribe to a topic from the CLI, but I am puzzled how to actually listen or test for the event coming through the topic. Which protocol should I be using? I don't want to go down the route of checking that a subscribed email endpoint contains the message in its inbox.
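If you go the SQS route, a rough boto3 sketch of that test could look like this (the queue name sns-test-listener is made up; the topic ARN is the one from the question):

    import json
    import boto3

    sns = boto3.client("sns", region_name="ap-southeast-1")
    sqs = boto3.client("sqs", region_name="ap-southeast-1")

    topic_arn = "arn:aws:sns:ap-southeast-1:123456789:hello"  # from the question

    # Create a throwaway queue and subscribe it to the topic
    queue_url = sqs.create_queue(QueueName="sns-test-listener")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Allow the topic to send messages to the queue
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    }
    sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

    # Now publish (from code or the CLI) and poll for the message
    sns.publish(TopicArn=topic_arn, Message="Hello World!")
    messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10).get("Messages", [])
    assert any("Hello World!" in m["Body"] for m in messages), "message not received"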
How to listen to SNS messages from the CLI to test they have been sent?
It is clear that the cloud-init mount module does not support the efs "device" name.
I am deploying an Amazon Linux AMI to EC2, and have the following directive in my user_data:

    packages:
      - amazon-efs-utils
    mounts:
      - [ "fs-12345678:/", "/mnt/efs", "efs", "tls", "0", "0" ]

I am expecting this to add the appropriate line to my /etc/fstab and mount the Amazon EFS filesystem. However, this does not work. Instead I see the following in my /var/log/cloud-init.log log file:

    May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: Attempting to determine the real name of fs-12345678:/
    May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: Ignoring nonexistent named mount fs-12345678:/
    May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: changed fs-12345678:/ => None

If I manually add the expected entry to my /etc/fstab, I can indeed mount the filesystem as expected. I've found a couple of bugs online that talk about similar things, but they're all either not quite the same problem, or they claim to be patched and fixed. I need this filesystem to be mounted by the time I start executing scripts via the cloud_final_modules stage, so it would be highly desirable to have the mounts: directive work rather than having to do nasty hacky things in my later startup scripts. Can anybody suggest what I am doing wrong, or whether this is just not supported?
cloud-init cc_mounts.py ignores AWS EFS mounts
That "extension" part you are referring to is mandatory: it is the country code, and AWS SNS SMS follows the E.164 format. You can read more here. From the above link: When you send an SMS message, specify the phone number using the E.164 format. E.164 is a standard for the phone number structure used for international telecommunication. Phone numbers that follow this format can have a maximum of 15 digits, and they are prefixed with the plus character (+) and the country code. For example, a U.S. phone number in E.164 format would appear as +1XXX5550100. HIH
    $args = array(
        "SenderID" => "SenderName",
        "AWS.SNS.SMS.SMSType" => "Transactional",
        "Message" => "Testing",
        'PhoneNumber' => '+91xxxxxxxxx',
    );

When I try to send SMS with the extension included, everything works fine. But when I remove the extension, no error pops up and I don't receive the message. Mobile Number = +919876543210, where +91 is the extension followed by the mobile number. Thanks in advance.
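For reference, the same publish expressed in Python/boto3 (the phone number is a placeholder; the question itself uses the PHP SDK, so this is only to illustrate the E.164 requirement):

    import boto3

    sns = boto3.client("sns", region_name="ap-south-1")

    # Phone number must be full E.164: '+' + country code + number
    sns.publish(
        PhoneNumber="+919876543210",
        Message="Testing",
        MessageAttributes={
            "AWS.SNS.SMS.SenderID": {"DataType": "String", "StringValue": "SenderName"},
            "AWS.SNS.SMS.SMSType": {"DataType": "String", "StringValue": "Transactional"},
        },
    )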
AWS SNS Service
I've found the answer and the solution: you have to install the Amazon Inspector agent on each EC2 instance in order to inspect them all using Amazon Inspector. Regarding Auto Scaling, I applied Amazon Inspector to the main EC2 servers and took an image from them (after inspecting all the EC2s and fixing all the issues). Then I configured the Auto Scaling group to launch from the new AMIs (the inspected AMIs).
I'm about to install and use Amazon Inspector. We have many EC2 instances behind an ELB, plus some EC2 instances are launched via Auto Scaling. My question: does Amazon Inspector do its work locally or globally? In other words, is the monitoring done only on the instance it is installed on, or can it be configured to include all the instances of the infrastructure? If Inspector has to be applied to every EC2 instance, can Auto Scaling be configured to launch the new instances with Inspector already installed on them, and if yes, how can I do that?
Installing Amazon Inspector Service
It's easier to run the training again on SageMaker. Otherwise, here are the steps you would have to do:
1. Take the checkpoint files generated during training and convert them into TensorFlow Serving models.
2. Zip them in a specific format and upload them to S3.
3. Then create the estimator as you have done above and do the inference.
If you want details on each of the specific steps above, do let me know, but if your dataset is not too big, I would say just retrain on SageMaker.
I have the following challenge with SageMaker: I've downloaded one of the tutorial notebooks (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_abalone_age_predictor_using_keras/tensorflow_abalone_age_predictor_using_keras.ipynb). I ran the training locally (successfully) after modifying the following line:

    abalone_estimator = TensorFlow(entry_point='abalone.py',
                                   role=role,
                                   training_steps=100,
                                   evaluation_steps=100,
                                   hyperparameters={'learning_rate': 0.001},
                                   train_instance_count=1,
                                   train_instance_type='local')

    abalone_estimator.fit(inputs)

I then wanted to deploy my model to AWS with the following line, but it seems the SDK deploys it locally (it doesn't fail, I just see it running on my machine):

    abalone_predictor = abalone_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

Any tips on how to either fix it so it gets deployed to AWS, or alternatively re-load my trained model and deploy it to AWS from scratch? Many thanks, Stefan
AWS SageMaker - training locally but deploying to AWS?
If I understand the question correctly, your question is about the SQS queue endpoint not having us-east-1 at the beginning of the endpoint. It is not a legacy endpoint. In AWS, there are certain services that do not allow you to specify a region in the endpoint. Whatever is routed to this endpoint, https://queue.amazonaws.com/1234567890/queue-name.fifo, will be automatically routed to the us-east-1 region in AWS. This is clearly mentioned in the AWS documentation on endpoints (link): Some services, such as IAM, do not support regions; therefore, their endpoints do not include a region. Some services, such as Amazon EC2, let you specify an endpoint that does not include a specific region, for example, https://ec2.amazonaws.com. In that case, AWS routes the endpoint to us-east-1.
I am using the AWS Java SDK as well as Spring Cloud AWS to use SES and SQS in my project. I am running into a small issue. When I try running my app I get the error:

    Error creating bean with name 'simpleMessageListenerContainer' defined in class path resource [org/springframework/cloud/aws/messaging/config/annotation/SqsConfiguration.class]: Invocation of init method failed; nested exception is com.amazonaws.services.sqs.model.AmazonSQSException: Credential should be scoped to a valid region, not 'queue'.

As a preface, in my app.properties file I have a property queue.endpoint=https://queue.amazonaws.com/1234567890/queue-name.fifo, and the endpoint is retrieved from the AWS CLI. I've read the AWS documentation and found out that this endpoint is a legacy endpoint. This property is used by the @SqsListener annotation from the Spring Cloud AWS library. I managed to avoid this issue by checking whether I was using a legacy endpoint and converting it into the non-legacy endpoint through a shell script, i.e. https://sqs.us-east-1.amazonaws.com/123456780/queue-name.fifo. I was wondering if the Spring Cloud AWS library has issues with using legacy endpoints. I noticed there were no issues for my other queues where the endpoints were https://us-east-2.queue.amazonaws.com/1234567890/queue2-name.fifo, however, so maybe it parsed the us-east-1 legacy endpoint incorrectly? I am also unsure if there were any configurations that needed to be done in my application to use the legacy endpoints.
AWS Java SDK SQSlistener endpoint issues
Try the command:

    source activate pytorch_p36

Reference: https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-pytorch.html

When you log in to the machine, the MOTD (Message of the Day) on the login screen displays the various source activate commands you can run. To find the source activate commands of other deep learning frameworks, see this reference: https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-conda.html
I'm trying to set up a Jupyter server using AWS EC2, starting with a Deep Learning AMI (Ubuntu) Version 7.0 AMI. It says that it comes with separate virtual environments: "Comes with latest binaries of deep learning frameworks pre-installed in separate virtual environments: MXNet, TensorFlow, Caffe, Caffe2, PyTorch, Keras, Chainer, Theano and CNTK." So I ssh'd into the instance and found a directory ~/anaconda3/envs/ which contains a bunch of folders like tensorflow_p36. But I was unable to find the activate files in them. There doesn't seem to be any other folder that looks remotely like a virtual env, so I'm stuck. Can someone help me? Thanks!
AWS Deep Learning AMI Virtual Environment Activation
To query your table using CreatedDate without knowing the ItemId, you can use global secondary index write sharding: add an attribute (e.g., ShardId) containing a (0-N) value to every item, which you will use as the global secondary index partition key. Depending on how your items are distributed against CreatedDate, you can set the ShardId so that access patterns are likely to be evenly distributed, for example: YYYY, YYYYMM or YYYYMMDD. Then you create a global secondary index with ShardId as the index partition key and CreatedDate as the index sort key. Knowing the partition key for your GSI (since the ShardId value is derived from CreatedDate), you can query the table for the 100 most recent items with the query's Limit parameter (or LastEvaluatedKey if your item set is larger than 1 MB of data). See Using Global Secondary Index Write Sharding for Selective Table Queries.
I have created a DynamoDB table with the name "sample". It has the columns below; CreatedDate holds the creation time of any record inserted into this table: Itemid, ItemName, ItemDescription, CreatedDate, UpdatedDate. I am creating a Python/Flask-based REST API which always fetches the last 100 records inserted into this table. This API (Python/Flask function) does not have any input parameters; it should just return the last records inserted into this table.

Question 1: What should be the partition key for this table? I am using the boto3 library to fetch records from DynamoDB. I prefer not to do a scan operation because it may cause performance issues. If I use the query function it asks for a partition key. Since this REST API does not accept any input, I am not sure how to use it.

Question 2: Has anyone faced a similar situation? And what was done to fix it?

Note: I am pretty much a newbie to DynamoDB, NoSQL and Boto.
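A rough boto3 sketch of this pattern, assuming a day-based shard (ShardId = YYYYMMDD), a GSI named ShardId-CreatedDate-index, and ISO-8601 CreatedDate strings; all of these names are illustrative, not part of the question's table:

    from datetime import datetime, timedelta
    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("sample")

    def latest_items(limit=100, days_back=7):
        """Collect the most recent items by walking day shards backwards."""
        items = []
        day = datetime.utcnow()
        for _ in range(days_back):
            shard_id = day.strftime("%Y%m%d")
            resp = table.query(
                IndexName="ShardId-CreatedDate-index",   # hypothetical GSI name
                KeyConditionExpression=Key("ShardId").eq(shard_id),
                ScanIndexForward=False,                   # newest first
                Limit=limit - len(items),
            )
            items.extend(resp["Items"])
            if len(items) >= limit:
                break
            day -= timedelta(days=1)
        return items[:limit]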
How to select a partition key for a DynamoDB query?
A load test could be as simple as pinging the health-check URL of your Beanstalk application. Write a threaded/concurrent program in the language of your choice to bombard your Beanstalk application with HTTP requests and elicit HTTP 200 responses. At a sufficiently high request rate (which you can reach by increasing the degree of concurrency), you should observe auto scaling kick in to launch new instances of your environment. For every request from the concurrent program, check that the response is a 200 OK; if not, log it as an anomaly. Measure the mean time between request and response to give you an indication of whether your responses are lagging. You could repeat the above process with the operations that you expect to be most popular for your application. One plausible such operation is users logging in, for which you would need to have set up a large number of dummy users in the (development) database. Then, instead of requesting the health-check URL, you would perform POST requests with per-user authentication credentials to the /login URL of your app.
I have my Laravel application (which serves REST APIs) configured in an Elastic Beanstalk environment. Currently I have configured one t2.medium EC2 instance under an Application Load Balancer. How do I load test to check the maximum number of concurrent users the environment can handle?
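A minimal sketch of such a concurrent test in Python (the URL, request count and thread count are placeholders to tune for your environment):

    import time
    import statistics
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://my-env.eu-west-1.elasticbeanstalk.com/health"  # placeholder health-check URL
    TOTAL_REQUESTS = 5000
    CONCURRENCY = 50

    def hit(_):
        start = time.time()
        try:
            resp = requests.get(URL, timeout=10)
            ok = resp.status_code == 200
        except requests.RequestException:
            ok = False
        return ok, time.time() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, range(TOTAL_REQUESTS)))

    latencies = sorted(t for ok, t in results if ok)
    failures = sum(1 for ok, _ in results if not ok)
    print(f"failures: {failures}/{TOTAL_REQUESTS}")
    if latencies:
        print(f"mean latency: {statistics.mean(latencies):.3f}s, "
              f"p95: {latencies[int(0.95 * len(latencies))]:.3f}s")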
Load test concurrent users in elastic beanstalk environment
Yes. You can define Catch handlers in your Step Function to handle failing Lambdas and rerun them, or do whatever you need on failures. Here's an example of triggering Step Functions from file uploads to S3: https://aws.amazon.com/blogs/compute/synchronizing-amazon-s3-buckets-using-aws-step-functions/ That said, if all you need is simple retry logic, you might be able to get there faster using SQS. When SQS clients receive messages from the queue, the messages are not actually removed immediately; rather, SQS puts a hold on them. If a client doesn't delete the messages within a certain time, those messages are put back into the queue. Unfortunately, there's currently no way to trigger Lambdas directly from SQS, but you can set up one or more CloudWatch Events rules to poll SQS at regular intervals.
Context: I have a pipeline of 6 Lambda functions (chained together), triggered by an SNS notification which is generated whenever a file lands on S3. This pipeline essentially takes the file (a few GBs), filters it (a Spark cluster is created to run the job, then deleted at the end), and inserts it into a DB. Lambdas orchestrate the flow.

Issues: If one Lambda fails, the chain breaks, so there is no effective failure handling. Secondly, we experience timeouts if a polling/computation step takes longer than 5 minutes, so no effective retry. It takes a long time to test/debug an issue if a Lambda fails. There is also no visibility; for example, how many jobs failed and how many passed? We don't know. Getting a bunch of SNS notifications by email is not very effective/helpful. If the chain breaks, we cannot perform cleanup operations like deleting the Spark cluster or housekeeping steps.

My questions: Is AWS Step Functions a good choice for solving the above issues? When would you not use the Step Functions service? If you cannot invoke a Step Function through SNS, then what would be the best way to call it whenever a file lands on S3? Feel free to share any other approach that easily and effectively tackles this use case.
Challenges of using AWS Lambda chain
The temporary copy of the object is accessible by fetching the object as usual, using the same object key. There isn't a separate location that is user-accessible. If you make a HEAD request for the object, the response includes the x-amz-restore header:

    x-amz-restore: ongoing-request="false", expiry-date="Fri, 23 Dec 2012 00:00:00 GMT"

The expiry-date is the date when the temporary copy will be removed, based on the number of days you specified when you initiated the restoration. The ongoing-request value of false means the restoration from Glacier is complete and the object is accessible, while true means the initial restoration operation is still in progress and the object is not yet ready for access. https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html It isn't possible to remove the temporary copy earlier than the number of days you specified, or to make it persist longer. If you aren't sure how long you will need an object to be accessible, you can restore it for only 1 day, and make a copy of the object and store it elsewhere when the restoration is complete.
When you restore files from AWS Glacier, they go to a temporary location in S3 and remain there for the period specified in the restore request. If that request was made by another tool, is there a way I can see the temporary storage location, and the time remaining in the temporary period? s3api list-objects and s3api list-objects-v2 just show the files still in "StorageClass": "GLACIER", but I know the temporary files still exist because the other tool can now work on them at STANDARD speeds, rather than glacial ones. While not urgent for me, this is surprising given you do get charged for S3 storage of the temporary objects - it seems unfair if you can't delete a massive restore you made once you're finished with it!
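A small boto3 sketch of that check across a bucket (the bucket name is a placeholder); head_object surfaces the same header as the Restore field:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-archive-bucket"   # placeholder

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if obj.get("StorageClass") != "GLACIER":
                continue
            head = s3.head_object(Bucket=bucket, Key=obj["Key"])
            restore = head.get("Restore")  # e.g. 'ongoing-request="false", expiry-date="..."'
            if restore and 'ongoing-request="false"' in restore:
                print(obj["Key"], "restored, temporary copy details:", restore)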
List files restored from AWS Glacier to S3, time remaining?
As far as I know there is no way to decrease the polling frequency if you are using an event source mapping. These are all the settings you can set (source: https://docs.aws.amazon.com/de_de/lambda/latest/dg/API_CreateEventSourceMapping.html):

    {
        "BatchSize": number,
        "Enabled": boolean,
        "EventSourceArn": "string",
        "FunctionName": "string",
        "StartingPosition": "string",
        "StartingPositionTimestamp": number
    }

So going with a scheduled event seems to be the only feasible option. An alternative would be to let the Lambda function sleep before exiting so it will only poll again after the desired time, but of course this means you are paying for that time, so it is probably not desirable.
I want to change the Kinesis stream polling frequency of an AWS Lambda function. I was going through this article: https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html but no luck. The only information it conveys is "AWS Lambda then polls the stream periodically (once per second) for new records." I was also looking for answers in threads, but no luck: https://forums.aws.amazon.com/thread.jspa?threadID=229037 There is another option, though, which can be used if a specific frequency is required: https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html So, my question is: can we decrease AWS Lambda's polling frequency to, let's say, 1-2 minutes? Or do we have to go with AWS Lambda with scheduled events?
Change AWS Lambda Kinesis stream polling frequency
An alternative would be to leave everything as it is (from the application perspective) and check Amazon CloudWatch Logs metric filters: "You use metric filters to search for and match terms, phrases, or values in your log events. When a metric filter finds one of the terms, phrases, or values in your log events, you can increment the value of a CloudWatch metric." Once you have defined your filter, you can create a CloudWatch alarm on the metric and get notified as soon as your defined threshold is reached :-)

Edit: I didn't check the link from @Renato Gama. Sorry. Just follow the instructions behind the link and your problem should be solved easily...
I have some AWS Lambda functions that run about twenty thousand times a day, so I would like to enable logging/alerting to monitor all the errors and exceptions. The CloudWatch log has too much noise, and it is difficult to see the errors. Now I'm planning to write the logs to an AWS S3 bucket, but this will have an impact on performance. What's the best way you would suggest to log and alert on the errors?
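A rough boto3 sketch of wiring that up (the log group name, filter pattern, metric namespace, alarm name and SNS topic ARN are all placeholders):

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    LOG_GROUP = "/aws/lambda/my-function"                            # placeholder
    SNS_TOPIC = "arn:aws:sns:us-east-1:123456789012:lambda-errors"   # placeholder

    # Count every log line containing ERROR, Exception or a timeout message
    logs.put_metric_filter(
        logGroupName=LOG_GROUP,
        filterName="lambda-errors",
        filterPattern='?ERROR ?Exception ?"Task timed out"',
        metricTransformations=[{
            "metricName": "LambdaErrorCount",
            "metricNamespace": "MyApp",
            "metricValue": "1",
            "defaultValue": 0,
        }],
    )

    # Alarm as soon as any error shows up in a 5-minute window
    cloudwatch.put_metric_alarm(
        AlarmName="lambda-errors",
        Namespace="MyApp",
        MetricName="LambdaErrorCount",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=[SNS_TOPIC],
    )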
Logging errors and exceptions on AWS Lambda
I think one approach would be to use a MySQL statement interceptor: https://github.com/spullara/mysql-connector-java/blob/master/src/main/java/com/mysql/jdbc/StatementInterceptor.java You can call AWSXRay.beginSubsegment() in the preProcess() method and then AWSXRay.endSubsegment() in postProcess(). It would be a nice addition to the AWS X-Ray SDK for Java, which is open source, in case you get it working. For reference, a Spring-based implementation for X-Ray: a DataSource-based interceptor. You can use the statementInterceptors property as part of the connection URL to intercept statements, as documented here: https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-configuration-properties.html

Update: My teammate pointed out that a newer version of StatementInterceptor is available: https://github.com/spullara/mysql-connector-java/blob/master/src/main/java/com/mysql/jdbc/StatementInterceptorV2.java You may want to use that.
I'm running some SQL queries in an AWS Lambda, and was hoping to use AWS X-Ray's tracing capabilities to get some more detailed information on these calls. This documentation shows examples of configuration with Spring and Tomcat, neither of which makes sense to use in my obviously serverless and supposed-to-be lightweight Lambda. Here's how I establish my connections currently:

    public Connection getDatabaseConnection(String jdbcUrl, String dbUser, String dbPassword) throws SQLException {
        return DriverManager.getConnection(jdbcUrl, dbUser, dbPassword);
    }

    try (Connection connection = getDatabaseConnection(getJdbcUrl(), getDbUser(), getDbPassword())) {
        try (ResultSet results = connection.createStatement().executeQuery("SELECT stuff FROM whatever LIMIT 1")) {
            return (results.getLong(1));
        }
    }

Is there any way to use AWS X-Ray SQL tracing in my use case?
Can I use AWS-XRay to trace MySQL queries without Tomcat or Spring?
It is not possible the way you are asking. The Crawler does not alter data; it only populates the AWS Glue Data Catalog with tables. Please see here for details: https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html If you want to do data cleaning using Athena/Glue before using the data, you need to follow these steps:
1. Map the data using a Crawler into a temporary Athena database/table.
2. Profile your data using Athena (SQL or QuickSight etc.) to get an idea of what you need to alter.
3. Use a Glue job to do the data transformation/cleaning/renaming/deduping using PySpark or Scala (see the sketch after this list).
4. Export the data into a new S3 location (.csv/.parquet etc.), potentially partitioned.
5. Run one more Crawler to map the cleaned data from the new S3 location into an Athena database.
The dedupe you are asking about happens in step 3.
I am trying to leverage Athena to run SQL on data that is pre-ETL'd by a third-party vendor and pushed to an internal S3 bucket. CSV files are pushed to the bucket daily by the ETL vendor. Each file includes yesterday's data in addition to data going back to 2016 (i.e. new data arrives daily but historical data can also change). I have an AWS Glue Crawler set up to monitor the specific S3 folder where the CSV files are uploaded. Because each file contains updated historical data, I am hoping to figure out a way to make the crawler overwrite the existing table based on the latest file uploaded, instead of appending. Is this possible? Thanks very much in advance!
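A minimal PySpark sketch of step 3 (the bucket paths and the dedupe key/ordering columns are placeholders; a real Glue job would typically use GlueContext, but plain Spark shows the idea):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("dedupe-daily-export").getOrCreate()

    # Read the raw vendor CSVs (placeholder path)
    raw = spark.read.option("header", "true").csv("s3://my-etl-bucket/raw/")

    # Keep only the latest row per business key (placeholder key/ordering columns)
    window = Window.partitionBy("record_id").orderBy(F.col("updated_at").desc())
    deduped = (
        raw.withColumn("rn", F.row_number().over(window))
           .filter(F.col("rn") == 1)
           .drop("rn")
    )

    # Write cleaned data to a new location for the second crawler (placeholder path)
    deduped.write.mode("overwrite").parquet("s3://my-etl-bucket/clean/")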
AWS Glue Crawler Overwrite Data vs. Append
It appears that your requirement is to compare the contents of two Amazon S3 buckets and identify files that are missing or differ between the buckets. To do this, you could use:
- Object name: this, of course, will help find missing files.
- Object size: a different size indicates different contents, and the size is given with each bucket listing.
- ETag: an ETag is an MD5 checksum of the contents of an object. If the same file has a different ETag, then the contents differ.
- Creation date: this is not actually a reliable way to identify differences, but it can be used with other metadata to decide whether you want to update a file. For example, if two files differ and the object in the destination bucket has a newer date than the object in the source bucket, you probably don't need to copy the file across. But if the source file was modified after the destination file, it's likely to be a candidate for re-copying.
Instead of doing all the above logic yourself (a sketch of it appears below), you can also use the AWS Command-Line Interface (CLI). It has an aws s3 sync command that compares files between source and destination, and then copies files that are modified or missing.
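A minimal boto3 sketch of the manual comparison by key, size and ETag (the bucket names are placeholders; note that objects uploaded via multipart upload have ETags that are not plain MD5 sums):

    import boto3

    s3 = boto3.client("s3")

    def list_bucket(bucket):
        """Map object key -> (size, etag) for every object in the bucket."""
        objects = {}
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                objects[obj["Key"]] = (obj["Size"], obj["ETag"])
        return objects

    source = list_bucket("source-bucket")            # placeholder
    destination = list_bucket("destination-bucket")  # placeholder

    missing = sorted(set(source) - set(destination))
    differing = sorted(k for k in set(source) & set(destination) if source[k] != destination[k])

    print("missing from destination:", missing)
    print("differing contents:", differing)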
I would like to compare the file contents of two S3-compatible buckets and identify files that are missing or that differ.Should I use checksum to do it instead?
s3 - Comparing files between two buckets
EFS is the service you are looking for. You can mount it on EC2 nodes running in multiple Availability Zones in the same region. The EC2 instances mount Amazon EFS file systems via the NFSv4 protocol, using standard operating system mount commands. You can also mount the EFS on every node of EMR through a bootstrap script. It will satisfy all three criteria for you.
Does AWS provide any storage solution that satisfies the following criteria?
1. Can be mounted on a master node in an EMR cluster as an OS directory, e.g. under /mnt.
2. Would outlive the EMR cluster if the cluster is terminated or deleted.
3. Can be accessed simultaneously by multiple EC2 instances (in EMR or not).
In my mind, an NFS-like volume should satisfy all three, but I don't know if EBS, EFS and/or EMRFS can be used that way. At a minimum I am looking for something that gives me (1) and (2).

Background: EBS. In the context of the questions above, I looked into EBS, but I found conflicting information on this topic. The EMR documentation says that EBS volumes are ephemeral in EMR: "Amazon EBS works differently within Amazon EMR than it does with regular Amazon EC2 instances. Amazon EBS volumes attached to EMR clusters are ephemeral: the volumes are deleted upon cluster and instance termination (for example, when shrinking instance groups), so it's important that you not expect data to persist." Meanwhile I see an option called "Delete on termination" in EBS that can be set to False, see the screenshot below.
Persisting, mounting and sharing volumes in EMR
If I understand correctly, you want to integrate social identity providers with a user pool, and you have successfully integrated social identity providers as federated identities. There has been some discussion on this topic; I would recommend starting here: "AWS Cognito: sign in with username/password OR Facebook". Since this discussion, AWS has added support to Add Social Identity Providers to a User Pool, which allows you to use social identities as part of an AWS-managed user pool. If you still want to federate with the social identity, rather than your user pool, you will need to create your own way to manage identities.
I want to know how to create a user entry in a user pool when a user logs in with Facebook. I was able to integrate AWS Cognito and the Facebook login just fine, but a user is not created in the user pool when logging in with Facebook. An identity ID is created in the default Facebook group.

    Map<String, String> logins = new HashMap<String, String>();
    logins.put("graph.facebook.com", AccessToken.getCurrentAccessToken().getToken());
    credentialsProvider.setLogins(logins);
    credentialsProvider.refresh();
User not created in user pool when logging in with Facebook in AWS? - Android
Increasing the instance type from small to medium actually solved my problem. It seems that the app could not handle this amount of load with the limited resources of the small instance type.
We have Elastic Beanstalk set to load balancing. When our app consumes 100% CPU for a longer time (i.e. after some downtime when we receive tons of webhooks), the load balancer restarts Docker inside the instance. Our app takes approximately 2 minutes to start, so it can never recover from downtime. Is there any way to extend this restart period or even disable it? Scaling using a CPU threshold is not an option for us, as our app consumes lots of CPU during higher load.
AWS Elastic Beanstalk restart docker if CPU is 100% for longer time
Check that you don't have a mismatch of AWS SDK versions. You might be having the same issue I had: upgrading the SSM AWS SDK version to 1.11.301 while other components had the 1.11.271 AWS core version caused the same exception. You should make sure the versions are aligned.
The following code gives a NoSuchFieldError when used in Lambda. The same code works in a simple Java program. Appreciate any help.

    AWSSimpleSystemsManagementAsync client = AWSSimpleSystemsManagementAsyncClientBuilder.defaultClient();
    PutParameterRequest putRequest = new PutParameterRequest();
    putRequest.setName("testKey");
    putRequest.setValue("testValue");
    client.putParameter(putRequest);

Digging into the source code shows the error at AWSSimpleSystemsManagementClient.java -> request.addHandlerContext(HandlerContextKey.SIGNING_REGION, getSigningRegion()); I also tried AWSSimpleSystemsManagementClientBuilder.standard(), AWSSimpleSystemsManagementClientBuilder.defaultClient(), and AWSSimpleSystemsManagementClientBuilder.standard().withRegion("us-east-1").build(), all returning the same error.
aws java sdk for ssm gives java.lang.NoSuchFieldError: SIGNING_REGION
Yes, it's possible using AdminUpdateUserAttributes. Per the docs: "In your call to AdminCreateUser, you can set the email_verified attribute to True, and you can set the phone_number_verified attribute to True. (You can also do this by calling AdminUpdateUserAttributes.) email: The email address of the user to whom the message that contains the code and username will be sent. Required if the email_verified attribute is set to True, or if "EMAIL" is specified in the DesiredDeliveryMediums parameter."
When I update a Cognito user's email attribute via the updateAttribute or adminUpdateAttribute API, email_verified is set to false, so I'd like to set email_verified to true programmatically. My understanding is that I should use the GetUserAttributeVerificationCode and VerifyUserAttribute APIs to set email_verified to true, but I don't want users to have to enter a verification code.
https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_GetUserAttributeVerificationCode.html
https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_VerifyUserAttribute.html
As far as I can see below, it seems impossible: https://forums.aws.amazon.com/thread.jspa?messageID=782609
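A quick boto3 sketch of that call (the user pool ID and username are placeholders):

    import boto3

    cognito = boto3.client("cognito-idp")

    cognito.admin_update_user_attributes(
        UserPoolId="us-east-1_EXAMPLE",   # placeholder pool ID
        Username="example-user",          # placeholder username
        UserAttributes=[
            {"Name": "email", "Value": "user@example.com"},
            # Mark the updated email as verified so no confirmation code is needed
            {"Name": "email_verified", "Value": "true"},
        ],
    )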
Can one set email_verified to true in Cognito programmatically? How?
Yes, this is totally possible with standard CloudFormation templates. You can solve it in a couple of ways. If you are using nested stacks, you can create all the security groups you need in one sub-stack. That stack should have Outputs for each of the security group IDs you created:

    Outputs:
      SecurityGroup1Id:
        Description: Security Group 1 ID
        Value: !Ref SecurityGroup1

In the stack that then creates your EC2 instances, you can define Parameters for each of the security groups. It can be either an array or one parameter per group, depending on your use case.

Single template: if the EC2 instances and security groups are being defined in the same template, then you can use a simple Ref to access the ID of the already created security group, i.e.:

    !Ref SecurityGroup1Name
I'm trying to build a stack with multiple EC2 instances that have varied security groups. It would be easy for me if I could create my security groups in advance and reference them in my EC2 stack. Is there a way to reference an existing security group resource in a CF stack? Thanks in advance for the help!
Is there a way to reference a security group from a previous Cloudformation stack in a new CF stack?
There are many ways to approach this. If your main need is visualizing your data workflow, AWS Step Functions would do it; they recently launched a cheaper version called Express Workflows. Break the job down into multiple Lambdas, one for each task; Step Functions will take care of all the orchestration, error handling, retrying, etc. Using AWS SQS may also be beneficial to smooth out the batching process. For full visibility, a specialized tool is required; in serverless, we don't have control over the infrastructure, so a different approach is needed. I'd recommend checking out Dashbird.
I have a sync job (in Node.js) which has to process several hundred documents in one batch, and for each of them perform several tasks. As usual, after deployment such a job becomes a black box: without proper logging it is impossible to find a problem. Therefore, I log any reasonable information - which document the job is processing, what task is being performed, etc. I use console.log / console.error for logging. This results in a very large log file, which is not that big a problem when running locally. Once deployed on AWS, is there any best practice or limitation for logging? I am also worried about costs. Thanks!
Amazon AWS Lambda NodeJS logging
As suggested in the official documentation, there is a low-level handler, RequestStreamHandler:

    public class Hello implements RequestStreamHandler {
        public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
            int letter;
            while ((letter = inputStream.read()) != -1) {
                outputStream.write(Character.toUpperCase(letter));
            }
        }
    }

Using this handler, you can parse the incoming stream into an S3Event or an SNSEvent yourself. There is example code here.
I want to trigger a Lambda function from both an S3 event and an SNS event. The current version looks like this:

    public class LambdaFunctionHandler implements RequestHandler<S3Event, Object> {
        public Object handleRequest(S3Event input, Context context) {
            context.getLogger().log("S3Event: " + input);
            return null;
        }
    }

Is there any way to handle both event types?
How can I use multiple LambdaFunctionHandler for SNSEvent and S3Event in Java?
I found the solution myself: the problem was the Node version. It was 9+, so I downgraded to 6.9 and the problem was solved.
I am trying to hit the third-party Splash Payment API from my Node.js application. To hit the API I am using the request module:

    var options = {
        method: 'post',
        body: postData,
        json: true,
        url: url,
        headers: {
            "Content-Type": "application/json",
            "APIKEY": config.splash_key
        }
    }
    request(options, function (err, res, body) {
        if (err) {
            console.error('error posting json: ', err);
            return cb(err, null); // throw err;
        }
        return cb(body.response.errors, body.response.data);
    })

But it gives me the error:

    Error: write EPROTO 140467444299648:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:../deps/openssl/openssl/ssl/s23_clnt.c:772:

Please help.
nodejs SSL handshake error on aws
The first place I'd look is your CodeBuild service role; make sure it has something like the following in the policy:

    {
        "Sid": "S3GetObjectPolicy",
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
        ],
        "Resource": [
            "*"
        ]
    },
No matter how open an IAM policy I give to my CodePipeline role, my CodeBuild step always fails with Access Denied in the DOWNLOAD_SOURCE phase. The build works fine when I run it manually from CodeBuild. I have literally granted this policy to the CodePipeline service role, and the pipeline still fails:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "*"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }

Has anyone else encountered a similar problem? Where should I be looking to fix this?
AWS: IAM Policy for CodePipeline?
You can try AWS Cognito. I am not entirely sure what kind of AWS services are being accessed, but Cognito does allow you to expose certain services. Please go through https://aws.amazon.com/cognito/faqs/ for more details.
I want to authenticate an Amazon Web Services user each time they use my app. I am developing the app using the AWS SDK for ASP.NET, which will use the AWS CLI to interact with the AWS API. I know that a user with an AWS account can set up IAM credentials; is there some way I can use this? The authentication needs to take place while the application is running, which is why the following page is of no use to me: https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/net-dg-config-creds.html Basically I'm looking for some kind of OAuth flow to gain access to an AWS user's account when they use my app, so that I can perform backups and start and stop servers on their behalf using the AWS CLI from ASP.NET. Some kind of endpoint I could get an access and refresh token from would be ideal. For example, this API contains lots of detail on exactly how to connect on behalf of a user, whereas AWS doesn't seem to provide anything like this!
How to authenticate using the AWS SDK
Try providing the host name corresponding to the S3 bucket's region. For example:

    from boto.s3.connection import S3Connection
    conn = S3Connection(aws_access_key, aws_secret_key, host='s3.us-east-2.amazonaws.com')

For the Mumbai region that would be host='s3.ap-south-1.amazonaws.com'.
I am getting this error when accessing an S3 bucket. My region is Mumbai.

    boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
    InvalidRequest
    The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
    7BD86CC040523574
    I1zzUtAgjBS0dOUo/mP/Z7uei/l+f8YXEdlqeu1N+7mXrHV9IwYxWBLkx1E/y4DNm6QzPdyRihE=
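If switching from boto (v2) to boto3 is an option, boto3 signs requests with Signature Version 4 by default; you only need to point it at the bucket's region (the bucket name below is a placeholder):

    import boto3

    # boto3 uses SigV4 by default; just set the bucket's region explicitly
    s3 = boto3.client("s3", region_name="ap-south-1")

    for obj in s3.list_objects_v2(Bucket="my-mumbai-bucket").get("Contents", []):  # placeholder bucket
        print(obj["Key"])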
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
Well, you can just modify the volume directly and this will not affect any files; it takes around a minute or so to upgrade the size, and you may want to restart your instance afterwards. To ensure data safety, you can create a snapshot of that volume, create a new volume of whatever size you want from that snapshot, and then delete the old volume which now contains the old data.
My question is simple: what happens when I increase the size of the running volume of an EC2 instance?
1) Is all my data wiped?
2) Will the space on my instance also be modified to the new size?
Actually my instance has 8 GB of storage and it is almost full. I want to increase the space so that I can save more files to my instance. I have found this option in my console, and I have found the attached EC2 volume. Does directly modifying the volume size automatically reflect in my instance's space after reboot? I know this is quite simple; I am just worried about my existing data. Thank you for your help!
What happen when I increase the size of running volume of ec2 instance
I only opened port 8080 for TCP (and also UDP) and used the Public DNS (IPv4) as the payload URL: http://ec2-XX-XXX-XX-XXX.eu-west-1.compute.amazonaws.com:8080/github-webhook/ Remember to end the URL exactly with "github-webhook"
I want to set up a GitHub webhook which will trigger a Jenkins job. Jenkins is installed on AWS EC2. In this case I have to open the Jenkins port so that the webhook can trigger the Jenkins job. I found this https://help.github.com/articles/about-github-s-ip-addresses/ link where they have listed GitHub's IPs. Should I open all ports for these GitHub IPs? Is it secure and compliant with best practices? Is there any other solution which will do the same thing instead of opening ports?
Setup GitHub Webhook for Jenkins installed on AWS EC2
See this answer: add an Application Load Balancer to your Fargate service. Service auto scaling is a better approach to do this.
During a migration on AWS, I created a new cluster and deployed several Docker applications with the Fargate approach. During each update of the task definition, a new task is launched inside the service and a new public IP is assigned from the AWS public IP pool. Is there any solution or instruction on how I can attach static IP addresses to the service tasks? I saw a similar question here: How do I associate an Elastic IP with a Fargate container? but still can't find any solution
Amazon AWS Fargate Task static IP address
After some research, I can now give the answer myself: I had to add one more command to my python.config in the .ebextensions folder:... container_commands: ... 03wsgipass: command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'After that, AWS allows incoming requests to pass authorization, and I get the response without an error.
I use django-oauth-toolkit with my django/django-rest-framework application. When I request an access token in dev mode on localhost, it works OK:dev@devComp:$ curl -X POST -d "grant_type=password&username= <user_name>&password=<password>" -u"<client_id>:<client_secret>" http://localhost:8000/o/token/ {"access_token": "fFySxhVjOroIJkD0IuEXr5WIhwdXg6", "expires_in": 36000, "token_type": "Bearer", "scope": "read write groups", "refresh_token": "14vhyaCZbdLtu7sq8gTcFpm3ro9YxH"}But if I request an access token from absolutely the same application deployed at AWS Elasticbeanstalk, I get 'invalid client' error:dev@devComp:$ curl -X POST -d "grant_type=password&username= <user_name>&password=<password>" -u"<client_id>:<client_secret>" http://my-eb-prefix.us-west-1.elasticbeanstalk.com/o/token/ {"error": "invalid_client"}Please advise me what to do to get rid of this error and normally request access tokens from django app deployed at AWS.
django-oauth-toolkit 'invalid client' error after deploy on AWS Elasticbeanstalk
<configuration> ......... <logger name="com.amazonaws.services.kinesis.producer" level="warn"/> <logger name="com.amazonaws.services.kinesis.clientlibrary" level="warn"/> ......... </configuration>Log levels:https://www.slf4j.org/api/org/apache/commons/logging/Log.html. You can also define a prefix of packages.
I'm running on Ubuntu 16.04 & Java 8. The KCL generates thousands of INFO log lines. Does anyone know how to enable only ERROR and WARN logs? I have the same question for the KPL as well. I don't have a logfile.
Disable INFO logs in aws kcl - Kinesis
Google Cloud does not currently support a storage gateway.
Does Google Cloud have a storage gateway concept like AWS? AWS has the following: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html Does Google Cloud have a similar solution? I didn't find anything in the documentation.
Does Google Cloud have a storage gateway concept like AWS?
You're correct that you need to add the information toauthResponse.context. Once you do that, you can configure it as a URL Query String parameter, or Request Header, etc, via the "Integration Request" interface for your API Gateway method.Now the value which was generated in your authorizer is available to the upstream endpoint of your API Gateway.This solution is suggested (but not wholly spelled out) by the following documents:"Amazon API Gateway API Request and Response Data Mapping Reference"Map Method Request Data to Integration Request ParametersIntegration request parameters, in the form of path variables, query strings or headers, can be mapped from any defined method request parameters and the payload."API Gateway Mapping Template Reference"$context.authorizer.propertyThe stringified value of the specified key-value pair of the context map returned from an API Gateway custom authorizer Lambda function. For example, if the authorizer returns the following context map:"context" : { "key": "value", "numKey": 1, "boolKey": true } calling $context.authorizer.key returns the "value" string, calling $context.authorizer.numKey returns the "1" string, and calling $context.authorizer.boolKey returns the "true" string.
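As a small illustration (the header name and context keys here are made up, not from the question), in the Integration Request you could map an HTTP header such as x-auth-key to context.authorizer.key, or pass the values through a body mapping template along these lines:
{
  "key": "$context.authorizer.key",
  "numKey": "$context.authorizer.numKey"
}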
Say I have access to a certain value in the custom authorizer; how do I pass it on to the API endpoint? What I have read till now is that we could use context in the policy generator to do that, something like this: authResponse.context = { "key": "this is the data sent from custom authorizer", "numKey": 1, "mysql":"sdf" }; but this code is in the policy generator and not in the custom authorizer. So how do I access the value that I got in the custom authorizer and pass it on to the policy generator? My main aim is to send that value to the API endpoint.
how to send data from custom authorizer to api endpoint
We have a WordPress implementation, and you should assume CodeDeploy will remove all files and replace them with the deployment package. This is its standard behavior, and I am pretty certain you cannot change that. It will want to sync the local file system with the deployment package you have provided.For this reason, consider moving the upload directory outside of the document root to account for this. Check out https://premium.wpmudev.org/blog/change-default-wordpress-uploads-folder/ Regarding files, we moved the upload folder to /var/files and mounted that as an EFS volume. This provides you with better durability, and makes the file system independent of any given instance.Also you should check in all files like wp-config.php to the repo, for the same reason: if you do not include it, then it will not be deployed.With this approach we can easily replace instances via autoscaling. You may only have one instance at this time, but at some point you will want to scale.But to answer the question directly: yes, CodeDeploy can be configured so that files are retained the way you require. You would implement a lifecycle hook script, where the BeforeInstall hook moves the reserved files to /tmp and the AfterInstall hook moves them back. This is extra overhead for a deploy, which is why I suggest the above approach. See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-example.html
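A minimal sketch of that hook approach, assuming a WordPress deployment under /var/www/html (the paths and script names are illustrative, not taken from the question):
# appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/preserve_files.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/restore_files.sh
      timeout: 300
      runas: root
#!/bin/bash
# scripts/preserve_files.sh: stash the files CodeDeploy would otherwise wipe
# (restore_files.sh simply moves the same paths back after the Install step)
mkdir -p /tmp/deploy-keep
[ -f /var/www/html/wp-config.php ] && mv /var/www/html/wp-config.php /tmp/deploy-keep/
[ -d /var/www/html/wp-content/uploads ] && mv /var/www/html/wp-content/uploads /tmp/deploy-keep/
exit 0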
We have a CodePipeline set up which uses CodeDeploy to deploy the latest updates from our repository on GitHub to an EC2 instance. This works fine, except for one issue: everything we have in our.gitignorefile is deleted from the server whenever a deployment is performed.For instance, this is a WordPress site, so we havewp-config.phpandwp-content/uploadsexcluded from the repository. When a deployment runs, it deletes these files rendering the site unusable.Our desired behavior is for CodeDeploy to overwrite existing files, but also ignore any files/directories not included in the repository so they can remain untouched. By default there seems to be a step that "clears out" the deployment destination before adding the new files, but we need to skip that.Is there any setting, either in the console orappspec.yml, which will allow us to make deployments without having anything deleted? It seems like this would be a very common use case...if we can't make deployments like this then I'll have to just do all our updates via SFTP, which is pretty lame.
AWS CodeDeploy: How to stop it from deleting files?
I'm sorry, but from your question it's not clear whether you're just experimenting with the API or if you want to write a client that calls it (as in production code).If you're just testing, you can use Postman to call the API (it supports SigV4). Detailshere.If you are writing a client, the way to go is generating the SDK from API gateway, as noted in the comments. Should that not be possible, the next best option is to use one of the language-specific SDK signers to generate the SigV4 signature.AWS4Signer, like you said, is the way to go. It should be straightforward to integrate with it, but if you can share more details of your specific use case (platform, language, where do you get the AWS credentials from, etc), people can give you a better answer.Last, if you want to generate the signature yourself,here's howthe canonical generation of signatures work.
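If you go the AWS4Signer route, a minimal sketch with the AWS SDK for Java v1 looks roughly like this; the endpoint, region and resource path are placeholders, and credentials come from the default provider chain:
import java.net.URI;
import com.amazonaws.DefaultRequest;
import com.amazonaws.auth.AWS4Signer;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.http.HttpMethodName;

public class SignedApiCall {
    public static void main(String[] args) {
        // Describe the call you want to make (placeholder endpoint and path)
        DefaultRequest<Void> request = new DefaultRequest<>("execute-api");
        request.setHttpMethod(HttpMethodName.GET);
        request.setEndpoint(URI.create("https://abc123.execute-api.us-east-1.amazonaws.com"));
        request.setResourcePath("/prod/myresource");

        // Sign it with SigV4 for the execute-api service
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName("execute-api");
        signer.setRegionName("us-east-1");
        signer.sign(request, DefaultAWSCredentialsProviderChain.getInstance().getCredentials());

        // Copy the resulting headers (Authorization, X-Amz-Date, ...) onto your HTTP client
        request.getHeaders().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}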
I'm trying to invoke an api request (service: execute-api) and a Signature v4 is required. I've been going through the documentation and I see clearly this:Alternatively, you can use the AWS CLI or one of the AWS SDKs to handle request signing for you.I don't own the API and originally just thought I could use CURL but obviously IAM is configured. I'm wondering what the best way of making this request signed is?Note: Looks like there is an AWS4Signer class that may be what I'm looking for to generate the signature non-manually
How to use AWS SDK for request signing
Sounds like the recommended way to do it is to have your server read the Origin header from the client, compare that to the list of domains you would like to allow, and if it matches, echo the value of theOriginheader back to the client as theAccess-Control-Allow-Originheader in the response.With.htaccessyou can do it like this:# ---------------------------------------------------------------------- # Allow loading of external fonts # ---------------------------------------------------------------------- <FilesMatch "\.(ttf|otf|eot|woff|woff2)$"> <IfModule mod_headers.c> SetEnvIf Origin "http(s)?://(www\.)?(google.com|staging.google.com|development.google.com|otherdomain.example|dev02.otherdomain.example)$" AccessControlAllowOrigin=$0 Header add Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin Header merge Vary Origin </IfModule> </FilesMatch>
Is there a way to allow multiple cross-domains using theAccess-Control-Allow-Originheader?I'm aware of the*, but it is too open. I really want to allow just a couple domains.As an example, something like this:Access-Control-Allow-Origin: http://domain1.example, http://domain2.exampleI have tried the above code but it does not seem to work in Firefox.Is it possible to specify multiple domains or am I stuck with just one?
Access-Control-Allow-Origin Multiple Origin Domains?
RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC] RewriteRule ^(.*)$ https://%1/$1 [R=301,L]Same as Michael's except this one works :P
I would like to redirectwww.example.comtoexample.com. The following htaccess code makes this happen:RewriteCond %{HTTP_HOST} ^www\.example\.com [NC] RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]But, is there a way to do this in a generic fashion without hardcoding the domain name?
Generic htaccess redirect www to non-www
If you start Notepad and then File -> Save As -> Write .htaccess and choose "All Files" as the type - then it will create the .htaccess file for you.
I want to create a.htaccessfile manually and discovered it seems impossible through the Windows UI. I get a"you must type a filename."message. There has to be a way to create files with.as a prefix in Windows.Can this be done manually?
How do I manually create a file with a . (dot) prefix in Windows? For example, .htaccess
or defined by a module not included in the server configurationCheck to make sure you havemod_rewriteenabled.From:https://webdevdoor.com/php/mod_rewrite-windows-apache-url-rewritingFind the httpd.conf file (usually you will find it in a folder called conf, config or something along those lines)Inside the httpd.conf file uncomment the line LoadModule rewrite_module modules/mod_rewrite.so (remove the pound '#' sign from in front of the line)Also find the line ClearModuleList is uncommented then find and make sure that the line AddModule mod_rewrite.c is not commented out.If theLoadModule rewrite_module modules/mod_rewrite.soline is missing from the httpd.conf file entirely, just add it.Sample commandTo enable the module in a standard ubuntu do this:a2enmod rewrite systemctl restart apache2
I have this error when trying to browse php files locally[Fri Apr 13 19:16:40 2012] [alert] [client 127.0.0.1] C:/AppServ/www/hr-website/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration, referer: http://127.0.0.1/what is the problem ?
.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
Add the following inside your.htaccessfileRewriteEngine On RewriteCond %{HTTPS} !on RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
I'm trying to redirect all insecureHTTPrequests on my site (e.g.http://www.example.com) toHTTPS(https://www.example.com). How can I do this in.htaccessfile?By the way, I'm usingPHP.
How to redirect all HTTP requests to HTTPS using .htaccess rules?
To first force HTTPS, you must check the correct environment variable%{HTTPS} off, but your rule above then prepends thewww.Since you have a second rule to enforcewww., don't use it in the first rule.RewriteEngine On RewriteCond %{HTTPS} off # First rewrite to HTTPS: # Don't put www. here. If it is already there it will be included, if not # the subsequent rule will catch it. RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] # Now, rewrite any request to the wrong domain to use www. # [NC] is a case-insensitive match RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]About proxyingWhen behind some forms of proxying, whereby the client is connecting via HTTPS to a proxy, load balancer, Passenger application, etc., the%{HTTPS}variable may never beonand cause a rewrite loop. This is because your application is actually receiving plain HTTP traffic even though the client and the proxy/load balancer are using HTTPS. In these cases, check theX-Forwarded-Protoheader instead of the%{HTTPS}variable.This answer shows the appropriate process
I have the following htaccess code:<IfModule mod_rewrite.c> RewriteEngine On RewriteCond !{HTTPS} off RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] </IfModule>I want my site to be redirected tohttps://www.with HTTPS, and enforcing thewww.subdomain, but when I accesshttp://www.(without HTTPS), it does not redirect me tohttps://wwwwith HTTPS.
htaccess redirect to https://www
Create an .htaccess file containing the following line:Options -IndexesThat is one option. Another option is editing your apache configuration file.In order to do so, you first need to open it with the command:vim /etc/httpd/conf/httpd.confThen find the line:Options Indexes FollowSymLinksChange that line to:Options FollowSymLinksLastly save and exit the file, and restart apache server with this command:sudo service httpd restart(You have a guide with screenshotshere.)
I want to disable directory browsing of the /galerias folder and all subdirectories. Example listing: Index of /galerias/409 * Parent Directory * i1269372986681.jpg * i1269372986682.jpg * i1269372988680.jpg
How do I disable directory browsing?
Note: This introduces very significant security issues and is not recommended.For Laravel 5:Renameserver.phpin your Laravel root folder toindex.phpCopy the.htaccessfile from/publicdirectory to your Laravel root folder.
I want to remove the/public/fragment from my Laravel 5 URLs.I don't want to run a VM, this just seems awkward when switching between projects.I don't want to set my document root to the public folder, this is also awkward when switching between projects.I've tried the .htaccess mod_rewrite method:<IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^(.*)$ public/$1 [L] </IfModule>but this gives me a Laravel NotFoundHttpException in compiled.php line 7610.In Laravel 4 I could move the contents of the public folder into the root. The structure of Laravel 5 is quite different and following the same steps completely broke Laravel (the server would only return a blank page).Is there a method of removing 'public' in a development environment that allows me to switch between projects with ease (I'm usually working on 2 or 3 at any one time)?I'm using MAMP and PHP 5.6.2
How to remove /public/ from a Laravel URL [duplicate]
Maybe like this:Options +FollowSymLinks RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} ^OLDDOMAIN\.com$ [NC] RewriteRule ^(.*)$ http://NEWDOMAIN.com [R=301,L]
Which redirect rule would I use to redirect all pages under olddomain.example to newdomain.example?The site has a totally different structure, so I want every page under the old domain to be redirected to the new domain's index page.I thought this would do (under the olddomain.example base directory):RewriteEngine On RewriteRule ^(.*)$ http://newdomain.example/ [R=301]But if I navigate to olddomain.example/somepage I get redirected to newdomain.example/somepage. I am expecting a redirect only to newdomain.example without the page suffix.How do I keep the last part out?
.htaccess redirect all pages to the homepage on a new domain
You can use a rewrite rule that uses^$to represent the root and rewrite that to your /store directory, like this:RewriteEngine On RewriteRule ^$ /store [L]
Trying to getwww.example.comto go directly towww.example.com/storeI have tried multiple bits of code and none work.What I've tried:Options +FollowSymlinks RewriteEngine on RewriteCond %{HTTP_HOST} ^example.com$ RewriteRule (.*) http://www.example.com/$1 [R=301,L] RewriteCond %{HTTP_HOST} ^(.+)\www.example\.com$ RewriteRule ^/(.*)$ /samle/%1/$1 [L]What am I doing wrong?
How can I use .htaccess rewrite to redirect root URL to subdirectory?
I have actually followed this example and it worked for me :)NameVirtualHost *:80 <VirtualHost *:80> ServerName mysite.example.com Redirect permanent / https://mysite.example.com/ </VirtualHost> <VirtualHost _default_:443> ServerName mysite.example.com DocumentRoot /usr/local/apache2/htdocs SSLEngine On # etc... </VirtualHost>Then do:/etc/init.d/httpd restart
I am trying to set up automatic redirection from HTTP to HTTPS:From manage.mydomain.com --- To ---> https://manage.mydomain.comI have tried adding the following to myhttpd.conffile, but it didn't work:RewriteEngine on ReWriteCond %{SERVER_PORT} !^443$ RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]How can I fix it?Environment:CentOSwithApache
How can I automatically redirect HTTP to HTTPS on Apache servers?
For Apache, you can usemod_sslto force SSL with theSSLRequireSSL Directive:This directive forbids access unless HTTP over SSL (i.e. HTTPS) is enabled for the current connection. This is very handy inside the SSL-enabled virtual host or directories for defending against configuration errors that expose stuff that should be protected. When this directive is present all requests are denied which are not using SSL.This will not do a redirect to https though. To redirect, try the following withmod_rewritein your .htaccess fileRewriteEngine On RewriteCond %{HTTPS} !=on RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]or any of the various approaches given athttp://www.askapache.com/htaccess/http-https-rewriterule-redirect.htmlYou can also solve this from within PHP in case your provider has disabled .htaccess (which is unlikely since you asked for it, but anyway)if (!isset($_SERVER['HTTPS']) || $_SERVER['HTTPS'] !== 'on') { if(!headers_sent()) { header("Status: 301 Moved Permanently"); header(sprintf( 'Location: https://%s%s', $_SERVER['HTTP_HOST'], $_SERVER['REQUEST_URI'] )); exit(); } }
How can I force SSL/HTTPS for a specific page in PHP using .htaccess and mod_rewrite?
Force SSL/https using .htaccess and mod_rewrite
In my own words, after reading the docs and experimenting:You can useRewriteBaseto provide abasefor your rewrites. Consider this# invoke rewrite engine RewriteEngine On RewriteBase /~new/ # add trailing slash if missing rewriteRule ^(([a-z0-9\-]+/)*[a-z0-9\-]+)$ $1/ [NC,R=301,L]This is a real rule I used to ensure that URLs have a trailing slash. This will converthttp://www.example.com/~new/pagetohttp://www.example.com/~new/page/By having theRewriteBasethere, you make the relative path come off theRewriteBaseparameter.
I have seen this in a few .htaccess examples:RewriteBase /It appears to be somewhat similar in functionality to the<base href="">of HTML.I believe it may automatically prepend its value to the beginning ofRewriteRulestatements (possibly ones without a leading slash)?I could not get it to work properly. I think its use could come in very handy for site portability, as I often have a development server which is different to the production one. My current method leaves me deleting portions out of myRewriteRulestatements.Can anyone explain to me briefly how to implement it?Thanks
How does RewriteBase work in .htaccess
Change your configuration to this (add a slash):RewriteCond %{HTTP_HOST} ^example.com$ [NC] RewriteRule (.*) http://www.example.com/$1 [R=301,L]Or the solution outlined below (proposed by@absiddiqueLive) will work for any domain:RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]If you need to support http and https and preserve the protocol choice try the following:RewriteRule ^login\$ https://www.%{HTTP_HOST}/login [R=301,L]Where you replaceloginwithcheckout.phpor whatever URL you need to support HTTPS on.I'd argue this is a bad idea though. For the reasoningplease read this answer.
I have this in my .htaccess file:RewriteCond %{HTTP_HOST} ^example.com$ RewriteRule (.*) http://www.example.com$1 [R=301,L]but whenever I access a file on my root likehttp://example.com/robots.txtit will redirect tohttp://www.example.comrobots.txt/.How can I correct this so that it will redirect correctly tohttp://www.example.com/robots.txt?
Redirect non-www to www in .htaccess
In case you are on Ubuntu, edit the file/etc/apache2/apache2.conf(here we have an example of/var/www):<Directory /var/www/> Options Indexes FollowSymLinks AllowOverride None Require all granted </Directory>and change it to;<Directory /var/www/> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory>then,sudo service apache2 restartYou may need to also dosudo a2enmod rewriteto enable module rewrite.
I want to set theAllowOverride allBut I don't know how to do it. I have found the following code by searching the google and pasted it in.htaccess:<Directory> AllowOverride All </Directory>But after pasting it I started receiving"Internal Server Error"Can anyone guide me where to put this code or how to do it?
How to Set AllowOverride all
Createsite/includes/.htaccessfile and add this line:Deny from all
I'm trying to deny users from accessing thesite/includesfolder by manipulating the URL.I don't know if I have to deny everything and manually make individual exceptions to allow, if I can just deny this one folder, or if there's a rewrite function that can be used.Specific example: I don't want to see the directory files by typing inlocalhost/site/includesinto the URL.
Deny access to one specific folder in .htaccess
Try this in the .htaccess of the external root folder :<IfModule mod_headers.c> Header set Access-Control-Allow-Origin "*" </IfModule>And if it only concerns .js scripts you should wrap the above code inside this:<FilesMatch "\.(js)$"> ... </FilesMatch>
I'm creating a script that loads externally on other sites. It loads CSS and HTML and works fine on my own servers.However, when I try it on another website it displays this awful error:Access-Control-Allow-OriginHere you can see it loads perfectly:http://tzook.info/bot/But on this other website it shows the error:http://cantloseweight.co/robot/I uploaded the loading script to jsfiddle:http://jsfiddle.net/TL5LK/I tried editing the htaccess file like this:<IfModule mod_headers.c> Header set Access-Control-Allow-Origin * </IfModule>Or like this:Header set Access-Control-Allow-Origin *But it still doesn't work.
htaccess Access-Control-Allow-Origin
Thanks for your replies. I have already solved my problem. Suppose I have my pages underhttp://www.yoursite.com/html, the following.htaccessrules apply.<IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /html/(.*).html\ HTTP/ RewriteRule .* http://localhost/html/%1 [R=301,L] RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /html/(.*)\ HTTP/ RewriteRule .* %1.html [L] </IfModule>
How to remove.htmlfrom the URL of a static page?Also, I need to redirect any url with.htmlto the one without it. (i.e.www.example.com/page.htmltowww.example.com/page).
How to remove .html from URL?
order deny,allow deny from all allow from <your ip>
I'm trying to deny all and allow only for a single IP. But, I would like to have the following htaccess working for that single IP. I'm not finding a way to have both working: the deny all and allow only one, plus the following options:<IfModule mod_rewrite.c> RewriteEngine On RewriteBase / #Removes access to the system folder by users. #Additionally this will allow you to create a System.php controller, #previously this would not have been possible. #'system' can be replaced if you have renamed your system folder. RewriteCond %{REQUEST_URI} ^system.* RewriteRule ^(.*)$ /index.php?/$1 [L] #When your application folder isn't in the system folder #This snippet prevents user access to the application folder #Submitted by: Fabdrol #Rename 'application' to your applications folder name. RewriteCond %{REQUEST_URI} ^application.* RewriteRule ^(.*)$ /index.php?/$1 [L] #Checks to see if the user is attempting to access a valid file, #such as an image or css document, if this isn't true it sends the #request to index.php RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> <IfModule !mod_rewrite.c> # If we don't have mod_rewrite installed, all 404's # can be sent to index.php, and everything works as normal. # Submitted by: ElliotHaughin ErrorDocument 404 /index.php </IfModule>Is there a way to make this work?
Deny all, allow only one IP through htaccess
Try the following. Open config.php and do the following replacements: change$config['index_page'] = "index.php"to$config['index_page'] = ""In some cases the default setting foruri_protocoldoes not work properly. Just replace$config['uri_protocol'] = "AUTO"with$config['uri_protocol'] = "REQUEST_URI". Then in .htaccess:RewriteEngine on RewriteCond $1 !^(index\.php|resources|robots\.txt) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php/$1 [L,QSA]Note: the .htaccess code varies depending on the hosting server. On some hosting servers (e.g. GoDaddy) you need to use an extra ? in the last line of the above code. In that case, replace the last line with the following:// Replace last .htaccess line with this line RewriteRule ^(.*)$ index.php?/$1 [L,QSA]
My current URLs look like this: [mysite]index.php/[rest of the slug]. I want to strip index.php from these URLs.mod_rewrite is enabled on my Apache2 server. In config,$config['index_page'] = '';My CodeIgniter root .htaccess file contains:RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L]But still it is not working. Where am I going wrong?
CodeIgniter removing index.php from url
Create a .htaccess file in the .git folder and put the following in it:Order allow,deny Deny from allBut note that it would be lost if you ever re-cloned the repository
I have a website that I use github (closed source) to track changes and update site. The only problem is, it appears the .git directory is accessible via the web. How can I stop this and still be able to use git?Should I use .htaccess? Should I change permissions of .git?
Make .git directory web inaccessible
Try this rule before your other rules:RewriteRule ^(admin|user)($|/) - [L]This will end the rewriting process.
I have 8 lines of rewrite rules in my .htaccess file. I need to exclude two physical directories on my server from these rules, so they can become accessible. For now all requests are sent to index.php file.Directories to exclude: "admin" and "user".So http requests:http://www.domain.com/admin/should not be passed to index.php file.ErrorDocument 404 /index.php?mod=error404 Options FollowSymLinks RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} !^www\.domain\.com$ [NC] RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L] RewriteRule ^([^/] )/([^/] )\.html$ index.php?lang=$1&mod=$2 [L] RewriteRule ^([^/] )/$ index.php?lang=$1&mod=home [L]
.htaccess mod_rewrite - how to exclude directory from rewrite rule
Mu.No correct way exists, not even one that's consistent across browsers.This is a problem that comes from theHTTP specification(section 15.6):Existing HTTP clients and user agents typically retain authentication information indefinitely. HTTP/1.1. does not provide a method for a server to direct clients to discard these cached credentials.On the other hand, section10.4.2says:If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information.In other words,you may be able to show the login box again(as@Karstensays),but the browser doesn't have to honor your request- so don't depend on this (mis)feature too much.
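For completeness, the "show the login box again" workaround usually looks something like the following PHP sketch (the realm string is an example and must match the protected area's realm). As noted above, it is not guaranteed to work in every browser:
<?php
// Imperfect logout workaround: reply with a fresh 401 so the browser
// re-prompts and, in most cases, drops the cached credentials.
header('WWW-Authenticate: Basic realm="Protected Area"');
header('HTTP/1.0 401 Unauthorized');
echo 'You are now logged out (or pressed Cancel).';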
What is thecorrectway to log out of HTTP authentication protected folder?There are workarounds that can achieve this, but they are potentially dangerous because they can be buggy or don't work in certain situations / browsers. That is why I am looking for correct and clean solution.
HTTP authentication logout via PHP
OK I am using the wrong syntax, I should be usingAllow from 127.0.0.1 Allow from ::1 ...
I am getting[Tue Apr 24 12:12:55 2012] [error] [client 127.0.0.1] client denied by server configuration: /labs/Projects/Nebula/bin/My directory structure looks like (I am using Symfony 2, should be similar structure for other web frameworks)I have vhosts setup like:<VirtualHost nebula:80> DocumentRoot "/labs/Projects/Nebula/web/" ServerName nebula ErrorLog "/var/log/httpd/nebula-errors.log" </VirtualHost> <Directory "/labs/Projects/Nebula/"> Options All AllowOverride All Order allow,deny Allow from 127.0.0 192.168.1 ::1 localhost </Directory>I wonder whats the problem and how do I fix it?
Apache: client denied by server configuration
Tried this? It should work in .htaccess, httpd.conf and in a VirtualHost (usually placed in httpd-vhosts.conf if you have included it from your httpd.conf):<filesMatch "\.(html|htm|js|css)$"> FileETag None <ifModule mod_headers.c> Header unset ETag Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate" Header set Pragma "no-cache" Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT" </ifModule> </filesMatch>This 100% prevents files from being cached, and is similar to how Google ads employ the header Cache-Control: private, x-gzip-ok="" to prevent caching of ads by proxies and clients. From http://www.askapache.com/htaccess/using-http-headers-with-htaccess.html Optionally, add the extension of the template files you are retrieving if you are using an extension other than .html for those.
I am developing a single page Javascript application in MAMP. My JavaScript and HTML template files are getting cached between requests.Is there a simple way to indicate in MAMP that I want to prevent http file caching? Possibly with a.htaccessfile? Where do I place the.htaccessor modify the virtual host for MAMP on Mac?
How to prevent http file caching in Apache httpd (MAMP)
You can essentially do this 2 ways:The .htaccess route with mod_rewriteAdd a file called.htaccessin your root folder, and add something like this:RewriteEngine on RewriteRule ^/?Some-text-goes-here/([0-9]+)$ /picture.php?id=$1This will tell Apache to enable mod_rewrite for this folder, and if it gets asked a URL matching the regular expression it rewrites itinternallyto what you want, without the end user seeing it. Easy, but inflexible, so if you need more power:The PHP routePut the following in your .htaccess instead: (note the leading slash)FallbackResource /index.phpThis will tell it to run yourindex.phpfor all files it cannot normally find in your site. In there you can then for example:$path = ltrim($_SERVER['REQUEST_URI'], '/'); // Trim leading slash(es) $elements = explode('/', $path); // Split path on slashes if(empty($elements[0])) { // No path elements means home ShowHomepage(); } else switch(array_shift($elements)) // Pop off first item and switch { case 'Some-text-goes-here': ShowPicture($elements); // passes rest of parameters to internal function break; case 'more': ... default: header('HTTP/1.1 404 Not Found'); Show404Error(); }This is how big sites and CMS-systems do it, because it allows far more flexibility in parsing URLs, config and database dependent URLs etc. For sporadic usage the hardcoded rewrite rules in.htaccesswill do fine though.
I have a URL that looks like:url.com/picture.php?id=51How would I go about converting that URL to:picture.php/Some-text-goes-here/51I think WordPress does the same.How do I go about making friendly URLs in PHP?
URL rewriting with PHP
Update 2016As this answer receives some attention, I want to hint to a more recommended way on doing this using Virtual Hosts:Apache: Redirect SSL<VirtualHost *:80> ServerName mysite.example.com Redirect permanent / https://mysite.example.com/ </VirtualHost> <VirtualHost _default_:443> ServerName mysite.example.com DocumentRoot /usr/local/apache2/htdocs SSLEngine On # etc... </VirtualHost>Old answer, hacky thinggiven that your ssl-port is not set to 80, this will work:RewriteEngine on # force ssl RewriteCond %{SERVER_PORT} ^80$ RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]Note that this should be your first rewrite rule.Edit:This code does the following. The RewriteCond(ition) checks wether the ServerPort of the request is 80 (which is the default http-port, if you specified another port, you would have to adjust the condition to it). If so, we match the whole url(.*)and redirect it to a https-url.%{SERVER_NAME}may be replaced with a specific url, but this way you don't have to alter the code for other projects.%{REQUEST_URI}is the portion of the url after the TLD (top-level-domain), so you will be redirected to where you came from, but as https.
I have an old url (www1.test.net) and I would like to redirect it tohttps://www1.test.netI have implemented and installed our SSL certificate on my site.This is my old file.htaccess:RewriteEngine On RewriteRule !\.(js|gif|jpg|png|css|txt)$ public/index.php [L] RewriteCond %{REQUEST_URI} !^/public/ RewriteRule ^(.*)$ public/$1 [L]How can I configure my.htaccessfile so that url auto redirect tohttps?Thanks!
.htaccess redirect http to https
Within an htaccess file, the scope of the<Files>directive only applies to that directory (I guess to avoid confusion when rules/directives in the htaccess of subdirectories get applied superceding ones from the parent).So you can have:<Files "log.txt"> Order Allow,Deny Deny from all </Files>For Apache 2.4+, you'd use:<Files "log.txt"> Require all denied </Files>In an htaccess file in yourinscriptiondirectory. Or you can use mod_rewrite to sort of handle both cases deny access to htaccess file as well as log.txt:RewriteRule /?\.htaccess$ - [F,L] RewriteRule ^/?inscription/log\.txt$ - [F,L]
I have the following .htaccess file:RewriteEngine On RewriteBase / # Protect the htaccess file <Files .htaccess> Order Allow,Deny Deny from all </Files> # Protect log.txt <Files ./inscription/log.txt> Order Allow,Deny Deny from all </Files> # Disable directory browsing Options All -IndexesI am trying to forbid visitors to access the following file:domain.example/inscription/log.txtbut what I have above does not work: I can still access the file from the browser remotely.
How to deny access to a file in .htaccess
I would just move theincludesfolder out of the web-root, but if you want to block direct access to the wholeincludesfolder, you can put a.htaccessfile in that folder that contains just:deny from allThat way you cannot open any file from that folder, but you can include them in php without any problems.
Here is the scenario:There is an index.php file in the root folder. Some files that are included in index.php live in the includes folder. One other file (submit.php) is in the root folder for the form submit action.I want to restrict direct user access to the files in the includes folder by htaccess, and also to submit.php, but include should still work for the index.php file. For example, if a user types www.domain.com/includes/somepage.php, it will be restricted (maybe redirected to an error page).
deny direct access to a folder and file by htaccess
Options -Indexes should work to prevent directory listings.If you are using a .htaccess file, make sure you have at least "AllowOverride Options" set in your main Apache config file.
I have a folder, for example:/public_html/Davood/and many subfolders inside it, for example:/public_html/Davood/Test1/,/public_html/Davood/Test1/Test/,/public_html/Davood/Test2/, ...I want to add a htaccess file into/public_html/Davood/to deny directory listing in/Davoodand its subfolders. Is it possible?
deny directory listing with htaccess
From the command line, type sudo a2enmod rewrite. If the rewrite module is already enabled, it will tell you so!
Currently I am using the hosting withlightspeedserver. Hosting saysmod_rewriteis enabled but I can't get my script working there. Whenever I try to access the URL, it returns404 - not foundpage.I put the same codes at another server which is running with Apache. It's working over there. So I guess, it's the.htaccessandmod_rewriteissue.But Hosting support is still insisting with me that their mod_rewrite is on, so I would like to know how can I check whether it's actually enabled or not.I tried to check withphpinfo(), but no luck, I can't findmod_rewritethere, is it because they are usinglightspeed?Is there any way to check? Please help me out. Thank you.FYI:my.htaccesscode isOptions -Indexes <IfModule mod_rewrite.c> DirectoryIndex index.php RewriteEngine on RewriteCond $1 !^(index\.php|assets|robots\.txt|favicon\.ico) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ ./index.php/$1 [L,QSA] </IfModule>I tried like this alsoDirectoryIndex index.php RewriteEngine on RewriteCond $1 !^(index\.php|assets|robots\.txt|favicon\.ico) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ ./index.php/$1 [L,QSA]But same result.
How to check whether mod_rewrite is enable on server?
Gumbo's answer in the Stack Overflow questionHow to hide the .html extension with Apache mod_rewriteshould work fine.Re 1) Change the .html to .phpRe a.) Yup, that's possible, just add#tabto the URL.Re b.) That's possible usingQSA(Query String Append), see below.Thisshouldalso work in a sub-directory path:RewriteCond %{REQUEST_FILENAME}.php -f RewriteRule !.*\.php$ %{REQUEST_FILENAME}.php [QSA,L]
Yes, I've read the Apache manual and searched here. For some reason I simply cannot get this to work. The closest I've come is having it remove the extension, but it points back to the root directory. I want this to just work in the directory that contains the.htaccessfile.I need to do three things with the.htaccessfile.I need it to remove the.phpa. I have several pages that use tabs and theURLlooks likepage.php#tab- is this possible?b. I have one page that uses a session ID appended to the URL to make sure you came from the right place,www.domain.example/download-software.php?abcdefg.Is this possible? Also in doing this, do I need to remove.phpfrom the links in my header nav include file? ShouldIE "<a href="support.php">support</a>" be <a href="support">support</a>?I would like it to forcewwwbefore every URL, so it's notdomain.example, butwww.domain.example/page.I would like to remove all trailing slashes from pages.I'll keep looking, trying, etc. Would being in a sub directory cause any issues?
Remove .php extension with .htaccess
Finally overcame the problem.It was not the .htaccess file that was the problem, nor the index.php. The problem was with the permissions for accessing the files.To solve the problem I ran the following commands through the terminal:sudo chmod -R 755 laravel_blogand then typed the command below to allow Laravel to write files to the storage folder:chmod -R o+w laravel_blog/storageThese two commands solved the problem.
I have installed Laravel many times on Windows OS but never had this problem.However, on Ubuntu 14.04 I am getting a 500 Internal Server Error, and messages like this in my logs:[Wed Jul 22 10:20:19.569063 2015] [:error] [pid 1376] [client 127.0.0.1:52636] PHP Fatal error: require(): Failed opening required '/var/www/html/laravel_blog/../bootstrap/autoload.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/laravel_blog/index.php on line 22Previously I've had problems when mod_rewrite was not installed or set up properly, but I have installed it and it is not working. Changed .htaccess as well from original to this.+FollowSymLinks RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L]I've given access to all my folders and files inside i.e./var/www/html/laravel_projectI have all the necessary extensions needed for Laravel 5+ as well. Is there something left that I didn't do?
Getting a 500 Internal Server Error (require() failed opening required path) on Laravel 5+ Ubuntu 14.04
Enter some junk value into your .htaccess (e.g. foo bar, sakjnaskljdnas, any keyword not recognized by htaccess) and visit your URL. If it is working, you should get a 500 Internal Server Error:Internal Server ErrorThe server encountered an internal error or misconfiguration and was unable to complete your request....I suggest you put it right after RewriteEngine on.Since you are on your own machine, I presume you have access to the Apache .conf file.Open the .conf file and look for a line similar to:LoadModule rewrite_module modules/mod_rewrite.soIf it is commented (#), uncomment it and restart Apache.To log rewrites:RewriteEngine On RewriteLog "/path/to/rewrite.log" RewriteLogLevel 9Put the above 3 lines in your virtualhost and restart httpd.Regarding RewriteLogLevel 9: using a high value for Level will slow down your Apache server dramatically! Use the rewriting logfile at a Level greater than 2 only for debugging! Level 9 will log almost every rewritelog detail.UPDATEThings have changed in Apache 2.4:FROM Upgrading to 2.4 from 2.2: The RewriteLog and RewriteLogLevel directives have been removed. This functionality is now provided by configuring the appropriate level of logging for the mod_rewrite module using the LogLevel directive. See also the mod_rewrite logging section.For more on LogLevel, refer to the LogLevel Directive. You can accomplish RewriteLog "/path/to/rewrite.log" in this manner now:LogLevel debug rewrite_module:debug
I have aRewriteRulein a.htaccessfile that isn't doing anything. How do I troubleshoot this?How can I verify if the.htaccessfile is even being read and obeyed by Apache? Can I write an echo "it is working" message, if I do write it, where would that line be echoed out?If the.htaccessfile isn't being used, how can I make Apache use it?If the.htaccessis being used but myRewriteRulestill isn't having an effect, what more can I do to debug?
How to debug .htaccess RewriteRule not working
I had this problem too. My advice is look in your server error log file. For me, it was that the top directory for the project was not readable. The error log clearly stated this. A simplesudo chmod 755 <site_top_folder>fixed it for me.
I have created a simple app using AngularJS. When I tried to host that project in my websitehttp://demo.gaurabdahal.com/recipefinderit shows the following error:ForbiddenYou don't have permission to access /recipefinder on this server. Server unable to read htaccess file, denying access to be safeBut if I go tohttp://demo.gaurabdahal.com/it displays "access denied" message as expected, that I have printed. But why is it unable to open that AngularJS projects "recipefinder". If I tried to put a simple HTML app there, it opens just fine.The same AngularJS project works fine when I host that in github (http://gaurabdahal.github.io/recipefinder)I can't understand what's wrong.
Server unable to read htaccess file, denying access to be safe
.htaccess:php_flag display_startup_errors on php_flag display_errors on php_flag html_errors on php_flag log_errors on php_value error_log /home/path/public_html/domain/PHP_errors.log
I am testing a website online.Right now, the errors are not being displayed (but I know they exist).I have access to only the.htaccessfile.How do I make all errors to display using my.htaccessfile?I added these lines to my.htaccessfile:php_flag display_startup_errors on php_flag display_errors on php_flag html_errors onAnd the pagesnowdisplay:Internal server error
Enabling error display in PHP via htaccess only
It is not possible to use relative paths forAuthUserFile:File-path is the path to the user file. If it is not absolute (i.e., if it doesn't begin with a slash), it is treated as relative to theServerRoot.You have to accept and work around that limitation.We're usingIfDefinetogether with an apache2command line parameter:.htaccess(suitable for both development and live systems):<IfDefine !development> AuthType Basic AuthName "Say the secret word" AuthUserFile /var/www/hostname/.htpasswd Require valid-user </IfDefine>Development server configuration (Debian)Append the following to/etc/apache2/envvars:export APACHE_ARGUMENTS=-DdevelopmentRestart your apache afterwards and you'll get a password prompt only when you're not on the development server.You can of course add another IfDefine for the development server, just copy the block and remove the!.
I have a .htaccess that uses basic authentication. It seems the path to the .htpasswd file isn't relative to the htaccess file, but instead to the server config.So even though I have the .htaccess and .htpasswd files in the same directory, this doesn't work:AuthType Basic AuthName "Private Login" AuthUserFile .htpasswd Require valid-userHowever, it does work if I change the AuthUserFile to use the absolute path:AuthType Basic AuthName "Private Login" AuthUserFile "/home/user/public_html/mydir/.htpasswd" Require valid-userBut I would prefer something more mobile as I use this on multiple sites in different areas. I've searched the web but haven't had any resolution. Is it possible to use relative path or variables like%{DOCUMENT_ROOT}?
How to use a RELATIVE path with AuthUserFile in htaccess?
I would use this rule:RewriteEngine On RewriteCond %{HTTP_HOST} !="" RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteCond %{HTTPS}s ^on(s)| RewriteRule ^ http%1://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]The first condition checks whether the Host value is not empty (in case of HTTP/1.0); the second checks whether the Host value does not begin with www.; the third checks for HTTPS (%{HTTPS} is either on or off, so %{HTTPS}s is either ons or offs, and in case of ons the s is matched). The substitution part of the RewriteRule then just merges the information parts into a full URL.
This will changedomain.exampletowww.domain.example:# Force the "www." RewriteCond %{HTTP_HOST} !^www\.domain\.example$ [NC] RewriteRule ^(.*)$ `http://www.domain.example/$1` [R=301,L]How do I replace the "domain" part so that this works onanydomain?
.htaccess - how to force "www." in a generic way?
Not the place to give a complete tutorial, but here it is in short: RewriteCond basically means "execute the next RewriteRule only if this is true". The !-l part is the condition that the request is not for a symbolic link (! means not, -l means symbolic link). The RewriteRule basically means that if the request matches ^(.+)$ (any URL except the server root), it will be rewritten as index.php?url=$1, which means a request for olle will be rewritten as index.php?url=olle. QSA means that if there's a query string passed with the original URL, it will be appended to the rewrite (olle?p=1 will be rewritten as index.php?url=olle&p=1). L means if the rule matches, don't process any more RewriteRules below this one.For more complete info on this, follow the links above. The rewrite support can be a bit hard to grasp, but there are quite a few examples on Stack Overflow to learn from.
I need to change my .htaccess and there are two lines in it which I don't understand:RewriteCond %{REQUEST_FILENAME} !-l RewriteRule ^(.+)$ index.php?url=$1 [QSA,L]When should I use these lines?
What does $1 [QSA,L] mean in my .htaccess file?
Option 1: Use .htaccessIf it isn't already there, create a .htaccess file in the Laravel root directory (normally it is under your public_html folder).Edit the .htaccess file so that it contains the following code:<IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^(.*)$ public/$1 [L] </IfModule>Now you should be able to access the website without the "/public/index.php/" part.Option 2: Move things in the '/public' directory to the root directoryMake a new folder in your root directory and move all the files and folders except the public folder into it. You can call it anything you want. I'll use "laravel_code".Next, move everything out of the public directory and into the root folder. It should result in something somewhat similar to this:After that, all we have to do is edit the locations in the laravel_code/bootstrap/paths.php file and the index.php file.In laravel_code/bootstrap/paths.php find the following lines of code:'app' => __DIR__.'/../app', 'public' => __DIR__.'/../public',And change them to:'app' => __DIR__.'/../app', 'public' => __DIR__.'/../../',In index.php, find these lines:require __DIR__.'/../bootstrap/autoload.php'; $app = require_once __DIR__.'/../bootstrap/start.php';And change them to:require __DIR__.'/laravel_code/bootstrap/autoload.php'; $app = require_once __DIR__.'/laravel_code/bootstrap/start.php';Source:How to remove /public/ from URL in Laravel
This question already has answers here:How to make public folder as root in Laravel?(5 answers)Closed9 months ago.The community reviewed whether to reopen this questionlast monthand left it closed:Original close reason(s) were not resolvedI need to removeindex.phporpublic/index.phpfrom the generated URL in Laravel; commonly path islocalhost/public/index.php/someWordForRoute, It should be something likelocalhost/someWordForRoute..htaccess<IfModule mod_rewrite.c> <IfModule mod_negotiation.c> Options -MultiViews </IfModule> RewriteEngine On # Redirect Trailing Slashes. RewriteRule ^(.*)/$ /$1 [L,R=301] # Handle Front Controller. RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php[L]app/config/app.php'url' => 'http://localhost',How can I change that?
How Can I Remove “public/index.php” in the Laravel Url Generated? [duplicate]
A restart isnotrequired for changes to .htaccess. Something else is wrong.Make sure your .htaccess includes the statementRewriteEngine onwhich is required even if it's also present in httpd.conf. Also check that .htaccess is readable by the httpd process.Check the error_log - it will tell you of any errors in .htaccessifit's being used. Putting an intentional syntax error in .htaccess is a good check to make sure the file is being used -- you should get a 500 error on any page in the same directory.Lastly, you can enable a rewrite log using commands like the following in your httpd.conf:RewriteLog "logs/rewritelog"RewriteLogLevel 7The log file thus generated will give you the gory detail of which rewrite rules matched and how they were handled.
I have pushed my .htaccess files to the production servers, but they don't work. Would a restart be the next step, or should I check something else?
Do you have to restart apache to make re-write rules in the .htaccess take effect?
php_value upload_max_filesize 30Mis correct.You will have to contact your hosters -- some don't allow you to change values in php.ini
I have tried to put these 2 lines:php_value post_max_size 30M php_value upload_max_filesize 30Min my root .htaccess file, but that gives me an "internal server error" message.PHP 5 is running on the server.I don't have access to php.ini, so I think htaccess is my only chance.Can you tell me where the mistake is?
How to set upload_max_filesize in .htaccess?
You need to create a new.htaccessfile in the required directory and include theSatisfy anydirective in it like so, for up to Apache 2.3:# allows any user to see this directory Satisfy AnyThe syntax changed in Apache 2.4, this has the same effect:Require all granted
I have password protected my entire website using.htaccessbut I would like to expose one of the sub directories so that it can be viewed without a password.How can I disable htaccess password protection for a sub directory? Specifically what is the.htaccesssyntax.Here is my.htaccessfile that is placed in the root of my ftp.AuthName "Site Administratrion" AuthUserFile /dir/.htpasswd AuthGroupFile /dev/null AuthName secure AuthType Basic require user username1 order allow,deny allow from all
How to remove .htaccess password protection from a subdirectory
Comments in .htaccess must be on theirown line, not appended to other statements.The last rule doesn't work because the comments aren't really comments. Comments in htaccessmust beginwith a#(must be at the start of a line), and not arbitrarily anywhere.In the second case, the#bla bla blais interpreted as a 4th parameter of theRewriteRuledirective, which is simply ignored.In the last case, the#bla bla blais interpreted as a 3rd parameter, which in theRewriteRule's case is where the flags go, and#bla bla blaisn't any flags that mod_rewrite understands so you get an error.
Why does this work:RewriteRule (.+)/$ $1and this work:RewriteRule (.+)/$ $1 [L] #bla bla blabut this doesn't work:RewriteRule (.+)/$ $1 #bla bla bla
Adding comments to .htaccess
UPDATED NEW WAYI also faced a similar problem in a local project. I used index.php after my project URL and it worked:http://localhost/myproject/index.php/wp-json/wp/v2/postsIf it displays a 404 error, then update the permalinks first (see the "Paged Navigation Doesn't Work" section). If the index.php version works, maybe you just need to enable mod_rewrite; on Ubuntu:a2enmod rewrite sudo service apache2 restartInstallationThe REST API is included in WordPress 4.7! Plugins are no longer required, just install the latest version of WordPress and you're ready to go.If you're before 4.7: download the plugin from here:http://v2.wp-api.org/, then install and activate it.UsageTo get all posts:www.mysite.com/wp-json/wp/v2/postsFor the search functionality, searching for a test post looks like this:/wp-json/wp/v2/posts?filter[s]=test
I have been using the Wordpress REST plugin WP-API for months now while developing locally with XAMPP. I recently migrated my site to an EC2 instance and everything is working fineexceptI now get a 404 with the following message whenever I try to access any endpoint on the API:The requested URL /wordpress/wp-json/ was not found on this serverPretty permalinks are enabledwith the following structurehttp://.../wordpress/sample-post/which works fine when navigating to a specific post in the browser.Here are some details about my setup:Wordpress 4.4.1Not a MultisiteWP REST API plugin 2.0-beta9Apache 2.2.22Ubuntu 12.04.5Any help would be greatly appreciated as I have gone through SO and the WP Support forums for several hours and am out of ideas. Thank you!
Wordpress REST API (wp-api) 404 Error: Cannot access the WordPress REST API
Since I had everything being forwarded to index.php anyway I thought I would try setting the headers in PHP instead of the .htaccess file and it worked! YAY! Here's what I added to index.php for anyone else having this problem.// Allow from any origin if (isset($_SERVER['HTTP_ORIGIN'])) { // should do a check here to match $_SERVER['HTTP_ORIGIN'] to a // whitelist of safe domains header("Access-Control-Allow-Origin: {$_SERVER['HTTP_ORIGIN']}"); header('Access-Control-Allow-Credentials: true'); header('Access-Control-Max-Age: 86400'); // cache for 1 day } // Access-Control headers are received during OPTIONS requests if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS') { if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD'])) header("Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS"); if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS'])) header("Access-Control-Allow-Headers: {$_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS']}"); }credit goes to slashingweapon for his answer onthis questionBecause I'm using Slim I added this route so that OPTIONS requests get a HTTP 200 response// return HTTP 200 for HTTP OPTIONS requests $app->map('/:x+', function($x) { http_response_code(200); })->via('OPTIONS');
I have created a basic RESTful service with the SLIM PHP framework and now I'm trying to wire it up so that I can access the service from an Angular.js project. I have read that Angular supports CORS out of the box and all I needed to do was add this line:

Header set Access-Control-Allow-Origin "*"

to my .htaccess file. I've done this and my REST application is still working (no 500 internal server error from a bad .htaccess) but when I try to test it from test-cors.org it is throwing an error.

Fired XHR event: loadstart
Fired XHR event: readystatechange
Fired XHR event: error
XHR status: 0
XHR status text:
Fired XHR event: loadend

My .htaccess file looks like this:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ /index.php [QSA,L]
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods: "GET,POST,OPTIONS,DELETE,PUT"

Is there something else I need to add to my .htaccess to get this to work properly or is there another way to enable CORS on my server?
enable cors in .htaccess
Apparently there is an HTTPS environment variable available that can be used easily. For people with the same question:

Header set Strict-Transport-Security "max-age=31536000" env=HTTPS
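Putting that together, a minimal sketch could look like the lines below. The SetEnvIf line is only relevant (and only an assumption) if Apache sits behind a TLS-terminating proxy that sends X-Forwarded-Proto, because in that setup mod_ssl never sets the HTTPS variable itself:

# derive HTTPS when a fronting proxy terminates TLS (assumes the proxy sends X-Forwarded-Proto: https)
SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on

# only emit HSTS on responses sent over a secure connection, per the spec
Header always set Strict-Transport-Security "max-age=31536000" env=HTTPS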
My web application runs on a number of different hosts that I control. To prevent the need to change the Apache config of each vhost, I add most of the config using .htaccess files in my repo so the basic setup of each host is just a couple of lines. This also makes it possible to change the config upon deploying a new version. Currently the .htaccess (un)sets headers, does some rewrite magic and controls the caching of the UA.

I want to enable HSTS in the application using .htaccess. Just setting the header is easy:

Header always set Strict-Transport-Security "max-age=31536000"

But the spec clearly states: "An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport." So I don't want to send the header when sending it over HTTP connections. See https://datatracker.ietf.org/doc/html/draft-ietf-websec-strict-transport-sec-14.

I tried to set the header using environment vars, but I got stuck there. Anyone that knows how to do that?
How to set HSTS header from .htaccess only on HTTPS
This should work:

Header add Access-Control-Allow-Origin "*"
Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type"
Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS"
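One common reason these directives appear to do nothing (or trigger a 500 from .htaccess) is that mod_headers is not loaded. A defensive sketch, assuming a Debian/Ubuntu-style Apache where a2enmod is available:

<IfModule mod_headers.c>
    Header add Access-Control-Allow-Origin "*"
    Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type"
    Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS"
</IfModule>

# on Debian/Ubuntu, enable the module and restart Apache:
#   a2enmod headers
#   service apache2 restart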
I can't figure out why my .htaccess header settings don't work. My .htaccess file content:

Header set Access-Control-Allow-Origin *
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Allow-Headers "*"
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [QSA,L]

But when I remove the Header directives and add them in index.php then everything works fine:

header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Methods: PUT, GET, POST, DELETE, OPTIONS");
header("Access-Control-Allow-Headers: *");

What am I missing?
Header set Access-Control-Allow-Origin in .htaccess doesn't work