Response
stringlengths 8
2k
| Instruction
stringlengths 18
2k
| Prompt
stringlengths 14
160
|
---|---|---|
It is based on client source ip address hash and as long as you have same set of backends stickiness will persist.http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash | I want to replace pound with nginx as loadbalancer and all tests look fine so far. I will do a typical upstream configuration like this:upstream backend {
ip_hash;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}There are now 2 questions left open:How long does this stickyness last? Is there a ttl to be defined somewhere?Does the stickyness survive restarts and/or reloads of nginx?I could not find the answer in the nginx wiki. Links to official docs are welcome. | about ip_hash in nginx upstream module |
May beSSIis what you are looking for?All variables, which available in core module also available in ssi module.Usage example::I had put this string inside test.html and invoke this page throughhttp://localhost/test.htmlAs a result I get following string:localhost:80 | Does nginx offer a way to add the hostname (and port) in an HTML file (without using other solutions, such as PHP)? | Including the hostname in a HTML file served by nginx |
http://wiki.nginx.org/HttpUpstreamModulehttp://wiki.nginx.org/HttpFcgiModuleupstream backend {
server main_backend.server:port1;
server backup.server:port2 backup;
}
fastcgi_pass backend; | Closed.This question isoff-topic. It is not currently accepting answers.Want to improve this question?Update the questionso it'son-topicfor Stack Overflow.Closed10 years ago.Improve this questionI'd like to have 1 web server (nginx) and 2 FastCGI instances of the same application as back-end. The idea is to forward requests to second one if the first one is down.Apparently, I need to use upstream and fastcgi_next_upstream. But I could not find a working example of a nginx.conf file. Does anybody have such example? | How to use fastcgi_next_upstream in Nginx [closed] |
In my case, I had to add the "server_name" line because it wasn't in my nginx config so it was giving me the error message "Cannot find a VirtualHost matching domain my.domain.com" when I ran:certbot --nginxMake sure this is in your config:server {
server_name my.domain.com;
....
} | I have an nginx running.
Now I want my nginx to use SSL:certbot-auto --nginx -d my.domain.com -n --agree-tos --email[email protected]OUTPUT:Performing the following challenges:
tls-sni-01 challenge for my.domain.com
Cleaning up challenges
Cannot find a VirtualHost matching domain my.domain.com.my.domain.com is pointing to the IP of my server. It's its dns name.
What am I doing wrong? I did this already for apache and it was working fine. My nginx is running (and I'm not able to restart it manually after thecertbot-autobut this wasn't necessary when I usedcertbot-auto --apache | using certbot-auto for nginx |
Thanks to @RichardSmith I finally managed to create the right configuration. Here is the final working config. I had to use the combination of nestedlocationblocks and an inverse regex match for it to work.server {
listen 443 ssl;
server_name example.com;
root /home/hamed/laravel/public;
# index index.html index.htm index.php;
ssl_certificate /root/hamed/ssl.crt;
ssl_certificate_key /root/hamed/ssl.key;
location ~ ^/blog(.*)$ {
index index.php;
root /home/hamed/www/;
try_files $uri $uri/ /blog/index.php?do=$request_uri;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/var/run/php5-fpm.hamed.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
location ~ ^((?!\/blog).)*$ { #this regex is to match anything but `/blog`
index index.php;
root /home/hamed/laravel/public;
try_files $uri $uri/ /index.php?$request_uri;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/var/run/php5-fpm.hamed.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
} | I'm trying to configure nginx to serve 2 different php scripts from 2 different location. The configuration is as follows.I have a Laravel installation which resides in/home/hamed/laravelin which itspublicdirectory should be served.I have a Wordpress installation in/home/hamed/www/blog.And this is mynginxconfiguration:server {
listen 443 ssl;
server_name example.com www.example.com;
#root /home/hamed/laravel/public;
index index.html index.htm index.php;
ssl_certificate /root/hamed/ssl.crt;
ssl_certificate_key /root/hamed/ssl.key;
location /blog {
root /home/hamed/www/blog;
try_files $uri $uri/ /blog/index.php?do=$request_uri;
}
location / {
root /home/hamed/laravel/public;
try_files $uri $uri/ /index.php?$request_uri;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/var/run/php5-fpm.hamed.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}The problem is when trying to access the wordpress section by callingexample.com/blogstill the the laravel installtion takes over the request.Now I have tried replacingrootdirective insidelocationblocks withaliasto no avail.According tothis guidehaving theindexdirective ortry_filesinsidelocationtriggers an internal redirect which I suspect causes this behavior.Would someone please help me figure this out? | nginx configuration with multiple location blocks |
server {
listen 80;
server_name myproject.dev;
root /var/www/myproject/web;
}Start from herehttp://wiki.nginx.org/Configuration. | I am moving from an Apache to an NGINX environment and need to convert the following virtual server configuration to NGINX.
DocumentRoot /var/www/myproject/web
ServerName myproject.dev
ServerAlias myproject.dev
AllowOverride All
Order allow,deny
Allow from All
What would be the "exact translation" of this to NGINX? | How To Convert Apache Config To NGINX |
I took the code from @grosser's answer and turned it into a Gem:https://rubygems.org/gems/rails_weak_etagshttps://github.com/johnnaegle/rails_weak_etagsYou can just add this to your gemfile:gem 'rails_weak_etags'And it will be installed into your middleware beforeRack::ConditionalGet:> bundle exec rake middleware
....
use RailsWeakEtags::Middleware
use Rack::ConditionalGet
use Rack::ETag
....Then all the e-tags generated by rails, either with Rack::ETag or with explicit e-tags will be converted to weak. Using a patched, or version > 1.7.3 of nginx, will then let you use e-tags and gzip compression.RACK 1.6 defaults etags to weak- this gem is no longer helpful if you upgrade. | What is the best way to tell rails to useweak instead of strong ETAGswhen using methodsfresh_whenandstale??The reason I ask is thatnginx (correctly) removes strong ETAG headers from responses when on-the-fly gzipping is enabled. | Weak ETAGs in Rails? |
If you put the proxy settings into the server context and let the locations inherit them, then it's not much to duplicate. You can also set up an upstream block to make it easier to change the proxy target should you need to:upstream _varnish {
server localhost:6081;
}
server {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Request-URI $request_uri;
proxy_pass_header Set-Cookie;
location @varnish {
proxy_pass http://_varnish;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
access_log off;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
expires max;
open_file_cache_valid 120m;
try_files $uri @varnish;
}
location ~ \.php$ {
proxy_pass http://_varnish;
}
} | I am trying to create a nginx conf file that has little repetition in it. I am using nginx to serve static files, and it proxies 404s or php content to the named location @varnish:location @varnish {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_pass http://localhost:6081;
proxy_set_header Request-URI $request_uri;
}For the "standard" situation whereby nginx should check to see if it has a file and then pass through to the backend, the following works fine:location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
access_log off;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
expires max;
open_file_cache_valid 120m;
try_files $uri @varnish;
}However, for PHP, I don't even want it to try the file, it should just immediately redirect the request to @varnish:location ~ \.php$ {
rewrite . @varnish last;
}However, this does not appear to work. It seems a pain to have two separate near identical blocks (one for @backend and one for php) both referencing the same proxy, and is the sort of issue where humans can forget to put something in one and not the other. | How to have multiple location blocks use the same named @location |
Try:location ~ ^/(?:styles|css)/(.*)$ { alias /var/www2/styles/$1; }Orlocation ~ ^/(styles|css)/(.*)$ { alias /var/www2/styles/$2; }$1refers to the first capturing group(...). When you added another group it referred to that one instead. You can use a non-capturing group(?:...)instead, or refer to the second capturing group$2. | I would like to create a location rule for two alias to one location.This is the rule used for one location:location ~ ^/images/(.*)$ { alias /var/www2/images/$1; }What I would like to do is define two alias in location. So for example,
I can visithttp://domain.com/styles/file.cssandhttp://domain.com/css/file.cssand it would go to one alias which is /var/www2/styles/I've tried something like this, but it did not work for me.location ~ ^/(styles|css)(.*)$ { alias /var/www2/styles/$1; }But the again, I don't know regex much. | Multiple Nginx Alias to One Location |
It is an optimization - for very small strings simple copy is faster than calling a system (libc) copy function.Simple copy withwhileloop works rather fast for short strings, and system copy function have (usually) optimizations for long strings. But also system copy does a lot of checks and some setup.Actually, there is a comment by author just before this code: nginx, /src/core/ngx_string.h (search ngx_copy)/*
* the simple inline cycle copies the variable length strings up to 16
* bytes faster than icc8 autodetecting _intel_fast_memcpy()
*/Also, a two line upper is#if ( __INTEL_COMPILER >= 800 )So, author did measurements and conclude that ICC optimized memcopy do a long CPU check to select a most optimized memcopy variant. He found that copying 16 bytes by hand is faster than fastest memcpy code from ICC.For other compilers nginx does usengx_cpymem(memcpy) directly#define ngx_copy ngx_cpymemAuthor did a study of differentmemcpys for different sizes:/*
* gcc3, msvc, and icc7 compile memcpy() to the inline "rep movs".
* gcc3 compiles memcpy(d, s, 4) to the inline "mov"es.
* icc8 compile memcpy(d, s, 4) to the inline "mov"es or XMM moves.
*/ | When I was reading the nginx code, I have seen this function :#define ngx_cpymem(dst, src, n) (((u_char *) memcpy(dst, src, n)) + (n))
static ngx_inline u_char *
ngx_copy(u_char *dst, u_char *src, size_t len)
{
if (len < 17) {
while (len) {
*dst++ = *src++;
len--;
}
return dst;
} else {
return ngx_cpymem(dst, src, len);
}
}It's a simple string copy function. But why it tests the length of string and switch to memcpy if the length is >= 17 ? | A curious string copy function in C |
GETs for fragment identifiers don't/shouldn't (some buggy clients may send them) appear in an HTTP request, so you can't have a rewrite rule to match them, regardless of webserver.The HTTP engine cannot make any assumptions about it. The server is not even given it.If you tried to make the initial request for / redirect to /#! rather than serving the root index, you'd end up with a "too many redirects" error, as the client would come back asking for / again (remember that it won't send the # with its request).You'll need to do this with javascript instead for the index document.The bottom line is that it's not available server-side in the GET request. Evencurl has been patchednot to send it any more.You could have nginx location directives to make everything else hit the front controller though:location = / {
}
location = /index.html {
}
location ~ / {
rewrite ^ /#!$uri redirect;
break;
}Beware of this approach though;http://jenitennison.com/blog/node/154goes into a lot more detail about the hashbang debacle at Gawker and other issues surrounding its use. | I'm wondering what a location or rewrite nginx directive for hashbang (#!) urls would look like. Basically routing all non hash-banged url's through the hashbang like a front controller. So:http://example.com/about/staffwould route tohttp://example.com/#!/about/staffI'm unclear what the best technique here would be? Whether writing an if statement to check existence of the hashbang, or just a generic rewrite that filters all requests... | NGINX hashbang rewrite |
After a number of hours trying out different things, the reason was in fact the uwsgi buffer-size just not being high enough even though I had quadrupled it. For those that don't know, you need to add:buffer-size=32768Where the number is some number of bytes that works for your use case. The default is 4096. | I have a Django REST Framework app running behind an Nginx proxy, we have a third party service that redirects to one of the urls in the app. I'm getting 502s from this endpoint when the redirect happens and have narrowed it down to the Referer header being too large. My logic is as follows:Received 502 when the redirect happensHitting the link locally with all the query params returns the expected responseAdding the Referer header (which is quite large) triggers the 502Removing half of the Referer header returns us to the expected resultI've tried increasing my uwsgi buffer-size and nginx proxy buffer. | How do I fix Nginx 502 Bad Gateway on large headers? |
As stated by @silverfox, you need an ingress controller. You can enable the ingress controller in minikube like this:minikube addons enable ingressMinikube runs on IP 192.168.42.135, according tominikube ip. And after enabling the ingress addon it listens to port 80 too. But that means a reverse proxy like nginx is required on the host, to proxy calls to port 80 through to minikube.After enabling ingress on minikube, I created an ingress file (myservice-ingress.yaml):apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myservice-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myservice.myhost.com
http:
paths:
- path: /
backend:
serviceName: myservice
servicePort: 80Note that this is different to the answer given by @silverfox because it must contain the "host" which should match.Using this file, I created the ingress:kubectl create -f myservice-ingress.yamlFinally, I added a virtual host to nginx (running outside of minikube) to proxy traffic from outside into minikube:server {
listen 80;
server_name myservice.myhost.com;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://192.168.42.135;
}
}TheHostheader must be passed through because the ingress uses it to match the service. If it is not passed through, minikube cannot match the request to the service.Remember to restart nginx after adding the virtual host above. | I have installed minikube on a server which I can access from the internet.I have created a kubernetes service which is available:>kubectl get service myservice
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myservice 10.0.0.246 80:31988/TCP 14hThe IP address of minikube is:>minikube ip
192.168.42.135I would like the URLhttp://myservice.myhost.com(i.e. port 80) to map to the service in minikube.I have nginx running on the host (totally unrelated to kubernetes). I can set up a virtual host, mapping the URL to192.168.42.135:31988(the node port) and it works fine.I would like to use an ingress. I've added and enabled ingress. But I am unsure of:a) what the yaml file should containb) how incoming traffic on port 80, from the browser, gets redirected to the ingress and minikube.c) do I still need to use nginx as a reverse proxy?d) if so, what address is the ingress-nginx running on (so that I can map traffic to it)? | Kubernetes Ingress running behind nginx reverse proxy |
Nginx since version 1.1.4 supports HTTP/1.1 when connecting to upstream servers. You just need to set configuration parameterproxy_http_version 1.1(1.0 is the default value).
seehttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version | I have 3 server:
A(nginx)-->B(nginx)-->C(nodejs),When i access A or B,chrome use http/1.1+keepalive by default.I do not set "proxy_http_version 1.1;" and proxy_set_header Connection "";But between A and B,NGINX use http/1.0 by default。That is like:client-->nginxA(upstream to b)-->nginxB(upstream to c)-->C (nodejs)http/1.1-->http/1.0-->http/1.1-->nodejsMy questions is :
why nginx use http/1.1 for upstream by default,between nginx and nginx, upstream use http/1.0 ?THX. | why between nginx/nginx upstream use http/1.0? |
You can't have bothlisten 443 ssl;andssl on;, remove thessl on;line and restart nginx. | am using rails 3.2 and ruby 1.9 for my app, have to run application in https with domain name likehttps://welcome.comon my system. so i configure my nginx by creating ssl certificate for domain name and httpssnapshort of ssl:# HTTPS server
#
server {
listen 443 ssl;
server_name welcome.com;
root html;
index index.html index.htm;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_session_timeout 5m;
}i can able to saw nginx home page by calling welcome.com andhttps://welcome.com. without running the rails applicationMy application also running in port 443 successfully, but after querying in browser likehttps://welcome.comRails terminal showing error:ERROR bad Request-Line `\x16\x03\x01\x00�\x01\x00\x00�\
ERROR bad URI `._i\b8\x10�yA�^6�v�M|In browser throwing error:SSL received a record that exceeded the maximum permissible length.
(Error code: ssl_error_rx_record_too_long)Even tried by clearing browser history repeatedly, but the result is same.Am not sure what i made wrong, can any one help me?have i made any wrong in certificate creation ? | Error code: ssl_error_rx_record_too_long for https in nginx on ruby on rails application |
Okay I've now got bothphp -vandphp-fpm -vreturning the same value of php and i did it by runningbrew doctorwhich told me to run echo'export PATH="/usr/local/sbin/:$PATH"'so now that I have the same versions running and can confirm that php-fpm is running without failing usinglsof -i | grep php-fpmI'm on to normal problems that people have installing php and nginx on their mac books! So I can rest easy tonight knowing that I am slightly closer to my goal!I also now have the following$ which php-fpm
/usr/local/sbin/php-fpm
$ which php
/usr/local/bin/phpThank you everyone for your time and suggestions :) | I've been struggling with this all night and can't find an answer that fixes it!I'm on a mac and using homebrew to install php and nginx, I ran the following which show as successfulbrew install php
brew install nginxno problems so far and I can start both servicesbrew services start nginx
brew services start nginxwhen I run brew services list I get the followingnginx started me /Users/me/Library/LaunchAgents/homebrew.mxcl.nginx.plist
php started me /Users/me/Library/LaunchAgents/homebrew.mxcl.php.plisthowever when trying to run a Wordpress site I get the following error in my nginx log[error] 26099#0: *1 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8080"I have googled the problem and it seems that it's normally a problem with nginx passing a request to php-fpm, I have checked the user that is running each service to make sure they match, I have done it as both me and both root to no avail.
When I check "brew services list" it shows php in orange which I understand to mean it has actually failed.I dug a bit further and it seems that if I run 'php -v' I getPHP 7.2.9 (cli) (built: Aug 23 2018 02:08:27) ( NTS )but if I run 'php-fpm -v' I get:PHP 7.1.16 (fpm-fcgi) (built: Mar 31 2018 03:00:16)I believe this is causing me a problem, I have googled it but haven't got any definitive fixes.Here is another oddity with it:$which php
/usr/local/bin/php
$which php-fpm
/usr/sbin/php-fpmDoes anyone have any ideas how I can resolve this?Thanks in advance! | php -v and php-fpm -v show different versions of php |
See the notes on Nginx in the Tornado docs:http://tornado.readthedocs.org/en/stable/guide/running.htmlSince one Tornado process can only take advantage of one CPU core (Edit:Seeupdated docsfor a development on this), use Nginx to load-balance multiple Tornado processes to use multiple cores
Additionally, Nginx is likely a more efficient static file handler than Tornado. | I found out that we can run the tornado application from just firing something likepython main.py. But everyone else says to deploy tornado with nginx. What are the benefits? I know it's a bit foolish, but I really am confused. | Why use nginx to deploy tornado instead of its built-in server? |
You should turn off buffering in nginx:proxy_buffering off;Reference:http://nginx.org/r/proxy_buffering | I would like to have page content for a web page I am developing appear on screen as it is downloaded. In my test/development environment this works as expected using the PHP flush() command.However, my production setup (WPEngine) uses an Nginx proxy in front of Apache and flush() no longer works (nor do any of the other output buffering commands). I have been able to get the desired behaviour by deliberately filling up the buffer when I want to flush by sending 4k worth of whitespace.However, that feels like a hack and the page in question needs to be flushed 100 times or more so this adds a considerable amount to the total data downloaded.Is there a way to signal to Nginx to flush the buffer (or not buffer at all) by sending control characters and/or setting HTTP headers so I can avoid sending otherwise unnecessary whitespace?Since WPEngine is a managed hosting environment, I am not able to make any changes to the server setup. So, for example, turning off Nginx buffering by adding a directive to the nginx server config is not an option.The way I am currently doing this is as follows:-', time() - $start );
echo $buffer;
sleep(1);
} while( (time() - $start) < 10 );
?> | Flush output buffer in Apache/Nginx setup |
You must be accessing these apps through a domain pointing to these IPs:75.101.163.44
75.101.145.87
174.129.212.2These are the apex faces and they are in front of both bamboo and cedar apps. Varnish is there for bamboo, but any request that goes through them ends up going through varnish too.These faces are only for apex domains. If your app is under a subdomain such as www, it should be setup as a CNAME pointing to appname.herokuapp.com. When setup like that, requests will not go through varnish.For more on Apex's and Heroku, see here:http://neilmiddleton.com/the-dangers-of-a-records-and-heroku/ | According to the comments in the accepted answer hereRails how to Gzip Javascript? (Heroku)and the official cedar documentation (http://devcenter.heroku.com/articles/http-routing#the_herokuappcom_http_stack):Since requests to Cedar apps are made directly to the application server – not proxied through an HTTP server like nginx – any compression of responses must be done within your application. For Rack apps, this can be accomplished with the Rack::Deflater middleware. For gzipped static assets, make sure that Rack::Deflater is loaded before ActionDispatch::Static in your middleware stack.However, as far as I can tell, my app is running on herokuapp.com (cedar) and, according to the heroku logs, is using nginx to serve data (which is great). I've also confirmed via the Content-Encoding HTTP header that it is gzipping data to the browser. According to the documentation, that is NOT supposed to happen on cedar. Am I missing something here? | Heroku Cedar and nginx (gzip) |
There are two reasons why 'if is evil' as far as nginx is concerned. One is that many howtos found on the internet will directly translate htaccess rewrite rules into a series of ifs, when separate servers or locations would be a better choice. Secondly, nginx's if statement doesn't behave the way most people expect it to. It acts more like a nested location, and some settings don't inherit as you would expect. Its behavior is explainedhere.That said, checking things like cookies must be done with ifs. Just be sure you read and understand how ifs work (especially regarding directive inheritance) and you should be ok.You may want to rethink blindly proxying to whatever host is set in the cookie. Perhaps combine the cookie with amapto limit the backends.EDIT: If you use names instead of ip addresses in the id cookie, you'll also need a resolver defined so nginx can look up the address of the backend. Also, your default proxy_pass will append the request onto the end of the setUserCookie. If you want to proxy to exactly that url, you replace that default proxy_pass with:rewrite ^ /index.php/setUserCookie break;
proxy_pass http://localhost:99; | I am using the following configuration for NGinx currently to test my app :location / {
# see if the 'id' cookie is set, if yes, pass to that server.
if ($cookie_id){
proxy_pass http://${cookie_id}/$request_uri;
break;
}
# if the cookie isn't set, then send him to somewhere else
proxy_pass http://localhost:99/index.php/setUserCookie;
}But they say "IFisEvil". Can anyone show me a way how to do the same job without using "if"?And also, is my usage of "if" is buggy? | NGinx : How to test if a cookie is set or not without using 'if'? |
I managed to fix it with some changes.Change 1. Adding /flaskapp to the routes in my flask application. This eliminated the need for URL-rewriting and simplified things greatly.Change 2. nginx.conf changes. I added logc in the location block to redirect http requests as https, new conf:location /flaskapp {
proxy_pass http://myapp:8080/;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# New configs below
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
# Makes flask redirects use https, not http.
proxy_redirect http://$http_host/ https://$http_host/;
}While I didn't "solve" the issue of introducing conditional rewrites based on a known prefix, since I only need one prefix for this app it is an acceptable solution to bake it into the routes. | I have a flask application using nginx for a reverse proxy/ssl termination, but I'm running into trouble when using url_for and redirect in flask.nginx.conf entry:location /flaskapp {
proxy_pass http://myapp:8080/;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}The idea is that a user navigates tohttps://localhost:port/flaskapp/some/location/hereand that should be passed to flask ashttp://localhost:8080/some/location/hereThis works reasonably well when navigating to a defined route, however if the route hasredirect(url_for('another_page')), the browser is directed tohttp://localhost:8080/another_pageAnd fails, when the URL I actually want to go to is:https://localhost:port/flaskapp/another_pageI have tried several other answers for similar situations, but none have seemed to be doing exactly what I am doing here. I have tried using_external=True, settingapp.config['APPLICATION_ROOT'] = '/flaskapp'and many iterations of differentproxy_set_headercommands innginx.confwith no luck.As an added complication, my flask application is usingflask-loginand CSRF cookies. When I tried settingAPPLICATION_ROOTthe application stopped considering the CSRF cookie set byflask-loginvalid, which I assume has something to do with origins.So my question is, how do I make it so that when flask is returning aredirect()to the client, nginx understands that the URL it is given needsflaskappwritten into it? | Handling flask url_for behind nginx reverse proxy |
Django get hostname and port from HTTP headers.
Addproxy_set_header Host $http_host;into your nginx configuration before optionsproxy_pass. | I'm running django on port 8001, while nginx is handling webserver duties on port 80. nginx proxies views and some REST api calls to Django. I'm using django-allauth for user registration/authentication.When a new user registers, django-allauth sends the user an email with a link to click. Because django is running on port 8001, the link looks likehttp://machine-hostname:8001/accounts/confirm-email/xxxxxxxxxxxxxxHow can I make the url look likehttp://www.example.com/accounts/confirm-email/xxxxxxxx?Thanks! | django-allauth: how to modify email confirmation url? |
Edit your startup sequence to run a command or script that captures the interface's IP address and writes it to a file in the formatlisten :80or whatever port you want:echo "listen $(ip -o -4 a s eth0 | awk '{ print $4 }' | cut -d/ -f1):80;" > /path/to/some/fileThen just have your nginx config include that file:include /path/to/some/file;Obviously, you'll need to make sure the IP capture occurs before the nginx startup does. | Is there a way to make Nginx 1.11 bind to a specific interface regardless of the IP address?I've got a home gateway to an ISP provider; it uses DHCP client to obtain its dynamic IP address. I do not know what that IP address is at NGINX configuration time.Surely, there must be a way to make such a fine HTTP server bind to a specific network interface? I know that Apache can. | NGINX bind to a specific network interface, regardless of IP address |
Nginx doesn't flush unless you specify theflushoption (even if you have specified thebufferoption).Here's an example of how to buffer packets of 8k to the log every five minutes:access_log /var/log/nginx/access.log main buffer=8k flush=5m; | How frequently nginx flushes its buffer to access_log by default ?In manual there is not info, just setup syntax:access_log path [format [buffer=size [flush=time]] [if=condition]]; | Nginx access_log default flush time |
Based on your updated comments;if the upstream backend sends the referer header, you could do something like this:location ~* ^/(css|js)/.+\.(css|js)$ {
#checking if referer is from app1
if ($http_referer ~ "^.*/app1"){
return 417;
}
#checking if referer is from app2
if ($http_referer ~ "^.*/app2"){
return 418;
}
}
error_page 417 /app1$request_uri;
error_page 418 /app2$request_uri;
location /app1 {
proxy_pass http://app1.com;
}
location /app2 {
proxy_pass http://app2.com;
}For example, if the backend on app2.com, requests the test.css like this:curl 'http://example.com/css/test.css' -H 'Referer: http://app2.com/app2/some/api'The request land here:/app2/css/test.css | My question is similar toNginx Relative URL to Absolute Rewrite Rule?- but with an added twist.I have nginx acting as a proxy server, which proxies for multiple apps, similar to this (simplified) config:server {
listen 80;
server_name example.com;
location /app1 {
proxy_pass http://app1.com;
}
location /app2 {
proxy_pass http://app2.com;
}
}This works fine, but as in the other question, these applications (app1andapp2) use relative urls such as/css/foo.css, or/js/bar.js. Also it's a big problem to ask all applications to change to something like/app1/css/foo.css.Is it possible for nginx to intelligently figure out which application should handle the request? FTR, users would be accessing these applications like this:http://example.com/app1/fooactionorhttp://example.com/app2/baraction.If it matters, all applications are Java/Tomcat based apps.TIA! | proxying relative urls with nginx |
Switch away from using TCP sockets and going to UNIX sockets (assuming you are on a unix based server)Start memcached with a socket enabled:
Add-s /tmp/memcached.socketto your memcached startup line (Note, sockets disables networking support)Then in PHP, connect using persistent connections, and to the new memcache socket:$memcache_obj = new Memcache;
$memcache_obj->pconnect('unix:///tmp/memcached.socket', 0);Another recommendation, if you have multiple "types" of cached objects, start a memcached instance for each "type" and distribute your hot items amongst them.Drupal does this, you can see how their config file and memcached init is setuphere.Also, it sounds to me like your memcached timeout is set WAY to high. If it's anything above 1 or 2 seconds, you can lock scripts up. The timeout should be reached, and the script should default to retrieving the object via another method (SQL, file, etc)The other thing is verify that your memcache isn't being put into a swap file, if your cache is smaller than your average free ram, try starting memcache with the -k option, this will force it's cache to always stay in ram and can't be swapped.If you have a multi-core server, also make sure memcached is compiled with thread support, and enable it using-t | I am running memcached on my server and when it hits 600+ req/s it becomes unstable and causes a big load of problems. It appears when the request rate gets that high, my PHP applications at random times are unable to connect to the memcache server, causing slow load times which makes nginx and php-fpm freak out and I receive a bunch of 104: Connection reset by peer errors in my nginx logs.I would like to point out that in my memcache server I have 'hot objects' - objects that at times receive 90% of the memcache requests. I also noticed when so many requests hit a single object, it slightly adds a little more load time to the overall page (when it manages to load).I would greatly appreciate any help to this problem. Thanks so much! | 600+ memcache req/s problems - help! |
For nginx server to allow SSL encryption you need to provide ssl flag while listening in nginx.conf
and only ssl certificate will not be sufficient, you will need the ssl certificate key and password as well and they must be configured.charset utf-8;
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
}
server {
listen 443 ssl;
ssl_certificate /usr/nginx/ssl.crt;
ssl_certificate_key /usr/nginx/ssl.key;
ssl_password_file /usr/nginx/ssl.pass;
server_name localhost;
root /usr/nginx/html;
}And you need to put the ssl certificate, key and password via volumes or via embedding in docker container. If you are running container over kubernetes cluster, adding them via kubernetes secrets will be better option.For Dockerfile you can add likeFROM nginx
COPY dist /usr/nginx/html
RUN chmod -R 777 /usr/nginx/html/*
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY ssl.crt /usr/nginx/
COPY ssl.pass /usr/nginx/
COPY ssl.key /usr/nginx/
EXPOSE 80:443
ENTRYPOINT nginx -g 'daemon off;'For further info you can refer the Nginx Docker articlehttps://medium.com/@agusnavce/nginx-server-with-ssl-certificates-with-lets-encrypt-in-docker-670caefc2e31 | I am trying to run a UI application with Docker using nginx image I am able to access the service on port 80 without any problem but whenever I am trying access it via https on 443 port I am not able to access the applications the site keeps loading and eventually results in not accessible I have updated the nginx.conf file in default.conf to allow access over port 443Following is my nginx.confcharset utf-8;
server {
listen 80;
server_name localhost;
root /usr/nginx/html;
}
server {
listen 443;
server_name localhost;
root /usr/nginx/html;
}I have added the SSL self-signed certificate in the /usr/nginx folder and exposed port 443 via DockerfileThe following is my DockerfileFROM nginx
COPY dist /usr/nginx/html
RUN chmod -R 777 /usr/nginx/html/*
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY domain.crt /usr/nginx
EXPOSE 80:443
ENTRYPOINT nginx -g 'daemon off;'Can anyone please explain me is port 443 not allowing any access | Running Nginx Docker with SSL self signed certificate |
Youdidend up with 2 Nginx installations:The one installed globally by your OS's package manager (/usr/sbin/nginx). This uses /etc/nginx/nginx.conf as configuration file by default.The one installed by Phusion Passenger (/opt/nginx/sbin/nginx). This uses /opt/nginx/conf/nginx.conf as configuration file by default.Only (2) has Phusion Passenger support. Ignore (1) and do not use it. | I'm trying to move from Apache + Passenger to Nginx + passenger on my Ubuntu Lucid Lynx box.When I install passenger:sudo gem install passengerandcd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
sudo ./passenger-install-nginx-moduleeverything is fine (no error). Nginx is downloaded and compiled and installed at the same time (when selecting the first option during passenger installation). By default it is installed in/opt/nginx.I end up with the configuration file/opt/nginx/conf/nginx.conf; This conf file was automatically updated with passenger config). The thing I do not understand is that I also have the configuration file/etc/nginx/nginx.conf. What is the purpose of this one when it seems that the conf file in/opt/...is the main one?When I run/etc/init.d/nginx start, it starts correclty saying that/etc/nginx/nginx.confis ok. Does it mean that it does not check the other conf file?I updated/etc/init.d/nginxscript and added/opt/nginx/sbinat the beginning of the PATH and it seems the correct conf file is taken into account. It seems like I have two nginx installations where I only relied on passenger to install it. | nginx with passenger |
The issue in this case is that the error response didn't have an appropriateAccess-Control-Allow-Originon it, so the requesting application didn't have permissions to view it. That is, even the error messages are subject to cross-origin policy. | I was testing an REST Api that uploads image file to server.The image was too large and exceeded max request body size, so Nginx refused it and returned response 413(Request Entity Too Large).Nginx: error.log*329 client intended to send too large body: 1432249 bytes, client: xx.xx.xx.xx, server: api.example.com, request: "POST /images HTTP/1.1", host: "api.example.com", referrer: "https://example.com/posts/create"However, I found that firefox/chrome console said,Chrome: consoleAccess to XMLHttpRequest at 'https://api.example.com/images' from origin 'https://example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.Is there any connection between CORS and 413 error? Where does this message comes from and why? | Why browsers display CORS error in case of response 413? |
Good starting point :heroku/heroku-buildpack-nginxWhat you are looking for is rate-limiting with NGINX, read this for a better understandingand here you have an example gist:NGINX reverse proxy with rate limitingThis is the file of theheroku-nginx-node-examplethat I think you have to add thelimit_reqoptionsIf you need more help, show what you have tried so far and I will edit this answer. | How does one configure nginx for a heroku nodejs web application? I would like to configure nginx such that an IP address is limited to N requests for a given time period. Like the classic "You're doing that too much" message as seen on Reddit.Thanks,Charles | How to configure nginx for heroku nodejs web application |
Yes, you can, there is a package callednode-sspi. It only works on Windows environment though.Windows SSPI server-side authentication for NodeNodeSSPI to Node.js is what mod-auth-sspi to Apache HTTPD. In a nutshell NodeSSPI authenticates incoming HTTP(S) requests through native Windows SSPI, hence NodeSSPI runs on Windows only.If you need to use it for other OS, you need to develop your own node module oruse Apache. | I have an nginx reverse proxy to a few node apps. Our users are all on a Windows domain controlled network. I'm aware I can useexpress-ntlmorpassport-windowsauthto prompt the user for their login credentials, but that's non-integrated auth.Is it possible to use integrated auth (windows authenticated users can bypass credentials prompt) directly from within node.js (or nginx) without IIS (or Apache)? If so, how?I suppose we could replace nginx with IIS as the reverse proxy, but I'd like to avoid that if I can. | Is it possible to use Windows integrated auth without IIS? |
I am assuming the following:Current work flow:User run php script from command line, which communicate with a server side script/cgi setup in Nginx using http requestServer side script/cgi in Nginx will take the incoming data, process it and put it in database, or send out to end userOP concern:Efficiency of command line php script communicating with Nginx server side script using http protocol, which maybe overkill as the communication happen within the same server.Proposal 1Command line php script will write all information into file(s),
then send one http request to Nginx server side cgi scriptNginx server cgi script, upon receiving the request, will pick up all
information from file(s), then process itramfs (ram disk) can be use to minimize I/O to physical HDProposal 2Combine your command line php script into the Nginx server side script, and create a web interface for it. Current command line user will login webpage to control the process they used to do it with command line tool.Pro:No more inter-scripts/inter-process communication. The whole work flow is in one process. This maybe more scalable for the future also, as multiple users can log in through web interface and handle the process remotely. Additionally, they do not require OS level accounts.Con:May need more development time. (But you only have to maintain one code base instead of two.) | We are developing a realtime app and we are using nginx push stream module for a websockets part. Firstly, data is send from a client to a php script that does some authentication and stores needed information in database and then pushes information to nginx that later sends it to a subscribed users on a specific sockets. Quite often there will be situations when there are more that 30 http requests made from this script to local nginx (which I am not exactly sure is a bad thing?).QuestionIs it possible to send information from php to nginx without http requests? Is there any way that my php script can communicate with nginx? What is a best practise to handle this kind of communications? Is sending 30+ http requests per php script a good practise?I have read towards some AMQP solutions but haven't found information where nginx is a consumer of messages from rabbitmq.I will gladly provide any additional information if something is not clear. | Sending information to a ngnix from php on the same server without http |
I wanted to do this too, but apparently by design nginx cannot expand variables in theerror_logcommand, in case there are errors doing so and it cannot get a log filename to write them to.Their suggestion is to use some program to generate your configuration files instead. You could usesedfor this, to automatically search-and-replace your own variables and placing the output in the nginx configuration directory. | I want to write a config file for an nginx virtual host that looks like this:server {
listen 80;
server_name www.my-domain-name.com;
access_log /home/me/sites/$server_name/logs/access.log;
error_log /home/me/sites/$server_name/logs/error.log;
location /static {
alias /home/me/sites/$server_name/static;
}
location / {
proxy_pass http://localhost:8000;
}
}Using$server_nameseems to work find for thelocation /static, but it doesn't seem to work for theaccess_loganderror_log-- am I doing something wrong? Or is this just not possible? Can I do it some other way?[update] - this is the error message when trying to reload nginx:nginx: [emerg] open() "/home/me/sites/$server_name/logs/error.log" failed (2: No such file or directory) | Nginx: can I use $server_name when specifying access log location? |
mod_rpafwill let you do this.This sets the HTTPS value in Apache to "on" based on the headers sent by nginx so Cake will work out of the box (as well as any other apps run in Apache).It also corrects the values for REMOTE_ADDR, SERVER_PORT and HTTP_HOST.Here is my example config:
RPAF_Enable On
RPAF_ProxyIPs 127.0.0.1 10.0.0.0/24
RPAF_SetHostName On
RPAF_SetHTTPS On
RPAF_SetPort On
# If mod_rewrite redirects then we lose the HTTPS status to REDIRECT_HTTPS.
# This resets it back. This happens with Cake's front controller
SetEnvIf REDIRECT_HTTPS on HTTPS=on
| CakePHP (all versions that I've seen) check against$_SERVER['HTTPS']to see whether a request has been made over HTTPS instead of plain HTTP.I'm using nginx as a load balancer, behind which are the Apache application servers. Since the SSL connection terminates at the load balancer,$_SERVER['HTTPS']is not set as far as CakePHP is concerned.I'd like to find a secure way to detect HTTPS on the app servers.So far, I've put this into my CakePHP configuration:$request_headers = getallheaders();
if ( (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS']) || ( isset($request_headers['X-Forwarded-Proto']) && $request_headers['X-Forwarded-Proto'] == 'https' ) ) {
$ssl = true;
// overwrite environment vars (ugly) since CakePHP won't honour X-Forwarded-Proto
$_SERVER['HTTPS'] = 'on';
$_ENV['HTTPS'] = 'on';
} else {
$ssl = false;
}And then in the nginx configuration, I've usedproxy_set_header X-Forwarded-Proto https;to add the flag to any requests between the load balancer and the back-end application servers.This works perfectly fine, but anyone making a direct request to the app servers could fool them into thinking they are browsing over SSL when they're not. I'm not sure whether this is a security risk, but it doesn't seem like a good idea.Is it a security risk? What's the better solution?Since usingX-Forwarded-Protoseems likesomething of a standard, the solution may be a good patch to be submitted to the CakePHP core, so I think any answer can legitimately involve editing core files too. | How can I securely detect SSL in CakePHP behind an nginx reverse proxy? |
The two servers (not the client) need to send the following headers:Access-Control-Allow-Origin : Decide which origin could call into the serverAccess-Control-Allow-Methods : The method that is allowed to access the resource (GET or POST)Access-Control-Max-Age : How long the cache is heldYou could inspect the headers returned from the server (using Firebug or others) if the servers are supporting cross origin resource sharing.If you can't modify the two servers to add the headers, one other possibility to set up a proxy that sit between your request and two servers. This proxy could add the headers if you need to access themIf you own admin right to the servers,this CORS pageshows how to add the headers in various platforms. | I have two URLs:One is the application URL =http://domain.com/appOne is the application API URL =http://api.domain.com/How can I get the application to be able to request things from the api at a different subdomain.I have already tried putting Access-Control-Allow-Origin: * on both sides with no luck.Thanks | Cross-Subdomain Requests |
You can use the post_action directive to trigger a sub_request after the main request is complete.Useful for the sort of logging you have in mind.** OCT 2016 UPDATE **The post_action directive has been removed from the Nginx documentation and while it still appears to work, usage is inadvisable. Caveat Emptor!** JAN 2020 UPDATE **TheMirror Module, introduced in Nginx 1.13.4, essentially replicates the post_action directive. | I wanted to count the number of requests to a particular url pattern. Not sure how this is done in NGinx.Is this possible:When an request to the url pattern comes, we serve that request first. Then NGinx makes another request asynchronously to a server which counts the impression. NGinx does not wait for the response of this request.Thanks. | NGinx - Count requests for a particular URL pattern |
I think you should check thisgithub thread, it seems like it could help you.Basically, after few hours, a Nodejs server stop functioning, and the poor nginx can not forward its requests, as the service listening to the forward port is dead. So it triggers a 502 error.It was all due to a memory leak, that leads to a massive garbage collection, then to the server to crash. Check your memory consumption, you could have some surprises. And try to debug your app code, a piece (dependency) at the time.Updated answer:So, i will add another branch to my question as it seems it has not helped you so far.
You could try to get rid ofpm2, and usesystemdto manage your app life cycle.Create a service filesudo vim /lib/systemd/system/appname.servicethis is a simple file i used myself for a random ExpressJS app:[Unit]
Description=YourApp Site Server
[Service]
ExecStart=/home/user/appname/index.js
Restart=always
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/home/user/appname
[Install]
WantedBy=multi-user.targetNote that it will try to restart if it fails somehowRestart=alwaysManage it with systemdRegister the new service with:sudo systemctl daemon-reloadNow start your app from systemd with:sudo systemctl start appnamefrom now on you should be able to manage your app life cycle with the usual systemd commands.You could add stdout and stderr to syslog to understand what you app is doingStandardOutput=syslog
StandardError=syslogHope it helps more | I have a website with Nginx installed as a reserve proxy for an ExpressJS server (proxies to port 3001). This uses Node and ReactJS for my frontend application.This is simply a testing website currently, and isn't known or used by any users. I have this installed on a Digital Ocean Droplet with Ubuntu.Every morning when I wake up, I load my website and see502 Bad Gateway. The problem is, I don't know how to find out how this happened. I have PM2 installed which should automatically restart my ExpressJS server but it hasn't done so, and when I runpm2 list, my application is still showingonline:When I runpm2 logs, I get the following error (I am running this as an Administrator):So I'll runpm2 restart allto restart the app, but then I don't see any crash information. However on this occasion when taking this screenshot, there were a couple of unusual requests./robots.txt,/sitemap.xmland/.well-known/security.txt, but nothing indicating a crash:When I look at my Nginxerror.logfile, all I can see is the following:There is, however, something obscure within myaccess.log([09/Oct/2018:06:33:19 +0000]) but I have no idea what this means:If I runcurl localhost:3001whilst the server is offline, I will receive a connection error message. This works fine after I runpm2 restart all.I'm completely stuck with this and even the smallest bit of help would be appreciated greatly, even if it's just to tell me I'm barking up the wrong tree completely and need to look elsewhere - thank you. | ExpressJS Server Goes Offline Every Night - 502 Bad Gateway |
Change root line to:root /var/www/WordPress/;for the$fastcgi_script_namedoesn't include/ | My situation is this, I have two Docker containers:Runs PHP-FPM on port 9000Runs nginx and has PHP files (should the PHP-FPM container have access to the files?)I keep getting the following error:FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.0.1, ser
ver: _, request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.2:9000", host: "172.17.0.3"I readherethat this is "always related to a wrongly setSCRIPT_FILENAMEin the nginxfastcgi_paramdirective."The problem is, I don't know how to resolve it :-PConfig in Container 2:server {
listen 80 default_server;
listen [::]:80 default_server;
charset UTF-8;
root /var/www/WordPress;
index index.php index.html index.htm;
server_name _;
location / {
try_files $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_pass 172.17.0.2:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /var/www/WordPress$fastcgi_script_name;
# set headers
add_header Cache-Control $cc;
access_log off;
expires $ex;
}
location ~* \.(js|css|svg|png|jpg|jpeg|gif|ico|eot|otf|ttf|woff)$ {
add_header Access-Control-Allow-Origin *;
add_header Cache-Control "public";
access_log off;
log_not_found off;
expires 1y;
}
location ~ /\.ht {
deny all;
}
} | How to resolve a PHP-FPM Primary script unknown with a PHP-FPM and an Nginx Docker container? |
The problem might be happening because of conflicting stubs in your various phar files. Try: | We are using a Ubuntu+nginx+php5-fpm combination on our servers with PHP version being 5.5. We are trying to run index.php which includes a bunch of phar files. Something like:When this script is run from the command line PHP, it works fine. When this is run from either php Development server (php -S) or from nginx, we get the following error:2013/11/18 17:56:06 [error] 14384#0: *597 FastCGI sent in stderr: "PHP message: PHP Fatal error: Cannot redeclare class Extract_Phar in b.phar on line 103I don't have a class called Extract_Phar - so I presume that my build process is adding it somewhere along the way. I have used phing to build the same, just in case that helps. The phing target is:
And the index.php in my intophar folder is something like:include("api/LogUtils.inc.php");
// Other relative include statementsI have played around with apc flags based on other answers and have set the following:apc.include_once_override = 0
apc.canonicalize = 0
apc.stat = 0
apc.enabled=0
apc.enabled_cli=0
apc.cache_by_default = 0None of this helps and we are unable to run our code. ANy suggestions? | Php5-FPM error while including multiple phar files |
In this line,fs.readFileSync(filepath, 'utf8')the encoding is set to'utf8'. It needs to be'binary'.Also, theres.end(file_content)function needs to pass the right encoding. Tryres.end(file_content, 'binary').I had the same issue and had to figure it out myself, this answer doesn't seem to exist anywhere online. | I downloaded the.wofffile from Google web fonts for some network reason in China. Previously I tried@font-facethat onGithub Pagesand it works. But this time it took me an hour to find where was broken.I use Node to serve static files withmime, and thecontent-typeappears to beapplication/x-font-woff, and my code in CoffeeScript:exports.read = (url, res) ->
filepath = path.join __dirname, '../', url
if fs.existsSync filepath
file_content = fs.readFileSync filepath, 'utf8'
show (mime.lookup url)
res.writeHead 200, 'content-type': (mime.lookup url)
res.end file_content
else
res.writeHead 404
res.end()As thecontent-typeof.woffon Github Pages isapplication/octet-stream, I just commnet out that line in my code to make it the same.. But it still failed:exports.read = (url, res) ->
filepath = path.join __dirname, '../', url
if fs.existsSync filepath
file_content = fs.readFileSync filepath, 'utf8'
show (mime.lookup url)
# res.writeHead 200, 'content-type': (mime.lookup url)
res.end file_content
else
res.writeHead 404
res.end()At last, I switched to a Nginx server to serve the.wofffile.. and finally it began to work.But how Can I fix that on Node? | Why Node.js failed to serve .woff files |
Heard back from Heroku Support:We do not recommend trying to add nginx to your stack, nor does Heroku provide that layer. But you are correct that if you wish to gzip responses, your application must gzip the responses - this is often handled in application framework (e.g. Ruby's Rack) as a middleware layer. gzip is extremely fast and this should not add any significant latency to your requests.This confirms that you do not need to run Nginx for its reverse proxy feature on Heroku. | TL/DR: My primary question: Is it worth my time to try to add NGinx to my Django/Gunicorn/Cedar/PostgresSql app or does Heroku do this type of performance improvement for me?In the Cedar documentation (https://devcenter.heroku.com/articles/cedar), it clearly states that cedar does not support a reverse-proxy. "Cedar does not include a reverse proxy cache such as Varnish, preferring to empower developers to choose the CDN solution that best serves their needs."Again in the Routing article (https://devcenter.heroku.com/articles/http-routing#gzipped-responses), it is specified that nginx is not done automatically: 'Since requests to Cedar apps are made directly to the application server – not proxied through an HTTP server like nginx – any compression of responses must be done within your application."However, in the Python Faq, it says otherwise:https://devcenter.heroku.com/articles/python-faq#do-python-applications-run-behind-nginx"No. There is no need for using a reverse proxy on Heroku because the Heroku Cloud Platform takes care of everything those servers normally do for you.Your application simply provides a Python server to respond to HTTP requests.Gunicorn, Gevent, and Eventlet are excellent options.Because the web server is embedded in your application, you can easily test and debug the exact same code in any environment. This development and production parity makes it easy to troubleshoot problems during your development cycle."It seems to me like Heroku takes care of some of the benefits of reverse proxies, but not compression. Is that true? | Clarification: Does Heroku Run Python Apps Behind Nginx or Not? |
Just yesterday I had to set up some Unicorns and nginx. I followed:
The article aa_memon already mentioned, and http://www.slideshare.net/mauricio.linhares/deploying-your-rails-application-to-a-clean-ubuntu-10
Also, here is my Unicorn config and init.d script: https://gist.github.com/2049606. The deploy script I ended up using is almost identical to those mentioned in the links above. If you are using RVM, make sure you add something like:
$:.unshift(File.expand_path('./lib', ENV['rvm_path'])) # Add RVM's lib directory to the load path.
require "rvm/capistrano" # Load RVM's capistrano plugin.
set :rvm_ruby_string, '1.9.3-p125@YOURGEMSET' # Or whatever env you want it to run in.
A critical point is that you specify the PID files to be in the correct places (I mistyped that and it took me half an hour to find my mistake). Also make sure your user can write all necessary files.
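For reference, the nginx side of a unicorn setup usually boils down to an upstream pointing at the unicorn socket; a minimal sketch (the socket path and port are assumptions, not from the linked guides):
upstream unicorn {
    server unix:/tmp/unicorn.sock fail_timeout=0;
}
server {
    listen 80;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://unicorn;
    }
}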
 | Can anyone suggest a good unicorn + nginx + cap deploy how-to?
I have searched high and low and spent about 5 hours getting my deploy up and running, with all kinds of errors. | good unicorn + nginx + cap deploy howto?
As you can see in your haproxy post, haproxy acts as a forward proxy via option http_proxy. What this option means is described in the manual: https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-option%20http_proxy
It sometimes happens that people need a pure HTTP proxy which
understands basic proxy requests without caching nor any fancy
feature. In this case, it may be worth setting up an HAProxy instance
with the "option http_proxy" set. In this mode, no server is declared,
and the connection is forwarded to the IP address and port found in
the URL after the "http://" scheme.
No host address resolution is performed, so this only works when pure
IP addresses are passed. Since this option's usage perimeter is rather
limited, it will probably be used only by experts who know they need
exactly it. This is incompatible with the HTTP tunnel mode.
As far as I know nginx does not have this feature. A similar question is this: https://superuser.com/questions/604352/nginx-as-forward-proxy-for-https
Why can't you use haproxy as described in the post you linked? | I am aware that nginx can be configured to act as a load balancer, but I'm wondering if it is possible to load balance between proxies? Let's say I have multiple proxies running on localhost, and I want to use nginx to provide a single point of connection so that I can rotate between the proxies. I am trying to achieve something similar to the post here, which is using HAProxy instead of nginx. I have the following nginx.conf:
events { }
http {
upstream proxies {
server localhost:9998;
server localhost:9999;
server localhost:10000;
}
server {
listen 8080;
location / {
proxy_pass http://proxies;
}
}
}
However, when I send a curl request like this:
curl http://icanhazip.com -x localhost:8080
it ignores the url, and I get a response similar to what I would expect if I had directly sent a request to one of the proxy servers, like so:
curl localhost:9999
Of course, I did not really expect it to work, since there must be some option to tell nginx to treat the upstream servers as proxies themselves. However, I was not able to find how to do this after searching online. | How to configure nginx to act as a load balancer for proxies?
Django is running on plain HTTP only behind the proxy, so it will always use that to construct absolute URLs (such as redirects), unless you configure how it can see that the proxied request was originally made over HTTPS.
As of Django 1.4, you can do this using the SECURE_PROXY_SSL_HEADER setting. When Django sees the configured header, it will treat the request as HTTPS instead of HTTP: request.is_secure() will return true, https:// URLs will be generated, and so on.
However, note the security warnings in the documentation: you must ensure that the proxy replaces or strips the trusted header from all incoming client requests, both HTTP and HTTPS. Your nginx configuration above does not do that with X-Forwarded-Ssl, making it spoofable.
A conventional solution to this is to set X-Forwarded-Protocol to http or https, as appropriate, in each of your proxy configurations. Then, you can configure Django to look for it using:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
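On the nginx side this could look like the following; a minimal sketch, where the header name is just a convention that has to match the Django setting above:
    proxy_set_header X-Forwarded-Protocol "http";   # in the listen 80 server's location block
    proxy_set_header X-Forwarded-Protocol "https";  # in the listen 443 server's location block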
 | I'm trying to test my Django app locally using SSL. I have a view with the @login_required decorator. So when I hit /locker, I get redirected to /locker/login?next=/locker. This works fine with http.
However, whenever I use https, the redirect somehow drops the secure connection, so I get something like
https://cumulus.dev/locker -> http://cumulus.dev/locker/login?next=/locker
If I go directly to https://cumulus.dev/locker/login?next=locker the page opens fine over a secure connection. But once I enter the username and password, I go back to http://cumulus.dev/locker.
I'm using Nginx to handle the SSL, which then talks to runserver. My nginx config is
upstream app_server_djangoapp {
    server localhost:8000 fail_timeout=0;
}
server {
listen 80;
server_name cumulus.dev;
access_log /var/log/nginx/cumulus-dev-access.log;
error_log /var/log/nginx/cumulus-dev-error.log info;
keepalive_timeout 5;
# path for static files
root /home/gaurav/www/Cumulus/cumulus_lightbox/static;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
}
server {
listen 443;
server_name cumulus.dev;
ssl on;
ssl_certificate /etc/ssl/cacert-cumulus.pem;
ssl_certificate_key /etc/ssl/privkey.pem;
access_log /var/log/nginx/cumulus-dev-access.log;
error_log /var/log/nginx/cumulus-dev-error.log info;
keepalive_timeout 5;
# path for static files
root /home/gaurav/www/Cumulus/cumulus_lightbox/static;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
} | Django @login_required dropping https |
So I finally found more info on this.
When writing server app:4000;, app is a DNS entry which resolves to multiple instances. It is possible to update those DNS entries without having to restart nginx. The detail is here: https://serverfault.com/a/916786/182596
This reddit post and this nginx article helped also.
Basically, one has to set the nginx configuration to use the docker DNS server 127.0.0.11 and put the proxy in a variable. Here's the conf:
resolver 127.0.0.11 valid=10s;
server {
set $app app:4000;
location / {
proxy_pass http://$app;
}
}
Once docker-compose up -d --no-deps --scale app=2 --no-recreate app is called, it starts routing to both instances.
The issue is that when scaling down, it takes the DNS entry TTL to register that it is not valid anymore; hence, with 10s, I do have 50% of my traffic being down for [0-10s], which is decent but not perfect.
I'm currently investigating:
- what is a good TTL duration
- if there is a way to manually trigger a DNS entries refresh | So I have:
version: "3.6"
services:
nginx:
image: nginx
app:
    image: node:latest
And my nginx config is:
upstream project_app {
server app:4000;
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://project_app;
    }
In order to update a container without downtime (rolling updates), I first upscale the app service to 2:
docker-compose up -d --no-deps --scale app=2 --no-recreate app
It will create project_app_1 along project_app. But at this step, even when the new project_app_1 container is ready, all the traffic goes to project_app, the former container. To have them both used, I then need to run docker-compose restart nginx. Now the traffic is routed to both project_app and project_app_1, which is really cool. I am now ready to kill project_app, which is outdated now.
My questions are:
- Do I need to restart nginx again after it is killed, to make sure all the traffic gets routed to project_app_1, or is it somewhat automatic?
- The fact that http://app:4000 works is because of DNS hostname config, right? Where can I learn more on this?
- If shutdown discovery works automatically in nginx, isn't there a way to make startup discovery also automatic, in order to avoid restarting nginx, which induces a 2 second downtime?
Thanks. PS: If you are curious about the whole script I use, I reported it on the associated github issue. | How do docker-compose network aliases work if there are multiple instances for zero downtime container update?
The servers normally queue the requests until a thread is available to handle them. If there are many requests in the queue but only a few threads, a single thread might handle the request quite fast, but if you add the time the request spent queued, the consumer sees a much longer time.
See: How to increase number of threads in tomcat thread pool? / Measuring the number of queued requests for tomcat
See if you can increase the number of threads or decrease the accept_count, but keep in mind that the number of other resources like database connections might also need to be increased. Also keep in mind that more threads might mean more competing for resources. It might be worth trying to change these parameters. Normally the access log should also show the time the message is queued and handled, but I am not sure. | I have five tomcat instances behind nginx. Sometimes the nginx upstream_response_time is very big, more than 1 second, while the tomcat local access log shows the process time is only 50ms (I use %D to log process time). What is the possible reason and how do I fix it? It does not seem the network is slow, since other applications run fast.
Update: It seems the nginx upstream_response_time = %D + 1 sec. | Tomcat process time is small but nginx shows it is big
flushpackets=on means flushing out the buffer after each chunk is sent.
Order allow,deny
Allow from all
ProxyPass http://HOSTNAME:8080/guacamole/ flushpackets=on
ProxyPassReverse http://HOSTNAME:8080/guacamole/
| I have an application that requires me to disable buffering in the reverse proxy. I managed to do that with the following nginx configuration:server {
listen 80;
server_name 10.0.0.104;
location / {
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://http_backend;
proxy_redirect default;
}
}
upstream http_backend {
server 10.0.0.86:8080;
keepalive 16;
}
I need to have the same setup working on Apache, but Apache doesn't have a proxy_buffering off directive. The only conf I was able to find in the mod_proxy docs is ProxyIOBufferSize and ProxyReceiveBufferSize, but they have a minimum value instead of an option to disable buffering. I tested with those but my application fails. | Apache equivalent of Nginx `proxy_buffering off`
At the moment, that is not supported in nginx. But there is senginx [1]; its proxy module is extended to support a client certificate handshake with the origin server.
[1] http://www.senginx.org/en/index.php/Proxy_HTTPS_Client_Certificate
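(For what it's worth, later nginx releases added proxy_ssl_certificate and proxy_ssl_certificate_key, available since 1.7.8, which let plain nginx present a client certificate to the upstream; a minimal sketch, with assumed key paths:)
location / {
    proxy_pass https://server1;
    proxy_ssl_certificate     /etc/nginx/client_keys/client.crt;
    proxy_ssl_certificate_key /etc/nginx/client_keys/client.key;
}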
 | I have 2 Nginx servers, server1 and server2. server1 requires client ssl verification. server2 proxies all requests to server1.
The problem is: while I am trying to access my service directly from server1, the browser asks for my client certificate and it works fine. But from server2 it always gives the error "400 Bad Request. No required SSL certificate was sent".
server1 nginx config is
server {
    listen 443;
server_name server1 ;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_client_certificate /etc/nginx/client_keys/keys.crt;
ssl_verify_client on;
ssl_verify_depth 1;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
ssl_prefer_server_ciphers on;
location / {
proxy_pass https://some-service;
}
}
server2 nginx config is
server {
listen 443 default_server;
server_name server2;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_client_certificate /etc/nginx/client_keys/keys.crt;
location / {
proxy_pass https://server1;
}
} | Nginx ssl_verify_client and proxy_pass [closed] |
gem uninstall passenger will remove passenger and all these dependencies:
passenger, passenger-install-apache2-module,
passenger-install-nginx-module, passenger-config, passenger-status,
passenger-memory-stats, passenger-make-enterprisey | I'm on Mac OSX. Nginx is installed in /opt/nginx. How do I uninstall it? Any thoughts? | Installed Nginx with passenger-install-nginx-module. How do I uninstall it?
First, you have to configure PHP with FPM or fastcgi (the older method) on nginx; there are plenty of docs available for that.
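The nginx half of that usually boils down to a location block like this; a minimal sketch, where the fastcgi_pass address is an assumption that must match your php-fpm setup:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;  # or unix:/var/run/php-fpm.sock
}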
Once you have set up PHP with either of the methods, all you have to do is extract the phpmyadmin files to the docroot in a subdirectory and configure phpmyadmin by editing config.inc.php or using the setup script provided at /setup.
To set up PHP 5.3 FPM + nginx: http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian | I'm using Ubuntu 11.04 and Nginx. I want to install phpmyadmin and have access to it on mydomain.com/phpmyadmin. I've looked around and I see some ways to get it to work, but not in the way mentioned above. I'm sure it's simple to do, but I'm a complete server noob atm.
Edit: Nevermind, I found out an easy way to do it. I just created a symbolic link from my phpmyadmin folder into my public web folder. I did this before, but I didn't clear the cache, so I thought it didn't work.
For Ubuntu 11.04 users with the default nginx file path, here are the steps:
1) ln -sf /usr/share/phpmyadmin /usr/share/nginx/www
2) /etc/init.d/nginx restart
3) delete your browser cache | How do I get Phpmyadmin to work with Nginx and Ubuntu? [closed]
Otherwise, I'll have to build those static assets on the host side and then include as a volume into the web container and share it with nginx container.
This statement seems incorrect. If the static assets are generated as part of the build process, then just mount a volume on top of that directory at runtime. Docker will take care of copying the underlying content into the volume, after which you can access it in your nginx container using --volumes-from.
For example, if I start with this Dockerfile for my web container:
FROM alpine
RUN apk add --update darkhttpd
COPY assets /assets
CMD ["darkhttpd", "/assets"]I now have a directory/assetsthat contains my static assets. If I
run this image as:docker run -v /assets --name web webThen/assetswill (a) be a volume and (b) contain the contents of
the/assetsdirectory.Now you can start an nginx container and share this data with it:docker run --volumes-from web nginxThe nginx container will have a/assetsdirectory that contains your
static assets.I've put together a small examplehere. | I have 2 containers:webandnginx. When I buildwebcontainer, static assets for frontend are generated within the container.Now, I want to share those assets betweenwebandnginxwithout using a volume on the host machine. Otherwise, I'll have to build those static assets on the host side and then include as a volume into thewebcontainer and share it withnginxcontainer. This is undesirable from my build system's standpoint.Is there a way to build static assets in thewebcontainer and then share them withnginx? | Docker: Is it possible to share data between 2 containers without a volume? |
Thanks for the discussion everyone. After looking into PHP Post/Upload process, it cleared up how things worked a little.Updating the SDK appeared to eliminate those initial memory limit issues.Of course I'm still looking into the issue of concurrency, but I feel like this is more of an apache/nginx/server config/spec optimisation issue than my language.Thanks everyone! | I've created an API in Laravel, that allows users to upload zip archives that contain images.Once an archive is uploaded it's sent to S3 and then picked up by another service to be processed.I'm finding that with larger archives PHP keeps hitting its memory limit. I know I could raise the limit but that feels like a slippery slope, especially as I imagine multiple users uploading large files.My current solution has been to completely forego my server and allow the client to upload directly to S3. But this feels very insecure and susceptible to spamming/DDOSing.I guess what I'm really hoping for is a discussion about how this could be handled elegantly.Is there a language more suitable for this sort of processing/concurrency? I could easily spawn the uploading process out to something else.Are my issues about S3 unfounded? I know ever request needs to be signed but the tokens generated are reusable, so they're exploitable.Resources online speak about NGINX as a better solution, as it has an upload module that write uploads directly to file, as apache appears to be trying to do a lot in memory (not 100% sure about this).I'm pretty unclear about the whole PHP upload process if I'm honest. Is a request stored directly in memory? i.e. Ten 50mb uploads would cause a memory limit exception against my 500mb of RAM | Handling Multiple Concurrent Large Uploads |
When nginx encounters ahttpsprotocol it thinks it is still usinghttpas the protocol and is not being forwarded with the rest of the headers, try adding:proxy_set_header X-Forwarded-Proto $scheme;in your location blocks to fix it. | This is a very similar problem toNginx configuration leads to endless redirect loopbut that discussion has not led me to an answer yet. I'm learning how to work with nginx and ssl and everything works perfectly on the regular http:// example.com side of things, but when routing to the https:// example.com/admin I instead see:This webpage has a redirect loopHere is my config file:map $uri $example_org_preferred_proto {
default "http";
~^/(images|css|javascript)/ "none";
~^/admin/ "https";
}
server {
listen 80;
root /usr/share/nginx/www/example.com/blog;
server_name example.com;
if ($example_org_preferred_proto = "https")
return 301 https://example.com$request_uri;
}
location ~ / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:2368;
}
}
server {
listen 443;
ssl on;
root /usr/share/nginx/www/example.com/blog;
server_name example.com;
ssl_certificate /usr/share/nginx/.crt;
ssl_certificate_key /usr/share/nginx/.key;
if ($example_org_preferred_proto = "http") {
return 301 http://example.com$request_uri;
}
location ~ / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:2368;
}
}Basically what I want to accomplish is having a site that normally runs unencrypted, but when I point to my admin page the browser redirects to https and encrypts my login.Note: the mapping idea came fromhttp://www.redant.com.au/ruby-on-rails-devops/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/and seems like a much better approach than using rewrite | nginx redirect loop with ssl |
Passenger 5 scores better on custom-picked benchmarks because it has a built-in caching layer ("turbocaching") that can avoid actually running your application code for identical requests in a short timeframe; it will not make your actual application code run any faster. This caching layer is only active in certain constrained situations, and is not likely to provide much benefit in the vast majority of actual cases. If you aren't careful, the caching layer may actually end up breaking your application - I demonstrated severalsecurityvulnerabilitiesdue to the caching layer to Phusion during the 5 beta phase (which they fixed, at the cost of the caching layer not being able to cache nearly as much). IMO, the Raptor/Passenger 5 benchmarks are deceptive marketing fluff, and the caching layer exists primarily to win Hello World benchmarks, and you should probably just ignore them.That said,the speed of your application server is almost certainly insignificant in the scope of your overall application performance. Passenger is a great platform because it's extremely user-friendly, well-documented, has an absolutely fantastic installer, and handles a lot of the annoying crap for you out of the box. You should use Passenger if you need the functionality it provides and don't want to screw around with a ton of config stuff. If it doesn't fit your use case, use something else that does.If every last microsecond is of prime concern to you, you should measureyourapplication's performance under various webservers and various workloads, and then pick the one that performs the best. Otherwise, use whatever you like the most and then switch once performance becomes an actual problem.Footnote: If you do use Passenger 5, be sure to read theTurbocaching security changesarticle to be sure you don't accidentally make your application vulnerable to user data theft (or otherwise introduce bugs) through the turbocaching layer. | I've been searching around for performance tests on the new passenger 5 as I readhereit became way faster.I tried to find other ressources confirming this but no luck. Has anyone tried to install it and see the difference? | passenger 5 performance compared to unicorn/thin/puma/etc |
Try adding something like the following directives to your config to prevent http flooding:
http {
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
server {
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
}
}
See http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html and http://nginx.org/en/docs/http/ngx_http_limit_req_module.html for more info.
There's also the following directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate
NOTE: http://www.botsvsbrowsers.com/details/504401/index.html says the above user agent is not a known bot.
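As a sketch of that limit_rate directive (the location and rate are made-up values):
location /downloads/ {
    limit_rate 50k;
}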
 | I have an HTTP flood on my server, not that many queries, but anyway. Queries in the log:
95.55.237.3 - - [06/Sep/2012:14:38:23 +0400] "GET / HTTP/1.0" 200 35551 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US)" "-" | "-"
93.78.44.25 - - [06/Sep/2012:14:38:23 +0400] "GET / HTTP/1.0" 200 36051 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US)" "-" | "-"
46.118.112.3 - - [06/Sep/2012:14:38:23 +0400] "GET / HTTP/1.0" 200 35551 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US)" "-" | "-"
I tried these filters in my nginx config:
server {
.....
set $add 1;
set $ban '';
###### Rule 1 ########
if ($http_referer = '-' ) {
set $ban $ban$add;
}
if ($request_uri = '/') {
set $ban $ban$add;
}
if ($http_user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US)') {
set $ban $ban$add;
}
if ($ban = 111) {
return 444;
}
######################
......
}
but still bot queries get 200 OK. Could somebody help? | Config of nginx to filter http flood
This will work instead, as 777 is a security risk:
sudo chmod -R 775 storage/ | I just tried to change my Laravel project to run on Nginx instead of Apache and can't get the right permissions. Don't really know what to try next. Currently here they are:I even gave the 777 permission to the storage folder, but nothing works. I have an admin panel on a blog which always keeps throwingErrorException in File.php line 190:
chmod(): Operation not permittedWould really appreciate any help.I am using Nginx, PHP 7.0, MySQL. The website is written using the Laravel framework. | Laravel proper permissions |
The error message is clear enough:Tue Jul 10 21:49:38 2012 - uwsgi socket 0 bound to UNIX address
/run/uwsgi/app/testapp1/socket fd 5
Tue Jul 10 21:49:38 2012 - bind():
No such file or directory [socket.c line 107]Do you see difference between:socket = /run/uwsgi/testapp1/socketand:uwsgi_pass unix:///var/run/uwsgi/app/testapp1/socket;?Hint:/var/run/uwsgi/app/testapp1/socket | I've tried to configure django on top on nginx and uwsgi and a 502 bad gateway error is encountered when trying to access localhostThis is my /etc/ngingx/sites-available/default fileserver {
 | I've tried to configure django on top of nginx and uwsgi, and a 502 bad gateway error is encountered when trying to access localhost. This is my /etc/nginx/sites-available/default file:
server {
    server_name testapp1.com www.testapp1.com;
access_log /var/log/nginx/testapp1.com.access.log;
location / {
uwsgi_pass unix:///var/run/uwsgi/app/testapp1/socket;
include uwsgi_params;
}
}
This is my testapp1.ini file in /etc/nginx/apps-available/:
[uwsgi]
thread=3
master=1
env = DJANGO_SETTINGS_MODULE=testapp1.settings
module = django.core.handlers.wsgi:WSGIHandler()
chdir = /home/paul/apps/testapp1
socket = /run/uwsgi/testapp1/socket
logto = /var/log/uwsgi/testapp1.log
This is the uwsgi.log file:
Tue Jul 10 21:49:38 2012 - *** Starting uWSGI 1.0.3-debian (32bit) on [Tue Jul 10 21:49:38 2012] ***
Tue Jul 10 21:49:38 2012 - compiled with version: 4.6.2 on 20 February 2012 10:06:16
Tue Jul 10 21:49:38 2012 - current working directory: /
Tue Jul 10 21:49:38 2012 - writing pidfile to /run/uwsgi/app/testapp1/pid
Tue Jul 10 21:49:38 2012 - detected binary path: /usr/bin/uwsgi-core
Tue Jul 10 21:49:38 2012 - setgid() to 33
Tue Jul 10 21:49:38 2012 - setuid() to 33
Tue Jul 10 21:49:38 2012 - your memory page size is 4096 bytes
Tue Jul 10 21:49:38 2012 - uwsgi socket 0 bound to UNIX address /run/uwsgi/app/testapp1/socket fd 5
Tue Jul 10 21:49:38 2012 - bind(): No such file or directory [socket.c line 107]
I didn't change the nginx.conf file. | 502 error with nginx + uwsgi + django
I fixed it by the following:
location /kibana4/ {
proxy_pass http://host:5601/;
proxy_redirect http://host:5601/ /kibana4/;
}
I had to use proxy_redirect to get the response back! Thanks. | I have kibana 4 and elasticsearch running on the same server. I need to access kibana through a domain, but when I try I keep getting "file not found". I just created location /kibana in nginx, and the proxy_pass is the ip:port of kibana. Has anyone had this? | How to configure Kibana 4 and elasticsearch behind nginx?
Solution
Instead of using apt-get install wkhtmltopdf, I downloaded the latest version from the releases page and everything works now. | I am trying to use wkhtmltopdf with Django, nginx, uwsgi.
it works perfectly on development env running using manage.py runserver
but when serving with nginx and uwsgi I get this error:
wkhtmltopdf exited with non-zero code 1. error:
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-isp'
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
Exception Location: /home/isp/Env/isp/lib/python3.6/site-package/pdfkit/pdfkit.py in to_pdf, line 159
The command
wkhtmltopdf http://www.google.com output.pdf
works perfectly on the terminal,
and I used this guide to deploy the Django app: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04#setting-up-the-uwsgi-application-server
I think it is related to virtualenv. I tried using this wrapper: https://github.com/JazzCore/python-pdfkit/wiki/Using-wkhtmltopdf-without-X-server but am still having the same error.
My code:
import pdfkit
pdfkit.from_file("./invoices/invoice"+str(booking_id)+"-"+str(invoice_id)+".html", "invoices/invoice_initial"+str(booking_id)+"-"+str(invoice_id)+".pdf") | wkhtmltopdf (pdfkit) Could not connect to any X display |
Two options to add to your config below ...
Option 1:
server {
...
server_name example.com;
...
location /siteA {
root /var/www/siteA;
...
}
location /siteB {
root /var/www/siteB;
...
}
...
}
Option 2:
server {
...
server_name example.com;
...
location /siteA {
return 301 http://siteA.example.com$request_uri;
}
location /siteB {
return 301 http://siteB.example.com$request_uri;
}
...
}
The first option simply serves from example.com/siteA in addition, while the second option redirects to siteA.example.com. | I have a few sites. Each site has its own "server" section with a server_name that looks like this
server {
...
server_name siteA.example.com;
root /var/www/siteA;
...
}
I can therefore bring up the site using the url http://siteA.example.com. However, I also need to bring up the site by using the url http://example.com/siteA. How can this be done? | Nginx Routing path to server
You should get the header value X-Forwarded-For: http://en.wikipedia.org/wiki/X-Forwarded-For
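A different approach on the nginx side is the realip module, which rewrites the remote address itself so that request.remote_ip just works; a minimal sketch, where the trusted proxy range is an assumption:
set_real_ip_from 10.0.0.0/8;      # address range of the proxy in front
real_ip_header  X-Forwarded-For;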
 | My server doesn't have a public IP address, so I don't know how to get the real client's IP address. This is my nginx's configuration:
location / {
    proxy_pass http://domain1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In my Rails app's controller, both request.ip and request.remote_ip return my server's gateway address. How can I get the real IP of the client? How do I get the X-Forwarded-For value from a Rails request? | Get the real IP address of client with Rails and Nginx?
I've experienced 502 issues with ghost behind nginx several times over a few years of running it. I'm not sure if the cause of mine today is the same as yours, but what I observed was that after a restart ghost had changed its port number to one different than what its nginx config was listening on.
I followed these directions from https://web.archive.org/web/20200807095031/https://www.danwalker.com/running-ghost-on-a-5-digital-ocean-vps/ which resolved it for me:
See which port ghost is running on:
sudo netstat -plotn
nginx/1.10.3 (Ubuntu)Does anyone know a probably cause of this error and how to resolve?I checked some posts, which suggested I should have turned Ghost off before the update. If this is true, is my ghost installation now corrupted?I went to my ghost directory in/var/www/ghostand tried to run:sudo service ghost startbut it returned:Failed to start ghost.service: Unit ghost.service not foundand trying to stop, returnsUnit ghost.service not loaded. Am I running the command from the correct location? | What is the cause of the "502 Bad Gateway" after Ghost 1.8.7 update |
Below config should work for youserver {
listen 80;
server_name example.com;
# allow large uploads of files - refer to nginx documentation
client_max_body_size 1G;
# optimize downloading files larger than 1G - refer to nginx doc
before adjusting
#proxy_max_temp_file_size 2G;
location = / {
rewrite ^ /index.html permanent;
}
location / {
proxy_pass http://structure.example:80;
}
location /cdn {
proxy_pass http://content.example:80;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
} | I'm currently working on a JS Project, that uses the url path. Now if I go on my website withexample.com/, the JavaScript won't work, because I actually needexample.com/index.html.I'm already using an reverse proxy to proxy pass to two different docker containers. So my idea was to pass the request toexample.com/index.htmlwhenexample.com/is called. But I can't figure out the regex stuff to achieve this goal.My old config:server {
listen 80;
server_name example.com;
# allow large uploads of files - refer to nginx documentation
client_max_body_size 1G;
# optimize downloading files larger than 1G - refer to nginx doc
before adjusting
#proxy_max_temp_file_size 2G;
location / {
proxy_pass http://structure.example:80;
}
location /cdn {
proxy_pass http://content.example:80;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}Stuff I tried:server {
listen 80;
server_name example.com;
# allow large uploads of files - refer to nginx documentation
client_max_body_size 1G;
# optimize downloading files larger than 1G - refer to nginx doc
before adjusting
#proxy_max_temp_file_size 2G;
location / {
proxy_pass http://structure.nocms:80/index.html;
}
location ~* \S+ {
proxy_pass http://structure.nocms:80;
}
location /cdn {
proxy_pass http://content.nocms:80;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
} | How to Proxy Pass from / to /index.html |
Okay, I made it. Here is how I made it for debian squeeze with an nginx server (all commands are executed as the root user):
First of all you need to install sendmail:
apt-get install sendmail
Next, you must configure this file, which was easier than I thought:
sendmailconfig
Okay, the next step I made was a php.ini configuration (I'm not a great admin, I'm a beginner, so I don't know if it is necessary or not). I set
sendmail_path = /usr/sbin/sendmail -t -i
Okay, from this moment, theoretically, you can send email, but for my case it led to a 504 http error gateway time-out. But as I found much later, the email had already arrived at the email box.
So, my test php file is:That's pretty clear.Next problem is 504 error. I go to the log filesnano /var/log/mail.logand here i find this error (that not the only one error, but that one is responsible for 504 error):sm-msp-queue[***]: My unqualified host name (myhostname) unknown; sleeping for retryThen, to find how I can solve this trouble:http://forums.fedoraforum.org/archive/index.php/t-85365.htmllast comment on that page.Or another words I made this:nano /etc/hostsand in that file I change the order of the hosts127.0.0.1 my_ip localhost myhostnamesave, done.
open your test php file, there is no any 504 error and emails is income to email you mention in mail function.
As I say, I'm a novice, and that may not work for you, but it work for me anyhow. This is not the end configuration, of course. Hope you find it helpful. | May be it's a dumb question, but I can't find the reason why php mail function doesn't work
I have a nginx server on debian squeeze, I moved to it recently. I tried simple mail execution but it return false.if(mail('[email protected]', 'test-subject', 'test-text-blablabla'))
echo 'ok';
else
echo 'bad';What can i do with it?Thanks.my mail section of php.ini:[mail function]
; For Win32 only.
; http://php.net/smtp
SMTP = localhost
; http://php.net/smtp-port
smtp_port = 25
; For Win32 only.
; http://php.net/sendmail-from
;sendmail_from =[email protected]; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
; http://php.net/sendmail-path
;sendmail_path =
; Force the addition of the specified parameters to be passed as extra parameters
; to the sendmail binary. These parameters will always replace the value of
; the 5th parameter to mail(), even in safe mode.
;mail.force_extra_parameters =
; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename
mail.add_x_header = On
; The path to a log file that will log all mail() calls. Log entries include
; the full path of the script, line number, To address and headers.
;mail.log = | mail() doesn't work on new server |
Turned out it wasn't being set in either Nginx or Passenger. It's in benchmarking.rb in /gems/actionpack-2.3.2/lib/action_controller/, line 90. | EDIT -- the solution I posted below probably applies to any server (Nginx/Apache/anything else), because this header is set in Rails itself.
Does anyone know where the "X-Runtime" header can be removed in Nginx & Passenger? I've grepped the source files and haven't found anything yet, but I'd like to get rid of it for security since it's a telltale sign of Rails. | How to remove "X-Runtime" header from Nginx/Passenger?
Edit your config file like this and it should work:
gzip on;
gzip_comp_level 6;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
Note the added types, because sometimes those types can be detected in different ways by different systems.
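These directives can sit at http level so every server block inherits them; a minimal sketch of the placement:
http {
    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
    server {
        ...
    }
}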
 | I have no idea where to place my gzip compression lines within my http block, shown here.
http {
    default_type application/octet-stream;
include /etc/nginx/mime.types;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
keepalive_timeout 65;
server {
listen 8080;
root /usr/share/nginx;
location / {
root /usr/share/nginx/html;
try_files $uri /index.html;
autoindex off;
}
location ~ ^/(images|fonts|videos)/ {
root /usr/share/nginx/assets;
autoindex off;
expires 7d;
proxy_redirect off;
proxy_max_temp_file_size 0;
}
location ~ \.(mp3|mp4) {
}
}
include /etc/nginx/conf.d/*.conf;
}
The lines I want to use for gzip compression are here, and I don't know whether to put these in the server block, before the server block, or in the location block:
# Compression
gzip on;
gzip_proxied any;
gzip_types text/plain text/xml text/css application/x-javascript;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_static on;
I have gzip_static set to "on" because I'm using gulp-gzip to compress various css and js files. | nginx gzip compression not working
The problem is that you need to configure gunicorn's logging, because it will (by default) not display any custom headers.
From the documentation, we find out that the default access log format is controlled by access_log_format and is set to the following:
"%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
where:
h is the remote address
l is - (not used)
u is - (not used, reserved)
t is the time stamp
r is the status line
s is the status of the request
b is the length of the response
f is the referrer
a is the user agent
You can also customize it with the following extra variables that are not used by default:
T - request time (in seconds)
D - request time (in microseconds)
p - the process id
{Header}i - request header (custom)
{Response}o - response header (custom)
To gunicorn, all requests are coming from nginx, so it will display that as the remote IP. To get it to log any custom headers (what you are sending from nginx) you'll need to adjust this parameter and add the appropriate variables; in your case you would set it to the following:
%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"
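The pairing only works if nginx actually sends the header the format string references; a sketch (the question's config already has the nginx half):
# nginx: pass the client address upstream
proxy_set_header X-Real-IP $remote_addr;
# gunicorn: log it back out via "%({X-Real-IP}i)s" in access_log_format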
 | I run a django app via gunicorn, supervisor and nginx as reverse proxy and struggle to make my gunicorn access log show the actual ip instead of 127.0.0.1.
Log entries look like this at the moment:
127.0.0.1 - - [09/Sep/2014:15:46:52] "GET /admin/ HTTP/1.0" ...
supervisord.conf
[program:gunicorn]
command=/opt/middleware/bin/gunicorn --chdir /opt/middleware -c /opt/middleware/gunicorn_conf.py middleware.wsgi:application
stdout_logfile=/var/log/middleware/gunicorn.log
gunicorn_conf.py
#!python
from os import environ
from gevent import monkey
import multiprocessing
monkey.patch_all()
bind = "0.0.0.0:9000"
x_forwarded_for_header = "X-Real-IP"
policy_server = False
worker_class = "socketio.sgunicorn.GeventSocketIOWorker"
accesslog = '-'
my nginx module conf
server {
listen 80;
root /opt/middleware;
index index.html index.htm;
client_max_body_size 200M;
server_name _;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
real_ip_header X-Real-IP;
}
}
I tried all sorts of combinations in the location {} block, but can't see that it makes any difference. Any hint appreciated. | Gunicorn doesn't log real ip from nginx
As @Caleb Irwin said, you can run node ./build/index.js
The NGINX configuration will look like this:
upstream sveltekit {
server 127.0.0.1:3000;
keepalive 8;
}
server {
# listen ...
# servername ...
# root ... (folder with an index.html in case of sveltekit being crashed)
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://sveltekit;
proxy_redirect off;
error_page 502 = @static;
}
location @static {
try_files $uri /index.html =502;
}
}
(I'm not an NGINX pro and welcome feedback to improve on it.)
You may also want to make the SvelteKit app listen only to localhost by adding the environment variable HOST=127.0.0.1 before running node build/index.js. This will prevent port 3000 from being reached from the outside.
You can also look into using pm2 to manage the sveltekit process, including running it on each of your cores in cluster mode, with automatic restart in case of a server crash / reboot. | I have a svelte kit project. I want to deploy the app on an Nginx web server after an npm run build. At the moment I have a node container and I start it using npm run preview. It's working fine, but I want to deploy in a production environment using build. How could I do that? | How to deploy a svelte kit app after build using nginx as web server
@Wukerplank's comment put me on the right track. I checked the output when running passenger-install-nginx-module again, and it says:
so in order to install Nginx with Passenger support, it must be recompiled.
Do you want this installer to download, compile and install Nginx for you?
1. Yes: download, compile and install Nginx for me. (recommended)
The easiest way to get started. A stock Nginx 1.4.1 with Passenger
support, but with no other additional third party modules, will be
installed for you to a directory of your choice.
2. No: I want to customize my Nginx installation. (for advanced users)
Choose this if you want to compile Nginx with more third party modules
besides Passenger, or if you need to pass additional options to Nginx's
'configure' script. This installer will 1) ask you for the location of
the Nginx source code, 2) run the 'configure' script according to your
instructions, and 3) run 'make install'.
Whichever you choose, if you already have an existing Nginx configuration file,
then it will be preserved.
The important part being that Nginx has to be recompiled to work with Passenger, and that existing Nginx configurations are preserved. So the right way to upgrade Passenger is to:
- install the new Passenger gem
- execute passenger-install-nginx-module with exactly the same parameters as the first time (so the same Nginx version and modules are compiled, it's installed in the same directory, etc.)
- before installing, check that it says "Welcome to the Phusion Passenger Nginx module installer, v4.0.2." with the new version on top (4.0.2 in my case)
- after Nginx is installed, change the passenger_root in your existing Nginx conf (path/to/nginx/conf/nginx.conf) to point to the new gem version (just replace the old version number with the new)
- Restart Nginx
- Profit | Is it possible to upgrade Phusion Passenger to a newer version when it is already running (with Nginx in my case)?
I installed Passenger 4.0.0.rc6 using passenger-install-nginx-module. My Nginx config now contains
passenger_root /usr/local/lib/ruby/gems/2.0.0/gems/passenger-4.0.rc6;
passenger_ruby /usr/local/bin/ruby;Now I want to upgrade to Passenger 4.0.2. I can install the gem, but when I runpassenger-install-nginx-moduleagain, it tries to recompile and reinstall Nginx. (I thought it would be so clever to notice there is already a installed Nginx in the location I specify using--prefix)I tried to manually changepassenger_rootto the new Passenger gem location but the I get the following error in the Nginx error log:2013/05/12 12:30:13 [alert] 14298#0: Unable to start the Phusion Passenger watchdog because its executable (/usr/local/lib/ruby/gems/2.0.0/gems/passenger-4.0.2/agents/PassengerWatchdog) does not exist. This probably means that your Phusion Passenger installation is broken or incomplete, or that your 'passenger_root' directive is set to the wrong value. Please reinstall Phusion Passenger or fix your 'passenger_root' directive, whichever is applicable. (-1: Unknown error)Apparently thePassengerWatchdogis built when runningpassenger-install-nginx-module. I don't want to copy overPassengerWatchdogfrom the old gem because something might have changed.So... what is the proper way to upgrade Passenger without recompiling and reinstalling Nginx (or Apache)? | Upgrade Phusion Passenger without reinstalling Nginx |
A more specific definition is in their blog:
$request_time - Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
$upstream_connect_time - Time spent establishing a connection with an upstream server
$upstream_header_time - Time between establishing a connection to an upstream server and receiving the first byte of the response header
$upstream_response_time - Time between establishing a connection to an upstream server and receiving the last byte of the response body
So $upstream_header_time is included in $upstream_response_time.
Time spent connecting to upstream is not included in either of them.
Time spent sending the response to the client is not included in either of them. | Does anyone know when, specifically, the clock for $upstream_response_time begins and ends?
The documentation seems a bit vague:
keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.
There is also an $upstream_header_time value, which adds more confusion.
I assume $upstream_connect_time stops once the connection is established, but before it is accepted upstream? After this, what does $upstream_response_time include?
- Time spent waiting for upstream to accept?
- Time spent sending the request?
- Time spent sending the response header? | When does nginx $upstream_response_time start/stop specifically
You need sudo to edit that file, because it's owned by the root user. Use
sudo nano /etc/nginx/nginx.conf
or
sudo vim /etc/nginx/nginx.conf
whichever editor you prefer. | So I am trying to follow the tutorial here: https://gorails.com/deploy/ubuntu/14.04 to deploy a Rails app. When I tried to edit the nginx.conf at /etc/nginx/nginx.conf, it tells me I have read-only permission, even though I followed the steps (with setting the permissions) previously. How do I fix this? | nginx.conf (permission to write denied). How do I fix this?
The domain name part of the URL is not tested by the location directive. You will need to use a named capture in the server_name directive. See this document for details.
For example:
server {
    server_name ~^(?<name>\w+)\.example\.com$;
location /admin {
return 301 $scheme://$name.myurl.com/;
}
}
 | I am wanting to do a redirect based on what subdomain the user is entering. For example:
sub.example.com/admin -> sub.myurl.com
Ideally I want to pass the subdomain as a parameter to my redirect URL. I was looking at something along the lines of this:
location ~ (sub).(somewhere).(com)/(some)(thing)/(something)(else) {
set $var1 = $1; # = sub in above example
set $var2 = $2; # = somewhere in above example
set $var3 = $3; # = com in above example
set $var4 = $4; # = some in above example
set $var5 = $5; # = thing in above example
set $var6 = $6; # = something in above example
set $var7 = $7; # = else in above example
rewrite ^ $1/$2 last; # would be sub/somewhere
}
based on this post here: Manipulate or split string. (I think the syntax of the variable set is wrong in this example, but you get the gist.) | How to get subdomain of URL in NGINX
Put rewrite into one location and use other locations for assets/dynamic urls/etc.
server {
listen 80 default;
server_name my.domain.com;
root /path/to/app/root;
location / {
rewrite ^ /index.html break;
}
location /assets/ {
# Do nothing. nginx will serve files as usual.
}
}
 | I use Nginx to serve a SPA (Single Page Application); in order to support the HTML5 History API I have to rewrite all deeper routes back to /index.html, so I follow this article and it works! This is what I put in nginx.conf now:
server {
listen 80 default;
server_name my.domain.com;
root /path/to/app/root;
rewrite ^(.+)$ /index.html last;
}
However there's one problem: I have an /assets directory under the root that contains all the css, js, images, and fonts stuff. I don't want to rewrite these urls, I just want to ignore these assets. How am I supposed to do that? | Nginx: how to let rewrite rules ignore files or folders
You can use the nginx referer module: http://nginx.org/en/docs/http/ngx_http_referer_module.html
Something like this:server {
listen 80;
server_name website.com;
root /var/www/website.com/html ;
location /assets/ {
valid_referers website.com/ website.com/index.html website.com/some_other_good_page.html ;
if ($invalid_referer) {
deny all;
}
}
}This config guardassetsdirectory. But remember, that not guaranteed and worked only for browser - any body can emulate valid request with curl or telnet. For true safety you need use dynamic generated pages with dynamic generated links.You do not need to create the variable $invalid_referer as this is set by the nginx module. | I've been searching for a while now but didn't manage to find anything that fits my needs. I don't need hotlinking protection, as much as I'd like to prevent people from directly accessing my files. Let's say:Mywebsite.comrequestswebsite.com/assets/custom.js, that'd work,but I'd like visitors which directly visit this file to get a403 status codeor something. I really have no idea if it's possible, and I don't have any logical steps in mind..Regards ! | Nginx: Prevent direct access to static files |
Try changingrewrite ^/beta/(.+)$ /beta/index.php?url=$1 break;torewrite ^/beta/(.+)$ /beta/index.php?url=$1 last; break;Which should get nginx to re-read the URI and process it accordingly. | Closed.This question isoff-topic. It is not currently accepting answers.Want to improve this question?Update the questionso it'son-topicfor Stack Overflow.Closed11 years ago.Improve this questionI have an Nginx HTTP server with PHP-FPM set up and almost everything works fine. I want to be able to go topath/to/fileand it give meindex.php?url=path/to/file, which it does. However, it downloads the actual PHP, it won't execute it in the browser. I'm not sure what is causing this.Nginx configuration:server {
listen 80;
server_name sandbox.domain.tld;
access_log /path/to/domain/log/sandbox.access.log;
error_log /path/to/domain/log/sandbox.error.log;
location / {
root /path/to/sandbox;
index index.php;
if (!-e $request_filename) {
rewrite ^/beta/(.+)$ /beta/index.php?url=$1 break;
}
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include /usr/local/nginx/conf/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /path/to/sandbox$fastcgi_script_name;
} | PHP-FPM and Nginx rewrite causing download [closed] |
Your configuration snippet is not being doubled; actually what is happening is that proxy_set_header X-Forwarded-For $remote_addr; is already set by default when you deploy NGINX Controller in your cluster.
In order to disable this default setting, you need to use a custom template. By doing this, you can have an nginx.conf free of proxy_set_header X-Forwarded-For $remote_addr; so you can set it as you need using the annotation you have described. | I'm wondering how to append the Nginx IP to X-Forwarded-For. I added this snippet in an Ingress annotation:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ing
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-Forwarded-For "$remote_addr, $server_addr";But it seems to double set in nginx.conf.proxy_set_header X-Forwarded-For $remote_addr;
...
proxy_set_header X-Forwarded-For "$remote_addr, $server_addr";So my backend server will get twoX-Forwarded-ForAnyone knows “How to disable the proxy_set_header part generated by Nginx Ingress Controller”?proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Scheme $pass_access_scheme; | How to append Nginx IP to X-Forwarded-For in Kubernetes Nginx Ingress Controller |
This is a known bug; they only support Python when it comes to the web console. If your application is in nodejs, you would need to set these properties from the cli. You can set up the values from the cli this way:
aws elasticbeanstalk update-environment --environment-id your_environment_id --option-settings 'Namespace=aws:elasticbeanstalk:container:nodejs:staticfiles,OptionName=/assets,Value=/static/assets'
or by editing the config file from eb config. | I am new to Elastic Beanstalk, trying to serve a Node.js Express app and utilize serving our static files separately with Nginx. None of the tutorials I've come across are explicit in how to define the virtual path. I'm attempting to do this through the AWS console in the browser, trying to add a virtual path/directory setup for the static files. In the console I'm at Elastic Beanstalk > myapp > configuration > Static Files. But no matter what I add here I get this error message:
I've also tried adding the full directory path (/var/app/current/dist/public/images/). Is there another .ebextensions/*.conf file I need to add? I don't have a lot of experience with Nginx, so if the fix is a .conf file I wouldn't know what it is. | Elastic Beanstalk Nginx Serve Static Files
I checked with my DigitalOcean technical support and found out the reason: I restarted Nginx, but hadn't restarted php-fpm, which is the PHP process for Nginx.
After I tried service php7.0-fpm restart, phpMyAdmin is showing (Max: 150MiB) for the importing limit now. And the importing works! | I was using phpmyadmin (Version information: 4.0.10deb1) on php 7.0.7 & nginx 1.4.6. When I was trying to import a csv file to one of the tables, I saw the max size allowed indicated on the phpmyadmin screen is 2,048KiB. Then I changed settings in php.ini (both /etc/php/7.0/fpm/php.ini & /etc/php/7.0/cli/php.ini):
upload_max_filesize = 150M
post_max_size = 150M
memory_limit = -1
max_execution_time = 5000
max_input_time = 5000
changed setting in /etc/nginx/nginx.conf:
client_max_body_size 150M;
and restarted nginx:
service nginx restart
but nothing changed, and the import would fail. How could I fix this issue? Thanks. | phpMyAdmin import file size 2M limit
As of v0.3.10, Dokku ships with a domains plugin. This lets you easily add domains to your app. By default your app is located at myapp.mydomain.com. If you want your app to be accessible via the root domain, then just add the root domain as one of your app's domains:
dokku domains:add myapp mydomain.com
That was really straightforward; the docs need to be updated to reflect this, really.
For your second question, your app is not visible to the outside world. Your app is running inside its own docker container, with its own local IP address. If you still want to find out what port your app has exposed, you can run docker ps on your server. | How do I point a dokku app that is set up on the dokku server at the root domain of the server itself? Suppose my domain is apps.com and the app to be implemented is called botapp. If I use virtualhost naming and do git remote add dokku [email protected]:botapp, it will get pointed at botapp.apps.com. What do I do to get botapp pointed at apps.com itself (the root domain)?
Also, how do I know what port a dokku app is rooting, in spite of using subdomains (virtualhost naming)? | How to point a Dokku app at the root domain of the dokku server
The problem was with
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
It should have the root path set:
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
root /var/directory/...
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
} | I'm trying to set up nginx to cache static files, such as images, CSS and JS. This is my conf:
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /var/www/site;
index index.html index.htm;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
When I try to use this I get 404 on all files; when I remove the location ~* block I can retrieve all files perfectly. I have my files in /var/www/site/images/*/*.jpg. What am I missing here? | Nginx Cache-Control header not working (getting 404 on logs)
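A slightly tidier version of the fix from the answer above: declare root once at server level so every location block (including the static-file one) inherits it. A sketch reusing the question's own paths:
server {
    listen 80;
    server_name localhost;
    root /var/www/site;   # inherited by all locations below

    location / {
        index index.html index.htm;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
}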
Try to use this, and please don't forget to replace the root path!
location /main/ {
root /full/path/from/root/main/;
try_files $uri $uri/ /index.php?$args;
}
I've set up WordPress on my host in the folder /main and got it working with the following settings:
location /main {
index index.php;
try_files $uri $uri/ /main/index.php?q=$uri;
}
root /path/to/webroot; | I have a WordPress site running nginx under a sub-directory.
How can I write rewrite rules in a sub-directory?
Or can anyone please convert this Apache rewrite rule? I searched everywhere about nginx rewrite rules but nothing worked!
RewriteEngine On
RewriteBase /main/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /main/index.php [L]
Any help appreciated, thanks. | nginx rewrite rule under a subdirectory
Here it is:
location = / {
# would serve only the root
# ...
}
location /api/ {
# would serve everything after the /api/
# ...
}
You need the special '=' modifier for the root location to work as expected. From the docs:
Using the “=” modifier it is possible to define an exact match of URI
and location. If an exact match is found, the search terminates. For
example, if a “/” request happens frequently, defining “location = /”
will speed up the processing of these requests, as search terminates
right after the first comparison. | I have a server configured as a reverse proxy to my server. I want to reject all requests except to two locations, one for the root and another for the api root, so the server should only allow requests to the given paths:
example.com/ (only the root)
example.com/api/ (every url after the api root)
The expected behaviour is that the server should reject all the below possibilities:
example.com/location
example.com/location/sublocation
example.com/dynamic-location
My current nginx configuration:
server {
# server configurations
location / {
# reverse proxy configurations
}
}
How do I set up this configuration? | Nginx allow only root and api locations
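The answer above shows the two allowed locations but not the rejection part the question asks for. A hedged sketch of a complete server block: the 403 catch-all is my addition, and the backend address is illustrative:
server {
    # exact match: only the root URL itself
    location = / {
        proxy_pass http://127.0.0.1:8080;
    }
    # prefix match: everything under /api/
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }
    # any other URI falls through to this prefix location and is rejected
    location / {
        return 403;
    }
}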
We faced a similar issue with a client who needed our IP address to be whitelisted. We solved the issue by:
1. Spinning up a Compute Engine instance with a static IP address. This is the IP address we gave to our client.
2. Installing Squid on the Compute Engine instance (https://help.ubuntu.com/lts/serverguide/squid.html).
3. Redirecting all calls from App Engine through the proxy server. You didn't list what language you are using, but for PHP that meant adding the following two lines to our cURL operations:
curl_setopt($ch, CURLOPT_PROXY, "http://" . $_SERVER['SQUID_PROXY_HOST'] . ":" . $_SERVER['SQUID_PROXY_PORT'] );
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $_SERVER['SQUID_PROXY_USER'] . ":" . $_SERVER['SQUID_PROXY_PWD']);
One thing to note: depending on the number of calls you are making, a micro instance might not work for you. We initially set up our proxy server on a micro box but had to restart it every few days. We ended up switching to a standard box and have not run into any problems since. | I have a Java web app on Google App Engine which makes requests to an external API. The API recently started requiring the whitelisting of IP addresses in order to access its services. Because GAE does not offer static IPs, I understand that one solution is to set up a GCE instance (with a static IP) and use it as a proxy for external requests made by the GAE app. I have set up an f1-micro instance with Debian GNU/Linux 9, and have created a static external IP address as per the documentation. How do I install nginx and set up GAE to route requests to the GCE proxy? | Using Google Compute Engine as a proxy for a Google App Engine web app
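A hedged sketch of the Squid side of the setup described above. The auth helper path and credentials file vary by distribution and Squid version, so treat every value here as an assumption to adapt:
# /etc/squid/squid.conf (minimal authenticated forward proxy)
http_port 3128
# basic_ncsa_auth lives elsewhere on some distros, e.g. /usr/lib/squid3/basic_ncsa_auth
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all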
Per the nginx documentation for try_files:
Checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context.
So nginx finds the PHP file and processes it in the context of location /, and therefore just serves it as a static file. Only the last parameter is different: it is not checked, but nginx makes an internal redirect (if it's a URI) or returns an error code (if it's =code). So you need to remove =404 from try_files to get the internal redirect. And add try_files to location ~ \.php to make sure that the file exists:
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/path/to/php.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_intercept_errors on;
}
location / {
index index.php;
try_files $uri $uri/ $uri.php;
} | I have a very simple PHP site:
.
├── about.php
├── index.php
├── project
│ ├── project_one.php
│ └── project_two.php
└── projects.php
And the following nginx config (only relevant parts shown):
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/path/to/php.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_intercept_errors on;
}
location / {
index index.php;
try_files $uri $uri/ $uri.php =404;
}
Hitting the / works as expected. Hitting any of the http://my.site.com/{projects | about | project/*} URLs should use try_files to hit the $uri.php file and pass it to PHP. But instead, the browser just downloads the PHP file itself. I can get it to work by adding individual location directives for the above locations, like so:
location /projects {
try_files $uri $uri/ /$uri.php;
}
location /about {
try_files $uri $uri/ /$uri.php;
}
location /project {
try_files $uri $uri/ $uri.php;
}
But this is clearly not the way to do this. What am I doing wrong??? | NGINX try_files does not pass to PHP
If your computer is on the same network as your server, behind a router with NAT, then you might see your private IP. | Possible duplicate: suddenly $_SERVER['REMOTE_ADDR'] is started returning 10.10.10.10 php. I must have missed some fundamental thing here, but when I navigate to an IP-displaying site such as http://www.whatsmyip.org/ they show a certain IP. But when I echo out $_SERVER["REMOTE_ADDR"] on a page on my site it shows a different IP. Why is that? And how can I, through PHP, fetch the same IP that the whatsmyip.org site shows? | Why does $_SERVER["REMOTE_ADDR"] show a different IP than my external IP? [duplicate]
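A hedged PHP sketch expanding on the answer above: when the request passes through a NAT gateway or reverse proxy, the original client address often survives only in the X-Forwarded-For header, and whether that header is trustworthy depends entirely on your proxy setup:
<?php
// Prefer the first hop in X-Forwarded-For when present; fall back to REMOTE_ADDR.
// Only trust this header if a proxy you control is known to set it.
function client_ip(): string {
    if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
        $hops = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
        return trim($hops[0]);
    }
    return $_SERVER['REMOTE_ADDR'] ?? '';
}
echo client_ip();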
Remove try_files $uri $uri/ =404; as it's testing whether a certain URL exists on the file system and returning 404 if not. But /Home/Index is a route, which does not map to an existing file but to a controller action, hence you get the 404 error. | I've created an ASP.NET Core MVC application and deployed it to a Linux server. When I go to sitename.com the browser shows the Home/Index page without any problem. But when I try to go to sitename.com/Home/Index or another controller like sitename.com/Admin/Login, nginx throws a 404 Not Found error. What could be the problem? Here is my Startup.cs Configure method:
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.AddConsole(Configuration.GetSection("Logging"));
loggerFactory.AddDebug();
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseBrowserLink();
}
else
{
app.UseExceptionHandler("/Home/Error");
}
app.UseStaticFiles();
app.UseSession();
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
Here is my website config from the sites-available folder:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /var/www/sitename.com;
index index.html index.htm;
server_name sitename.com www.sitename.com;
location / {
try_files $uri $uri/ =404;
proxy_pass http://127.0.0.1:5000;
}
and nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
mail {
} | asp.net core on linux with nginx routing doesn't work |
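The corrected location block implied by the answer above: the same proxy_pass, just without the try_files line that was intercepting MVC routes. The header lines are common reverse-proxy boilerplate and an assumption here:
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}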
To answer your question: there is a setting you can change in the nginx.conf file containing your server's configuration. Set the following to something that fits your situation:
large_client_header_buffers 4 16k;
Find the documentation for it here. I would suggest using a POST request in case your ~3000-character requests get bigger and your nginx configuration reaches its limit. | I have a question, but I accept other suggestions that bypass this feature. Basically I'm sending big lines of text (~3000 characters) to my server in a GET request, and the server sends them to Google Translate as params in a URL. The problem: Nginx throws me a 502 Bad Gateway error when the URL is > 1900 characters. How can I increase the URL limit of my nginx? Alternative solution: sending a POST request with the 3000 characters as a string in a JSON body? | Nginx url limit 502 gateway
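Where the directive from the answer above sits in practice: it is valid in the http and server contexts, so a minimal sketch looks like this:
http {
    # four buffers of 16k each for long request lines / large headers
    large_client_header_buffers 4 16k;

    server {
        listen 80;
        # ...
    }
}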
I have my Django app running with gunicorn. I followed the instructions here. I made sure to include the proper location blocks:
location /static {
alias /home/user/webapp;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Making sure to include any template location alias as well. I set the .well-known location block like this:
location /.well-known {
alias /home/user/webapp/.well-known;
}
pointing it directly to the root of the webapp instead of using the allow all. I did have to make sure that I only used the non-SSL block until the certificate was generated; then I used a different nginx config based on h5bp's nginx configs. Note: make sure you have proper A records for your domain pointing to www if you are going to use h5bp to redirect to www. | I am trying to set up my nginx and django to be able to renew certificates.
However, something goes wrong with my webroot plugin in nginx:
location ~ /.well-known {
allow all;
}
But when I run the renewal command:
./letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --webroot-path=/home/sult/huppels -d huppels.nl -d www.huppels.nl
it seems that the cert renewal wants to retrieve a file from my server, because I get the following error. The following errors were reported by the server:
Failed authorization procedure. www.huppels.nl (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.huppels.nl/.well-known/acme-challenge/some_long_hash [51.254.101.239]: 400
How do I make this work with nginx or django? | letsencrypt django webroot
There are several requirements:
1. Set up the Host header in nginx with the required domain, or proxy if applicable.
2. Use the subdomain middleware before other middlewares that handle endpoints.
Working example, nginx configuration:
server {
listen 80;
server_name bee.local;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
server {
listen 80;
server_name api.bee.local;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
nginx configuration with hardcoded Host header values (I believe you did not set up the Host header correctly; please try the following configuration):
server {
listen 80;
server_name bee.local;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
# proxy_set_header Host $host;
proxy_set_header Host bee.local;
}
}
server {
listen 80;
server_name api.bee.local;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
# proxy_set_header Host $host;
proxy_set_header Host api.bee.local;
}
}
express app:
var subdomain = require('express-subdomain');
var express = require('express');
var app = express();
var router = express.Router();
router.get('/', function(req, res) {
res.send('Welcome to our API!');
});
router.get('/users', function(req, res) {
res.json([
{ name: "Brian" }
]);
});
app.use(subdomain('api', router));
app.get('/', function(req, res) {
res.send('Homepage');
});
app.listen(3000); | I wonder how I can handle subdomains in my project, which is based on Express.js. Here's my nginx configuration:
server {
listen 80;
server_name bee.local;
access_log /var/log/nginx/bee.local.access.log;
error_log /var/log/nginx/bee.local.error.log;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header X-Forwarded-For $remote_addr;
}
}
server {
listen 80;
server_name api.bee.local;
access_log /var/log/nginx/bee.local.access.log;
error_log /var/log/nginx/bee.local.error.log;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header X-Forwarded-For $remote_addr;
}
}
and here's my router with subdomain support:
router.get('/v1/', function(req, res, next) {
res.status(200).json({ title: "title" });
});
app.use(subdomain('api', router));
The problem is that it's rendering the index route, and I have definitely set up the hosts file. I've been searching for 3 hrs, can you help me? :) | Handle Express Subdomain with nginx
Steps are as follows:
1. SSH into vagrant -> vagrant ssh
2. Stop Nginx -> sudo service nginx stop
3. Remove it -> sudo apt-get purge nginx
4. Update your repos -> sudo apt-get update
5. Install apache -> sudo apt-get install apache2
6. Restart it -> sudo service apache2 restart
You are now on an Apache server; update the apache conf file as your needs require. | Obviously, I've got a Laravel project that really needs the .htaccess rules, and Nginx doesn't seem to be the best solution for me. 1- My question is: why didn't Laravel provide Homestead with Apache?
After some research I found an online tool for converting the rules, but the output didn't work (it was too short), whereas Apache is more widely known and usable, plus it's easier to define rules for security and pretty URLs, etc. (at least for me). 2- Please give me answers explaining why they chose Nginx! More importantly, I need to know what seniors and experts will use (Nginx, Apache). 3- Do you advise me to install Apache on Homestead? | Why laravel homestead is not running Apache
resolver_timeout sets how long NGINX will wait for an answer from the resolver (DNS). The valid flag means how long NGINX will consider an answer from the resolver as valid and will not ask the resolver again for that period. In your example, let's say NGINX wants to resolve example.com. It will ask the resolver (172.17.42.1), and if the resolver doesn't answer within 60 seconds NGINX will fail the request (and probably show you a 500 error). If the resolver answers successfully, NGINX will remember that answer for 10 minutes. If NGINX needs to resolve example.com within that time, it will use the previous answer instead of asking the resolver again. | I have this nginx configuration entry:
http {
resolver 172.17.42.1 valid=600s;
resolver_timeout 60s;
In this configuration there are two different timeouts.
The nginx documentation does not make it clear to me what the difference is between valid and resolver_timeout. Can someone explain in detail? | what is the difference in resolver valid time and resolver_timeout in nginx
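One practical note on the answer above: for a plain proxy_pass with a literal hostname, nginx resolves the name once at startup and the resolver directive is not consulted again. The resolver/valid machinery kicks in when the upstream is held in a variable. A hedged sketch using the question's resolver address and an illustrative backend name:
resolver 172.17.42.1 valid=600s;
resolver_timeout 60s;

server {
    location / {
        # Using a variable forces runtime DNS resolution via the resolver,
        # re-resolved after the 600s validity window expires.
        set $backend "http://backend.example.com";
        proxy_pass $backend;
    }
}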
This can be done using the echo_location directive (or similar; browse the directives) of the third-party Nginx Echo Module. You will need to compile Nginx with this module, or use Openresty, which is Nginx bundled with useful extras such as this. Outline code:
server {
[...]
location /main {
echo_location /sub;
proxy_pass http://main.server:PORT;
}
location /sub {
internal;
proxy_pass http://alt.server:PORT;
}
}
There is also the now undocumented post_action directive, which does not require a third-party module:
server {
[...]
location /main {
proxy_pass http://main.server:PORT;
post_action @sub;
}
location @sub {
proxy_pass http://alt.server:PORT;
}
}
This will fire a subrequest after the main request is completed. Here is an old answer where I recommended the use of this: NGinx - Count requests for a particular URL pattern. However, this directive has been removed from the Nginx documentation, and further usage is now a case of caveat emptor. Four years on from 2012 when I gave that answer, I wouldn't recommend using this. | Let's say we have the following quite minimal nginx.conf:
server {
listen 443 default ssl;
location /api/v1 {
proxy_pass http://127.0.0.1:8080;
}
}
Now, I'm trying to use nginx itself as an event source. Another component in my system should be aware of any HTTP requests coming in, while ideally not blocking the traffic on this first proxy_pass directive. Is there any possibility to have a second proxy_pass which "just" forwards the HTTP request to another component as well, while completely ignoring the result of that forwarded request? Edit: To state the requirement more clearly: what I want to achieve is that the same HTTP requests are sent to two different backend servers, with only one of them really handling the connection in terms of nginx. The other should just get an "event ping" to notify the other service that there has been a request. | Configure nginx proxy_pass with two parallel locations
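For completeness: nginx 1.13.4 and later ship a built-in mirror directive that does what the question asks without third-party modules or post_action. A hedged sketch reusing the answer's placeholder upstreams:
location /main {
    mirror /sub;                          # fire-and-forget copy of each request
    proxy_pass http://main.server:PORT;   # the response the client actually sees
}
location = /sub {
    internal;
    # $request_uri restores the original URI for the mirrored subrequest
    proxy_pass http://alt.server:PORT$request_uri;
}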
Technically you could do that in Tomcat. However, to start the application on port 80 or 443 you would have to run it with root permissions, so I'd recommend configuring an Apache HTTP or Nginx server as a reverse proxy (you can find many tutorials on that topic). | I have a Spring Boot application. Usually I run my Spring applications on PaaS instances, and configuring a domain name from there is easy enough; however, I am running this on a Virtual Private Server, and I cannot, for the life of me, figure out how to run my Spring Boot app so it's accessible with a domain name. I have already changed my DNS settings so they point to my Virtual Private Server; this VPS also runs some other Apache-based static websites, so I'm pretty confident my DNS settings are correct. My Spring Boot application is running using spring-boot-starter-tomcat; the application deploys fine, and I can grab my .war file and deploy it using java -jar myApplication.jar on the server. The application is also accessible remotely by entering my.server.ip:8080 in a browser. However, I've been googling a lot and cannot figure out how to configure Spring Boot so that it'll use my domain name, so that I can access the website in a standard way: www.mywebsite.com, or even better yet also add an alias so both mywebsite.com and www.mywebsite.com are valid. Can anyone point me in the right direction? I know this can be done in Tomcat, but I have no idea how to configure it. Since this is a Spring Boot application I do not have .xml files; my Spring Boot configuration is in an application-prod.yml file, and the only .xml file I use is the pom.xml itself. | Spring Boot configure a Domain/Host to access in a www.website.com fashion
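A hedged sketch of the reverse-proxy setup recommended in the answer above. The domain names match the question, the backend port is the Spring Boot default 8080, and the header lines are standard boilerplate to adapt:
server {
    listen 80;
    server_name mywebsite.com www.mywebsite.com;   # root domain plus the www alias

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}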
Try fixing the URL so your server doesn't have to redirect:
url: "/jsontest/randomdata/" // there was a missing trailing /
// i.e. https://larsendt.com/jsontest/randomdata?ymax=500&count=32&t=0.9604179110508643
// was going to https://larsendt.com/jsontest/randomdata/?ymax=500&count=32&t=0.9604179110508643 | I'm doing some pretty basic jQuery ajax stuff on my website, and I'm having a boatload of trouble. Here's the relevant code:
$(document).ready( function() {
$("#getdatabutton").click( function() {
$.ajax({
url: "/jsontest/randomdata",
type: "get",
data: [{name:"ymax", value:$("#randomgraph").height()},
{name:"count", value:$("#countinput").val()},
{name:"t", value:Math.random()}],
success: function(response, textStatus, jqXHR) {
data = JSON.parse(response);
updateGraph(data);
$("#result").html(response);
if(data["error"] == "") {
$("#errorbox").html("None");
}
else {
$("#errorbox").html(data["error"]);
}
},
error: function(jqXHR, textStatus, errorThrown) {
$("#errorbox").html(textStatus + " " + errorThrown);
}
});
});
});
The page is loaded over HTTPS, but the XMLHttpRequests appear to go out over HTTP. I've even attempted changing the url to the absolute url (https://larsendt.com/jsontest/randomdata), and it still sends the request to the HTTP version of my site. Naturally, since the request is going to a different protocol, the ajax call fails (cross-domain and all that). As reported by Chrome: The page at https://larsendt.com/jsontest/ displayed insecure content from http://larsendt.com/jsontest/randomdata/?ymax=500&count=32&t=0.08111811126582325. The only other relevant information I can think of is that I'm having nginx do a 301 redirect from http://larsendt.com to https://larsendt.com, but I don't see how that would break anything (I believe it's fairly standard practice). If you want a live demo, the broken version is still up at https://larsendt.com/jsontest. Anyway, thanks in advance. | jQuery ajax won't make HTTPS requests
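What likely happened above: the request without a trailing slash triggered a 301 to the slashed URL, and that redirect pointed at the http:// scheme. Besides fixing the client URL, the http-to-https redirect itself can be made to preserve the full request. A hedged sketch of a canonical redirect server block, reusing the question's domain:
server {
    listen 80;
    server_name larsendt.com;
    # $request_uri keeps path and query string intact across the redirect
    return 301 https://$host$request_uri;
}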
It is a gunicorn issue, not Nginx. You can change the limit with --limit-request-line: https://docs.gunicorn.org/en/stable/settings.html#limit-request-line | This question already has an answer here: How to set gunicorn limit_request_line parameter over 8190? I am using nginx and gunicorn to deploy my Django project; when I send data to the server in a GET request I get the error:
Bad Request
Request Line is too large (8192 > 4094)
In nginx.conf I have:
client_max_body_size 100g;
client_header_buffer_size 512k;
large_client_header_buffers 4 512k;
Many methods on the Internet suggest changing "large_client_header_buffers" from 4 512k, but that didn't fix the problem. Any help or explanation is welcome! Thank you. | Request Line is too large (8192 > 4094) [duplicate]
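Applying the answer above: gunicorn's limit_request_line defaults to 4094 bytes (which matches the error) and can be raised up to its maximum of 8190 on the command line or in the config file. A hedged sketch, with an illustrative module name:
# command line (myproject.wsgi is a placeholder)
gunicorn --limit-request-line 8190 myproject.wsgi:application

# or equivalently in gunicorn.conf.py
limit_request_line = 8190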
The easiest approach would be to create a location for the ELB, for example:
location /elb-status {
access_log off;
return 200;
}
You will just need to change the Ping Path to /elb-status. If you want to see something in your browser while testing, you may need to change the content-type, since it defaults to application/octet-stream and the browser will offer to save the file; so something like this should work:
location /elb-status {
access_log off;
return 200 'your text goes here';
add_header Content-Type text/plain;
}
If you would like to check against the user-agent, something like this could be used:
set $block 1;
# Allow all the ELB health check agents.
if ($http_user_agent ~* '^ELB-HealthChecker\/.*$') {
set $block 0;
}
if (!$http_x_forwarded_for) {
set $block 1;
}
if ($block = 1) {
auth_basic 'Please enter ID and password';
auth_basic_user_file /usr/src/redmine/.htpasswd;
} | I'm having trouble trying to implement basic authentication for the ELB health check.
I've searched quite a bit to figure out the nginx configuration that avoids the 401 error shown below, which ELB returns due to basic authentication:
unhealthy in target-group hogehoge due to (reason Health checks failed with these codes: [401])
I've tried to modify nginx.conf so as to avoid it, but it doesn't work.
The code below gives me an [emerg] "server" directive is not allowed here error:
http {
server {
location / {
if (!$http_x_forwarded_for) {
auth_basic 'Please enter ID and password';
auth_basic_user_file /usr/src/redmine/.htpasswd;
}
}
}
}
How can I avoid the 401 error from the ELB health check due to basic authentication? Thanks for the help. | How to avoid basic authentication for AWS ELB health-check with nginx configuration
I have recently come across the same problem, and here's what I did to fix it.
In the server config: I had to add rewrite ^/myNodeApp/(.*)$ /$1 break; to the NGINX config, inside the location /myNodeApp/ {...} block, under what you already have in your example.
On the client side: I added a <base href="/myNodeApp/"> tag to the <head> of my html files (or pug layout file, in my case). This prefixes any links with your subdirectory. Note that you will need to remove any leading /'s from your existing links, e.g. src="js/app.js" instead of src="/js/app.js". That one caught me out for a while.
Bonus: if you're using Socket.IO, like I am, you'll need to make a few more changes to stop some errors appearing in your console. You need to pass it a path option and specify your subdirectory. In your html files:
var socket = io.connect("/", {path: "/myNodeApp/socket.io"}) | I'm running a Koa app on port 5000, and I'd like Nginx to serve the app in a sub-directory, e.g. http://example.com/myNodeApp. Here's what I've currently got in /etc/nginx/sites-enabled/default:
location ^~ /myNodeApp/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:5000/;
}
This kind of works, apart from the fact that any redirect, e.g. this.redirect('/') in my Koa app, goes to the nginx web root /. Also, it doesn't render anything from my Koa app's public directory, e.g. stylesheets, javascript and images. What am I doing wrong? Thanks. | Node JS - Nginx - proxy_pass to a subdirectory - Koa
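The answer's server-side change merged into the question's existing block, so the whole location can be seen in one place (same directives as the question, nothing new beyond the rewrite line):
location ^~ /myNodeApp/ {
    rewrite ^/myNodeApp/(.*)$ /$1 break;   # strip the prefix before proxying
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://localhost:5000/;
}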
The nginx cookbook requires you to edit the checksum attribute when using another version of nginx. The remote_file resource that is causing your error is:
remote_file nginx_url do
source nginx_url
checksum node['nginx']['source']['checksum']
path src_filepath
backup false
end
You need to update the checksum value, specifically node['nginx']['source']['checksum']. So in your JSON, you would add this line:
"source": {"checksum": "insert checksum here" }
Edit: As pointed out in the comments, the checksum is SHA256. You can generate the checksum of the file like so:
shasum -a 256 nginx-1.7.8.tar.gz | I get the following error when running vagrant up --provision to set up my development environment with vagrant:
==> default: [2014-12-08T20:33:51+00:00] ERROR: remote_file[http://nginx.org/download/nginx-1.7.8.tar.gz] (nginx::source line 58) had an error: Chef::Exceptions::ChecksumMismatch: Checksum on resource (0510af) does not match checksum on content (12f75e)
My chef JSON has the following for nginx:
"nginx": {
"version": "1.7.8",
"user": "deploy",
"init_style": "init",
"modules": [
"http_stub_status_module",
"http_ssl_module",
"http_gzip_static_module"
],
"passenger": {
"version": "4.0.53",
"gem_binary": "/home/vagrant/.rbenv/shims/gem"
},
"configure_flags": [
"--add-module=/home/vagrant/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/passenger-3.0.18/ext/nginx"
],
"gzip_types": [
"text/plain",
"text/html",
"text/css",
"text/xml",
"text/javascript",
"application/json",
"application/x-javascript",
"application/xml",
"application/xml+rss"
]}
and my Cheffile has the following cookbook:
cookbook 'nginx'
How do I resolve the checksum mismatch? | Chef::Exceptions::ChecksumMismatch when installing nginx-1.7.8 from source
It needs to be:
index index.php index.html index.htm;
The directive is "index". Also, the "try_files" is wrong. Change it to:
try_files $uri $uri/ /index.php$is_args$args;
Also, it's much nicer to have the config file indented properly; it makes it much easier to debug. I suspect the tutorial that you followed is wrong. It's certainly not valid, as directives need to be named first before trying to assign something to them. Pop a note to the tutorial author, maybe? It'd be nice for them to correct it so nobody else falls on this one :) | I have set up the following nginx config for my Ubuntu 14.04 VPS running HHVM with nginx:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /home/lephenix/main_website;
index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
include hhvm.conf;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?q=$uri&$args;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
}
The problem is that when I enable this config I get an error from nginx:
2014/09/07 13:16:01 [emerg] 13584#0: unknown directive "index.php" in /etc/nginx/sites-enabled/default:6
I have looked, and this seems to be the correct structure for this configuration. Even when I remove index.php, the error then changes to:
2014/09/07 13:17:03 [emerg] 13648#0: unknown directive "index.html" in /etc/nginx/sites-enabled/default:6
I followed the following guide to set up the server: http://webdevstudios.com/2014/07/17/setting-up-wordpress-nginx-hhvm-for-the-fastest-possible-load-times/
Thanks in advance for any help | Problematic Nginx config
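The answer's two corrections applied to the question's own server block, so the fixed lines can be seen in context (everything else is unchanged from the question):
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /home/lephenix/main_website;
    index index.php index.html index.htm;   # the "index" directive must be named first
    server_name localhost;
    include hhvm.conf;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
}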
Rails has a separate log file and doesn't log to the puma log. By default, Rails logs to a file in log/<environment>.log, e.g. log/production.log. | Why don't I see any Rails-specific entries in the logs? I'm using Puma 2.7.1 with an Nginx proxy, on a normal Debian box, nothing fancy, ruby 1.9.3 via RVM. My puma config:
#!/usr/bin/env puma
environment 'sandbox'
bind 'unix://tmp/puma.sock'
stdout_redirect 'log/puma.log', 'log/puma_error.log', true
pidfile 'tmp/pids/puma.pid'
state_path 'tmp/pids/puma.state'
daemonize true
workers 4
I start puma via:
bundle exec puma -C config/puma/config.rb
I see:
[23664] Puma starting in cluster mode...
[23664] * Version 2.7.1, codename: Earl of Sandwich Partition
[23664] * Min threads: 0, max threads: 16
[23664] * Environment: sandbox
[23664] * Process workers: 4
[23664] * Phased restart available
[23664] * Listening on unix://tmp/puma.sock
[23664] * Daemonizing...
I run:
tail -f log/puma*
I see:
==> log/puma_error.log <==
X-Accel-Mapping header missing
=== puma startup: 2014-02-13 14:08:52 +0100 ===
[deprecated] I18n.enforce_available_locales will default to true in the future. If you really want to skip validation of your locale you can set I18n.enforce_available_locales = false to avoid this message.
==> log/puma.log <==
=== puma startup: 2014-02-13 14:08:52 +0100 ===
[23670] - Worker 23678 booted, phase: 0
[23670] - Worker 23674 booted, phase: 0
[23670] - Worker 23686 booted, phase: 0
[23670] - Worker 23682 booted, phase: 0
But I don't see any more logs, nothing application related. When the application raises an exception, I get nothing in the logs... "tabula rasa". | Puma / missing logs