### Nginx 403 error: directory index of [folder] is forbidden

**Question:** I have 3 domain names and am trying to host all 3 sites on one server (a DigitalOcean droplet) using Nginx.

- mysite1.name
- mysite2.name
- mysite3.name

Only 1 of them works. The other two result in 403 errors (in the same way). In my nginx error log, I see:

```
[error] 13108#0: *1 directory index of "/usr/share/nginx/mysite2.name/live/" is forbidden.
```

My sites-enabled config is:

```nginx
server {
    server_name www.mysite2.name;
    return 301 $scheme://mysite2.name$request_uri;
}

server {
    server_name mysite2.name;

    root /usr/share/nginx/mysite2.name/live/;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.html index.php;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```

All 3 sites have nearly identical config files. Each site's files are in folders like /usr/share/nginx/mysite1.name/someFolder, and /usr/share/nginx/mysite1.name/live is a symlink to that. (Same for mysite2 and mysite3.) I've looked at "Nginx 403 forbidden for all files" but that didn't help. Any ideas on what might be wrong?

**Answer:** Here is the config that works:

```nginx
server {
    server_name www.mysite2.name;
    return 301 $scheme://mysite2.name$request_uri;
}

server {
    # This config is based on https://github.com/daylerees/laravel-website-configs/blob/6db24701073dbe34d2d58fea3a3c6b3c0cd5685b/nginx.conf
    server_name mysite2.name;

    # The location of our project's public directory.
    root /usr/share/nginx/mysite2/live/public/;

    # Point index to the Laravel front controller.
    index index.php;

    location / {
        # URLs to attempt, including pretty ones.
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Remove trailing slash to please routing system.
    if (!-d $request_filename) {
        rewrite ^/(.+)/$ /$1 permanent;
    }

    # Pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Then the only output in the browser was a Laravel error: "Whoops, looks like something went wrong." Do NOT run `chmod -R 777 app/storage` — making something world-writable is bad security. `chmod -R 755 app/storage` works and is more secure.
### How can I have same rule for two locations in NGINX config?

**Question:** How can I have the same rule for two locations in an NGINX config? I have tried the following:

```nginx
server {
    location /first/location/ | /second/location/ {
        ..
        ..
    }
}
```

but nginx reload threw this error:

```
nginx: [emerg] invalid number of arguments in "location" directive
```

**Answer:** Try:

```nginx
location ~ ^/(first/location|second/location)/ {
    ...
}
```

The `~` means to use a regular expression for the url. The `^` means to check from the first character. This will look for a `/` followed by either of the locations and then another `/`. Quoting from the docs:

> A regular expression is preceded with the tilde (~) for case-sensitive matching, or the tilde-asterisk (~*) for case-insensitive matching.
### nginx missing sites-available directory

**Question:** I installed Nginx on CentOS 6 and I am trying to set up virtual hosts. The problem I am having is that I can't seem to find the `/etc/nginx/sites-available` directory. Is there something I need to do in order to create it? I know Nginx is up and running because I can browse to it.

**Answer:** Well, I think nginx by itself doesn't have that in its setup, because the Ubuntu-maintained package does it as a convention to imitate Debian's Apache setup. You could create it yourself if you wanted to emulate the same setup. Create `/etc/nginx/sites-available` and `/etc/nginx/sites-enabled`, then edit the `http` block inside `/etc/nginx/nginx.conf` and add this line:

```nginx
include /etc/nginx/sites-enabled/*;
```

Of course, all the files will be inside `sites-available`, and you'd create a symlink for them inside `sites-enabled` for those you want enabled.
### Have nginx access_log and error_log log to STDOUT and STDERR of master process

**Question:** Is there a way to have the master process log to STDOUT/STDERR instead of to a file? It seems that you can only pass a filepath to the access_log directive:

```nginx
access_log /var/log/nginx/access.log
```

And the same goes for error_log:

```nginx
error_log /var/log/nginx/error.log
```

I understand that this simply may not be a feature of nginx; I'd be interested in a concise solution that uses tail, for example. It is preferable though that it comes from the master process, because I am running nginx in the foreground.

**Answer:** Edit: it seems nginx now supports `error_log stderr;` as mentioned in Anon's answer.

You can send the logs to `/dev/stdout`. In nginx.conf:

```nginx
daemon off;
error_log /dev/stdout info;

http {
    access_log /dev/stdout;
    ...
}
```

Edit: you may need to run `ln -sf /proc/self/fd /dev/` when running certain Docker containers, then use `/dev/fd/1` or `/dev/fd/2`.
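An alternative that leaves nginx.conf untouched is to symlink the default log files to the process's stdout/stderr (the approach used, if memory serves, by the official nginx Docker image). A sketch, with paths under `/tmp` purely for illustration — in a real image you would symlink `/var/log/nginx/access.log` and `/var/log/nginx/error.log`:

```shell
# Point the log paths at the process's stdout/stderr instead of
# editing nginx.conf. Demo directory is illustrative only.
mkdir -p /tmp/nginx-log-demo
ln -sf /dev/stdout /tmp/nginx-log-demo/access.log
ln -sf /dev/stderr /tmp/nginx-log-demo/error.log
ls -l /tmp/nginx-log-demo
```

Anything nginx writes to those paths then flows to the container's stdout/stderr streams.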
### What's the difference of $host and $http_host in Nginx

**Question:** In Nginx, what's the difference between the variables `$host` and `$http_host`?

**Answer:** `$host` is a variable of the Core module.

> **$host** — This variable is equal to line Host in the header of request or name of the server processing the request if the Host header is not available.
>
> This variable may have a different value from $http_host in such cases: 1) when the Host input header is absent or has an empty value, $host equals the value of the server_name directive; 2) when the value of Host contains a port number, $host doesn't include that port number. $host's value is always lowercase since 0.8.17.

`$http_host` is also a variable of the same module, but you won't find it under that name because it is defined generically as `$http_HEADER` (ref).

> **$http_HEADER** — The value of the HTTP request header HEADER when converted to lowercase and with 'dashes' converted to 'underscores', e.g. $http_user_agent, $http_referer...

Summarizing:

- `$http_host` always equals the `Host` request header.
- `$host` equals `$http_host`, lowercased and without the port number (if present), except when `Host` is absent or has an empty value. In that case, `$host` equals the value of the `server_name` directive of the server which processed the request.
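The normalization rules above can be sketched as a small Python function. This is a toy model of the documented behavior, not nginx source; `server_name` is passed in as a parameter since the model has no config to read:

```python
def nginx_host(http_host, server_name):
    """Toy model of how nginx derives $host from the Host header."""
    if not http_host:                      # header absent or empty
        return server_name                 # fall back to server_name
    host = http_host.lower()               # $host is always lowercase
    # Strip a trailing :port, but leave bare IPv6 literals like [::1] alone.
    if ":" in host and not host.endswith("]"):
        host = host.rsplit(":", 1)[0]
    return host

print(nginx_host("Example.COM:8080", "fallback.example"))  # example.com
print(nginx_host("", "fallback.example"))                  # fallback.example
```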
### Nginx 403 forbidden for all files

**Question:** I have nginx installed with PHP-FPM on a CentOS 5 box, but am struggling to get it to serve any of my files — whether PHP or not. Nginx is running as www-data:www-data, and the default "Welcome to nginx on EPEL" site (owned by root:root with 644 permissions) loads fine. The nginx configuration file has an include directive for `/etc/nginx/sites-enabled/*.conf`, and I have a configuration file `example.com.conf`, thus:

```nginx
server {
    listen 80;
    # Virtual Host Name
    server_name www.example.com example.com;

    location / {
        root /home/demo/sites/example.com/public_html;
        index index.php index.htm index.html;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME /home/demo/sites/example.com/public_html$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Despite public_html being owned by www-data:www-data with 2777 file permissions, this site fails to serve any content:

```
[error] 4167#0: *4 open() "/home/demo/sites/example.com/public_html/index.html" failed (13: Permission denied), client: XX.XXX.XXX.XX, server: www.example.com, request: "GET /index.html HTTP/1.1", host: "www.example.com"
```

I've found numerous other posts with users getting 403s from nginx, but most that I have seen involve either more complex setups with Ruby/Passenger (which in the past I've actually succeeded with) or only produce errors when the upstream PHP-FPM is involved, so they seem to be of little help. Have I done something silly here?

**Answer:** One permission requirement that is often overlooked is that a user needs `x` permission in every parent directory of a file to access that file. Check the permissions on `/`, `/home`, `/home/demo`, etc. for www-data `x` access. My guess is that `/home` is probably 770 and www-data can't chdir through it to get to any subdir. If it is, try `chmod o+x /home` (or whatever dir is denying the request).

EDIT: To easily display all the permissions on a path, you can use `namei -om /path/to/check`.
### nginx: [emerg] "server" directive is not allowed here

**Question:** I have reconfigured nginx but I can't get it to restart using the following configuration:

```nginx
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /robots.txt {
        alias /path/to/robots.txt;
        access_log off;
        log_not_found off;
    }

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30;
        proxy_read_timeout 30;
        proxy_pass http://127.0.0.1:8000;
    }

    location /static {
        expires 1M;
        alias /path/to/staticfiles;
    }
}
```

After running `sudo nginx -c conf -t` to test the config, the following error is returned:

```
nginx: [emerg] "server" directive is not allowed here in /etc/nginx/sites-available/config:1
nginx: configuration file /etc/nginx/sites-available/config test failed
```

**Answer:** That is not an nginx configuration file. It is *part* of an nginx configuration file. The nginx configuration file (usually called nginx.conf) will look like:

```nginx
events {
    ...
}

http {
    ...

    server {
        ...
    }
}
```

The `server` block is enclosed within an `http` block. Often the configuration is distributed across multiple files, by using the `include` directives to pull in additional fragments (for example from the sites-enabled directory).

Use `sudo nginx -t` to test the complete configuration file, which starts at nginx.conf and pulls in additional fragments using the `include` directive. See this document for more information.
### How to redirect to a different domain using Nginx?

**Question:** How can I redirect mydomain.example and any subdomain *.mydomain.example to www.adifferentdomain.example using Nginx?

**Answer:** `server_name` supports suffix matches using `.mydomain.example` syntax:

```nginx
server {
    server_name .mydomain.example;
    rewrite ^ http://www.adifferentdomain.example$request_uri? permanent;
}
```

or on any version 0.9.1 or higher:

```nginx
server {
    server_name .mydomain.example;
    return 301 http://www.adifferentdomain.example$request_uri;
}
```
### React-router and nginx

**Question:** I am transitioning my react app from webpack-dev-server to nginx. When I go to the root url "localhost:8080/login" I simply get a 404, and in my nginx log I see that it is trying to get:

```
my-nginx-container | 2017/05/12 21:07:01 [error] 6#6: *11 open() "/wwwroot/login" failed (2: No such file or directory), client: 172.20.0.1, server: , request: "GET /login HTTP/1.1", host: "localhost:8080"
my-nginx-container | 172.20.0.1 - - [12/May/2017:21:07:01 +0000] "GET /login HTTP/1.1" 404 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:53.0) Gecko/20100101 Firefox/53.0" "-"
```

Where should I look for a fix? My router bit in react looks like this:

```
render( Hello there p , app);
```

And my nginx file like this:

```nginx
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 8080;
        root /wwwroot;

        location / {
            root /wwwroot;
            index index.html;
            try_files $uri $uri/ /wwwroot/index.html;
        }
    }
}
```

EDIT: I know that most of the setup works because when I go to localhost:8080 without being logged in I get the login page as well. This is not through a redirect to localhost:8080/login — it is some react code.

**Answer:** The location block in your nginx config should be:

```nginx
location / {
    try_files $uri /index.html;
}
```

The problem is that requests to the index.html file work, but you're not currently telling nginx to forward other requests to the index.html file too.
### Nginx serves .php files as downloads, instead of executing them

**Question:** I am installing a website in a droplet (DigitalOcean). I have an issue installing NGINX with PHP properly. I followed the tutorial https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-14-04, but when I try to run some .php files the browser just downloads them... For example, http://5.101.99.123/info.php works, but if I go to the main http://5.101.99.123 it downloads my index.php. Any idea?

```
-rw-r--r-- 1 agitar_user www-data 418 Jul 31 18:27 index.php
-rw-r--r-- 1 agitar_user www-data  21 Aug 31 11:20 info.php
```

My /etc/nginx/sites-available/default:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/html;
    index index.html index.htm index.php;

    # Make site accessible from http://localhost/
    server_name agitarycompartir.com;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        #
        # # With php5-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location / {
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
    ...
}
```

Other "locations" are commented out.

**Answer:** Try this. Edit `/etc/nginx/sites-available/default`:

1. Uncomment both listen lines to make Nginx listen on port 80 IPv4 and IPv6.

    ```nginx
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default_server ipv6only=on; ## listen for ipv6
    ```

2. Leave `server_name` alone.

    ```nginx
    # Make site accessible (...)
    server_name localhost;
    ```

3. Add `index.php` to the `index` line.

    ```nginx
    root /usr/share/nginx/www;
    index index.php index.html index.htm;
    ```

4. Uncomment `location ~ \.php$ {}`.

    ```nginx
    # pass the PHP scripts to FastCGI server listening on (...)
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+?\.php)(/.+)?$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
    ```

5. Edit `/etc/php5/fpm/php.ini` and make sure `cgi.fix_pathinfo` is set to `0`.

6. Restart Nginx and php5-fpm: `sudo service nginx restart && sudo service php5-fpm restart`

I just started using Linux a week ago, so I really hope this helps you. I am using the nano text editor to edit the files; run `apt-get install nano` if you don't have it. Google it to know more.
### How do I prevent a Gateway Timeout with FastCGI on Nginx

**Question:** I am running Django, FastCGI, and Nginx. I am creating an api of sorts where someone can send some data via XML which I will process, and then return some status codes for each node that was sent over. The problem is that Nginx will throw a 504 Gateway Time-out if I take too long to process the XML — I think longer than 60 seconds. So I would like to set up Nginx so that any requests matching the location /api will not time out for 120 seconds. What setting will accomplish that? What I have so far is:

```nginx
# Handles all api calls
location ^~ /api/ {
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    fastcgi_pass 127.0.0.1:8080;
}
```

Edit: What I have is not working :)

**Answer:** Proxy timeouts are, well, for proxies — not for FastCGI. The directives that affect FastCGI timeouts are `client_header_timeout`, `client_body_timeout` and `send_timeout`.

Edit: Considering what's found on the nginx wiki, the `send_timeout` directive is responsible for setting the general timeout of the response (which was a bit misleading). For FastCGI there's `fastcgi_read_timeout`, which affects the FastCGI process response timeout.
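Applied to the asker's location block, the fix sketched above would look something like this (the 120-second figure is taken from the question; adjust to taste):

```nginx
# Handles all api calls
location ^~ /api/ {
    fastcgi_pass 127.0.0.1:8080;
    # Allow the FastCGI backend up to 120s to produce a response
    # before nginx gives up with a 504.
    fastcgi_read_timeout 120s;
}
```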
### How can I tell if my server is serving GZipped content?

**Question:** I have a webapp on an NGinx server. I set `gzip on` in the conf file and now I'm trying to see if it works. YSlow says it's not, but 5 out of 6 websites that do the test say it is. How can I get a definite answer on this, and why is there a difference in the results?

**Answer:** It looks like one possible answer is, unsurprisingly, curl:

```
$ curl http://example.com/ --silent --write-out "%{size_download}\n" --output /dev/null
31032
$ curl http://example.com/ --silent -H "Accept-Encoding: gzip,deflate" --write-out "%{size_download}\n" --output /dev/null
2553
```

In the second case the client tells the server that it supports content encoding, and you can see that the response was indeed shorter — compressed.
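The size gap curl reports comes from gzip itself; a quick, self-contained illustration of the kind of ratio to expect on repetitive HTML (Python's `gzip` module here instead of a live server — the sample markup is made up):

```python
import gzip

# Highly repetitive markup, the best case for gzip.
html = ("<li class='item'>hello world</li>\n" * 500).encode()
compressed = gzip.compress(html)

print(len(html), "bytes raw")
print(len(compressed), "bytes gzipped")
```

Real pages compress less dramatically than this synthetic sample, but a correctly configured server should still show a clear drop when `Accept-Encoding: gzip` is sent.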
### How to run Nginx within a Docker container without halting?

**Question:** I have Nginx installed in a Docker container, and am trying to run it like this:

```
docker run -i -t -p 80:80 mydockerimage /usr/sbin/nginx
```

The problem is that the way Nginx works, the initial process immediately spawns a master Nginx process and some workers, and then quits. Since Docker is only watching the PID of the original command, the container then halts. How do I prevent the container from halting? I need to be able to tell it to bind to the first child process, or stop Nginx's initial process from exiting.

**Answer:** nginx, like all well-behaved programs, can be configured not to self-daemonize. Use the `daemon off` configuration directive described in http://wiki.nginx.org/CoreModule.
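Instead of editing nginx.conf, the directive can also be passed on the command line with `-g`, which keeps nginx in the foreground as the container's main process. A Dockerfile sketch (assuming a base image with nginx already installed):

```
# Dockerfile sketch: run nginx in the foreground so the container
# keeps running. "daemon off;" overrides the default self-daemonizing.
CMD ["nginx", "-g", "daemon off;"]
```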
### Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk

**Question:** I'm running into "413 Request Entity Too Large" errors when posting files larger than 10MB to our API running on AWS Elastic Beanstalk. I've done quite a bit of research and believe that I need to up the `client_max_body_size` for Nginx, however I cannot seem to find any documentation on how to do this using Elastic Beanstalk. My guess is that it needs to be modified using an ebextension file. Anyone have thoughts on how I can up the limit? 10MB is pretty weak; there has to be a way to up this manually.

**Answer:** There are two methods you can take for this. Unfortunately some work for some EB application types and some work for others.

**Supported/recommended in AWS documentation**

For some application types, like Java SE, Go, Node.js, and maybe Ruby (it's not documented for Ruby, but all the other Nginx platforms seem to support this), Elastic Beanstalk has a built-in understanding of how to configure Nginx.

> To extend Elastic Beanstalk's default nginx configuration, add .conf configuration files to a folder named `.ebextensions/nginx/conf.d/` in your application source bundle. Elastic Beanstalk's nginx configuration includes .conf files in this folder automatically.

```
~/workspace/my-app/
|-- .ebextensions
|   `-- nginx
|       `-- conf.d
|           `-- myconf.conf
`-- web.jar
```

(Configuring the Reverse Proxy — Java SE)

To increase the maximum upload size specifically, create a file at `.ebextensions/nginx/conf.d/proxy.conf` setting the max body size to whatever size you would prefer:

```nginx
client_max_body_size 50M;
```

**Create the Nginx config file directly**

For some other application types, after much research and hours of working with the wonderful AWS support team, I created a config file inside of `.ebextensions` to supplement the nginx config. This change allowed for a larger post body size. Inside of the `.ebextensions` directory, I created a file called `01_files.config` with the following contents:

```yaml
files:
    "/etc/nginx/conf.d/proxy.conf" :
        mode: "000755"
        owner: root
        group: root
        content: |
            client_max_body_size 20M;
```

This generates a proxy.conf file inside of the /etc/nginx/conf.d directory. The proxy.conf file simply contains the one-liner `client_max_body_size 20M;`, which does the trick. Note that for some platforms this file will be created during the deploy, but then removed in a later deployment phase.

You can specify other directives, which are outlined in the Nginx documentation: http://wiki.nginx.org/Configuration
### NGINX to reverse proxy websockets AND enable SSL (wss://)?

**Question:** I'm so lost and new to building NGINX on my own, but I want to be able to enable secure websockets without having an additional layer. I don't want to enable SSL on the websocket server itself, but instead I want to use NGINX to add an SSL layer to the whole thing. Every web page out there says I can't do it, but I know I can! Thanks to whoever (myself) can show me how!

**Answer:** Just to note that nginx supports WebSockets as of release 1.3.13. Example of use:

```nginx
location /websocket/ {
    proxy_pass http://backend_host;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;
}
```

You can also check the nginx changelog and the WebSocket proxying documentation.
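A common refinement from the WebSocket proxying documentation is to use a `map` so that plain HTTP requests hitting the same location don't carry a stray `Connection: upgrade` header. A sketch — the upstream name, paths, and the omitted certificate directives are placeholders to adapt:

```nginx
# In the http block: choose the Connection header value based on
# whether the client actually asked for a protocol upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key directives omitted here

    location /websocket/ {
        proxy_pass http://backend_host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```

With TLS terminated at nginx this way, clients connect with `wss://` while the backend keeps speaking plain `ws://`.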
### Nginx: stat() failed (13: permission denied)

**Question:** I am using the default config while adding the specific directory with nginx installed on my Ubuntu 12.04 machine:

```nginx
server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        root /username/test/static;
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
    ...
}
```

I just want a simple static nginx server to serve files out of that directory. However, checking the error.log I see:

```
2014/09/10 16:55:16 [crit] 10808#0: *2 stat() "/username/test/static/index.html" failed (13: Permission denied), client:, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "domain"
2014/09/10 16:55:16 [error] 10808#0: *2 rewrite or internal redirection cycle while internally redirecting to "/index.html"
```

I've already done `chown -R www-data:www-data` on /username/test/static, and I've set them to chmod 755. I don't know what else needs to be set.

**Answer:** Nginx operates within the directory, so if you can't `cd` to that directory from the nginx user then it will fail (as does the `stat` command in your log). Make sure the www-data user can `cd` all the way to /username/test/static. You can confirm whether `stat` will fail or succeed by running:

```
sudo -u www-data stat /username/test/static
```

In your case, probably the /username directory is the issue here. Usually www-data does not have permission to `cd` into other users' home directories. The best solution in that case would be to add www-data to the username group:

```
gpasswd -a www-data username
```

and make sure that the username group can enter all directories along the path:

```
chmod g+x /username && chmod g+x /username/test && chmod g+x /username/test/static
```

For your changes to take effect, restart nginx:

```
nginx -s reload
```
### Locate the nginx.conf file my nginx is actually using

**Question:** Working on a client's server where there are two different versions of nginx installed. I think one of them was installed with the brew package manager (it's an OS X box) and the other seems to have been compiled and installed with the nginx packaged Makefile. I searched for all of the nginx.conf files on the server, but none of them define the parameters that nginx is actually using when I start it. Where is the nginx.conf file that I'm unaware of?

**Answer:** Running `nginx -t` through your command line will issue a test and append the output with the file path to the configuration file (with either an error or success message).
### nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size

**Question:** I'm in the process of setting up a new server. The web server of my choice is NGINX. I want to add a domain (e.g. example.com) as a virtual host. I already have two other domains in there and they work fine, but when I try to add the above-mentioned domain and start the server it gives me:

```
Job failed. See system journal and 'systemctl status' for details.
```

I thought it was because of the dashes, so I tried various other domains with and without hyphens, but no luck — same error. What could be causing this? I also tried rebooting; I am really at a loss here. Any help would be greatly appreciated. I have played around a bit and found out that when I only put one domain in, it works. But when I put another domain in, it stops. Here is the output of status:

```
[root@netzmelone nginx]# systemctl status nginx
nginx.service - A high performance web server and a reverse proxy server
          Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled)
          Active: failed (Result: exit-code) since Sun, 16 Dec 2012 11:38:08 +0000; 7s ago
         Process: 14239 ExecStop=/usr/sbin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=1/FAILURE)
         Process: 14232 ExecStart=/usr/sbin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
         Process: 14242 ExecStartPre=/usr/sbin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=1/FAILURE)
        Main PID: 14234 (code=exited, status=0/SUCCESS)
          CGroup: name=systemd:/system/nginx.service

Dec 16 11:38:08 netzmelone nginx[14242]: nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
Dec 16 11:38:08 netzmelone nginx[14242]: nginx: configuration file /etc/nginx/nginx.conf test failed
```

**Answer:** This is most likely happening because of the long domain name. You can fix it by adding `server_names_hash_bucket_size 64;` at the top of your `http` block (probably located in /etc/nginx/nginx.conf). I quote from the nginx documentation what to do when this error appears:

> In this case, the directive value should be increased to the next power of two.

So in your case it should become 64. If you still get the same error, try increasing to 128 and beyond.

Reference: https://nginx.org/en/docs/http/server_names.html#optimization
### How to edit nginx.conf to increase file size upload

**Question:** I want to increase the maximum file size that can be uploaded. After doing some research online, I found that you have to edit the file nginx.conf. The only way I can currently access this file is by going through PuTTY and typing in the command:

```
vi /etc/nginx/nginx.conf
```

This will open the file, but I have 2 questions now: How do I edit this file? I found online that you have to add the following line of code: `client_max_body_size 8M;` — where would I put this line of code in nginx.conf?

**Answer:**

**Add client_max_body_size.** Now that you are editing the file, you need to add the line into the server block, like so:

```nginx
server {
    client_max_body_size 8M;

    # other lines...
}
```

If you are hosting multiple sites, add it to the http context instead:

```nginx
http {
    client_max_body_size 8M;

    # other lines...
}
```

Also update `upload_max_filesize` in your php.ini file so that you can upload files of the same size.

**Saving in vi.** Once you are done, you need to save; this can be done in vi by pressing the Esc key, typing `:wq`, and hitting Return.

**Restarting Nginx and PHP.** Now you need to restart nginx and php to reload the configs. This can be done using the following commands:

```
sudo service nginx restart
sudo service php5-fpm restart
```

Or whatever your php service is called.
### nginx: send all requests to a single html page

**Question:** Using nginx, I want to preserve the url, but actually load the same page no matter what. I will use the url with `History.getState()` to route the requests in my javascript app. It seems like it should be a simple thing to do?

```nginx
location / {
    rewrite (.*) base.html break;
}
```

works, but redirects the url? I still need the url; I just want to always use the same page.

**Answer:** I think this will do it for you:

```nginx
location / {
    try_files /base.html =404;
}
```
### How can query string parameters be forwarded through a proxy_pass with nginx?

**Question:**

```nginx
upstream apache {
    server 127.0.0.1:8080;
}

server {
    location ~* ^/service/(.*)$ {
        proxy_pass http://apache/$1;
        proxy_redirect off;
    }
}
```

The above snippet will redirect requests where the url includes the string "service" to another server, but it does not include query parameters.

**Answer:** From the proxy_pass documentation:

> A special case is using variables in the proxy_pass statement: The requested URL is not used and you are fully responsible to construct the target URL yourself.

Since you're using $1 in the target, nginx relies on you to tell it exactly what to pass. You can fix this in two ways. First, stripping the beginning of the uri with a proxy_pass is trivial:

```nginx
location /service/ {
    # Note the trailing slash on the proxy_pass.
    # It tells nginx to replace /service/ with / when passing the request.
    proxy_pass http://apache/;
}
```

Or if you want to use the regex location, just include the args:

```nginx
location ~* ^/service/(.*) {
    proxy_pass http://apache/$1$is_args$args;
}
```
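To see why `$is_args$args` is needed: the regex capture matches only the path portion, so `$1` never contains the query string, and the args must be re-appended by hand. A toy Python model of the URL construction (illustrative only, not nginx internals):

```python
import re

def build_target(request_uri, pattern=r"^/service/(.*)"):
    """Toy model of: proxy_pass http://apache/$1$is_args$args;"""
    path, _, args = request_uri.partition("?")
    captured = re.match(pattern, path).group(1)   # $1 -- path only
    is_args = "?" if args else ""                 # nginx's $is_args
    return "http://apache/" + captured + is_args + args

print(build_target("/service/users/42?page=2&sort=asc"))
```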
### nginx showing blank PHP pages

**Question:** I have set up an nginx server with php5-fpm. When I try to load the site I get a blank page with no errors. HTML pages are served fine, but PHP pages are not. I tried turning on display_errors in php.ini but no luck. php5-fpm.log is not producing any errors and neither is nginx. nginx.conf:

```nginx
server {
    listen 80;

    root /home/mike/www/606club;
    index index.php index.html;
    server_name mikeglaz.com www.mikeglaz.com;
    error_log /var/log/nginx/error.log;

    location ~ \.php$ {
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```

EDIT: here's my nginx error log:

```
2013/03/15 03:52:55 [error] 1020#0: *55 open() "/home/mike/www/606club/robots.txt" failed (2: No such file or directory), client: 199.30.20.40, server: mikeglaz.com, request: "GET /robots.txt HTTP/1.1", host: "mikeglaz.com"
```

**Answer:** For reference, I am attaching my location block for catching files with the `.php` extension:

```nginx
location ~ \.php$ {
    include /path/to/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
}
```

Double-check the /path/to/fastcgi_params, and make sure that it is present and readable by the nginx user.
I assume that you're running a Linux, and you're using gEdit to edit your files. In the/etc/nginx/sites-enabled, it may have left a temp file e.g.default~(watch the~).Depending on your editor, the file could be named.saveor something like it. Just run$ ls -lahto see which files are unintended to be there and remove them (Thanks@Tischfor this).Delete this file, and it will solve your problem. | Closed.This question isoff-topic. It is not currently accepting answers.Want to improve this question?Update the questionso it'son-topicfor Stack Overflow.Closed11 years ago.Improve this questionserver {
#listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
#root /usr/share/nginx/www;
root /home/ubuntu/node-login;
# Make site accessible from
server_name ec2-xx-xx-xxx-xxx.us-west-1.compute.amazonaws.com;
location /{
proxy_pass http://127.0.0.1:8000/;
proxy_redirect off;
}
}this results in nignx error [warn] conflicting server name "ec2..." on 0.0.0.0:80 ignored
I dont understand, any explanation appreciated. Thanks. | nginx error "conflicting server name" ignored [closed] |
### SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

**Question:** I'm not able to set up SSL. I've Googled and found a few solutions, but none of them worked for me. I need some help please... Here's the error I get when I attempt to restart nginx:

```
root@s17925268:~# service nginx restart
Restarting nginx: nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/conf.d/ssl/ssl.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
nginx: configuration file /etc/nginx/nginx.conf test failed
```

My certificate is from StartSSL and is valid for 1 year. Here's what I tested:

- The certificate and private key have no trailing spaces.
- I'm not using the default server.key file.
- I checked nginx.conf and the directives are pointing to the correct private key and certificate.
- I also checked the modulus, and I get a different modulus for both key and certificate.

Thank you for your help. :)

**Answer:** "I got a different modulus for both key and certificate." This says it all. You have a mismatch between your key and certificate. The modulus should match. Make sure you have the correct key.
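The modulus comparison can be done with two openssl one-liners; if the two MD5 digests differ, nginx will refuse the pair with exactly this error. A self-contained demo below generates a throwaway self-signed pair in a temp directory so the check has something to run against (all paths and the CN are made up):

```shell
# Generate a throwaway key + self-signed cert, then compare moduli.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
        -days 1 -subj "/CN=demo.example" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in demo.key | openssl md5)
echo "cert: $cert_mod"
echo "key:  $key_mod"
```

Against a real deployment you would point the same two commands at the `ssl_certificate` and `ssl_certificate_key` paths from nginx.conf; matching digests mean the pair belongs together.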
The default value for the client_max_body_size directive is 1 MiB. It can be set in http, server and location context — as in most cases, this directive in a nested block takes precedence over the same directive in the ancestor blocks. Excerpt from the ngx_http_core_module documentation:
Syntax: client_max_body_size size;
Default: client_max_body_size 1m;
Context: http, server, location
Sets the maximum allowed size of the client request body, specified in the "Content-Length" request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of
client request body size. Don't forget to reload the configuration with the nginx -s reload or service nginx reload command, prepended with sudo (if needed). | I have been getting the nginx error: 413 Request Entity Too Large. I have been able to update my client_max_body_size in the server section of my nginx.conf file to 20M and this has fixed the issue. However, what is the default nginx client_max_body_size?
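The context/precedence rule described above can be sketched in a single config fragment (the sizes here are arbitrary examples, not recommendations):

```nginx
http {
    client_max_body_size 8m;           # applies everywhere below unless overridden
    server {
        client_max_body_size 20m;      # overrides the http-level value for this server
        location /upload {
            client_max_body_size 100m; # overrides both, for this location only
        }
    }
}
```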
Change the listen option to this in your catch-all server block (add default_server); this will take all your non-defined connections (on the specified port):
listen 80 default_server;
If you want to push everything to index.php when the file or folder does not exist:
try_files $uri /$uri /index.php;
Per the docs: It can also be set explicitly which server should be default, with the default_server parameter in the listen directive. | I have an instance of nginx running which serves several websites. The first is a status message on the server's IP address. The second is an admin console on admin.domain.com. These work great. Now I'd like all other domain requests to go to a single index.php - I have loads of domains and subdomains and it's impractical to list them all in an nginx config.
So far I've tried setting server_name to * but that failed as an invalid wildcard. *.* works until I add the other server blocks, then I guess it conflicts with them.
Is there a way to run a catch-all server block in nginx after other sites have been defined?
N.B. I'm not a spammer, these are genuine sites with useful content, they're just powered by the same CMS from a database! | nginx server_name wildcard or catch-all
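A minimal sketch of such a catch-all block (the root path and index are illustrative assumptions; "_" is a conventional non-matching placeholder name — the block is chosen only because it is default_server):

```nginx
server {
    listen 80 default_server;
    server_name _;              # never matches a real host name
    root /var/www/catchall;
    index index.php;

    location / {
        try_files $uri /$uri /index.php;
    }
}
```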
It is easier to think about them if you realize they aren't mutually exclusive. Think of an API gateway as a specific type of reverse proxy implementation.
In regards to your questions, it is not uncommon to see both used in conjunction, where the API gateway is treated as an application tier that sits behind a reverse proxy for load balancing and health checking. An example would be something like a WAF sandwich architecture, in that your Web Application Firewall/API Gateway is sandwiched by reverse proxy tiers: one for the WAF itself and the other for the individual microservices it talks to.
Regarding the differences, they are very similar. It's just nomenclature. As you take a basic reverse proxy setup and start bolting on more pieces like authentication, rate limiting, dynamic config updates, and service discovery, people are more likely to call that an API gateway. | In order to deal with the microservice architecture, it's often used alongside a reverse proxy (such as nginx or Apache httpd), and for cross-cutting concerns the API gateway pattern is used. Sometimes the reverse proxy does the work of the API gateway.
It would be good to see clear differences between these two approaches.
It looks like the potential benefit of API gateway usage is invoking multiple microservices and aggregating the results. All other responsibilities of an API gateway can be implemented using a reverse proxy, such as:
- Authentication (can be done using nginx Lua scripts);
- Transport security (a reverse proxy task in itself);
- Load balancing...
So based on this there are several questions:
Does it make sense to use an API gateway and a reverse proxy simultaneously (for example: request -> API gateway -> reverse proxy (nginx) -> concrete microservice)? In what cases?
What are the other differences that can be implemented using an API gateway and can't be implemented by a reverse proxy, and vice versa? | API gateway vs. reverse proxy
Like Apache, this is a quick edit to the source and recompile. From Calomel.org:
The Server: string is the header which is sent back to the client to tell them what type of http server you are running and possibly what version. This string is used by places like Alexia and Netcraft to collect statistics about how many and of what type of web server are live on the Internet. To support the author and statistics for Nginx we recommend keeping this string as is. But, for security you may not want people to know what you are running, and you can change this in the source code. Edit the source file src/http/ngx_http_header_filter_module.c and look at lines 48 and 49. You can change the String to anything you want.
## vi src/http/ngx_http_header_filter_module.c (lines 48 and 49)
static char ngx_http_server_string[] = "Server: MyDomain.com" CRLF;
static char ngx_http_server_full_string[] = "Server: MyDomain.com" CRLF;
March 2011 edit: Props to Flavius below for pointing out a new option, replacing Nginx's standard HttpHeadersModule with the forked HttpHeadersMoreModule. Recompiling the standard module is still the quick fix, and makes sense if you want to use the standard module and won't be changing the server string often. But if you want more than that, the HttpHeadersMoreModule is a strong project and lets you do all sorts of runtime black magic with your HTTP headers. | There's an option to hide the version so it will display only nginx, but is there a way to hide that too so it will not show anything or change the header? | How do you change the server header returned by nginx?
Because Docker will recognize $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue. It's working fine for me with the same configuration. | I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
it shows that
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So, how can I mount the configuration file? | Are you trying to mount a directory onto a file (or vice-versa)?
For the option gzip_types, the MIME type text/html is always included by default, so you don't need to specify it explicitly. | I have this in my nginx configuration files:
gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
but nginx gives an error when starting up:
[warn]: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:25
What is actually a duplicate of text/html? Is it text/plain? | duplicate MIME type "text/html"?
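In other words, the warning disappears once text/html is dropped from the list:

```nginx
gzip_types text/plain text/css application/json application/x-javascript
           text/xml application/xml application/xml+rss text/javascript;
```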
The best way to do what you want is to add another server block:
server {
#implemented by default, change if you need different ip or port
#listen *:80 | *:8000;
server_name test.com;
return 301 $scheme://www.test.com$request_uri;
}
And edit your main server block's server_name variable as follows:
server_name www.test.com;
Important: a new server block is the right way to do this; if is evil. You must use locations and servers instead of if where possible. rewrite is sometimes evil too, so I replaced it with return. | I need to redirect every http://test.com request to http://www.test.com. How can this be done?
In the server block I tried adding
rewrite ^/(.*) http://www.test.com/$1 permanent;
but in the browser it says: "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete."
My server block looks like
listen 80;
server_name test.com;
client_max_body_size 10M;
client_body_buffer_size 128k;
root /home/test/test/public;
passenger_enabled on;
rails_env production;
#rewrite ^/(.*) http://www.test.com/$1 permanent;
#rewrite ^(.*)$ $scheme://www.test.com$1;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
} | How to redirect a URL in Nginx |
There is a possibility to use "volumes_from" as a workaround until the depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as below:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat in the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one Docker-specific workaround that could be used.
depends_on feature
This probably would be a futuristic answer, because the functionality is not yet implemented in Docker (as of 1.9). There is a proposal to introduce "depends_on" in the new networking feature introduced by Docker, but there is a long-running debate about it @ https://github.com/docker/compose/issues/374. Hence, once it is implemented, the feature depends_on could be used to order the container start-up, but at the moment you would have to resort to one of the following:
make nginx retry until the php server is up - I would prefer this one
use the volumes_from workaround as described above - I would avoid this, because of the volume leakage into unnecessary containers. | I have recently started migrating to Docker 1.9 and Docker-Compose 1.5's networking features to replace using links.
So far with links there were no problems with nginx connecting to my php5-fpm fastcgi server located in a different server in one group via docker-compose. Newly though, when I run docker-compose --x-networking up, my php-fpm, mongo and nginx containers boot up, however nginx quits straight away with
[emerg] 1#1: host not found in upstream "waapi_php_1" in /etc/nginx/conf.d/default.conf:16
However, if I run the docker-compose command again while the php and mongo containers are running (nginx exited), nginx starts and works fine from then on.
This is my docker-compose.yml file:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
This is my default.conf for nginx:
server {
listen 80;
root /var/www/test;
error_log /dev/stdout debug;
access_log /dev/stdout;
location / {
# try to serve file directly, fallback to app.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/.+\.php(/|$) {
# Referencing the php service host (Docker)
fastcgi_pass waapi_php_1:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# We must reference the document_root of the external server ourselves here.
fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
fastcgi_param HTTPS off;
}
}
How can I get nginx to work with only a single docker-compose call? | Docker Networking - nginx: [emerg] host not found in upstream
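Besides retrying at startup, one common workaround (an assumption based on general nginx behaviour, not taken from the answer above) is to make nginx resolve the upstream name at request time instead of at startup, so nginx can boot even while waapi_php_1 is still down. Using a variable in fastcgi_pass defers the lookup to the configured resolver; 127.0.0.11 is Docker's embedded DNS on user-defined networks:

```nginx
location ~ ^/.+\.php(/|$) {
    resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS (assumption)
    set $php_upstream waapi_php_1:9000;
    fastcgi_pass $php_upstream;      # variable => resolved per request, not at startup
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
}
```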
You likely have other files (such as the default configuration) located in /etc/nginx/sites-enabled that need to be removed.
This issue is caused by a repeat of the default_server parameter supplied to one or more listen directives in your files. You'll likely find this conflicting directive reads something similar to:
listen 80 default_server;
As the nginx core module documentation for listen states: The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair. If none of the directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair.
This means that there must be another file or server block defined in your configuration with default_server set for port 80. nginx is encountering that first, before your mysite.com file, so try removing or adjusting that other configuration.
If you are struggling to find where these directives and parameters are set, try a search like so:
grep -R default_server /etc/nginx | In my error log I get
[emerg] 10619#0: a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/mysite.com:4
On line 4 I have:
server_name mysite.com www.mysite.com;
Any suggestions? | nginx- duplicate default server error
Edit your listen statement from:
listen 443;
to
listen 443 ssl;
and comment out or delete:
# ssl on;
then check nginx -t again. | After an NGINX upgrade to v1.15.2 I start getting the warning:
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /usr/local/etc/nginx/sites-enabled/confid-file-name:8
where the 8th line is ssl on;
How can I solve this? | nginx the "ssl" directive is deprecated, use the "listen ... ssl"
I ran into a similar problem. It works on one server and does not on another server with the same Nginx configuration. Found the solution, which is answered by Igor here: http://forum.nginx.org/read.php?2,1612,1627#msg-1627
Yes. Or you may combine SSL/non-SSL servers in one server:
server {
listen 80;
listen 443 default ssl;
# ssl on - remember to comment this out
} | I'm running a Sinatra app behind passenger/nginx. I'm trying to get it to respond to both http and https calls. The problem is, when both are defined in the server block, https calls are responded to normally but http yields a 400 "The plain HTTP request was sent to HTTPS port" error. This is for a static page, so I'm guessing Sinatra has nothing to do with this. Any ideas on how to fix this?
Here's the server block:
server {
listen 80;
listen 443 ssl;
server_name localhost;
root /home/myhome/app/public;
passenger_enabled on;
ssl on;
ssl_certificate /opt/nginx/ssl_keys/ssl.crt;
ssl_certificate_key /opt/nginx/ssl_keys/ssl.key;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
location /static {
root /home/myhome/app/public;
index index.html index.htm index.php;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 /500.html;
access_log /home/myhome/app/logs/access.log;
error_log /home/myhome/app/logs/error.log;
} | Dealing with nginx 400 "The plain HTTP request was sent to HTTPS port" error |
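A related sketch: nginx uses its own non-standard status code 497 for "plain HTTP request sent to an HTTPS port", and that code can be turned into a redirect instead of the 400 page (certificate paths below mirror the ones from the question):

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name localhost;
    ssl_certificate     /opt/nginx/ssl_keys/ssl.crt;
    ssl_certificate_key /opt/nginx/ssl_keys/ssl.key;

    # 497 = plain HTTP sent to the HTTPS port; bounce the client to https
    error_page 497 =301 https://$host$request_uri;
}
```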
Don't hardcode the IP of containers in the nginx config; docker link adds the hostname of the linked machine to the hosts file of the container, and you should be able to ping by hostname.
EDIT: Docker 1.9 Networking no longer requires you to link containers; when multiple containers are connected to the same network, their hosts file will be updated so they can reach each other by hostname.
Every time a docker container spins up from an image (even stop/start-ing an existing container) the containers get new IPs assigned by the docker host. These IPs are not in the same subnet as your actual machines.
See the docker linking docs (this is what compose uses in the background), but it is more clearly explained in the docker-compose docs on links & expose.
links
links:
- db
- db:database
- redis
An entry with the alias' name will be created in /etc/hosts inside containers for this service, e.g.:
172.17.2.186 db
172.17.2.186 database
172.17.2.187 redis
expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
And if you set up your project to get the ports + other credentials through environment variables, links automatically set a bunch of system variables. To see what environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT: full URL, e.g. DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol: full URL, e.g. DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR: container's IP address, e.g. DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT: exposed port number, e.g. DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO: protocol (tcp or udp), e.g. DB_PORT_5432_TCP_PROTO=tcp
name_NAME: fully qualified container name, e.g. DB_1_NAME=/myapp_web_1/myapp_db_1 | I am trying to link 2 separate containers:
nginx:latest
php:fpm
The problem is that php scripts do not work. Perhaps the php-fpm configuration is incorrect.
Here is the source code, which is in my repository. Here is the file docker-compose.yml:
nginx:
  build: .
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./:/var/www/test/
  links:
    - fpm
fpm:
  image: php:fpm
  ports:
- "9000:9000"andDockerfilewhich I used to build a custom image based on the nginx one:FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./default.conf /etc/nginx/conf.d/
Lastly, here is my custom Nginx virtual host config:
server {
listen 80;
server_name localhost;
root /var/www/test;
error_log /var/log/nginx/localhost.error.log;
access_log /var/log/nginx/localhost.access.log;
location / {
# try to serve file directly, fallback to app.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass 192.168.59.103:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
}
Could anybody help me configure these containers correctly to execute php scripts?
P.S. I run the containers via docker-compose like this: docker-compose up from the project root directory. | How to correctly link php-fpm and Nginx Docker containers?
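Tying the answer back to this question's config, the sketch below shows the one line that changes — the hardcoded boot2docker IP is replaced with the compose service name (fpm, matching the service defined in the docker-compose.yml above):

```nginx
location ~ ^/.+\.php(/|$) {
    # resolve the linked container by its service name instead of 192.168.59.103
    fastcgi_pass fpm:9000;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
    fastcgi_param HTTPS off;
}
```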
[diagram: Nginx] [diagram: Unicorn]
Refer to unicorn on GitHub for more information. | I would like to know the difference between Nginx and Unicorn. As far as I understand, Nginx is a web server while Unicorn is a Ruby HTTP server.
Since both Nginx and Unicorn can handle HTTP requests, what is the need to use the combination of Nginx and Unicorn for RoR applications? | Why does Unicorn need to be deployed together with Nginx?
Your "listen" directives are wrong. See this page:http://nginx.org/en/docs/http/server_names.html.They should beserver {
listen 80;
server_name www.domain1.example;
root /var/www/domain1;
}
server {
listen 80;
server_name www.domain2.example;
root /var/www/domain2;
}
Note, I have only included the relevant lines. Everything else looked okay, but I just deleted it for clarity. To test it, you might want to try serving a text file from each server first before actually serving PHP. That's why I left the 'root' directive in there. | I would like to host 2 different domains on the same server using Nginx.
I redirected both domains to this host via the @ property. Although I configured 2 different server blocks, whenever I try to access the second domain, it redirects to the first one.
Here is my config.
server {
listen www.domain1.example:80;
access_log /var/log/nginx/host.domain1.access.log main;
root /var/www/domain1;
server_name www.domain1.example;
location ~ \.php$ {
# Security: must set cgi.fixpathinfo to 0 in php.ini!
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
include /etc/nginx/fastcgi_params;
}
}
server {
listen www.domain2.example:80;
access_log /var/log/nginx/host.domain2.access.log main;
root /var/www/domain2;
server_name www.domain2.example;
location ~ \.php$ {
# Security: must set cgi.fixpathinfo to 0 in php.ini!
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
include /etc/nginx/fastcgi_params;
}
}
How can I fix this? | Nginx Different Domains on Same IP
In the HTTP world, the "upstream server" term was introduced in the HTTP/1.0 specification, RFC 1945:
502 Bad Gateway
The server, while acting as a gateway or proxy, received an invalid
response fromthe upstream serverit accessed in attempting to
fulfill the request.
A formal definition was added later, in RFC 2616:
upstream/downstream
Upstream and downstream describe the flow of a message: all
messages flow from upstream to downstream.
According to this definition:
if you are looking at a request, then the client is upstream, and the server is downstream;
in contrast, if you are looking at a response, then the client is downstream, and the server is upstream.
At the same time, in HTTP most of the data flow is not for requests, but for responses. So, if you consider the flow of responses, then the "upstream server" term sounds pretty reasonable and logical. And the term is again used in the 502 response code description (it is identical to the HTTP/1.0 one), as well as some other places.
The same logic can also be seen in the terms "downloading" and "uploading" in natural language. Most of the data flow is from servers to clients, and that's why "downloading" means loading something from a server to a client, and "uploading" - from a client to a server. | I've always thought of upstream and downstream along the lines of an actual stream, where the flow of information is like water. So upstream is where water/data comes from (e.g. an HTTP request) and downstream is where it goes (e.g. the underlying system that services the request).
I've been looking at API gateways recently and noticed that some of them used the inverse of this definition. I shrugged it off as an oddity at the time. I then discovered that nginx, which some API gateways are based on, also uses the terminology in the opposite way to what I expected. nginx calls the servers that it sends requests to "upstream servers", and presumably the incoming requests would therefore be "downstream clients".
Conceptually it seems like nginx would be pushing the requests "uphill" if going to an "upstream server", which is totally counter-intuitive...
Gravity is reversed in the land of reverse proxies and API gateways, apparently!
I've seen other discussions talking about upstream / downstream representing dependencies between systems, but for middleware or infrastructure components that sit between systems the idea of dependencies is a little looser, and I find it more helpful to still think in terms of flow of information - because THAT'S usually the source of your dependencies anyway.
Have I got my understanding of the stream analogy fundamentally wrong, or are these software components getting the concepts backwards? | Upstream / downstream terminology used backwards? (E.g. nginx)
Okay, I think I get this now.
Why can't nginx directly call my Flask application? Because nginx doesn't support the WSGI spec. Technically nginx could implement the WSGI spec if they wanted, they just haven't.
That being the case, we need a web server that does implement the spec, which is what the uWSGI server is for.
Note that uWSGI is a full-fledged http server that can and does work well on its own. I've used it in this capacity several times and it works great. If you need super high throughput for static content, then you have the option of sticking nginx in front of your uWSGI server. When you do, they will communicate over a low-level protocol known as uwsgi.
"What the what?! Another thing called uwsgi?!" you ask. Yeah, it's confusing. When you reference uWSGI you are talking about an http server. When you talk about uwsgi (all lowercase) you are talking about a binary protocol that the uWSGI server uses to talk to other servers like nginx. They picked a bad name on this one.
For anyone who is interested, I wrote a blog article about it with more specifics, a bit of history, and some examples. | I'm looking at the WSGI specification and I'm trying to figure out how servers like uWSGI fit into the picture. I understand the point of the WSGI spec is to separate web servers like nginx from web applications like something you'd write using Flask. What I don't understand is what uWSGI is for. Why can't nginx directly call my Flask application? Can't Flask speak WSGI directly to it? Why does uWSGI need to get in between them?
Actually, as far as I know, nginx would show an empty message and it wouldn't actually restart if the configuration is bad.The only way to screw it up is by doing an nginx stop and then start again. It would succeed to stop, but fail to start. | When I restart the nginx service on a command line on an Ubuntu server, the service crashes when a nginx configuration file has errors. On a multi-site server this puts down all the sites, even the ones without configuration errors.To prevent this, I run the nginx configuration test first:nginx -tAfter the test ran successful, I could restart the service:/etc/init.d/nginx restartOronly reload the nignx site configs without a restart:nginx -s reloadIs there a way to combine those two commands where the restart command is conditional to the configuration test's result?I couldn't find this online andthe official documentationon this is rather basic. I don't know my way around Linux that well, so I don't know if what I'm looking for is right in front of me or not possible at all.I'm using nginx v1.1.19. | How do I restart nginx only after the configuration test was successful on Ubuntu? |
There is a module called HttpHeadersMoreModule that gives you more control over headers. It does not come with Nginx and requires additional installation. With it, you can do something like this:
location ... {
more_set_headers "Server: my_server";
}
That will "set the Server output header to the custom value for any status code and any content type". It will replace headers that are already set or add them if unset. | I want to add a custom header to the response received from the server behind nginx.
While add_header works for nginx-processed responses, it does nothing when proxy_pass is used. | How to add a response header on nginx when using proxy_pass?
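For completeness: with stock nginx (1.7.5 and later), add_header also accepts an always flag so the header is emitted for every response status — though unlike more_set_headers it can only add headers, not replace ones the upstream already set. A sketch (the upstream name is hypothetical):

```nginx
location / {
    proxy_pass http://upstream_backend;           # hypothetical upstream
    add_header X-Served-By "front-proxy" always;  # sent for all status codes
}
```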
Looking at the requirement you have, the below command shall help:
service nginx status | After running an ASP.NET vNext project on my local machine, I was trying to figure out how I can run it on nginx, as it looks to be a recommended choice.
Following jsinh's blog, I installed it using:
sudo apt-get update
sudo apt-get install nginx -y
I was trying to understand whether it is working or not by using:
ifconfig eth0 | grep inet | awk '{ print $2}'
After running
sudo service nginx start
sudo service nginx stop
however, the output is always the same. How can I verify whether nginx is running or not? | How to verify if nginx is running or not?
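A script-friendly sketch of the same check (pgrep is one option; on systemd hosts, systemctl is-active nginx serves the same purpose):

```shell
# Look for a process whose name is exactly "nginx"
if pgrep -x nginx > /dev/null; then
    echo "nginx is running"
else
    echo "nginx is not running"
fi
```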
Be sure to give it the execute permission:
cd ~/the/script/folder
chmod +x ./startup.sh
This will give exec permission to user, group and other, so beware of possible security issues. To restrict the permission to a single access class, you can use:
chmod u+x ./startup.sh
This will grant exec permission only to the user. For reference | I am running the command ./startup.sh nginx:start and I am getting this error message:
zsh: permission denied: ./startup.sh
Why could this be happening? | Terminal error: zsh: permission denied: ./startup.sh
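The effect is easy to reproduce end to end (startup.sh here is a stand-in one-liner, not the asker's real script):

```shell
printf '#!/bin/sh\necho ok\n' > startup.sh   # stand-in script
chmod u+x startup.sh                         # grant exec to the owner only
./startup.sh                                 # now runs instead of "permission denied"
```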
Put this in your server directive:
location /issue {
rewrite ^/issue(.*) http://$server_name/shop/issues/custom_issue_name$1 permanent;
}
Or duplicate it:
location /issue1 {
rewrite ^/.* http://$server_name/shop/issues/custom_issue_name1 permanent;
}
location /issue2 {
rewrite ^.* http://$server_name/shop/issues/custom_issue_name2 permanent;
}
... | I'm in the process of reorganizing URL structure.
I need to set up redirect rules for specific URLs - I'm using Nginx. Basically something like this:
http://example.com/issue1 --> http://example.com/shop/issues/custom_issue_name1
http://example.com/issue2 --> http://example.com/shop/issues/custom_issue_name2
http://example.com/issue3 --> http://example.com/shop/issues/custom_issue_name3
Thanks! | How to redirect single URL in Nginx?
AWS Application Load Balancers now support native HTTP to HTTPS redirect. To enable this in the console, do the following:
1. Go to your Load Balancer in EC2 and open the "Listeners" tab
2. Select "View/edit rules" on your HTTP listener
3. Delete all rules except for the default one (bottom)
4. Edit the default rule: choose "Redirect to" as the action, leave everything as default and enter "443" as the port.
The same can be achieved by using the CLI as described here. It is also possible to do this in CloudFormation, where you need to set up a Listener object like this:
HttpListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect
        RedirectConfig:
          Protocol: HTTPS
          StatusCode: HTTP_301
          Port: 443
If you still use Classic Load Balancers, go with one of the NGINX configs described by the others. | I want to redirect every HTTP request to an HTTPS request on ELB. I have two EC2 instances. I am using nginx for the server. I have tried rewriting the nginx conf files without any success. I would love some advice on it. | Redirecting EC2 Elastic Load Balancer from HTTP to HTTPS
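For the Classic Load Balancer case the answer defers to "NGINX configs described by the others"; a common sketch of that approach keys off the X-Forwarded-Proto header the ELB injects (server_name is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;

    # The ELB terminates TLS and reports the original scheme here
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
}
```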
You should try ngx_http_autoindex_module. Set the autoindex option to on; it is off by default. Your example configuration should be OK:
location / {
root /home/yozloy/html/;
index index.html;
autoindex on;
}
Without the autoindex option you should be getting Error 403 for requests that end with / on directories that do not have an index.html file. With this option you should be getting a simple listing:
Index of /
Index of /test/../
test.txt 19-May-2012 10:43 0
Edit: Updated the listing to delete any references to test | Once I've seen this before: when I type a URL http://test.com/test/, instead of giving me an html page, it gives me a 'file browser' like interface to browse all the files in the given location.
I think it may be an nginx module that could be enabled in the location context.
The nginx.conf file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name 122.97.248.252;
location /test {
root /home/yozloy/html/;
autoindex on;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Update: the error.log shows
2012/05/19 20:48:33 [error] 20357#0: *72 open() "/home/yozloy/html/test" failed (2: No such file or directory), client: 125.43.236.33, server: 122.97.248.252, request: "GET /test HTTP/1.1", host: "unicom2.markson.hk"
I must misunderstand what location /test means. I thought it meant that when I type http://example.com/test, it would access the root directory, which is /home/yozloy/html/ | How to configure nginx to enable kinda 'file browser' mode?
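Two optional companions to autoindex, in case the raw byte sizes and GMT timestamps in the listing are unwanted (both are stock ngx_http_autoindex_module directives):

```nginx
location /test {
    root /home/yozloy/html/;
    autoindex on;
    autoindex_exact_size off;  # human-readable sizes (KB/MB) instead of exact bytes
    autoindex_localtime on;    # file times in the server's local time zone
}
```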
nginx "fails fast" when the client informs it that it's going to send a body larger than theclient_max_body_sizeby sending a 413 response and closing the connection.Most clients don't read responses until the entire request body is sent. Because nginx closes the connection, the client sends data to the closed socket, causing a TCP RST.If your HTTP client supports it, the best way to handle this is to send anExpect: 100-Continueheader. Nginx supports this correctly as of 1.2.7, and will reply with a413 Request Entity Too Largeresponse rather than100 ContinueifContent-Lengthexceeds the maximum body size. | I'm running nginx/ruby-on-rails and I have a simple multipart form to upload files.
Everything works fine until I decide to restrict the maximum size of files I want uploaded.
To do that, I set the nginx client_max_body_size to 1m (1MB) and expect a HTTP 413 (Request Entity Too Large) status in response when that rule breaks. The problem is that when I upload a 1.2 MB file, instead of displaying the HTTP 413 error page, the browser hangs a bit and then dies with a "Connection was reset while the page was loading" message. I've tried just about every option there is that nginx offers, nothing seems to work. Does anyone have any ideas about this? Here's my nginx.conf: worker_processes 1;
timer_resolution 1000ms;
events {
worker_connections 1024;
}
http {
passenger_root /the_passenger_root;
passenger_ruby /the_ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name www.x.com;
client_max_body_size 1M;
passenger_use_global_queue on;
root /the_root;
passenger_enabled on;
error_page 404 /404.html;
error_page 413 /413.html;
}
}Thanks. Edit: Environment/UA: Windows XP/Firefox 3.6.13 | nginx upload client_max_body_size issue
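A minimal sketch combining the pieces from the answer and the config above; the error-page directory is hypothetical. Note that, per the answer, a client that sends the whole body before reading the response may still see a connection reset unless it sends Expect: 100-Continue:

```nginx
server {
    listen 80;
    client_max_body_size 1m;
    # serve a custom page instead of the bare 413 response
    error_page 413 /413.html;
    location = /413.html {
        root /var/www/errors;   # hypothetical directory containing 413.html
        internal;
    }
}
```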
We should first read the documentation on proxy_pass carefully and fully. The URI passed to the upstream server is determined based on whether the "proxy_pass" directive is used with a URI or not. A trailing slash in the proxy_pass directive means that a URI is present and equal to /. Absence of a trailing slash means that the URI is absent. Proxy_pass with URI: location /some_dir/ {
proxy_pass http://some_server/;
}With the above, there's the following proxy: http://your_server/some_dir/some_subdir/some_file ->
http://some_server/some_subdir/some_file. Basically, /some_dir/ gets replaced by / to change the request path from /some_dir/some_subdir/some_file to /some_subdir/some_file. Proxy_pass without URI: location /some_dir/ {
proxy_pass http://some_server;
}With the second (no trailing slash) the proxy goes like this: http://your_server/some_dir/some_subdir/some_file ->
http://some_server/some_dir/some_subdir/some_file. Basically, the full original request path gets passed on without changes. So, in your case, it seems you should just drop the trailing slash to get what you want. Caveat: Note that the automatic rewrite only works if you don't use variables in proxy_pass. If you use variables, you should do the rewrite yourself: location /some_dir/ {
rewrite /some_dir/(.*) /$1 break;
proxy_pass $upstream_server;
}There are other cases where the rewrite wouldn't work; that's why reading the documentation is a must. Edit: Reading your question again, it seems I may have missed that you just want to edit the html output. For that, you can use the sub_filter directive. Something like ... location /admin/ {
proxy_pass http://localhost:8080/;
sub_filter "http://your_server/" "http://your_server/admin/";
sub_filter_once off;
}Basically, you give it the string you want to replace and the replacement string. | I'm used to using Apache with mod_proxy_html, and am trying to achieve something similar with NGINX. The specific use case is that I have an admin UI running in Tomcat on port 8080 on a server at the root context: http://localhost:8080/ I need to surface this on port 80, but I have other contexts on the NGINX server running on this host, so want to try and access this at: http://localhost:80/admin/ I was hoping that the following super simple server block would do it, but it doesn't quite: server {
listen 80;
server_name screenly.local.akana.com;
location /admin/ {
proxy_pass http://localhost:8080/;
}
}The problem is that the returned content (html) contains URLs to scripts and style info that is all accessed at the root context, so I need to get these URLs rewritten to start with /admin/ instead of /. How do I do this in NGINX? | How do I rewrite URLs in a proxy response in NGINX
Your Nginx config is correct, you are just missing a few lines. Here is a "magic trio" making EventSource work through Nginx: proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
Place them into the location section and it should work. You may also need to add: proxy_buffering off;
proxy_cache off;
That's not an official way of doing it; I ended up with this by "trial and error" + "googling" :) | On the server side, using Sinatra with a stream block: get '/stream', :provides => 'text/event-stream' do
stream :keep_open do |out|
connections << out
out.callback { connections.delete(out) }
end
end
On the client side: var es = new EventSource('/stream');
es.onmessage = function(e) { $('#chat').append(e.data + "\n") };
When I use the app directly, via http://localhost:9292/, everything works perfectly. The connection is persistent and all messages are passed to all clients. However, when it goes through Nginx, http://chat.dev, the connections are dropped and a reconnection fires every second or so. Nginx setup looks ok to me: upstream chat_dev_upstream {
server 127.0.0.1:9292;
}
server {
listen 80;
server_name chat.dev;
location / {
proxy_pass http://chat_dev_upstream;
proxy_buffering off;
proxy_cache off;
proxy_set_header Host $host;
}
}Tried keepalive 1024 in the upstream section as well as proxy_set_header Connection keep-alive; in location. Nothing helps :( No persistent connections and messages not passed to any clients. | EventSource / Server-Sent Events through Nginx
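Putting the answer's directives together with the question's upstream, the location block would look roughly like this (a sketch, not a drop-in config):

```nginx
location /stream {
    proxy_pass http://chat_dev_upstream;
    # the "magic trio" from the answer
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    chunked_transfer_encoding off;
    # plus buffering/caching disabled, as in the question's config
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Host $host;
}
```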
Config file:worker_processes 4; # 2 * Number of CPUs
events {
worker_connections 19000; # It's the key to high performance - have a lot of connections available
}
worker_rlimit_nofile 20000; # Each connection needs a filehandle (or 2 if you are proxying)
# Total amount of users you can serve = worker_processes * worker_connections
more info: Optimizing nginx for high traffic loads | We have a server that is serving one html file. Right now the server has 2 CPUs and 2GB of ram. From blitz.io, we are getting about 12k connections per minute and anywhere from 200 timeouts in that 60 seconds with 250 concurrent connections each second. worker_processes 2;
events {
worker_connections 1024;
}If I increase the timeout, the response time starts creeping up beyond a second. What else can I do to squeeze more juice out of this? | Tuning nginx worker_process to obtain 100k hits per min
sudo nginx -t should test all files and return the locations of errors and warnings | I'm an nginx noob trying out this tutorial on nginx 1.1.19 on ubuntu 12.04. I have this nginx config file. When I run this command the test fails: $ sudo service nginx restart
Restarting nginx: nginx: [crit] pread() "/etc/nginx/sites-enabled/csv" failed (21: Is a directory)
nginx: configuration file /etc/nginx/nginx.conf test failed
How do I know why the nginx.conf test failed? | "configuration file /etc/nginx/nginx.conf test failed": How do I know why this happened?
It's not only possible, it's easy:in nginx the response header values are available through a variable (one per header).
See http://wiki.nginx.org/HttpCoreModule#.24sent_http_HEADER for the details on those variables. In your example the variable would be $sent_http_My_custom_header. | I am using nginx as a reverse proxy and trying to read a custom header from the response of an upstream server (Apache) without success. The Apache response is the following: HTTP/1.0 200 OK
Date: Fri, 14 Sep 2012 20:18:29 GMT
Server: Apache/2.2.17 (Ubuntu)
X-Powered-By: PHP/5.3.5-1ubuntu7.10
Connection: close
Content-Type: application/json; charset=UTF-8
My-custom-header: 1
I want to read the value from My-custom-header and use it in an if clause: location / {
// ...
// get My-custom-header value here
// ...
}Is this possible? | nginx - read custom header from upstream server |
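A hedged illustration of using such a variable in config: the related $upstream_http_* variables hold the response headers received from the proxied server, and are only populated once the upstream has responded, so they can feed add_header but not a request-time if condition. The X-Debug-Custom name is made up for the example:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # the Apache upstream from the question
    # echo the upstream's header back to the client under a debug name;
    # the value is evaluated when the response headers are sent
    add_header X-Debug-Custom $upstream_http_my_custom_header;
}
```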
I think theproxy_set_headerdirective could help:location / {
proxy_pass http://my_app_upstream;
proxy_set_header Host $host;
# ...
} | I was trying to use the Thin app server and had one issue. When nginx proxies the request to Thin (or Unicorn) using proxy_pass http://my_app_upstream; the application receives the modified URL sent by nginx (http://my_app_upstream). What I want is to pass the original URL and the original request from the client with no modification, as the app relies heavily on it. The nginx doc says: If it is necessary to transmit URI in
the unprocessed form then directive
proxy_pass should be used without URI
part.
But I don't understand how exactly to configure that, as the related sample is actually using a URI: location /some/path/ {
proxy_pass http://127.0.0.1;
}So could you please help me figure out how to preserve the original request URL from the client? | How to preserve request url with nginx proxy_pass
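A sketch combining both points, assuming a hypothetical backend port: proxy_pass without a URI part forwards the original request path unchanged, and proxy_set_header keeps the original Host header for the app:

```nginx
location /some/path/ {
    # no URI part after the host: the original request path
    # is passed through to the backend untouched
    proxy_pass http://127.0.0.1:3000;   # hypothetical Thin/Unicorn port
    # preserve the original Host and client address for the app
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```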
You should give nginx permissions to read the file. That means you should give the user that runs the nginx process permissions to read the file. This user is configurable with the user directive in the nginx config, usually located somewhere at the top of nginx.conf: user www-data (http://wiki.nginx.org/CoreModule#user). The second argument you give to user is the group, but if you don't specify it, it uses the same one as the user, so in my example the user and the group are both www-data. Now the files you want to serve with nginx should have the correct permissions: nginx should have permissions to read the files. You can give the group www-data ownership of a file like this: chown :www-data my-file.html (http://linux.die.net/man/1/chown). With chown you can change the user and group owner of a file. In this command I only change the group; if you would change the user too, you would specify the username before the colon, like chown www-data:www-data my-file.html. But setting the group permissions correctly should be enough for nginx to be able to read the file. | Just want to help somebody out. Yes, you just want to serve a static file using nginx, and you got everything right in nginx.conf: location /static {
autoindex on;
#root /root/downloads/boxes/;
alias /root/downloads/boxes/;
}But, in the end, you failed. You got "403 forbidden" from the browser...
----------------------------------------The Answer Below:----------------------------------------
The Solution is very Simple:
Way 1: Run nginx as the same user as the owner of '/root/downloads/boxes/'. In nginx.conf: #user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
YES: in the first line "#user nobody;", just delete "#" and change "nobody" to your own username in Linux/OS X, i.e. change it to "root" for a test. Then restart nginx. Attention: you'd better not run nginx as root! That is just for testing here; it is a dangerous opening for an attacker. For more reference, see nginx (engine X) – What a Pain in the BUM! [13: Permission denied]
Way 2: Change the owner of '/root/downloads/boxes/' to 'www-data' or 'nobody'. In Terminal: ps aux | grep nginx
Get the username of the running nginx. It should be 'www-data' or 'nobody', determined by the version of nginx. Then hit in Terminal (using 'www-data' for example): chown -R www-data:www-data /root/downloads/boxes/
------------------------------One More Important Thing Is:------------------------------
These parent directories "/", "/root", "/root/downloads" should give the execute (x) permission to 'www-data' or 'nobody', i.e. ls -al /root
chmod o+x /root
chmod o+x /root/downloads
For more reference, see Resolving "403 Forbidden" error and Nginx 403 forbidden for all files | Nginx serve static file and got 403 forbidden
The problem is that when a static file gets cached, it can be stored for very long periods of time before it ends up expiring. This can be an annoyance in the event that you make an update to a site; however, since the cached version of the file is stored in your visitors' browsers, they may be unable to see the changes made. Cache-busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn't retrieve the old file from cache but rather makes a request to the origin server for the new file. Angular CLI resolves this by providing an --output-hashing flag for the build command. Check the official doc: https://angular.io/cli/build Example (older versions): ng build --prod --aot --output-hashing=all Below are the options you can pass in --output-hashing: none: no hashing performed; media: only add hashes to files processed via [url|file]-loaders; bundles: only add hashes to the output bundles; all: add hashes to both media and bundles. Updates: For some versions of Angular (for example Angular 8, 9, 10) the command is: ng build --prod --aot --outputHashing=all For later versions of Angular (from Angular 11 to Angular 14) the command is reverted back to the older format: ng build --aot --output-hashing=all | We have an Angular 6 application. It's served on Nginx. And SSL is on. When we deploy new code, most new features work fine, but not for some changes. For example, if the front-end developers update the service connection and deploy it, users have to open an incognito window or clear the cache to see the new feature. What type of changes are not updated automatically? Why are they different from others? What's the common solution to avoid the issue? | Angular app has to clear cache after new deployment
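On the nginx side, a complementary sketch (assuming output hashing is enabled, so bundle filenames change on every build):

```nginx
location = /index.html {
    # always revalidate the entry point so new hashed bundles are picked up
    add_header Cache-Control "no-cache";
}
location ~* \.(js|css)$ {
    # hashed bundle names change on every build, so cache them aggressively
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```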
This is explained in detail in the express behind the proxies guide. By enabling the "trust proxy" setting via app.enable('trust proxy'), Express will have knowledge that it's sitting behind a proxy and that the X-Forwarded-* header fields may be trusted, which otherwise may be easily spoofed. Enabling this setting has several subtle effects. The first of which is that X-Forwarded-Proto may be set by the reverse proxy to tell the app that it is https or simply http. This value is reflected by req.protocol. The second change this makes is the req.ip and req.ips values will be populated with X-Forwarded-For's list of addresses. | I am writing an express app that sits behind an nginx server. I was reading through express's documentation and it mentioned the 'trust proxy' setting. All it says is: trust proxy - Enables reverse proxy support, disabled by default. I read the little article here that explains Secure Sessions in Node with nginx: http://blog.nikmartin.com/2013/07/secure-sessions-in-nodejs-with-nginx.html So I am curious. Does setting 'trust proxy' to true only matter when using HTTPS? Currently my app is just HTTP between the client and nginx. If I set it to true now, are there any side-effects/repercussions I need to be aware of? Is there any point to setting it true now? | What does "trust proxy" actually do in express.js, and do I need to use it?
As of Nginx 1.9.2 you can dump the Nginx config with the -T flag: -T — same as -t, but additionally dump configuration files to standard output (1.9.2). Source: http://nginx.org/en/docs/switches.html This is not the same as dumping for a specific process. If your Nginx is using a different config file, check the output of ps aux and use whatever it gives as the binary, e.g. if it gives something like nginx: master process /usr/sbin/nginx -c /some/other/config you need to run /usr/sbin/nginx -c /some/other/config -T If you are not on 1.9.2 yet, you can dump the config with gdb: https://serverfault.com/questions/361421/dump-nginx-config-from-running-process | Is it possible to get which conf the nginx is using only from a running nginx process? To get the conf file path: sometimes ps aux reveals it, sometimes it doesn't. It might be just something like nginx: master process /usr/sbin/nginx (same as /proc/PID/cmdline). So is nginx -V the only solution? From this question, is it possible to dump the conf data structure from the nginx process directly? Or at least dump the conf file path? | dump conf from running nginx process
They are not forbidden; it's CGI legacy. See "Missing (disappearing) HTTP Headers". If you do not explicitly set underscores_in_headers on;, nginx will silently drop HTTP headers with underscores (which are perfectly valid according to the HTTP standard). This is done in order to prevent ambiguities when mapping headers to CGI variables, as both dashes and underscores are mapped to underscores during that process. | I had a problem with a custom HTTP SESSION_ID header not being transferred by the nginx proxy. I was told that underscores are prohibited according to the HTTP RFC. Searching, I found that most servers like Apache or nginx define them as illegal in RFC 2616 section 4.2, which says: follow the same generic format as that given in Section 3.1 of RFC 822 [9]. RFC 822 says: The field-name must be composed of printable ASCII characters
(i.e., characters that have values between 33. and 126.,
decimal, except colon)
Underscore is decimal character 95 in the ASCII table, in the 33-126 range. What am I missing? | Why do HTTP servers forbid underscores in HTTP header names
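A minimal sketch of the fix from the answer, with a hypothetical upstream name:

```nginx
server {
    listen 80;
    # accept client headers containing underscores, e.g. SESSION_ID
    underscores_in_headers on;
    location / {
        proxy_pass http://backend;   # hypothetical upstream
        # alternatively, forward the value under a dash-only name;
        # $http_session_id maps to the incoming SESSION_ID header
        proxy_set_header X-Session-Id $http_session_id;
    }
}
```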
It depends. Out of the box, putting nginx in front as a reverse proxy is going to give you:
Access logs
Error logs
Easy SSL termination
SPDY support
gzip support
Easy ways to set HTTP headers for certain routes in a couple of lines
Very fast static asset serving (if you're serving off S3/etc. though, this isn't that relevant)
The Go HTTP server is very good, but you will need to reinvent the wheel to do some of these things (which is fine: it's not meant to be everything to everyone). I've always found it easier to put nginx in front—which is what it is good at—and let it do the "web server" stuff. My Go application does the application stuff, and only the bare minimum of headers/etc. that it needs to. Don't look at putting nginx in front as a "bad" thing. | I am writing some webservices returning JSON data, which have lots of users. What are the benefits of using Nginx in front of my server compared to just using the go http server? | What are the benefits of using Nginx in front of a webserver for Go?
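A minimal sketch of such a front proxy, with an assumed Go server on port 8080:

```nginx
server {
    listen 80;
    access_log /var/log/nginx/api.access.log;   # access logs for free
    gzip on;
    gzip_types application/json;                # compress the JSON responses
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;       # hypothetical Go server port
    }
}
```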
First off, you should be using the Docker embedded DNS server at 127.0.0.11. Your problem could be caused by one of the following: nginx is trying to use IPv6 (AAAA record) for the DNS queries. See https://stackoverflow.com/a/35516395/1529493 for the solution. Basically something like: http {
resolver 127.0.0.11 ipv6=off;
}This is probably no longer a problem with Docker 1.11:Fix to not forward docker domain IPv6 queries to external servers
(#21396)Take care that you don't accidentally override theresolverconfiguration directive. In my case I had in theserverblockresolver 8.8.8.8 8.8.4.4;fromMozilla's SSL Configuration Generator, which was overriding theresolver 127.0.0.11;in thehttpblock. That had me scratching my head for a long time... | I am trying to get rid of deprecated Docker links in my configuration. What's left is getting rid of thoseBad Gatewaynginx reverse proxy errors when I recreated a container.Note: I am using Docker networks in bridge mode. (docker network create nettest)I am using the following configuration snippet inside nginx:location / {
resolver 127.0.0.1 valid=30s;
set $backend "http://confluence:8090";
proxy_pass $backend;I started a container with hostnameconfluenceon my Docker network with namenettest.Then I started the nginx container on networknettest.I can pingconfluencefrom inside the nginx containerconfluenceis listed inside the nginx container's/etc/hostsfilenginx log sayssend() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53I tried the docker network default dns resolver127.0.0.11from/etc/resol.confnginx log saysconfluence could not be resolved (3: Host not found)Anybody knows how to configure nginx resolver with Docker Networks or an alternative on how to force Nginx to correctly resolve the Docker network hostname? | Docker Network Nginx Resolver |
It should work, howeverhttp://nginx.org/en/docs/http/ngx_http_core_module.html#aliassays:When location matches the last part of the directive’s value:
it is better to use the root directive instead:which would yield:server {
listen 8080;
server_name www.mysite.com mysite.com;
error_log /home/www-data/logs/nginx_www.error.log;
error_page 404 /404.html;
location /public/doc/ {
autoindex on;
root /home/www-data/mysite;
}
location = /404.html {
root /home/www-data/mysite/static/html;
}
} | I have several sets of static.htmlfiles on my server, and I would like use nginx to serve them directly. For example, nginx should serve an URI of the following pattern:www.mysite.com/public/doc/foo/bar.htmlwith the.htmlfile that is located at/home/www-data/mysite/public/doc/foo/bar.html. You can think offooas the set name, andbaras the file name here.I wonder whether the following piece of nginx config would do the job:server {
listen 8080;
server_name www.mysite.com mysite.com;
error_log /home/www-data/logs/nginx_www.error.log;
error_page 404 /404.html;
location /public/doc/ {
autoindex on;
alias /home/www-data/mysite/public/doc/;
}
location = /404.html {
alias /home/www-data/mysite/static/html/404.html;
}
}In other words, all requests of the pattern/public/doc/.../....htmlare going to be handled by nginx, and if any given URI is not found, a defaultwww.mysite.com/404.htmlis returned. | Use nginx to serve static files from subdirectories of a given directory |
Probably other process is using specified port:sudo netstat -tulpnGet the PID of the process that already using 443. And send signal with kill command.sudo kill -2
sudo service nginx restartAternatively you can do:sudo fuser -k 443/tcpMake sure you dont use old syntax:server {
listen :80;
listen [::]:80;
}The above syntax will causenginx: [emerg] bind() to [::]:80 failed (98: Address already in use)Correct syntax:server {
listen 80;
listen [::]:80 ipv6only=on;
}orserver {
listen [::]:80;
}Both above syntax will achieve the same thing, listening on both ipv4 and ipv6. | I have a problem with nginx. I tried different solutions, but for me nothing work.
That is my error:4 root@BANANAS ~ # sudo service nginx restart :(
Restarting nginx: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] still could not bind()
nginx.Can you help me? | Nginx will not start (Address already in use) |
So I'm a newbie to nginx, and had this same question. Turns out the syntax of the language as mentioned above is both custom and actually quite simple. The syntax iscaptured in a sectionin the NGINX docs, and repeated here for convenience:nginx consists of modules which are controlled by directives
specified in the configuration file. Directives are divided into
simple directives and block directives. A simple directive consists of
the name and parameters separated by spaces and ends with a semicolon
(;). A block directive has the same structure as a simple directive,
but instead of the semicolon it ends with a set of additional
instructions surrounded by braces ({ and }). If a block directive can
have other directives inside braces, it is called a context (examples:
events, http, server, and location).Directives placed in the configuration file outside of any contexts
are considered to be in the main context. The events and http
directives reside in the main context, server in http, and location in
server.The rest of a line after the # sign is considered a comment.In summary:Everything in an NGINX config file is adirectivewhich may reference avariable. All directives arelisted alphabetically here, and all variables arelisted alphabetically here. NGINX configuration is driven bymodulesthat each implement a certain piece of functionality, and each modulecontributesdirectives and variables that become available for use within the config. That's it.That is why evenif-- which looks like a keyword like in a traditional programming language -- is actually just adirectivecontributed by thengx_http_rewrite_modulemodule.Hope this helps!PS - Also check outhttps://devdocs.io/, and specificallyhttps://devdocs.io/nginx, for amuchimproved way to search/use the NGINX documentation. | I want to write some more complex conditions in my Nginx configuration files but I'm not sure of the syntax and can't find docs describing what you can do beyond the basics in the examples and I can't seem to find this on the Nginx forums or on the mailing list.For example, is it possible for me to have anunlesscondition? | What language are nginx conf files? |
I'll have to disagree with the answers here. While Node will do fine, nginx will most definitely be faster when configured correctly. nginx is implemented efficiently in C following a similar pattern (returning to a connection only when needed) with a tiny memory footprint. Moreover, it supports thesendfilesyscall to serve those files which is as fast as you can possibly get at serving files, since it's the OS kernel itself that's doing the job.By now nginx has become the de facto standard as the frontend server. You can use it for its performance in serving static files, gzip, SSL, and even load-balancing later on.P.S.: This assumes that files are really "static" as in at rest on disk at the time of the request. | Is there any benchmark or comparison which is faster: place nginx in front of node and let it serve static files directly or use just node and serve static files using it?nginx solution seems to be more manageable for me, any thoughts? | node.js itself or nginx frontend for serving static files? |
The GD Graphics Library is for dynamically manipulating images.
For Ubuntu you should install it manually:
PHP 8.0: sudo apt-get install php8.0-gd
PHP 8.1: sudo apt-get install php8.1-gd
PHP 8.2: sudo apt-get install php8.2-gd
PHP 8.3: sudo apt-get install php8.3-gd
That's all. You can verify that GD support is loaded: php -i | grep -i gd
Output should be like this: GD Support => enabled
GD headers Version => 2.1.1-dev
gd.jpeg_ignore_warning => 0 => 0
and finally restart your apache: sudo service apache2 restart | I am using the Laravel web framework on my ubuntu 14.04 server and Nginx web server; I have this error when I try to upload a file using Laravel to the server.
my upload directory is the public/uploads folder, which has 777 permission. | GD Library extension not available with this PHP installation Ubuntu Nginx
Use try_files and a named location block ('@apachesite'). This will remove the unnecessary regex match and if blocks. More efficient. location / {
root /path/to/root/of/static/files;
try_files $uri $uri/ @apachesite;
expires max;
access_log off;
}
location @apachesite {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}Update: The assumption of this config is that there doesn't exist any php script under /path/to/root/of/static/files. This is common in most modern php frameworks. In case your legacy php projects have both php scripts and static files mixed in the same folder, you may have to whitelist all of the file types you want nginx to serve. | location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
if (-f $request_filename) {
access_log off;
expires 30d;
break;
}
if (!-f $request_filename) {
proxy_pass http://127.0.0.1:8080; # backend server listening
break;
}
}The above will serve all existing files directly using Nginx (e.g. Nginx just displays PHP source code), otherwise forward a request to Apache. I need to exclude *.php files from the rule so that requests for *.php are also passed to Apache and processed. I want Nginx to handle all static files and Apache to process all dynamic stuff. EDIT: There is a white-list approach, but it is not very elegant; see all those extensions, I don't want this. location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
access_log off;
expires 30d;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}EDIT 2: On newer versions of Nginx use try_files instead: http://wiki.nginx.org/HttpCoreModule#try_files | How to serve all existing static files directly with NGINX, but proxy the rest to a backend server.
From: http://nginx.org/r/large_client_header_buffers Syntax: large_client_header_buffers number size; Default: large_client_header_buffers 4 8k; Context: http, server. Sets the maximum number and size of buffers used for reading large client request headers. A request line cannot exceed the size of one buffer, or the 414 (Request-URI Too Large) error is returned to the client. A request header field cannot exceed the size of one buffer as well, or the 400 (Bad Request) error is returned to the client. Buffers are allocated only on demand. By default, the buffer size is equal to 8K bytes. If after the end of request processing a connection is transitioned into the keep-alive state, these buffers are released. So you need to change the size parameter at the end of that line to something bigger for your needs. | I am using Nginx in front of 10 mongrels. When I make a request with size larger than 2900 I get back an: error code 414: uri too large. Does anyone know the setting in the nginx configuration file which determines the allowed uri length? | How to set the allowed url length for a nginx request (error code: 414, uri too large)
When your react.js app loads, the routes are handled on the frontend by the react-router. Say for example you are at http://a.com. Then on the page you navigate to http://a.com/b. This route change is handled in the browser itself. Now when you refresh or open the url http://a.com/b in a new tab, the request goes to your nginx where the particular route does not exist, and hence you get a 404. To avoid this, you need to load the root file (usually index.html) for all non-matching routes, so that nginx sends the file and the route is then handled by your react app in the browser. To do this you have to make the below change in your nginx.conf or sites-enabled as appropriate: location / {
try_files $uri /index.html;
}This tells nginx to look for the specified $uri; if it cannot find one, then it sends index.html back to the browser. (See https://serverfault.com/questions/329592/how-does-try-files-work for more details) | I uploaded a react.js application to a server. I'm using the nginx server. The application is working fine. But when I go to another page & refresh, the site is not working. It's showing a 404 Not found error. How can I solve this? | react.js application showing 404 not found in nginx server
Here's how I solved it: Start the docker container for the application in interactive mode; in my case it is an nginx container: docker run -i -t nginx:latest /bin/bash Run the command below to grant read permission to the others role for the resolv.conf file: chmod o+r /etc/resolv.conf Note: If you are having this issue on your host machine (Ubuntu Linux OS) and not for the Docker containers, then run the same command adding sudo to it in the host machine terminal: sudo chmod o+r /etc/resolv.conf Endeavour to exit your bash interactive terminal once you run this: exit And then open a new bash interactive terminal and run the commands again: apt-get update
apt-get install nano
export TERM=xterm
Everything should work fine now. Reference to this on Digital Ocean: Apt error: Temporary failure resolving 'deb.debian.org'. That's all. | I have a Rails application that I want to deploy using Docker on an Ubuntu server. I have the Dockerfile for the application already set up; right now I want to view the nginx conf in its container. I ran the command below to start an nginx container in an interactive mode: docker run -i -t nginx:latest /bin/bash Right now I am trying to install the nano editor in order to view the nginx configuration (nginx.conf) using the commands below: apt-get update
apt-get install nano
export TERM=xterm
However, when I run the first command apt-get update, I get the error below: Err:1 http://security.debian.org/debian-security buster/updates InRelease
Temporary failure resolving 'security.debian.org'
Err:2 http://deb.debian.org/debian buster InRelease
Temporary failure resolving 'deb.debian.org'
Err:3 http://deb.debian.org/debian buster-updates InRelease
Temporary failure resolving 'deb.debian.org'
Reading package lists... Done
W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/buster/updates/InRelease Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.

I have checked very well; it has nothing to do with network connectivity.

Docker: Temporary failure resolving 'deb.debian.org'
NGINX doesn't manage your backend processes like Apache does, so it can't affect their environments. To set a new $_SERVER PHP variable from NGINX, you need to add a new fastcgi_param entry along with the rest of them, wherever you're including fastcgi_params or fastcgi.conf.

I use SetEnv in Apache to set some variables in virtualhosts that I recover in PHP using $_SERVER[the_variable]. Now I am switching to Perl Catalyst and Nginx, but it seems that the "env" directive in Nginx is not the same. It does not work. How can it be accomplished?

Here is the background picture, just in case someone can suggest a better approach or my previous system does not work with Nginx. I use the same app for many domains. All data comes from different databases with the same structure. The database name is hardcoded to the virtual host, in that environmental variable. As I know the database name, all the queries go to its appropriate database, from the very first query. I can have multiple domains using the same database, just including the same variable into the directives.

Nginx variables similar to SetEnv in Apache?
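A minimal sketch of that fastcgi_param approach (the variable name, value, and socket path are invented for illustration):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # Exposed to PHP as $_SERVER['DB_NAME'], analogous to Apache's SetEnv
    fastcgi_param DB_NAME my_database;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```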
I don't think that solution would work anyway, because you will see some error messages in your error log file. The solution was a lot easier than I thought.

Simply open the following path to your php5-fpm pool configuration:

sudo nano /etc/php5/fpm/pool.d/www.conf

or, if you're the admin 'root':

nano /etc/php5/fpm/pool.d/www.conf

Then find this line and uncomment it:

listen.allowed_clients = 127.0.0.1

This solution will let you use listen = 127.0.0.1:9000 in your vhost blocks, like this:

fastcgi_pass 127.0.0.1:9000;

After you make the modifications, all you need is to restart or reload both Nginx and php5-fpm.

php5-fpm:

sudo service php5-fpm restart
or
sudo service php5-fpm reload

Nginx:

sudo service nginx restart
or
sudo service nginx reload

From the comments: Also comment out ;listen = /var/run/php5-fpm.sock and add listen = 9000.

Trying to deploy my first portal. I am getting a 502 gateway timeout error in the browser when I send a request through the browser. When I checked the logs, I got this error:

2014/02/03 09:00:32 [error] 16607#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 14.159.131.19, server: foo.com, request: "GET HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "22.11.180.154"

Is there any problem related to permissions?

nginx: connect() failed (111: Connection refused) while connecting to upstream
Nginx uses the PCRE library. The compile-time options list has some notes on this.

What regular expression engine does Nginx use? There are a lot of possibilities. More to the point, what flavor of syntax does it support, that is, what syntax features can I make use of?

What regular expression engine does Nginx use?
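Since location regexes are PCRE, PCRE-specific syntax such as named capture groups works in nginx; a hedged sketch (the paths and variable names are invented for illustration):

```nginx
# Named captures are a PCRE feature; $lang and $page become
# nginx variables available inside the block.
location ~ ^/(?<lang>en|de)/(?<page>.+)$ {
    return 302 /pages/$page?lang=$lang;
}
```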
If you want to pass the variable to your proxy backend, you have to set it with the proxy module.

location / {
proxy_pass http://example.com;
proxy_set_header Host example.com;
proxy_set_header HTTP_Country-Code $geoip_country_code;
proxy_pass_request_headers on;
}

And now it's passed to the proxy backend.

I'm using Nginx as a proxy to filter requests to my application. With the help of the "http_geoip_module" I'm creating a country code http-header, and I want to pass it as a request header using "headers-more-nginx-module". This is the location block in the Nginx configuration:

location / {
proxy_pass http://mysite.com;
proxy_set_header Host http://mysite.com;;
proxy_pass_request_headers on;
more_set_headers 'HTTP_Country-Code: $geoip_country_code';
}

But this only sets the header in the response. I tried using "more_set_input_headers" instead of "more_set_headers", but then the header isn't even passed to the response. What am I missing here?

Forward request headers from nginx proxy server
I am assuming that you have http in your /etc/nginx/nginx.conf file, which then tells nginx to include sites-enabled/*;. So then you have:

http
    http
        server

As the http directive should only happen once, just remove the http directive from your sites-enabled config file(s).

I'm new to NGINX and I'm trying to set up a minimal working thing. So I'm trying to run an aiohttp mini-app with nginx and supervisor (by this example). But I can't configure Nginx right and I'm getting the following error:

nginx: [emerg] "http" directive is not allowed here in /etc/nginx/sites-enabled/default:1

Here is the full default.conf file:

http {
upstream aiohttp {
# Unix domain servers
server unix:/tmp/example_1.sock fail_timeout=0;
server unix:/tmp/example_2.sock fail_timeout=0;
server unix:/tmp/example_3.sock fail_timeout=0;
server unix:/tmp/example_4.sock fail_timeout=0;
}
server {
listen 80;
client_max_body_size 4G;
server example.com;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
proxy_buffering off;
proxy_pass http://aiohttp;
}
}
}

It looks correct. The server directive is in http as it should be, and http is the parent directive. What am I doing wrong?

nginx: [emerg] "http" directive is not allowed here in /etc/nginx/sites-enabled/default:1
Change this line:

gzip_types text/plain application/x-javascript text/xml text/css;

To be this:

gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;

Note the addition of application/javascript and text/javascript to your list of gzip types. There are also more details, and a more expansive list of gzip types, in the answer posted here.

All JavaScript files are not compressed by nginx gzip. CSS files are working. In my nginx.conf I have the following lines:

gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/x-javascript text/xml text/css;
gzip_vary on;

NGINX gzip not compressing JavaScript files
For normal production (on a server), use the default daemon on; directive so the Nginx server will start in the background. In this way Nginx and other services are running and talking to each other. One server runs many services.

For Docker containers (or for debugging), the daemon off; directive tells Nginx to stay in the foreground. For containers this is useful, as best practice is one container = one process. One server (container) has only one service.

Setting daemon off; is also useful if there's a 3rd-party tool like Supervisor controlling your services. Supervisor lets you stop/start/get status for bunches of services at once.

I use daemon off; for tweaking my Nginx config, then cleanly killing the service and restarting it. This lets me test configurations rapidly. When done I use the default daemon on;.

This is my first web-server administration experience and I want to build a docker container which uses nginx as a web server. In all docker tutorials the daemon off; option is put into the main .conf file, but an explanation about it is omitted. I searched on the internet about it and I don't understand the difference between the daemon on; and daemon off; options. Some people mentioned that daemon off; is for production. Why? Can you explain what the difference between these two options is, and why I should use daemon off; in production?

What is the difference between nginx daemon on/off option?
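In a Dockerfile this is typically expressed by overriding the global directive on the command line; a common pattern (the file names are illustrative, not taken from the original answer):

```dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
# Keep nginx in the foreground so it stays PID 1 and the container
# lives exactly as long as the nginx master process.
CMD ["nginx", "-g", "daemon off;"]
```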
You can execute a shell script via Lua code from the nginx.conf file to achieve this. You need to have the HttpLuaModule to be able to do this.

Here's an example:

location /my-website {
content_by_lua_block {
os.execute("/bin/myShellScript.sh")
}
}

I want to run a shell script every time my nginx server receives any HTTP request. Any simple ways to do this?

How to run a shell script on every request?
I use gridfs at work on one of our servers, which is part of a price-comparing website with honorable traffic stats (around 25k visitors per day). The server hasn't much RAM (2 GB), and even the CPU isn't really fast (Core 2 Duo 1.8 GHz), but the server has plenty of storage space: 10 TB (SATA) in RAID 0 configuration. The job the server is doing is very simple:

Each product on our price-comparer has an image (there are around 10 million products according to our product db), and the server's job is to download the image, resize it, store it on gridfs, and deliver it to the visitor's browser... if it's not present in the grid... or... deliver it to the visitor's browser if it's already stored in the grid. So, this could be called a 'traditional cdn schema'.

We have stored and processed 4 million images on this server since it's been up and running. The resize and store stuff is done by a simple php script... but for sure, a python script, or something like java, could be faster.

Current data size: 11.23g
Current storage size: 12.5g
Indices: 5
Index size: 849.65m

About the reliability: This is very reliable. The server doesn't load, the index size is ok, queries are fast.

About the speed: For sure, it is not as fast as local file storage, maybe 10% slower, but fast enough to be used in realtime even when the image needs to be processed, which is in our case very php dependent. Maintenance and development times have also been reduced: it became so simple to delete a single image or multiple images, just query the db with a simple delete command. Another interesting thing: when we rebooted our old server, with local file storage (so millions of files in thousands of folders), it sometimes hung for hours because the system was performing a file integrity check (this really took hours...). We do not have this problem any more with gridfs; our images are now stored in big mongodb chunks (2gb files).

So... to my mind... Yes, gridfs is fast and reliable enough to be used for production.
I develop a new website and I want to use GridFS as storage for all user uploads, because it offers a lot of advantages compared to normal filesystem storage.

Benchmarks with GridFS served by nginx indicate that it's not as fast as a normal filesystem served by nginx. (Benchmark with nginx)

Is anyone out there who uses GridFS already in a production environment, or would use it for a new project?

Is GridFS fast and reliable enough for production?
The mistake is putting a server block inside a server block. You should close the main server block, then open a new one for the subdomains:

server {
server_name example.com;
# the rest of the config
}
server {
server_name sub1.example.com;
# sub1 config
}
server {
server_name sub2.example.com;
# sub2 config
}

I'm new to Nginx and I'm trying to get subdomains working.

What I would like to do is take my domain (let's call it example.com) and add sub1.example.com and sub2.example.com, and also have www.example.com available. I know how to do this with Apache, but Nginx is being a real head scratcher. I'm running Debian 6.

My current /etc/nginx/sites-enabled/example.com:

server {
server_name www.example.com example.com;
access_log /srv/www/www.example.com/logs/access.log;
error_log /srv/www/www.example.com/logs/error.log;
root /srv/www/www.example.com/public_html;
location / {
index index.html index.htm;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
}
}

It is working to serve example.com and www.example.com. I have tried to add a second server block in the same file like:

server {
server_name www.example.com example.com;
access_log /srv/www/www.example.com/logs/access.log;
error_log /srv/www/www.example.com/logs/error.log;
root /srv/www/www.example.com/public_html;
server {
server_name sub1.example.com;
access_log /srv/www/example.com/logs/sub1-access.log;
error_log /srv/www/example.com/logs/sub1-error.log;
root /srv/www/example.com/sub1;
}
location / {
index index.html index.htm;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
}
}

No luck. Any ideas? I'd super appreciate any feedback.

nginx - two subdomain configuration
This:

location /api {
proxy_pass http://backend;
}

Needs to be this:

location /api/ {
proxy_pass http://backend/;
}

With the trailing slash on both the location and the proxy_pass URL, nginx replaces the matched /api/ prefix with / before passing the rest of the URI to the backend; without it, the backend receives the full /api/... path, which it doesn't serve, hence the 404.

I am trying to pass off all calls to /api to my webservice, but I keep getting 404s with the following config. Calls to / return index.html as expected. Does anyone know why?

upstream backend{
server localhost:8080;
}
server {
location /api {
proxy_pass http://backend;
}
location / {
root /html/dir;
}
}

More info here:

adept@HogWarts:/etc/nginx/sites-available$ curl -i localhost/api/authentication/check/user/email
HTTP/1.1 404 Not Found
Server: nginx/1.2.1
Date: Mon, 22 Apr 2013 22:49:03 GMT
Content-Length: 0
Connection: keep-alive
adept@HogWarts:/etc/nginx/sites-available$ curl -i localhost:8080/authentication/check/user/email
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 22 Apr 2013 22:49:20 GMT
Transfer-Encoding: chunked
{"user":["false"],"emailAddress":["false"]}

nginx proxy_pass 404 error, don't understand why
$uri is not equivalent to $request_uri.

The $uri variable is set to the URI that nginx is currently processing, but it is also subject to normalisation, including:

Removal of the ? and query string
Consecutive / characters are replaced by a single /
URL encoded characters are decoded

The value of $request_uri is always the original URI and is not subject to any of the above normalisations.

Most of the time you would use $uri, because it is normalised. Using $request_uri in the wrong place can cause URL encoded characters to become doubly encoded. Use $request_uri in a map directive, if you need to match the URI and its query string.

How do you determine when to use $request_uri vs $uri?

According to NGINX documentation, $request_uri is the original request (for example, /foo/bar.php?arg=baz includes arguments and can't be modified) but $uri refers to the altered URI. If the URI doesn't change, does $uri = $request_uri?

Would it be incorrect or better or worse to use:

map $uri $new_uri {
# do something
}

vs

map $request_uri $new_uri {
# do something
}

NGINX $request_uri vs $uri
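A sketch of the map-on-$request_uri pattern the answer above recommends (the URIs and redirect targets are invented for illustration):

```nginx
# $request_uri includes the query string, so an old URL can be
# matched together with its arguments; $uri could not do this.
map $request_uri $new_uri {
    default                  "";
    /old-page?id=1           /products/widget;
    ~^/legacy/(?<slug>.+)$   /articles/$slug;
}

server {
    listen 80;
    if ($new_uri) {
        return 301 $new_uri;
    }
}
```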
Try adding the $host variable in log_format:

log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$host" "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $request_time';

http://wiki.nginx.org/HttpCoreModule#.24host:

$host
This variable is equal to the Host line in the header of the request, or the name of the server processing the request if the Host header is not available.

This variable may have a different value from $http_host in such cases: 1) when the Host input header is absent or has an empty value, $host equals the value of the server_name directive; 2) when the value of Host contains a port number, $host doesn't include that port number. $host's value is always lowercase since 0.8.17.

We use the following nginx site configuration file in our production env.

log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $request_time';
server {
root /srv/www/web;
server_name *.test.com;
access_log /var/log/nginx/xxx.test.com.access.log main;

Both "http://a.test.com/ping" and "http://b.test.com/ping" http requests will be recorded in the file xxx.test.com.access.log. But there is a problem: nginx doesn't store the "domain name" in xxx.test.com.access.log. "http://a.test.com/ping" and "http://b.test.com/ping" share the same request "Get /ping". How can I record "a.test.com" or "b.test.com" in the nginx log?

Full record url in nginx log
I have an implementation of a proxy using httplib in a Werkzeug-based app (as in your case, I needed to use the webapp's authentication and authorization). Although the Flask docs don't state how to access the HTTP headers, you can use request.headers (see the Werkzeug documentation). If you don't need to modify the response, and the headers used by the proxied app are predictable, proxying is straightforward.

Note that if you don't need to modify the response, you should use werkzeug.wsgi.wrap_file to wrap httplib's response stream. That allows passing the open OS-level file descriptor to the HTTP server for optimal performance.

I want to proxy requests made to my Flask app to another web service running locally on the machine. I'd rather use Flask for this than our higher-level nginx instance so that we can reuse our existing authentication system built into our app. The more we can keep this "single sign on" the better. Is there an existing module or other code to do this? Trying to bridge the Flask app through to something like httplib or urllib is proving to be a pain.

Proxying to another web service with Flask
You can use two IF statements either before or in the location block to inspect the headers and then return a 403 error code if one is present. Alternatively, you can use those IF statements to rewrite to a specific location block and deny all in that location:

if ($http_x_custom_header) {
return 403;
}

Reference:
https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
https://nginx.org/en/docs/http/ngx_http_access_module.html

Adding more detail per comment/request:

if ($http_x_custom_header) {
return 405;
}

This looks to see if the header exists. If you want to check whether the correct values exist, then you first need to map the correct values to a variable:

map $http_x_header $is_ok {
default "0";
Value1 "1";
Value2 "1";
Value3 "1";
}
if ($is_ok) {
return 405;
}

This first maps the header value to whether or not it's ok, then checks the variable.

EDIT: Removed semicolon after map block since this causes an error.

If I have the headers X_HEADER1 & X_HEADER2, I want to reject all requests if either of these headers is not set or does not contain the correct values. What is the best way to do this? Thanks

Nginx: Reject request if header is not present or wrong
You can edit /etc/nginx/mime.types and add it:

types {
application/xml dae;
}

I haven't found the exact string application/xml in my mime.types, so I suppose you can directly include it inside your server block, in the server scope or something. If you do not have access to the system mime.types, then you can set it in the location block, if you have access to that.

https://nginx.org/en/docs/http/ngx_http_core_module.html#types

WARNING

When you set types it will overwrite all mime types set in /etc/nginx/mime.types. To avoid this, target specific extensions with a regular expression location block. Also know that locations can be nested, like so:

server {
# ...
location / {
root /usr/share/nginx/html;
index index.html;
location ~* \.mjs$ {# target only *.mjs files
# now we can safely override types since we are only
# targeting a single file extension.
types {
text/javascript mjs;
}
}
}
}

How do I overwrite the default Content-Type in nginx? Currently when I request a 01.dae file, there's

Content-Type: application/octet-stream;

And I want it to be

Content-Type: application/xml;

I tried something like

location ~* \.dae$ {
types { };
default_type application/xml;
}

and

location ~* \.dae$ {
add_header Content-Type application/xml;
}

but nothing works.

Force nginx to send specific Content-Type
You could move the common parts to another configuration file and include it from both server contexts. This should work:

server {
listen 80;
server_name server1.example;
...
include /etc/nginx/include.d/your-common-stuff.conf;
}
server {
listen 80;
server_name another-one.example;
...
include /etc/nginx/include.d/your-common-stuff.conf;
}

Edit: Here's an example that's actually copied from my running server. I configure my basic server settings in /etc/nginx/sites-enabled (normal stuff for nginx on Ubuntu/Debian). For example, my main server bunkus.org's configuration file is in /etc/nginx/sites-enabled and it looks like this:

server {
listen 80 default_server;
listen [2a01:4f8:120:3105::101:1]:80 default_server;
include /etc/nginx/include.d/all-common;
include /etc/nginx/include.d/bunkus.org-common;
include /etc/nginx/include.d/bunkus.org-80;
}
server {
listen 443 default_server;
listen [2a01:4f8:120:3105::101:1]:443 default_server;
include /etc/nginx/include.d/all-common;
include /etc/nginx/include.d/ssl-common;
include /etc/nginx/include.d/bunkus.org-common;
include /etc/nginx/include.d/bunkus.org-443;
}

As an example, here's the /etc/nginx/include.d/all-common file that's included from both server contexts:

index index.html index.htm index.php .dirindex.php;
try_files $uri $uri/ =404;
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location ~ /(README|ChangeLog)$ {
types { }
default_type text/plain;
}

I have nginx acting as a reverse proxy to apache. I now need to add a new subdomain that will serve files from another directory, but at the same time I want all location and proxy_pass directives that I have for the default host to apply to the subdomain also. I know that if I copy the rules from the default host to the new subdomain it will work, but is there a way for the subdomain to inherit the rules?

Below is a sample configuration:

server {
listen 80;
server_name www.somesite.com;
access_log logs/access.log;
error_log logs/error.log error;
location /mvc {
proxy_pass http://localhost:8080/mvc;
}
location /assets {
alias /var/www/html/assets;
expires max;
}
... a lot more locations
}
server {
listen 80;
server_name subdomain.somesite.com;
location / {
root /var/www/some_dir;
index index.html index.htm;
}
}

Thanks

Nginx subdomain configuration
nginx -s reload is only used to tell a running nginx process to reload its config. After a stop, you don't have a running nginx process to send a signal to. Just run nginx (possibly with -c /path/to/config/file).

I issued nginx -s stop and after that I got this error when trying to reload it:

[error]: invalid PID number "" in "/var/run/nginx.pid"

The /var/run/nginx.pid file is empty at the moment. What do I need to do to fix it?

Nginx Invalid PID number
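The recovery sequence described above, as commands (the config path is the conventional default and is only illustrative):

```shell
# After 'nginx -s stop' there is no master process left to signal,
# so start a fresh one instead of reloading:
nginx -c /etc/nginx/nginx.conf

# Subsequent config changes can then be applied with:
nginx -s reload
```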
Just to add another approach, you can use a separate file for each virtual domain or site you're hosting.
You can use a copy of default as a starting point for each one and customize it for each site.

Then create symlinks in sites-enabled. In this way you can take sites up and down just by adding or removing a symlink and issuing a service nginx reload. You can get creative and use this method to redirect sites to a maintenance-mode page while you are doing site maintenance.

So the structure looks like this:

/sites-available/ (you can use obvious file names like this)
|
|-> a.mysite.com
|-> b.mysite.com
|-> someOtherSite.com
/sites-enabled/ (these are just symlinks to the real files in /sites-available)
|
|-> a.mysite.com
|-> b.mysite.com

Notice that since the first two entries are the only symlinked items in sites-enabled, the third entry, someOtherSite.com, is therefore offline.

With the base install of nginx, your sites-available folder has just one file: default. How does the sites-available folder work, and how would I use it to host multiple (separate) websites?

multiple websites on nginx & sites-available
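The enable/disable workflow described above is just symlink management; a sketch using the example site names from the answer (paths assume a standard Debian/Ubuntu layout):

```shell
# Enable a site by linking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/a.mysite.com /etc/nginx/sites-enabled/a.mysite.com

# Take a site offline by removing its symlink
sudo rm /etc/nginx/sites-enabled/someOtherSite.com

# Apply the change
sudo service nginx reload
```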
After much research, I can conclude that it is not possible out of the box.

Update: you can use OpenResty, which comes with Lua. Using Lua one can do pretty cool things, including logging all of the headers to, say, Redis or some other server.

How do I log all the headers the client (browser) has sent in Nginx? I also want to log the response headers. Note that I am using nginx as a reverse proxy. After going through the documentation, I understand that I can log a specific header, but I want to log all of the headers.

How to log all headers in nginx?
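A hedged sketch of the OpenResty/Lua approach mentioned above, logging to the nginx error log rather than Redis purely for illustration (the upstream name is invented):

```nginx
location / {
    log_by_lua_block {
        -- ngx.req.get_headers() returns a table of all request headers
        for name, value in pairs(ngx.req.get_headers()) do
            ngx.log(ngx.INFO, "req header ", name, ": ", value)
        end
        -- ngx.resp.get_headers() does the same for response headers
        for name, value in pairs(ngx.resp.get_headers()) do
            ngx.log(ngx.INFO, "resp header ", name, ": ", value)
        end
    }
    proxy_pass http://backend;  # assumed upstream
}
```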
Try another fastcgi_param, something like:

fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;

Recently I installed the latest version of Nginx and it looks like I'm having a hard time running PHP with it. Here is the configuration file I'm using for the domain:

server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.php;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
}

Here is the error I'm getting in the error log file:

FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

File Not Found when running PHP with Nginx
For a graceful reload, you should instead use Upstart's reload command, e.g.:

sudo reload jobname

According to the initctl (Upstart) manpage, reload will send a HUP signal to the process:

reload JOB [KEY=VALUE]...
Sends the SIGHUP signal to running process of the named JOB instance.

...which for Gunicorn will trigger a graceful restart (see FAQ).

I'm looking for something better than sudo restart projectname every time I issue a git pull origin master, which pulls down my latest changes to a Django project. This restart command, I believe, is related to Upstart, which I use to start/stop my Gunicorn server process. This restart causes a brief outage. Users hitting the web server (nginx) will get a 500, because Gunicorn is still restarting. In fact, it seems to restart instantly, but it takes a few seconds for pages to load. Any ideas on how to make this seamless? Ideally, I'd like to issue my git pull and Gunicorn reloads automatically.

A better way to restart/reload Gunicorn (via Upstart) after 'git pull'ing my Django projects
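One way to get the "git pull then reload automatically" part is a post-merge git hook; a sketch assuming the Upstart job is named gunicorn (the job name is invented for illustration):

```shell
#!/bin/sh
# .git/hooks/post-merge - runs after every successful 'git pull'
# Gracefully reload Gunicorn (SIGHUP via Upstart) instead of a full restart.
sudo reload gunicorn
```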
To add a header, add the add_header declaration to either the location block or the server block:

server {
add_header X-server-header "my server header content!";
location /specific-location {
add_header X-location-header "my specific-location header content!";
}
}

An add_header declaration within a location block will override the same add_header declaration in the outer server block; e.g. if the location contained add_header X-server-header ..., that would override the outer declaration for that path location. Obviously, replace the values with what you want to add. And that's all there is to it.

I am using two systems (both are Nginx load balancers and one acts as backup). I want to add and use a few HTTP custom headers. Below is my code for both:

upstream upstream0 {
#list of upstream servers
server backend:80;
server backup_load_balancer:777 backup;
#healthcheck
}
server {
listen 80;
#Add custom header about the port and protocol (http or https)
server_name _;
location / {
# is included since links are not allowed in the post
proxy_pass "http://upstream0;"
}
}Backup systemserver {
listen 777;
server_name _;
#doing some other extra stuff
#use port and protocol to direct
}

How can I achieve that?

Adding and using header (HTTP) in nginx [closed]
Modify your nginx.conf:

server {
listen 80;
server_name www.foo.bar;
location / {
root /path/to/rails/public/;
passenger_enabled on;
allow my.public.ip.here;
deny all;
}
}

Nginx, Passenger, and Rails are running beautifully on my Linode. Before I launch, I'd like to restrict access so only my IP can view the site. I've tried to deny access to all, and allow access to only my IP in Nginx. It does deny access to all, but I can't get the allow to work. I have checked to ensure the IP address I'm specifying in nginx.conf is my correct public IP.

Here's my nginx.conf. I've restarted nginx after editing the file, and tested some other changes which worked as expected (for instance, I removed deny all and was able to access the site, as expected). What am I doing wrong?

http {
passenger_root /path/to/passenger-3.0.11;
passenger_ruby /path/to/ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen 80;
server_name www.foo.bar;
root /path/to/rails/public/;
passenger_enabled on;
location / {
allow my.public.ip.here;
deny all;
}
}
}

How can I allow access to a single IP address via Nginx.conf?
When you "run Flask" you are actually running Werkzeug's development WSGI server and passing your Flask app as the WSGI callable.

The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of an HTTP server.

Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.

The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers, but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).

Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.

Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time. If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests? Does it make sense to not use Nginx either, just running a bare Flask app on a port?

Are a WSGI server and HTTP server required to serve a Flask app?
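The swap to a production WSGI server is usually a one-line change on the command side; a sketch assuming the Flask app object lives in app.py (the module name, worker count, and port are illustrative):

```shell
# Development only: the built-in Werkzeug server
flask run

# Production: Gunicorn serving the same app object
gunicorn --workers 4 --bind 127.0.0.1:8000 app:app
```

Nginx would then proxy_pass to 127.0.0.1:8000.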
To prevent Nginx from crashing if your site is down, include a resolver directive, as follows:

server {
listen 80;
server_name test.com;
location / {
resolver 8.8.8.8;
proxy_pass http://dev-exapmle.io:5016/;
proxy_redirect off;
...

WARNING! Using a public DNS creates a security risk in your backend, since your DNS requests can be spoofed. If this is an issue, you should point the resolver to a secure DNS server.

I am running docker-nginx on an ECS server. My nginx service suddenly stopped because the proxy_pass of one of the servers became unreachable. The error is as follows:

[emerg] 1#1: host not found in upstream "dev-example.io" in /etc/nginx/conf.d/default.conf:988

My config file is as below:

server {
listen 80;
server_name test.com;
location / {
proxy_pass http://dev-exapmle.io:5016/;
proxy_redirect off;
##proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
server {
listen 80 default_server;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}

I have many servers in the config file; even if one server is down, I need nginx to keep running. Is there any way to fix it?

Docker Nginx stopped: [emerg] 1#1: host not found in upstream
If the proxy_pass statement has no variables in it, then nginx will resolve the hostname once, via the system resolver (gethostbyname/getaddrinfo), during start-up or reload, and will cache that value permanently.

If there are any variables, such as in either of the following:

set $originaddr http://origin.example.com;
proxy_pass $originaddr;
# or even
proxy_pass http://origin.example.com$request_uri;

Then nginx will use its built-in resolver, and the "resolver" directive must be present. "resolver" is probably a misnomer; think of it as "which DNS server the built-in resolver will use". Since nginx 1.1.9 the built-in resolver honours DNS TTL values; before that it used a fixed value of 5 minutes. | I'm trying to include $remote_addr or $http_remote_addr in my proxy_pass without success. The rewrite rule works:
location ^~ /freegeoip/ {
rewrite ^ http://freegeoip.net/json/$remote_addr last;
}
The proxy_pass without the $remote_addr works, but freegeoip does not read the X-Real-IP:
location ^~ /freegeoip/ {
proxy_pass http://freegeoip.net/json/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
}
Then I'm adding the IP to the end of the request, like this:
location ^~ /freegeoip/ {
proxy_pass http://freegeoip.net/json/$remote_addr;
}
but nginx reports this error: no resolver defined to resolve freegeoip.net | Nginx proxy_pass with $remote_addr
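A hedged sketch of the fix for that error (8.8.8.8 is only an example; prefer a trusted resolver, and the freegeoip.net service itself may no longer exist): because $remote_addr makes the proxy_pass target variable, nginx needs a resolver directive to look the host up at request time.

```nginx
location ^~ /freegeoip/ {
    # Required: a variable target is resolved at request time
    resolver 8.8.8.8 valid=300s;
    proxy_pass http://freegeoip.net/json/$remote_addr;
}
```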
Maybe you're not doing it as root? Try sudo nginx -s reload; if it still doesn't work, you might want to try sudo pkill -HUP nginx. | I am trying to modify the Nginx config file to remove a "rewrite". Currently, I have this config file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name amc.local;
return 301 https://$host:8443/index.html;
}
}
Now I want to reload this config file. I tried:
nginx -s reload
nginx -c
nginx -s stop/start

In the log file there is the line "2014/01/22 11:25:25 [notice] 1310#0: signal process started", but the modifications are not loaded. | Reload Nginx configuration
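A sketch of a safer reload sequence (these commands require root and a running nginx, so treat them as illustrative rather than something to paste blindly):

```sh
# Validate the config first; reload only if it parses cleanly.
# With a syntax error, a plain reload would silently keep the old config.
sudo nginx -t && sudo nginx -s reload

# On systemd machines, reloading via the service manager also works:
sudo systemctl reload nginx
```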
If you could only name one thing, it would be that Mongrel2 is built around ZeroMQ, which means that scaling your web server has never been easier.

If a request comes in, Mongrel2 receives it (nothing unusual here; the same as for Nginx or any other httpd). The next thing that happens is that Mongrel2 distributes the task of compiling a response to n (ZeroMQ-enabled) backends, waits for them to do the work, receives the results, compiles the response, and sends it off to the client.

Now, the magic is that n can be any number, and that each of the n can be written in any language supported by ZeroMQ (20 or so); plus, everything goes across the network, so each n can be a dedicated box, possibly in another datacenter.

In other words: with Nginx and all the rest you have to handle scalability in your logic tier; Mongrel2 allows you to start this (from a request/response-cycle point of view) right where the request hits your infrastructure, at the httpd, rather than letting complexity penetrate down to your logic tier, which blows complexity upwards by at least one order of magnitude imo. | I'm confused what purpose Mongrel2 serves/provides that nginx doesn't already do. (Yes, I've read the manual, but I must be too much of a noob to understand how it's fundamentally different than nginx.) My current web application stack is:
- nginx: web server
- Lua: programming language
- FastCGI + LuaJIT: to connect nginx to Lua
- Postgres: database | Why use Mongrel2?
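As a rough analogy of that fan-out/fan-in flow in plain Python (Mongrel2 really uses ZeroMQ PUSH/PULL sockets, possibly across machines and languages; the in-process queues and worker count here are illustrative only):

```python
import threading
import queue

requests, responses = queue.Queue(), queue.Queue()

def backend(worker_id):
    # Each backend could be a separate process, language, or machine;
    # here a thread stands in for a ZeroMQ-connected handler.
    while True:
        req = requests.get()
        if req is None:          # sentinel: shut this worker down
            break
        responses.put((req, f"handled by backend {worker_id}"))

workers = [threading.Thread(target=backend, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

# The "Mongrel2" side: fan requests out, collect responses as they finish.
for req_id in range(5):
    requests.put(req_id)
results = dict(responses.get() for _ in range(5))

for _ in workers:
    requests.put(None)
for w in workers:
    w.join()

print(sorted(results))  # every request got a response: [0, 1, 2, 3, 4]
```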
I experienced the same problem and it was due to SELinux.

To check if SELinux is running:
# getenforce

To disable SELinux until the next reboot:
# setenforce Permissive

Restart Nginx and see if the problem persists. If you would like to permanently alter the settings you can edit /etc/sysconfig/selinux.

If SELinux is your problem, you can run the following to allow nginx to serve your www directory (make sure you turn SELinux back on before testing this, i.e. # setenforce Enforcing):
# chcon -Rt httpd_sys_content_t /path/to/www

If you're still having issues, take a look at the boolean flags in getsebool -a; in particular you may need to turn on httpd_can_network_connect for network access:
# setsebool -P httpd_can_network_connect on

For me it was enough to allow http to serve my www directory. | I have Nginx set up and displaying the test page properly. If I try to change the root path, I get a 403 Forbidden error, even though all permissions are identical. Additionally, the nginx user exists. nginx.conf:
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
index index.html index.htm;
server {
listen 80;
server_name localhost;
root /var/www/html; #changed from the default /usr/share/nginx/html
}
}
namei -om /usr/share/nginx/html/index.html
f: /usr/share/nginx/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root usr
drwxr-xr-x root root share
drwxr-xr-x root root nginx
drwxr-xr-x root root html
-rw-r--r-- root root index.html
namei -om /var/www/html/index.html
f: /var/www/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x root root html
-rw-r--r-- root root index.html
error log:
2014/03/23 12:45:08 [error] 5490#0: *13 open()
"/var/www/html/index.html" failed (13: Permission denied), client:
XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html HTTP/1.1", host: "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com" | Why does Nginx return a 403 even though all permissions are set properly? |