Response | Instruction | Prompt
---|---|---|
To access a path like /var/www/mysite/manage/public with a URI like /manage, you will need to use alias rather than root. See this document for details. I am assuming that you need to run PHP from both roots, in which case you will need two location ~ \.php blocks, see the example below. If you have no PHP within /var/www/mysite/static, you can delete the unused location block. For example:

server {
listen 80;
server_name example.org;
error_log /usr/local/etc/nginx/logs/mysite/error.log;
access_log /usr/local/etc/nginx/logs/mysite/access.log;
root /var/www/mysite/static;
index index.html;
location / {
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
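# /manage is handled by the prefix location below, served via alias from the Slim app's public directory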
location ^~ /manage {
alias /var/www/mysite/manage/public;
index index.php;
if (!-e $request_filename) { rewrite ^ /manage/index.php last; }
location ~ \.php$ {
if (!-f $request_filename) { return 404; }
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
}
}

The ^~ modifier causes the prefix location to take precedence over regular expression locations at the same level. See this document for details. The alias and try_files directives are not used together due to this long-standing bug. Be aware of this caution in the use of the if directive. | Let's say I have a path like /var/www/mysite/ that contains two folders, say /static and /manage. I'd like to configure nginx to give access to: the /static folder on / (e.g. http://example.org/), this folder has some .html files; and the /manage folder on /manage (e.g. http://example.org/manage), in this case the folder contains Slim PHP framework code, which means the index.php file is in the public subfolder (e.g. /var/www/mysite/manage/public/index.php). I've tried a lot of combinations such as:

server {
listen 80;
server_name example.org;
error_log /usr/local/etc/nginx/logs/mysite/error.log;
access_log /usr/local/etc/nginx/logs/mysite/access.log;
root /var/www/mysite;
location /manage {
root $uri/manage/public;
try_files $uri /index.php$is_args$args;
}
location / {
root $uri/static/;
index index.html;
}
location ~ \.php {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
fastcgi_pass 127.0.0.1:9000;
}
}

The / works correctly; anyway /manage doesn't. Am I doing something wrong? Does anybody know what I should change? Matthew. | Nginx location configuration (subfolders) |
The problem was the nginx configuration.
I replaced my long configuration files with the simplest config possible:

server {
listen 80;
server_name domain.com www.domain.com git.domain.com;
root /var/www/domain/;
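# certbot --webroot drops its challenge files under this root in .well-known/acme-challenge/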
}

Then I was able to issue new certificates. The problem with my long configuration files was (as far as I can tell) that I had these lines:

location ~ /.well-known {
allow all;
}

But they should be:

location ~ /.well-known/acme-challenge/ {
allow all;
}

Now the renewal works, too. | I had working Let's Encrypt certificates some months ago (with the old letsencrypt client).
The server I am using is nginx. Certbot is creating the .well-known folder, but not the acme-challenge folder. Now I tried to create new certificates via:

~/certbot-auto certonly --webroot -w /var/www/webroot -d domain.com -d www.domain.com -d git.domain.com

But I always get errors like this:

IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: git.domain.com
Type: unauthorized
Detail: Invalid response from
http://git.domain.com/.well-known/acme-challenge/ZLsZwCsBU5LQn6mnzDBaD6MHHlhV3FP7ozenxaw4fow:
"<.!DOCTYPE html>
<.html lang='en'>
<.head prefix='og: http://ogp.me/ns#'>
<.meta charset='utf-8'>
<.meta content='IE=edge' http-equiv"
Domain: www.domain.com
Type: unauthorized
Detail: Invalid response from
http://www.domain.com/.well-known/acme-challenge/7vHwDXstyiY0wgECcR5zuS2jE57m8I3utszEkwj_mWw:
"<.html>
<.head><.title>404 Not Found
<.body bgcolor="white">
<.center><.h1>404 Not Found

(Of course the dots inside the HTML tags are not really there.) I have looked for a solution, but didn't find one yet.
Does anybody know why certbot is not creating the folders? Thanks in advance! | Certbot not creating acme-challenge folder |
The ACME challenge location is only needed while verifying that the domain points to this IP address. | Should I leave the /.well-known/acme-challenge always exposed on the server?
Here is my config for the HTTP:

server {
listen 80;
location '/.well-known/acme-challenge' {
root /var/www/demo;
}
location / {
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
}

Which basically redirects all the requests to https, except for the acme-challenge (for auto renewal). My question: is it alright to keep location '/.well-known/acme-challenge' always exposed on port 80? Or is it better to comment/uncomment it manually when I need to reissue the certificate? Are there any security issues with that? Any advice or links to read about this location are appreciated. Thanks! | Certbot /.well-known/acme-challenge |
Make sure your backend does not return a Set-Cookie header. If Nginx sees it, it disables caching. If this is your case, the best option is to fix your backend. When fixing the backend is not an option, it's possible to instruct Nginx to ignore the Set-Cookie header:

proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";

See the documentation. proxy_ignore_headers will ensure that the caching takes place. proxy_hide_header will ensure the Cookie payload is not included in the cached payload. This is important to avoid leaking cookies via the NGINX cache. | I'm trying to cache static content which is basically inside the paths below in the virtual server configuration. For some reason files are not being cached. I see several folders and files inside the cache dir, but it's always something like 20 MB, no higher, no lower. If it were caching images, for example, it would take at least 500 MB of space. Here is the nginx.conf cache part:

** nginx.conf **
proxy_cache_path /usr/share/nginx/www/cache levels=1:2 keys_zone=static$
proxy_temp_path /usr/share/nginx/www/tmp;
proxy_read_timeout 300s;

Here's the default virtual server.

**sites-available/default**
server {
listen 80;
root /usr/share/nginx/www;
server_name myserver;
access_log /var/log/nginx/myserver.log main;
error_log /var/log/nginx/error.log;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~* ^/(thumbs|images|css|js|pubimg)/(.*)$ {
proxy_pass http://backend;
proxy_cache static;
proxy_cache_min_uses 1;
proxy_cache_valid 200 301 302 120m;
proxy_cache_valid 404 1m;
expires max;
}
location / {
proxy_pass http://backend;
}
} | nginx as cache proxy not caching anything |
WebSockets are fast and you don't have to (and shouldn't) disable them. The real cause of this error is that Webfactions uses nginx, and nginx was improperly configured. Here's how to correctly configure nginx to proxy WebSocket requests, by setting proxy_set_header Upgrade $http_upgrade; and proxy_set_header Connection $connection_upgrade;:

# we're in the http context here
map $http_upgrade $connection_upgrade {
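# pass the client's Upgrade header through when present; otherwise tell the upstream to close the connection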
default upgrade;
'' close;
}
# the Meteor / Node.js app server
server {
server_name yourdomain.com;
access_log /etc/nginx/logs/yourapp.access;
error_log /etc/nginx/logs/yourapp.error error;
location / {
proxy_pass http://localhost:3000;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
proxy_http_version 1.1; # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
# WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}

This is an improved nginx configuration based on David Weldon's nginx config. Andrew Mao has reached a very similar configuration. Remember to also set the HTTP_FORWARDED_COUNT environment variable to the number of proxies in front of the app (usually 1). | I managed to deploy meteor on my infrastructure (Webfactions).
The application seems to work fine but I get the following error in the browser console when my application starts:

WebSocket connection to 'ws://.../websocket' failed: Error during WebSocket handshake: Unexpected response code: 400 | Meteor WebSocket handshake error 400 with nginx |
The documentation states that for HTTP keepalive, you should also set proxy_http_version 1.1; and proxy_set_header Connection ""; | I need to keep alive my connection between nginx and upstream nodejs. Just compiled and installed nginx 1.2.0. My configuration file:

upstream backend {
ip_hash;
server dev:3001;
server dev:3002;
server dev:3003;
server dev:3004;
keepalive 128;
}
server {
listen 9000;
server_name dev;
location / {
proxy_pass http://backend;
error_page 404 = 404.png;
}
}

My programs (dev:3001 - 3004) detect that the connection was closed by nginx after the response. document | nginx close upstream connection after request |
Nginx expects all server section certificates in a file that you refer to with ssl_certificate. Just put all the vendor's intermediate certificates and your domain's certificate in a file. It'll look like this:

-----BEGIN CERTIFICATE-----
MII...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MII...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MII...
-----END CERTIFICATE-----

To make sure everything is okay and to avoid downtime, I would suggest you set up Nginx locally, add 127.0.0.1 yourdomain.com to /etc/hosts, and try opening it from major browsers. When you've verified that everything is correct you can replicate it to the production server. When you're done, it is a good idea to use some SSL checker tool to verify (e.g. this one). Because pre-installed CA certificates may vary depending on browser and platform, you can easily overlook a misconfiguration checking from one OS or a limited set of browsers.

Edit: As @Martin pointed out, the order of certificates in the file is important. RFC 4346 for TLS 1.1 states:

This is a sequence (chain) of X.509v3 certificates. The sender's
certificate must come first in the list. Each following
certificate must directly certify the one preceding it.

Thus the order is:
1. Your domain's certificate
2. Vendor's intermediate certificate that certifies (1)
3. Vendor's intermediate certificate that certifies (2)
...
n. Vendor's root certificate that certifies (n-1). Optional, because it should be contained in the client's CA store. | I'm trying to install an intermediate certificate on Nginx (Laravel Forge).
Right now the certificate is properly installed; just the intermediate is missing. I've seen that I need to concatenate the current certificate with the intermediate. What is the best/safest way to add the intermediate certificate? Also, if the install of the intermediate fails, can I just roll back to the previous certificate and reboot nginx? (The website is live, so I can't have too long a downtime.) | Nginx install intermediate certificate |
You have a regex location and a prefix location. The regex location takes precedence unless ^~ is used with the prefix location. Try:

location ~ /\. {
deny all;
}
location ^~ /.well-known/ {
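# ^~ makes this prefix location win over the regex location above, keeping .well-known reachable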
# allow all;
}

See this document for details. | I've got this in my nginx config:

location ~ /\. {
deny all;
}
location /.well-known/ {
allow all;
}

But I still can't access http://example.com/.well-known/acme-challenge/taUUGC822PcdnCnW_aADOzObZqFm3NNM5PEzLNFJXRU. How do I allow access to just that one dot directory? | How to disallow access to all dot directories except .well-known? |
You need to pass the appropriate X-Forwarded-For header to your upstream. Add these lines to your upstream config:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme; | I have my express server running on port 3000 with nginx as the reverse proxy. req.ip always returns 127.0.0.1 and req.ips returns an empty array, even with app.enable('trust proxy');. With/without enabling trust proxy, x-forwarded-for doesn't work:

var ip_addr = req.headers['X-FORWARDED-FOR'] || req.connection.remoteAddress;

nginx configuration:

server {
listen 80;
server_name localhost;
access_log /var/log/nginx/dev_localhost.log;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}

How do I get the IP address of the requesting client? | Express - req.ip returns 127.0.0.1 |
The syntax for disabling the error log is OK, but the docs state that a default logfile is used before the config is read (which seems reasonable, because how else would it tell you that you have an error in your config?). Try creating this file by hand with the correct permissions for the user that runs nginx, or try starting the server as root. | I compiled nginx on Ubuntu myself.
I start my nginx with the -c nginx.conf parameter.
In my nginx.conf file, I try to turn off the error log with

error_log /dev/null crit;

but failed. Still got the error message:

nginx: [alert] could not open error log file: open() "/usr/nginx/logs/error.log" failed (2: No such file or directory)

How could I turn off this log or change its location? | How to turn off or specify the nginx error log location? |
Is it possible to serve a Python (Flask) application with HTTP/2?

Yes, by the information you provide, you are doing it just fine.

In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP/2?

Now I'm going to tread on thin ice and give opinions. The way HTTP/2 has been deployed so far is by having an edge server that talks HTTP/2 (like ShimmerCat or NginX). That server terminates TLS and HTTP/2, and from there on uses HTTP/1, HTTP/1.1 or FastCGI to talk to the inner application. Can, at least theoretically, an edge server talk HTTP/2 to a web application? Yes, but HTTP/2 is complex and for inner applications it doesn't pay off very well. That's because most web application frameworks are built for handling requests for content, and that's done well enough with HTTP/1 or FastCGI. Although there are exceptions, web applications have little use for the subtleties of HTTP/2: multiplexing, prioritization, and all the myriad security precautions, and so on. The resulting separation of concerns is in my opinion a good thing. Your 80 ms response time may have little to do with the HTTP protocol you are using, but if those 80 ms are mostly spent waiting for input/output, then of course running things in parallel is a good thing. Gunicorn will use a thread or a process to handle each request (unless you have gone the extra mile to configure the greenlets backend), so consider whether letting Gunicorn spawn thousands of tasks is viable in your case. If the content of your requests allows it, maybe you can create temporary files and serve them with an HTTP/2 edge server. | I have a Python REST service and I want to serve it using HTTP/2. My current server setup is nginx -> Gunicorn. In other words, nginx (port 443 and 80, which redirects to port 443) is running as a reverse proxy and forwards requests to Gunicorn (port 8000, no SSL). nginx is running in HTTP/2 mode and I can verify that by using Chrome and inspecting the 'protocol' column after sending a simple GET to the server. However, Gunicorn reports that the requests it receives are HTTP/1.0. Also, I couldn't find it in this list: https://github.com/http2/http2-spec/wiki/Implementations. So, my questions are: Is it possible to serve a Python (Flask) application with HTTP/2? If yes, which servers support it? In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP/2? The reason I want to use HTTP/2 is because in some cases I need to perform thousands of requests all together and I was interested to see if the multiplexed requests feature of HTTP/2 can speed things up. With HTTP/1.0 and Python Requests as the client, each request takes ~80 ms, which is unacceptable. The other solution would be to just bulk/batch my REST resources and send multiple with a single request. Yes, this idea sounds just fine, but I am really interested to see if HTTP/2 could speed things up. Finally, I should mention that for the client side I use Python Requests with the Hyper HTTP/2 adapter. | Serving Python (Flask) REST API over HTTP2 |
Change your config to the below:

server {
listen 80 default_server;
server_name mywebsite.me www.mywebsite.me;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl default_server;
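# HTTPS server block: serves the site itself with no redirect, which is what breaks the loop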
ssl_certificate /etc/letsencrypt/live/mywebsite.me/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mywebsite.me/privkey.pem;
root /home/website/mywebsite/public;
index index.html index.htm index.php;
server_name mywebsite.me www.mywebsite.me;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
}
}

Your current config redirects to https on both http and https, so it becomes an infinite loop because of the return statement. You want the return statement only when the connection is http, so you split it into two server blocks. | I want to redirect all my http traffic to https. I am using letsencrypt. I read online that return 301 https://$server_name$request_uri; would redirect all the traffic to my website over to https, but instead it results in ERR_TOO_MANY_REDIRECTS. Everything works fine without the above-mentioned statement, but then I have to specifically specify https in the URL. Here's my /etc/nginx/sites-available/default file:

server {
listen 80 default_server;
listen 443 ssl default_server;
ssl_certificate /etc/letsencrypt/live/mywebsite.me/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mywebsite.me/privkey.pem;
root /home/website/mywebsite/public;
index index.html index.htm index.php;
server_name mywebsite.me www.mywebsite.me;
return 301 https://$server_name$request_uri;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
}
}

Where am I going wrong? | ERR_TOO_MANY_REDIRECTS with nginx |
You will need some scripting knowledge to put this together. I would use PHP, but if you are good at bash scripting, use that. I would do it like this:

- First create some folder (/usr/local/etc/nginx/domain.com/).
- In the main nginx.conf add the command: include /usr/local/etc/nginx/domain.com/*.conf;
- Every file in this folder should be a different vhost named subdomain.conf.
- You do not need to restart the nginx server for the config to take effect; you only need to reload it: /usr/local/etc/rc.d/nginx reload

OR you can make only one conf file where all vhosts are set. This is probably better so that nginx doesn't need to load up 50 files, but only one.... If you have problems with scripting, then ask a question about that... | Been playing with nginx for about an hour trying to set up mass dynamic virtual hosts.
If you've ever done it in apache you know what I mean. The goal is to have dynamic subdomains for a few people in the office (more than 50). | How to setup mass dynamic virtual hosts in nginx? |
If it was working prior to the update to Catalina, the issue is due to the new permissions requested by Catalina.Now, macOS requests permissions for everything, even for accessing a directory. So, probably you had a notification about granting Docker for Mac permission to access the shared folder, you didn't grant it, and now you are facing the outcome of such action.To grant privileges now, go to System preferences > Security & Privacy > Files and Folders, and add Docker for Mac and your shared directory. | I need your help to understand my problem.I updated my macintosh with Catalina last week, then i updated docker for mac.Since those updates, i have ownership issues on shared volumes.I can reproduce with a small example. I just create a small docker-compose which build a nginx container.
I have a folder src with a PHP file like this "src/index.php".I build the container and start it.
Then i go to /app/www/mysrc (shared volume) and tape "ls -la" to check if the index.php is OK and i get :ls: cannot open directory '.': Operation not permittedHere is a simple docker-compose file :
docker-compose.yml :version: "3"
services:
test-nginx:
restart: always
image: 'nginx:1.17.3'
ports:
- "8082:80"
volumes:
- ./src:/app/www/mysrc

When I build and start the container, I get:

$ docker-compose exec test-nginx sh
# cd /app/www
# ls -la
total 8
drwxr-xr-x 3 root root 4096 Oct 21 07:58 .
drwxr-xr-x 3 root root 4096 Oct 21 07:58 ..
drwxr-xr-x 3 root root 96 Oct 21 07:51 mysrc
# cd mysrc
# ls -la
ls: cannot open directory '.': Operation not permitted
# whoami
root

So, my nginx server is down because nginx can't access the source files. Thanks for your help. | "Operation not permitted" from docker container logged as root |
Nginx has two methods of changing configuration:

A HUP signal to the master process results in a "reload". Nginx starts a bunch of new workers and lets the old workers shut down gracefully, i.e. they finish existing requests. There is no interruption of service. This method of configuration change is very lightweight and quick, but has a few limitations: you cannot change cache zones or re-compile Perl scripts.

A USR2 signal, then WINCH and then QUIT to the master process result in an "executable upgrade", and this sequence lets you completely re-read the whole configuration and even upgrade the Nginx executable. It reloads disk caches as well (which may be time-consuming). This method results in no interruption of service either.

Official documentation | Suppose we have several identical nodes which are the application servers of some n-tier service. And suppose we use Apache ZooKeeper to keep all the configs of our distributed application. Plus we have nginx as a load balancer and reverse proxy in front of this application. So let's say we perform a command which changes data only on node1, and for some period of time node2 differs from node1. And we want the proxy to redirect all the special requests (which need that specific data) to node1 until all the information has migrated to node2 and node2 has the same data as node1. Is there any way to make nginx (or another proxy) read its config from Apache ZooKeeper? Or more broadly: is there any way to effectively switch proxy configuration on the fly? And of course it should be done without (or with minimal) downtime of the whole system, so restarting nginx is not an option. | Is there any way to configure nginx (or other quick reverse proxy) dynamically? |
Try to find the following line in your php.ini:

display_errors = Off

then make it On. | I am running nginx with PHP-FPM. My nginx configuration for handling php files looks like this:

location ~ \.php$ {
set $php_root /home/me/www;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}

Now, I have a simple php file like this:

Yes, with an obvious error. When I try accessing the php file, instead of tracing a syntax error, I always get an HTTP 500 Internal Server Error. I tried using error_reporting(-1); but still it always returns HTTP 500. How do I get PHP to print the exact error instead of returning a generic HTTP 500? | PHP FPM returns HTTP 500 for all PHP errors [duplicate] |
Anything from the ? and after is the query string and is not part of the normalised URI used in location and rewrite directives. See this document for details. If you want to keep the query string, either add it to the return:

location = /old/page/ {
return 301 /new/page$is_args$args;
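# $is_args expands to '?' only when the original request actually carried a query string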
}

Or with rewrite, the query string is automatically appended unless a ? is added:

rewrite ^/old/page/$ /new/page permanent;

See this document for location syntax, and this document for return/rewrite. | Currently I have something like this in my nginx.conf file:

location ~ /old/page/?$ {
return 301 /new-page;
}

The issue is that query strings are being stripped from the /old/page?ref=xx URL. Is it possible to include query strings using the redirect method I'm using above? | nginx 301 redirect with query string |
location /one {
rewrite /one/(.+) /$1 break;
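# strip the /one prefix; 'break' stops rewrite processing and keeps the request in this location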
include uwsgi_params;
uwsgi_pass unix:///.../one.sock;
} | I have an Nginx vhost that is configured as such:

...
location /one {
include uwsgi_params;
uwsgi_pass unix:///.../one.sock;
}
location /two {
include uwsgi_params;
uwsgi_pass unix:///.../two.sock
}
...

This is a simplified configuration of course. When I request /one/something I would like my Python script to receive /something as request_uri. I'm using BottlePy but would like this to be handled by Nginx and not in my Python code. Can I do something like uwsgi_param REQUEST_URI replace($request_uri, '^/one', '')?

Edit: Here is the request from my Python code:
[pid: 30052|app: 0|req: 1/1] () {42 vars in 844 bytes} [Tue Aug 21 14:22:07 2012] GET /one/something => generated 0 bytes in 4 msecs (HTTP/1.1 200) 2 headers in 85 bytes (0 switches on core 0)

So Python is OK but uWSGI is not. How to fix that? | Nginx - Rewrite the request_uri before uwsgi_pass |
I was using php-fpm in the background and slow scripts were getting killed after a set timeout because it was configured that way. Thus, scripts taking longer than a specified time would get killed and nginx would report a recv or readv error as the connection is closed from the php-fpm engine/process. | I use nginx along with fastcgi. I see a lot of the following errors in the error logs:

readv() failed (104: Connection reset
by peer) while reading upstream and
recv() failed (104: Connection reset
by peer) while reading response header
from upstream

I don't see any problem using the application. Are these errors serious, and how do I get rid of them? | nginx errors readv() and recv() failed |
Why doesn't nginx just forward the original Upgrade/Connection headers?

From the official documentation:

since the "Upgrade" is a hop-by-hop header, it is not passed from a client to proxied server

See RFC 2616.

I don't want the Upgrade header or Connection being set to "upgrade" unless that's what the browser sent.

There is also an example:

map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
...
location /chat/ {
proxy_pass http://backend;
proxy_http_version 1.1;
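# Upgrade and Connection are hop-by-hop headers, so they must be set again explicitly for the upstream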
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}

Connection is 'upgrade' for non-websocket requests, which is also bad.

Do you actually know what the Connection header means? Just a quote from the RFC:

for each connection-token in this field, remove any header field(s) from the message with the same name as the connection-token.

How can it be bad? | nginx now supports proxying websockets, but I was unable to find any information on how to do this without having a separate location block that applies to URIs on which websockets are used. I've seen some folks recommending some variations of this approach:

location / {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://host:port;
}

Would that be the correct way to proxy standard HTTP as well as websockets? I don't want the Upgrade header or Connection being set to upgrade unless that's what the browser sent, but these proxy_set_header lines are required for websockets to work. Why doesn't nginx just forward the original Upgrade/Connection headers? I've experimented with this and found that nginx does not proxy the Upgrade header and changes the Connection header to close from upgrade if running without the two proxy_set_header lines. With them, Connection is upgrade for non-websocket requests, which is also bad. | nginx reverse proxy websockets |
As per our discussion in ##php on freenode...

Your issue is that the php.ini setting "log_errors" is set to Off. Your options are:

- set log_errors=On in php.ini
- set php_admin_flag[log_errors]=On in your pool config (for a docker container based on php:5.6-fpm that is in the file /usr/local/etc/php-fpm.conf)
- or possibly set log_errors=On in .user.ini (PHP's per-dir config, similar to .htaccess) | Git repo of project: https://github.com/tombusby/docker-laravel-experiments (HEAD at time of writing is 823fd22). Here is my docker-compose.yml:

nginx:
image: nginx:stable
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
volumes_from:
- php
links:
- php:php
ports:
- 80:80
php:
image: php:5.6-fpm
volumes:
- ./src:/var/www/html
expose:
- 9000Into src/ I've created a fresh laravel project. This all functions correctly if I swap out index.php for one with a basicecho "hello world";and if I useecho "called";exit();I can identify that part of laravel's index.php does get executed.It dies at line 53:$response = $kernel->handle(
$request = Illuminate\Http\Request::capture()
);I have no idea why this happens, and I've tried usingdocker exec -it bashto have a look around my php-fpm container for error logs. All the logs are redirected to stderr/stdout (which is collected by docker).Here is the output that docker collects:php_1 | 172.17.0.3 - 06/May/2016:12:09:34 +0000 "GET /index.php" 500
nginx_1 | 192.168.99.1 - - [06/May/2016:12:09:34 +0000] "GET /index.php HTTP/1.1" 500 5 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36" "-"

As you can see, "500" does pretty much nothing to help me work out why there was an error, but I can't find any way of getting the stack trace or anything like the proper error logs that apache's php extension would have produced. | Docker php-fpm/nginx set-up: php-fpm throwing blank 500, no error logs [duplicate] |
Per Gunicorn's deploy doc, my understanding is that you use Nginx as a proxy server for Gunicorn. As Gunicorn is ported from Ruby's Unicorn, I'm assuming the limitations and specifications of Unicorn apply to Gunicorn as well:

Unicorn is an HTTP server for Rack applications designed to only serve
fast clients on low-latency, high-bandwidth connections and take
advantage of features in Unix/Unix-like kernels. Slow clients should
only be served by placing a reverse proxy capable of fully buffering
both the request and response in between Unicorn and slow clients.

Gunicorn's deploy doc says much the same thing:

Although there are many HTTP proxies available, we strongly advise
that you use Nginx. If you choose another proxy server you need to
make sure that it buffers slow clients when you use default Gunicorn
workers. Without this buffering, Gunicorn will be easily susceptible
to denial-of-service attacks.

So Gunicorn serves fast, low-latency, high-bandwidth clients and Nginx serves the rest. | This is a beginner question, but I am having trouble understanding the abstraction between Gunicorn and Nginx. I am not looking for a detailed answer, just at a high level: what is the role that each plays? How do they interact? | Difference between Gunicorn and Nginx |
The simplest way is:

location /remote_addr {
default_type text/plain;
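# override the global default_type (commonly application/octet-stream) so the IP is returned as plain text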
return 200 "$remote_addr\n";
}

The above should be added to the server block of your nginx.conf. No need to use any 3rd-party module (echo, lua, etc.). | It may sound like a code golf question, but what is the simplest / lightest way to return $remote_addr in text/plain? So, it should return several bytes of the IP address in plain text:

216.58.221.164

Use case: an API to learn the client's own external (NAT), global IP address. Is it possible to do it with Nginx alone and without any backends? If so, how? | Nginx: directly return $remote_addr in text/plain |
Add this to your nginx configuration:

location ^~ /static/ {
include /etc/nginx/mime.types;
root /project_path/;
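# root is the parent of static/, so /static/style.less maps to /project_path/static/style.less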
}

Replace /project_path/ with your app's absolute path. You should note that it doesn't include the static directory, and all the contents inside /project_path/static/ will be served at the URL /static/. | I've a web application with this structure:

|
|__ static
    |__ style.less
    |__ images
|__ myapp.py
|__ wsgi.py
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /var/www/public_html;
index index.php index.html index.htm;
server_name xxxxxxx.com;
location / {
try_files $uri $uri/ =404;
}
location /myapp {
include uwsgi_params;
uwsgi_pass unix:/var/www/public_html/myapp/myapp.sock;
}Is there something missing? | How to serve Flask static files using Nginx? |
The downside of WhiteNoise is that if you use it without a CDN like CloudFront or Cloudflare it will definitely not perform as well as nginx. WhiteNoise is best either when used with a CDN (as most production sites ought to be doing) or for low-traffic sites where ease of configuration trumps performance. If you already have nginx correctly configured and don't plan on using a CDN for some reason, then you're probably better off just sticking with nginx. | There are many articles describing the pros of using whitenoise instead of other configurations for serving static files, but information about its cons is kind of hard to find. Are there any cons or drawbacks of using whitenoise for serving static files? If the question is too broad: I'm now using NGINX for serving my static files (I also use it and gunicorn for serving my Django application) and I found it's also quite easy to configure. | Django whitenoise drawback |
server {
# Default, you don't need this!
#listen 80;
server_name www.abc.com;
# Index and root are global configurations for the whole server.
index index.html;
root /home/www.abc.com/;
location / {
location ~* ^/sub/ {
# The tilde and asterisks ensure that this location will
# be matched case insensitive. nginx does not support
# setting absolutely everything to be case insensitive.
# The reason is easy, it's costly in terms of performance.
}
}
} | I am using Nginx for a simple demo website, and I just configured Nginx like this:

server {
listen 80;
server_name www.abc.com;
location / {
index index.html;
root /home/www.abc.com/;
}
}

In my www.abc.com folder, I have a sub-folder named Sub, and inside it is an index.html file. So when I try to visit www.abc.com/Sub/index.html, it works fine. If I visit www.abc.com/sub/index.html, it returns 404. How do I configure Nginx to be case-insensitive in URLs? | How to make URL case insensitive with Nginx |
Although I'm not an nginx expert, I feel like I have a much better understanding of how to do this now. As I figure out more I'll update this answer. One possible solution to my original question is this:

location ~* "^/[a-z0-9]{40}\.(css|js)$" {
root /home/ubuntu/app/bundle/programs/web.browser;
access_log off;
expires max;
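# caching forever is safe here because the 40-character hash in the filename changes on every bundle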
}

Which says: any URL for this site containing a slash followed by 40 alphanumeric characters + .js or .css can be found in the web.browser directory. Serve these files statically, don't write them to the access log, and tell the client that they can be cached forever. Because the main css and js files are uniquely named after each bundle operation, this should be safe to do. I'll maintain a full version of this example here. It's also worth noting that I'm using a recent build of nginx which supports WebSockets, as talked about here. Finally, don't forget to fully enable gzip in your nginx config. My gzip section looks like:

gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

After doing all that, I was able to get a decent score on pagespeed.

update 9/17/2014: Updated the paths for meteor 0.9.2.1 | The site configuration for my meteor app has directives which look like the following:

server {
listen 443;
server_name XXX;
ssl on;
ssl_certificate XXX;
ssl_certificate_key XXX;
location / {
proxy_pass http://localhost:3000;
proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
proxy_http_version 1.1; # recommended for keep-alive connections per http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}

I feel like I should be telling nginx to serve the contents of static_cacheable and setting the expires header to max. How exactly do I go about doing that? Are there other things I should add in here? | recommended nginx configuration for meteor |
EDIT: The config below is from a working nginx config, with the hostname and port changed.

You may be able to set the server listening on port 36000 as an upstream server (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html).

server {
listen 80;
server_name domain.somehost.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:36000/;
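# clients never see port 36000; nginx connects to it locally and relays the response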
proxy_redirect http://localhost:36000/ https://$server_name/;
}
} | EDIT: It turns out that my setup below actually works. Previously, I was getting redirections to port 36000 but it was due to some configuration settings on my backend application that were causing it. I am not entirely sure, but I believe I might be wanting to set up a reverse proxy using nginx. I have an application running on a server at port 36000. By default, port 36000 is not publicly accessible and my intention is for nginx to listen on a public URL and direct any request to that URL to the application running on port 36000. During this entire process, the user should not know that his/her request is being sent to an application running on my server's port 36000. To put it in more concrete terms, assume that my URL is http://domain.somehost.com/. Upon visiting http://domain.somehost.com/, nginx should pick up the request and redirect it to an application already running on the server on port 36000, the application does some processing, and passes the response back. Port 36000 is not publicly accessible and should not appear as part of any URL. I've tried a setup that looks like:

server {
listen 80;
server_name domain.somehost.com
location / {
proxy_pass http://127.0.0.1:36000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}

and including that inside my main nginx.conf. However, it requires me to make port 36000 publicly accessible, and I'm trying to avoid that. The port 36000 also shows up as part of the forwarded URL in the web browser. Is there any way that I can do the same thing, but without making port 36000 accessible? Thank you. | nginx reverse proxy to backend running on localhost |
Hopefully you have this solved by now, but for anyone else who is struggling with a similar issue: you need to include a space between the if statement and the opening parenthesis. So in your example you need to change the line

if($domain = "co") {

to

if ($domain = "co") {

and everything should work fine. | Nginx complains about the following part of my configuration:

nginx: [emerg] unknown directive "if($domain" in /etc/nginx/nginx.conf:38
nginx: configuration file /etc/nginx/nginx.conf test failed

Here is the bit it is talking about:

server_name ~^(?:(?<subdomain>\w*)\.)?(?<domain>\w+)\.(?<tld>(?:\w+\.?)+)$;
if($domain = "co") {
set $domain "${subdomain}";
set $subdomain "www";
set $tld "co.${tld}";
}
if ($subdomain = "") {
set $subdomain "www";
}
root /var/www/sites/$domain.$tld/$subdomain;
location / {
index index.php index.html index.htm;
}

Here is the full server section of my configuration file:

server {
listen 80;
server_name ~^(?:(?<subdomain>\w*)\.)?(?<domain>\w+)\.(?<tld>(?:\w+\.?)+)$;
if($domain = "co") {
set $domain "${subdomain}";
set $subdomain "www";
set $tld "co.${tld}";
}
if ($subdomain = "") {
set $subdomain "www";
}
root /var/www/sites/$domain.$tld/$subdomain;
location / {
index index.php index.html index.htm;
}
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

What is the issue? | Nginx unknown directive "if($domain" |
You can set a header to an empty value and Nginx will drop it:

proxy_set_header Sec-WebSocket-Extensions ""; | I have an Nginx websocket reverse proxy and I would like to hide an HTTP header from the client request. proxy_hide_header hides the server response headers and can't be used for hiding client request headers. I would like to do that because the websocket server behind nginx doesn't work well with the websocket extension "permessage-deflate", so I would like to remove the Sec-WebSocket-Extensions header from client requests. | Hide a client request header with a Nginx reverse proxy server |
You're looking for $uri. It does not have $args. In fact, $request_uri is almost equivalent to $uri$args. If you really want exactly $request_uri with the args stripped, you can do this:

local uri = string.gsub(ngx.var.request_uri, "?.*", "")
the client including the args. It cannot be modified. Look at $uri for
the post-rewrite/altered URI. Does not include host name. Example:
"/foo/bar.php?arg=baz" | Nginx request_uri without args |
Here's the preferred way to do this with newer versions of Nginx:

location ~ ^/images/(.*) {
return 301 /assets/images/$1;
}

See https://www.nginx.com/blog/creating-nginx-rewrite-rules/ for more info. | all... I am trying to do something in nginx to redirect all calls for files in /images/ to /assets/images/. Can someone help me with the rewrite rule, giving a 301 moved permanently status? | nginx rewrite redirect for a folder |
location = /oneapi {
set $args $args&apiKey=tiger;
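# append the key to whatever query arguments the client already sent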
proxy_pass https://api.somewhere.com;
} | I'd like to add a parameter to the URL in a proxy pass.
For example, I want to add an apiKey: &apiKey=tiger

http://mywebsite.com/oneapi?field=22 ---> https://api.somewhere.com/?field=22&apiKey=tiger

Do you know a solution? Thanks a lot,
Gilles.

server {
listen 80;
server_name mywebsite.com;
location /oneapi{
proxy_pass https://api.somewhere.com/;
}
} | Nginx proxy_pass : Is it possible to add a static parameter to the URL? |
The traceback shows that it was the route matching that raised a redirect; usually (e.g. unless you added explicit redirect routes), that means the client tried to access a branch URL (one that ends with a trailing slash), but the requested URL did not include the last slash. The client is simply being redirected to the canonical branch URL with the slash.

From the Werkzeug Rule documentation:

URL rules that end with a slash are branch URLs, others are leaves. If you have strict_slashes enabled (which is the default), all branch URLs that are matched without a trailing slash will trigger a redirect to the same URL with the missing slash appended.

From the routing documentation:

Flask's URL rules are based on Werkzeug's routing module. The idea behind that module is to ensure beautiful and unique URLs based on precedents laid down by Apache and earlier HTTP servers.

Take these two rules:

@app.route('/projects/')
def projects():
return 'The project page'
@app.route('/about')
def about():
return 'The about page'

Though they look rather similar, they differ in their use of the trailing slash in the URL definition. In the first case, the canonical URL for the projects endpoint has a trailing slash. In that sense, it is similar to a folder on a file system. Accessing it without a trailing slash will cause Flask to redirect to the canonical URL with the trailing slash. In the second case, however, the URL is defined without a trailing slash, rather like the pathname of a file on UNIX-like systems. Accessing the URL with a trailing slash will produce a 404 "Not Found" error. This behavior allows relative URLs to continue working even if the trailing slash is omitted, consistent with how Apache and other servers work. Also, the URLs will stay unique, which helps search engines avoid indexing the same page twice.

As documented, if you do not want this behaviour (and want the URL without the trailing slash to be a 404 Not Found instead), you must set the strict_slashes=False option on your route. | My flask app is doing a 301 redirect for one of the urls. The traceback in New Relic is:

Traceback (most recent call last):
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1358, in full_dispatch_request
rv = self.dispatch_request()
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1336, in dispatch_request
self.raise_routing_exception(req)
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1319, in raise_routing_exception
raise request.routing_exception
RequestRedirect: 301: Moved Permanently

It doesn't look like it is even hitting my code, or rather the traceback isn't showing any of my files in it. At one point I did have Nginx redirect all non-SSL requests to HTTPS, but had to disable that as Varnish was not able to make the request to port 443 without an error... probably some configuration that I did or didn't make. It doesn't always return a 301 though; I can request the URL and get it without any trouble. But someone out in the world requesting the URL is getting a 301 response. It is a GET request with some custom headers to link it to the account. At no point in my code is there a 301 redirect. | Flask 301 Response |
I solved it using this new configuration:

upstream nodejs {
server localhost:3000;
}
server {
listen 8080;
server_name localhost;
root ~/workspace/test/app;
location / {
try_files $uri @nodejs;
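# serve the file from disk when it exists, otherwise hand the request to the node upstream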
}
location @nodejs {
proxy_redirect off;
proxy_http_version 1.1;
proxy_pass http://nodejs;
proxy_set_header Host $host ;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}

Thanks to the following Stack Overflow post: How to serve all existing static files directly with NGINX, but proxy the rest to a backend server. | My current nginx config is this:

upstream nodejs {
server 127.0.0.1:3000;
}
server {
listen 8080;
server_name localhost;
root ~/workspace/test/app;
index index.html;
location / {
proxy_pass http://nodejs;
proxy_set_header Host $host ;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}

I'm very very new to nginx, but at the very least I know that nginx is better than node/express at serving static files. How can I configure the server so that nginx serves the static files? | How do you serve static files from an nginx server acting as a reverse proxy for a nodejs server? |
I would start a new fcgi process on a new port, change the nginx configuration to use the new port, have nginx reload the configuration (which in itself is graceful), then eventually stop the old process (you can use netstat to find out when the last connection to the old port is closed).

Alternatively, you can change the fcgi implementation to fork a new process, close all sockets in the child except for the fcgi server socket, close the fcgi server socket in the parent, exec a new django process in the child (making it use the fcgi server socket), and terminate the parent process once all fcgi connections are closed. IOW, implement graceful restart for runfcgi. | I'm running a django instance behind nginx connected using fcgi (by using the manage.py runfcgi command). Since the code is loaded into memory, I can't reload new code without killing and restarting the django fcgi processes, thus interrupting the live website. The restarting itself is very fast. But by killing the fcgi processes first, some users' actions will get interrupted, which is not good.
I'm wondering how I can reload new code without ever causing any interruption. Advice will be highly appreciated! | How to gracefully restart django running fcgi behind nginx? |
Use the index directive to name r.json as the default filename within that location:

location /r/ {
index r.json;
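# a request for /r/ now resolves to /home/user/media/json/r.json via the alias below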
alias /home/user/media/json/;
} | I want Nginx to return the r.json file for the path example.com/r/.
What I tried:

location /r/ {
alias /home/user/media/json/r.json;
}

But all that didn't work. I've got 500 with the message: /home/user/media/json/r.jsonindex.html is not a directory. | Nginx return file for path |
The envsubst command replaces all occurrences of $vars, including $http_upgrade and $connection_upgrade. You should provide a list of variables to be replaced, e.g.:

envsubst '${API_LOCATION},${UI_LOCATION}' < /etc/nginx/conf.templates/default.conf

See also: Replacing only specific variables with envsubst. Moreover, in the Dockerfile configuration you should use the double $$ escape in order to disable the variable substitution:

FROM nginx
COPY conf /etc/nginx/conf.templates
CMD /bin/bash -c "envsubst '$${API_LOCATION},$${UI_LOCATION}' < /etc/nginx/conf.templates/default.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'" | I'm trying to reverse-proxy a websocket, which I've done with nginx before with no issue. Weirdly, I can't seem to re-create my prior success with something so simple. I've been over and over the config file but can't seem to find my error. Here's my full default.conf:

map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
location /api/ {
proxy_pass ${API_LOCATION};
}
location / {
proxy_pass ${UI_LOCATION};
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}

The error I'm getting:

2016/10/10 23:30:24 [emerg] 8#8: invalid number of arguments in "map" directive in /etc/nginx/conf.d/default.conf:1
nginx: [emerg] invalid number of arguments in "map" directive in /etc/nginx/conf.d/default.conf:1

And the exact Dockerfile that I'm using, in case you want to replicate my setup (save default.conf as conf.templates/default.conf relative to the Dockerfile):

FROM nginx
COPY conf /etc/nginx/conf.templates
CMD /bin/bash -c "envsubst < /etc/nginx/conf.templates/default.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'" | nginx 'invalid number of arguments in "map" directive' |
Use return 444;

This non-standard status code of 444 causes nginx to simply close the connection without responding to it.

if ($http_user_agent ~ (agent1|agent2) ) {
return 444;
}

Reference documentation. More elaborative documentation. | I want to block unwanted bots from accessing sites on the server. Can nginx drop / kill the connection right away when a certain bot is detected?

if ($http_user_agent ~ (agent1|agent2) ) {
**KILL CONNECTION**;
}

Something like the example above. | Drop unwanted connections |
No, this is not yet possible; nginx 1.2 incorporates stuff from the 1.1.x development branch which indeed includes HTTP/1.1 reverse proxying. Websocket connections are established using the HTTP/1.1 "Upgrade" header, but the fact that nginx now supports this kind of headers does not mean it supports websockets (websockets are a different protocol, not HTTP).
(I tried this myself using the 1.1.x branch (which I found to be stable enough for my purpose) and it doesn't work without the tcp_module.)

Websockets will probably be supported in 1.3.x (http://trac.nginx.org/nginx/roadmap). Your alternatives are:

- keep using node-http-proxy
- use nginx without the tcp module; socket.io won't use websockets but something else (e.g. long polling)
- nginx with tcp module: in this case I think you need an additional port for this module (never tried this myself)
- put something else in front as a reverse proxy: I use HAProxy (which supports websockets) in front of nginx and node. Nginx now simply acts as a static fileserver, not a proxy. Varnish is another option, if you want additional caching. | i would like to replace my node-http-proxy module with the nginx proxy_pass module. Is it possible with the newly released nginx version, as I have read that it supports HTTP/1.1 out of the box? I saw some threads struggling with the problem that websockets are not supported by nginx. In my case I'm running several node projects in the background and want to route my websocket connections from port 80 to 8000-8100, depending on domain. Is there a native way to do websocket proxy/reverse proxy without using the tcp_module addon? I tried to set up an upstream in nginx.conf with proxy_passing to it, but if I try to connect to port 80 over websocket, I get a 502 Gateway error. Anyone facing the same problem?
Does anyone have a working example for nginx + socket.io, proxying over port 80? | nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections |
A high request_time may be, among others, due to a client with a slow connection, about which you can't do much. Thus, a high request_time does not necessarily represent the performance of your server and/or application. You really should not spend too much time on request_time when profiling; instead measure things like the application's response time (i.e. upstream_response_time). That said, there are some things which you are able to do that may affect request_time. Some of them are the following:

- Move your server onto a high-speed network
- Move your server near the client
- Disable Nagle's algorithm
- Tune the server's TCP stack (see this article). However, these won't necessarily make a big difference, since the kernel does a good job of tuning them for you. | I am trying to improve the performance of a web app. Profiling the app itself, I found its response times are quite acceptable (100ms-200ms), but when I use ApacheBench to test the app, the response time sometimes exceeds 1 second. When I looked closely at the logs, I found a big discrepancy between request_time and upstream_response_time occasionally:

"GET /wsq/p/12 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.940 0.286
"GET /wsq/p/31 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.200 0.086Theupstream_response_timeis quite close to my profiling in the web app, butrequest_timeis close to one second for the first request.What could cause this discrepancy?I understandrequest_timeis recorded from the first byte received to last response byte sent, it can be affected by network condition and client problem. I am wondering what should I do to reduce the averagerequest_timeas much as possible? | Why is request_time much larger than upstream_response_time in nginx access.log? |
I don't believe this is possible, as nginx is not a servlet container, so it has no understanding of what a .war file is. You can configure nginx to act as a reverse proxy in front of a Tomcat server, so this might get you the best of both worlds. A quick Google search came up with this: http://wiki.nginx.org/JavaServers, which might give you what you're looking for. | I really love nginx for the stability and the way requests are handled. And I really love Tomcat for the Java and the user friendliness. Is there a way to deploy my .war on an nginx server? | Can i deploy my .war on an nginx server |
Answering myself. Actually the solution was not that difficult to find; it just demanded a careful look into the nginx documentation. proxy_read_timeout is the directive responsible for that, and by default it's set to 60 seconds. So it can be easily fixed by setting e.g.:

proxy_read_timeout 24h;

Setting 0 won't work; it will actually make all your connections broken, therefore we need to come up with a long enough timeout. After fixing that I approached the other issue, but this time related to how browsers handle the connection. For some reason, after 5 minutes of inactivity browsers silently discard the connection. What's worse, neither side is informed that it's discarded; for both it still appears as if the connection is online, but data doesn't get through. The fix for that is to send some keep-alive ping on an interval basis (a plain SSE comment works great). | I have a Node.js via Nginx setup and it involves Server-Sent Events. No matter what Nginx configuration I have, the SSE connection is broken after 60 seconds and reinitialized again. It doesn't happen if I connect to the application directly on the port on which node serves it, so it's clearly some Nginx proxy issue. I'd like to have no timeout on the SSE connection. Is that possible? I've tried tweaking send_timeout, keepalive_timeout, client_body_timeout and client_header_timeout but it doesn't change anything. Below is my Nginx configuration.

upstream foobar.org {
server 127.0.0.1:3201;
}
server {
listen 0.0.0.0:80;
server_name example.org;
client_max_body_size 0;
send_timeout 600s;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://example.org/;
proxy_redirect off;
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
}
} | Server-Sent Events connection timeout on Node.js via Nginx |
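For reference, a sketch of how the accepted fix could be folded into the location block above; only proxy_read_timeout is new, and the 24h value is the one suggested in the answer:

location / {
    proxy_pass http://example.org/;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 24h;   # keep idle SSE connections open
    chunked_transfer_encoding off;
}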
If we use cURL to retrieve an HTTPS site that is not using a CA-signed certificate, the following problem occurs:

curl https://example.selfip.com
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

While we can simply overcome this using the -k option, there's a safer and lasting solution, i.e.:

Step 1
Identify which directory your OpenSSL installation uses.

openssl version -d
OPENSSLDIR: "/usr/lib/ssl"

Step 2
Change to that directory and list the directory contents. You should see a directory called certs.

cd /usr/lib/ssl && ls -al

Step 3
Change to that directory.

cd certs

List the directory contents. You should see from the symlinks that the certificates are actually stored in /usr/share/ca-certificates.

Step 4
Change to the /usr/share/ca-certificates directory and add your self-signed certificate there (e.g. your.cert.name.crt).

Step 5
Change to the /etc directory and edit the file ca-certificates.conf.

root@ubuntu:# cd /etc
root@ubuntu:# nano ca-certificates.conf

Add your.cert.name.crt to the file (ca-certificates.conf) and save it.

Last Step: Execute the program update-ca-certificates --fresh.
Note: You might like to back up /etc/ssl/certs before executing the command.

root@ubuntu:# update-ca-certificates --fresh
Clearing symlinks in /etc/ssl/certs...done.
Updating certificates in /etc/ssl/certs....done.
Running hooks in /etc/ca-certificates/update.d....done.

Test with curl on your target HTTPS site and it should work now. Source | I copied the PEM file into /usr/local/share/ca-certificates/ and ran update-ca-certificates, and I verified that the resulting certificate is now included in /etc/ssl/certs/ca-certificates.crt, which is the file printed by curl-config --ca. I also verified that the certificate printed by openssl s_client -connect example.com:443 was identical to my PEM file. And yet I continue to get the "error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed" message. This happens even if I use curl's --cacert option as described at http://curl.haxx.se/docs/sslcerts.html to tell it what certificate to use. It works if I disable certificate verification altogether with curl -k, but I don't want to do that because I'm trying to write a test harness that's supposed to test the SSL properly. It works fine if I access the same URL in lynx, which normally complains if there are any SSL errors. But I can't just use Lynx for this test harness, unless I can find some way of making Tornado's AsyncHTTPClient use Lynx instead of libcurl. And it doesn't seem to make any sense that installing the self-signed certificate satisfies Lynx but not curl. I'm using Ubuntu 12.04 LTS in a Vagrant-powered VirtualBox; it has curl 7.22.0. The SSL terminating proxy is nginx/1.3.13 running on the same machine, and the domain name is pointed to 127.0.0.1 by an entry in /etc/hosts. Any clues on what might be the problem? Thanks. | Why won't curl recognise a self-signed SSL certificate?
Lewis4u's answer may be right! But I think we should have a clearer explanation.

In the nginx.conf file we see that the root path is:

root html;

The question is: where is this "html" relative path? This relative path is set at compile time. You can check the path with the command:

$> nginx -V

You will see "--prefix=/usr/local/Cellar/nginx/1.12.0_1"; this is the folder of the nginx files. Now you should "cd" to this directory to see your "html" folder.

$> cd /usr/local/Cellar/nginx/1.12.0_1
$> ls -l html

Then you will see that the "html" folder is a symlink to "/usr/local/var/www". In conclusion, in my case, the "html" folder is "/usr/local/var/www". It may be different on your Mac. But hey, you got the method to find out. Right?! | I have successfully installed nginx on my Mac with Homebrew (brew install nginx) but I can't find where this default page is loaded from. In nginx.conf under location it says root html; and I can't find it. Please help. | Location of the Nginx default index.html file on MAC OS
I was able to find a solution after 2 days of searching. Somehow SELinux was not permitting Nginx to proxy to my server. Running the command below fixed the issue.

/usr/sbin/setsebool -P httpd_can_network_connect true

Adding the -P flag is thanks to @DaveTrux. | I am just setting up nginx as a webserver that proxies directly to a tomcat app server.
When the user connects to my website, Nginx should redirect the request to port 8080 where the Tomcat app server is running. I am doing everything on an Amazon EC2 instance that is running Red Hat 7. What I have so far is this:

nginx.conf file
user nginx;
worker_processes 1;
server {
listen 80;
server_name mydomainname;
access_log /var/log/nginx/example.log;
error_log /var/log/nginx/example.error.log;
location / {
proxy_pass http://localhost:8080/example/;
}
}

The error that I am getting is (13: Permission denied) while connecting to upstream, client. This is definitely a user access issue, but I cannot seem to figure it out. It seems like nginx does not have access to redirect to port 8080. Also, nginx is running under myuser:

root 15736 nginx: master process /usr/sbin/nginx
myuser 15996 nginx: worker process
root 16017 grep --color=auto nginx

I have tried to put 127.0.0.1 instead of localhost, but no luck.
I have also tried to change the user in the nginx.conf to myuser, still no luck.
When I connect directly to the application server I have no issues. Example URL of my Tomcat: http://mydomain:8080/example/. Thank you in advance. | nginx proxy server localhost permission denied
It's been a long time, but it might help someone else... Set daemon off in your nginx config. Supervisord requires processes not to run as daemons. You can also set it directly in the supervisor command:

command=/usr/sbin/nginx -g "daemon off;" | Here's a preview of the status running supervisorctl status every 2 seconds:

[root@docker] ~ # supervisorctl status
nginx RUNNING pid 2090, uptime 0:00:02
[root@docker] ~ # supervisorctl status
nginx STARTING
[root@docker] redis-2.8.9 # supervisorctl status
nginx RUNNING pid 2110, uptime 0:00:01

Is this a normal thing for nginx to respawn every few seconds? Knowing that nginx is set up to be run in the background with this setup:

[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true | Nginx with Supervisor keep changing status b/w Running and Starting |
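Putting the fix together, the supervisord program section would plausibly look like this; the ini keys match the question, while autostart/autorestart are standard supervisord options added here as an assumption:

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true

With daemon off, nginx stays in the foreground, so supervisord can track the process and the status stops flapping between STARTING and RUNNING.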
Got it working with the following ~/.goaccessrc:

date_format %d/%b/%Y:%T %z
log_format %h - - [%d] "%r" %s %b "%R" "%u"

I installed GoAccess as a binary package from the wheezy repository (no source recompilation). | I want to parse and analyze the nginx logs with goaccess and take a report from the analyzed logs. But when I run the zcat -f access.log.*.gz | goaccess -a -c command, it gives me the following error:

GoAccess - version 0.5 - Jun 26 2012 04:30:08
An error has occurred
Error occured at: parser.c - process_log - 584
Message: No date format was found on your conf file.

I tried to add the line date_format %D %T to the .goaccessrc file, but I got another error, which is:

GoAccess - version 0.5 - Jun 26 2012 04:30:08
An error has occurred
Error occured at: parser.c - process_log - 588
Message: No log format was found on your conf file.

I think it asks for the date and log formats that nginx uses, but I don't have any date or log format directives in my nginx configuration. Additionally, I've tried to use a previous version of goaccess (version 0.4.2) and the zcat -f access.log.*.gz | goaccess -a -c command works fine. It doesn't ask for any date or log format, and I can view the goaccess menu and any data that I want. But when I try to get an HTML report with the zcat -f access.log.*.gz | goaccess -a -c > report.html command, it does nothing; it just waits and waits (without giving any warning or error). Note: I've checked these webpages if you want to take a look too: http://goaccess.prosoftcorp.com/faq http://wiki.nginx.org/HttpLogModule | nginx log analysis with goaccess
It doesn't have to be nginx in particular, but you want some kind of frontend server proxying to your application server for a few reasons:

So that you can run the Catalyst server on a high port, as an ordinary user, while running the frontend server on port 80.
To serve static files (ordinary resources like images, JS, and CSS, as well as any sort of downloads you might want to use X-Sendfile or X-Accel-Redirect with) without tying up a perl process for the duration of the download.
It makes things easier if you want to move on to a more complicated config involving e.g. Edge Side Includes, or having the webserver serve directly from memcached or mogilefs (both things that nginx can do), or a load-balancing / HA config. | I am trying to deploy my little Catalyst web app using Plack/Starman. All the documentation seems to suggest I want to use this in combination with nginx. What are the benefits of this? Why not use Starman straight up on port 80? | Why use nginx with Catalyst/Plack/Starman?
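As a concrete illustration of that frontend setup, a minimal nginx sketch; the Starman port 5000 and the static path are assumptions, not values from the answer:

server {
    listen 80;
    server_name example.com;

    # let nginx serve static assets without tying up a perl process
    location /static/ {
        alias /var/www/myapp/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:5000;   # Starman/PSGI backend on a high port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}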
Ok, let me explain something. You already have a localhost server, which is defined inside a file called default; that is the file that causes the "Welcome to nginx" page to appear, and I believe you can't create a new server with the same server_name. Let's remove that and make your localhost serve only those images.

First we need to delete the default file from sites-enabled; it will still exist inside sites-available if you ever want to get it back. (Note that all files inside sites-enabled are simply symlinks to the files inside sites-available.)

We create a new file inside sites-available and call it whatever you want, images-app for example.

Create the new server inside the images-app file. I'll assume that the root of the app is inside a folder called /data; of course you will map that to your own server structure.

server {
server_name localhost;
root /data;
index index.html;
location / {
try_files $uri =404;
}
}

Now we go to sites-enabled and enable this site we created inside sites-available:

sudo ln -s /etc/nginx/sites-available/images-app /etc/nginx/sites-enabled/

Make sure that all the nginx config is correct:

sudo nginx -t

If nothing is wrong we can go ahead and reload the nginx settings:

sudo service nginx reload | I am completely new to nginx and I am asked to find a way to serve Map Tiles that are separated according to the zoom levels. The image file structure is like ~/data/images/7/65/70.png where 7 is the zoom level, 65 and 70 are the lon-lat values. The folder 65 contains many files such as 71.png, 72.png, etc. I have installed Nginx properly and I can get the "Welcome to nginx" message. I have followed the instructions in http://nginx.org/en/docs/beginners_guide.html and created the /data/www and /data/images directories. I have placed an index.html file under /data/www and tile images under /data/images. Then I modified the configuration file by adding the following lines in the http tags:

server {
location / {
root /data/www;
}
location /images/ {
root /data;
}
}

After reloading the config file and entering localhost in the browser I can neither get the index.html file nor see the images. What I am trying to do is to display the image when I enter something like: http://localhost/1.0.0/basemap/7/65/70.png

7: folder indicating 7th zoom level
65: folder indicating the latitude
70.png: file indicating the longitude (folder 65 includes many png files)

What am I missing? | How to serve images with nginx
You are on the right track... Just install nginx on your EC2. In my case I had Linux Ubuntu 14.04 installed on "Digital Ocean".

First I updated the apt-get package lists:

sudo apt-get update

Then install Nginx using apt-get:

sudo apt-get install nginx

Then open the default server block configuration file for editing:

sudo vi /etc/nginx/sites-available/default

Delete everything in this configuration file and paste the following content:

server {
listen 80 default_server;
root /path/dist-nginx;
index index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
}

To make the changes active, restart the webserver nginx:

sudo service nginx restart

Then copy index.html and the bundle files to /path/dist-nginx on your server and you are up and running. | I am actually learning Angular 2 with Typescript and developed a little app based on the angular-seed project (angular-seed). I have built the app for production purposes and got a dist folder ready to be deployed containing my bundle files like this:

dist/
main.bundle.js
main.map
polyfills.bundle.js
polyfills.map
vendor.bundle.js
vendor.map

However, as a fresher, I have no idea how to deploy it now on my EC2 server. I read that I have to configure an Nginx server to serve my static files, but do I have to configure it particularly to work with my bundle files? Excuse my mistakes if any. Thanks a lot in advance! | How can I deploy my Angular 2 + Typescript + Webpack app
As long as your new user (nginx in your case) has the proper rights, everything should work. You have to change your user setting in nginx.conf:

...
user nginx;
...

and restart/reload your server. Link to docs. | I have a manual install of nginx on Ubuntu 12.04. When I ran ./configure I used the following options:

./configure --user=www-data --group=www-data --with-http_ssl_module --with-http_realip_module

Now the nginx worker processes run under the www-data user in the www-data group. However, I wish to change this to a different user (called nginx in my case). Is this possible to do after running make and make install already? Any help would be much appreciated. | Changing the user that nginx worker processes run under (Ubuntu 12.04)
Turns out my nginx config was ok. The problem was that my gunicorn server was not running properly. | I'm using nginx as a proxy server to forward requests onto my gunicorn server. When I run sudo nginx -t -c /etc/nginx/sites-enabled/mysite I get the following error:

[emerg]: unknown directive "upstream" in /etc/nginx/sites-enabled/mysite:1
configuration file /etc/nginx/sites-enabled/mysite test failed

Any idea how to fix it? This is my nginx config:

upstream gunicorn_mysite {
server 127.0.0.1:8000 fail_timeout=0;
}
server {
listen 80;
server_name example.com;
access_log /usr/local/django/logs/nginx/mysite_access.log;
error_log /usr/local/django/logs/nginx/mysite_error.log;
location / {
proxy_pass http://gunicorn_mysite;
}
}

I'm running Ubuntu 10.04 and my nginx version is 0.7.65, which I installed from apt. This is the output when I run nginx -V:

nginx version: nginx/0.7.65
TLS SNI support enabled
configure arguments: --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug --with-http_stub_status_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-mail --with-mail_ssl_module --with-ipv6 --add-module=/build/buildd/nginx-0.7.65/modules/nginx-upstream-fair | nginx unknown directive "upstream" |
You should use /admin/1/ in your inner location block, as the inner URIs are not relative to the outer URIs. You can see that this is the issue based on the following snippet from the error message you included:

location "1/" is outside location "/admin/" | Hi, I'm trying to get the following to work! I'm basically trying to allow the following URLs to be passed to the proxy_pass directive by either of these two URLs: http://example.com/admin/1 or http://example.com/admin/2/

I have the following config:

location /admin/ {
# Access shellinabox via proxy
location 1/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://example.com;
}
}

At the moment, an error is thrown:

2016/01/17 15:02:19 [emerg] 1#1: location "1/" is outside location "/admin/" in /etc/nginx/conf.d/XXX.conf:37
nginx: [emerg] location "1/" is outside location "/admin/" in /etc/nginx/conf.d/XXX.conf:37 | Nested locations in nginx |
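For completeness, a sketch of the corrected configuration with the inner URI made absolute, as the answer describes (directives copied from the question):

location /admin/ {
    # inner prefix must be inside the outer prefix, hence /admin/1/
    location /admin/1/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://example.com;
    }
}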
Currently we have two options to solve this:

Option 1: Duplicated locations. NGINX looks for the best match (a little better performance).

location /post/ {
post config stuff;
.
.
.
}
location ~* ^/post/.*\.(css|js|png|gif)$ {
post/files.(css|js|png|gif) config stuff;
expires max;
add_header Pragma public;
add_header Cache-Control "public";
}
location /user/ {
user folder config stuff;
.
.
.
}
location ~* ^/user/.*\.(css|js|png|gif)$ {
user/files.(css|js|png|gif) config stuff;
.
.
.
}

Option 2: Nested locations. Filtered by extension in the inner location blocks.

location /post/{
...
location ~* \.(css|js|png|gif)$ {
expires max;
add_header Pragma public;
add_header Cache-Control "public";
}
}
location /user/{
...
location ~* \.(css|js|png|gif)$ {
...
}
} | I have pictures, and I want to add their headers to max. I have profile pictures which can be changed, and post pictures; I want to add headers only for post pictures, but I have no idea how I can manage this. Thank you, this is my configuration.

This is the path of posts: /post/name-of-the-picture.jpg
this is the path of users, /user/name-of-the-picture.jpgI only want to add headers to post pathlocation ~* \.(css|js|png|gif)$ {
expires max;
add_header Pragma public;
add_header Cache-Control "public";
} | How to add headers to only specific files with nginx |
Option root /var/www/letsencrypt/; tells nginx "this is the base directory", so the final path will be /var/www/letsencrypt/.well-known/acme-challenge/. So, you have 2 options:

Change your path, for example to:

$ echo hi > /var/www/letsencrypt/.well-known/acme-challenge/hi

Change the behavior of nginx, so nginx will treat it as an alias:

location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
rewrite /.well-known/acme-challenge/(.*) /$1 break;
root /var/www/letsencrypt;
}

And don't forget to run killall -1 nginx to reload the config. | I'm not able to get nginx to return the files I've put in /var/www/letsencrypt.

nginx/sites-available/mydomain.conf

server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name my-real-domain.com;
include /etc/nginx/snippets/letsencrypt.conf;
root /var/www/mydomain;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}

nginx/snippets/letsencrypt.conf

location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/letsencrypt;
}

I run this command:

certbot certonly --webroot -w /var/www/letsencrypt/ -d my-real-domain.com

But the page that certbot tries to access is always a 404.

DEBUGGING

$ echo hi > /var/www/letsencrypt/hi
$ chmod 644 /var/www/letsencrypt/hi

Now I should be able to curl localhost/.well-known/acme-challenge/hi, but that does not work. Still 404. Any idea what I'm missing? | Configure Nginx to reply to http://my-domain.com/.well-known/acme-challenge/XXXX
You need to specify an absolute path for your root directive. Nginx uses the directory set at compile time using the --prefix switch; by default this is /usr/local/nginx. What this means is that your root, which is currently set to root home/laravel-app/, causes nginx to look for files at /usr/local/nginx/home/laravel-app/, which presumably isn't where your files are. If you set your root directive to an absolute path such as /var/www/laravel-app/public/, nginx will find the files. Similarly, you'll note that I added /public/ to the path above. This is because Laravel stores its index.php file there. If you were to just point at /laravel-app/, there's no index file and it'd give you a 403. | I keep getting 403 Forbidden. My settings:

/etc/nginx/sites-available/default

server {
listen 80;
root home/laravel-app/;
index index.php index.html index.htm;
server_name example.com;
location / {
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
# pass the PHP scripts to FastCGI server listening on the php-fpm socket
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

Update
I followed this instruction: here. Any hints/suggestions on this would be a huge help! | 403 Forbidden on nginx/1.4.6 (Ubuntu) - Laravel
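Applying the answer to the configuration above, the relevant lines would plausibly become the following; the absolute path is an assumption about where the app lives, and the try_files fallback to index.php is the usual Laravel front-controller pattern rather than something stated in the answer:

server {
    listen 80;
    server_name example.com;

    # absolute path, pointing at Laravel's public/ directory
    root /var/www/laravel-app/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}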
It is generally a bad security practice to have multiple independent apps on a single domain. However, I believe what you're facing here is a peculiarity of the way that try_files works. According to http://nginx.org/r/try_files:

If none of the files were found, an internal redirect to the uri specified in the last parameter is made.

Effectively, this means that if there had been an extra parameter after your /index.html specification (basically, anything at all), then your code would have worked as you expected; however, due to the lack of any such final parameter, what happens in each case is that everything gets redirected back to the / location, as if a GET /index.html HTTP/1.1 request had been made (except it's all done internally within nginx). So, as a solution, you can either fix the path for the internal redirect to remain within the same location (e.g., /projectX/index.html), or leave the paths alone but make the last parameter return an error code (e.g., =404, which should never be triggered as long as your file always exists). E.g., try_files $uri /projectX/index.html; or try_files $uri /index.html =404;. As in:

location /projectX/ {
alias /home/projectX/dist/;
try_files $uri /projectX/index.html; # last param is internal redirect
}Or:location /projectX/ {
alias /home/projectX/dist/;
try_files $uri /index.html =404;
}In summary, note well that/projectX/index.htmlwould only work as the last parameter, and/index.htmlwould only work as a non-final one. | I'm serving multipleangularapps from the sameserverblock inNginx. So in order to let the user browse directly to certain customAngularroutes I've declared without having to go through the home page (and avoid the 404 page), I'm forwarding these routes from nginx to each angular app'sindex.html, I've added atry_filesto eachlocation:server {
listen 80;
server_name website.com;
# project1
location / {
alias /home/hakim/project1/dist/;
try_files $uri /index.html;
}
# project2
location /project2/ {
alias /home/hakim/project2/dist/;
try_files $uri /index.html;
}
# project3
location /project3/ {
alias /home/hakim/project3/dist/;
try_files $uri /index.html;
}
}
}

This solution avoids the 404 error when going to an Angular route, but the problem is that when I browse to /project2/ or /project3/ it redirects to /project1/. That's obviously not what is expected, since I want each location to forward to the appropriate /project-i/index.html. | Serve multiple Angular apps from the same server with Nginx
Just found out about python-nginx, which works great out-of-the-box using only Python, and doesn't seem to need any C or extra required Python packages at all! It could improve its docs a bit; maybe I'll send a pull request for that. | I have a python script that dynamically alters an nginx config file (nginx.conf). Since nginx configuration is not in ini format, I currently use some regexps to parse and modify the file content. Is this the only way, or does some better way to programmatically alter nginx configuration exist? | Any good way to programmatically change nginx config file from python?
I'm having the same issue. Here's what I'm currently working on:

Option 1: use a single image for both nginx and the app

This way, I can build the image once (with the app, precompiled assets and nginx), then run two instances of it: one running the app server, and another for the nginx frontend:

docker build -t hello .
docker run --name hello-app hello rackup
docker run --name hello-web -p 80:80 --link hello-app:app hello nginx

Not pretty, but very easy to set up and upgrade.

Option 2: use a shared volume, and precompile assets as a job

Shared volumes cannot be updated in the build process, but can be updated by a container instance. So we can run our rake task to precompile the assets just before running our app:

docker build -t hello .
docker run -v /apps/hello/assets:/app/public/assets hello rake assets:precompile
docker run --name hello-app hello rackup
docker run --name hello-web -p 80:80 --link hello-app:app -v /apps/hello/assets:/usr/share/nginx/html/assets nginx

This looks like a more robust option, but will require more complex instrumentation. I'm leaning towards this option, however, since we'll need a separate job for database migrations anyway.

Option 3: distribute the assets to a CDN at build time

Your Dockerfile can upload the resulting assets directly to a CDN. Then you configure your Rails app to use it as the asset_host. Something like:

RUN rake assets:precompile && aws s3 sync public/assets s3://test-assets/

I'm currently experimenting with this option. Since I'm using Amazon CloudFront, it looks like I can just sync the resulting assets to S3 using the AWS CLI. There's also a gem for that (asset_sync), but it looks stale. | I have an nginx container separate from my rails container and want to be able to serve precompiled assets from rails with the nginx container. This sounds like a job for a volume container, but I have got myself confused after quickly needing to learn docker and reading the documentation endlessly. Has anybody had to deal with a similar situation? | Sharing precompiled assets across docker containers
The initial connection refers to the time taken to perform the initial TCP handshake and negotiate SSL (where applicable). The slowness could be caused by congestion, where the server has hit a limit and can't respond to new connections while existing ones are pending. You could look into some performance enhancements in your Nginx configuration. | My web app sits behind Nginx. Occasionally, the loading of my web page takes more than 10 seconds. I used Chrome DevTools to track the timing, and it looks like this: the weird thing is, when the page loads slowly, the initial connection time is always 11 seconds long. And after this slow request, subsequent loading of the same page becomes very fast. What is the possible problem that causes this? P.S. If this is caused by a resource limitation on my server, can I see some errors/warnings in some system log? | Why is the initial connection time for a HTTP request so long?
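If the stall does turn out to be server-side congestion, these are the usual nginx knobs to check; a sketch with illustrative values, not a tuned configuration:

worker_processes auto;        # one worker per CPU core

events {
    worker_connections 4096;  # raise the per-worker connection cap
}

http {
    keepalive_timeout 65;     # reuse connections instead of new handshakes
}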
I think you can manage this with map. If the header is present, map a variable to either the IP of the client or to an empty string, and use that value as the key of the zone. If the map does not match, the empty string will prevent rate limiting from happening. Something like this (not tested, but should work):

map $http_userandroidid $limit {
default "";
"~.+" $binary_remote_addr;
}

This will map an empty or missing userAndroidId header to "", and any other value to $binary_remote_addr. You can then use the $limit variable in your zone like this:

limit_req_zone $limit zone=one:10m rate=1r/s; | Maybe I am asking a poor question, but I want to apply rate limiting in nginx based on a custom HTTP header rather than IP-based. My IP-based configuration is working, but I am not able to get it working with a custom HTTP header. What I want is that if a particular header is present in the HTTP request then rate limiting should be applied, otherwise not.

conf file

http {
limit_req_zone $http_userAndroidId zone=one:10m rate=1r/s;
location ^~ /mobileapp{
set $no_cache 1;
# set rate limit by pulkit
limit_req zone=one burst=1;
limit_req_status 429;
error_page 429 /50x.html;
}
}
}

However, rate limiting is applied even if there is no header present.
P.S. userAndroidId is my request header. | Rate limit in nginx based on http header |
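Combining the pieces, a full sketch of the header-based limit; the zone name and rate are copied from the question, and as the answer notes the map approach is untested:

http {
    # empty string key disables limiting for requests without the header
    map $http_userandroidid $limit {
        default "";
        "~.+"   $binary_remote_addr;
    }

    limit_req_zone $limit zone=one:10m rate=1r/s;

    server {
        location ^~ /mobileapp {
            limit_req zone=one burst=1;
            limit_req_status 429;
        }
    }
}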
A reverse proxy setup (e.g. nginx forwarding HTTP requests to Starman) has the following advantages:

things are a bit easier to debug, since you can easily hit the backend server directly;
if you need to scale your backend server, you can easily use something like pound/haproxy between the frontend (static-serving) HTTP and your backends (Zope is often deployed like that);
it can be a nice sidekick if you are also using some kind of outward-facing, caching, reverse proxy (like Varnish or Squid) since it allows you to bypass it very easily.

However, it has the following downsides:

the backend server has to figure out the real originating IP, since all it will see is the frontend server address (generally localhost); there is almost always an easy way to find out the client IP address in the HTTP headers, but that's something extra to figure out;
the backend server does not generally know the original "Host:" HTTP header, and therefore cannot automatically generate an absolute URL to a local resource; Zope addresses this with special URLs to embed the original protocol, host and port in the request to the backend, but it's something you don't have to do with FastCGI/Plack/...;
the frontend cannot automatically spawn backend processes, like it could do with FastCGI for instance.

Pick your favourite pros/cons and make your choice, I guess ;-) | A very popular choice for running Perl web applications these days seems to be behind an nginx webserver proxying requests to either a FastCGI daemon or a PSGI-enabled webserver (e.g. Starman). There have been lots of questions as to why one would do this in general (e.g. Why use nginx with Catalyst/Plack/Starman?) and the answers seem to apply in both cases (e.g. allow nginx to serve static content, easy restart of the application server, load balancing, etc.). However, I am specifically interested in the pros/cons of using FastCGI vs a reverse-proxy approach. It seems that Starman is widely considered to be the fastest and best Perl PSGI application/web server out there, and I am struggling to see any advantages to using FastCGI at all. Both approaches seem to support:

UNIX domain sockets as well as TCP sockets
fork/process manager style servers as well as non-blocking event-based (e.g. AnyEvent) servers
signal handling/graceful restart
PSGI

Similarly, nginx configuration for either option is very similar. So why would you choose one over the other? | nginx and Perl: FastCGI vs reverse proxy (PSGI/Starman)
Nothing complex: the root directory in the nginx.conf is not defined correctly.

Checking the logs with kubectl logs <> -n <> shows why the 404 error is happening for a particular request:

xxx.xxx.xxx.xxx - - [02/Oct/2020:22:26:57 +0000] "GET / HTTP/1.1" 404 153 "-" "curl/7.58.0" 2020/10/02 22:26:57 [error] 28#28: *1 "/etc/nginx/html/index.html" is not found (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.1", host: "xxx.xxx.xxx.xxx"

It's because the location inside your configmap is referring to the wrong directory as root: root html.

Changing the location to a directory which contains index.html will fix the issue. Here is the working configmap with root /usr/share/nginx/html. This could be changed however you want, but we need to make sure the files exist in the directory.

apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
data:
nginx.conf: |
user nginx;
worker_processes 1;
events {
worker_connections 10240;
}
http {
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html; #Change this line
index index.html index.htm;
}
}
}

I have Kubernetes set up in a home lab and I am able to get a vanilla implementation of nginx running from a deployment. The next step is to have a custom nginx.conf file for the configuration of nginx. For this, I am using a ConfigMap. When I do this, I no longer receive the nginx index page when I navigate to http://192.168.1.10:30008 (my local IP address for the node the nginx server is running on). If I try to use the ConfigMap, I receive the nginx 404 page/message. I am not able to see what I am doing incorrectly here. Any direction would be much appreciated.

nginx-deploy.yaml

apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
data:
nginx.conf: |
user nginx;
worker_processes 1;
events {
worker_connections 10240;
}
http {
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 30008
selector:
app: nginx | Custom nginx.conf from ConfigMap in Kubernetes |
Yes, this is possible. Deploy your ingress controller, and deploy it with a NodePort service. Example:

---
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-controller
namespace: kube-system
labels:
k8s-app: nginx-ingress-controller
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 32080
protocol: TCP
name: http
- port: 443
targetPort: 443
nodePort: 32443
protocol: TCP
name: https
selector:
k8s-app: nginx-ingress-controller

Now, create an ingress with a DNS entry:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: myapp.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-service #obviously point this to a valid service + port
servicePort: 80

Now, assuming your static IP is attached to any kubernetes node running kube-proxy, have DNS updated to point to the static IP, and you should be able to visit myapp.example.com:32080 and the ingress will map you back to your app.

A few additional things: if you want to use a lower port than 32080, then bear in mind that if you're using CNI networking, you'll have trouble with hostport. It's recommended to have a load balancer listening on port 80; I guess you could just have nginx set up to do proxy pass, but it becomes difficult. This is why a load balancer with your cloud provider is recommended :) | So I'm using Kubernetes for a side project and it's great. It's cheaper to run for a small project like the one I'm on (a small cluster of 3-5 instances gives me basically everything I need for ~$30/month on GCP). The only area where I'm struggling is in trying to use the kubernetes Ingress resource to map into the cluster and fan out to my microservices (they're small Go or Node backends). I have the configuration set up for the ingress to map to different services and there's no problem there. I understand that you can really easily have GCP spin up a LoadBalancer when you create an ingress resource. This is fine, but it also represents another $20-ish/month that adds to the cost of the project. Once/if this thing gets some traction, that could be ignored, but for now and also for the sake of understanding Kubernetes better, I want to do the following:

get a static IP from GCP,
use it w/ an ingress resource
host the load-balancer in the same cluster (using the nginx load balancer)
avoid paying for the external load balancer

Is there any way this can even be done using Kubernetes and ingress resources? Thanks! | Create kubernetes nginx ingress without GCP load-balancer
At this time I use something like:

Dockerfile:

FROM php:fpm
COPY . /var/www/app/
WORKDIR /var/www/app/
RUN composer install
EXPOSE 9000
VOLUME /var/www/app/web

Dockerfile.nginx:

FROM nginx
COPY default /etc/nginx/default

docker-compose.yml:

app:
build:
context: .
web:
build:
context: .
dockerfile: Dockerfile.nginx
volumes_from: app

But in a few days, with the 17.05 release, we will be able to do it in one Dockerfile, something like:

FROM php:cli AS builder
COPY . /var/www/app/
WORKDIR /var/www/app/
RUN composer install && bin/console assets:dump
FROM php:fpm AS app
COPY --from=builder /var/www/app/src /var/www/app/vendor /var/www/app/
COPY --from=builder /var/www/app/web/app.php /var/www/app/vendo /var/www/app/web/
FROM nginx AS web
COPY default /etc/nginx/default
COPY --from=builder /var/www/app/web /var/www/app/web | I have a small theoretical problem with combination of php-fpm, nginx and app code in Docker.I'm trying to stick to the model when docker image does only one thing -> I have separate containers for php-fpm and nginx.php:
image: php:5-fpm-alpine
expose:
- 9000:9000
volumes:
- ./:/var/www/app
nginx:
image: nginx:alpine
ports:
- 3000:80
links:
- php
volumes:
- ./nginx/app.conf:/etc/nginx/conf.d/app.conf
- ./:/var/www/app

NOTE: In app.conf is root /var/www/app; (example schema from Symfony). This is great in development, but I don't know how to convert this to a production-ready state. Mounting the app directory in production is really bad practice (if I'm not wrong). In the best case I copy the app source code into the container and use this prebuilt code (COPY . /var/www/app in the Dockerfile), but in this case it is impossible, or I don't know how. I need to share the app source code between two containers (the nginx container and the php-fpm container) because both of them need it. Of course I can make my own nginx and php-fpm containers and add COPY . /var/www/app into both of them, but I think that is the wrong way because I duplicate code and the whole build process (install dependencies, build source code, etc.) must be in both (nginx/php-fpm) containers. I tried to search but I didn't find any idea how to solve this problem. A lot of articles show how to do this with a docker-compose file and mount the code with --volume, but I didn't find any example of how to use this in production (without volumes). The only acceptable solution for me (at this time) is to make one container with nginx and php-fpm together, but I'm not sure whether that is a good way (I try to find best practice). Do you have any experience with this or any idea how to solve it? Thanks for any response! | Docker production ready php-fpm and nginx configuration
Or you can simply put it in its own location:

location /robots.txt {
alias /Directory-containing-robots-file/robots.txt;
} | I am running nginx 0.6.32 as a proxy front-end for couchdb. I have my robots.txt in the database, reachable as http://www.example.com/prod/_design/mydesign/robots.txt. I also have my sitemap.xml, which is dynamically generated, on a similar URL. I have tried the following config:

server {
listen 80;
server_name example.com;
location / {
if ($request_method = DELETE) {
return 444;
}
if ($request_uri ~* "^/robots.txt") {
rewrite ^/robots.txt http://www.example.com/prod/_design/mydesign/robots.txt permanent;
}
proxy-pass http://localhost:5984;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

This appears to work as a redirect, but is there a simpler way? | How do i configure nginx to redirect to a url for robots.txt & sitemap.xml
If you're using AWS Linux 2, you have to install nginx from the AWS "Extras Repository". To see a list of the available packages:

# View list of packages to install
amazon-linux-extras list

You'll see a list similar to:

0 ansible2 disabled [ =2.4.2 ]
1 emacs disabled [ =25.3 ]
2 memcached1.5 disabled [ =1.5.1 ]
3 nginx1.12 disabled [ =1.12.2 ]
4 postgresql9.6 disabled [ =9.6.6 ]
5 python3 disabled [ =3.6.2 ]
6 redis4.0 disabled [ =4.0.5 ]
7 R3.4 disabled [ =3.4.3 ]
8 rust1 disabled [ =1.22.1 ]
9 vim disabled [ =8.0 ]
10 golang1.9 disabled [ =1.9.2 ]
11 ruby2.4 disabled [ =2.4.2 ]
12 nano disabled [ =2.9.1 ]
13 php7.2 disabled [ =7.2.0 ]
14 lamp-mariadb10.2-php7.2 disabled [ =10.2.10_7.2.0 ]

Use the amazon-linux-extras install command to install it, like:

sudo amazon-linux-extras install nginx1.12

More details are here: https://aws.amazon.com/amazon-linux-2/faqs/. | I am trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux to make use of HTTP/2. I followed the instructions that are described here -> http://nginx.org/en/linux_packages.html. I created a repo file /etc/yum.repos.d/nginx.repo with this content:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1

If I run yum update and yum install nginx I get this:

nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k

It seems that it still fetches from the amzn-main repo. How do I install a newer version of nginx?

-- edit --
I added "priority=10" to the nginx.repo file and now I can install 1.9.15 withyum install nginxwith this result:Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: systemd
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest | How to install nginx 1.9.15 on amazon linux disto |
I just did a quick test, and this worked for me:server {
location / {
# This proxy_pass is used for requests that don't
# match the limit_except
proxy_pass http://127.0.0.1:8080;
limit_except PUT POST DELETE {
# For requests that *aren't* a PUT, POST, or DELETE,
# pass to :9080
proxy_pass http://127.0.0.1:9080;
}
}
} | I have twoiKaaroinstances running on port 8080 and 9080, where the 9080 instance is Read only.I am unsure how to use nginx for example if the request method is POST, PUT, DELETE then send to write instance (8080) else send to 9080 instance.I have done something using the location using the regex, but this is not correct.Fromhttp://wiki.nginx.org/HttpLuaModulei see that there is the 'HTTP method constants' which can be called, so is it correct to add a location block as:location ~* "(ngx.HTTP_POST|ngx.HTTP_DELETE|ngx.HTTP_PUT)" {
proxy_pass http://127.0.0.1:8080;Thanks | nginx proxy_pass based on whether request method is POST, PUT or DELETE |
Assuming that you have installed all requirements and you are using the aptitude packages, then you don't need the wsgi.py. All the configuration is in the uwsgi ini/xml/yaml file (take the format that you prefer). Here is a minimal example.com file for nginx (/etc/nginx/sites-available/examplecom on Ubuntu 11.10):

server {
listen 80;
server_name example.com;
access_log /var/log/nginx/projectname.log;
location /media {
alias /vagrant/test/projectname/media/;
}
location /static {
alias /vagrant/test/projectname/static/;
}
location / {
uwsgi_pass unix:///run/uwsgi/projectname/socket;
include uwsgi_params;
}
}Create a symbolic link to /etc/nginx/sites-enabledsudo ln -s /etc/nginx/sites-available/examplecom /etc/nginx/sites-enabled/examplecomorsudo /usr/sbin/nxensite examplecomYou are done with NGINX.Go to/etc/uwsgi/apps-availableand create your ini filesudo vim /etc/uwsgi/apps-available/projectname.ini
[uwsgi]
virtualenv=/home/vagrant/.virtualenvs/projectenv
thread=3
master=1
env = DJANGO_SETTINGS_MODULE=projectname.settings
module = django.core.handlers.wsgi:WSGIHandler()
chdir = /path/to/my/django/project
socket = /run/uwsgi/projectname/socket
logto = /var/log/uwsgi/projectname.log

Point your ini to /etc/uwsgi/apps-enabled/projectname.ini:

sudo ln -s /etc/uwsgi/apps-available/projectname.ini /etc/uwsgi/apps-enabled/projectname.ini

For more information, see any of these files on your system:

/etc/uwsgi/apps-available/README
/etc/uwsgi/apps-enabled/README
/usr/share/doc/uwsgi/README.Debian.gz
/etc/default/uwsgi

You are done. You can now restart nginx & uwsgi:

sudo service nginx restart
sudo service uwsgi restart

Cheers! | I'm trying to deploy a django project. I tried a lot of tutorials, but had no luck. I am using a new, clean Ubuntu 11.10. I've performed:

apt-get install nginx
apt-get install uwsgi
service nginx start

I've created the folder /deploy/project1 and put manage.py and other files there. My current /deploy/project1/project1/wsgi.py contains:

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project1.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

So, could you tell me how to deploy my django app for domain.com properly? I've also installed Django via pip and easy_install. What should I add in /etc/nginx/sites-enabled/default? | Django + uWSGI via NGINX on Ubuntu 11.10
Our setup is the same as yours, only using map instead of if/set (as recommended by the nginx devs).

# Sets a $real_scheme variable whose value is the scheme passed by the load
# balancer in X-Forwarded-Proto (if any), defaulting to $scheme.
# Similar to how the HttpRealIp module treats X-Forwarded-For.
map $http_x_forwarded_proto $real_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}

P.S. I agree, a real_scheme module would be nice! | Is it possible to force the nginx $scheme value to "https" if nginx is running behind a load balancer? In my scenario the load balancer takes care of HTTPS communication with the client and forwards requests to nginx as raw HTTP. I know I can do something like this to detect HTTPS:

set $my_scheme "http";
if ($http_x_forwarded_proto = "https") {
set $my_scheme "https";
}

But I'm just curious if there is something like the real_ip_header function for IPs. Are there also some headers I need to update when detecting HTTPS manually? | nginx $scheme variable behind load balancer
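Once $real_scheme is defined by the map above, it can be used wherever $scheme would have been; a small, untested sketch (the backend address is an assumption):

server {
    location / {
        proxy_pass http://127.0.0.1:8000;
        # forward the effective scheme to the backend
        proxy_set_header X-Forwarded-Proto $real_scheme;
    }
}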
An nginx location block doesn't match the query string at all, so it's impossible.

Location: this directive allows different configurations depending on the URI. In nginx, there is a built-in variable $uri, which the location block is matched against. For example, given a request http://www.example.com/app/login.php?username=xyz&password=secret the $uri value is this string:

/app/login.php

and the query string is stored in the nginx variable $args:

username=xyz&password=secret

To do something with respect to the query string, you can do something like:

if ($args ~ username=xyz) {
# do something for requests with this query string
}

But be careful, IF is Evil. | I'd like to match a question mark "?" as a regexp in an nginx.conf location. For example, a URL pattern which I'd like to match is /something?foo=5 or /something?bar=8 (only the parameter changes). Because nginx adopts PCRE, I can write the location in nginx.conf as follows:

location ~ ^/something\?.* {
}

The above doesn't match the URL pattern. How can I do that? Also, the following is not what I expect:

location ~ ^/something?.* {
}

It'll match /something_foo_bar_buzz, which I don't expect. | How to match question mark "?" as regexp on nginx.conf location
If nginx is not listening on port 8001, it cannot know which port to use in the redirect. You will need to specify it explicitly:

location ~ /foo(.*)$ {
return 301 $scheme://$http_host$1;
}
location ~ /adminfoo(.*)$ {
return 301 $scheme://$http_host/admin$1;
}

The $http_host variable consists of the hostname and port from the original request. See this document for details. | I'm trying to collapse a second brand of a web app into the first brand and use 301 redirects to redirect any lingering traffic. The server is running in a Vagrant box forwarding on port 8001. I would like to have:

Instead of https://local-dev-url:8001/foo/(anything), 301 to https://local-dev-url:8001/(anything)
Instead of https://local-dev-url:8001/adminfoo/(anything), 301 to https://local-dev-url:8001/admin/(anything).

Here's what I have:

location ~ /foo/?(.*)$ {
return 301 $1/;
}
location ~ /adminfoo/?(.*)$ {
return 301 admin/$1/;
}
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header Authorization $http_authorization;
proxy_set_header X-Scheme $scheme;
proxy_pass http://127.0.0.1:5000;
proxy_redirect http:// $scheme://;
}
location /admin/ {
alias /hostonly/path/to/admin/stuff/;
}

However, instead of redirecting https://local-dev-url:8001/foo/ to https://local-dev-url:8001/, it is 301ing to https://local-dev-url// instead. (No port number, extra slash.) I've seen answers that hard-code the URL of the redirect, but since I work with a lot of other devs and we all have unique local dev URLs, the only consistent part is the :8001 port number. Is there a way to configure the 301 to work as desired? | nginx keep port number when 301 redirecting
After a lucky find in further research (http://answerpot.com/showthread.php?577619-Several%20Bugs/Page2) I found something that helped...Supplying theuwsgi_pass_request_body off;parameter in the Nginx conf resolves this problem... | I have a django app hosted via Nginx and uWsgi. In a certain very simple request, I get different behaviour for GET and POST, which should not be the case.The uWsgi daemon log:[pid: 32454|app: 0|req: 5/17] 127.0.0.1 () {36 vars in 636 bytes} [Tue Oct 19 11:18:36 2010] POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
[pid: 32455|app: 0|req: 5/18] 127.0.0.1 () {32 vars in 521 bytes} [Tue Oct 19 11:18:50 2010] GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)The Nginx accesslog:127.0.0.1 - - [19/Oct/2010:18:18:36 +0200] "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 0 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
127.0.0.1 - - [19/Oct/2010:18:18:50 +0200] "GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 80 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"The Nginx errorlog:2010/10/19 18:18:36 [error] 4615#0: *5 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0", upstream: "uwsgi://unix:sock/uwsgi.sock:", host: "localhost:9201"In essence, Nginx somewhere loses the response if I use POST, not so if I use GET.Anybody knows something about that? | Nginx connection reset, response from uWsgi lost |
To start puma with socket binding, just use the /tmp directory:

bundle exec puma -e development -b unix:///tmp/my_app.sock

To access the application through a domain name you should use something like nginx and configure it. To install nginx on Ubuntu, just run the following command:

sudo apt-get install nginx

Run sudo nano /etc/nginx/sites-available/my_app.conf and place the configuration below into this file (Ctrl + X, Y to save changes):

upstream my_app {
server unix:///tmp/my_app.sock;
}
server {
listen *:80;
server_name my_app.com;
access_log /var/log/nginx/my_app-access.log;
location /favicon.ico {
root /var/www/my_app/public/assets/favicon.ico;
gzip_static on;
expires max;
add_header Cache-Control public;
}
location / {
root /var/www/my_app/public;
try_files $uri @app;
gzip_static on;
expires max;
add_header Cache-Control public;
}
location @app {
proxy_pass http://my_app;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_next_upstream error timeout invalid_header http_502;
}
}You should change/var/www/my_appandmy_app.comto appropriate values.Add symlink into enabled sitessudo ln -fns /etc/nginx/sites-available/my_app.conf /etc/nginx/sites-enabled/Restart nginx:sudo service nginx restart.Link your domain name to server IP (viahosts-file or DNS-provider). | I have followedthis linkto configure nginx with puma
but when I start the server withbundle exec puma -e development -b unix:///var/run/my_app.sockit throwsPermission denied - "/var/run/my_app.sock" (Errno::EACCES) error.but when I start the server withbundle exec puma -e developmentit is started withtcp://0.0.0.0:9292my_app.sock file does not exist in /var/run/how do I start the server with unix socket and access the application through the domain name given in themy_app.conffile.Can you please anyone help me?. | how to start puma with unix socket |
FYI: it is a problem of the php-fpm image. It is not about usernames; it is about the www-data user ID.

What to do: fix your php-fpm container and don't break the good nginx container.

Solutions:

Here is my post with a solution for docker-compose (nginx + php-fpm (alpine)): https://stackoverflow.com/a/36130772/1032085
Here is my post with a solution for a php-fpm (debian) container: https://stackoverflow.com/a/36642679/1032085

Solution for the official php-fpm image. Create a Dockerfile:

FROM php:5.6-fpm
RUN usermod -u 1000 www-data | I'm using Docker Hub's official nginx image:https://hub.docker.com/_/nginx/The user of nginx (as defined in /etc/nginx/nginx.conf) isnginx. Is there a way to make nginx run aswww-datawithout having to extend the docker image? The reason for this is, I have a shared volume, that is used by multiple containers -php-fpmthat I'm running aswww-dataandnginx. The owner of the files/directories in the shared volume iswww-data:www-dataandnginxhas trouble accessing that - errors similar to*1 stat() "/app/frontend/web/" failed (13: Permission denied)I have adocker-compose.ymland run all my containers, including the nginx one withdocker-compose up....
nginx:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./:/app
- ./vhost.conf:/etc/nginx/conf.d/vhost.conf
links:
- fpm
... | How to change the nginx process user of the official docker image nginx? |
TCP has no concept of server names, so this is not possible. It only works
in HTTP because the client sends the hostname it is trying to access as
part of the request, allowing nginx to match it to a specific server block.

Source: https://forum.nginx.org/read.php?2,263208,263217#msg-263217 | Current setup as follows:

stream {
server {
listen 9987 udp;
server_name subdomain.EXAMPLE.com; # this line is resulting in an error
proxy_pass localhost:9987;
proxy_timeout 1s;
proxy_responses 1;
error_log logs/dns.log;
}
}

Is this possible?

$ nginx -t
$ nginx: [emerg] "server_name" directive is not allowed here in /etc/nginx/nginx.conf:15

Works just fine without server_name, but I'd like to use a sub-domain if possible. (I am using a build with --with-stream; that's not my issue.) | nginx server_name inside stream block possible?
Run in development mode by setting the FLASK_ENV environment variable to development. Unhandled errors will show a stack trace in the terminal and the browser instead of a generic 500 error page.

export FLASK_ENV=development  # use `set` on Windows
flask run

Prior to Flask 1.0, use FLASK_DEBUG=1 instead. If you're still using app.run (no longer recommended in Flask 0.11), pass debug=True.

if __name__ == '__main__':
app.run(debug=True)

In production, you don't want to run your app in debug mode. Instead you should log the errors to a file. Flask uses the standard Python logging library, which can be configured to log errors. Insert the following to send Flask's log messages to a file.

import logging
handler = logging.FileHandler('/path/to/app.log') # errors logged to this file
handler.setLevel(logging.ERROR) # only log errors and above
app.logger.addHandler(handler)  # attach the handler to the app's logger

Read more about the Python logging module. In particular you may want to change where errors are logged, or change the level to record more than just errors. Flask has documentation for configuring logging and handling errors. | I'm running my Flask application with uWSGI and nginx. There's a 500 error, but the traceback doesn't appear in the browser or the logs. How do I log the traceback from Flask?

uwsgi --http-socket 127.0.0.1:9000 --wsgi-file /var/webapps/magicws/service.py --module service:app --uid www-data --gid www-data --logto /var/log/magicws/magicapp.log

The uWSGI log only shows the 500 status code, not the traceback. There's also nothing in the nginx log.

[pid: 18343|app: 0|req: 1/1] 127.0.0.1 () {34 vars in 642 bytes}
[Tue Sep 22 15:50:52 2015]
GET /getinfo?color=White => generated 291 bytes in 64 msecs (HTTP/1.0 500)
2 headers in 84 bytes (1 switches on core 0) | Flask application traceback doesn't show up in server log |
location / {
    rewrite ^/(.*)$ http://www.google.com/search?q=$1 permanent;
}
location /blog {
    root html;
    index index.php;
    try_files $uri $uri/ /blog/index.php;
}
Explanation:
A location can be followed by a path string (called a prefix string) or by a regex. A regex starts with ~ (for case-sensitive matching) or with ~* (for case-insensitive matching). Prefix strings that start with ^~ make nginx ignore any potential regex matching (more about it below).
Nginx uses the following rules to determine which location to use (all matching is done against the normalized URI; see [1] for more details):
1. The regex that matches the URI (except if there is a prefix string that matches the URI and that prefix string starts with ^~). If there are multiple regex matches, nginx uses the first match found as listed in the conf file.
2. The longest matching prefix string.
Here are a few examples (obtained from [1]):
location = / {
    [ configuration A ]
}
location / {
    [ configuration B ]
}
location /documents/ {
    [ configuration C ]
}
location ^~ /images/ {
    [ configuration D ]
}
location ~* \.(gif|jpg|jpeg)$ {
    [ configuration E ]
}
The “/” request will match configuration A.
The “/index.html” request will match configuration B. Note that it doesn't match configuration A because A has a = symbol (location = /), which forces an exact match.
The “/documents/document.html” request will match configuration C.
The “/images/1.gif” request will match configuration D. Note that the prefix string has ^~, which tells nginx not to look for potential regex matches (otherwise that request would match configuration E, since regex takes precedence over prefix strings).
The “/documents/1.jpg” request will match configuration E.
Useful docs:
[1] http://nginx.org/r/location
[2] http://nginx.org/en/docs/http/request_processing.html | I'm using nginx 1.0.8 and I'm trying to redirect all visitors from www.mysite.com/dir to the google search page http://www.google.com/search?q=dir where dir is a variable; however, if dir == "blog" (www.mysite.com/blog) I just want to load the blog content (Wordpress). Here is my config:
location / {
    root html;
    index index.html index.htm index.php;
}
location /blog {
    root html;
    index index.php;
    try_files $uri $uri/ /blog/index.php;
}
location ~ ^/(.*)$ {
    root html;
    rewrite ^/(.*) http://www.google.com/search?q=$1 permanent;
}
If I do this, even www.mysite.com/blog will be redirected to the google search page. If I delete the last location, www.mysite.com/blog works great. From what I've read here: http://wiki.nginx.org/HttpCoreModule#location it seems that the priority will be first on regular expressions and that the first regular expression that matches the query will stop the search. Thanks | nginx redirect all directories except one
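Given the precedence rules explained in the answer above, an equivalent approach keeps the catch-all regex but shields the blog with the ^~ modifier; a sketch using the question's own paths:
location ^~ /blog {
    root html;
    index index.php;
    try_files $uri $uri/ /blog/index.php;
}
location ~ ^/(.*)$ {
    rewrite ^/(.*) http://www.google.com/search?q=$1 permanent;
}
Because /blog matches the ^~ prefix, nginx never consults the regex for those requests.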
To expand on the previous answers: you should be able to modify the following code and have nginx directly serve your download files whilst still having the files protected. First of all, add a location such as:
location /files/ {
    alias /true/path/to/mp3/files/;
    internal;
}
to your nginx.conf file (the internal makes this not directly accessible). Then you need a Django view, something like this:
def song_download(request, song_id):
    try:
        song = Song.objects.get(id=song_id)
        response = HttpResponse()
        response['Content-Type'] = 'application/mp3'
        response['X-Accel-Redirect'] = '/files/' + song.filename
        response['Content-Disposition'] = 'attachment;filename=' + song.filename
    except Exception:
        raise Http404
    return response
which will hand off the file download to nginx. | So I of course know that serving static files through Django will send you straight to hell, but I am confused on how to use a custom URL to mask the true location of the file using Django. Django: Serving a Download in a Generic View - but the answer I accepted seems to be the "wrong" way of doing things.
urls.py:
url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),
views.py:
def song_download(request, song_id):
    song = Song.objects.get(id=song_id)
    fsock = open(os.path.join(song.path, song.filename))
    response = HttpResponse(fsock, mimetype='audio/mpeg')
    response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
    return response
This solution works perfectly, but not perfectly enough it turns out. How can I avoid having a direct link to the mp3 while still serving through nginx/apache?
EDIT 1 - ADDITIONAL INFO
Currently I can get my files by using an address such as: http://www.example.com/music/song/1692/download/ But the above mentioned method is the devil's work. How can I accomplish what I get above while still making nginx/apache serve the media? Is this something that should be done at the webserver level? Some crazy mod_rewrite?
http://static.example.com/music/Aphex%20Twin%20-%20Richard%20D.%20James%20(V0)/10%20Logon-Rock%20Witch.mp3
EDIT 2 - ADDITIONAL ADDITIONAL INFO
I use nginx for my frontend and reverse proxy back to an apache/development server, so I think if it does require some sort of mod_rewrite work I will have to find something that would work with nginx. | Django: Serving Media Behind Custom URL
You need to tell nginx to make environment variables available. From the docs for the env directive: "By default, nginx removes all environment variables inherited from its parent process except the TZ variable. This directive allows preserving some of the inherited variables, changing their values, or creating new environment variables." So, in your case you'd need to specify env PATH; in nginx.conf. | I have some Lua code, which I use in my openresty nginx.conf file. This Lua code contains such lines:
...
local secret = os.getenv("PATH")
assert(secret ~= nil, "Environment variable PATH not set")
...
Just for testing reasons I tried to check if the PATH variable is set, and for some reason the assert statement does not pass. I see in the console: Environment variable PATH not set. However, when I run $ echo $PATH I see that this variable indeed has some value. So, what is wrong with that and how can I fix it? | Unable to use environment variables in Lua code
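A sketch of the placement, since it trips people up: env is only valid in the main context of nginx.conf, outside the http block:
# nginx.conf
env PATH;          # preserve PATH for worker processes (and the Lua code)
http {
    # server blocks, Lua handlers, etc.
}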
When defining an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides if your upstream is down or not based on fail_timeout (default 10s) and max_fails (default 1). So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down, and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. Better explained here: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/ I had a similar problem, and you can prevent this by overriding those settings. For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;  # max_fails is a parameter of the server directive, not of check
} | I have a really weird issue with NGINX. I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 ;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
    rewrite ^ $command break;
    proxy_pass https://files_1 ;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}In /etc/hosts:127.0.0.1 localhost mymachineWhen I do:wget https://mynachine:6006/alive --no-check-certificate, I getHTTP request sent, awaiting response... 200 OK. I also verified that port 6006 is listening with netstat, and its OK.But when I send to the NGINX file server a request, I get the following error:no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1, upstream: "https://files_1/save"But the upstream is OK. What is the problem? | No live upstreams while connecting to upstream, but upsteam is OK |
I would use your static directory as document root. This ensures that nobody can execute /dynamic.php directly; however, it will be forwarded to your index.php by the named location @php. This configuration example is untested!
server {
    index index.php;
    root /var/www/foo/static;
    server_name foo.bar *.foo.bar;
    location / {
        try_files $uri @php;
    }
    location @php {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm-foo.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/foo/index.php;
    }
}
You don't need the listen directive if it only contains 80, since this is the default.
The server_name entries should not contain a leading dot.
The $uri always contains the requested URI including the leading slash (e.g. /static.html) and nginx will prefix it with the document root upon invocation of try_files (e.g. /var/www/foo/static.html). Hence, you need to set your static directory before the $uri (e.g. /static$uri becomes /var/www/foo/static/static.html).
You don't need fastcgi_split_path_info because you are not using that feature.
Your try_files in your PHP location makes it impossible for nginx to properly forward things. A request for /dynamic.html does not end in .php; hence, try_files always fails. | I want to serve static HTML files with NGINX, but if the file is missing, it should load a PHP file instead and PHP should handle the content. I've been testing several combinations of try_files, but I can't get my head around it. I have a dummy PHP app that looks like this:
./
../
dynamic.php
index.php
static/
static/static.html
Then I have a small PHP code on index like this: 0) {
    if ($matches[1] == "dynamic") {
        require 'dynamic.php';
    } else {
        echo "Not found!";
    }
} else {
    echo "Index page!";
}
The results of browsing to each page should be:
http://foo.bar/ - Loads index.php
http://foo.bar/static.html - Loads static/static.html
http://foo.bar/dynamic.html - Loads index.php & PHP requires dynamic.php
http://foo.bar/baz.html - Loads index.php with "not found" message
This is what I got in the NGINX config file:
server {
    listen 80;
    server_name .foo.bar *.foo.bar;
    access_log /var/log/nginx/foo.access.log;
    error_log /var/log/nginx/foo.error.log;
    root /var/www/foo;
    index index.php;
    location / {
        # Trying with 'try_files' here. No success.
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm-foo.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I've been trying repeatedly and evidently utterly failing with this line:
try_files $uri $uri/static /index.php;
I am missing something. Help? | Nginx to serve static page before dynamic
Read about audit2allow and use it to create a policy to allow access to the denied requests for Nginx.
Step 1 involves running audit2allow targeting nginxlocalconf:
$ sudo grep nginx /var/log/audit/audit.log | \
  grep denied | audit2allow -m nginxlocalconf > nginxlocalconf.te
Step 2, review the results:
$ cat nginxlocalconf.te
module nginxlocalconf 1.0;
require {
    type httpd_t;
    type var_t;
    type transproxy_port_t;
    class tcp_socket name_connect;
    class file { read getattr open };
}
#============= httpd_t ==============
#!!!! This avc can be allowed using the boolean 'httpd_can_network_connect'
allow httpd_t transproxy_port_t:tcp_socket name_connect;
allow httpd_t var_t:file { read getattr open };
Step 3, review the steps to activate:
$ sudo grep nginx /var/log/audit/audit.log | grep denied | \
audit2allow -M nginxlocalconf
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i nginxlocalconf.pp | I'm having an application listening on port 8081 and Nginx running on port 8080. The proxy_pass statement looks like:
$ cat /var/etc/opt/lj/output/services/abc.servicemanager.conf
location /api/abc.servicemanager/1.0 { proxy_pass http://localhost:8081;}
In nginx.conf, I include this file as: include /etc/nginx/conf.d/services/*.conf;
The /etc/nginx/conf.d/services is a symlink:
# ll /etc/nginx/conf.d/
lrwxrwxrwx. 1 root root 39 Dec 10 00:19 services -> ../../../var/etc/opt/lj/output/services
This is a CentOS 7.0 SELinux-enabled system. If I setenforce 0 and make it Permissive, I don't see any issues. So the file is in the right place and there are no issues with paths. If SELinux is enforcing, I see the following in the audit log:
type=AVC msg=audit(1418348761.372:100930): avc: denied { getattr } for pid=3936 comm="nginx" path="/var/etc/opt/lj/output/services/abc.servicemanager.conf" dev="xvda1" ino=11063393 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=file
I want to know how to enable Nginx to find the conf file without having to disable SELinux. | proxy_pass isn't working when SELinux is enabled, why?
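Where relabeling the files is acceptable, an alternative to a custom policy module is to give the directory a type httpd_t can already read; a sketch using the question's path:
$ sudo semanage fcontext -a -t httpd_sys_content_t "/var/etc/opt/lj/output/services(/.*)?"
$ sudo restorecon -Rv /var/etc/opt/lj/output/services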
I just had this same issue and found a solution. My base href is "/", however. Below is my nginx.conf:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name mysite.com www.mysite.com;
        root /usr/share/nginx/html;
        location / {
            try_files $uri$args $uri$args/ /index.html;
        }
    }
} | My current app uses routes like this: /myapp/, /myapp//, /myapp/dept/. My app is currently deployed on an internal HTTP server with NGINX. The other server, which accepts external traffic, also runs NGINX and forwards it to the internal server. I have added baseref=/myapp to the index.html as per documentation. If the user goes to http://www.myexternalserver.com/myapp, the app works perfectly. If the user is inside the page and clicks on an internal link like http://www.myexternalserver.com/myapp/myparameter, it works. The URL in the browser changes, the page is displayed as intended. I am guessing it's processed by Angular 2. Unfortunately, when a user types in the URL directly: http://www.myexternalserver.com/myapp/myparameter, I get a 404 error made by NGINX. I think I have to configure NGINX settings but I don't know how I should modify NGINX's config or what to put in the sites-available/default file. | NGINX and Angular 2
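The essential piece in the answer above is the try_files fallback: any path that isn't a real file on disk is answered with index.html, so the Angular router can resolve deep links. If the app lived under a sub-path instead of base href "/", the fallback would be scoped accordingly; a sketch with an assumed prefix:
location /myapp/ {
    try_files $uri $uri/ /myapp/index.html;
}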
Your error message tells you it comes from the nginx configuration. You need to increase client_max_body_size in your nginx.conf server config, e.g.:
http {
    server {
        client_max_body_size 20M;
        listen 80;
        server_name test.com;
    }
} | I'm currently hosting a django project on Apache + nginx. When I try to upload a large file I get a 413 request entity too large error message. I also have a django-cms project, and when I tried to upload a file which is anything over 5meg I get an error code 64, The web server connection was closed. Thanks in advance, | 413 request entity too large + The web server connection was closed | Error 64
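client_max_body_size is accepted in the http, server, and location contexts, so the limit can also be raised only where uploads actually happen; a sketch with an assumed upload path:
location /upload/ {
    client_max_body_size 50M;
}
After changing it, reload nginx (nginx -s reload) for the new limit to take effect.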
You need to add the following line:
proxy_set_header X-Forwarded-Proto https;
as in
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
    if (!-f $request_filename) {
        proxy_pass http://upstreamy;
        break;
    }
} | server declaration in my nginx.conf:
listen 1.2.3.4:443 ssl;
root /var/www/myapp/current/public;
ssl on;
ssl_certificate /etc/nginx-cert/server.crt;
ssl_certificate_key /etc/nginx-cert/server.key;
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    if (!-f $request_filename) {
        proxy_pass http://upstreamy;
        break;
    }
}
upstream declaration in nginx.conf:
upstream upstreamy {
    server unix:/var/www//myapp/shared/sockets/unicorn.sock fail_timeout=0;
}
This works fine; myapp is reachable as https://somehost but the app is generating http URLs for redirects, so for instance when authenticating with devise, the / is redirected to http://somehost/user/sign_in instead of https (from the viewpoint of the rails app, it's all http anyway). I tried proxy_pass https://upstreamy; but that just tries to encrypt traffic between nginx and the unicorns that run the rails app. I also tried, in application_helper.rb:
# http://stackoverflow.com/questions/1662262/rails-redirect-with-https
def url_options
  super
  @_url_options.dup.tap do |options|
    options[:protocol] = Rails.env.production? ? "https://" : "http://"
    options.freeze
  end
but it seems to not work. How would one solve this? Edit: so, the goal is not to make the rails app require ssl, or to be forced to use ssl; the goal is to make the rails app generate https:// urls when redirecting... (I think all other urls are relative). | https redirect for rails app behind proxy?
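For context on why the forwarded header fixes this: Rack derives the request scheme from X-Forwarded-Proto, so once nginx sends it, URL generation flips to https on its own. A sketch of what changes inside the app (standard Rails/Rack methods):
# in any controller action, behind the proxy:
request.ssl?       # => true once X-Forwarded-Proto: https arrives
request.protocol   # => "https://", so devise redirects become https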
upstream: "https://:80/v1/some/page",It is not really clear to me what you are trying to achieve. But it is very unlikely that you have a HTTPS server on port 80. Port 80 is commonly used by HTTP not HTTPS. Trying to access it by HTTPS will usually result in a HTTP error response by the server which, when interpreted as the expected TLS handshake response, will result in strange error messages likessl3_get_record:wrong version number. | I am trying to proxy requests to a remote server, this is how I configure my Nginxupstream myupstream {
    server remote-hostname;
}
...
location ~ ^/(v1|v2|v3)/.*$ {
    proxy_pass https://myupstream;
    # also tried these options:
    # proxy_ssl_server_name on;
    # proxy_ssl_verify off;
    # proxy_set_header Host ;
    # proxy_set_header X_FORWARDED_PROTO https;
}
As a result I see an error 502 page and this record in error.log:
2018/11/10 19:41:38 [error] 8410#8410: *1 SSL_do_handshake() failed
(SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number)
while SSL handshaking to upstream, client: 127.0.0.1, server: ,
request: "GET /v1/some/page HTTP/1.1",
upstream: "https://:80/v1/some/page",
host: ""What could cause this?Note: This nginx proxy is on my local machine. | Nginx upstream to https host - ssl3_get_record:wrong version number |
Try:
location ~ (\.php$|myadmin) {
    return 403;
} | I run a number of websites behind an nginx frontend. All my sites are in Python/Django. I see in my logs lots of crawling by hackers for various php applications - I'd like to block them (return a 404) at nginx without them hitting my application servers. I'd like to do this globally in my nginx conf file so it applies to all my site-specific configurations. So, how do I: Return 404 for all extensions of type .php; Return 404 for partial matches of certain strings, such as "phpmyadmin"? | How to block all file extensions of certain types on nginx
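To answer with 404 as the question asks, and to apply the block to every site, the same location can live in a shared snippet that each server block includes; a sketch (the snippet path is an assumption):
# /etc/nginx/snippets/block-php.conf
location ~* (\.php$|phpmyadmin) {
    return 404;
}
# then, in each server block:
include snippets/block-php.conf;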
The first server defined in Nginx is treated as the default_server, so just adding one as the default and returning 412 (Precondition Failed), or any other status that best fits your requirements, will make the subsequent servers obey their server_name:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 412;
}
server {
    listen 80;
    server_name mysite.lk www.mysite.lk;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:8080";
    }
} | Please find the below setting which is placed in /etc/nginx/sites-enabled under my site domain name (mysite.lk):
server {
    listen 80;
    server_name mysite.lk www.mysite.lk;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:8080";
    }
}
The application is running on port 8080, and here I'm redirecting all the port 80 traffic to 8080.
My website only uses the mysite.lk and www.mysite.lk domain names. Hence, I want to restrict/block all other domains (except mysite.lk and www.mysite.lk) which are coming to this server IP. What is the change that I need to do to achieve this? I tried numerous things, such as the answers given in "Why is nginx responding to any domain name?", but was getting errors at the nginx startup. Please help me out!
Thanks.
Update: Found the answer. A catch-all server block is needed at the top of the config, before the given config, like below. The code block should be like this:
server {
    return 403;
}
server {
    listen 80;
    server_name mysite.lk www.mysite.lk;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:8080";
    }
} | Nginx restrict domains |
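A common variant of that catch-all uses nginx's non-standard status 444, which closes the connection without sending any response at all; a sketch:
server {
    listen 80 default_server;
    server_name _;
    return 444;  # nginx-specific: drop the connection silently
}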
A 524 error states that CloudFlare was able to make a TCP connection to the origin, but the origin did not reply with an HTTP response before the connection timed out. This means that CloudFlare is able to make a network connection to the origin server, but the origin server took too long to respond to the request. https://support.cloudflare.com/hc/en-us/articles/200171926-Error-524-A-timeout-occurred | Wonder why Cloudflare throws an error on my server which is up? I can verify the server is up by visiting the IP in my browser. I checked the system log and apache log; no error found. Btw, I just set the domain on a static site. I can't figure out how to fix it. Googled and found no solution. | Cloudflare throws a 524 error on my server
On your production server, in your WordPress index.php file, at the top, you can temporarily put echo(exec("whoami"));die(); Then browse to your WordPress site and see what user it was running as. On Ubuntu, mine was www-data. This was useful for me for: Can I install/update WordPress plugins without providing FTP access? | I'm trying to set up Wordpress to be able to install plugins via SFTP (SSH) on a Centos 6 VPS. I've been able to modify wp-config so it uses the right credentials with user as my SFTP user. Now I have a permission-related problem: if I do a chmod 777 on my wp-content folder I'm able to install, but with the normal permissions it can't create folders. I'm using Nginx, and all my wp-content files and folders are owned by user, and I've tried setting the group to nginx but it doesn't work. I also tried setting the user as nginx but still no luck. UPDATE: I found out wordpress was using apache as user but I want to change this to my user instead. How can I do this? | How do I know which linux user Wordpress uses for plugin installation
The operating system does in-memory caching by default. It's called the page cache. In addition, you can enable sendfile to avoid copying data between kernel space and user space. | I have Nginx running in a Docker container, and it serves some static files. The files will never change at runtime - if they actually do change, the container will be stopped, the image will be rebuilt, and a new container will be started. So, to improve performance, it would be perfect if Nginx would read the static files only one single time from disk and then serve them from memory forever. I have found some configuration options to configure caching, but at least from what I have seen none of them provided this "forever" behavior that I'm looking for. Is this possible at all? If so, how do I need to configure Nginx to achieve this? | Cache a static file in memory forever on Nginx?
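A sketch of the directives involved (values are illustrative; open_file_cache keeps descriptors and metadata hot, while the file contents themselves stay in the OS page cache):
http {
    sendfile on;
    tcp_nopush on;
    open_file_cache max=1000 inactive=60s;
    open_file_cache_valid 120s;
}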
i would just stream it from S3. it's very easy, and signed URLs are much more difficult. just make sure you set the content-type and content-length headers when you upload the images to S3.
var aws = require('knox').createClient({
  key: '',
  secret: '',
  bucket: ''
})
app.get('/image/:id', function (req, res, next) {
  if (!req.user.is.authenticated) {
    var err = new Error()
    err.status = 403
    next(err)
    return
  }
  aws.get('/image/' + req.params.id)
  .on('error', next)
  .on('response', function (resp) {
    if (resp.statusCode !== 200) {
      var err = new Error()
      err.status = 404
      next(err)
      return
    }
    res.setHeader('Content-Length', resp.headers['content-length'])
    res.setHeader('Content-Type', resp.headers['content-type'])
    // cache-control?
    // etag?
    // last-modified?
    // expires?
    if (req.fresh) {
      res.statusCode = 304
      res.end()
      return
    }
    if (req.method === 'HEAD') {
      res.statusCode = 200
      res.end()
      return
    }
    resp.pipe(res)
  })
}) | I have app where user's photos are private. I store the photos(thumbnails also) in AWS s3. There is a page in the site where user can view his photos(i.e thumbnails). Now my problem is how do I serve these files. Some options that I have evaluated are:Serving files from CloudFront(or AWS) using signed url generation. But the problem is every time the user refreshes the page I have to create so many signed urls again and load it. So therefore I wont be able to cache the Images in the browser which would have been a good choice. Is there anyway to do still in javascript? I cant have the validity of those urls for longer due to security issues. And secondly within that time frame if someone got hold of that url he can view the file without running through authentication from the app.Other option is to serve the file from my express app itself after streaming it from S3 servers. This allows me to have http cache headers, therefore enable browser caching. It also makes sure no one can view a file without being authenticated. Ideally I would like to stream the file and a I am hosting using NGINX proxy relay the other side streaming to NGINX. But as i see that can only be possible if the file exist in the same system's files. But here I have to stream it and return when i get the stream is complete. Don't want to store the files locally.I am not able to evaluate which of the two options would be a better choice?? I want to redirect as much work as possible to S3 or cloudfront but even using singed urls also makes the request first to my servers. I also want caching features.So what would be ideal way to do? with the answers for the particular questions pertaining to those methods? | Serving files stored in S3 in express/nodejs app |
You need to call FCGI_Accept in the while loop:
while(FCGI_Accept() >= 0)
You have FCGI_Accept >= 0 in your code. I think that results in the address of the FCGI_Accept function being compared to 0. Since the function exists, the comparison is never false, but the function is not being invoked. | I am attempting to run a fastcgi app written in C behind the Nginx web server. The web browser never finishes loading and the response never completes. I am not sure how to approach it and debug. Any insight would be appreciated. The hello world application was taken from fastcgi.com and simplified to look like this:
#include "fcgi_stdio.h"
#include
int main(void)
{
    while(FCGI_Accept >= 0)
    {
        printf("Content-type: text/html\r\nStatus: 200 OK\r\n\r\n");
    }
    return 0;
}
The output executable is executed with either one of:
cgi-fcgi -connect 127.0.0.1:9000 a.out
or
spawn-fcgi -a120.0.0.1 -p9000 -n ./a.out
Nginx configuration is:
server {
    listen 80;
    server_name _;
    location / {
        # host and port to fastcgi server
        root /home/user/www;
        index index.html;
        fastcgi_pass 127.0.0.1:9000;
    }
} | C language FastCGI with Nginx |
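For reference, the corrected program from the answer above as a complete listing; the only change is that FCGI_Accept() is actually invoked:
#include "fcgi_stdio.h"

int main(void)
{
    /* FCGI_Accept() blocks until a request arrives, then returns >= 0 */
    while (FCGI_Accept() >= 0)
    {
        printf("Content-type: text/html\r\nStatus: 200 OK\r\n\r\n");
    }
    return 0;
}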
TLDR: Try to use the csrf_exempt decorator for your view:
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def my_webhook(request):
    # Do some stuff...
    # Return an HttpResponse, as Django expects a response from the view
    return HttpResponse(status=200)
You should only do this when absolutely needed, to avoid potential security flaws.
More context: I faced a similar problem while working on a web-hook called by a third party, which is a payment solution. The Django view for that web-hook is called by the third party to notify us every time the payment status changes (goes from 'open' to 'paid', for example). As the payment platform only provides a payment ID in the request POST, the CSRF check should not be performed. Django allows you to do this through the csrf_exempt decorator. | I have a website running, which appears to be working fine. Yet, now I've seen this error in the logs for the first time.
Forbidden (Referer checking failed - no Referer.): /pointlocations/
[pid: 4143|app: 0|req: 148/295] 104.176.70.209 () {48 vars in 1043 bytes} [Wed Jul 26 19:49:35 2017] POST /pointlocations/?participant=A2TYLR23CHRULH&assignmentId=3P4MQ7TPPYF65ANAUBF8A3B38A0BB6 => generated 2737 bytes in 2 msecs (HTTP/1.1 403) 1 headers in 51 bytes (1 switches on core 0)
It happens when posting to /pointlocations/, but only for one specific person (each participant is unique per account), so I know it's only one person having this problem repeatedly. Over 500+ other participants have had no such problem/error. What does this error mean, what is likely causing it, and can I fix this? | What does error mean? : "Forbidden (Referer checking failed - no Referer.):"
For cheap / lesser-known certs like the COMODO or StartSSL ones, you need to add the entire certificate chain into the certificate file you are using with nginx. Many operating systems don't trust the intermediate CAs, just the root CA, so you need to fill in the missing steps between the certificate for your host and the root CA that is trusted by the OS. In the e-mail you received your certificate with, you should also find links to the intermediate CAs and the root CA. Open the docker-registry.crt file, scroll to the bottom, and append the intermediate CAs and, finally, the root CA certificate for the PositiveSSL chain. Once you've done that, restart nginx. You should now be good to go. | I am running a private docker registry on ubuntu using S3 for storage. I'm having issues getting docker login/push/pull commands to work over SSL. I'm using Nginx in front of Gunicorn to run the registry. It works without any issues over HTTP, but after switching to HTTPS for a prod system, it throws the following error from the client docker login:
Invalid Registry endpoint: x509: certificate signed by unknown authority
I have purchased a rather cheap PositiveSSL certificate from Commodo to use for this. I have ensured the root CA and intermediate CAs are installed on the Ubuntu system running the registry. The following is my nginx configuration for the server:
# Default nginx site to run the docker registry
upstream docker-registry {
    server localhost:5000;
}
server {
    listen 443;
    server_name docker.ommited.net;
    ssl on;
    ssl_certificate /etc/ssl/docker-registry.crt;
    ssl_certificate_key /etc/ssl/docker-registry.key;
    proxy_set_header Host $http_host;        # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads
    location / {
        proxy_pass http://localhost:5000/;
    }
}
I'm trying to figure out how to get docker to properly recognize the cert, or ignore the certificate warning. I'm running docker-registry version v0.7.3; the particular client I'm using is Docker version 1.1.2, build d84a070. On a side note, when visiting the registry in a browser, the cert is properly recognized. Any help pointing me in the right direction would be greatly appreciated! | Docker registry login fails with "Certificate signed by unknown authority"
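A sketch of the concatenation step itself (file names are assumptions; the order matters: host certificate first, then intermediates, then the root):
cat yourhost.crt intermediate.crt root.crt > /etc/ssl/docker-registry.crt
Then restart nginx so it serves the full chain.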
Here's the summary: http://nginx.org/en/docs/control.html
The master process first checks the syntax validity, then tries to
apply new configuration. If this succeeds, it starts new worker
processes, and sends messages to old worker processes requesting them
to shut down gracefully.
That means it would keep older processes handling unclosed connections while having new processes working according to the updated configuration.
From this perspective, connections with keep-alive are no different from other unclosed connections. In versions prior to 1.11.11 such "old" processes could hang indefinitely long (according to @Alexey; haven't checked it though); from 1.11.11 there's a configuration setting controlling this: http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout | Refer to the nginx official docs: the reload command of nginx is for reloading configuration files, and during the process there's no downtime of the service. I've learned that it waits for requests that are already connected until they finish, and stops accepting any new requests. The idea is cool, but how does it deal with keep-alive connections? Those long-lived connections won't close, and continuous requests keep coming along. | How does nginx reload work? Why is it zero-downtime?
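A sketch of that setting, which caps how long the old workers may linger (main context; the value is an assumption):
# nginx.conf, main context, requires nginx >= 1.11.11
worker_shutdown_timeout 30s;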
If you access http://hostname/devops/logs directly from your browser, certainly you will get what you want. But since you click the hyperlink on the homepage, you only get http://hostname/logs, which will certainly fail. So, you need a /logs backend configured in your ingress yaml to get it processed, and configure nginx.ingress.kubernetes.io/configuration-snippet to ensure /logs does not get rewritten, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/logs /logs break;
spec:
  rules:
  - host: master1.dev.local
    http:
      paths:
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /logs
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops | I have an ingress controller and ingress resource running, with all /devops mapped to the devops service in the backend. When I try to hit "http://hostname/devops" things work and I get a page (although without CSS and styles) with a set of hyperlinks; e.g. one of them is "logs". When I click on the "logs" hyperlink, it redirects me to http://hostname/logs whereas I need it to be http://hostname/devops/logs. Any idea what I can do?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: master1.dev.local
    http:
      paths:
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops | nginx ingress sub path redirection
Firstly, you can use custom configuration for your nginx ingress controller; documentation can be found here. Also, if you just want to use the nginx ingress controller as a reverse proxy, each ingress rule already creates a proxy_pass directive to the relevant upstream/backend service. And if the paths are the same for your rule and backend service, then you don't have to specify a rewrite rule, only the path for the backend service. But if the paths are different, then consider using the nginx.ingress.kubernetes.io/rewrite-target annotation; otherwise you will get a 404 backend error. So to redirect a request coming to the frontend http://example.com/something to the backend example-com/something, your ingress rule should be similar to the below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gpg-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    #nginx.ingress.kubernetes.io/rewrite-target: /different-path
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: example-com
          servicePort: 80
For more explanation about annotations, check Nginx Ingress Annotations. Also, consider checking the logs of the nginx-ingress-controller pod if something is wrong, via kubectl logs nginx-ingress-controller-xxxxx. Hope it helps! | Currently I am trying to migrate a site that was living on an Apache load-balanced server to my k8s cluster. However, the application was set up strangely, with a ProxyPass and ProxyPassReverse like so:
ProxyPass /something http://example.com/something
ProxyPassReverse /something http://example.com/something
And I would like to mimic this in an Nginx Ingress. First I tried using the rewrite-target annotation; however, that does not keep the Location header, which is necessary to get the application running again. Then I tried to get the proxy-redirect-to/from annotations in place inside a specific location block, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gpg-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-redirect-from: http://originalapp.com/something
    nginx.ingress.kubernetes.io/proxy-redirect-to: http://example.com/something
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: example-com
servicePort: 80I would like to be able to instead use a customproxy_passvariable but it doesn't seem like its possible.What would be the best way to mimic this proxy pass? | What is the Best way to Setup Proxy Pass in an Nginx Ingress object for Kubernetes |