Response | Instruction | Prompt
---|---|---
I don't really understand what you want, but if you want the actual full URL I think you can use this: $http_host$request_uri | If the request URL is test.com/gifts and I am using ngx.var.uri, the output is /gifts/; the expected output is test.com/gifts. Code:

location /gifts {
try_files $uri @url_change;
}
location @url_change {
default_type text/html;
content_by_lua '
ngx.say(ngx.var.uri)
';
} | get complete url using nginx and lua with openresty |
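A minimal sketch of what the answer suggests, dropped into the question's fallback location. Exactly how to join the pieces is an assumption, not spelled out in the original answer; $http_host carries the Host header and $request_uri the original path plus any query string:

```nginx
location @url_change {
    default_type text/html;
    content_by_lua '
        -- e.g. prints "test.com/gifts" for a request to test.com/gifts
        ngx.say(ngx.var.http_host .. ngx.var.request_uri)
    ';
}
```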
You will get 403 FORBIDDEN if you have throttling enabled. Based on this issue: https://github.com/tomchristie/django-rest-framework/issues/667, turning off the lazy evaluation of request.user/request.auth is probably the best thing to do. | Using django-rest-framework, I'm getting HTTP 403 errors when running in production behind nginx. When I call a particular view, which inherits from APIView to support a GET operation, I get: {"detail": "Invalid username/password"}. But... I only get this in a browser; I don't get it when I use curl for the very same URL. I get this error whether I hit the URL directly or load the URL via AJAX, in both Chrome and Firefox. I do not get the error if I log in via the Django Admin with an admin account first. Also, I only get this if I'm running behind nginx. If I run with either the Django dev server or gunicorn and hit the port directly, I'm fine, and can happily hit the URL anonymously. If I then put nginx in front of this, forwarding to the same gunicorn/runserver, I get this error. Maybe it's something to do with my nginx proxy_pass settings?

location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

I'm running django rest framework 2.2.6, Django 1.5 and nginx 1.2.7. I set throttling to a silly high number in rest framework, and looked at the permissions, which all seemed open by default (but I set them so explicitly as well). Can anyone point me in the right direction? Thanks! Ludo. | django rest framework gives 403 when behind nginx but not directly |
Rewriting did not seem to work as planned (nothing that appeared in access.log or error.log gave even a hint that the rule was being caught). I made a more generic router that might also fit other, yet unknown, needs:

location / {
try_files $uri $uri/ @router;
index index.html index.php;
error_page 403 = @router;
error_page 404 = @router;
}
location @router {
rewrite ^(.*)$ /router.php?$1;
} | I am trying to get Nginx to rewrite URLs matching /data/(.+).png to serveImage.php?guid=$1

server {
server_name my.server;
listen 80;
listen 127.0.0.1:80;
root /var/www/my.server;
index index.html;
location / {
try_files $uri $uri/ index.html;
rewrite ^/data/(.+).png serveImage.php?guid=$1 last;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
}

What am I doing wrong? serveImage.php does exist in the document root. | Redirect URL to PHP only if file does not exist with Nginx |
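In the same spirit as the answer's generic router, a narrower sketch for the original goal — invoke PHP only when the requested .png is missing. This is untested against the poster's setup; note the leading slash and escaped dot added relative to the question's rewrite:

```nginx
location / {
    index index.html;
    # Serve the file if it exists; otherwise fall through to the router.
    try_files $uri $uri/ @missing;
}

location @missing {
    rewrite ^/data/(.+)\.png$ /serveImage.php?guid=$1 last;
}
```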
Does your application work with xsp (xsp4 if you are using .NET 4.0)? You'll want to make sure that is working before you try configuring the connection to another web server. Does nginx know where to find Mono? You most likely have a parallel install, and it won't be in the default paths. I use Apache, but you may still find some of the instructions on my blog useful: http://tqcblog.com/2010/04/02/ubuntu-subversion-teamcity-mono-2-6-and-asp-net-mvc/ | I am trying to set up an ASP.NET MVC 2 application in a Linux environment. I've installed Ubuntu 10.10 on VirtualBox, then installed Mono 2.8 from sources. After that I installed nginx and configured it as recommended here.
Unfortunately, FastCGI shows me a standard error 500 page:

No Application Found
Unable to find a matching application for request:
Host localhost:80
Port 80
Request Path /Default.aspx
Physical Path /var/www/mvc/Default.aspx

My application is located in the /var/www/mvc directory. I've tried to create a stub Default.aspx file and place it in the root dir of my application, but it didn't help; the same error occurred.
Thanks. | ASP.Net MVC 2 on nginx/mono 2.8 |
You're 99% of the way there. All that is missing is matching the number of selectors and output variables, so if you append user_agent to your list it will start to work. Sample working query:

fields @message
| parse @message '* - - [*] "* * *" * * "-" "*"' as remote_addr, timestamp, request_type, location, protocol, response_code, body_bytes_sent, user_agent
| display request_type, location | I am trying to use AWS Logs Insights to run a query on my log group that contains nginx logs. This is the log format that I have set up on my EC2 machine:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

Sample NGINX log:

xx.xx.xx.xx - - [10/Nov/2020:15:28:30 +0530] "POST /xx HTTP/1.1" 200 57 "https://xxxx.in/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36" "-"

I am trying to parse this using Logs Insights with the following code:

parse @message '* - - [*] "* * *" * * "-" "*"' as remote_addr, timestamp, request_type, location, protocol, response_code, body_bytes_sent

I am getting the following error:

Expression is invalid: parse @message '* - - [*] "* * *" * * "-" "*"' as remote_addr, timestamp, request_type, location, protocol, response_code, body_bytes_sent

Any help would be appreciated. | AWS Logs Insights parse NGINX log |
I have just solved the same problem. You can easily solve it by using the following directory structure:

~/my-app/
|-- readme.md
|-- .ebextensions/
|   |-- options.config        # Option settings
|   `-- cloudwatch.config     # Other .ebextensions sections, for example
`-- .platform/
    `-- nginx/                # Proxy configuration
        |-- nginx.conf
        `-- conf.d/
            |-- custom.conf
            `-- elasticbeanstalk/
                `-- server.conf

For more information, see this url. My /var/log/eb-engine.log showed the line below:

Running command /bin/sh -c cp -rp /var/app/staging/.platform/nginx/. /var/proxy/staging/nginx | I host my Java webapp on a single-instance Elastic Beanstalk environment and I added several ebextension files which successfully create config files for me upon each deployment. I can't, however, find a way of getting Beanstalk to add new configs within the /etc/nginx or /etc/nginx/conf.d directories. I followed the steps described here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-java.html

My deployment package structure looks like this:

$ zip -r deploy.zip api-1.0-SNAPSHOT-all.jar .ebextensions
adding: api-1.0-SNAPSHOT-all.jar (deflated 11%)
adding: .ebextensions/ (stored 0%)
adding: .ebextensions/ssl-certificates.config (deflated 37%)
adding: .ebextensions/https-instance-securitygroup.config (deflated 38%)
adding: .ebextensions/nginx/ (stored 0%)
adding: .ebextensions/nginx/conf.d/ (stored 0%)
  adding: .ebextensions/nginx/conf.d/https.conf (deflated 61%)

My files are nearly a 1-to-1 copy of the samples in the guide above. During deployment both of my *.config files execute successfully, but /etc/nginx/conf.d/https.conf is missing. I tried to work around the issue by removing the .ebextensions/nginx directory and replacing it with another .config file that creates /etc/nginx/conf.d/https.conf from scratch, but this didn't help and the file was still missing. I ssh-ed onto my EC2 instance, and here's what I found in /var/log/eb-engine.log:

2020/05/03 19:42:37.754375 [INFO] Executing instruction: configure proxy Nginx
2020/05/03 19:42:37.754393 [WARN] skipping nginx folder under .ebextensions
2020/05/03 19:42:37.754670 [INFO] No plugin in cfn metadata.

I feel like I might have missed something very obvious here, but surprisingly I couldn't find any solution to my problem. Thoughts? Thanks! | Beanstalk deployment ignores my nginx configuration files in .ebextensions |
I finally figured it out. Maybe it was simple enough that I couldn't find the solution. Adding:

SET PHP_FCGI_MAX_REQUESTS=0

to the command file that launches php-cgi.exe fixed it. I guess it defaults (when not set) to 500 hits before FastCGI is killed. Obviously, there are good reasons for this and, as GargantuChet has suggested, setting things up correctly and letting the instances of PHP be managed and auto-spawned is a better way to go... but for people who want a quick Windows development environment, this can solve some problems. | For the life of me, I can't figure this out. This is my development machine setup:

Windows 7 Home Premium 64 bit
Webserver: NGINX 1.3.6 (c:\users\user_name\devel\nginx)
PHP: 5.4.7 (c:\users\user_name\devel\nginx\php5)

Everything works fine except that after exactly 500 hits, my php-cgi.exe quits unexpectedly. No error logs, no events, nothing. It just dies after 500 hits... EVERY SINGLE TIME. I haven't found a single source of information online to help me with this. All the configuration seems valid and good. This is happening on two different machines (my development desktop and my notebook). I've tried different nginx.conf and php.ini files... still the same. I just need to get a better idea of how to go about debugging this. Any suggestions? | php-cgi.exe quits after exactly 500 hits |
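For illustration, the launch script could look like the following start-php.bat — the file name and bind address are assumptions; the php-cgi path is taken from the question:

```
@echo off
REM 0 disables the built-in request limit that recycles php-cgi after 500 hits
SET PHP_FCGI_MAX_REQUESTS=0
c:\users\user_name\devel\nginx\php5\php-cgi.exe -b 127.0.0.1:9000
```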
You can get access to the HTTP request method from the $request_method variable. So:

location / {
if ($request_method = 'GET') {
            proxy_pass http://couchdb_backend;
}
} | So I've got an app that uses CouchDB as the backend. Couch doesn't really have its security/user model in place yet, and by default anyone can do anything (including deleting records and even the entire database). But, if we limit access to only GET requests, we're much safer. I was hoping I could put nginx out front as a reverse proxy, but I can't find an option that lets you filter requests based on the verb coming in. Pound does this, so I'm thinking of going that route, but we already use nginx extensively and it would be nice not to have to add another technology to the mix. Anyone know if there's an option that will let this happen? I'd even settle for a mod_proxy option in Apache. Any ideas? | nginx as a reverse proxy to limit http verb access |
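As a side note not taken from the original answer: nginx also ships a built-in limit_except directive for exactly this kind of verb filtering, which avoids the if block (the upstream address shown uses CouchDB's conventional default port, an assumption here):

```nginx
location / {
    # Allow GET (HEAD is implied); every other method is rejected.
    limit_except GET {
        deny all;
    }
    proxy_pass http://127.0.0.1:5984;
}
```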
According to the following recipe, the correct flow for gzip would be something like this:

OkHttpClient client = new OkHttpClient.Builder()
.addInterceptor(new GzipRequestInterceptor())
.build();
/** This interceptor compresses the HTTP request body. Many webservers can't handle this! */
static class GzipRequestInterceptor implements Interceptor {
@Override public Response intercept(Chain chain) throws IOException {
Request originalRequest = chain.request();
if (originalRequest.body() == null || originalRequest.header("Content-Encoding") != null) {
return chain.proceed(originalRequest);
}
Request compressedRequest = originalRequest.newBuilder()
.header("Content-Encoding", "gzip")
.method(originalRequest.method(), gzip(originalRequest.body()))
.build();
return chain.proceed(compressedRequest);
}
private RequestBody gzip(final RequestBody body) {
return new RequestBody() {
@Override public MediaType contentType() {
return body.contentType();
}
@Override public long contentLength() {
return -1; // We don't know the compressed length in advance!
}
@Override public void writeTo(BufferedSink sink) throws IOException {
BufferedSink gzipSink = Okio.buffer(new GzipSink(sink));
body.writeTo(gzipSink);
gzipSink.close();
}
};
}
} | In our Android app, I'm sending pretty large files to our (NGINX) server, so I was hoping to use gzip for my Retrofit POST message. There is plenty of documentation about OkHttp using gzip transparently, or changing the headers in order to accept gzip (i.e. in a GET message). But how can I enable this feature for sending gzipped HTTP POST messages from my device? Do I have to write a custom Interceptor or something? Or simply add something to the headers? | Android - OKHttp: how to enable gzip for POST |
It should be noted that an expression will be matched against the text starting after the "http://" or "https://" (see http://nginx.org/en/docs/http/ngx_http_referer_module.html). Correct config:

location / {
valid_referers click2dad.net*;
if ($invalid_referer = ''){
return 403;
}
try_files $uri $uri/ /index.php?$args;
} | I need to block all HTTP connections that have the referrer click2dad.net. I wrote this in mysite.conf:

location / {
valid_referers ~.*http://click2dad\.net.*;
if ($invalid_referer = ''){
return 403;
}
try_files $uri $uri/ /index.php?$args;
}

But I still see this in the nginx logs:

HTTP/1.1" 200 26984 "http://click2dad.net/view/VUhfCE4ugTsb0SoKerhgMvPXcmXszU"

200, not 403. What is the correct way to block all clients coming from click2dad.net? | Nginx block from referrer |
The MIME type of your image is noted as application/octet-stream, which a browser can only offer to download, as it does not know how to interpret it. From your index.html file it is clear that you were playing around with variations of the MIME type, and it is unclear whether the standard requires image/svg or image/svg+xml (or, standards being what they are, something else entirely). | I am trying to show an SVG file in HTML or a separate tab, but Nginx offers to download it. I took a normal SVG file which works on another site, but it does not work on my server. Where is the problem? Here is an example: http://proximax.ru/media/content/final/plane2.svg. Also, here is the SVG in HTML: http://proximax.ru/index/ | Nginx offers to download SVG instead of showing it |
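A common nginx-side fix — offered as a hedged sketch, not something from the original answer — is to force the registered SVG type, image/svg+xml, for .svg requests:

```nginx
location ~* \.svg$ {
    # Clear the inherited extension-to-type table for this location,
    # then serve everything here as SVG.
    types { }
    default_type image/svg+xml;
}
```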
The correct setting is: 1) under SSH, check "Use SSH tunnel" and use port 22; 2) under Connection, write 127.0.0.1:27017. | I'm trying to use Robomongo (or Robo 3T) under Mac to control my MongoDB on a remote Ubuntu & Nginx server. Normally, I need to ssh xxx.xx.xx.xx in a terminal with a username and a password to connect to the server. In /etc/nginx/sites-enabled/myweb.io, there is listen 443 ssl. In Robo 3T, I tried to set up the connection with "Use SSH tunnel". I tried the port number 443 or 80. But it gave me an error: "Error: Resource temporarily unavailable. Error when starting up SSH session: -13. (Error #35)". Does anyone know how to fix this? | Use Robo 3T to connect to remote MongoDB |
In this situation, Django is listening on some unix socket and all requests sent to Django by nginx are local, so the host that Django sees is 'localhost'. Django must build a full URL for any redirection when you're submitting a form. Because the only domain Django knows about is 'localhost', Django will build the URL using that host. Nginx is used as a gateway between Django and clients, so it is responsible for changing all redirect URLs sent by Django to match the domain name that nginx is serving the site on. But the line:

proxy_redirect off;

is telling nginx "don't do that, don't rewrite those redirect URLs". That is causing the redirection problem. What you can do is: remove that line, or change the nginx configuration so that it will properly inform Django about the domain name. To do that, you should add the line:

proxy_set_header Host $http_host;

With that line in the config, nginx will pass the real domain name to Django instead of passing localhost. This is the recommended way, because with that line nginx will be more transparent to Django. There are also other header configuration lines that you should add here, so other things in Django can work properly. For a list of all configuration options, refer to the documentation of the WSGI server that you are using; for gunicorn it will be here. | Using Django on the backend with Gunicorn, each time I submit a form and am supposed to be sent to example.com/pagetwo, I am sent to localhost/pagetwo instead. I'm new to Nginx, so if someone could point out what the problem is I'd be most grateful :)

default.conf:

server {
listen 80;
server_name example.com;
location /static/ {
root /srv;
}
location / {
proxy_redirect off;
proxy_pass http://unix:/srv/sockets/website.sock;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}

Here is a form from the index page:

{% csrf_token %}
{{ form.as_p }}
Submit
| Why does Nginx keep redirecting me to localhost? |
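Putting the answer's advice together, a sketch of the corrected location block (the socket path comes from the question; the extra headers are conventional additions, not requirements stated in the answer):

```nginx
location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://unix:/srv/sockets/website.sock;
}
```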
Do it like this:

location / {
allow 123.456.123.345;
deny all;
resolver 8.8.8.8;
proxy_pass http://$http_host$uri$is_args$args;
}

From the docs: "The rules are checked in sequence until the first match is found." So if the IP equals 123.456.123.345, access will be allowed; otherwise it is denied. If you want to allow multiple IPs, you can specify them before deny all;:

allow 123.456.123.345;
allow 345.123.456.123;
deny all;

The "location" directive should be inside a 'server' directive. | I've installed nginx and set it up as a forward proxy (see attached nginx.conf). The server became overloaded and it seems like someone else was using it. Is there a way to limit the nginx proxy to receive requests only from specific IPs? Please explain how I should change nginx.conf to do it for IP 123.456.123.345.

worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen 8080;
location / {
resolver 8.8.8.8;
proxy_pass http://$http_host$uri$is_args$args;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
} | nginx proxy - how to allow connection from a specific ip |
location / {
error_page 404 @sorry;
}
location @sorry {
return 404 "Sorry!";
}

http://nginx.org/r/error_page
http://nginx.org/r/return | I understand that I can set a file for a custom error page:

location / {
error_page 404 = /mybad.html;
}

But I'd just like to instead provide my page override as inline text in the config file:

location / {
error_page 404 = "Sorry!"
}

Is this possible with nginx? | nginx custom error configuration referencing a string vs a file |
You don't compress images because images are already compressed. In some cases gzip can actually make them larger, because the already-compressed data has little redundancy left while gzip still adds its own header and bookkeeping overhead. These are binary formats, similar to videos, and you won't gain any benefit. | I now and always use gzip with nginx, but never use it for images. I tried to find some advantages/disadvantages of that kind of solution but found nothing. Of course I can use client-side caching in nginx and set expiry days for images - but then the first load will always be a full load without any optimization. So should I or shouldn't I gzip images (jpeg, png, etc.)? What does this type of compression do - change the compression of images to a lower quality? | How does gzip_types image/jpeg work |
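To illustrate the point, a typical nginx gzip block compresses only text-based types and simply leaves raster image types off the list. The directives are real gzip-module settings; the particular type list is a common convention rather than something from the original post. Note that SVG, being XML, does benefit from gzip:

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
# image/jpeg and image/png are deliberately absent: those formats are
# already compressed, so gzip would only add overhead.
gzip_types text/plain text/css application/json application/javascript
           application/xml image/svg+xml;
```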
The only way I got this to work on my system was to "hack" it by changing the mode of /sbin/reboot, like this guy did: http://linux.byexamples.com/archives/315/how-to-shutdown-and-reboot-without-sudo-password/

sudo chmod u+s /sbin/reboot

I realize this might not be optimal in many cases, but this media player is very locked down, so there is no accessing a terminal for anyone else anyway. | I have a user brftv on my Linux system, and I have www-data that runs nginx. From the terminal I can let my brftv user run sudo /sbin/reboot, and it works fine since I added the following to my /etc/sudoers file's "#user privilege specification" section:

brftv ALL=NOPASSWD: /sbin/halt, /sbin/reboot, /sbin/poweroff
www-data ALL=NOPASSWD: /sbin/halt, /sbin/reboot, /sbin/poweroff

But when my PHP file runs the following code, nothing happens:

exec('nohup sudo -u brftv /sbin/reboot');

I added the www-data line to /etc/sudoers above in case it was necessary when running the above exec() (even though I run it as -u brftv; I'm no Linux expert, just thought better safe than sorry). The PHP file that runs this exec() is owned by www-data, and its chmod is 777, so everyone should have privileges to execute it. I have tried running the PHP file both through the browser (it would be run by user www-data, I assume) and from the terminal with $ php myFile.php.

------------------- UPDATE -----------------

I did this:

sudo chmod u+s /sbin/reboot

which allows all users on my system to run the reboot command without a password. It works, but I'd rather not leave it THAT open, so the other solution with /etc/sudoers would be better, if someone has a hint at what my problem is... I followed this tutorial: http://linux.byexamples.com/archives/315/how-to-shutdown-and-reboot-without-sudo-password/ and the second example is pretty much what I have above, which didn't work for me. | how to do a linux reboot from php file |
I found a solution to this problem by replacing the /admin/ location with the following:

location ^~ /admin/ { # restrict access to admin section
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
allow 192.168.0.1;
deny all;
}

I hope this will save someone some long searches on the internet. I would appreciate answers offering a better solution. | I am trying to restrict access to the admin section of my Django app by using simple host-based access control in nginx. Unfortunately, nginx does not seem to abide by the configuration. This is my setting for this particular section in nginx:

# gunicorn setup
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /admin/ { # restrict access to admin section
allow 192.168.0.1;
deny all;
}

This still blocks my IP 192.168.0.1.
What am I doing wrong?
Is there another way to block access to the /admin/ section of a django app? | restrict access to the admin url by ip in django with nginx and gunicorn |
Use alias (ref: http://nginx.org/en/docs/http/ngx_http_core_module.html#alias). That is, replace root D:/workspace/javascript/maplib/; with alias D:/workspace/javascript/maplib/; | This is my nginx configuration file:

server {
listen 80;
server_name localhost;
location / {
root d:/www;
index index.html index.htm;
}
location /js/api/ {
root D:/workspace/javascript/maplib/;
autoindex on;
}
}

And the directory structure is like this:

D:/workspace/javascript/maplib
  -- v1.0
     -- main.js
  -- v1.1

Now I want to access v1.0/main.js via http://localhost/js/api/v1.0/main.js, but it returns a 404 error. It seems that nginx tried to get the file through D:/workspace/javascript/maplib/js/api/v1.0/main.js, which does not exist. It seems that the path string in the location (in the URL) must exist on the file system. How can I fix this to meet my requirement? BTW, there are not only .js files but also some other kinds of files like .gif, .png and .html inside D:/workspace/javascript/maplib/. | Location and document path in nginx |
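For clarity, a sketch of the resulting location block after applying that replacement:

```nginx
location /js/api/ {
    # With alias, a request for /js/api/v1.0/main.js is served from
    # D:/workspace/javascript/maplib/v1.0/main.js
    alias D:/workspace/javascript/maplib/;
    autoindex on;
}
```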
The issue is in try_files. The current configuration includes:

- / (which routes to index.html at the root path by default)
- index.html
- /index.html
- test/*.html

To access page routes by path name without an extension, i.e. /privacy, that format should be included in try_files inside location /. Try this:

try_files $uri $uri.html /$uri /index.html | I made a Next.js export into the out folder. The folder structure is:

out
  index.html
  terms.html
  privacy.html

I set up nginx to serve files from this folder:

server {
root /var/www/myproject/out;
index index.html index.htm index.nginx-debian.html;
server_name myproject.com;
location / {
try_files $uri $uri/ /index.html;
}
}

The main page (index) opens fine. Navigation from within the app to URLs like myproject.com/privacy works fine. The problem is that if I try to open these links directly, it serves the main page (index) instead of the actual pages, since those URLs don't exist in the folder. The only way to open the privacy page directly is to add the .html extension to the URL: myproject.com/privacy.html. How do I configure nginx to serve the actual page myproject.com/privacy.html when someone enters the myproject.com/privacy URL? | How to deploy Next.js static export with Nginx? (deep links not working) |
With the default logging driver, json-file, your logs are stored in /var/lib/docker/containers/<container-id>/. Do note that what gets logged here is the output of stdout and stderr from PID 1 of your container. As for "log rotate", the json-file driver has some options you can pass to it to limit the size per log file and the maximum number of log files. See max-size and max-file in the documentation. With docker-compose, you can set the options like this:

version: '3'
services:
myservice:
image: ...
logging:
options:
max-file: "3"
        max-size: "50m" | Like most people who downvoted the sparse Docker docs pages here and here, I'm confused by what docker-compose logs does. When I run cd /apps/laradock/ && docker-compose logs -f nginx, I see a very long output from many days ago until now. What file or files is that pulling from? The only nginx log file I could find was /apps/laradock/logs/nginx/error.log, and it doesn't have much in it (so it isn't the same). And is there a way to "log rotate" or otherwise ensure that I don't spend more than a certain amount of disk space on logging? | Where does `docker-compose logs` pull from? |
Basically, you can reload the nginx configuration by invoking this command:

docker exec <container-name> nginx -s reload | I am trying to configure a LEMP dev environment with docker and am having trouble with nginx, because I can't seem to restart nginx once it has its new configuration.

docker-compose.yml:

version: '3'
services:
nginx:
image: nginx
ports:
- '8080:80'
volumes:
- ./nginx/log:/var/log/nginx
- ./nginx/config/default:/etc/nginx/sites-available/default
- ../wordpress:/var/www/wordpress
php:
image: php:fpm
ports:
- 9000:9000
mysql:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: secret
volumes:
      - ./mysql/data:/var/lib/mysql

I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment I would run service nginx restart to pull in the new config. However, if I try to do that in this Docker environment, the nginx container exits with code 1.

docker-compose exec nginx sh
service nginx restart
-exit with code 1-

How would I be able to use nginx with a custom /etc/nginx/sites-available/default file? | Docker - how do i restart nginx to apply custom config? |
Please try out the following code:

server {
...
server_name www.example1.com example1.com;
...
location / {
        proxy_pass http://app_ip:8084;
}
...
}
...
server {
...
server_name www.example2.com example2.com;
...
location / {
        proxy_pass http://app_ip:8060;
}
...
}

app_ip is the IP of the machine where the app is hosted; if it's on the same machine, use http://127.0.0.1 or http://localhost. | I'm running a VPS with Ubuntu installed. How can I use the same VPS (same IP) to serve multiple Golang websites without specifying the port (xxx.xxx.xxx.xxx:8084) in the URL? For example, Golang app 1 is listening on port 8084 and Golang app 2 is listening on port 8060. I want Golang app 1 to be served when someone requests from domain example1.com and Golang app 2 to be served when someone requests from domain example2.com. I'm sure you can do this with Nginx, but I haven't been able to figure out how. | Host Multiple Golang Sites on One IP and Serve Depending on Domain Request? |
Basically, you want to automatically redirect a POST request using a 301 Moved Permanently redirect. However, such redirects are specifically disallowed by the HTTP specification, which states:

"If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued."

The specs also note that:

"When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request."

Your choices are:

A. Change the code to work with GET data or, better still, both POST and GET, i.e. look for POST and, if not there, try the GET equivalents.
B. Try to ensure the code receives POST data by working with the spec.

You may be able to achieve choice B by using the proxy_pass directive to handle the request instead. Something such as:

location ~ user_info.php {
proxy_pass http://testing.com/userhistory;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

In this way, the user is technically not being redirected. | I need to preserve the POST data when redirecting to a different URL. The rewrite works, but the POST data is lost. I need to post data from user_info.php to userhistory.

location ~ user_info.php {
rewrite ^/.* http://testing.com/userhistory permanent;
}

The data is lost. How can I preserve the data? | nginx rewrite post data |
I finally managed it like this with basic HTTP auth:

For each group I have a separate password file, e.g. group_a.auth, group_b.auth, ... In addition, I have a file where each user and password is written, e.g. passwords.txt. passwords.txt has the same format as the auth files, so something like user1:password_hash. I have a Ruby script, update.rb, to sync users' passwords from passwords.txt to all .auth files (well, more a wrapper around sed):

Ruby script update.rb:

#!/usr/bin/env ruby
passwords = File.new("./passwords.txt", "r")
while pwline = passwords.gets
  pwline.strip!
  next if pwline.empty?
  user, _ = pwline.split(':')
  %x(sed -i 's/#{user}:.*/#{pwline.gsub('/','\/')}/g' *.auth)
end

To update a user's password: update the password in passwords.txt and execute update.rb. To add a user to a group (e.g. new_user to group_a): open group_a.auth and add the line new_user:. Then add new_user:password_hash to passwords.txt if the user is not already present, and finally run update.rb. | Coming from apache2, there is one feature I cannot achieve: have users in a password database (htpasswd) and allow access to different files/folders/virtual servers. The basic HTTP auth I enabled works:

location ~ ^/a/ {
# should allow access for user1, user2
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/auth/file_a;
}
location ~ ^/b/ {
# should allow access for user2, user3
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/auth/file_b;
}

If I have user1, user2 in file_a and user2, user3 in file_b, this works, but I have to update both files when I change the password for user2 (the password should be the same for all locations). Since I will have >15 different locations with different access rights and >10 users, this is not really easy to handle. (I love fine-grained access rights!) With Apache I defined different groups for each location and required the right group; changing access was as easy as adding/removing users to groups. Is there something like that, or how can this scenario be handled easily with nginx? | nginx group http auth |
It turned out that PHP-FPM was not running. | Hi, I'm trying to move my old dev environment to a new machine. However, I keep getting "bad gateway" errors from nginx. From nginx's error log:

*19 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: ~(?[^.]+).gp2, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "backend.gp2:5555"

Does anyone know why this is? Thanks! | Nginx Bad Gateway |
Put the following into your nginx.conf in the http block:

passenger_default_user custom_username;
passenger_default_group custom_group;

You can find more configuration options here: http://modrails.com/documentation/Users%20guide%20Nginx.html#PassengerDefaultUser | I have Nginx with Passenger. In nginx.conf I have the line user pass users; and the Nginx process runs as the 'pass' user, but the Passenger* processes run as the 'nobody' user. I can run Passenger standalone: sudo passenger start -e production -p 80 --user=pass. How can I run Passenger with Nginx with my custom user? | User to start Passenger (with Nginx) |
The extension.pemindicates that the file format isPEM(Privacy-Enhanced Mail). However, the extension does not tell anything about the content of the file. The content may be a certificate, a private key, a public key, or something else.The extension.crtindicates that the content of the file is a certificate. However, the extension does not tell anything about the file format. The file format may be PEM,DER(Distinguished Encoding Rules) or something else. If the file is text and contains-----BEGIN CERTIFICATE-----, the file format is PEM. On the other hand, if the file is binary, it is highly likely that the file format is DER.The extension.keyindicates that the content of the file is a private key. However, the extension does not tell anything about the file format. The file format may be PEM, DER or something else. If the file is text and contains-----BEGIN PRIVATE KEY-----(or something similar), the file format is PEM. On the other hand, if the file is binary, it is highly likely that the file format is DER.Diagrams below from"Illustrated X.509 Certificate"illustrate relationship amongASN.1(X.680),DER(X.690),BASE64(RFC 4648) andPEM(RFC 7468).Bothssl_certificateandssl_certificate_keyofngx_http_ssl_moduleexpect that the file format is PEM as the reference document says. Therefore, you don't have to change the file format of yourcert.pemandkey.pembecause their file extension.pemindicates that their file format is already PEM. Just write like below in your Nginx configuration file.ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
If you prefer .crt and .key extensions, just rename them like below.
$ mv cert.pem cert.crt
$ mv key.pem key.key | I know this is a super similar question to many other questions, but none of them either give a straight answer or one that works for me...I have gotten two files from Let's Encrypt: cert.pem and key.pem. I need to get them into a .crt and .key format for use on an nginx server.I have tried:
openssl rsa -outform der -in key.pem -out key.key
and
openssl x509 -outform der -in cert.pem -out cert.crt
but get the following error when starting up nginx:
# service nginx restart
Performing sanity check on nginx configuration:
nginx: [emerg] cannot load certificate "/etc/ssl/nginx/cert.crt": PEM_read_bio_X509_AUX() failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE)
nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed | How to get .crt and .key from cert.pem and key.pem |
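As the answer above notes, PEM is recognizable as text containing a -----BEGIN ...----- marker while DER is binary. That distinction is easy to check programmatically; a small illustrative sketch (the byte blobs below are made up for the demo):

```python
def sniff_format(data: bytes) -> str:
    """Guess whether a key/certificate blob is PEM (text with a BEGIN marker) or DER (binary)."""
    if b"-----BEGIN " in data:
        return "PEM"
    return "DER (probably)"

pem_blob = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
der_blob = bytes([0x30, 0x82, 0x01, 0x0A])  # DER typically starts with an ASN.1 SEQUENCE tag

print(sniff_format(pem_blob))  # PEM
print(sniff_format(der_blob))  # DER (probably)
```

This mirrors why the `.pem` files from Let's Encrypt can be fed to nginx unchanged: the format, not the extension, is what matters.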
try_files $uri $uri/ $uri.html @extensionless =404;
Yes, @extensionless is treated like a normal file, and that's because you've added an extra =404 after @extensionless within try_files -- the @extensionless part would only work as the last parameter, as an internal redirect to another context.
If you want not only to support handling requests without .php, but to also strip .php from any requests, you might want to do the following:
location / {
if (-e $request_filename.php){
rewrite ^/(.*)$ /$1.php;
}
}
location ~ \.php$ {
if ($request_uri ~ ^/([^?]*)\.php(\?.*)?$) {
return 302 /$1$2;
}
fastcgi_...
} | I'm trying to get Nginx to handle php files without an extension (i.e. to handle http://localhost/sample the same way it would handle http://localhost/sample.php).This is my site configuration:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name localhost;
root /var/www;
index index.html index.php;
location ~ \.(hh|php)$ {
fastcgi_keep_conn on;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location / {
try_files $uri $uri/ $uri.html @extensionless =404;
}
location @extensionless {
rewrite ^(.*)$ $1.php last;
}
}
As far as I know it should do the trick. However - it doesn't. Trying http://localhost/sample just gets me to a 404 page (whereas http://localhost/sample.php works fine).When turning on debugging, I see the following in the logs:
2015/07/19 15:37:00 [debug] 4783#0: *1 http script var: "/sample"
2015/07/19 15:37:00 [debug] 4783#0: *1 trying to use file: "/sample" "/var/www/sample"
2015/07/19 15:37:00 [debug] 4783#0: *1 http script var: "/sample"
2015/07/19 15:37:00 [debug] 4783#0: *1 trying to use dir: "/sample" "/var/www/sample"
2015/07/19 15:37:00 [debug] 4783#0: *1 http script var: "/sample"
2015/07/19 15:37:00 [debug] 4783#0: *1 http script copy: ".html"
2015/07/19 15:37:00 [debug] 4783#0: *1 trying to use file: "/sample.html" "/var/www/sample.html"
2015/07/19 15:37:00 [debug] 4783#0: *1 trying to use file: "@extensionless" "/var/www@extensionless"
2015/07/19 15:37:00 [debug] 4783#0: *1 trying to use file: "=404" "/var/www=404"
Which is weird. It basically looks like @extensionless is treated as a plain filename (instead of a location leading to a rewrite of the URL).What am I missing? :)
Thanks! | Nginx & Handling Files Without Extension |
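The "strip .php" redirect regex from the answer above can be exercised outside nginx. This sketch mirrors `^/([^?]*)\.php(\?.*)?$` and the `/$1$2` replacement (the URIs are the question's examples):

```python
import re

PATTERN = re.compile(r"^/([^?]*)\.php(\?.*)?$")

def strip_php(request_uri: str) -> str:
    """Mimic the 'return 302 /$1$2' rewrite: drop the .php suffix, keep the query string."""
    m = PATTERN.match(request_uri)
    if not m:
        return request_uri  # no .php suffix, leave untouched
    path, query = m.group(1), m.group(2) or ""
    return "/" + path + query

print(strip_php("/sample.php"))      # /sample
print(strip_php("/sample.php?x=1"))  # /sample?x=1
print(strip_php("/sample"))          # /sample
```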
Ideally, you should ask the backend to write correct links. While it is possible to fix some simple cases using sub filter, it is not something generally possible (e.g. if the returned data isn't text but e.g. flash code).You can use variables in a replacement string in sub_filter (but not in the string to match in the original response); it's explicitly documented: "A replacement string can contain variables."
As for subs filter - it's a 3rd party module which is expected to be more powerful, though it may contain more bugs. As long as the standard sub filter is enough for you - you probably don't want to use the 3rd party subs filter. | I am trying to rewrite body links of a proxied page using something like this:
sub_filter http://proxied.page.come http://local.page.com;
sub_filter_once off;
Is this the way to go at all?
What is the difference between the sub_filter module and the substitutions_filter module? Also, can a variable be used in the sub_filter declaration? | Nginx sub_filter rewrites?
I got it figured out. The issue was that it didn't like having hostnames in the list. The hostnames are needed as all these addresses are allocated dynamically. This was solved with the upstream directive as follows:
upstream bazhost {server hostname1:8080;}
upstream foohost {server 192.168.1.10:8081;}
upstream barhost {server hostname2:1234;}
upstream hamhost {server hostname2:5678;}
map $http_host $backend {
baz.mydomain.com bazhost;
foo.mydomain.com foohost;
bar.mydomain.com barhost;
ham.mydomain.com hamhost;
}
server {
listen 443 ssl http2;
server_name .mydomain.com;
ssl_certificate /usr/share/nginx/certs/mydomain.com.pem;
ssl_certificate_key /usr/share/nginx/certs/mydomain.com.key;
location / {
proxy_redirect http:// https://;
proxy_pass http://$backend;
}
} | I'm trying to make my Nginx a bit more DRY, as it's acting as a reverse proxy for nearly 20 servers. Here's what I'm trying to do (all the hostnames and stuff are changed/examples):
map $http_host $backend {
baz.mydomain.com hostname1:8080;
foo.mydomain.com 192.168.1.10:8081;
bar.mydomain.com hostname2:1234;
ham.mydomain.com hostname2:5678;
}
server {
listen 443 ssl http2;
server_name .mydomain.com;
ssl_certificate /usr/share/nginx/certs/mydomain.com.pem;
ssl_certificate_key /usr/share/nginx/certs/mydomain.com.key;
location / {
proxy_redirect http:// https://;
proxy_pass http://$backend;
}
}
The problem is that no matter what, this will always give a bad gateway error. I've tried a few variations and moving things around, with and without the wildcard server_name, with $host instead of $http_host, but so far I can't get it working. Am I even going about this the right way? I'd really prefer not to have almost 20 separate virtual server entries in my config.There isn't a whole lot of help in the nginx documentation about using map like this, and not a lot online except for one really old post that briefly mentioned something similar here: https://serverfault.com/questions/342309/how-to-write-a-dry-modular-nginx-conf-reverse-proxy-with-named-locations | Using nginx map directive to dynamically set proxy upstream
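The map block above is essentially a dictionary lookup on the Host header, and modelling it that way makes one failure mode visible: a host missing from the map (with no default entry) yields an empty backend, which is a common cause of a bad gateway. A sketch reusing the question's hostnames:

```python
BACKENDS = {
    "baz.mydomain.com": "hostname1:8080",
    "foo.mydomain.com": "192.168.1.10:8081",
    "bar.mydomain.com": "hostname2:1234",
    "ham.mydomain.com": "hostname2:5678",
}

def pick_backend(host: str) -> str:
    # nginx's map yields an empty string when no entry (and no default) matches
    return BACKENDS.get(host, "")

print(pick_backend("foo.mydomain.com"))      # 192.168.1.10:8081
print(pick_backend("unknown.mydomain.com"))  # empty string: proxy_pass would fail
```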
I have found this solution:
location /static/ {
try_files $uri @static_svr1;
}
location @static_svr1{
proxy_pass http://192.168.1.11$uri;
proxy_intercept_errors on;
recursive_error_pages on;
error_page 404 = @static_svr2;
}
location @static_svr2{
proxy_pass http://192.168.1.12$uri;
proxy_intercept_errors on;
recursive_error_pages on;
error_page 404 = @static_svr3;
}
location @static_svr3{
proxy_pass http://192.168.1.13$uri;
}
This example works like:
try_files $uri @static_svr1 @static_svr2 @static_svr3 | I want to use something like:
location @a1 {
...
}
location @a2 {
...
}
location /path {
try_files @a1 $uri @a2
}
Is it possible, and what should I do at the @a1 location to continue the tries with $uri / @a2?If so, how will nginx process that? | May I use two named locations within try_files nginx directive?
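The error_page 404 chain in the answer above implements a sequential fallback across servers. The control flow can be sketched like this (the fetch function is a stand-in for proxying, not real nginx behaviour):

```python
def fetch(server: str, uri: str, store: dict) -> int:
    """Stand-in for proxying a request: 200 if the server has the file, else 404."""
    return 200 if uri in store.get(server, set()) else 404

def try_servers(uri: str, servers: list, store: dict) -> str:
    # Mirrors: on 404, error_page redirects to the next @static_svrN location
    for server in servers:
        if fetch(server, uri, store) == 200:
            return server
    return "not found"

store = {"svr1": {"/static/a.css"}, "svr2": {"/static/b.css"}}
print(try_servers("/static/b.css", ["svr1", "svr2", "svr3"], store))  # svr2
print(try_servers("/static/c.css", ["svr1", "svr2", "svr3"], store))  # not found
```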
I hit the same problem and the most simple fix I could come up with is to patch the adminer PHP script. I simply hardcoded $_SERVER["REQUEST_URI"] at the start of adminer.php like this:
--- adminer.php 2015-10-22 12:31:18.549068888 +0300
+++ adminer.php 2015-10-22 12:31:40.097069554 +0300
@@ -1,4 +1,5 @@
<?php
+$_SERVER["REQUEST_URI"] = "/slave/admin/adminer.php";
/** Adminer - Compact database management
* @link http://www.adminer.org/
* @author Jakub Vrana, http://www.vrana.cz/
If you put the above in a file called fix, you can simply run patch < /path/to/fix in the directory containing adminer.php and you should get the correctly working version. Running patch -R < /path/to/fix will restore the original behavior if needed.To understand the structure of a patch file read this SO thread. | I am running Nginx which is configured to allow me to access several resources on another server which is available as a reverse proxy. For example:
main server: http://example.com
slave: http://example.com/slave
adminer on slave: http://example.com/slave/admin/adminer.php
Everything is all right so far. I enter my DB user name and password in Adminer and the trouble begins. Examining the headers returned by Adminer post-login I have noticed that it sends back this header:
Location: /admin/adminer.php?username=user
This is the root of the trouble. On my browser this, naturally, gets interpreted as meaning relative to the current server rather than the reverse proxy. I tried hacking the adminer code after locating the one place where it has a Location header but that just stopped it dead in its tracks.How can I prevent this from happening? I have considered running a Lua script on Nginx that examines the header and replaces it, but it strikes me that even if I get that to work I will be getting my server to do a great deal of unnecessary work.
Edit
After exploring the issue a bit more I am starting to think that adminer may not be doing much wrong. It actually uses the $_SERVER['REQUEST_URI'] value to construct the location header, and that happens to have little apart from /admin/adminer.php. I have noted that the referer, $_SERVER['HTTP_REFERRER'], has the full original request path http://example.com/slave/admin/adminer.php. So the solution would be to send back the location /slave/admin/adminer.php?username=user.
Easy? Well, the issue is that in my setup /slave/ is going to be variable so I need to resolve it in code. I can probably do that reasonably easily with a spot of PHP but I wonder... surely there is an easier alternative provided by Nginx?I should perhaps mention:
Ubuntu 14.04 on both master & slave
Nginx 1.6.2 installed via apt-get nginx-extras (the Lua module enabled flavor)
php5-fpm 5.5.9
MariaDB 10
Adminer 4.2.1 | Adminer login via a reverse proxy
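The fix the asker is circling around, re-prefixing the upstream's absolute-path Location header with the proxy base path, is a one-line string operation. A sketch reusing the question's /slave example (an illustration, not Adminer's or nginx's actual code):

```python
def reprefix_location(location: str, base: str) -> str:
    """Prepend the reverse-proxy base path to an absolute-path redirect, idempotently."""
    if location.startswith("/") and not location.startswith(base + "/"):
        return base + location
    return location

print(reprefix_location("/admin/adminer.php?username=user", "/slave"))
# -> /slave/admin/adminer.php?username=user
```

The same transformation is what nginx's proxy_redirect directive performs declaratively for Location headers.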
The handler was:
- name: restart nginx
service: name=nginx state=restarted enabled=yes
It seems that the state and enabled flags cannot both be present. By trimming the above to the following, it worked.
- name: restart nginx
service: name=nginx state=restarted
Why this is, and why it started breaking suddenly, I do not know. | I have a task in a playbook that tries to restart nginx via a handler as per usual:
- name: run migrations
command: bash -lc "some command"
notify: restart nginx
The playbook however breaks on this error:
NOTIFIED: [deploy | restart nginx] ********************************************
failed: [REDACTED] => {"failed": true}
msg: failure 1 running systemctl show for 'nginx.service': Failed to get D-Bus connection: No connection to service manager.
The handler is standard:
- name: restart nginx
service: name=nginx state=restarted enabled=yes
And the way that I've set up nginx is not out of the ordinary as well:
- name: install nginx
apt: name=nginx state=present
sudo: yes
- name: copy nginx.conf to the server
template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
sudo: yes
- name: delete default virtualhost
file: path=/etc/nginx/sites-enabled/default state=absent
sudo: yes
- name: add mysite site-available
template: src=mysite.conf.j2 dest=/etc/nginx/sites-available/mysite.conf
sudo: yes
- name: link mysite site-enabled
file: path=/etc/nginx/sites-enabled/mysite src=/etc/nginx/sites-available/mysite.conf state=link
sudo: yes
This is on a ubuntu-14-04-x64 VPS. | Nginx cannot restart via Ansible
You can run a shell in the image, with:
docker run -t -i --entrypoint bash paintedfox/nginx-php5
Then change the configuration files as you like. Note the container ID (it appears in the prompt, e.g. root@9ffa2bafe2bb:/#), then commit it to a new image:
docker commit 9ffa2bafe2bb my-new-nginx
You can then run the new image (my-new-nginx). | I installed a docker image from the registry by doing:
docker pull paintedfox/nginx-php5
Now I wish to make some changes to this nginx's config files to add some domains. I believe the config files are somehow held inside the docker image, but where is the image? How can I change these config files? | After pulling a Docker from the repository, how to change the images files?
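The docker commit route above works, but a more reproducible way to customize a pulled image is a Dockerfile, so the change lives in source control instead of an ad-hoc commit (the config file name here is an example):

```dockerfile
FROM paintedfox/nginx-php5
# Overlay your own vhost configuration onto the image
COPY my-site.conf /etc/nginx/sites-enabled/my-site.conf
```

Build with `docker build -t my-new-nginx .` and run it in place of the original image.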
Actually, that's not a filename. That's a python module path. The relevant file is actually mysite/wsgi.py (but to import it in a python interpreter, you'd have to import mysite.wsgi, hence the name used on the command line). | I am following this tutorial http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html to set up django with nginx and uwsgi. But I am confused about this line:
uwsgi --http :8000 --module mysite.wsgi
In the tutorial there's nothing about the mysite.wsgi file. What should be the content of this file? | How to create mysite.wsgi file?
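For reference, all a WSGI module has to expose is a callable named application. The sketch below is a generic hand-rolled one; a real Django project's mysite/wsgi.py is generated by startproject and instead obtains the callable via django.core.wsgi.get_wsgi_application():

```python
def application(environ, start_response):
    """Minimal WSGI callable: the object a server like uWSGI loads from 'mysite.wsgi'."""
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Drive it by hand the way a WSGI server would:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

result = b"".join(application({"REQUEST_METHOD": "GET"}, fake_start_response))
print(captured["status"], result)
```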
The best way is compiling nginx from source to support the stream directive:
./configure --prefix=/opt/nginx --sbin-path=/usr/sbin/nginx --conf-path=/opt/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --with-http_ssl_module --with-threads --with-stream --with-http_slice_module
make
sudo make install | I want to use Nginx 1.9 to be a TCP load balancer. I followed the tutorial in https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ but it didn't work.Every time I tried to start nginx, I got errors:
nginx: [emerg] unknown directive "stream" in /opt/nginx/nginx.conf
Here is my nginx.conf file:
events {
worker_connections 1024;
}
http {
# blah blah blah
}
stream {
upstream backend {
server 127.0.0.1:9630;
server 127.0.0.1:9631;
}
server {
listen 2802;
proxy_connect_timeout 1s;
proxy_timeout 3s;
proxy_pass backend;
}
}
Would you pls tell me how to configure it right? | Configure Nginx to be a TCP load balancer
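By default nginx round-robins TCP connections across the servers of an upstream block like the one in the question. The selection logic amounts to this sketch (addresses reuse the question's backends):

```python
from itertools import cycle

def make_balancer(servers):
    """Return a function that hands out upstream servers round-robin."""
    pool = cycle(servers)
    return lambda: next(pool)

pick = make_balancer(["127.0.0.1:9630", "127.0.0.1:9631"])
print([pick() for _ in range(4)])
# ['127.0.0.1:9630', '127.0.0.1:9631', '127.0.0.1:9630', '127.0.0.1:9631']
```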
In testing Passenger 5.1, I found that setting passenger_friendly_error_pages off is not enough to change the default error page. This disables exposing the backtrace or environment variables but still shows Passenger's error page.To resolve this I had to set the following:
passenger_intercept_errors on;
error_page 500 /500.html;
The command passenger_intercept_errors tells nginx to handle status codes of 400 or higher. The error_page command customizes the error. You might want to customize other errors as well.For a Rails app, the location of the pages is relative to the public folder of the app (what you set in the root command for nginx).As mentioned in this comment, the similar configuration for Apache is:
PassengerErrorOverride on
ErrorDocument 500 /path/to/500.html | Currently if there's a problem launching a Rails app on our server, users are taken to a Passenger error page with an error like "Ruby (Rack) application could not be started".Is it possible to customize this error page and display something else so users of a live site don't see this?I'm using nginx for the server.Thanks | Changing Passenger Default Error Page for Nginx |
So I ended up finding the solution. My regexp was a bit off, as I wasn't taking into account the possibility that the ?timestamp didn't exist.This regexp worked for me:
location ~* \.(ico|css|js|gif|jp?g|png)(\?[0-9]+)?$ { | I can't seem to get nginx to set expires headers on my static assets in my Rails app.My app is deployed using Phusion Passenger & nginx.Below is the related section of my nginx config file:
server {
listen 80;
server_name my.domain.tld;
root /home/deploy/my.domain.tld/current/public;
passenger_enabled on;
access_log off;
location ~* \.(ico|css|js|gif|jp?g|png)\?[0-9]+$ {
expires max;
break;
}
if (-f $document_root/system/maintenance.html) {
rewrite ^(.*)$ /system/maintenance.html break;
}
}
I'm not sure why it's not setting expires headers on my static assets (e.g. /images/foo.png?123456). I'm not sure if it has something to do with passenger or if my location regexp just isn't catching it. | nginx not setting expires headers on Rails static assets
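The difference between the two regexes can be checked directly: the original requires a ?timestamp, the fixed one makes it optional. (The strings below are raw URIs for demonstration; nginx itself matches location regexes against the decoded path.)

```python
import re

old = re.compile(r"\.(ico|css|js|gif|jp?g|png)\?[0-9]+$")
new = re.compile(r"\.(ico|css|js|gif|jp?g|png)(\?[0-9]+)?$")

print(bool(old.search("/images/foo.png")))         # False: old misses plain assets
print(bool(new.search("/images/foo.png")))         # True
print(bool(new.search("/images/foo.png?123456")))  # True
```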
I found out where the issue was. If you have the same issue you're probably using the ppa:ondrej/nginx-mainline repository, and you have:
font/woff woff;
font/woff2 woff;
Instead of:
font/woff woff;
font/woff2 woff2;
See the file on the nginx/nginx master branch for reference. | After the latest Nginx update (currently nginx/1.21.6), the following warning started to appear when I do a nginx -t:
nginx: [warn] duplicate extension "woff", content type: "font/woff2", previous content type: "font/woff" in /etc/nginx/mime.types:29
The same issue is happening on all my servers, with Ubuntu 18.04 or 20.04 + latest nginx mainline.I never edited the mime.types files, which has the following:
types {
[...]
font/woff woff;
font/woff2 woff;
}
From what I understand it doesn't like these two lines having the same value, but which one should I delete? | duplicate extension "woff", content type: "font/woff2", previous content type: "font/woff" in /etc/nginx/mime.types
In your nginx config you have:
listen 9000;
server_name test.dev;
So your domain should be resolved with:
http://test.dev:9000
But you should also add test.dev to your Windows hosts file %windir%\system32\drivers\etc\hosts:
127.0.0.1 test.dev | I am new to WSL2 but so far it works really nice.
I have a simple HTML page I want to serve with Nginx, but I want to access it with a web browser on the host. The default nginx webpage works(!), so I started to mimic the default nginx html page (/var/www/html/index.html).I have created:
/var/www/test.dev/index.html
/etc/nginx/sites-available/test.dev (+ symlink in sites-enabled/)
Nginx config:
server {
listen 9000;
listen [::]:9000;
server_name test.dev;
root /var/www/test.dev;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
So the only big difference to the default config is the port 9000.I reloaded/restarted nginx and I tried to curl my configs:
$ curl https://localhost
$ curl https://localhost:9000
Both requests were successful.But now I want to access the pages on my Windows host with a web browser.
The first one (default) works and I can see the default Nginx HTML page.
The second one does not work: site can't be reached.So my questions:
1. Why is that? Do I have to make some changes to the Windows Firewall settings?
2. I like to have a virtual host name like example.com instead of localhost:9000
I've edited /etc/hosts... it works with curl but again not in the host browser | Nginx running in WSL2 (Ubuntu 20.04) does not serve HTML page to Windows 10 host on another port than port 80
I just put together a little Nomad job showing this working, so you may have a slight configuration error. To allow you to run the job yourself I have made it available as a gist here. In the same gist I have a nginx.conf that has nginx listen on whatever port is in the Nomad job file.Here is the Nomad job:
job "nginx" {
datacenters = ["dc1"]
type = "service"
group "cache" {
count = 1
task "redis" {
driver = "docker"
config {
image = "nginx:1.11.10"
volumes = ["new/default.conf:/etc/nginx/conf.d/default.conf" ]
network_mode = "host"
}
artifact {
source = "https://gist.githubusercontent.com/dadgar/2dcf68ab5c49f7a36dcfe74171ca7936/raw/c287c16dbc9ddc16b18fa5c65a37ff25d2e0e667/nginx.conf"
}
template {
source = "local/nginx.conf"
destination = "new/default.conf"
change_mode = "restart"
}
resources {
network {
mbits = 10
port "nginx" {
static = 8080
}
}
}
}
}
}
I can then query that address and see that nginx is bound to that port, thus the template being mounted is working properly.
$ curl http://127.0.0.1:8080
Welcome to nginx!
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
If you take a look at the gist, I show the file being rendered and mounted properly as well.Hope this helps you! Also be sure to check out the community page for getting help. We have both a Gitter room and a mailing list. | With the assumption that Consul and Nomad have been configured to run on a pool of resources, how would you render a template file for the sole purpose of generating e.g. an Nginx 'default.conf' file?Using the template stanza configuration below as an example, Nomad fails to generate a default.conf 'file'; instead a default.conf 'directory' is created.
template {
source = "/path/to/tmp.ctmpl"
destination = "folder/default.conf"
change_mode = "restart"
change_signal = "SIGINT"
}
I'm either missing a trick, or have misunderstood the functionality of the 'template stanza'.One of the issues with the template generating a directory rather than a file is that you cannot mount a directory to a config file path. So running a task that uses the Nomad docker driver with the exemplar 'docker' volume config results in an error:
volumes = ["/path/to/job/folder/default.conf:/etc/nginx/conf.d/default.conf" ]
Or is it impossible to have the template stanza generate a config file?P.s. Using Nomad build 0.5.5 | How would you use Hashicorp's Nomad 'template stanza' to generate an nginx config file through the Nomad job file?
I have the same problem. You forgot to change the default sockets from
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
to
fastcgi_pass unix:/var/run/php/php5.6-fpm.sock;
- you can find the configuration file at /etc/nginx/sites-available/default.
Before:
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
After:
fastcgi_pass unix:/run/php/php5.6-fpm.sock; | I have an Ubuntu web server running Nginx. I was running PHP 5.5.30 and I installed PHP 5.6.23 using the following commands:
1) sudo apt-add-repository ppa:ondrej/php
2) sudo apt-get update
3) sudo apt-get install php5.6
The installation is under a new path from the previous version of PHP (/etc/php/5.6/). When I run a phpinfo() command from a web page I still get it running under the old version of PHP (5.5.30) - how do I get Nginx looking at the new installation?p.s. When I run php --version from the command line it shows PHP 5.6.23!
p.p.s My nginx.conf file contains fastcgi_pass unix:/run/php/php5.6-fpm.sock; | How to get Nginx to use alternative PHP version?
After I gave ownership of my www folder to my nginx user (as defined in /etc/nginx/nginx.conf), it works!
chown -R www-data:www-data www | I have a php file on my NGINX (with php-fpm) that creates a simple txt file.But this works only when I give my "www" folder 777 permission. My index.php is placed in my www folder.What is wrong with my user settings on nginx and php-fpm? | nginx php-fpm failed to open stream permission denied
You can add this to your config:
location ~* ^/(assets|files|robots\.txt) { }
This will work correctly with your location / rule.Your config also needs to add the root document and default index file.
...
root /ftp/wardrobe;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.php?/$request_uri;
}
location ~* ^/(assets|files|robots\.txt) { }
... | Here is the rule in English:Any HTTP request other than those for index.php, the assets folder, the files folder and robots.txt is treated as a request for your index.php file.I have an .htaccess file that works correctly on an Apache server:
RewriteCond $1 !^(index\.php|assets|files|robots\.txt)
RewriteRule ^(.*)$ index.php/$1 [L]
Some correct results for this rule:
example.com = example.com/index.php
example.com/index.php/welcome = example.com/welcome
example.com/assets/css/main.css != example.com/index.php/assets/css/main.css
I tried some tools to convert from htaccess rules to nginx rules but all were incorrect.Via http://winginx.com/htaccess (missing the exception rules for assets folder...):
location / { rewrite ^(.*)$ /index.php/$1 break; }
Via http://www.anilcetin.com/convert-apache-htaccess-to-nginx/ (error in $1 value):
if ($1 !~ "^(index.php|assets|files|robots.txt)"){
set $rule_0 1$rule_0;
}
if ($rule_0 = "1"){
rewrite ^/(.*)$ /index.php/$1 last;
}
How can I fix this? It's really hard to debug the rule.Here is my nginx config so far:
server {
listen 80;
server_name www.example.com example.com;
location / {
try_files $uri $uri/ /index.php?/$request_uri;
}
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
} | Nginx rewrite rule for CodeIgniter |
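The Apache condition `$1 !^(index\.php|assets|files|robots\.txt)` above is a negated prefix test, and modelling it clarifies what any nginx translation must reproduce: excluded prefixes are served as-is, everything else is routed through index.php. A sketch using the question's example URLs:

```python
import re

EXCLUDED = re.compile(r"^(index\.php|assets|files|robots\.txt)")

def route(uri: str) -> str:
    """Front-controller routing: all but the excluded prefixes go to index.php."""
    path = uri.lstrip("/")
    if EXCLUDED.match(path):
        return uri  # served as-is
    return "/index.php/" + path

print(route("/welcome"))              # /index.php/welcome
print(route("/assets/css/main.css"))  # /assets/css/main.css
print(route("/robots.txt"))           # /robots.txt
```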
Updated Answer
I have finally figured out a way to get this working. It's not at all obvious, but when hosting socket.io on a subfolder, you do NOT use the subfolder in the connect statement. This is what I was doing before and the client was never receiving a response.
Not Working
http://localhost:8080/test/
This is the part which is throwing things off. That creates a namespace for the local socket which the server side does not respect. So the client sends the message on the namespace '/test/' but the server responses are going to an empty namespace '', so the client never gets the messages. The workaround is to simply remove the '/test/' and make sure you are using the resource variable on the client and the server.
Working!
I hope this helps you get things working on your end.Original AnswerIt is not a problem with your setup, it is a problem of socket.io not wanting to work on sub folder. I would bet willing to be that if you dropped the /sockets your example would work fine. I ran into the exact same problem when using http-node-proxy trying to host socket.io connections on subfolders. There was a bug created a while ago, but it was closed out and never resolved.https://github.com/LearnBoost/socket.io-client/issues/185https://github.com/LearnBoost/socket.io/issues/320I am still looking for a solution as well, but I have a feeling I'm going to have to roll up my sleeves and dig into the code myself. | So I have been trying to get this to work 2 days and I am stuck. This is my first time configuring a server for rails that uses NodeJS+Socket IO. I am a noob with NGINX and Unicorn. Basically the NodeJS+SocketIO part of my app will push messages to users who are connected to my app. This is my nginx.confserver{
listen 80 default;
root /home/deployer/saigon/public;
try_files $uri/index.html $uri @unicorn;
location /sockets {
proxy_pass http://localhost:3210;
}
location @unicorn {
proxy_pass http://localhost:3000;
}
}
And in my production.rb, I have configured the url which the user will have to send messages to / receive messages from:
SOCKET_IO_URL = 'http://localhost:8080/sockets'
Why 8080? I use Vagrant to forward 8080 -> 80.I tried accessing http://localhost:8080/sockets and I was able to get the socket welcome message. I looked at my NodeJS server log and it was receiving messages alright. However, when it comes to broadcasting..... it just does not do it. Has anyone ever gotten this kind of app to work with the set up I am trying to do? Should I just go with Apache + Unicorn? | NGINX configuration to work with Socket.IO
You can do this with return 200:
location = /echo_user_agent {
return 200 $http_user_agent;
} | Is it possible to configure Nginx to return a response body created from a request header, or a request parameter? It looks like this can be done with theechomodule, but if possible I would like to do it with a vanilla install of Nginx.Basically I want to do the following, but obviouslyreturn_bodydoesn't exist, so what can I use instead?location ~* ^/echo/(.+) {
return_body $1;
}orlocation /echo_user_agent {
return_body $http_user_agent;
}
If I install the echo module I could replace return_body with echo, but if at all possible it would be nice to be able to do this without having to install any extras; it seems to me like something simple like this should be possible to do without. | Can I echo a request header value as the response body with vanilla Nginx?
You have to install the dependencies.
Generally these will be enough:
libpcre3 libpcre3-dev libpcrecpp0 libssl-dev zlib1g-dev
so you can first install them:
sudo apt-get install libpcre3 libpcre3-dev libpcrecpp0 libssl-dev zlib1g-dev
and then compile. Also make sure you run the make command as root. | I downloaded nginx from its site for linux (I use ubuntu 10.4).I extracted nginx-1.0.6.tar.gz and there was a configure file in that directory. So I entered the "./configure" command in the shell. It seemed to be configured right.After I entered the "make" command, it said this error:
make -f objs/Makefile
make[1]: Entering directory `/usr/local/nginx'
cd ./auto/lib/pcre/ \
&& if [ -f Makefile ]; then make distclean; fi \
&& CC="gcc" CFLAGS="-O2 -fomit-frame-pointer -pipe " \
./configure --disable-shared
/bin/sh: ./configure: not found
make[1]: *** [auto/lib/pcre//Makefile] Error 127
make[1]: Leaving directory `/usr/local/nginx'
make: *** [build] Error 2
What should I do now? | nginx install on linux
I fixed it by removing nginx.pid in /usr/local/var/run and then using brew services start nginx | Greetings,
I have a server with nginx streaming installed.
I stopped nginx and then reloaded it, but it showed me the below error and would not start:
nginx -s stop
nginx: [error] open() "/usr/local/var/run/nginx.pid" failed (2: No such file or directory)
I did the below command and nothing:
sudo nginx
nginx: [emerg] open() "/usr/local/var/run/nginx.pid" failed (2: No such file or directory)
How can I resolve it?
Can anyone give me the PID file content so I can create it (fake)?Thank you for your help. | Nginx reload error in mac osx
This is not entirely correct: to run PHP with apache you will need either the apache mod_php or to run it as a FastCGI module. For Nginx the latter seems to be the norm.For Ruby there's Phusion Passenger that fills this role, and it supports both apache and nginx. On apache it runs as a plugin module just the way mod_php does. For Nginx I'm not sure.You may want to run your ruby applications using a dedicated application server, however. This is where Unicorn, Puma etc. come in. There's nothing preventing you from doing a similar setup for php, but it's less common.Another thing which makes php easier to deploy in many cases is that most distros and server installs come with apache and nginx already set up to handle php, while you need to set this up on your own for ruby.Once set up, Passenger makes deploying ruby apps almost (but not quite) as simple as deploying php apps. | In php, you only need apache or nginx. Why does ruby rails also need something like puma or unicorn when nginx is already installed? | Why does ruby rails need puma or unicorn?
You add a new folder mapping into the "sites" block of Homestead.yml, like so:
- map: myapp.com
to: /home/vagrant/Code/myapp/public
That's all there is to adding a new vhost. To modify it, edit the appropriate file in /etc/nginx/sites-available. In the above case, this would be /etc/nginx/sites-available/myapp.com. See here for example customization of a newly added vhost. Note that the quick tip linked here uses Homestead Improved, a slightly enhanced version of Homestead, as described here.More Homestead related posts can be found on our site via the Homestead tag. | I am using Laravel homestead. For a project I need a special vhost configuration, where should I define this? | Laravel Homestead vhost configuration
The order in which nginx determines which location matches can be found here: http://wiki.nginx.org/HttpCoreModule#location
Using either of these will be matched before any other regular expression:
location = /donotexposeme.php
root /var/www/public_html;
index index.php;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/public_html$fastcgi_script_name;
include fastcgi_params;
}
location /donotexposeme.php {
deny all;
}Of course, I dosudo service nginx reload(orrestart) each time I edit the config. | Deny access to a PHP file (Nginx) |
By default,nginxdoes not pass headers containing underscores.Try:underscores_in_headers on;Seethis documentfor details.Alternatively, useAPI-KEYorX-API-KEYinstead ofAPI_KEY. | I have nodejs and nginx running i am sending an additional header in the API 'api_key' and its not received inreq.headersandreq.get('api_key')in nodejs i have bellow configuration file for nginxserver {
listen 80;
listen [::]:80 default_server ipv6only=on;
server_name mysite.com;
return 301 https://$host$request_uri;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:9102/;
proxy_set_header Host $http_host;
proxy_set_header api_key $http_api_key;
#proxy_set_header api_key 'Some Value'; works
proxy_redirect off;
}
}

If I set the value with proxy_set_header api_key 'some value' it works and the headers are printed on the console, but the api_key is subject to change; that's why I am using $http_api_key, so that whatever comes in the api_key custom header is received exactly as it was sent from the REST client. I have tried a couple of solutions like proxy_set_header api_key $upstream_http_api_key; but no help. I want to receive any custom header sent from the REST client in Node.js. | Custom headers not received in nodejs while using nginx reverse proxy
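A detail behind this answer worth knowing: nginx exposes request headers as $http_* variables by lowercasing the name and mapping dashes to underscores, so API-KEY and api_key both surface as $http_api_key; underscores_in_headers only controls whether underscore-named headers are accepted at all. A small sketch of that name mapping:

```python
def nginx_http_var(header_name):
    # nginx derives $http_* variable names from request header names by
    # lowercasing them and replacing dashes with underscores.
    return "$http_" + header_name.lower().replace("-", "_")

print(nginx_http_var("API-KEY"))    # $http_api_key
print(nginx_http_var("api_key"))    # $http_api_key
print(nginx_http_var("X-API-KEY"))  # $http_x_api_key
```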
Configure a Unicode domain name using punycode format in nginx:

server_name xn--privatinstruktr-jub.dk; | I am trying to set up a server with a domain name called "privatinstruktør.dk" but I keep getting redirected to the default "welcome to nginx" page. I have tried to type in the server_name like this:

server {
listen 80;
server_name privatinstruktør.dk;
location / {
root /var/www/privat;
}
}

but that did not work. So I tried using regular expressions like server_name "~^privatinstrukt(.+)r\.dk$"; and server_name "~^privatinstrukt(.*)r\.dk$"; and even server_name "~^privat(.*)$"; But they all fail and I am redirected to the default page. Does anyone have a hint on how to fix this? | Unicode domain name in Nginx server_name
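If you need to derive the punycode form yourself rather than look it up, Python's built-in idna codec does the conversion (a small illustration; it produces the xn-- form used in the answer above):

```python
# Convert a Unicode domain to its ASCII (punycode/IDNA) form for use in
# an nginx server_name directive, then check that it round-trips.
unicode_name = "privatinstruktør.dk"
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)  # an xn--....dk name
assert ascii_name.startswith("xn--") and ascii_name.endswith(".dk")
assert ascii_name.encode("ascii").decode("idna") == unicode_name
```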
This is how I solved the problem.

First, you have to configure your nginx vhost with SSL (no websocket connected yet). I use Let's Encrypt (https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04)

Second, you can create a path on your vhost to proxy to your websocket. This way nginx handles the SSL protocol and you also do not use another port:

location /ws/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
}After that your script will have to connect to `https://localhost/ws/Note: I use port 3000 for mylaravel-echo-serverintance and not 6001 as stated in the original question | I'm wondering what's the proper (or any for that matter) way of setting up Laravel Echo on an https production server. I've had it working on my local vagrant for a little bit now, and now that I've pushed the changes to production I'm unable to get the script to connect to the node server. Here is what I currently have.var echo = require('laravel-echo-server');
var options = {
host: 'https://localhost',
port: '6001',
sslCertPath: '/etc/nginx/ssl/nginx.crt',
sslKeyPath: '/etc/nginx/ssl/nginx.key'
};
echo.run(options);

And then in javascript:

import Echo from "laravel-echo"
window.echo = new Echo({
broadcaster: 'socket.io',
host: 'https://localhost:6001'
});

The above configuration is how I started out, but I've tried many other combinations, including trying to edit the nginx configuration to bypass https altogether. If bypassing https is the method that's required, any advice on how to do this with Laravel Echo would be appreciated, since the socket.io threads on this topic that I've been referencing don't seem to do the trick for me. | How to use Laravel Echo on production server with https
Instead of try_files $uri $uri/ /index.html; I used try_files $uri/ $uri /index.php?$query_string; | When I access 127.0.0.1:6789 it works fine, but when I try to access something like 127.0.0.1:6789/busca.html?q=a, I get a 500 Internal Server Error. This is my nginx config file:

server {
listen 88;
root /vagrant/rizqcursosonline/rizqcursosonline/frontend/wwwpublic;
index index.php index.html index.htm;
server_name example.com;
location / {
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /vagrant/rizqcursosonline/rizqcursosonline/frontend/wwwpublic/;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

nginx log error:

2014/04/12 18:16:32 [error] 4165#0: *5 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.0.2.2, server: example.com, request: "GET /busca.html?q=a HTTP/1.1", host: "127.0.0.1:6789", referrer: "http://127.0.0.1:6789/"

2014/04/12 18:16:32 [error] 4165#0: *7 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.0.2.2, server: example.com, request: "GET /favicon.ico HTTP/1.1", host: "127.0.0.1:6789" | nginx 500 Internal Server Error
So, as you suggested, here comes the answer: when you make a GET request to your server (nginx + upstream), $request_time comes out with a normal and acceptable value. That happens because your upstream server doesn't really take part in it, and even when it does, it finishes properly. Problems start when you are doing a POST request. According to the nginx docs, the value of the $request_time variable (available only at logging) is computed when all data have been sent and the connection has been closed (by all upstreams and the proxy as well). Only then is the info appended to the log.

So how do you check whether everything is correct?

First, do a GET request to your server and watch the log file. Notice how much time it really takes to finish the call and append the log entry to the file; it should be a realistic value.

Next, do a POST request to your server and watch the log file again. Here you will probably see that the log entry doesn't arrive at all, or only after a very long period.

What does that mean? Check your nginx conf and your upstream conf, because somewhere there could be a place where a connection isn't closed and just hangs in the air. Those connections may be cleaned up later by your OS or upstream server, but they can cause other problems besides a strange $request_time value. | I'm encountering some weirdness in my access log running Nginx on Windows. I've included $request_time in my access log as well as $upstream_response_time (running Django as an fcgi upstream). It's my understanding that the log should represent the request time in milliseconds, but its output looks like this:

ip|date|request_time|upstream_response_time
xx.xx.xx.xxx|[29/Jan/2013:15:29:57 -0600]|605590388736.19374237|0.141
xx.xx.xx.xxx|[29/Jan/2013:15:30:39 -0600]|670014898176.19374237|0.156

Any ideas what the heck that gigantic number is!? Here's the full log format (I removed a few columns in the above example):

log_format main '$remote_addr|[$time_local]|$request|$request_time|$upstream_response_time|'
'$status|$body_bytes_sent|$http_referer|'
'$http_user_agent';

Using pipe delimiters. | Nginx $request_time and $upstream_response_time in Windows
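Whatever the cause of the oversized value, a quick way to eyeball the fields during analysis is to split the pipe-delimited lines. A sketch, assuming the trimmed four-column sample shown in the question:

```python
def parse_log_line(line):
    # Split one line of the trimmed pipe-delimited sample above:
    # address | [time_local] | request_time | upstream_response_time
    addr, time_local, request_time, upstream_time = line.split("|", 3)
    return addr, time_local, float(request_time), float(upstream_time)

sample = "10.0.0.1|[29/Jan/2013:15:29:57 -0600]|605590388736.19374237|0.141"
print(parse_log_line(sample)[3])  # 0.141
```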
You should look into using Flask -- it's an extremely lightweight interface to a WSGI server (Werkzeug) which also includes a templating library, should you ever want to use one. But you can totally ignore that if you'd like. | I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework. I basically need to be able to get GET and POST params, probably (but not necessarily) in a way similar to PHP, and to get some other SERVER variables like REQUEST_URI, QUERY, etc. I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host would be most welcome! | How to run nginx + python (without django)
nginx rewrite rules example for WordPress 3:

server {
server_name *.example.com;
listen 80;
#on server block
##necessary if using a multi-site plugin
server_name_in_redirect off;
##necessary if running Nginx behind a reverse-proxy
port_in_redirect off;
access_log /var/log/nginx/example-com-access.log;
location / {
root /var/www/example.com/wordpress;
index index.html index.htm index.php;
rewrite ^.*/files/(.*)$ /wp-includes/ms-files.php?file=$1 last;
if (!-e $request_filename) {
rewrite ^.+/?(/wp-.*) $1 last;
rewrite ^.+/?(/.*\.php)$ $1 last;
rewrite ^(.+)$ /index.php?q=$1 last;
}
}
location ~* ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$
{
root /var/www/example.com/wordpress;
rewrite ^/.*(/wp-.*/.*\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js))$ $1 last;
rewrite ^.*/files/(.*(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js))$ /wp-includes/ms-files.php?file=$1 last;
expires 30d;
break;
}
location ~ wp\-.*\.php|wp\-admin|\.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/example.com/wordpress$fastcgi_script_name;
}
} | I'm trying to run a multi domain blog installation with WordPress and Nginx. The last step is to configure some rewrite rules in .htaccess (Apache only) for the webserver. How do I translate this into Nginx rewrite rules?

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# uploaded files
RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule . index.php [L] | Multi Site WordPress rewrite rules in Nginx |
const express = require('express');
const path = require('path');
const util = require('util');
const app = express();
/**
* Listener port for the application.
*
* @type {number}
*/
const port = 8080;
/**
* Identifies requests from clients that use http(unsecure) and
* redirects them to the corresponding https(secure) end point.
*
* Identification of protocol is based on the value of non
* standard http header 'X-Forwarded-Proto', which is set by
* the proxy(in our case AWS ELB).
* - when the header is undefined, it is a request sent by
* the ELB health check.
* - when the header is 'http' the request needs to be redirected
* - when the header is 'https' the request is served.
*
* @param req the request object
* @param res the response object
* @param next the next middleware in chain
*/
const redirectionFilter = function (req, res, next) {
const theDate = new Date();
const receivedUrl = `${req.protocol}:\/\/${req.hostname}:${port}${req.url}`;
if (req.get('X-Forwarded-Proto') === 'http') {
const redirectTo = `https:\/\/${req.hostname}${req.url}`;
console.log(`${theDate} Redirecting ${receivedUrl} --> ${redirectTo}`);
res.redirect(301, redirectTo);
} else {
next();
}
};
/**
* Apply redirection filter to all requests
*/
app.get('/*', redirectionFilter);
/**
* Serve the static assets from 'build' directory
*/
app.use(express.static(path.join(__dirname, 'build')));
/**
* When the static content for a request is not found,
* serve 'index.html'. This case arises for Single Page
* Applications.
*/
app.get('/*', function(req, res) {
res.sendFile(path.join(__dirname, 'build', 'index.html'));
});
console.log(`Server listening on ${port}...`);
app.listen(port); | I chose to use an Express server for deployment as pointed to in the deployment section of the create-react-app user guide. The Express server is set up on an EC2 instance and is fronted by an AWS ELB, where the SSL terminates. How do I set up redirection of http requests to https? I am open to using Nginx as well, if there is a solution. Appreciate any help. | How to redirect http to https for a reactJs SPA behind AWS Elb?
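The core decision the filter above makes is small enough to state on its own; here is the same check as a plain Python function (an illustration of the logic, not part of the Express code):

```python
def redirect_target(host, path, forwarded_proto):
    # Behind an ELB that terminates SSL, the client's original scheme
    # arrives in the X-Forwarded-Proto header. Redirect only plain http;
    # an absent header is the ELB health check and must be served as-is.
    if forwarded_proto == "http":
        return "https://%s%s" % (host, path)
    return None

print(redirect_target("example.com", "/app", "http"))   # https://example.com/app
print(redirect_target("example.com", "/app", "https"))  # None
```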
What you need is alias instead of root.

server {
listen 8082;
location /list {
alias D:/; ##### use alias, not root
autoindex on;
}
}

See Nginx -- static file serving confusion with root & alias | I'm completely new to nginx. I've installed nginx on a Windows PC. What I want to do is serve a list of files in D:\ on localhost:8082/list. If I use the following conf:

server {
listen 8082;
location / {
root D:/;
autoindex on;
}
}

I can correctly see what I want on localhost:8082. But if I change it to:

server {
listen 8082;
location /list {
root D:/;
autoindex on;
}
}

The page localhost:8082/list gives a 404 error. | Nginx error 404 when using autoindex
An updated answer for anyone who needs it: instead of

location /app1 {
proxy_pass http://10.131.6.181:3001/app1;
}

use

location /app1/ {
proxy_pass http://10.131.6.181:3001/;
}

or, if running locally:

location /app1/ {
proxy_pass http://localhost:3000/;
}

This is the correct way, and this way you will not need to modify Express. Express will receive only the part after /app1/ | I am completely stuck with a situation where I want to have several node applications on one server. I get this working fine by having the applications running on different ports. I can access the applications by putting in the IP address with port. I would like to proxy the applications from my nginx server by using different sub-directories like so:

my.domain
location /app1 {
proxy_pass http://10.131.6.181:3001;
}
location /app2 {
proxy_pass http://10.131.6.181:3002;
}

Doing this I had to move all the Express routes to /app1 for application1. This works, but now I am stuck with the static files. I can now access the application with http://10.131.6.181:3001/app1 which is great, but via http://my.domain/app1 the static files are not loaded. The static files can be accessed directly at http://10.131.6.181:3001/css but not via the proxy at http://my.domain/css. Ideally I would like to have the applications on different ports without the sub-directory in the Express routes, and only sub-directories in the proxy. I tried to put my head through the wall for the last 5 hours but didn't achieve anything. Now I would be happy if I can at least get the static files via the nginx proxy. | nginx proxy to remote node.js express app in subdirectory
This is by design -- curly brackets are special in nginx.conf; if they are used as part of a regular expression, then you have to put double quotes around the regular expression. From http://nginx.org/r/rewrite:

If a regular expression includes the "}" or ";" characters, the whole expressions should be enclosed in single or double quotes.

E.g.,

rewrite "^/test[^a-zA-Z0-9]{2}/?$" https://www.google.com/ permanent; | I am using this rewrite in NGINX:

rewrite ^/test[^a-zA-Z0-9]{2}/?$ https://www.google.com permanent; // doesn't work

The server fails to start when I add the min {2} repetition to the regex. The server comes up when I remove it, like here:

rewrite ^/test[^a-zA-Z0-9]/?$ https://www.google.com permanent; // this works

I have tried both {min,max} params. The error that I get when I use the min repetition is as below:

directive "rewrite" is not terminated by ";"

The context of this rewrite is server. Can someone tell me what I am missing? Is there some module that needs to be installed for this to work? My prod NGINX version is 1.4 and I tried it on my local with 1.10. | nginx rewrite regex min,max repetition
Yes (see the ssl_client_certificate and ssl_verify_client directives).

Depends on your application, but in this case, where you only need to verify that the certificate was signed by a certain CA, that's correct. You would need to create a CA and a client certificate signed by said CA, and use that CA for verifying the client certificate on the server side.

Now, what you need to consider is how you will solve the problem that the ssl_client_certificate and ssl_verify_client directives don't support being used in a location block (i.e. they can only be used in an http or server block). I would suggest creating a separate subdomain for the API (e.g. api.my_domain.com) and accessing the API from the service with that address.

Example configuration:

server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/public.crt;
ssl_certificate_key /etc/nginx/ssl/private.rsa;
ssl_client_certificate /etc/nginx/ssl/client_ca.pem;
ssl_verify_client on;
server_name api.my_domain.com;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app/api;
}
} | I have a Rails 4 project with some API. This project runs with nginx v.1.6.3 and https in production. Nginx configuration:

upstream app {
# Path to Unicorn SOCK file, as defined previously
server unix:/tmp/unicorn.my_domain.sock fail_timeout=0;
}
server {
listen 80;
server_name my_domain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/public.crt;
ssl_certificate_key /etc/nginx/ssl/private.rsa;
server_name my_domain.com;
root /var/www/current;
location /assets {
root /var/www/current/public;
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}

Problem

API requests (POST /api/some_path/create etc.) should be protected with two-way SSL. Only one service will use this API (only 1 client with one certificate).

Question

1. Is nginx able to handle two-way SSL?
2. Two-way SSL should be implemented on the nginx layer, not in the web-application logic. Am I right?
3. How do I set up nginx to catch clients which send requests to the /api/... url and authenticate them with two-way SSL?

I just need a basic example, to understand how it should work. | How to set up two-way SSL in Nginx for custom location?
Seems you have run out of memory. Many VPS servers are setup with no swap, so when you run out of memory, it will kill things off in a seemingly random manner.

The easiest way to fix it is to get more memory provisioned to your VPS, likely costing more money. The next best way (other than running less stuff and memory optimizing everything running) would be to add a swap partition or swap file.

For a 1GB swap file (as root):

dd if=/dev/zero of=/swapfile bs=1M count=1024
mkswap /swapfile
swapon /swapfile

Be sure to add it to /etc/fstab too as:

/swapfile none swap defaults 0 0

That will make it come back after reboot. | I am hosting my Rails app on an Ubuntu 12.04 VPS with Nginx + Unicorn. After deployment everything is fine, but a few hours later, when I ssh to the VPS, I get this message:

-bash: fork: Cannot allocate memory
-bash: wait_for: No record of process 4201
-bash: wait_for: No record of process 4201

If I run any command, it would just return -bash: fork: Cannot allocate memory. | SSH and -bash: fork: Cannot allocate memory VPS Ubuntu
The webserver is not a calculator or statistical program. Its logging function is to provide the raw data you can do your analysis with. If your analysis program is incapable of converting microseconds to seconds, you should shop around for other software. In any case, it is unrealistic to expect a program's logging function to perform unit conversions for you. The goal of logging is not to format, but to record what it has done without impacting the performance of its core functionality. | I'm trying to modify my nginx access log format to include the request duration, in seconds. I see two possible variables I could use:

1) $request_time
2) $upstream_response_time

However both of these variables are expressed in microseconds, and I need this value to be rendered in seconds. Is there any way to specify the output as an expression (i.e. $request_time * 1000) or accomplish this in some other way? Thanks | Writing the total request time in seconds to an nginx access log, possibly using a calculated variable
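In the spirit of that answer, the unit conversion belongs in your analysis step. For the record, current nginx documents $request_time as seconds with millisecond resolution, so a log post-processor can do e.g.:

```python
def request_time_ms(field):
    # $request_time is logged in seconds with millisecond resolution;
    # convert the string field to integer milliseconds during analysis.
    return round(float(field) * 1000)

print(request_time_ms("0.141"))  # 141
```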
Add proxy_intercept_errors on; to location /photos. Then your error_page 404 /myerrorfile.jpg will work even when the 404 error comes from the upstream server. | Hi, I would like to configure my nginx server to proxy for Amazon S3 and do something like mod_rewrite in Apache: if the proxied Amazon response is a 404 (the file doesn't exist on Amazon), then redirect me to my local file. Is it possible to do? This is my nginx config file:

upstream app{
server 127.0.0.1:3000;
}
server {
listen 0.0.0.0:80;
server_name www.mypage.com mypage.com;
access_log /var/log/nginx/mypagecom.log;
location /photos{
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://amazons3.mypage.com/photos;
proxy_redirect off;
error_page 404 /myerrorfile.jpg;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://app;
proxy_redirect off;
}
}

Can anyone help me? | nginx proxy and 404 redirect
Be careful that the default postgres port is 5432, not 5431. You should update the port mapping for the postgres service in your compose file. The wrong port might be the reason for the issues you reported. Change the port mapping and then try to connect to postgres:5432. localhost:5432 will not work. | I have a docker-compose file with services for python, nginx, postgres and pgadmin:

services:
postgres:
image: postgres:9.6
env_file: .env
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5431:5431"
pgadmin:
image: dpage/pgadmin4
links:
- postgres
depends_on:
- postgres
environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: pwdpwd
volumes:
- pgadmin:/root/.pgadmin
ports:
- "5050:80"
backend:
build:
context: ./foobar # This refs a Dockerfile with Python and Django requirements
command: ["/wait-for-it.sh", "postgres:5431", "--", "/gunicorn.sh"]
volumes:
- staticfiles_root:/foobar/static
depends_on:
- postgres
nginx:
build:
context: ./foobar/docker/nginx
volumes:
- staticfiles_root:/foobar/static
depends_on:
- backend
ports:
- "0.0.0.0:80:80"
volumes:
postgres_data:
staticfiles_root:
  pgadmin:

When I run docker-compose up and visit localhost:5050, I see the pgadmin interface. When I try to create a new server there, with localhost or 0.0.0.0 as host name and 5431 as port, I get an error "Could not connect to server". If I remove these and instead enter postgres in the "Service" field, I get the error "definition of service "postgres" not found". How can I connect to the database with pgadmin? | Connecting pgadmin to postgres in docker
If you ask nginx to dynamically gzip your content it will convert your ETags into weak ones. This is required by the specification, since a strong ETag can only be used for content that is byte-for-byte identical:

A validator is weak if it is shared by two or more representations of a given resource at the same time, unless those representations have identical representation data. For example, if the origin server sends the same validator for a representation with a gzip content coding applied as it does for a representation with no content coding, then that validator is weak. | I am confused by how the W/ appears in the etag when I have not added it.

I am using the Node.js http server module and have Nginx as a reverse proxy. The browser sees the ETag generated by the Node.js server, but with a W/ tagged onto it. Can someone explain where that W/ comes from? Does the browser insert it based upon its determination that it is a weak etag? I want the browser to get it as I sent it, without the W/ prefix. Here is the ETag header as seen in the browser:

etag: W/"asv1534746804282-d62serveav"

When trying to compare with if-none-match, I have to strip the W/.

Also, with the 304 status response, do I again have to send the ETag?

EDIT: I added the W/ myself so that Nginx leaves it unmodified. I hope my assumption is correct. It appears to be. | Where does the W/ in an etag appear from?
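The weak/strong distinction also dictates how If-None-Match should be compared. RFC 7232 defines a weak comparison (ignore W/ on either side) and a strong one (no weak tags allowed), which you can implement instead of hand-stripping the prefix:

```python
def weak_etag_match(a, b):
    # RFC 7232 weak comparison: drop any W/ prefix, then compare.
    strip = lambda tag: tag[2:] if tag.startswith("W/") else tag
    return strip(a) == strip(b)

def strong_etag_match(a, b):
    # RFC 7232 strong comparison: equal, and neither side is weak.
    return a == b and not a.startswith("W/")

print(weak_etag_match('W/"abc"', '"abc"'))    # True
print(strong_etag_match('W/"abc"', '"abc"'))  # False
```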
Not exactly the answer to the question, but the answer to the problem. Unfortunately this isn't documented very well. The solution is to create a service with a type of "ExternalName". According to https://akomljen.com/kubernetes-tips-part-1/ the service should look like this:

kind: Service
apiVersion: v1
metadata:
name: external-service
namespace: default
spec:
type: ExternalName
  externalName: full.qualified.domain.name

I just tried it and it works like a charm. | Setup: Kubernetes cluster on AKS with nginx-kubernetes ingress. Azure Application Gateway routes the domain with an SSL certificate to nginx-kubernetes. No problems serving everything in Kubernetes. Now I moved static content to Azure Blob Storage. There's an option to use a custom domain, which works fine, but it does not allow using a custom SSL certificate. The only possible way is to set up a CDN and use the Verizon plan to get custom SSL certificates. I'd prefer to keep all the routing in the ingress configuration, since some subroutes are directed to different Kubernetes services. Is there a way to mask and rewrite a path to the external blob storage URL in nginx-kubernetes? Or is there any other available option that proxies an external URL through ingress? I don't mind having direct blob storage URLs for resources, but the main entry point should use the custom domain. | Is there a way to serve external URLs from nginx-kubernetes ingress?
When you change the root it'll still include the directory name, so what you want to do is only set the root for location /. You also don't need any additional regex after /admin, as the ^ anchor in the ~ ^/admin location already tells nginx 'anything starting with /admin'. This works for your use case:

server {
listen 80;
index index.html;
location / {
root /var/www/html/www_new/front;
try_files $uri $uri/ /index.html;
}
location ~ ^/admin {
root /var/www/html/www_new; # the directory (/admin) will be appended to this, so don't include it in the root otherwise it'll look for /var/www/html/www_new/admin/admin
try_files $uri $uri/ /admin/index.html; # try_files will need to be relative to root
}
} | I have a really simple nginx configuration with 3 locations inside. Each of them has its own root directory, and I should be able to add another in the future easily.

What I want:

Request /admin => location ^/admin(/|$)
Request /admin/ => location ^/admin(/|$)
Request /admin/blabla => location ^/admin(/|$)
Request /client => location ^/client(/|$)
Request /client/ => location ^/client(/|$)
Request /client/blabla => location ^/client(/|$)
Request /blabla => location /
Request /admin-blabla => location /
Request /client-blabla => location /

Actual result: all requests go to location /. I tried many different suggestions from docs, stackoverflow and other sources using different combinations of aliases, try_files, roots and regexes, but nothing worked for me. Only when I tried to use just return 200 'admin'; and return 200 'front'; did it work as intended.

Minimal config:

server {
listen 80;
index index.html;
location / {
root /var/www/html/www_new/front;
try_files $uri $uri/ /index.html;
}
location ~ ^/admin(/|$) {
root /var/www/html/www_new/admin;
try_files $uri $uri/ /index.html;
}
location ~ ^/client(/|$) {
root /var/www/html/www_new/client;
try_files $uri $uri/ /index.html;
}
}

Directory structure:

/admin
/client
/front

Thank you | Nginx multiple locations with different roots
The above configuration should raise an error, as you have defined two default_server entries. Also, there is no point in defining two server blocks for the same domain. This is how you can achieve it:

map $request_uri $rot {
"~dir1" /var/www/dir1/Folder/app/;
default /var/www/dir2/Folder/app/;
}
server {
listen 80 default_server;
server_name mydomain.net;
root $rot;
index index.html;
location / {
try_files $uri $uri/ index.html =404;
}
}

The other way can be:

server {
listen 80 default_server;
server_name mydomain.net;
location /dir2/ {
root /var/www/dir2/PharmacyAdminApp1/app;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
location /dir1/ {
root /var/www/dir1/PharmacyAdminApp1/app;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
}

If it still throws any error, switch your error_log on, check which files are being accessed, and correct from there. | I am trying to create an nginx server that can host multiple sites on the same server. I have kept two different directories containing index.html files in the /var/www/ directory.

1st directory: dir1
Containing folder structure: dir1/Folder/app;
app directory contains index.html for the site.2nd directory : dir2
Containing folder strucuture as : dir2/Folder/app;
app directory contains index.html for the site.now inside /etc/nginx/conf.d, created 2 .conf files as test.conf and dev.conftest.conf :server {
listen 80 default_server;
server_name mydomain.net;
location /dir1/ {
root /var/www/dir1/PharmacyAdminApp/app;
index index.html index.htm;
try_files $uri $uri/ =404;
}
}

dev.conf:

server {
listen 80 default_server;
server_name mydomain.net;
location /dir2/ {
root /var/www/dir2/PharmacyAdminApp1/app;
index index.html index.htm;
try_files $uri $uri/ =404;
}
}

After this I restart the nginx server and visit "mydomain.net:80/dir1" in the browser, but I get 404. Can anyone let me know what the issue is? | nginx configuration for multiple sites on same server
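The map block in the answer's first variant is just a regex-keyed lookup; in Python terms it behaves like this (a sketch mirroring the answer's values):

```python
import re

def pick_root(request_uri):
    # Mirrors the nginx map above: URIs matching ~dir1 get dir1's root,
    # everything else falls through to the default (dir2).
    if re.search(r"dir1", request_uri):
        return "/var/www/dir1/Folder/app/"
    return "/var/www/dir2/Folder/app/"

print(pick_root("/dir1/index.html"))  # /var/www/dir1/Folder/app/
print(pick_root("/somewhere-else"))   # /var/www/dir2/Folder/app/
```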
That output means that the process is running, which is what you want. You should try accessing the URL from the browser directly after running the command, without pressing ctrl+c.

As a side note, you can write a bash script to do this, which will make it easier to add arguments to the gunicorn command. I have a gist that does just that: https://gist.github.com/marcusshepp/129c822e2065e20122d8

Let me know what other questions you might have and I'll add a comment. | I am trying to implement nginx + django + gunicorn for my project deployment. I am taking the help of the following article: http://tutos.readthedocs.io/en/latest/source/ndg.html. I followed the steps as described. Now, I am trying to start gunicorn. What I am getting on the screen is:

$ gunicorn ourcase.wsgi:application
[2016-05-19 19:24:25 +0000] [9290] [INFO] Starting gunicorn 19.5.0
[2016-05-19 19:24:25 +0000] [9290] [INFO] Listening at: http://127.0.0.1:8000 (9290)
[2016-05-19 19:24:25 +0000] [9290] [INFO] Using worker: sync
[2016-05-19 19:24:25 +0000] [9293] [INFO] Booting worker with pid: 9293

Since I am new to nginx & gunicorn, I am not sure whether the above is an error or not. I am getting nothing in the error log:

cat /var/log/nginx/error.log

It prints nothing on the screen. Please help me to solve this. | Gunicorn stuck at Booting worker with pid: 9293
When I realised that the problem was the conflict between Basic Auth and JWT (as @Curious suggested in the comments), and that they are both using the Authorization header, the solution was quite easy. I configured my front end application to send the JWT via a custom header, JWTAuthorization, so when the request hits the server, it contains both headers, Authorization & JWTAuthorization. Then it's pretty simple: after the basic auth is passed, I just replace the headers (here in the Node.js application, based on Koa):

app.use(function *(next) {
this.headers.authorization = this.headers.jwtauthorization;
yield next;
}); | I'm currently running a Node.js application, with an API and file serving (I know nginx could handle it, but I wasn't supposed to use it at first). I'm simply using nginx to have a simple basic auth, which happens to be not that simple. Here is my nginx config:

upstream nodejsapp {
server 127.0.0.1:1337;
keepalive 15;
}
server {
listen 80 default_server;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_redirect off;
location / {
proxy_pass http://nodejsapp;
proxy_set_header Connection "Keep-Alive";
proxy_set_header Proxy-Connection "Keep-Alive";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
}
}

The /etc/nginx/.htpasswd file is just user:encryptedpassword and is good.

With this config, when I go to my IP it:

- asks me for the user and password
- starts to load the page
- (sometimes) asks again for the user and password
- finishes loading the page

So far so good, even if it asked for the password twice. The Node.js app has JWT authentication; when I sign in, the website reloads, and from here it asks indefinitely for the user and password (basic auth) as long as I click on login. The JWT is in my local storage. If I click cancel on the basic auth prompt, the JWT is deleted and I'm logged out, and it... asks again for the basic auth. This is on Chrome. With Firefox and Safari, after the JWT login, it automatically deletes the token from the local storage (and I'm logged out). It's pretty difficult to explain and I can't show you the website. In short, the main problem is that the JWT (of the node.js app) is deleted. | Basic Auth and JWT
Replace the proxy_set_header Host $host; line with proxy_set_header Host $host:$server_port; to redirect the link without the port number.
server_name webmin.example.com;
listen 443;
ssl on;
ssl_certificate /etc/webmin/miniserv.pem;
ssl_certificate_key /etc/webmin/miniserv.pem;
access_log off;
error_log off;
location /RequestDenied {
return 418;
}
location / {
proxy_pass https://127.0.0.1:10000;
proxy_redirect off;
#Proxy Settings
proxy_redirect off;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 128k;
proxy_buffers 32 32k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
}
} | I already have a working https site running. My config below is working fine for webmin, except that when I login the web address rewrites the port no 10000 next to it, therefore getting the error "server not found". Can anyone help me to correct this please?

server {
server_name webmin.example.com;
listen 443;
ssl on;
ssl_certificate /etc/webmin/miniserv.pem;
ssl_certificate_key /etc/webmin/miniserv.pem;
access_log off;
error_log off;
location /RequestDenied {
return 418;
}
location / {
proxy_pass https://127.0.0.1:10000;
proxy_redirect off;
#Proxy Settings
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 128k;
proxy_buffers 32 32k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
}
} | Configuring Nginx reverse proxy for webmin |
This is a good reason to create a queue.
And you will:

- upload the csv file (that should be within 30 sec)
- run your background job that will import the user data (that can go for hours…)
- while this job is in progress you can serve some kind of WIP page with job status/percents/etc.

Check https://github.com/resque/resque for example. There are a lot of other queues. | I have a Rails (v3.2.13, Ruby 2.0.0) application running on nginx + Unicorn (Ubuntu 12.04). All is working well, except when an admin user is uploading users (thousands) via a CSV file. The problem is that I have set the timeout to 30 seconds and the import process takes much more time. So, after 30 seconds I get an nginx 502 Bad Gateway page (the Unicorn worker is killed). The obvious solution is to increase the timeout, but I don't want this because it'll cause other problems (I guess), because it's not typical behavior. Is there a way to handle this kind of problem? Thanks a lot in advance. PS: Maybe a solution is to modify the code. If so, I want to avoid making the user perform another request. Some ideas (don't know if possible):

- Setup a worker dedicated to this request.
- Send a "work in progress" signal to Unicorn to avoid being killed.

nginx-app.conf

upstream xxx {
server unix:/tmp/xxx.socket fail_timeout=0;
}
server {
listen 80;
...
location / {
proxy_pass http://xxx;
proxy_redirect off;
...
proxy_connect_timeout 360;
proxy_send_timeout 360;
proxy_read_timeout 360;
}
}

unicorn.rb

worker_processes 2
listen "/tmp/xxx.socket"
timeout 30
pid "/tmp/unicorn.xxx.pid" | How to configure nginx + Unicorn to avoid timeout errors? |
Sub-domain configuration starts with an entry in the DNS server of the parent domain, and the lookup resolves the sub-domain to an IP address of the web server. The web server in turn delegates the requests based on its configuration for the sub-domain. If you don't have a DNS setup in your sub-domain, then the admin at example.com needs to set up a CNAME alias. The alias points the subdomain to the same web server, which hosts the website for the parent domain. The canonical names (CNAMEs) are added for each of the subdomains. Once the subdomain is resolved to the IP address of the web server, the web server can route the request to a different website. | There are several questions on SO about nginx subdomain configuration but I didn't find one that's exactly the same as mine. Say I got a virtual host some.example.com from the higher-level net admin of example.com at our organization. I want to use some.example.com as my primary site and use foo.some.example.com and bar.some.example.com for auxiliary usage (proxy, etc). I tried this simple configuration and put it under sites-enabled but it didn't work:

server {
listen 80;
server_name some.example.com;
root /home/me/public_html/some;
index index.html index.htm;
}
server {
listen 80;
server_name foo.some.example.com;
root /home/me/public_html/foo;
index index.html index.htm;
}
server {
listen 80;
server_name bar.some.example.com;
root /home/me/public_html/bar;
index index.html index.htm;
}

In this setting some.example.com works fine, but for the other two the browser returns that it could not find foo.some.example.com. I'm running it on an Ubuntu server. Is there something wrong with this configuration? Or is it something I should talk to the higher-level net admin about (have foo.some.example.com and bar.some.example.com registered)? | nginx subdomain configuration on virtual host
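If the subdomains do get registered in DNS (e.g. via CNAME records as in the answer above), a wildcard server block is one way to catch them all with a single entry. This is an illustrative sketch, assuming wildcard DNS is in place:

```nginx
server {
    listen 80;
    # matches foo.some.example.com, bar.some.example.com, etc.
    server_name *.some.example.com;
    root /home/me/public_html/fallback;
    index index.html index.htm;
}
```

Exact server_name matches take precedence over wildcards, so the explicit per-subdomain blocks keep working alongside it.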
The actual answer to this question appears to be this Ubuntu-specific bug: https://bugs.launchpad.net/ubuntu/+source/libjpeg-turbo/+bug/1031718

You can work around the problem by putting the lines

setuid uwsgiuser
setgid uwsgiuser

into your upstart configuration file, and deleting the uid and gid settings from your uwsgi configuration. | I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) has a continual permissions error every time it spawns a new worker, like so:

Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
Error opening file for reading: Permission denied

Problem is, I don't know what file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and it's writing to that without issue. Any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak...hence my looking at the log file. I've experimented with changing the permissions of the entire /opt/ directory to include the uwsgiuser user, to no avail. I'm using a TCP socket, so permissions shouldn't be an issue there. Is it the cache? Does that have its own permissions? If so, where?

My Upstart conf file
description "uWSGI" start on runlevel [2345] stop on runlevel [06] respawn
env UWSGI=/usr/bin/uwsgi env LOGTO=/var/log/uwsgi/emperor.log
exec $UWSGI \
--master \
--emperor /etc/uwsgi/vassals \
--die-on-term \
--auto-procname \
--no-orphans \
--logto $LOGTO \
    --logdate

My Vassal ini file:

[uwsgi]
# Variables
base = /opt/env/mysiteenv
# Generic Config
uid = uwsgiuser
gid = uwsgiuser
socket = 127.0.0.1:5050
master = true
processes = 2
reload-on-as = 128
harakiri = 60
harakiri-verbose = true
auto-procname = true
plugins = http,python
cache = 2000
home = %(base)
pythonpath = %(base)/mysite
module = wsgi
logto = /opt/log/mysite/error.log
logdate = true | uWSGI Server log…permission denied to read file...which file? |
Most config directives can live inside location blocks (i.e., they are not global-only) and it's very common to do this in practice. You should have no trouble setting this up using only 1 instance of nginx. One of the great things about this is that you can set it up this way initially and then change your mind later by switching the location block to pass through to a backend server without any of that being visible to the outside world. So go ahead and do it on one server now, knowing that you can put in a backend server or cluster later as you need to scale up. | I'm about to deploy a Django application on an nginx web server, and want to make sure I'm building the system correctly. It seems to be common wisdom that if you are deploying Django on an apache server, then you should still put an nginx server in front of the application to serve static files, at which nginx is more performant. If instead of apache for the Django code, I would like to use nginx + FastCGI to host the Django application, is there any reason to configure a second nginx install to sit in front of the nginx server that is serving dynamic content, to handle static content as well as redirection to the dynamic content? Specifically, will there be different configuration parameters for the static and dynamic content that would make me want to keep the servers separate, or can I host it all in a single nginx installation, with some of the URLs being mapped to django content, and the rest being mapped to static content served from the same nginx install? Thanks for your advice! | nginx + FastCGI for django application---run two webservers or one?
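A minimal sketch of the single-instance layout suggested in the answer - one nginx server block serving static files directly and passing everything else to the Django FastCGI backend. The paths and port here are placeholders, not from the original question:

```nginx
server {
    listen 80;
    server_name example.com;

    # static content served straight from disk by nginx
    location /static/ {
        root /var/www/mysite;
    }

    # all remaining URLs go to the Django app over FastCGI
    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8000;
    }
}
```

Swapping the second location to a proxy_pass toward a separate backend later requires no change visible to clients.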
The default_type only applies to file extensions that have not been defined in the mime.types file. If the file extension is missing from the mime.types file, it's fairly safe to assume application/octet-stream, which most browsers will treat as a binary file and download it rather than attempting to render it. The mime.types file is simply a types directive with a long list of common MIME types and their associated file extension(s). See this document for details. | In the default nginx configuration file I see that the default_type is set to application/octet-stream. I understand the MIME types but I do not understand why we are setting a default type. What is the significance of this configuration? Can someone help me to understand this?

include /etc/nginx/mime.types;
default_type application/octet-stream; | default_type application/octet-stream in nginx.conf file |
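To illustrate the relationship between the two directives - mime.types supplies the known extension mappings, and default_type is only the fallback:

```nginx
http {
    # maps known extensions, e.g. ".html" -> text/html, ".png" -> image/png
    include /etc/nginx/mime.types;

    # used only when the requested file's extension is not in mime.types,
    # so browsers download unknown files instead of trying to render them
    default_type application/octet-stream;
}
```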
Each Pod has its own network namespace and its own IP address, though the Pod-specific IP addresses aren't reachable from outside the cluster and aren't really discoverable inside the cluster. Since each Pod has its own IP address, you can have as many Pods as you want all listening to the same port. Each Service also has its own IP address; again, not reachable from outside the cluster, though they have DNS names so applications can find them. Since each Service has its own IP address, you can have as many Services as you want all listening to the same port. The Service ports can be the same or different from the Pod ports. The Ingress controller is reachable from outside the cluster via HTTP. The Ingress specification you show defines HTTP routing rules. If I set up a DNS service with a .dev TLD and define an A record for ticketing.dev that points at the ingress controller, then http://ticketing.dev/api/users/anything gets forwarded to http://auth-srv.default.svc.cluster.local:3000/ within the cluster, and anything else under http://ticketing.dev/ goes to http://client-srv.default.svc.cluster.local:3000/. Those in turn will get forwarded to whatever Pods they're connected to. There's no particular prohibition against multiple Pods or Services having the same port. I tend to like setting all of my HTTP Services to listen on port 80 since it's the standard HTTP port, even if the individual Pods are listening on port 3000 or 8000 or 8080 or whatever else. | I am building a microservice full stack web application as (so far):

- ReactJS (client microservice): listens on 3000
- Authentication (Auth microservice): listens on 3000 // accidentally assigned the same port

Technically, what I have heard/learned so far is that we cannot have two Pods running on the same port. I am really confused how I am able to run the application (perfectly) like this with the same ports on different applications/pods?

ingress-nginx config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
## our custom routing rules
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
- path: /?(.*)
backend:
serviceName: client-srv
            servicePort: 3000

I am really curious, am I missing something here? | Kubernetes Ingress: two microservices running on the same port?
I assume your frontend is a Single Page Application. An SPA has static content like HTML, CSS, images, fonts etc. It is the perfect candidate to be deployed as a static website that gets data from the backend using REST APIs. In cloud environments like AWS, GCP it is recommended to host SPA applications separately from REST APIs. E.g. in AWS an SPA can be deployed in Amazon S3 and it remains directly accessible without going through the API gateway. The API gateway is supposed to be used only to route REST API calls and perform cross-cutting concerns such as authentication etc. However the problem with this approach is that you will get a CORS error while hitting REST APIs from the frontend because the SPA and API gateway will be hosted on different domains. To overcome this issue CORS needs to be enabled at the API gateway. If you are looking for a simpler solution, the frontend can be served from a service through the API gateway. Since static content is served only once in an SPA it should be okay to start with this approach for simpler applications.

https://aws.amazon.com/blogs/apn/how-to-integrate-rest-apis-with-single-page-apps-and-secure-them-using-auth0-part-1/
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html | Closed. This question is opinion-based. It is not currently accepting answers. I have recently begun exploring the concept of microservices and API gateways and am particularly confused on how frontend endpoints should be hosted. If I have an API gateway that acts as the middleman between requests to all of my services, where exactly should the frontend be hosted? If I request /api/example, I understand that my API gateway should route that to the appropriate service and forward that service's response. I do not understand however, how an API gateway should handle /home/ in a microservice context.
In this case, we want to deliver HTML/CSS/JavaScript corresponding to /home/ to the client making the GET request. Does this mean that we should have some sort of frontend service? Won't creating a service that just returns HTML/CSS/JS be redundant and add increased latency, since all we really need to do is just immediately return the HTML/CSS/JS associated with our frontend? An alternative I was thinking about was to have the API gateway itself provide endpoints that return the HTML/CSS/JS required for the client to render the frontend. In other words, the API gateway could just immediately respond with the HTML corresponding to /home/ when receiving a GET request to /home/ rather than calling a service. However, I read online that API gateways should not be actually serving endpoints, rather just proxying them to services. That is my main question: Where should frontend code go when your backend is built out using a microservice architecture? | Microservices, API Gateways, and Frontends [closed]
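For illustration only (not from the answer above): an nginx-based edge/gateway commonly combines both roles, serving the SPA bundle itself and proxying API paths to backend services. The hostnames and paths here are placeholders:

```nginx
server {
    listen 80;

    # REST API calls are routed to the backend services
    location /api/ {
        proxy_pass http://backend-service:3000;
    }

    # everything else serves the static frontend bundle,
    # with a fallback to index.html for client-side routes like /home/
    location / {
        root /var/www/spa/dist;
        try_files $uri $uri/ /index.html;
    }
}
```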
The nginx configuration file is called nginx.conf and on most systems is located at /etc/nginx/nginx.conf. nginx.conf may optionally contain include statements to read parts of the configuration from other files. See this document for more. Read your nginx.conf file to identify which files and directories are sourced, in which context, and in which order. Some distributions are delivered with an nginx.conf file that sources additional files from directories such as /conf.d/ and /sites-enabled/. There is also a convention on some distributions to symlink files between /sites-available/ and /sites-enabled/. The nginx -T command (uppercase T) is useful to list the entire configuration across all the included files. | I have the following config files and locations:

etc/nginx/nginx.conf
var/etc/nginx/sites-available/myproject
etc/nginx/conf.d/default.conf
etc/nginx/conf.d/web.conf

I'm confused regarding each conf file's role and rules: when to use one or another, are they loaded one after another or just one, and do directives overwrite each other? | What is the order of the config file for NGINX
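As a sketch of how these files typically relate on such a distribution (the exact include lines depend on the distro - check your own nginx.conf, or run nginx -T to see the merged result):

```nginx
# /etc/nginx/nginx.conf (simplified)
http {
    # generic snippets, loaded in alphabetical order:
    # e.g. default.conf, then web.conf
    include /etc/nginx/conf.d/*.conf;

    # per-site files, usually symlinks pointing into sites-available
    include /etc/nginx/sites-enabled/*;
}
```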
Short answer: not possible on your current setup.

When starting, nginx first creates a separate process for every group of virtual hosts that listen on the same IP:port combination, and then sets the capabilities of that process to be the sum of all capabilities of every virtual host in that group handled by said process. In your case, there's only one process that handles all the virtual hosts bound to *:443, so the process includes the http2 capability. In order to achieve what you want, you need to make nginx spawn a different process that doesn't have the http2 capability on a separate IP:port combination. For the virtual hosts you want to be accessed via http2, you must either:

- use a different port - trivial, just use another port for them (e.g. listen 8443 ssl http2;) and remove http2 from all the others (e.g. listen 443 ssl;)
- use a different IP - you need to add another IP to the same NIC that uses your current IP and modify your virtual hosts accordingly (e.g. listen new_ip:443 ssl http2; and listen current_ip:443 ssl; respectively)

Example config for multiple IPs:

server {
listen current_ip:443 ssl;
server_name http11-host.example.com;
...
}
server {
listen current_ip:443 ssl;
server_name another-http11-host.example.com;
...
}
...
...
server {
listen new_ip:443 ssl http2;
server_name http2-host.example.net;
...
}
server {
listen current_ip:443 ssl http2;
server_name another-http2-host.example.org;
...
} | I have several virtual hosts on nginx. Can I enable HTTP/2 for specific virtual hosts only on nginx? When I enable HTTP/2 for a virtual host, like:

server {
listen 443 ssl http2;
server_name a.b.com;
...
}

I can access a.b.com by HTTP/2.0. But now every other virtual host on the same nginx supports HTTP/2 too. But I want to access them only by HTTP/1.1. Is the http2 directive at server level? | Can I enable HTTP/2 for specific server blocks (virtual hosts) only, on Nginx?
It looks like this might be the solution to the problem: http://blog.seancarpenter.net/2013/09/02/rails-ssl-route-generation-with-nginx-and-unicorn/

Because the site is running behind nginx, nginx is terminating SSL, and Rails has no idea that this is the case. To pass this info on to Rails, the nginx config needs to be set so that it adds the X-Forwarded-Proto https header as it forwards the request on to the appserver. The example config from the above article shows...

location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https; # New header for SSL
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_something_server;
} | I have a Ruby on Rails app that has the following within a template...

When I request the page over https I end up with a form action that has the exact same url as the original request, but instead of https:// it's being generated as http://, resulting in mixed content errors. What am I missing? Under what circumstances would request.original_url return the wrong scheme? I'm running Ruby on Rails using Unicorn and nginx. | Why does ActionDispatch::Request.original_url return the wrong scheme?
Looks like you can use NGINX as the web server instead of Apache. | I see the latest versions of MAMP include NGINX 1.6. Can NGINX be used instead of apache, or is it just being used to serve cached content? If possible I'd rather use NGINX directives instead of .htaccess. | Using MAMP with NGINX
I'd rather use return 301 for redirections and use rewrite only if I want to display something like a nice url. Please try the following:

location ~ ^/admin/(.*) {
return 301 /another/subdirectory/$1 ;
} | Let's say that I have a url like this:

http://www.example.com/admin/admin.php?fail=1

how can I rewrite the url to be

http://www.example.com/another/subdirectory/admin.php?fail=1

Thank you. Update: this is what I've tried so far, but it will not redirect admin.php?fail=1:

location /admin/ {
rewrite ^/admin/(.*)$
/another/subdirectory/$1 redirect;
} | Nginx redirect all contents of a subdirectory to another subdirectory |
It appears that PHP is trying to write session data to disk in a directory that's not actually writable, namely /var/lib/php/session. Thanks to Michael Hampton. | Amazon Linux latest
PHP 5.4.19 (cli) (built: Sep 3 2013 23:19:23)
nginx version: nginx/1.2.9
installed PHP-FPM: PHP 5.4.19 (fpm-fcgi) (built: Sep 3 2013 23:22:01)
phpinfo() is working

pma.nginx.conf:

server {
listen 80;
server_name pma.my.server;
root /usr/share/phpmyadmin;
index index.php;
charset UTF-8;
access_log /var/log/myserver/pma.access.log;
error_log /var/log/myserver/pma.error.log;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass php-fpm;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT /usr/share/phpmyadmin/;
fastcgi_intercept_errors on;
    }
}

/var/log/myserver/pma.error.log:

[error] 21374#0: *13 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 1.0.255.202, server: pma.my.server, request: "GET /js/get_image.js.php?theme=pmahomme HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "pma.my.server", referrer: "http://pma.my.server/"

/var/log/php-fpm/error.log

NOTICE: fpm is running, pid 21598
NOTICE: ready to handle connections
WARNING: [pool www] child 21600 exited on signal 11 (SIGSEGV) after 12.862493 seconds from start
NOTICE: [pool www] child 21614 started
WARNING: [pool www] child 21602 exited on signal 11 (SIGSEGV) after 13.768522 seconds from start
NOTICE: [pool www] child 21617 started

/var/log/messages

kernel: [12499.658777] php-fpm[21603]: segfault at 0 ip 00000000005c5a39 sp 00007fffb44d6d60 error 4 in php-fpm[400000+31c000]

I don't have big experience with Nginx and FastCGI, so I need your help. Do you have any ideas? Thanks in advance | Nginx + php-fpm on Amazon Linux = exited on signal 11
I've deleted my previous answer and would like to suggest the solution I've provided below. I did a little search and found this solution to your problem - in the code where you use the auth_basic directive, make these changes:

satisfy any;
allow 10.0.0.1/8; # give access to all internal requests
deny all;
auth_basic "...."; // your auth_basic code goes here
auth_basic_user_file ...; // your auth_basic_user_file goes hereHow it works?satisfydirective implies thatanyorallfrom next coming access rules must be passed to give access to resource. You can find more details here:satisfyThis should fit your problem perfectly ;) | Coming from apache2 one feature I can not achieve; require authentication only to external access but free access to users on my local network.
Any ideas how to handle this scenario easily? Any help would be appreciated. | Nginx authentication except those on local network
Here is my setup that works on a subdomain.server {
listen 80;
server_name gitlab.example.com;
root /home/gitlab/gitlab/public;
# individual nginx logs for this gitlab vhost
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;
location / {
# serve static files from defined root folder;.
# @gitlab is a named location for the upstream fallback, see below
try_files $uri $uri/index.html $uri.html @gitlab;
}
# if a file, which is not found in the root folder is requested,
# then the proxy pass the request to the upsteam (gitlab unicorn)
location @gitlab {
proxy_redirect off;
# you need to change this to "https", if you set "ssl" directive to "on"
proxy_set_header X-FORWARDED_PROTO http;
proxy_set_header Host gitlab.example.com:80;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://gitlab;
}
} | I was using Apache2 before I installed GitLab on my VPS. I just want to make GitLab a subdomain of my site (git.example.com) and have my main site (www.example.com) look at /var/www/html/index.html. Here is my nginx.conf file as of now:

user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
tcp_nodelay on;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
upstream gitlab {
server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket; }
server {
listen 80;
server_name www.example.com;
root /home/gitlab/gitlab/public;
# individual nginx logs for this gitlab vhost
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;
location / {
# serve static files from defined root folder;.
# @gitlab is a named location for the upstream fallback, see below
try_files $uri $uri/index.html $uri.html @gitlab;
}
# if a file, which is not found in the root folder is requested,
# then the proxy pass the request to the upsteam (gitlab unicorn)
location @gitlab {
proxy_redirect off;
# you need to change this to "https", if you set "ssl" directive to "on"
proxy_set_header X-FORWARDED_PROTO http;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://gitlab;
}
}
} | How to configure GitLab as a subdomain in nginx.conf
These two settings solved the same problem for me:

oplocks = no
level2 oplocks = no | I am developing on an Nginx web server with a Samba share. Whenever I edit a CSS or JS file, I get this error when I reload my website (F5):

2012/04/18 11:15:38 [crit] 29607#0: *47708 open() "/var/www/[...].js"
failed (11: Resource temporarily unavailable), client: 192.168.[...],
server: [...], request: "GET [...].js HTTP/1.1", host: "[...]",
referrer: "http://[...]"I need to refresh another time and the errors disappear.I found herethat somebody has the same problem had me which can be caused byF_SETLEASE, but I couldn't find how to finally solve this problem.Any clue? | Nginx: "Resource temporarily unavailable" using a Samba share |
Although Heroku seem to be using Nginx for their reverse-proxy component, the thing about a platform-as-a-service stack like this is that no individual tenant has to (nor even gets to) configure or tune distinct elements of the stack for any given application. Requests in and out could be routed through any number of different elements to and from your Rails app, so it's their platform infrastructure (and not any particular tenant) that manages all of the internal configuration and behavior. You give up the fine-grained control for the other conveniences offered by a PaaS such as this. If you really need what you've described then I'd suggest you might need to look elsewhere for Rails app hosting. I'd be surprised if their answer would be anything else but no. | Reading this article on the nginx website, I'm interested in using the X-Accel-Redirect header in the way that Apache or Lighttpd users might use the X-Sendfile header to help with the serving of large files. Most tutorials I've found require you to modify the nginx config file. Can I modify the nginx config file on Heroku and if so, how? Secondly, I found this X-Accel-Redirect plugin on github which looks like it removes the need to manually alter the nginx config file - it seems to let you add the redirect location in your controller code - does anyone know if this works on heroku? I can't test it out until tonight. NB - I have emailed both Heroku support and goncalossilva to ask them the same questions but I have no idea when they will get back to me. I will post back with whatever it is they tell me though. | Is it possible to use modify nginx config file and use X-Accel-Redirect on Heroku?
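For comparison, on a server where you do control nginx, the X-Accel-Redirect pattern is just an internal location. A sketch with placeholder paths:

```nginx
# only reachable via an X-Accel-Redirect header set by the app,
# e.g. the app responds with "X-Accel-Redirect: /protected/big_file.zip"
location /protected/ {
    internal;
    alias /var/www/private/;
}
```

The application authorizes the request, then hands the actual file transfer off to nginx by emitting that header instead of streaming the file itself.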
They are totally different. In the first proxy_pass statement you have included a URI parameter with a value of /. In the second you haven't. When you give proxy_pass a URI parameter (within a prefix location), it transforms the requested URI similarly to the alias function, whereby the value of the location directive is substituted for the value of the URI parameter. For example /myapi/foo becomes /foo before being passed upstream. If you do not provide proxy_pass with a URI parameter, no transformation takes place, and the request /myapi/foo is passed upstream unchanged. See this document for details. | Trailing slash in nginx has been giving me some sleepless nights lately. Requesting some help with this question: strange trailing slash behavior in proxy_pass. So why would this work:

location /myapi/ {
proxy_pass http://node_server8/;
}

and this won't:

location /myapi/ {
proxy_pass http://node_server8;
}

Notice the missing trailing slash at the end of http://node_server8 in the second code block. This is especially strange as I have a few other configurations where I don't have a trailing slash on the backend and all works fine. | A little confused about trailing slash behavior in nginx
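The two behaviours side by side, for a request to /myapi/foo (the blocks are alternatives - they can't both live in the same server block):

```nginx
# with a URI part ("/"): the location prefix is stripped
location /myapi/ {
    proxy_pass http://node_server8/;   # upstream receives /foo
}

# without a URI part: the request URI is passed unchanged
location /myapi/ {
    proxy_pass http://node_server8;    # upstream receives /myapi/foo
}
```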
The problem was with my Nginx Docker setup/configuration: I am using nginx:alpine, which has the configuration files at /etc/nginx/conf.d/. There, default.conf defines the default configuration of Nginx. So, I had to remove default.conf and copy my configuration there instead. In the Dockerfile:

COPY nginx.conf /etc/nginx/conf.d/nginx.conf
RUN rm /etc/nginx/conf.d/default.confOf course, I also had to define the standard route innginx.confthen:server {
location / {
root /usr/share/nginx/html;
}
location /health {
return 200 'alive';
add_header Content-Type text/plain;
}
} | I'm new to Nginx, which I'm running in a Docker container to serve a simple website. I want to add a /health endpoint that simply returns status 200 + some arbitrary content. I copied and adjusted the standard nginx.conf from /etc/nginx/ by adding

server {
location /health {
return 200 "alive";
}
}

at the bottom inside the http block. But when I run the Docker container and try to access localhost/health, I just get no such file or directory. Accessing the website at localhost works fine. I also tried copying other code blocks, e.g., this one: https://gist.github.com/dhrrgn/8650077 But then I get conflicting server name "" on 0.0.0.0:80, ignored nginx: [warn] conflicting server name "" on 0.0.0.0:80, ignored. Am I placing the location at a wrong place inside nginx.conf? Do I need some special server configuration? What's the problem? | Nginx status endpoint running inside Docker
Thanks for all your inputs.
I have identified the problem and resolved it. The problem is with the network file system: the php script takes more time to read it because the uploaded assets' total size is in GBs, and the wordpress framework's built-in filter does a recursive call to calculate the size. I have used the Debug Bar slow action and filter plugin to identify which action / filter is taking more time. Solution: in the admin panel of the multisite, there is a setting which, if you enable it, will skip calculating the size of the disk - now the site is very fast. But you will lose the functionality of restricting users from uploading: e.g. if I define 2 GB for each site, then that functionality to restrict users to the disk quota of 2 GB will be disabled. Is there a way to tune that stuff? Kindly provide your valuable suggestions. | Recommendation to improve wp-admin panel performance: Please provide suggestions and a way to identify the bottlenecks in a wordpress admin panel issue. It is a multisite and it has 3rd party plugins enabled. How to identify a plugin conflict and which plugin is causing perf/memory issues? All these (php-fpm, nginx) are running in a Docker container.

Issues observed:
During login
During Post create / new page loading | Recommendation to improve wp-admin panel performance
Try to add the application/javascript content type:

gzip_types
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/json
application/xml
application/rss+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
    image/svg+xml;

I took the values from this H5BP conf: | I'm using NGINX as a reverse proxy in front of a Node.js app. The basic proxy works perfectly fine and I'm able to compress assets on the Node server with the compression middleware. To test if it's possible to delegate the compression task to NGINX, I've disabled the middleware and now I'm trying to gzip with NGINX with the following configuration:

worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 300;
server {
listen 80;
## gzip config
gzip on;
gzip_min_length 1000;
gzip_comp_level 5;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain
text/css
text/javascript
image/gif
image/png
image/jpeg
image/svg+xml
image/x-icon;
location / {
proxy_pass http://app:3000/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
}With this configuration, NGINX doesn't compress the assets. I've tried declaring these in the location context with different options, but none of them seems to do the trick. I couldn't find relevant resources on this, so I'm questioning whether it can be done this way at all. Important points: 1- Node and NGINX are in different containers, so I'm not serving the static assets with NGINX; I'm just proxying to the node server which is serving these files. All I'm trying to achieve is to offload the node server by getting NGINX to do the gzipping. 2- I'm testing all the responses with "Accept-Encoding: gzip" enabled. | Compressing assets with NGINX in reverse proxy mode
It's pretty simple thankfully. Nginx's modules (proxy, fastcgi, uwsgi etc) all have the ability to inform a request not to use the cache.location ~ ^/api {
root /var/www/project/web/app.php;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/?.*)$;
include fastcgi_params;
# Don't cache anything by default
set $no_cache 1;
# Cache GET requests
if ($request_method = GET)
{
set $no_cache 0;
}
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
fastcgi_cache fcgi;
fastcgi_cache_valid 200 5m;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}As per Richard Smith's suggestion, a more elegant solution using the map directive is below: map $request_method $api_cache_bypass {
default 1;
GET 0;
}
location ~ ^/api {
root /var/www/project/web/app.php;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/?.*)$;
include fastcgi_params;
fastcgi_cache_bypass $api_cache_bypass;
fastcgi_no_cache $api_cache_bypass;
fastcgi_cache fcgi;
fastcgi_cache_valid 200 5m;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}The additions to the location are essentially telling Nginx to use or ignore the cache depending on the verb. It sets $no_cache to 1, which will bypass the cache for all requests, except where the method is GET, when it is set to 0, which instructs Nginx to use the cache (if available). | I asked this question on serverfault, but nobody answered. Hope stackoverflow people know Nginx better :) I want to handle all [GET] requests to /api from the cache and handle all other requests as in the last location block (without cache). All requests to /api with the methods PUT, POST, DELETE must also not use the cache. I saw the similar question here, but still cannot understand how to use it in my case. Thanks in advance. My config: location / {
root /var/www/project/web;
# try to serve file directly, fallback to app.php
try_files $uri /app.php$is_args$args;
}
location ~ ^/api {
root /var/www/project/web/app.php;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/?.*)$;
include fastcgi_params;
fastcgi_cache fcgi;
fastcgi_cache_valid 200 5m;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
location ~ ^/(app|app_dev|config)\.php(/|$) {
root /var/www/project/web;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/?.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
} | Nginx: enable/disable caching depending on HTTP method |
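To make the verb-based logic from the accepted answer above easy to eyeball, here is a tiny Python model of the map block. The function name and dict layout are my own, purely illustrative; nginx itself evaluates the map at request time:

```python
def cache_flags(method: str) -> dict:
    """Model the map block from the answer: GET may use/populate the
    cache, every other verb bypasses it and is never stored."""
    bypass = 0 if method == "GET" else 1
    return {"fastcgi_cache_bypass": bypass, "fastcgi_no_cache": bypass}

# GET -> both flags 0 (cache used), all other verbs -> both flags 1.
for verb in ("GET", "POST", "PUT", "DELETE"):
    print(verb, cache_flags(verb))
```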
If you are on OSX, you are probably using a VirtualBox VM for your docker environment. Make sure you have forwarded your port 32769 to your actual host (the Mac), in order for that port to be visible from localhost. This is valid for the old boot2docker, or the new docker machine. VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port32769,udp,,32769,,32769" (controlvm if the VM is running, modifyvm if the VM is stopped) (replace "boot2docker-vm" by the name of your VM: see docker-machine ls) I would recommend not using -P, but a static port mapping -p xxx:80 -p yyy:443. That way, you can do that port forwarding once, using fixed values. Of course, you can access the VM directly through docker-machine ip vmname: curl http://$(docker-machine ip vmname):32769 | I have a running nginx container: # docker run --name mynginx1 -P -d nginx; And got its PORT info by docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp Then I could get a response from within the container (id: c30991a04b2f): docker exec -i -t c3099 bash, curl http://localhost => which returns the default index.html page content, it works. However, when I curl http://localhost:32769 outside of the container, I get this: curl: (7) failed to connect to localhost port 32769: Connection refused. I am running on a Mac with docker version 1.9.0; nginx latest. Does anyone know what causes this? Any help? Thank you. | docker nginx container not receiving request from outside, connection refused
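Following up on the answer above: once a forwarding rule (or a static -p mapping) is in place, a plain TCP connect from the host tells you whether the port is actually reachable. A minimal sketch using only Python's standard library; the helper name is my own:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After the forwarding rule is added, port_open("localhost", 32769)
# should become True; "connection refused" shows up here as False.
```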
Websockets start their life with an HTTP upgrade handshake. Once the handshake is successfully completed you get back a long-running bidirectional websocket connection. If you use Nginx as a proxy for websockets then you can also use "X-Forwarded-For", but only on the handshake. See for example this simple configuration:
#
# Simple forwarding of unencrypted HTTP and WebSocket to a different host
# (you can even use a different host instead of localhost:8080)
server {
listen 80;
# host name to respond to
server_name ws.example.com;
location / {
# switch off logging
access_log off;
# redirect all HTTP traffic to localhost:8080
proxy_pass http://localhost:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support (nginx 1.4)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}... and some references on this page. You configure what Nginx should send along in the upgrade request (the info you use to identify the client), and it will be your backend server's job to use the information from the handshake to identify the client and then associate the websocket connection with your client. Based on that association, any message that comes in on that websocket connection belongs to the previously identified client. | Is there any way to pass client identity to Nginx (to get a sticky session) when using WebSockets? Something similar to the "X-Forwarded-For" header for HTTP? | Nginx: What is the X-Forwarded-For alternative for WebSockets?
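To underline why the answer above stresses the handshake: the upgrade is ordinary HTTP, and the server proves it understood it by answering the client's Sec-WebSocket-Key with a derived Sec-WebSocket-Accept value. A short Python sketch of that derivation; the fixed GUID and the sample key are the ones given in RFC 6455:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the websocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```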
Add this in your ES config to ensure it only binds to localhost: network.host: 127.0.0.1
http.host: 127.0.0.1 Then ES is only accessible from localhost and not the world. Make sure this is really the case with the tools of your OS, e.g. on unix: $ netstat -an | grep -i 9200
tcp4 0 0 127.0.0.1.9200 *.* LISTEN In any case I would lock down the machine using the OS firewall to really only allow the ports you want, and not rely only on proper binding. Why is this important? Because ES also runs its cluster communication on another port (9300) and evildoers might just connect there. | I'm trying to restrict direct access to elasticsearch on port 9200, but allow Nginx to proxy-pass to it. This is my config at the moment: server {
listen 80;
return 301;
}
server {
listen *:5001;
location / {
auth_basic "Restricted";
auth_basic_user_file /var/data/nginx-elastic/.htpasswd;
proxy_pass http://127.0.0.1:9200;
proxy_read_timeout 90;
}
}This almost works as I want it to. I can access my server on port 5001 to hit elasticsearch and must enter credentials as expected. However, I'm still able to hit :9200 and avoid the HTTP authentication, which defeats the point. How can I prevent access to this port, without restricting nginx? I've tried this: server {
listen *:9200;
return 404;
}But I get: nginx: [emerg] bind() to 0.0.0.0:9200 failed (98: Address already in use), as it conflicts with elasticsearch. There must be a way to do this! But I can't think of it. EDIT: I've edited based on a comment and summarised the question: I want to lock down < serverip >:9200, and basically only allow access through port 5001 (which is behind HTTP Auth). 5001 should proxy to 127.0.0.1:9200 so that elasticsearch is accessible only through 5001. All other access should 404 (or 301, etc). | Restricting direct access to port, but allow port forwarding in Nginx