I think joelhardi's solution is superior to the following. However, in my own application, I like to keep the blog on a separate VPS from the Rails site (to isolate memory issues). To make the user see the same URL, you use the same proxy trick that you normally use for proxying to a Mongrel cluster, except you proxy to port 80 (or whatever) on another box. Easy peasy. To the user it is as transparent as proxying to Mongrel -- they only "see" the Nginx responding on port 80 at your domain.

```nginx
upstream myBlogVPS {
    server 127.0.0.2:80;  # fix me to point to your blog VPS
}

server {
    listen 80;

    # You'll have plenty of things for Rails compatibility here.
    # Make sure you don't accidentally step on this with the Rails config!
    location /blog {
        proxy_pass http://myBlogVPS;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

You can use this trick to have Rails play along with ANY server technology you want, incidentally. Proxy directly to the appropriate server/port, and Nginx will hide it from the outside world. Additionally, since the URLs will all refer to the same domain, you can seamlessly integrate a PHP-based blog, a Python-based tracking system, and a Rails app -- as long as you write your URLs correctly.
I've got a standard Rails app with Nginx and Mongrel running at http://mydomain.com. I need to run a WordPress blog at http://mydomain.com/blog. My preference would be to host the blog in Apache running on either the same server or a separate box, but I don't want the user to see a different server in the URL. Is that possible, and if not, what would you recommend to accomplish the goal?
What's the best way to run WordPress on the same domain as a Rails application?
Check out this question and the Nginx manual. Try changing your blog location to:

```nginx
location ^~ /blog/ {
    root /home/me/wordpress;
    index index.php index.html index.htm;
}
```
I have my main website and WordPress in different directories on my server, on which I use nginx as the web server. The main website is in /home/me/www and WordPress is in /home/me/wordpress. I need to have them in separate directories this way for a particular reason. How do I specify this in the nginx configuration file? I currently have the following, and it does not work:

```nginx
location / {
    root /home/me/www;
    index index.php index.html index.htm;
}

location /blog {
    root /home/me/wordpress;
    index index.php index.html index.htm;
}

location ~ \.php$ {
    set $php_root /home/me/www;
    if ($request_uri ~ /blog) {
        set $php_root /home/me/wordpress;
    }
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;
}
```

It currently returns HTTP 404 when I try to access http://mydomain/blog.
Serving php files from different locations in nginx
I am not an expert in this area, but I have deployed Django using uWSGI on Nginx with this method. A socket file represents a Unix socket. In this case, uWSGI creates it, and it is through this socket that uWSGI and Nginx talk to each other. The "Concept" section of the link you provided talks about it:

> uWSGI is a WSGI implementation. In this tutorial we will set up uWSGI so that it creates a Unix socket, and serves responses to the web server via the WSGI protocol. At the end, our complete stack of components will look like this:
> the web client <-> the web server <-> the socket <-> uwsgi <-> Django

The first part of the tutorial talks about using a TCP port socket to achieve the same result. If you have already followed those steps, you should skip the Unix socket part. However, it also mentions that Unix sockets are better due to less overhead.
I followed this doc, and almost everything went well until "mysite.sock" occurred. It appears like this:

```nginx
server unix:///path/to/your/mysite/mysite.sock; # for a file socket
# server 127.0.0.1:8001; # for a web port socket (we'll use this first)
```

The doc did not mention anything about "mysite.sock", and after one day's searching, I found nothing.
In Django + Nginx + WSGI, what is a "mysite.sock"?
Since Python 2 is no longer supported, you just need to ask for the Python 3 package. So

```sh
sudo apt-get install python3-certbot-nginx
```

should solve your problem.
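Once the package is installed, the plugin is typically invoked like this (a sketch; example.com is a placeholder domain):

```sh
# Obtain a certificate and let Certbot edit the nginx config for the domain
sudo certbot --nginx -d example.com
```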
When I try to install Certbot for Nginx and run

```sh
sudo apt-get install python-certbot-nginx
```

I get:

E: Package 'python-certbot-nginx' has no installation candidate

How do I install Certbot for Nginx?
E: Package 'python-certbot-nginx' has no installation candidate
In order to use regular expressions for matching locations, you need to prefix the expression with either ~ or ~*:

```nginx
if ($server_port = 80) {
    location ~ (en|fr)/shop {
        rewrite ^ https://$host$request_uri permanent;
    }
}
```

From the documentation:

> To use regular expressions, you must use a prefix:
> "~" for case-sensitive matching
> "~*" for case-insensitive matching

However, since nginx doesn't allow location blocks to be nested inside of if blocks, try the following configuration instead:

```nginx
if ($server_port = 80) {
    rewrite ^/(en|fr)/shop https://$host$request_uri permanent;
}
```
I have http:// and https:// on the same host, like the following:

```nginx
server {
    listen 80;
    listen 443 ssl;
    ...
}
```

What I need to do is redirect users who access my shop to https://. The problem is I have many languages:

https://example.com/en/shop
https://example.com/fr/shop
etc.

I tried this and it didn't work (nginx: configuration file /etc/nginx/nginx.conf test failed):

```nginx
if ($server_port = 80) {
    location (en|fr)/shop {
        rewrite ^ https://$host$request_uri permanent;
    }
}
```
Redirecting to SSL using Nginx
Make sure nginx is sending `fastcgi_param HTTPS on;` for connections on port 443.
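A minimal sketch of where that parameter goes, assuming a FastCGI location block like the one in the question below (port and paths are illustrative):

```nginx
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key as appropriate ...

    location / {
        fastcgi_pass 127.0.0.1:8801;
        # Tell the FastCGI backend the request arrived over TLS, so
        # Django's request.is_secure() returns True.
        fastcgi_param HTTPS on;
        # ... the remaining fastcgi_param lines go here ...
    }
}
```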
I'm running FastCGI behind Nginx and need to detect when a URL is accessed via HTTPS. However, my Django web application always reports that the connection is HTTP (request.is_secure() == False). SSL is set up correctly, and I've verified my https:// URLs are secure with an SSL checker. How can I get Django to correctly detect when the request is from an HTTPS URL?

My Nginx settings are:

```nginx
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        listen 443 default ssl;
        ssl_certificate /home/webapp/ssl.crt;
        ssl_certificate_key /home/webapp/ssl.key;
        server_name myapp.com;
        access_log /home/webapp/access.log;
        error_log /home/webapp/error.log;
        root /home/mywebapp;

        location / {
            # host and port to fastcgi server
            fastcgi_pass 127.0.0.1:8801;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
        }
    }
}
```

I start the Django FastCGI process with:

```sh
python /home/webapp/manage.py runfcgi method=threaded host=127.0.0.1 port=8801 pidfile=/home/webapp/fastcgi.pid
```
FastCGI application behind NGINX is unable to detect that HTTPS secure connection is used
As you pointed out, it looks like nginx is proxying your HTTPS request to ipWhichEndsWith.249:8443, which is an HTTPS endpoint, using http as the protocol. You should add the following annotation to your Ingress:

LATEST (this annotation was added to replace the deprecated one as of 0.18.0; #2871 "Add support for AJP protocol"):

```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```

DEPRECATED (this annotation was deprecated in 0.18.0 and removed after the release of 0.20.0; #3203 "Remove annotations grpc-backend and secure-backend already deprecated"):

```yaml
nginx.ingress.kubernetes.io/secure-backends: "true"
```

This should make nginx forward your request to the pods over HTTPS.

Source: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-protocol
Docs: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
I did the nginx ingress controller tutorial from github and exposed the kubernetes dashboard:

kubernetes-dashboard NodePort 10.233.53.77 443:31925/TCP 20d

and created this ingress:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/ssl-backends: "kubernetes-dashboard"
    kubernetes.io/ingress.allow-http: "false"
  name: dashboard-ingress
  namespace: kube-system
spec:
  tls:
  - hosts:
    - serverdnsname
    secretName: kubernetes-dashboard-certs
  rules:
  - host: serverdnsname
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
```

ingress-nginx ingress-nginx NodePort 10.233.21.200 80:30827/TCP,443:32536/TCP 5h

https://serverdnsname:32536/dashboard

but the dashboard throws this error:

2018/01/18 14:42:51 http: TLS handshake error from ipWhichEndsWith.77:52686: tls: first record does not look like a TLS handshake

and the ingress controller logs:

2018/01/18 14:42:51 [error] 864#864: *37 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 10.233.82.1, server: serverdnsname, request: "GET /dashboard HTTP/2.0", upstream: "http://ipWhichEndsWith.249:8443/dashboard", host: "serverdnsname:32536"
10.233.82.1 - [10.233.82.1] - - [18/Jan/2018:14:42:51 +0000] "GET /dashboard HTTP/2.0" 009 7 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36 OPR/49.0.2725.64" 25 0.001 [kube-system-kubernetes-dashboard-443] ipWhichEndsWith.249:8443 7 0.001 200

To my mind it is related to nginx redirecting to upstream: "http://ipWhichEndsWith.249:8443/dashboard". I tried updating the controller image version to 0.9.0-beta.19, which didn't help. Thank you for any help.
ingress configuration for dashboard
The proxy_next_upstream directive controls re-requesting from a group of upstream servers by proxy_pass when a request to one of them fails. It doesn't make sense without proxy_pass and an upstream block defined. You may use it if you proxy to multiple upstream servers, like this:

```nginx
upstream backends {
    server 192.2.0.1;
    server 192.2.0.2;
    ...
}

server {
    ...
    location / {
        proxy_pass http://backends;
        proxy_next_upstream error timeout http_404;
    }
}
```

If you want nginx to search for a file on disk and, if it's not found, proxy the request to another server, configure it using a try_files fallback instead:

```nginx
location / {
    root /path/to/root;
    try_files $uri @fallback;
}

location @fallback {
    proxy_pass http://...;
}
```

See http://nginx.org/r/try_files for more info about the try_files directive.
I want nginx to search my local host for the file first, and on a 404 error it should search server 1.1.1.1. I am able to fetch the file that is located on the local host, but not able to get it from server 1.1.1.1.

```nginx
server {
    listen 80;
    server_name localhost;
    access_log /var/log/nginx/access.log main;

    location /products/ {
        proxy_next_upstream http_404;
        root /var/foo;
    }
}

server {
    listen 80;
    server_name 1.1.1.1;

    location /products/ {
        #########
    }
}
```

I guess proxy_next_upstream is not switching to the server. Any help on this would be appreciated.
Nginx proxy_next_upstream doesn't work
> I have two versions of PHP in the opt/remi folder (php56 and php72). How do I set the default version to PHP 7.2?

SCLs are designed for parallel installation, so they don't alter the default version in the base system. Once the collection is enabled, that version will be used:

```sh
$ scl enable php72 bash
$ php -v
PHP 7.2.8 (cli) (built: Jul 17 2018 05:35:43) ( NTS )
```

If you want 7.2 to be the default version (base system), you should install it according to the Wizard instructions for "Default / single version" (and keep 5.6 as a secondary version).
I have two versions of PHP in the opt/remi folder, php56 and php72, but when I run php -v on the command line it shows:

```
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
    with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
    with Xdebug v2.4.1, Copyright (c) 2002-2016, by Derick Rethans
```

How do I set the default version to PHP 7.2?
Set default version of PHP in CentOS 7
With two things in mind:

- Unicorn is listening on 8080 (you can check this with sudo netstat -pant | grep unicorn)
- Your document root is /opt/gitlab/embedded/service/gitlab-rails/public

you can create a new vhost for GitLab in Apache with the following configuration:

```apache
<VirtualHost *:80>
  ServerName gitlab.example.com
  ServerSignature Off

  ProxyPreserveHost On

  <Location />
    Order deny,allow
    Allow from all

    ProxyPassReverse http://127.0.0.1:8080
    ProxyPassReverse http://gitlab.example.com/
  </Location>

  RewriteEngine on
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  RewriteRule .* http://127.0.0.1:8080%{REQUEST_URI} [P,QSA]

  # needed for downloading attachments
  DocumentRoot /opt/gitlab/embedded/service/gitlab-rails/public
</VirtualHost>
```
I have installed GitLab 7.2.1 with the .deb package from GitLab.org for Debian 7 on a virtual server where I have root access. On this virtual server I have already installed Apache, version 2.2.22, and I don't want to use Nginx for GitLab. Now I have no idea where the public folders of GitLab are, what I have to do, or what I have to pay attention to. So my question is: how do I have to configure my vhost for Apache, and what else do I have to do, so that I can use a subdomain like "gitlab.example.com" on my Apache web server?
GitLab 7.2.1 with Apache Server instead of Nginx
I had the same problem. I fixed it by simply setting the right permissions on my CSS and JS files and folders. Be careful about setting permissions! But for a file to be readable on the web, it has to be readable by the user running the server process.

```sh
chmod -R +rx css
chmod -R +rx js
```

This gives read and execute permissions; the -R is for recursive. Only do this for files you want readable by the world!
I have set up nginx 0.7.67 on Ubuntu 10.10 along with php-cli. I'm trying to get my front-controller-based PHP framework to run, but all pages except index.php give a 403 error. For example:

http://mysite.com/styles/style.css - 403 Forbidden
http://mysite.com/scripts/script.css - 403 Forbidden
http://mysite.com/index.php - works

My /etc/nginx/sites-enabled/default is as follows:

```nginx
server {
    listen 80;
    server_name mysite.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    index index.php index.html;
    root /full/path/to/public_html;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
        expires max;
    }

    location ~ index.php {
        include /etc/nginx/fastcgi_params;
        keepalive_timeout 0;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

Any suggestions on how to fix the above?

PS: This is the entry from the error log:

2010/10/14 19:56:15 [error] 3284#0: *1 open() "/full/path/to/public_html/styles/style.css" failed (13: Permission denied), client: 127.0.0.2, server: quickstart.local, request: "GET /styles/style.css HTTP/1.1", host: "mysite"
Nginx gives a 403 error for CSS/JS files
You can put the gzip configuration anywhere, but if you want to apply it to all websites/files it is best to put it in the http section; it will then be the default for all server and location blocks. I would also "shorten"/change your config to the following:

```nginx
http {
    gzip on;
    gzip_min_length 500;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css text/javascript text/xml text/plain text/x-component
               application/javascript application/json application/xml application/rss+xml
               font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
    gzip_vary on;
    gzip_disable "msie6";

    # ... here come your server blocks / the rest of your config
}
```

I use that configuration and it works fine for me. You can also test it in your browser first (for example with Firebug) before testing it with external services. Using gzip_static only makes sense if you actually generate gzipped files for Nginx (as filename + .gz), so it has nothing to do with enabling gzip and should only be a possible second step.
I want to enable gzip compression on my nginx server. The nginx.conf file is here:

```nginx
http {
    # Enable Gzip
    server {
        location ~* \.(?:ico|woff|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

        location /api {
            try_files $uri $uri/ /api/index.php;
        }

        location / { ##merge
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_proxied any;
            gzip_types
                # text/html is always compressed by HttpGzipModule
                text/css text/javascript text/xml text/plain text/x-component
                application/javascript application/json application/xml application/rss+xml
                font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
            gzip_static on;
            gzip_proxied expired no-cache no-store private auth;
            gzip_disable "MSIE [1-6]\.";
            gzip_vary on;
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
            add_header "" "";
        }
        location ~ "^/ngx_pagespeed_static/" { }
        location ~ "^/ngx_pagespeed_beacon" { }
    }
}
```

Unfortunately gzip compression is not working; Google PageSpeed and GTmetrix do not detect it. Where can I place the gzip conf? In the http{}, server{}, or location{} block? I have already tried the http and location blocks.
Nginx enable gzip
According to the nginx documentation, the error_log directive supports stderr as its argument. The following configuration should therefore log error messages to stderr:

```nginx
http {
    error_log stderr;
    ...
}
```

Unfortunately, access_log doesn't support stdout as its argument. However, it should be possible to set it to syslog (see the documentation) and have systemd include the syslog calls in its journal.
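A minimal sketch of that syslog variant (the tag value is an assumption; nginx supports syslog logging since 1.7.1):

```nginx
http {
    # Error messages go to stderr, which systemd captures.
    error_log stderr;

    # Access log entries go to the local syslog daemon; with syslog
    # routed into the journal they show up under the given tag,
    # e.g. journalctl -t nginx_access
    access_log syslog:server=unix:/dev/log,tag=nginx_access combined;
    ...
}
```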
I want to redirect nginx access logs to stdout to be able to analyze them through journalctl (systemd). There is the same question with an approved answer ("Have nginx access_log and error_log log to STDOUT and STDERR of master process"), but it does not work for me. With /dev/stderr I get open() "/dev/stderr" failed (6: No such device or address). With /dev/stdout I get no access logs in journalctl -u nginx.

nginx.conf:

```nginx
daemon off;
http {
    access_log /dev/stdout;
    error_log /dev/stdout;
    ...
}
...
```

sitename.conf:

```nginx
server {
    server_name sitename.com;
    root /home/username/sitename.com;
    location / {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        access_log on;
    }
}
```

nginx.service:

```ini
[Service]
Type=forking
PIDFile=/run/nginx.pid
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nginx
ExecStartPre=/usr/sbin/nginx -t -q -g 'master_process on;'
ExecStart=/usr/sbin/nginx -g 'master_process on;'
ExecReload=/usr/sbin/nginx -g 'master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
```

I've tried my best to work around this by changing every possible parameter in the code above and with different nginx versions (1.2, 1.6), but without any success. I'm really, really interested in how to make this work, so I raise this question again in a different thread, as I consider the previous answer wrong, speculative, or environment-specific. journalctl -u nginx contains only lines like "Feb 08 13:05:23 Username systemd[1]: Started A high performance web server and a reverse proxy server." and no sign of access logs :(
Nginx log to stderr
You may be able to use a regexp to modify it, but a better way is to use a proxy redirect:

```nginx
proxy_redirect http://foo.bar/baz/ /;
```

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

Any Location headers for foo.bar/baz/ will go to /. If you just want to redirect /baz/api, that'll work too. If any redirects also add the port, you'll need to add http://foo.bar:8080/baz/ as well (as a separate redirect). Hope this helps!
I have an nginx proxy_pass setup to pass every request on /api through to a backend Tomcat REST service. This service in some cases returns a Location header which varies according to the type of request, e.g., Location: http://foo.bar/baz/api/search/1234567 -- the baz part is due to it being hosted on Tomcat.

My current configuration rewrites the foo.bar host name correctly, but leaves the baz part intact. I'd like to strip this, but the proxy_pass options seem to be limited to clearing or setting a new value for the header. Is there a way to modify headers dynamically before they are passed on to the client, using a regex substitution, for instance? This is my nginx configuration:

```nginx
location /api {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_max_temp_file_size 0;

    client_max_body_size 10m;
    client_body_buffer_size 128k;

    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    proxy_redirect off;
    proxy_pass http://foo.bar:8080/baz/api;
}
```
Modifying a Location header with nginx proxy_pass
Give this a try...

```nginx
server {
    listen 80;
    server_name dev.int.com;
    access_log off;

    location / {
        proxy_pass http://IP:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://IP:8080/jira /;
        proxy_connect_timeout 300;
    }

    location ~ ^/stash {
        proxy_pass http://IP:7990;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://IP:7990/ /stash;
        proxy_connect_timeout 300;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/nginx/html;
    }
}
```
I'm trying to configure Nginx to proxy stuff on a subdomain, dev.int.com. I want dev.int.com to be proxied to IP:8080, and dev.int.com/stash to be proxied to IP:7990. Here's my current config file:

```nginx
server {
    listen 80;
    server_name dev.int.com;
    access_log off;

    location / {
        proxy_pass http://IP:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://IP:8080/jira /;
        proxy_connect_timeout 300;

        location ~ ^/stash {
            proxy_pass http://IP:7990;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-for $remote_addr;
            port_in_redirect off;
            proxy_redirect http://IP:7990/ /stash;
            proxy_connect_timeout 300;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/nginx/html;
    }
}
```

However, /stash redirects are going to /. What am I doing wrong?
Configure Nginx with proxy_pass
Only when you start your app with python server.py is the if __name__ == '__main__': block hit, which is where you're registering your database with your app. You'll need to move that line, db.init_app(app), outside that block.
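A minimal sketch of the corrected server.py (imports as in the question below):

```python
from flask.ext.security import Security

from database import db
from application import app
from models import Studio, user_datastore

security = Security(app, user_datastore)

# Register the extension at import time, so it is set up regardless of
# whether the app starts via `python server.py` or `gunicorn server:app`.
db.init_app(app)

if __name__ == '__main__':
    app.run()
```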
I have an application that works in development, but when I try to run it with Gunicorn it gives an error that the "sqlalchemy extension was not registered". From what I've read, it seems that I need to call app.app_context() somewhere, but I'm not sure where. How do I fix this error?

```sh
# run in development, works
python server.py

# try to run with gunicorn, fails
gunicorn --bind localhost:8000 server:app

AssertionError: The sqlalchemy extension was not registered to the current application. Please make sure to call init_app() first.
```

server.py:

```python
from flask.ext.security import Security
from database import db
from application import app
from models import Studio, user_datastore

security = Security(app, user_datastore)

if __name__ == '__main__':
    # with app.app_context(): ??
    db.init_app(app)
    app.run()
```

application.py:

```python
from flask import Flask

app = Flask(__name__)
app.config.from_object('config.ProductionConfig')
```

database.py:

```python
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy()
```
SQLAlchemy extension isn't registered when running app with Gunicorn
The bcrypt module is platform-dependent (as is fibers), so you need to remove the package after decompressing the bundle on your server:

```sh
rm -R path/to/bcrypt
```

then install it again:

```sh
npm install bcrypt
```
I wasn't sure if this should be a Stack Overflow or Server Fault question. I installed Meteor's accounts-password module and it worked locally, but it broke my app when deployed to the server. Here's the scoop:

- I'm running the latest Meteor 1.0.5 locally on OS X (OS just fully updated)
- Building with --architecture os.linux.x86_64
- Deploying to Ubuntu 14.04.2 LTS x86_64 (just updated)
- Running nodejs v0.12.1 (freshly built)
- Serving the app with nginx v1.4.0

And still getting:

```
/home/secrethistory/bundle/programs/server/node_modules/fibers/future.js:245
        throw(ex);
        ^
Error: Module did not self-register.
    at Error (native)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at bindings (/home/secrethistory/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt/node_modules/bindings/bindings.js:74:15)
    at Object.<anonymous> (/home/secrethistory/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt/bcrypt.js:3:35)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
```

Any tips or places to look next?
bcrypt is breaking my meteor application, how do I fix it?
Yes, it's possible, assuming you can distinguish the normal HTTP requests from the socket ones. The simplest solution is to match the socket URI with location; for example, all requests to /ws will be redirected to localhost:8888, and any other URL to localhost:8889. Here is an example configuration:

```nginx
server {
    server_name _;

    location /ws {
        proxy_pass http://localhost:8888;

        # this magic is needed for WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass http://localhost:8889;
    }
}
```

You should also remember to bind the websocket server to localhost:8888 and not to 0.0.0.0:8888. This step is not strictly needed, but with it the original port is not exposed!
I am using Nginx as a web host and proxy for a websocket running on the same device, listening on port 8888. I'm trying to find a way to have nginx listen on 80 and forward the websocket requests to the internal port, without exposing a new port to the outside. Is this even possible?

UPDATE: This is my current configuration:

```nginx
error_log /var/log/nginx/error_log.log warn;

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server localhost:8888;
}

server {
    listen 80;
    #listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/EncoderAdmin;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location /ws {
        proxy_pass http://localhost:8888;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

When I try to connect to it with ws://[address]/ws I get:

WebSocket connection to 'ws://[address]/ws' failed: Error during WebSocket handshake: Unexpected response code: 400
Nginx forward websocket from 80 to websocket port
Make a separate location block for config.js above the others:

```nginx
location ~ config\.js {
    alias xyz;
    expires off;
}

# then the static location block, etc.
```
```nginx
location /static {
    alias /home/ubuntu/Documents/zibann/momsite/momsite/static; # your Django project's static files - amend as required

    if ($uri ~* ".*config.js") {
        expires off;
    }

    if ($uri ~* ".*\.(js|css|png|jpg|jpeg|gif|swf|svg)" ) {
        access_log off;
        expires 365d;
        add_header Cache-Control public;
    }
}
```

I was hoping config.js would not get cached, but it does. How can I exclude one file from being cached?
Nginx, turn off cache for a specific file
Reference (how nginx handles a request): http://nginx.org/en/docs/http/request_processing.html

> In this configuration nginx tests only the request's header field "Host" to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. ... the default server is the first one -- which is nginx's standard default behaviour.

Could you check the Host header of those bad requests? You can also create an explicit default server to catch all of these bad requests, and just log the request info (i.e., $http_host) into a different error log file for investigation:

```nginx
server {
    listen 80 default_server;
    server_name _;
    error_log /path/to/the/default_server_error.log;
    return 444;
}
```

[UPDATE] As you are doing nginx -s reload and you have so many domains in that nginx conf file, the following is possible. A reload works like this:

> starting new worker processes with a new configuration, graceful shutdown of old worker processes

So old workers and new workers can co-exist for a while. For example, when you add a new server block (with a new domain name) to your config file, during the reload the new workers will have the new domain and the old ones will not. When a request happens to be served by an old worker process, it will be treated as an unknown host and served by the default server.

You said the reload is done every 2 minutes. Could you run ps aux | grep nginx and check how long each worker has been running? If it's much more than 2 minutes, the reloading may not be working as you expect.
I have around 1300 vhosts in one nginx conf file, all with the following layout (they are listed one after another in the vhost file). Now my problem is that sometimes my browser redirects site2 to site1, while the domain names don't even match. It looks like nginx is always redirecting to the first site in the vhosts file. Does somebody know what this problem can be?

```nginx
server {
    listen 80;
    server_name site1.com;
    rewrite ^(.*) http://www.site1.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site1.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ .(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

server {
    listen 80;
    server_name site2.com;
    rewrite ^(.*) http://www.site2.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site2.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ .(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```

EDIT: Maybe another thing to mention is that I reload all these vhosts every 2 minutes with nginx -s reload. On the first tests it looks like the redirection only happens when reloading... Going to do some more tests, but this could be helpful.
Nginx redirecting to wrong vhost
Your issue is that you were setting the HTTP Host header that will be sent to AWS API Gateway to the wrong value. API Gateway needs the HTTP Host header to be set to its own host, e.g. to SOMETHING.execute-api.REGION.amazonaws.com. So you should have:

```nginx
proxy_set_header Host $proxy_host;
```

instead of:

```nginx
proxy_set_header Host $host;
```

In fact, you don't have to explicitly set the proxy Host header at all because, if it is not set, Nginx defaults it to $proxy_host. See the Nginx docs on this.
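A minimal sketch of the corrected location block (endpoint placeholders as in the question below):

```nginx
location / {
    # No proxy_set_header Host here: nginx then defaults the Host
    # header to $proxy_host, which is what API Gateway expects.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass https://SOMETHING.execute-api.REGION.amazonaws.com/dev/RESOURCE/METHOD;
    proxy_ssl_server_name on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_buffering off;
}
```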
I want to configure an Nginx reverse proxy server which will redirect all of the requests it gets over HTTP to my AWS API Gateway endpoint, which is HTTPS (it's a GET method). (If you want to know why: I have an AWS Lambda function which I want a 3rd-party vendor to call via API Gateway, but he currently has a bug with the SSL handshake to AWS, probably because of SNI. So I will give him this HTTP proxy server.)

I've tried something like this:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name MY_SERVER_NAME;

    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://SOMETHING.execute-api.REGION.amazonaws.com/dev/RESOURCE/METHOD;
        proxy_ssl_server_name on;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_buffering off;
    }
}
```

But currently I'm getting a 403 from CloudFront when I try to call http://MY_SERVER_NAME. I feel like I'm missing something in my SSL configuration at Nginx, but I'm not sure what.
Nginx proxy_pass to aws Api Gateway
Sure, there are open-source implementations which you can use and customize for your case (example). IMHO there are better implementations, which you can use as an "auth proxy" in front of your application. My favorite is keycloak-gatekeeper (you can use it with any OpenID IdP, not only with Keycloak), which can provide authentication, authorization, token encryption, refresh token implementation, a small footprint, ...
I have a basic Nginx docker image, acting as a reverse-proxy, that currently uses basic authentication sitting in front of my application server. I'm looking for a way to integrate it with our SSO solution in development that uses JWT, but all of the documentation says it requires Nginx+. So, is it possible to do JWT validation inside of open-sourced Nginx, or do I need the paid version?
Does Nginx open source support OpenID and JWT?
Problem 1: as for the connection dying once a minute, I realized it was nginx's timeout variable. I could either make our app ping once in a while or increase the timeout. I'm not sure if I should set it to 0; I decided to just ping once a minute and set the timeout to 90 seconds (keepalive_timeout).

Problem 2: the connectivity issues arose because I used the CloudFlare CDN. Disabling CloudFlare acceleration solved the problem. Alternatively, I could create a subdomain, set it as "unaccelerated", and use that for WS.
I'm trying to proxy WebSocket + HTTP traffic with nginx. I have read this: http://nginx.org/en/docs/http/websocket.html

My config looks like:

```nginx
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 80;
        server_name ourapp.com;

        location / {
            proxy_pass http://127.0.0.1:100;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
```

I have 2 problems:

1) The connection closes once a minute.

2) I want to run both HTTP and WS on the same port. The application works fine locally, but if I try to put HTTP and WS on the same port and set this nginx proxy, I get this:

WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200

Loading the app (HTTP) seems to work fine, but the WebSocket connection fails.
nginx and proxying WebSockets
A "return code" of -9indicatesthat the process was killed with SIGKILL. If you aren't doing that yourself, theOOM killeris a likely culprit.
I am deploying code on Elastic Beanstalk and it gives me this error. I was using the nginx proxy and an Elastic Load Balancer; I disabled both and then tried to deploy the code, which gives me the following error. I am unable to find any solution.

```
npm WARN deprecated [email protected]: use uuid module instead
Not using a reverse proxy
Running npm install: /opt/elasticbeanstalk/node-install/node-v6.9.1-linux-x64/bin/npm
Setting npm config jobs to 1
npm config jobs set to 1
Running npm with --production flag
Failed to run npm install. Snapshot logs for more details.
UTC 2017/01/03 11:47:22 cannot find application npm debug log at /tmp/deployment/application/npm-debug.log
Traceback (most recent call last):
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 695, in <module>
    main()
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 677, in main
    node_version_manager.run_npm_install(options.app_path)
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 136, in run_npm_install
    self.npm_install(bin_path, self.config_manager.get_container_config('app_staging_dir'))
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 180, in npm_install
    raise e
subprocess.CalledProcessError: Command '['/opt/elasticbeanstalk/node-install/node-v6.9.1-linux-x64/bin/npm', '--production', 'install']' returned non-zero exit status -9 (Executor::NonZeroExitStatus)
```
returned non-zero exit status -9
There is a good article about common pitfalls in the nginx wiki. First, I moved the root directive to the server level. Second, location is the best way to check URLs. So I rethought your requirements as "if the location consists of digits and the request is from Facebook, we have to rewrite the URL", and the result is:

```nginx
root /home/eshlox/projects/XXX/project/project/assets/dist;

location / {
    try_files $uri $uri/ /index.html;
}

location ~ "^/\d+$" {
    if ($http_user_agent ~* "facebookexternalhit") {
        rewrite (.+) /api/content/item$1?social=1;
    }
    try_files $uri $uri/ /index.html;
}
```

Also, there is almost no reason to have =404 after /index.html in the try_files directive.
What I want to do:

- Check if the request comes from Facebook
- Check if the URL is like domain.com/2
- If both conditions are true, show content from /api/content/item/$1?social=1
- If either condition is false, show the "normal page"

It is a single-page app. Before my changes, the configuration looked like this (and it worked):

```nginx
location / {
    root /home/eshlox/projects/XXX/project/project/assets/dist;
    try_files $uri $uri/ /index.html =404;
}
```

I've tried to use if statements:

```nginx
location / {
    set $social 1;
    if ($http_user_agent ~* "facebookexternalhit") {
        set $social UA;
    }
    if ($uri ~* "^/(\d+)$") {
        set $social "${social}URL";
    }
    if ($social = UAURL) {
        rewrite ^/(\d+)$ /api/content/item/$1?social=1;
    }
    root /home/eshlox/projects/XXX/project/project/assets/dist;
    try_files $uri $uri/ /index.html =404;
}
```

With this configuration, everything works as I expect only if both conditions are true or both are false. If one condition is true and the other is false (or vice versa), then nginx always returns status 404. I have found "IfIsEvil" on the nginx site, and I've tried to use mapping (can I use mapping in this case?), but I still can't resolve this problem. Any ideas? Best regards.
Nginx - multiple/nested IF statements
I know it's a pretty old thread, but it might help some people anyway. Basically it redirects any 404 error to index.php, but if the file exists (type file) it will set the right root. I did this from the top of my head; it might not work right away, and you have to put in the right path and fastcgi config. I also route everything back to index.php, as it should work like that with Zend_Framework.

```nginx
error_page 404 = /index.php;

location / {
    if (-f $request_filename) {
        root /var/www;
    }
}

location ~ \.php$ {
    fastcgi_pass unix:/tmp/php.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /var/www/index.php;
    include /etc/nginx/fastcgi_params;
}
```
The Zend Framework-based site I have been working on is now being migrated to its production server. This server turns out to be nginx (surprise!). Naturally, the site does not work correctly, as it was developed on Apache and relies on an htaccess file. My question is: does anyone have any experience with this? Any ideas on how to translate what the htaccess file does to an nginx.conf file? I'm researching this but am hoping someone already has experience with it. Thanks!

EDIT: This is the current htaccess:

```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ /index.php [NC,L]
```
Zend Framework on nginx
I resolved it; it was a configuration file issue. I added:

```nginx
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
```
I don't know why I get this error every time I try to open the page:

2013/04/06 17:52:19 [error] 5040#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8080"
Nginx connect() failed error
Try this in your terminal:

```sh
ulimit -a
```

The result should be something similar to this:

```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited
```

In your case, to increase the open files limit to 1024, use:

```sh
ulimit -n 1024
```

Check by running sudo nginx -t, and let's hope you don't see the error again.
I'm having configuration errors, and I have researched online but I'm not quite sure what the problem is. I want to install PHP and Nginx on an OS X 10.7.5 system. Whenever I try to start or stop the server, I get the following errors:

```
tone$ nginx
nginx: [warn] 1024 worker_connections exceed open file resource limit: 256
alcfwl128:~ tolbert$ nginx: [emerg] open() "/usr/local/Cellar/nginx/1.4.3/logs/nginx.pid" failed (2: No such file or directory)
nginx -s stop
nginx: [error] open() "/usr/local/Cellar/nginx/1.4.3/logs/nginx.pid" failed (2: No such file or directory)
```

For the first error I have tried the following command:

```sh
tone$ ulimit -n 65536
```

But I get this error:

-bash: ulimit: open files: cannot modify limit: Invalid argument

I'm not sure if I should create the logs folder in the directory along with the nginx.pid file, or if it is located somewhere else. Your help is appreciated.
Nginx on macOS : open files resource limit
The best way to implement WWW and HTTPS redirection is to create a new server section in the Nginx config:

```nginx
server {
    listen 80;  # listen for all the HTTP requests
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
```

You will also have to perform the https://example.com to https://www.example.com redirection. This may be done with code similar to the following:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate ssl.crt;      # you have to put here...
    ssl_certificate_key ssl.key;  # ...paths to your certificate files

    return 301 https://www.example.com$request_uri;
}
```

And of course, you must reload the Nginx config after each change. Here are some useful commands.

Check for errors in the configuration:

```sh
sudo service nginx configtest
```

Reload the configuration (this is enough to make changes take effect):

```sh
sudo service nginx reload
```

Restart the whole webserver:

```sh
sudo service nginx restart
```

Important note: all your server sections must be inside the http section (or in a file included in the http section):

```nginx
http {
    # some directives ...
    server {
        # ...
    }
    server {
        # ...
    }
    # ...
}
```
After purchasing an SSL certificate, I have been trying to force all pages to secured https and to www. https://www.example.com is working and secure, but only if I type it in exactly; www.example.com and example.com still point to http. We use nginx as a proxy and need to input the rewrite there. I have SSH/root access via PuTTY, and I have opened nginx.conf.

Now what? Do I input the nginx commands on this page? Starting where the cursor is? Any command lines first?

HTTPS .htaccess (original code I was given before I found out I had to input it into nginx):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.example.com/$1 [R,L]
```

Nginx code converter (this is how it shows in the converter; is everything on the correct lines?):

```nginx
# nginx configuration
location / {
    if ($http_host ~* "^example.com") {
        rewrite ^(.*)$ http://example.com/$1 redirect;
    }
}
```

WWW .htaccess (original code I was given before I found out I had to input it into nginx):

```apache
# Force www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301,NC]
```

Nginx code converter (this is how it shows in the converter; is everything on the correct lines?):

```nginx
# nginx configuration
location / {
    if ($http_host ~* "^example.com") {
        rewrite ^(.*)$ http://www.example.com/$1 redirect;
    }
}
```

Do I then save? Restart? Any help would be greatly appreciated. I have been battling this for weeks. My hosting company helped as far as they could; now I am learning on the fly... Or should I just stop and hire a developer? $$$ Thanks
Force www. and https in nginx.conf (SSL)
Since you are using Nginx, that must mean you are running PHP with PHP-FPM. After you install extensions, you need to restart PHP-FPM so it picks them up:

```sh
sudo /etc/init.d/php-fpm restart
```

or, in newer Ubuntu versions:

```sh
service php5-fpm restart
```
I recently discovered Nginx and decided to try it out on my server. I have Nginx running and able to serve PHP and HTML files, but now I want to try to install Drupal. When trying to install it and check the requirements, I am stopped by one requirement:

> PHP extensions Disabled. Drupal requires you to enable the PHP extensions in the following list (see the system requirements page for more information): gd

I have tried to install gd by doing apt-get install php5-gd, and it says it is already installed. So I created a phpinfo() file and checked to see if gd was enabled, and I wasn't able to find it. Does this have to do with Nginx or PHP? What do I do to fix this?
How to install PHP extensions on nginx?
Well, I opened up the Dockerfile from the official nginx container for you, and saw they use:

```dockerfile
CMD ["nginx", "-g", "daemon off;"]
```
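A minimal sketch of how that is used in practice (base image tag and config path are assumptions):

```dockerfile
FROM nginx:stable

# Ship your own config, then keep nginx in the foreground so the
# container has a long-running PID 1 instead of exiting after the fork.
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
```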
I have a dockerized nginx that works great if I run a shell, but (naturally enough) if I give "nginx start" itself as the entrypoint, it just daemonizes and exits immediately for lack of a process to wait for. My inclination is just to do something like this:

```sh
pid=$(cat /run/nginx.pid)
while ps ax | awk '{print $1}' | grep -q $pid; do
    sleep 60
done
```

Though this seems like quite a hack. But when I google for examples, I see people running bash. Any pointers?
docker nginx deployment entrypoint
I found the solution. I was trying to make changes in /etc/nginx/sites-available/django-project, but I needed to add the following lines to /etc/nginx/nginx.conf, the global settings for Nginx:

```nginx
http {
    ...
    proxy_connect_timeout 10;
    proxy_send_timeout 15;
    proxy_read_timeout 20;
}
```

I have a small website hosted, and these settings are enough for it; others may tune the values according to their needs.
I am running a Django site on DigitalOcean using the 1-Click Installation image. Everything worked fine, but now I am getting a 504 Gateway Timeout error. I've tried multiple settings from blogs, but they are not working. The following are my settings:

```nginx
upstream app_server {
    server 127.0.0.1:9000 fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /home/django/django_project;
    index index.html index.htm;
    client_max_body_size 4G;
    server_name www.mydomain.com;

    keepalive_timeout 5;

    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2|woff|ttf)$ {
        expires 365d;
    }

    # Your Django project's media files - amend as required
    location /media {
        alias /home/django/django_project/media/;
    }

    # your Django project's static files - amend as required
    location static/static-only {
        alias /home/django/django_project/static-only/;
    }

    # Django static images
    location /static/django_project/images {
        alias /home/django/django_project/static-only/django_project/images/;
    }

    # Proxy the static assets for the Django Admin panel
    location /static/admin {
        alias /usr/lib/python2.7/dist-packages/django/contrib/admin/static/admin;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
```

I followed the docs at http://nginx.org/en/docs/http/ngx_http_limit_req_module.html and checked the result of "wget 127.0.0.1:9000", but I couldn't make sense of where exactly to add the directives. Kindly advise.
Nginx 504 Gateway Timeout Error for Django
Use the return directive to return HTML. Remember to set the proper content type, otherwise the browser will assume raw text and won't render the code:

```nginx
server {
    listen 80;
    location / {
        add_header Content-Type text/html;
        return 200 'Hello World';
    }
}
```
I want to achieve something like this:

```nginx
server {
    listen 80;
    location / {
        return 200 Hello World
    }
}
```

i.e., any request should return the inline HTML. Is this possible with NGINX?

EDIT: I tried this:

```nginx
server {
    listen 80;
    location / {
        return 200 'Hello World'
    }
}
```

But testing in the browser did not render the HTML; instead, the browser tried to download a file containing the HTML, which is not the behavior I wanted.
NGINX: How to return inline html?
Read this documentation, especially:

> Note: for curly braces ( { and } ), as they are used both in regexes and for block control, to avoid conflicts, regexes with curly braces are to be enclosed with double quotes (or single quotes).

So, for example, the line:

```nginx
rewrite ^/saison-([0-9]{1})$ /pages.php?cat_page=saison-$1&season=$1 last;
```

should be:

```nginx
rewrite "^/saison-([0-9]{1})$" /pages.php?cat_page=saison-$1&season=$1 last;
```

This should remove the ";" syntax error (I didn't check that the rules are functionally valid).
I have these rules that successfully worked on Apache but return an error on nginx:

```nginx
rewrite ^/saison-([0-9]{1})$ /pages.php?cat_page=saison-$1&season=$1 last;
rewrite ^/saison-([0-9]{1})/([a-z0-9-]+)$ /evenements.php?season=$1&title=$2 last;
rewrite ^/saison-([0-9]{1})/([a-z0-9-]+)/([a-z0-9-]+)/([a-z0-9-]+)$ /evenements.php?season=$1&title=$2&place=$3&date=$4 last;
rewrite ^/saison-([0-9]{1})/([a-z0-9-]+)/([a-z0-9-]+)/([a-z0-9-]+)/([a-z]+)$ /evenements.php?season=$1&title=$2&place=$3&date=$4&view=$5 last;
```

I get:

Restarting nginx: [emerg]: directive "rewrite" is not terminated by ";" in /path/rwrules.nginx:1

If I remove these 4 lines from my rewrite rules, it works. What's the problem?
Curly braces ({ and }) in Apache-to-Nginx rewrite rules
If you see --with-http_sub_module in the nginx -V output, you can be certain that the module is already built in. So simply use its directives in the configuration file; there is no need to do any special magic to load the module itself.

Now, as to why the documentation says:

> This module is not built by default, it should be enabled with the --with-http_sub_module configuration parameter.

This applies when you compile nginx yourself, e.g. run the standard ./configure && make && make install. The module is not included by default if you run ./configure without arguments. But it's worth noting that most packaged nginx builds (the ones you get from yum install nginx or e.g. apt install nginx) have a custom ./configure command, which (as in your case) likely already had --with-http_sub_module as one of the configuration switches.
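A quick way to check (nginx -V prints its build flags to stderr, hence the redirect):

```sh
nginx -V 2>&1 | grep -o with-http_sub_module
```

If the flag is printed, the sub module's directives (such as sub_filter) are available.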
How do I start Nginx as a service with this module? According to this documentation:

> This module is not built by default, it should be enabled with the --with-http_sub_module configuration parameter.

I don't understand where to enter this command. Is it nginx service start --with-http_sub_module? That is so confusing. When I enter nginx -V, it shows that --with-http_sub_module is available.
Nginx, how to start service with ngx_http_sub_module enabled
I found the solution, which is basically to define the SSL options and the SSL certificate outside the server blocks:

```nginx
ssl_certificate ssl/mysite.com.crt;
ssl_certificate_key ssl/mysite.com.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP;
ssl_prefer_server_ciphers on;

server {
    listen 80;
    server_name *.mysite.com;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name one.mysite.com;
    ssl on;
    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;
    ssl on;
    location / {
        proxy_pass http://localhost:8090;
    }
}
```

Key things:

- "ssl on;" is the only thing that needs to be within the server blocks that listen for HTTPS. You can put it outside too, but that would make the server blocks that listen on port 80 use the HTTPS protocol rather than the expected HTTP.
- Because ssl_certificate, ssl_ciphers and the other ssl_* directives are outside the server blocks, Nginx does the SSL offloading without a server_name. This is what it should do: the SSL decryption cannot happen based on any host name, as at that stage the URL is still encrypted.
- Java and curl now work. There is no server_name/host mismatch.
I need to use Nginx as an SSL proxy which forwards traffic to different backends depending on the subdomain. Everywhere I have seen that I should define multiple "server {" sections, but that doesn't work correctly for SSL. Doing that, I would always have the SSL processed by the first virtual host, as the server name is unknown until you process the https traffic.

Scenario:

- One IP address
- One wildcard SSL certificate
- Multiple backends which need to be accessed like the following:
  - https://one.mysite.com/ -> http://localhost:8080
  - https://two.mysite.com/ -> http://localhost:8090

Nginx says "if" is evil (http://wiki.nginx.org/IfIsEvil), but what else can I do? I have tried the configuration below, but it doesn't work; I get a 500 error but nothing in the error logs:

```nginx
server {
    listen 443;
    server_name *.mysite.com;

    ssl on;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        if ($server_name ~ "one.mysite.com") {
            proxy_pass http://localhost:8080;
        }
        if ($server_name ~ "two.mysite.com") {
            proxy_pass http://localhost:8090;
        }
    }
}
```

Has anyone managed to accomplish this with Nginx? Any help/alternatives/links would be much appreciated.
nginx proxy based on host when using https
By default, Django ignores all X-Forwarded headers, based on the Django docs. Force it to read the X-Forwarded-Host header by setting USE_X_FORWARDED_HOST = True, and set SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https'). So in settings.py:

```python
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```
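For those settings to have any effect, the proxy must actually send the headers. A minimal sketch of the matching nginx location (upstream name taken from the question below):

```nginx
location / {
    proxy_pass http://my_server;
    # Headers Django consults via USE_X_FORWARDED_HOST and
    # SECURE_PROXY_SSL_HEADER above
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```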
I'm serving my Django app behind a reverse proxy:

The internet -> Nginx -> Gunicorn socket -> Django app

In the nginx config:

```nginx
upstream my_server {
    server unix:/webapps/my_app/run/gunicorn.sock fail_timeout=0;
}
```

The SSL is set up with certbot at the nginx level. request.build_absolute_uri in views.py generates http links. How can I force it to generate https links?
build_absolute_uri with HTTPS behind reverse proxy
Nginx doesn't know which port your Spring Boot application is running on. Make the application run on port 5000, which the Elastic Beanstalk Nginx proxies to by default, by adding server.port=5000 to application.properties, or via one of the other ways suggested in the last step here: https://pragmaticintegrator.wordpress.com/2016/07/12/run-your-spring-boot-application-on-aws-using-elastic-beanstalk/
I'm trying to deploy a very simple Spring Boot application on AWS Elastic Beanstalk using AWS's Java configuration (not their Tomcat configuration), but I keep getting a 502 error with the following log:

2016/06/10 02:00:14 [error] 4921#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 38.94.153.178, server: , request: "GET /test HTTP/1.1", upstream: "http://127.0.0.1:5000/test", host: "my-single-instance-java-app.us-east-1.elasticbeanstalk.com"

I've tried setting my port via Spring's application.properties to what the log seems to want (5000, using server.port=5000) and have verified that my application runs successfully on that port on localhost. This question is very similar, except that I'm deploying a JAR instead of a WAR. It seems like there is something I'm missing regarding configuring Nginx, and I don't know how to proceed. Here's my Spring application:

```java
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @RestController
    public static class MainController {

        @RequestMapping("/test")
        public String testMethod() {
            return "Method success!";
        }
    }
}
```
Spring Boot Application deployed on Elastic Beanstalk Java environment returns 502
According to the nginx manual, this directive adds the Expires and Cache-Control HTTP headers to the response. The value -1 means these headers are set as:

- Expires: current time minus 1 second
- Cache-Control: no-cache

So, in summary, it instructs the browser not to cache the document.
Given the sample location example below, what does -1 mean for expires? Does it mean "never expires" or "never caches"?

```nginx
# cache.appcache, your document html and data
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
    expires -1;
    access_log logs/static.log;
}
```

https://github.com/h5bp/server-configs-nginx/blob/b935688c2b/h5bp/location/expires.conf
What does `expires -1` mean in NGINX `location` directive?
I found a way to restart nginx after deployment using an undocumented technique for running post-deployment scripts. I added this to my .ebextensions:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/03_restart_nginx.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      service nginx restart
I'm running a rails application on Ruby 2.0/Puma instances and am trying to customize the nginx configuration. I need to increase the permitted request size to allow file uploads. I've found some other posts that have led me to add this to my .ebextensions:

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 70M;

That does create the file as expected, but it doesn't seem to work until I manually restart nginx. Because of that, I've tried to figure out a way to restart nginx with .ebextensions commands, but haven't had any success. Does anyone know of a way to restart nginx with .ebextensions, or know of a better approach to solving this problem?
Customizing Nginx Configuration in AWS Elastic Beanstalk
I've just made the decision to go with SSL myself and found an article on the DigitalOcean site on how to do this. It might be the listen 443 default deferred;, which according to that article should be ssl, not deferred. Here's the nginx block they use:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name your_domain.com;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        try_files $uri $uri/ =404;
    }
}

UPDATE: I now have my own site running on SSL. Along with the above, I just told Rails to force SSL. In your production environment config:

# ./config/environments/production.rb
config.force_ssl = true

Optionally, you can add these settings in the nginx.conf:

http {
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout 70;
}

UPDATE: 2015-09

Since I wrote this answer I've added a few extra things to my nginx config, which I believe everyone should also include. Add the following to your server block:

server {
    ssl_prefer_server_ciphers On;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    add_header X-Frame-Options DENY;
}

The first three lines (ssl_prefer_server_ciphers, ssl_protocols, ssl_ciphers) are the most important, as they make sure you have strong SSL settings.

The X-Frame-Options header prevents your site from being embedded via <iframe> tags. I expect most people will benefit from including this setting.
I have a staging rails app running with passenger on nginx. I want to secure the connections with SSL. I have read a lot of resources online, but I have yet to make it run on SSL. So far, my server block in nginx.conf is:

server {
    listen 80;
    listen 443 default deferred;
    server_name example.com;
    root /home/deploy/app/public;

    passenger_enabled on;
    passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;

    ssl on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:!ADH:!AECDH:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
}

The site is running, but not over HTTPS.
How do I setup ssl on a rails 4 app? (nginx + passenger)
How does get_ip() work? If nginx is a reverse proxy and gunicorn is the app server, gunicorn is always getting requests from nginx on the local machine, so it sees 127.0.0.1.

The real IP that nginx sends to the app server is, in my case, in HTTP_X_REAL_IP, set via the nginx conf line proxy_set_header X-Real-IP $remote_addr;

So you might want to set that, and in your Django app account for the different header by either using your new IP header or setting request.META['REMOTE_ADDR'] = request.META['HTTP_X_REAL_IP']
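For reference, a minimal sketch of where that line usually lives; the upstream address and port are placeholders, not taken from the question:

upstream app {
    server 127.0.0.1:8000;  # gunicorn (placeholder address)
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        # Pass the client's real address through to gunicorn/Django
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}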
I have a webserver set up with gunicorn, nginx and django. I am accessing it remotely, and with this:

def testIP(request):
    ip_address = utils.get_ip(request)

I just keep getting an IP address of 127.0.0.1. Like I said, I am accessing it remotely, so it should not be giving a local address. I think it might have something to do with gunicorn, but I want to check here first to see if you guys have any insights.
Django get IP only returns 127.0.0.1
I just found the solution:

location /ip/ {
    keepalive_timeout 0;
}
How do I let nginx close the tcp connection instantly after the request is fulfilled?
Force nginx to close connection instantly
The '/api' part of the proxy_pass target is the URI part the error message is referring to. Since ifs are pseudo-locations, and proxy_pass with a URI part replaces the matched location with the given URI, it's not allowed in an if. If you just invert that if's logic, you can get this to work:

location /tvoice {
    if ($http_user_agent ~ iPhone ) {
        # return 301 is preferable to a rewrite when you're not actually rewriting anything
        return 301 https://m.domain1.com$request_uri;
        # if you're on an older version of nginx that doesn't support the above syntax,
        # this rewrite is preferred over your original one:
        # rewrite ^ https://m.domain.com$request_uri? permanent;
    }

    ...

    if ($http_user_agent !~ facebookexternalhit) {
        rewrite ^/tvoice/(.*) http://mydomain.com/#!tvoice/$1 permanent;
    }

    proxy_pass http://mydomain.com/api;
}
I'm new to nginx, coming from apache, and I basically want to do the following, based on user-agent:

iPhone: redirect to iphone.mydomain.com
android: redirect to android.mydomain.com
facebook: reverse proxy to otherdomain.com
all other: redirect to ...

and tried it the following way:

location /tvoice {
    if ($http_user_agent ~ iPhone ) {
        rewrite ^(.*) https://m.domain1.com$1 permanent;
    }
    ...
    if ($http_user_agent ~ facebookexternalhit) {
        proxy_pass http://mydomain.com/api;
    }
    rewrite /tvoice/(.*) http://mydomain.com/#!tvoice/$1 permanent;
}

But now I get an error when starting nginx:

nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except"

And I don't get how to do it or what the problem is. Thanks
Nginx proxy or rewrite depending on user agent
It's not possible with your current config since it's static. You have two options:

1. Use Docker Engine swarm mode - you can define replicas, and swarm's internal DNS will automatically balance the load across those replicas.

Ref - https://docs.docker.com/engine/swarm/

2. Use the well-known jwilder nginx proxy - this image listens on the Docker socket and uses Go templates to dynamically rewrite your nginx config when you scale your containers up or down.

Ref - https://github.com/jwilder/nginx-proxy
My nginx.conf file currently has the routes defined directly:

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream wordSearcherApi {
        least_conn;
        server api1:61370 max_fails=3 fail_timeout=30s;
        server api2:61370 max_fails=3 fail_timeout=30s;
        server api3:61370 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name server_name 0.0.0.0;

        location / {
            proxy_pass http://wordSearcherApi;
        }
    }
}

Is there any way to create just one service in docker-compose.yml and, when running docker-compose up --scale api=3, have nginx do automatic load balancing?
docker-compose --scale X nginx.conf configuration
For now I moved to EC2, as the EBS issues weren't getting solved. I had the same issue on EC2, but there I could fix it because I have access to the machine.

Puma workers were timing out because my assets weren't precompiled. Every time I deploy a new build to the server, I have to run the following:

RAILS_ENV=production rake assets:precompile
I am new to AWS Beanstalk-Rails-Puma-Nginx. After deploying my Rails app to Beanstalk, all my API calls work fine, but HTML pages are causing an error.

When opening my HTML page, Nginx throws a 502 Bad Gateway error.

Puma log:

Started GET "/admin" for 182.70.76.160 at 2016-04-22 05:13:19 +0000
Processing by Devise::SessionsController#new as HTML
  Rendered devise/sessions/new.html.erb within layouts/application (6.1ms)
[18858] ! Terminating timed out worker: 22913

var/app/current/production.log is empty.

I read somewhere that adding SSL could solve it. Is it required to add SSL? Please help! I am stuck!

STATUS: My assets were huge, which is why it was killing itself. I was using a theme and removed all the unnecessary js, css and images. Now Puma doesn't terminate, but it does not compile assets. I had selected Ruby as the application type, so it should do that for me, correct?
Puma "Terminating timed out worker" after rendering HTML
Looks like you are confusing the fact that users, browsing online, will trigger standard requests to both "download" your static content and use your 2 APIs (book and api). It's not the NGINX service serving the static content that is accessing your APIs, but the users' browsers/applications, and they do that exactly the same way for both static content and APIs (the former just has more/specific headers and data, like auth...).

On your diagram you'll want to put your new static-service at the exact same level as your book-service and api-service, i.e. behind the ingress. But your static-service won't have a link with the db-service like the other 2. Then just complete your ingress rules, with the static-service at the end, as in this example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: your-global-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /book-service
        backend:
          serviceName: book-service
          servicePort: 80
      - path: /api-service
        backend:
          serviceName: api-service
          servicePort: 80
      - path: /
        backend:
          serviceName: static-service
          servicePort: 80

You'll have to adjust your service names and ports, and pick the paths you want your users to use to access your APIs. In the example above you'd have:

foo.bar.com/book-service for your book-service
foo.bar.com/api-service for the api-service
foo.bar.com/ i.e. everything else going to the static-service
I have a small java webapp comprising three microservices - api-service, book-service and db-service - all of which are deployed on a kubernetes cluster locally using minikube.

I am planning to keep separate UIs for api-service and book-service, with the common static files served from a separate pod, probably an nginx:alpine image.

I was able to create a front end that serves the static files from nginx:alpine referring to this tutorial.

I would like to use the ingress-nginx controller for routing requests to the two services.

The below diagram crudely shows where I am now.

I am confused as to where I should place the pod that serves the static content, and how to connect it to the ingress resource. I guess that keeping a front end pod before the ingress defeats the purpose of the ingress-nginx controller. What is the best practice to serve static files? Appreciate any help. Thanks.
How to serve static contents in a kubernetes application
With a LOT of help from AWS paid support, I got this working. The reality is I was not far off; it came down to some sed syntax.

Here's what currently works (Gist):

option_settings:
  - option_name: AWS_SECRET_KEY
    value:
  - option_name: AWS_ACCESS_KEY_ID
    value:
  - option_name: PORT
    value: 8081
  - option_name: ROOT_URL
    value:
  - option_name: MONGO_URL
    value:
  - option_name: MONGO_OPLOG_URL
    value:
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: ProxyServer
    value: nginx
    option_name: GzipCompression
    value: true
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public

container_commands:
  01_nginx_static:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
              proxy_set_header Upgrade $http_upgrade;\
              proxy_set_header Connection "upgrade";\
              ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf

In addition to this, you need to go into your load balancer and change the listener from HTTP to TCP, as described here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.elb.html
I am running Meteor on AWS Elastic Beanstalk. Everything is up and running, except that it's not running websockets, giving the following error:

WebSocket connection to 'ws://MYDOMAIN/sockjs/834/sxx0k7vn/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400

My understanding was that I needed to add something like:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

to the proxy config, via my YML config file.

Via my .ebextensions config file:

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

I have ssh'd into the server and I can see the proxy.conf with those two lines in it. When I hit my webserver I still see the "Error during WebSocket handshake:" error.

I have my beanstalk load balancer configured with sticky sessions and the following ports:

BTW I did see https://meteorhacks.com/load-balancing-your-meteor-app.html and I tried to:

Enable HTTP load balancing with Sticky Session on Port 80
Enable TCP load balancing on Port 8080, which allows websockets

But I could not get that working either. Another shot at some YAML that does NOT work is here: https://gist.github.com/adamgins/0c0258d6e1b8203fd051

Any help appreciated?
How do I customize nginx on AWS elastic beanstalk to loadbalance Meteor?
According to the Nginxdocumentation, a reverse proxy can be used to provide load balancing, provide web acceleration through caching or compressing inbound and outbound data, and provide an extra layer of security by intercepting requests headed for back-end servers.Gunicorn is designed to be an application server that sits behind a reverse proxy server that handles load balancing, caching, and preventing direct access to internal resources.By exposing Gunicorn's synchronous workers directly to the internet, a DOS attack could be performed by creating a load that trickles data to the servers, like theSlowloris.
From Gunicorn's documentation:

Deploying Gunicorn

We strongly recommend to use Gunicorn behind a proxy server.

Nginx Configuration

Although there are many HTTP proxies available, we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks. You can use slowloris to check if your proxy is behaving properly.

Why is it strongly recommended to use a proxy server, and how would the buffering prevent DOS attacks?
Why use gunicorn with a reverse-proxy?
111 connection refused likely means your app isn't running on the server/port combination. Also check that the security group for your app instance (or load balancer) has an inbound rule set to allow traffic from the nginx instance
I got this message: connect() failed (111: Connection refused). Here is my log:

-------------------------------------
/var/log/nginx/error.log
-------------------------------------
2018/10/21 06:16:33 [error] 4282#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.4.119, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com"
2018/10/21 06:16:33 [error] 4282#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.4.119, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8081/favicon.ico", host: "hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com", referrer: "http://hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com/"

I am using a nodejs/express Elastic Beanstalk environment. I have one nginx-related file in .ebextensions/nginx/conf.d/proxy.conf. That file contains:

client_max_body_size 50M;

Whenever I try to get my webpage I get a 502 Bad Gateway. What's wrong with my app?
AWS elastic Beanstalk / nginx : connect() failed (111: Connection refused
The code of the nginx-auth-request-module is annotated at nginx.com. The module always replaces the POST body with an empty buffer.

In one of the tutorials, they explain the reason, stating:

As the request body is discarded for authentication subrequests, you will need to set the proxy_pass_request_body directive to off and also set the Content-Length header to a null string

The reason for this is that auth subrequests are sent as HTTP GET methods, not POST. Since GET has no body, the body is discarded. The only workaround with the existing module would be to pull the needed information from the request body and put it into an HTTP header that is passed to the auth service.
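Applying that advice to the config from the question, the auth location would look something like this; the X-Original-URI header is just an illustration of passing request information along as a header:

location = /auth-proxy {
    internal;
    proxy_pass http://auth-server/;
    # The subrequest is a GET with no body, so don't try to forward one
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # Pass whatever the auth service needs as headers instead
    proxy_set_header X-Original-URI $request_uri;
}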
I'm using Nginx (version 1.9.9) as a reverse proxy to my backend server. It needs to perform authentication/authorization based on the contents of the POST requests, and I'm having trouble reading the POST request body in my auth_request handler. Here's what I've got.

Nginx configuration (relevant part):

server {
    location / {
        auth_request /auth-proxy;
        proxy_pass http://backend/;
    }

    location = /auth-proxy {
        internal;
        proxy_pass http://auth-server/;
        proxy_pass_request_body on;
        proxy_no_cache "1";
    }
}

And in my auth-server code (Python 2.7), I try to read the request body like this:

class AuthHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def get_request_body(self):
        content_len = int(self.headers.getheader('content-length', 0))
        content = self.rfile.read(content_len)
        return content

I printed out the content_len and it had the correct value. However, self.rfile.read() will simply hang, and eventually it times out and returns "[Errno 32] Broken pipe".

This is how I posted test data to the server:

$ curl --data '12345678' localhost:1234

The above command hangs as well, and eventually times out and prints "Closing connection 0".

Any obvious mistakes in what I'm doing? Thanks much!
Nginx auth_request handler accessing POST request body?
fastcgi_read_timeout should be put into the location which handles requests to your file, for example:

location ~ \.php$ {
    fastcgi_pass you.app:9000;
    ...
    fastcgi_read_timeout 900s; # 15 minutes
}

Please see more examples in the documentation
I am running a file with a considerable amount of code and have to process it for 1000 users. It takes approximately 55 seconds to process 500 users, so I have to increase the default gateway timeout. From this question, I found that I have to increase fastcgi_read_timeout, but I don't know where to put it in fastcgi.conf.
How to increase page timeout to prevent 504 error?
I have at least solved it for the short term by using stunnel (referring to this article: http://www.darkcoding.net/software/proxy-socket-io-and-nginx-on-the-same-port-over-ssl/).

Stunnel can convert HTTPS to HTTP, and by the same token WSS to WS. Nginx serves the socket application running on port 9000 as usual:

/etc/stunnel/stunnel.conf

[https]
accept = 443
connect = 80
TIMEOUTclose = 0

/usr/local/nginx/conf/nginx.conf

#user nobody;
worker_processes 1;

error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

tcp {
    upstream websockets {
        ## Play! WS location
        server 127.0.0.1:9000;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 80;
        listen 8000;
        server_name socket.artoo.in;

        tcp_nodelay on;
        proxy_pass websockets;
        proxy_send_timeout 300;
    }

    # virtual hosting
    #include /usr/local/nginx/vhosts/*;
}

#http {
#
#    server {
#        listen 443 ssl;
#        server_name socket.artoo.in;
#
#        ssl_certificate /usr/local/nginx/key/socket.domain.com.crt;
#        ssl_certificate_key /usr/local/nginx/key/socket.domain.com.key;
#
#        ssl_session_timeout 5m;
#
#        ssl_protocols SSLv2 SSLv3 TLSv1;
#        ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
#        ssl_prefer_server_ciphers on;
#
#        location / {
#            proxy_pass http://127.0.0.1:9000;
#        }
#    }
#}

Now the only thing I need to worry about is how to increase the timeout for websockets on nginx; the connection seems to break every 75 seconds (nginx's default).
I am having a problem connecting through WSS to my server. I followed this article to set up nginx with websockets: http://www.letseehere.com/reverse-proxy-web-sockets

The following is my nginx config, which serves a Play! application:

#user nobody;
worker_processes 1;

error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

tcp {
    upstream websockets {
        ## Play! WS location
        server 127.0.0.1:9000;
    }

    server {
        listen 80;
        listen 8000;
        server_name socket.domain.com;

        tcp_nodelay on;
        proxy_pass websockets;
        proxy_send_timeout 300;
    }

    # virtual hosting
    #include /usr/local/nginx/vhosts/*;
}

http {
    server {
        listen 443 ssl;
        server_name socket.artoo.in;

        ssl_certificate /usr/local/nginx/key/socket.domain.com.crt;
        ssl_certificate_key /usr/local/nginx/key/socket.domain.com.key;

        ssl_session_timeout 5m;

        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://127.0.0.1:9000;
        }
    }
}

The server is accessible on http://socket.domain.com, https://socket.domain.com and ws://socket.domain.com, but not on wss://socket.domain.com
Nginx config for WSS
When you use Cloudflare there are two parts to encrypt:

From the user's browser to Cloudflare
From Cloudflare to your server

This means that you need two certificates for full encryption.

Cloudflare automatically provides you with the first one. This is the one that a user sees if they check the URL padlock.

There are various ways to deal with the Cloudflare > Server encryption. All of these are free.

Select Cloudflare's "flexible" SSL/TLS encryption mode. This does NOT encrypt the request from Cloudflare to your server, but the browser will show the green padlock and say the site is secure. Kind of obnoxious, if you ask me.

Use Lets Encrypt to install a cert on your server: https://certbot.eff.org/lets-encrypt/ubuntufocal-apache. You can now set Cloudflare's SSL/TLS encryption mode to "Full (strict)". I decided NOT to go with this solution because the basic solution doesn't work with load balancers.

Install Cloudflare's Origin Certificate on your server. You can set its expiry to 15 years, which is nice (at least until 2035, when you have forgotten about this and your site breaks). Here are the Ubuntu directions: Set up Ubuntu Apache2 SSL using .pem and .key from Cloudflare

You can also create and install your own origin certificate, which is apparently quite easy, but I haven't tried.

Aug 2023 update: I went with option 3 and it's still working years later. I just noticed that the URL padlock says that the cert is Lets Encrypt and that it's only valid for three months. I guess this is what Cloudflare uses for the browser > Cloudflare part and that they handle updating it. I've never had to. I think it used to say that it was a Cloudflare cert.
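If you serve the site with nginx rather than Apache, option 3 is just the usual SSL server block pointed at the origin certificate. A sketch, with hypothetical paths (wherever you saved the files Cloudflare gave you):

server {
    listen 443 ssl;
    server_name example.com;

    # Cloudflare origin certificate (paths are hypothetical)
    ssl_certificate     /etc/ssl/cloudflare/origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare/origin.key;

    location / {
        # ...your app or static files...
    }
}

With that in place, the SSL/TLS mode in Cloudflare can be set to "Full (strict)", since Cloudflare trusts its own origin certificates.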
I've been really confused between Cloudflare's SSL and using Let's Encrypt to have my website become full https. Many sources say to use either, or to use both. However there is not a very decisive way to figure out whether to use both or just use one over the other.

In most cases, people love Cloudflare because it is a free CDN, and it comes with a simple way of setting up SSL. However it looks like Let's Encrypt is the next big thing, and it would be silly not to learn more about it.

Some people say that Cloudflare is enough: http://community.rtcamp.com/t/letsencrypt-with-cloudflare/5659

Some have gone to extreme lengths to set up both:
https://medium.com/@benjamincaldwell/better-ssl-tls-certificates-from-lets-encrypt-with-nginx-and-cloudflare-9f01f89940cd#.tlhx6g5in
https://community.letsencrypt.org/t/how-to-get-a-lets-encrypt-certificate-while-using-cloudflare/6338?u=pfg
http://pushincome.com/cloudflare-lets-encrypt-free-ssl-setup-ubuntu-apache/
https://flurdy.com/docs/letsencrypt/nginx.html

I was wondering what was the best way to set up Let's Encrypt properly to use with Cloudflare still as a CDN for my content. Thanks.
let's encrypt vs cloudflare or both? [closed]
Shouldn't this be an example of using try_files?

location /p/ {
    try_files $uri @s3;
}

location @s3 {
    proxy_pass http://my_bucket.s3.amazonaws.com;
}

Make sure there isn't a trailing slash on the S3 URL.
So I'm moving my site away from Apache and onto Nginx, and I'm having trouble with this scenario:

A user uploads a photo. This photo is resized and then copied to S3. If there's suitable room on disk (or the file cannot be transferred to S3), a local version is kept.

I want requests for these images (such as http://www.mysite.com/p/1_1.jpg) to first look in the p/ directory. If no local file exists, I want to proxy the request out to S3 and render the image (but not redirect).

In Apache, I did this like so:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^p/([0-9]+_[0-9]+\.jpg)$ http://my_bucket.s3.amazonaws.com/$1 [P,L]

My attempt to replicate this behavior in Nginx is this:

location /p/ {
    if (-e $request_filename) {
        break;
    }
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}

What happens is that every request attempts to hit Amazon S3, even if the file exists on disk (and if it doesn't exist on Amazon, I get errors). If I remove the proxy_pass line, then requests for files on disk DO work. Any ideas on how to fix this?
Nginx Proxy to Files on Local Disk or S3
I found the solution to the problem. Make sure that permalinks are working properly before you assume (like I did) that it is an issue with the plugin. I was able to correct permalinks for a WordPress site in a subdirectory on an nginx server. This article will help you if you face the same issue: here
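For reference, the usual nginx permalink rule looks like this; it is the generic fix rather than a quote from the linked article, and it routes all pretty-permalink requests (which the API's routes depend on) through index.php:

location / {
    try_files $uri $uri/ /index.php?$args;
}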
For some reason "out-of-the-box" the Wordpress JSON API does not work on Nginx. I have tried several redirect schemes in the nginx conf. The only thing I have gotten to work is?json. However, this does not work for authentication and registration.As an FYI, I am developing a cordova application and attempting to use the WP JSON API for WP backend.
How do I get the Wordpress JSON-API to work on Nginx server?
It's likely that it can't kill the process.

Open up the nginx sysvinit script located in /etc/init.d/ (or /etc/rc.d/) and find where nginx.pid is thought to be. It'll be something like "/var/run/nginx.pid".

If the pid file isn't there, open nginx.conf and look for the pid setting. If it is a mismatch, set the conf value to where the script thinks it should be, e.g.

# pid of nginx process
pid /var/run/nginx.pid;
I've got an Ubuntu 11.04 i386 server with nginx 1.0.11 installed. Also, I'm using this init.d script, the only one I've found in several different places. It starts the server nicely; however, on stop/restart it says

* Stopping Nginx Server... [fail]

Of course, the daemon is not stopped, and upon restart the configuration is not reloaded. How can I repair this?
Nginx daemon stop is failing
If you don't want 404 errors to show up in your nginx error logs, log_not_found is the directive you want to configure.

By default, log_not_found is set to "on", which reports file-not-found errors in your main error log. To turn these off, just do:

log_not_found off;

Source: http://nginx.org/en/docs/http/ngx_http_core_module.html#log_not_found
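In the configuration from the question, that would simply be (a sketch, not the full config):

server {
    ...
    error_page 404 /switch;
    # Don't write "file not found" events to the error log
    log_not_found off;
}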
In my configuration, I'm using:

error_page 404 /switch;

When I browse to /testABC, I see this error in my log:

open() "/usr/local/nginx/html/www/testABC" failed (2: No such file or directory)

How can I disable this error? I'm currently using the 404 as part of the site functionality.
Disable 404 error logging
I've done the same thing on IIS. First you have to build your app with the "base-href" option:

ng build --output-path=dist/fr --prod --bh /fr/
ng build --output-path=dist/en --prod --bh /en/

and for nginx use this config:

location /fr/ {
    alias /var/www/dist/fr/;
    try_files $uri$args $uri$args/ /fr/index.html;
}
location /en/ {
    alias /var/www/dist/en/;
    try_files $uri$args $uri$args/ /en/index.html;
}

For navigation from /en/someroute to /fr/someroute, you can get the current router URL in the component where you have the language switcher:

getCurrentRoute() {
    return this.router.url;
}

When the user clicks a language selector (the switcher template renders {{language.lang}} for each entry), you redirect to the same route with the selected language using the change-language method:

changeLanguage(lang: string) {
    const langs = ['en', 'fr'];
    this.languages = this.allLanguages.filter((language) => {
        return language.lang !== lang;
    });
    this.curentLanguage = this.allLanguages[langs.indexOf(lang)].name
    localStorage.setItem('Language', lang);
    if (isDevMode()) {
        location.reload(true);
    }
}
I built an angular-5 application using i18n that supports both French and English. I then deployed a separate version of the app for each supported language:

- dist
  |___ en/
  |     |__ index.html
  |___ fr/
        |__ index.html

I also added the following nginx configuration to serve the application in both languages:

server {
    root /var/www/dist;
    index index.html index.htm;
    server_name host.local;

    location ^/(fr|en)/(.*)$ {
        try_files $2 $2/ /$1/index.html;
    }
}

What I want to do is to serve both applications and allow switching between the English and the French version. Let's say, for example, I'm on host.local/en/something; if I switch to host.local/fr/something I should get the French version of the "something" page.

With the nginx configuration I shared, I get a 404 Not Found error every time I refresh pages when browsing my apps, which also prevents me from browsing my apps independently from one another and switching between them.

What did I miss? What's the appropriate nginx conf to achieve that?
Nginx configuration for angular i18n application
Well, that's not how you refer to the WSGI file with gunicorn. See the docs:

The module name can be a full dotted path. The variable name refers to a WSGI callable that should be found in the specified module.

So if your wsgi.py file is in GenericRestaurantSystem/wsgi.py, your command should be:

gunicorn -b 127.0.0.1:8000 GenericRestaurantSystem.wsgi:application
I have made a django web app using the default localhost; however, I am trying to set it up on a server so that I can configure a postgres database and continue on without having to redo the database later on.

I am hosting the site through a DigitalOcean Ubuntu 14 droplet. When I created the droplet I selected that it come preconfigured for django. It uses nginx and gunicorn to host the site.

When I first created the instance of the server, a basic django app was configured to work on the given IP. And it did.

I tried cloning my project into the same directory as that project, assuming it would live on the python path ('/home/project'), and configured nginx to serve up 127.0.0.1:8000 per some of the documentation I found.

I believe the issue lies in how I try to bind gunicorn. I get the following error with this input:

gunicorn -b 127.0.0.1:8000 GenericRestaurantSystem/wsgi.py:application

ImportError: Failed to find application, did you mean 'program/wsgi:application'?

I am not 100% sure, but it seems as though gunicorn is not serving up anything (or is not even on) at this point. Any suggestions as to binding this application successfully?
Gunicorn will not bind to my application
The server directive must be contained in the context of the http module. Additionally, you are missing the top-level events module, which has one obligatory setting, and a bunch of stanzas which need to be in the http module of your config. While the nginx documentation is not particularly helpful for creating a config from scratch, there are working examples there.

Source: nginx documentation on the server directive
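As a sketch, the overall file needs this shape; your existing server blocks go inside http unchanged:

# nginx.conf skeleton
events {
    # the one obligatory setting of the events module
    worker_connections 1024;
}

http {
    # http-level stanzas (gzip, keepalive_timeout, ...) go here

    server {
        listen 7000 default deferred;
        server_name example.com;
        # ...
    }

    # ...the remaining server blocks...
}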
With nginx/0.7.65 I'm getting this error on line 4. Why doesn't it recognize server?

#### CHAT_FRONT ####
server {
    listen 7000 default deferred;
    server_name example.com;
    root /home/deployer/apps/chat_front/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### CHAT_STORE ####
server {
    listen 7002 default deferred;
    server_name store.example.com;
    root /home/deployer/apps/chat_store/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### LOGIN ####
server {
    listen 7004 default deferred;
    server_name login.example.com;
    root /home/deployer/apps/login/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### PERMISSIONS ####
server {
    listen 7006 default deferred;
    server_name permissions.example.com;
    root /home/deployer/apps/permissions/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### SEARCH ####
server {
    listen 7008 default deferred;
    server_name search.example.com;
    root /home/deployer/apps/search/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### ANALYTICS ####
server {
    listen 7010 default deferred;
    server_name analytics.example.com;
    root /home/deployer/apps/analytics/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
unknown directive "server" in /etc/nginx/nginx.conf:4
nginx -s reload can only be used when nginx is running. It sends a signal to the master process of nginx, which itself notifies the worker processes.

The problem here is that you do not have an nginx instance running during your build process, so you are unable to reload it.

It is not a real problem, though. Since nginx is not running, you do not need to reload it. After starting nginx (with a runit init file when using the phusion base image), it will load your provided configuration.

That said, I would not recommend running multiple services in a single container. It is easy enough with docker-compose to orchestrate multiple containers nowadays.
I am trying to install nginx (and then nodejs) in my container. I have my own nginx config files in the nginx/ folder:

# --- Release with Alpine ----
FROM phusion/baseimage:0.10.1

# Create app directory
WORKDIR /app

RUN apt-get update \
 && apt-get -y install nginx

RUN ls /etc/nginx
COPY nginx/nginx.conf /etc/nginx/nginx.conf
RUN ls /etc/nginx
RUN ls /etc/nginx/sites-enabled
RUN rm /etc/nginx/sites-enabled/default
COPY nginx/serverlogic /etc/nginx/sites-enabled/serverlogic
RUN ls /etc/nginx/sites-enabled
RUN nginx -s reload
RUN echo "$PWD FINALFINALFINAL" && ls

EXPOSE 8080

But I get an error on the RUN nginx -s reload line saying: "nginx: [error] open() "/run/nginx.pid" failed (2: No such file or directory)". What is that?

p.s. Is there some image on Docker Hub with Ubuntu, Node.js and nginx installed? I want to put them in one container for simplicity.
Reload Nginx during docker build
Because nginx is going to listen on port 80, it needs to be root. LaunchAgents are run as a non-root user when that user logs in. LaunchDaemons are loaded at boot as the root user.

launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist

Move the homebrew.mxcl.nginx.plist:

sudo mv ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist /Library/LaunchDaemons/

Load the plist from the LaunchDaemons folder:

sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist

Now, sudo brew services list shows a running nginx process:

Name  Status  User Plist
nginx started root /Library/LaunchDaemons/homebrew.mxcl.nginx.plist

Running brew services list without root will result in an error status, because you need to be root to read the status.
brew services says nginx is started:

MacBook-Pro-van-Youri:Homebrew youri$ brew services start nginx
Service `nginx` already started, use `brew services restart nginx` to restart.

Same for launchctl:

MacBook-Pro-van-Youri:Homebrew youri$ launchctl load ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
/Users/youri/Library/LaunchAgents/homebrew.mxcl.nginx.plist: service already loaded

My homebrew.mxcl.nginx.plist:

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>homebrew.mxcl.nginx</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/opt/nginx/bin/nginx</string>
        <string>-g</string>
        <string>daemon off;</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/usr/local</string>
</dict>
</plist>

brew services list says the following:

MacBook-Pro-van-Youri:LaunchAgents youri$ brew services list
Name    Status  User  Plist
mariadb started youri /Users/youri/Library/LaunchAgents/homebrew.mxcl.mariadb.plist
nginx   error   youri /Users/youri/Library/LaunchAgents/homebrew.mxcl.nginx.plist
php71   started youri /Users/youri/Library/LaunchAgents/homebrew.mxcl.php71.plist

The syntax is OK:

MacBook-Pro-van-Youri:LaunchAgents youri$ plutil -lint homebrew.mxcl.nginx.plist
homebrew.mxcl.nginx.plist: OK

When I run sudo nginx I can access my website.
Nginx not started, homebrew says it is
You're missing semicolons on a bunch of lines, that's why nginx -t is failing.
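From the posted config, these are the lines in question; each needs a terminating semicolon:

listen 443 ssl;
server_name domain.com;
ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;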
I've deployed a Django application on DigitalOcean. First off, when I try to secure this with https and ssl, I get this error when I run nginx -t:

nginx: [emerg] invalid parameter "server_name" in /etc/nginx/sites-enabled/django:12
nginx: configuration file /etc/nginx/nginx.conf test failed

upstream app_server {
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}

server {
    #listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;

    listen 443 ssl
    server_name domain.com
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    root /usr/share/nginx/html;
    index index.html index.htm;
    client_max_body_size 4G;
    server_name _;
    keepalive_timeout 5;

    # Your Django project's media files - amend as required
    location /media {
        alias path/to/media;
    }

    # your Django project's static files - amend as required
    location /static {
        alias path/to/static;
    }

    # Proxy the static assets for the Django Admin panel
    location /static/admin {
        alias path/to/staticadmin;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    server_name domain.com;
    return 301 https://$host$request_uri;
}

Furthermore, I can access the website using the IP address, but not the registered domain name. That results in a 400 Bad Request page. Could this be an issue with settings.py? For reference, in settings.py: ALLOWED_HOSTS=['*']. What list do I provide in the ip_addresses() function? Are these two problems related?

Using Django v1.10.5
invalid parameter server_name in /etc/nginx/sites-enabled/django
You can do:

server_name one.example.org two.example.org;

if both hosts are exactly identical except for the domain name.

If you just have similar location blocks, you can move those locations to a separate file and then do an

include /etc/nginx/your-filename;

to easily use it in each server block.
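Applied to the config from the question, that could look like this (the include filename is arbitrary):

# /etc/nginx/php-common.conf (name is arbitrary)
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

server {
    root /path/to/one;
    server_name one.example.org;
    include /etc/nginx/php-common.conf;
}

server {
    root /path/to/two;
    server_name two.example.org;
    include /etc/nginx/php-common.conf;
}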
Let's say I have an nginx configuration set up for a domain like this:

server {
    root /path/to/one;
    server_name one.example.org;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Now, if I want to add another domain with different content, is there a way I can re-use equivalent statements from the previous domain, or do I have to duplicate everything for every new domain I want to support?

server {
    root /path/to/two;            # different
    server_name two.example.org;  # different

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

I tried moving the location directive outside the server closure, but obviously things don't work like that, because I got a "location directive is not allowed here" error when restarting nginx.
Reuse configuration statements for domains in nginx.conf
Based on @cyberchis's answer I simplified the process, and I have gone through the same setup twice. I hope that it also works for you.

Check the user of nginx

1.1. Open nginx.conf with nano /etc/nginx/nginx.conf.
1.2. Check the 1st line, user www-data; - the user here is www-data.

Edit external_url of gitlab

2.1. Open gitlab.rb with nano /etc/gitlab/gitlab.rb.
2.2. Edit the line external_url 'GENERATED_EXTERNAL_URL' to external_url 'http://gitlab.yourdomain.com'.
2.3. Uncomment and change the line nginx['enable'] = true to nginx['enable'] = false.
2.4. Uncomment and change the line web_server['external_users'] = [] to web_server['external_users'] = ['www-data'].

Add a configuration file for gitlab

3.1. Download the gitlab-omnibus-nginx.conf from the gitlab repository.
3.2. Go to the directory where the file is, and copy this file to nginx with cp /directory-to-this-file/gitlab-omnibus-nginx.conf /etc/nginx/sites-enabled.
3.3. Open this file with nano /etc/nginx/sites-enabled/gitlab-omnibus-nginx.conf.
3.4. Change the line listen 0.0.0.0:80 default_server; to listen 0.0.0.0:7001; # gitlab runs on port 7001
3.5. Change the line listen [::]:80 default_server; to listen [::]:7001; # gitlab runs on port 7001
3.6. Change the line server_name YOURSERVER_FQDN to server_name www.yourdomain.com.

Configure nginx

4.1. Open nginx.conf with nano /etc/nginx/nginx.conf.
4.2. Add this configuration:

http {
    ...
    server {
        listen 80;
        server_name gitlab.yourdomain.com;
        location / {
            proxy_pass http://127.0.0.1:7001;
        }
    }
}

Reconfigure gitlab and reload nginx

5.1. sudo gitlab-ctl reconfigure
5.2. sudo systemctl reload nginx

Configure firewall to expose port 7001 (optional)

Since gitlab runs on my local server, port 7001 has to be reachable from the outside. The easiest way to enable it is to run ufw allow 7001.

Now gitlab runs on your subdomain gitlab.yourdomain.com, which you should be able to access.
I've been following the instructions from the GitLab wiki; however, it seems as if some key pieces of information are missing. The section "Using a Non-Bundled Web Server" never explains how I need to reconfigure my Nginx installation to reverse proxy over to GitLab.

Basically, I'd like to have GitLab installed under git.example.com, but I can't seem to find the configuration settings for my existing Nginx installation that'll do that. The wiki page goes on to talk about configuring an existing Passenger/Nginx installation, but I don't have Passenger, so I don't think that applies to my situation.

I suppose the easiest solution would be if there were a way to tell GitLab to use its built-in Nginx and just listen on an internal port, and then have my other Nginx forward to that port, but I can't seem to figure out how to configure GitLab to handle that.

Any help would be greatly appreciated.
Forwarding to GitLab Subdomain with Existing Nginx Installation
Have you looked into Foreman? Foreman makes it easy to start and stop your application if it has multiple processes. Incidentally, it also provides an export function that can generate some systemd or upstart scripts for you to (re)start and stop your application.

As you are already using capistrano, you can use capistrano-foreman to integrate all this nicely with capistrano.

I hope you find some use in these resources.
I've been struggling with this for a week now and really can't seem to find an answer. I've deployed my Rails app with Capistrano. I use Puma as the server.

When I deploy, everything works fine. The problem is getting Puma to start at reboot and/or when it crashes.

To get the deployment set up, I've used this tutorial. I'm also using RVM. The problem seems to be getting the service to start Puma. Here's what I've used (service file):

[Unit]
Description=Puma HTTP Server
After=network.target

[Service]
Type=simple
#User=my-user
WorkingDirectory=/home/my-user/apps/MyApp/current
ExecStart=/home/my-user/apps/MyApp/current/sbin/puma -C /home/my-user/apps/MyApp/shared/puma.rb
Restart=always

[Install]
WantedBy=multi-user.target

That doesn't work. I was starting to think the problem was Ruby not being installed for all users, so I've installed RVM for all users and still get the same problem. My server has only root and my-user.

Looking at how Capistrano deploys, the command it runs is:

cd /home/my-user/apps/MyApp/current && ( RACK_ENV=production /home/my-user/.rvm/bin/rvm default do bundle exec puma -C /home/my-user/apps/MyApp/shared/puma.rb --daemon )

If I use the aforementioned command, I get an error from systemd complaining about missing parameters. So I've written a script with it and had the service file call this script to start the app.

That doesn't work either. Note that if I call the script from anywhere on the server, the script does start the app, so it's an issue with configuring systemd, but I can't figure out what's wrong and I'm not sure how to debug it. I've seen the debug page on systemd's website, but it didn't help me. If I run systemctl status puma.service, all it tells me is that the service is in a failed state, but it doesn't tell me how or why.
How to get systemd to restart Rails App with Puma
Yes, Nginx supports HTTP/2 server push since version 1.13.9, released on February 20, 2018. The Nginx team pointed out in the original 1.9.5 blog post that it wasn't supported back then.
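Usage is one http2_push directive per resource; a minimal sketch (the file names are placeholders):

server {
    listen 443 ssl http2;
    server_name example.com;

    location = /index.html {
        # Push assets the page is going to request anyway
        http2_push /css/style.css;
        http2_push /img/logo.png;
    }
}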
I'm planning to upgrade my nginx to 1.9.6, which supports HTTP/2. Has nginx implemented HTTP/2 server push?

ref: https://ma.ttias.be/service-side-push-http2-nghttp2/
Does the nginx HTTP/2 module support Server Push?
I also read about the nginx+zeromq module and I immediately spotted a considerable difference.ZeroMQ nginx module uses REQ/REP sockets to communicate with the backend processes. On the other hand mongrel2 uses two sockets. One PUSH/PULL to send messages downstream (to the handlers) and one PUB/SUB (to receive responses from handlers). This makes it totally asynchronous. When mongrel2 sends a request to the backend handlers it returns immediately from the zmq_send() call and the response will be received in another socket, anytime later.Another difference is that mongrel2 is capable of sending the same response to more than one client. Your handler can tell mongrel2 something like this: "Deliver this response to connections 4, 5, 6 and 10, please". Mongrel2 send the connection ID within the message to the handlers.Hope this helps! =)
I see this new NGINX+ZeroMQ project on github and am now confused. What are the feature and scalability differences between Mongrel2 and NGINX+ZeroMQ?

(The reason why I ask is because I'm under the impression Mongrel2 was solely created because NGINX didn't support ZeroMQ.)
Mongrel2 vs. NGINX+ZeroMQ?
This is a huge topic, but let me help and give you some pointers.

Nginx is much more than just a reverse proxy. It can serve static content, compress the response content, run multiple apps on different ports on the same VM, and much more.

PM2 essentially helps you to scale the throughput of your service by running it in cluster mode, utilizing all the cores of the box. Read this stackoverflow answer to understand more on this.

Now to answer your questions:

Can we use both for production? Yes, and you should. Nginx can run on port 80. PM2 can run on port 3000 (or whatever port), which can then manage traffic within the instances of the app.

gzip alone will make a huge difference to end-user performance.

Here is a good article in case you need code help on how to set it up.
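As a sketch of that split (the ports as described above; the domain is a placeholder), nginx handles port 80 and compression while PM2's cluster balances the app on 3000:

server {
    listen 80;
    server_name example.com;

    gzip on;  # let nginx handle compression
    gzip_types text/plain text/css application/json application/javascript;

    location / {
        # PM2 runs the node app here and balances across its workers
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}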
I am new to Node.js. I have built my first Node.js server. I am doing some research to improve the performance of a node.js server in production, and so I learned about NGINX and Process Manager (PM2).

NGINX:
It can load balance the incoming requests.
It can act as a reverse proxy for our application.

PM2:
It can divide our application into clusters, as it has a built-in load balancer.
We can monitor and restart the application when it crashes.

Can we use both for production? Though a load balancer is there in PM2, can I use only PM2? What is the advantage of using NGINX over PM2? If I use load balancing with NGINX and clustering with PM2, will it give better performance than using only one (NGINX or PM2)?
Can we use both NGINX and PM2 for node.js production deployment?
The error indicates that your SCRIPT_FILENAME is incorrect. Your comment:

in the wordpress container it's at /var/www/html/index.php in the nginx container it's at /app

suggests that nginx and php-fpm are seeing a different document root. In which case, use:

fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
I've looked through every question like this here and tried to apply the stated fixes, with no success.

I'm using the wordpress:4.7.3-php7.0-fpm-alpine docker image with a separate nginx container in front of it.

When I curl wordpress I get:

File not found.

When I check the wordpress container logs, I get:

127.0.0.1 - 16/Mar/2017:06:26:24 +0000 "GET /index.php" 404
127.0.0.1 - 16/Mar/2017:06:31:27 +0000 "GET /index.php" 404
127.0.0.1 - 16/Mar/2017:06:32:16 +0000 "GET /index.php" 404
127.0.0.1 - 16/Mar/2017:06:37:17 +0000 "GET /index.php" 404
127.0.0.1 - 16/Mar/2017:06:39:09 +0000 "GET /index.php" 404

The actual nginx error is:

2017/03/16 06:26:24 [error] 17#17: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 10.128.0.7, server: k8wp, request: "GET / HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000"

I'm using php 7:

/var/www/html # php-fpm -v
PHP 7.0.16 (fpm-fcgi) (built: Mar  3 2017 23:07:56)
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
    with Zend OPcache v7.0.16, Copyright (c) 1999-2017, by Zend Technologies

My nginx config is:

server {
    root /app;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name _localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

I'm running nginx as the user www-data:

user www-data;

According to /usr/local/etc/php-fpm.d/www.conf the user and group are uncommented and set to www-data.
File not found nginx php-fpm
See the python docs section on FCGI. Basically, with Python, you use the WSGI interface on top of an fcgi server which talks to the web server (the fcgi client).

See Python + FastCGI for a couple of Python fcgi servers.

Edit: This nginx wiki page explains exactly how to set up Python with nginx using fcgi.

This wiki page describes the uWSGI module for nginx, which is the natural way to use Python with a web server, if you don't really need to use fcgi.

This blog entry also looks like good info on uWSGI.

In production, Apache + mod_wsgi or Nginx + mod_wsgi? has some useful info for nginx mod_wsgi as well.
I am looking to run standalone python scripts through fcgi for use with nginx, but I have no idea where to start with spawning the processes. Currently, I have PHP working successfully with nginx+fcgi, but I'm unsure if/how I can do the same with python. Any suggestions on where to start?
Running python through fastCGI for nginx
NOT RECOMMENDED - this was a quick and dirty way to get Django up and running with ELB, but in most cases a health check response should be returned directly from your application code. See the rest of the answers for a better solution.

This is something better handled by nginx, so the fact that it's serving a django app should make no difference.

You will first need to configure ELB's health check to ping a specific URL of your instance, say /elb-status. You can follow the instructions here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-healthchecks.html#update-health-check-config

After that, all you need to do is set up nginx to always send back an HTTP 200 status code for that URL. You can add something like this to your server block in nginx.conf:

location /elb-status {
    access_log off;
    return 200;
}

See this answer for more details.
I have a webapp which requires authentication to access any of its pages. But for my ELB to work, I have to set up a health-check page for ELB so that ELB can discover the django app. This page should return HTTP 200 and require no auth. How do I set this up in the django/nginx world?
How to setup health check page in django
To add HTTP and WebDAV methods like PUT, DELETE, MKCOL, COPY and MOVE you need to compile nginx with the HttpDavModule (./configure --with-http_dav_module). Check nginx -V first; maybe you already have the HttpDavModule (I installed nginx from the Debian repository and already have the module).

Then change your nginx config like this:

location / {
    root /var/www;
    dav_methods PUT;
}

You can get more info in the nginx docs entry for the HttpDavModule.
I am using an application which needs to PUT a file on an HTTP server. I am using Nginx as the server but getting a 405 Not Allowed error back. Here is an example of a test with cURL:

curl -X PUT \
  -H 'Content-Type: application/x-mpegurl' \
  -d /Volumes/Extra/playlist.m3u8 http://xyz.com

And what I get back from Nginx:

405 Not Allowed
nginx/1.1.19

What do I need to do to allow the PUT? Any clues would be awesome!
How do I allow a PUT file request on Nginx server?
Now it is possible to do it this way:

fastcgi_param PHP_VALUE "include_path=/my/include/path";

More information here: http://bugs.php.net/bug.php?id=51595

Using this technique to set PHP values, I have successfully set different "error_log" locations for multiple virtual hosts.

Thanks, PHP and NginX guys!
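Several settings can go into one PHP_VALUE, separated by newlines. A sketch for one of several vhosts (the paths are illustrative, not from the question):

server {
    server_name site1.example.com;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
        # Newline-separated php.ini settings, applied to this vhost only
        fastcgi_param PHP_VALUE "include_path=/srv/site1/lib
error_log=/var/log/php/site1.error.log";
    }
}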
Apache lets you set php.ini values for virtual hosts with the php_value directive. Does nginx have something similar? Is there another way to set the include_path on a per-site basis?
Set php include_path from nginx
Check out the html folder in the nginx directory - there should be 50x pages there. By default, I believe, all "special pages", including the 404 page, are hardcoded:

static char ngx_http_error_404_page[] =
"<html>" CRLF
"<head><title>404 Not Found</title></head>" CRLF
"<body bgcolor=\"white\">" CRLF
"<center><h1>404 Not Found</h1></center>" CRLF
;

Source: https://github.com/nginx/nginx/blob/release-1.15.8/src/http/ngx_http_special_response.c#L132

but they can be customized:

server {
    ...
    error_page 404 /404.html;
    ...
}
Where does nginx store the default error pages it outputs, on disk? I.e. the standard 404 looking like:

404 Not Found
nginx

Hopefully these are not hard-coded into the nginx source. Thanks.
Where does nginx store default error pages
If every host on your server runs in its own PHP-FPM pool, then adding fastcgi_param PHP_VALUE ... to one nginx host will not affect the other ones.

If, on the other hand, all nginx hosts use one PHP-FPM pool, you should specify PHP_VALUE for every host you have (error_reporting=E_ALL for one of them, an empty value for the others), since fastcgi_param passes PHP_VALUE if specified and doesn't pass it if not. Otherwise, after a while all workers will have PHP_VALUE=error_reporting=E_ALL, unless you explicitly set PHP_VALUE in the other hosts.

Additionally, fastcgi_param PHP_VALUE ... declarations override one another (the last one takes effect).

Steps to reproduce:

apt install nginx php5-fpm

/etc/nginx/sites-enabled/hosts.conf:

server {
    server_name s1;
    root /srv/www/s1;
    location = / {
        include fastcgi.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param PHP_VALUE error_reporting=E_ERROR;
    }
}
server {
    server_name s2;
    root /srv/www/s1;
    location = / {
        include fastcgi.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

Add s1, s2 to /etc/hosts

Change pm to static, pm.max_children to 1 in /etc/php5/fpm/pool.d/www.conf

cat /srv/www/s1/index.php:

<?php var_dump(error_reporting());

systemctl restart php5-fpm && systemctl restart nginx

curl s2 && curl s1 && curl s2

int(22527)
int(1)
int(1)
One can set error_reporting in nginx.conf like so:

fastcgi_param PHP_VALUE error_reporting=E_ALL;

But if I do this in one server block, will it affect all the others as well? Should I change php settings in all server blocks simultaneously?
How to set php ini settings in nginx config for just one host
You should move that redirect into the "/" location block:

location / {
    return 301 https://$host$request_uri;
}

They do process in order, but yours is outside of any location block, so it likely takes precedence.
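Put together with the acme-challenge location from the question, the whole server block would look like this. Note that root appends the full URI, so with the original root the challenge files would be looked up under .../acme-challenge/.well-known/acme-challenge/; the sketch below uses alias instead, which maps the location prefix away:

server {
    listen 80;
    server_name example.com;

    # Necessary for Let's Encrypt Domain Name ownership validation
    location /.well-known/acme-challenge/ {
        alias /home/vagrant/.well-known/acme-challenge/;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}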
I have the following http config:

server {
    listen 80;
    server_name example.com;

    # Necessary for Let's Encrypt Domain Name ownership validation
    location /.well-known/acme-challenge/ {
        root /home/vagrant/.well-known/acme-challenge/;
    }

    return 301 https://$host$request_uri;
}

I would like http://example.com/.well-known/acme-challenge/filename to serve /home/vagrant/.well-known/acme-challenge/filename, while every other http request should be redirected to https.

I thought Nginx would process rules in order, using a rule if it matches and otherwise continuing. But apparently not. How can I achieve what I want?
Nginx: Redirect all but one
If nginx views the file system from the root, then the root should be set to /mnt/q/app/client/public, and not either of the two values you are using.

The last element of the try_files directive can be a default action (e.g. /index.html), a named location or a response code. You have a named location in the penultimate element - which will be ignored. Your named location should work, but it is unnecessary, as try_files is capable of implementing it more simply. See this document for more. For example:

root /mnt/q/app;

location / {
    root /mnt/q/app/client/public;
    try_files $uri $uri/ /index.html;
}

location /api {
}

location /auth {
}

The $uri/ element will add a trailing / to directories, so that the index directive can work - you do not have to add it if you do not need it.
I always seem to have problems with nginx configurations. My SPA is located at /mnt/q/app (pushstate is enabled) and the frontend root is located at client/public. Everything should be mapped to index.html, where the app picks up the route and decides what to do. The full path to the index is /mnt/q/app/client/public/index.html.

I think I have run out of options by now. No matter what I do, I just get a 404 back from nginx; I think the configuration is simple enough and I have no clue what's wrong.

server {
    listen 80;
    server_name app.dev;
    root /mnt/q/app;

    location / {
        root /client/public;
        try_files $uri @rewrites =404;
    }

    location @rewrites {
        rewrite ^(.+)$ /index.html last;
    }
}

Any help is appreciated.
nginx config with spa and subdirectory root
The apache user comes from the php-fpm.conf file. It does not matter that you run it as root; the service will start as the user configured in this file.

Find your php-fpm.conf file. It should be somewhere in /etc. Edit it and change the lines

user = apache
group = apache

to

user = www-data
group = www-data

I'm assuming your default nginx configuration also uses the www-data user.
On a fresh AWS Linux HVM box, I ran the commands:

sudo yum update
sudo yum install git nginx php-fpm

I then tried to sudo service start php-fpm, but got the error:

Starting php-fpm: [10-Sep-2014 20:52:39] ERROR: [pool www] cannot get uid for user 'apache'
[10-Sep-2014 20:52:39] ERROR: FPM initialization failed

Where am I going wrong, and where is the apache user coming from?
Unable to start php-fpm - "cannot get uid for user 'apache'"
You need to use the Upstream module and the Reverse Proxy module. To reverse proxy to the https upstream, use proxy_pass https://backend; where backend is an upstream block.

However, if I were doing this, I'd terminate SSL on the nginx server and let the upstream app servers do what they are good at: serving the content, instead of worrying about SSL encryption/decryption overhead. Setting up SSL termination on nginx is also very simple using the SSL module. A very good case study is also given here.
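A minimal sketch of the recommended setup — SSL terminated at nginx, plain HTTP to the upstreams (certificate paths and backend addresses are placeholders):

upstream backend {
    server 10.0.0.1:8080;   # app server 1
    server 10.0.0.2:8080;   # app server 2
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://backend;   # upstream traffic stays unencrypted
    }
}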
I am trying to set up Nginx as a load balancer for https servers. The upstreams serve over port 443 with SSL certificates configured. How do I configure Nginx so that the SSL certificate configuration is handled only on the upstream servers and not on the Nginx server?
Nginx load balance with upstream SSL
It's probably not possible. There doesn't seem to be any documentation on the nginx HttpAuthBasicModule page to suggest that you can time out Basic HTTP authentication. The HTTP specification for Authorization headers also does not specify a timeout mechanism. I don't expect you'll be able to rely on basic authentication if you need timeouts, unless you're also fronting a web application.

If you're fronting a web application, you could maintain a session in a cookie and time out the session after a period of inactivity. When the session timeout finishes, use your web application to send the following headers:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic Realm="MyApp"

That will prompt the browser to ask for credentials again. If you need access to the user's identity in your web application, you should find it in the REMOTE_USER CGI environment variable. To serve static assets efficiently using this technique, X-Sendfile might be useful.
I'm protecting my dev server using nginx and the auth_basic module, but I can't seem to find a way to specify the interval at which the 'authentication' expires. I would like to be able to force nginx to ask for the password, say, every 6 hours. Is there a way to do that? If not, what is an acceptable workaround?
nginx auth_basic time limitation
I don't think that is possible ... localhost:8080/long_polling is a URI ... more exactly, it should be http://localhost:8080/long_polling ... in HTTP the URI would be resolved as requesting /long_polling, on port 80, from the server at the domain 'localhost' ... that is, opening a TCP connection to 127.0.0.1:80 and sending

GET /long_polling HTTP/1.1
Host: localhost:8080

plus some additional HTTP headers ... I haven't heard yet that ports can be bound across processes ...

Actually, if I understand correctly, nginx was designed to be a scalable proxy ... also, they claim they need 2.5 MB for 10000 idling HTTP connections ... so that really shouldn't be a problem ...

What comet server are you using? Could you maybe let the comet server proxy a webserver? Normal http requests should be handled quickly ...

Greetz, back2dos
I need some help from some linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the url localhost:8080/long_polling for clients to connect to. My webapp runs on localhost:80.

I've used nginx to proxy requests from nginx to the comet server (localhost:80/long_polling proxied to localhost:8080/long_polling); however, I have two gripes with this solution:

nginx gives me a 504 Gateway time-out after a minute, even though I changed EVERY single time out setting to 600 seconds
I don't really want nginx to have to proxy to the comet server anyway - the nginx proxy is not built for long lasting connections (up to half an hour possibly). I would rather allow the clients to directly connect to the comet server, and let the comet server deal with it.

So my question is: is there any linux trick that allows me to expose localhost:8080/long_polling to localhost:80/long_polling without using the nginx proxy? There must be something. That's why I think this question can probably be best answered by a linux guru.

The reason I need /long_polling to be exposed on port 80 is so I can use AJAX to connect to it (ajax same-origin-policy).

This is my nginx proxy.conf for reference:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 600;
proxy_buffering off;
nginx proxy to comet
I believe it is a combination of both... You can tell that "X-Powered-By: PHP/5.3.6-6~dotdeb.1" comes from PHP and "Server: nginx" comes from NGINX.

You can alter the headers in PHP with the header() function, for example header('X-My-Header: some-value');

The gzip header most definitely comes from NGINX, as it is compressing the output (html) sent to the browser. PHP can "add" to the headers by calling a function like the one above. The server then combines them with the PHP headers and serves the request.

It depends on your server whether or not the PHP headers take precedence over the server headers.

Hope this helps.
When I simply echo something out of a php file, I do not send any headers intentionally; however, there are some default headers present anyway when I look at the response in Firebug:

Response headers:

HTTP/1.1 200 OK
Server: nginx
Date: Thu, 23 Jun 2011 19:33:51 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.3.6-6~dotdeb.1
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip

I'm curious - are these default response headers set by the server (nginx) or by PHP?
Where are these extra HTTP headers coming from?
return directives are executed before most other directives. To solve your problem you need to split this into two locations:

location /auth {
    auth_basic_user_file /etc/nginx/.htpasswd;
    auth_basic "Secret";
    try_files DUMMY @return200;
}

location @return200 {
    return 200 'hello';
}

The try_files directive is evaluated after auth_basic. The second location is evaluated only as a result of try_files.
I thought this would work, but for some reason it skips the auth_basic and always returns 200. The same happens if I swap the 200 for a 301 redirect. If I comment out the return statement it works OK. Ideally I want just an /auth endpoint that, once authenticated, will 301 redirect to another path.

location /auth {
    auth_basic_user_file /etc/nginx/.htpasswd;
    auth_basic "Secret";
    return 200 'hello';
}

Am I missing something?

Many thanks
fLo
auth_basic within location block doesn't work when return is specified?
To ensure that we can scale to multiple nodes but keep interconnectivity between different clients and different servers, I use redis. It's actually very simple to use and set up. What this does is create a pub/sub system between your servers to keep track of your different socket clients.

var http = require('http'),
    io = require('socket.io'),
    redis = require('redis'),
    redisAdapter = require('socket.io-redis'),
    port = 6379,                            // redis port
    host = '127.0.0.1',                     // redis host
    pub = redis.createClient(port, host),
    sub = redis.createClient(port, host, { detect_buffers: true }),
    server = http.createServer(),
    socketServer = io(server, { adapter: redisAdapter({ pubClient: pub, subClient: sub }) });

server.listen(3000);

Read more here: socket.io-redis

As far as handling the different node servers goes, there are different approaches:

AWS ELB (elastic load balancer)
Nginx
Apache
HAProxy

Among others...
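If nginx ends up being the balancer, the proxy side typically needs sticky sessions plus the WebSocket upgrade headers — a minimal sketch with placeholder node addresses:

upstream socket_nodes {
    ip_hash;                  # pin each client to one node
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;      # required for WebSocket
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
    }
}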
I am involved in the development of a chat project where we are using node.js, socket.io (rooms) and mongodb. We are at the performance-testing stage and we are very concerned about whether the system needs load balancing. How can we tell if our project needs it? I've researched NGINX and it looks cool, but we are in doubt whether it solves our problem: since the system will be a chat, we fear the servers will not be ~talking~ with each other correctly ... Where do we go if we need load balancing?
Chat project - load balance with socket.io
The issue is that you are missing the directory = ... in your config file:

[program:sitepro]
command = /home/user/sitepro/bin/gunicorn sitepro.wsgi:application --bind mywebsite.fr:8002
directory = /home/user/sitepro/site
user = user
autostart = true
autorestart = true

Otherwise gunicorn will not know where to find sitepro.wsgi:application.
I am trying to deploy my website with Django + Supervisor + NGINX on an Ubuntu 16.04 server. Here is my .conf (supervisor):

[program:sitepro]
command = /home/user/sitepro/bin/gunicorn sitepro.wsgi:application --bind mywebsite.fr:8002
user = user
autostart = true
autorestart = true

My NGINX config file:

server {
    listen 80;
    server_name .mywebsite.fr;
    charset utf-8;
    root /home/user/sitepro/site/sitepro;
    access_log /home/user/sitepro/site/logs/nginx/access.log;
    error_log /home/user/sitepro/site/logs/nginx/error.log;

    location /static {
        alias /home/user/sitepro/site/static;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8002;
    }
}

When I launch gunicorn from the root of my project, everything goes right:

(sitepro) user@mybps:~/sitepro/site$ gunicorn sitepro.wsgi:application --bind mywebsite.fr:8002
[2017-11-01 16:09:37 +0000] [1920] [INFO] Starting gunicorn 19.7.1
[2017-11-01 16:09:37 +0000] [1920] [INFO] Listening at: http://79.137.39.12:8002 (1920)
[2017-11-01 16:09:37 +0000] [1920] [INFO] Using worker: sync
[2017-11-01 16:09:37 +0000] [1925] [INFO] Booting worker with pid: 1925

I've done a supervisorctl reread and update (worked). But if I run supervisorctl status sitepro:

sitepro    FATAL    Exited too quickly (process log may have details)

And if I access my website I get the "Welcome to Nginx" default page. I've tried many tutorials to deploy django; I'm lost and have tried many things. Could someone give me a simple and fast tutorial to deploy Django that they have used for their own needs, please?

Thanks!
Django - Supervisor: exited too quickly
remote_addr will refer to the proxy, but you can configure the proxy to send the client address with the X-Real-IP/X-Forwarded-For header fields. Combined with the ngx_http_realip module, you can modify the incoming header to use the real client address for remote_addr. I believe this will work as expected with allow/deny syntax.

Just to clarify - the allow/deny syntax should be identical after enabling and configuring the module. Substitute your IP and your proxy addresses below.

Back-end nginx allow/deny:

location / {
    allow <your-client-ip>;
    allow 127.0.0.1;
    deny all;
}

Back-end nginx realip configuration:

set_real_ip_from <your-proxy-ip>;
real_ip_header X-Forwarded-For;

On your nginx proxy configuration:

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

If you have multiple intermediate proxies involved, you'll need to enable real_ip_recursive and whitelist additional addresses with the set_real_ip_from directive.
Nginx supports allow and deny syntax to restrict IPs, e.g. allow 192.168.1.1;. But if traffic goes through a reverse proxy, the IP will refer to the proxy's IP. So how can it be configured to whitelist a specific origin IP and deny all other incoming requests?
Nginx - Allowing origin IP
As stated by Michael, nginx balances thin (whatever that means). But another reason to use nginx in front of any ruby server is to serve static files (if you use page caching, those pages can also be served by nginx), which means requests won't even touch your ruby app, improving performance a lot. Also, although nginx seems to be the popular choice in the ruby/rails community, there are other alternatives like apache.
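A sketch of that usual division of labor (ports and paths are placeholders): nginx answers anything that exists on disk, including page-cached pages, and proxies the rest to the thin cluster:

upstream thin_cluster {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    root /var/www/app/public;

    location / {
        # static files and cached pages straight from disk,
        # everything else to thin
        try_files $uri $uri/index.html @thin;
    }

    location @thin {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://thin_cluster;
    }
}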
Why do we need to install nginx with thin on a production setup, given that thin is itself a web server? In every blog post people are using ruby+rails+nginx+thin.
Why do we need nginx with thin on a production setup?
I think your problem is that the passenger module is not present in nginx. All the passenger-dependent directives you've described (passenger_root, passenger_ruby, passenger_enabled) are available only when the passenger module is attached to nginx. This is why you have to compile nginx with --add-module='/path/to/passenger-3.0.9/ext/nginx'.

Unfortunately, I don't know of any method to enable the passenger module without re-installing nginx. And according to http://wiki.nginx.org/Modules, "Nginx modules must be selected at compile-time", so chances are there isn't a way to do that.
Rather a simple question, I believe: is it possible to install passenger when nginx is already installed on your webserver?

If the answer is yes: I already performed these actions. At this very moment I have nginx installed (for my PHP applications), and next I did a checkout of passenger's git repository:

mkdir /repositories
cd /repositories/
git clone https://github.com/FooBarWidget/passenger.git
cd passenger/

and then added this snippet to /etc/nginx/conf/nginx.conf:

http {
    ...
    passenger_root /repositories/passenger;
    passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.2-p290/ruby;
    ...
}

However, when I want to restart nginx I get the following error:

* Starting Web Server nginx
nginx: [emerg] unknown directive "passenger_root" in /etc/nginx/nginx.conf:19

Which leads me to conclude that there is still some config I need to set for nginx to be aware that we're using passenger.

My server block:

server {
    listen 80;
    server_name rails.kreatude.com;
    root /srv/www/my_test_app;
    passenger_enabled on;
}
Installing Passenger when Nginx is already installed; Possible?
There is a small point you missed in the documentation:

When variables are used in proxy_pass:

location /name/ {
    proxy_pass http://127.0.0.1$request_uri;
}

In this case, if a URI is specified in the directive, it is passed to the server as is, replacing the original request URI.

So your config needs to be changed to:

set $pass_url http://test-microservice.example.com:80$request_uri;
proxy_pass $pass_url;
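Regarding the edit about stripping the /api/test-service/ prefix: one commonly used workaround (a sketch, not verified against this exact setup) is to rewrite the URI before proxying; when proxy_pass holds a bare variable with no URI part, nginx forwards the rewritten URI:

location /api/test-service/ {
    set $backend http://test-microservice.example.com:80;
    # drop the public prefix before handing the request upstream
    rewrite ^/api/test-service/(.*)$ /$1 break;
    proxy_pass $backend;
}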
I'm trying to get NGINX's resolver to automatically update the DNS resolution cache, so I'm transitioning to using a variable as the proxy_pass value to achieve that. However, when I use a variable, all requests go to the root endpoint of the request and any additional paths of the url are cut off. Here's my config:

resolver 10.0.0.2 valid=10s;

server {
    listen 80;
    server_name localhost;

    location /api/test-service/ {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # If these 2 lines are uncommented, 'http://example.com/api/test-service/test' goes to 'http://example.com/api/test-service/'
        set $pass_url http://test-microservice.example.com:80/;
        proxy_pass $pass_url;

        # If this line is uncommented, things work as expected. 'http://example.com/api/test-service/test' goes to 'http://example.com/api/test-service/test'
        # proxy_pass http://test-microservice.example.com:80/;
    }
}

This doesn't make any sense to me because the hardcoded URL and the value of the variable are identical. Is there something I'm missing?

EDIT: Ah, so I've found the issue. But I'm not entirely sure how to handle it. Since this is a reverse proxy, I need the proxy_pass to REMOVE the /api/test-service/ from the URI before it passes it to the proxy. So...

This: http://example.com/api/test-service/test
Should proxy to this: http://test-microservice.example.com:80/test
But instead proxies to this: http://test-microservice.example.com:80/api/test-service/test

When I'm not using a variable, it drops it no problem. But the variable adds it. Is that just inherently what using the variable will do?
NGINX - Using variable in proxy_pass breaks routing
I finally found a solution to the problem you've described so well. I first made it work with URL rewriting, but that seemed a bit overkill. So, for anyone having the same problem, it appears the cleanest solution is to replace this:

proxy_set_header Host $host;

with this:

proxy_set_header Host $http_host;

With this setup, Nginx will keep the port in your redirections, no matter what your firewall configuration is. Hope this helps. Cheers!
I want to keep the ServerName and Port dynamic in my rewrite. Let's say the firewall redirects port 8081 to 80. So if I access the webserver with, for example, "192.168.1.123/frontend" or "my.domain.tld:8081/frontend", I should be redirected to "192.168.1.123/frontend/" or "my.domain.tld:8081/frontend/".

If I use the normal redirect rewrite ^(.*[^/])$ $1/ permanent; and I access with port 8081, the port gets removed. (I already tried port_in_redirect off;)

I use almost the default configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;

    rewrite ^(.*[^/])$ $1/ permanent;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}

Thank you in anticipation!

SOLUTION: Thanks to the NGINX mailing list! I fixed this problem with a rewrite rule:

if (-d $request_filename) {
    rewrite [^/]$ $scheme://$http_host$uri/ permanent;
}
Prevent NGINX from removing the port
The ROOT_URL environment variable should be set to the URL that clients will be accessing your application with. So in your case, it would be http://gentlenode.com or https://gentlenode.com.

The ROOT_URL environment variable is read by Meteor.absoluteUrl, which is used in many (core) packages. Thus, setting ROOT_URL may be a requirement if you use these packages. spiderable is one such package.

// Line 62 of spiderable_server.js
var url = Spiderable._urlForPhantom(Meteor.absoluteUrl(), req.url);
I'm having some problems making spiderable work with PhantomJS on my Ubuntu server. I saw this troubleshooting note on Meteorpedia:

Ensure that the ROOT_URL that your Meteor server is configured to use is accessible from the server itself. (Since v0.8.1.3[1])

I think that this could be a possible answer to why it is not working. What exactly is the purpose of this environment variable? My application is publicly accessible on http://gentlenode.com/ but my proxy_pass on nginx is set to http://gentlenode/.

# HTTPS Server
server {
    listen 443;
    server_name gentlenode.com;
    # ...

    location / {
        proxy_pass http://gentlenode/;
        proxy_http_version 1.1;
        # ...
    }
}

Should I set ROOT_URL to http://gentlenode.com/, to http://gentlenode/, or to http://localhost/? You can find my nginx configuration here: https://gist.github.com/LeCoupa/9877434
Meteor - What is the purpose of "ROOT_URL" and to what should it be defined?
Passenger standalone is good enough to run in production, but it may be easier to use the OS packages instead:

Installation is usually as simple as yum install or apt-get install.
The OS packages usually include all the appropriate startup scripts, like /etc/init.d/nginx.
You don't have to write scripts to make sure it starts up after rebooting. Ubuntu will set that up automatically, and on CentOS/RedHat it's just a one-time call to chkconfig.
Opening ports 80 and 443 usually requires root, but your app should execute as your regular unprivileged user. The OS packages handle this automatically.
Running a shared copy of nginx means you can run multiple sites/apps from the same server, by different users if needed.

It seems that Passenger is already based on the Nginx core, but I see there is also a version passenger-nginx. What's the difference between them if they are both based on Nginx?

There is almost no difference. Passenger standalone just automates setting up nginx (if you don't already have it) and passenger-nginx. Passenger standalone typically starts as your regular unprivileged user on port 3000 or another high port number, while nginx typically starts as root using ports 80 and 443.
Excuse me if my question seems inappropriate, but I was unable to find any information about it. I am currently choosing a production web server for my rails app. Passenger seems to fit my needs perfectly, but there is a small question that popped into my head: it seems that Passenger is already based on the Nginx core, but I see there is also a version passenger-nginx. What's the difference between them if they are both based on Nginx? Thank you in advance.
Passenger and Nginx or Passenger Standalone only?
Sorry, I don't have a step-by-step tutorial, but here is a high-level overview that might help:

You probably want to go with the Apache server (http://httpd.apache.org/). It comes with most *nix distributions.

You then want to use mod_python (or, as the commenter pointed out, mod_wsgi: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/) to connect to Django: http://docs.djangoproject.com/en/dev/howto/deployment/modpython/?from=olddocs. Once you complete this step, Apache is fronting for Django.

Next you want to collect the static files in your Django project into one directory and point Apache at that directory. You can do this using ./manage.py collectstatic if you used django.contrib.staticfiles (http://docs.djangoproject.com/en/dev/howto/static-files/).

So the trick is that you're not telling Django to delegate serving static files to a specific server. Rather, you're telling httpd which urls are served via Django and which urls are static files. Another way of saying this: all requests come to the Apache web server, and the webserver, according to the rules you specify in httpd.conf, decides whether the request is for a static file or for a dynamic page generated by Django. If it is a static file, it simply serves the file. If the request is for a dynamic page, it passes the request to Django via mod_python.

Hope that helps.
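Since the question explicitly mentions nginx as an option, here is the same idea expressed as an nginx config rather than the Apache setup described above (a sketch; the paths and backend port are placeholders) — static files straight from disk, everything else to Django:

server {
    listen 80;
    server_name example.com;

    # files gathered by ./manage.py collectstatic
    location /static/ {
        alias /srv/myproject/static/;
    }

    # everything else goes to the Django backend (e.g. gunicorn/uwsgi on 8000)
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}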
Does anyone have a simple step-by-step tutorial about serving static files for a Django production app? I read the Django docs and it sounds really complicated... I'm trying to go the route of serving static files using a different server like lighttpd, nginx, or cherokee, but setting these up is all Greek to me. I downloaded lighttpd, tried to follow the instructions to install, and within a few seconds got an error. Missing this or that or whatnot... I'm not a UNIX whiz and I'm not very good at C/C++, so all this ./configure and MAKE install is gibberish to me... So I guess my immediate questions are:

Which server would you recommend to serve static files that's easy to install and easy to maintain?
Assuming I actually get the server up and running, then what? How do I tell Django to look for the files on that other server?

Again, does anyone have step-by-step tutorials? Thanks a lot!
serving static files on Django production tutorial
In theory, you could have a PV in RWX mode mounted to both express and ingress and provide custom config to the nginx-ingress pods, but that should be avoided. The Ingress Controller has one responsibility - implementing the Ingress rules defined in your cluster. To serve static content you should have a pod that does that, which indeed means running a second nginx in your stack. The thing is, you should treat your ingress controller as part of the infrastructure providing generic cluster functionality, while serving static files from some place (or container, if they are versioned/built as docker images) is de facto part of your application.
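For the static-content pod itself, a bare-bones nginx config is usually all that's needed — a sketch, assuming the built assets are copied or mounted into the default html directory:

server {
    listen 80;
    root /usr/share/nginx/html;   # where the static assets live in the container

    location / {
        try_files $uri $uri/ =404;
        expires 7d;               # static assets can be cached aggressively
    }
}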
I have an express web server with static files. Let's call this my express-deployment. I'd like to use my ingress-nginx to serve static files from my express-deployment without ever actually hitting my express server. In nginx this is done with the location directive, where you point to locally hosted files. While I see an option for locations-snippet in the ingress-nginx configMap, I'm not entirely sure how I would have this point to files in another container. Is this possible with ingress-nginx? If so, how would I go about it? Alternatively, is this something that requires an nginx container to be hosted alongside my express server? (It seems odd that I would need 2 nginx instances for that.)
serving static files from ingress-nginx
I think this solution may be better (if you are using something like Express or something similar that uses the "middleware" logic): add a middleware function to change the url, like so.

rewriter.js:

// prefixes every incoming request with '/data'
module.exports = function temp_rewrite() {
    return function (req, res, next) {
        req.url = '/data' + req.url;
        next();
    }
}

In your Express app do it like so:

// your configuration
app.configure(function () {
    ...
    app.use(require('./rewriter.js').temp_rewrite());
    ...
});

// here are the routes
// notice you don't need to write '/data' in front anymore all the time
app.get('/', function (req, res) {
    res.send('This is actually site.com/data/');
});

app.get('/example', function (req, res) {
    res.send('This is actually site.com/data/example');
});
I'm currently trying out nginx and nodejs with connect, running nodejs proxied behind nginx. The problem I have is that I currently don't run nodejs under the root (/) but under /data, as nginx should handle the static requests as normal. nodejs should not have to know that it's under /data, but that seems to be required. In other words: I want nodejs to "think" it runs at /. Is that possible?

nginx config:

upstream app_node {
    server 127.0.0.1:3000;
}

server {
    ...
    location /data {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_node/data;
        proxy_redirect off;
    }
}

nodejs code:

exports.routes = function(app) {
    // I don't want "data" here. My nodejs app should be able to run under
    // any folder
    app.get('/data', function(req, res, params) {
        res.writeHead(200, { 'Content-type': 'text/plain' });
        res.end('app.get /data');
    });

    // I don't want "data" here either
    app.get('/data/test', function(req, res, params) {
        res.writeHead(200, { 'Content-type': 'text/plain' });
        res.end('app.get /data/test');
    });
};
Running nodejs under nginx
My guess is that your upstream server (either apache or your script) triggered a redirect to the absolute url http://localhost/test/register/. Because you use http://127.0.0.1 in your proxy_pass directive, nginx doesn't find a match of the domain name and returns the Location header as is.

I think the right solution is to NOT use an absolute redirect if the redirect is to an internal url. This is always good practice. However, without changing the upstream server, there are two quick solutions.

You can use:

proxy_pass http://localhost;

This tells nginx that the domain name of the upstream is localhost. Then nginx will know to replace http://localhost by http://correct.name.gr:8000 when it finds that part in the Location header from upstream.

Another one is to add a proxy_redirect line to force nginx to rewrite any Location header with http://localhost/ in it:

proxy_pass http://127.0.0.1;
proxy_redirect http://localhost/ /;

I prefer the first solution because it's simpler. There is no DNS lookup overhead in using proxy_pass http://localhost; because nginx does the lookup in advance, when it starts the web server.

Reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
I am using the following configuration for nginx 1.4.1:

server {
    listen 8000;
    server_name correct.name.gr;

    location /test/register {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1;
    }
}

What I want to do is: when users visit http://correct.name.gr:8000/test/register/, they should be proxied to the apache which runs on port 80.

When I visit http://correct.name.gr:8000/test/register/ I get correct results (index.php).
When I visit http://correct.name.gr:8000/test/register/asd I get correct results (404 from apache).
When I visit http://correct.name.gr:8000/test/asd I get correct results (404 from nginx).
When I visit http://correct.name.gr:8000/test/register123 I get correct results (404 from apache).

The problem is when I visit http://correct.name.gr:8000/test/register. I get a 301 response and I am redirected to http://localhost/test/register/ (notice the trailing slash and of course the 'localhost')!!!

I haven't done any other configuration of nginx to add trailing slashes or anything similar. Do you know what the problem is? I want http://correct.name.gr:8000/test/register to work correctly by proxying to apache (or, if that's not possible, at least to issue a 404 error and not a redirect to the user's localhost).

Update 1: I tried http://correct.name.gr:8000/test/register from a different computer than the one where I saw the bad behavior yesterday. Well, it worked: I just got a 301 response that pointed me to the correct http://correct.name.gr:8000/test/register/! How is it possible for it to work from one computer but not from the other (I'm using the same browser, Chrome, on both computers)? I will test again tomorrow from a third one to see the behavior.

Thank you!
nginx and trailing slash with proxy pass
I found a way to achieve it, poorly. Studying http://laravel.com/docs/master/filesystem deeply, I didn't find a valid answer to get Plupload working. Instead, I found a poor solution, which is to use something like:

foreach ($request->file("files") as $file) {
    Storage::put(
        'test' . rand(1, 100),
        file_get_contents($file->getRealPath())
    );
};

in my controller; that enables multiple file uploading to S3. But it isn't asynchronous, and its UX is worse. I'll keep working to get Plupload enabled eventually. I will post updates here.
I'm testing a plupload upload to S3 on an Nginx server. The whole project is based on Laravel. The problem is that when I start an upload, Chrome's console says:

XMLHttpRequest cannot load http://BUCKET.s3.amazonaws.com/. Response for preflight is invalid (redirect)

I have tried using PHP headers to enable CORS, but the error still occurs. Current uploading script:

<?php
$policy = base64_encode(json_encode(array(
    'expiration' => date('Y-m-d\TH:i:s.000\Z', strtotime('+1 day')),
    'conditions' => array(
        array('bucket' => $bucket),
        array('acl' => 'public-read'),
        array('starts-with', '$key', ''),
        array('starts-with', '$Content-Type', ''),
        array('starts-with', '$name', ''),
        array('starts-with', '$Filename', ''),
    )
)));
$signature = base64_encode(hash_hmac('sha1', $policy, $secret, true));
?>
....
....
....

I checked this to enable CORS on Nginx, but the fact is that when I put that snippet in my location / block, it just returns a 404 when I open the URL. My Nginx site configuration file:

server {
    listen 80;
    server_name myserver.com;
    root /usr/share/nginx/myserver/public;
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/myapp-error.log error;
    sendfile off;
    client_max_body_size 100m;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }

    location ~ /\.ht {
        deny all;
    }
}

Thank you so much
Laravel+Plupload uploading to S3 response for preflight is invalid - CORS
Maybe http://pecl.php.net/bugs/bug.php?id=17689 or bug id #18138.
I am getting a 502 Bad Gateway from Nginx on a line of PHP code that works fine in other places of my program ($this->provider = new OAuthProvider();), and that has worked fine before. This is the message I get in the Nginx error log for each 502:

recv() failed (104: Connection reset by peer) while reading response header from upstream

In the PHP-FPM log there is a warning for each 502:

[WARNING] [pool www] child 17427 exited on signal 11 SIGSEGV after 142070.657176 seconds from start

After trying a number of changes to nginx.conf I am stuck, and I would very much appreciate any pointers on what to do next. I'm running Nginx 0.7.67 and PHP 5.3.2 on Ubuntu 10.04.
Nginx + PHP-FPM 502 Bad Gateway