Try removing `try_files $uri $uri/ =404;`. Remove or comment out the `try_files` line completely and it will work.
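For illustration, this is roughly what the server block from the question looks like with that line gone (a sketch, not a verified config): Passenger routes any request that doesn't match a static file to the Rails app, while a `try_files ... =404` fallback short-circuits that routing, so every non-root route 404s.

```nginx
server {
    listen 80;
    server_name staging.redacted.com;

    passenger_enabled on;
    passenger_app_env staging;

    keepalive_timeout 300;
    client_max_body_size 4G;

    root /var/www/staging.redacted.com/current/public;
    # try_files removed: let Passenger handle routing for non-static URIs
}
```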
Nginx + Passenger is only showing the root page /; all other routes fail with 404 Not Found. I know nginx and Passenger must be running, as they correctly bring up the root path. Here's the entry for sites-available:

```nginx
server {
    listen 80;
    server_name staging.redacted.com;  # Replace this with your site's domain.
    passenger_enabled on;
    passenger_app_env staging;
    keepalive_timeout 300;
    client_max_body_size 4G;
    root /var/www/staging.redacted.com/current/public;  # Set this to the public folder location of your Rails application.
    #try_files $uri/index.html $uri.html $uri;
    try_files $uri $uri/ =404;
}
```

routes.rb — even though they all work and pass their specs:

```ruby
Rails.application.routes.draw do
  resources :users
  resources :brand_api_keys
  resources :ir_service_buckets
  resources :ir_service_images
  resources :ir_services do
    resources :ir_service_buckets do
      resources :ir_service_images
    end
  end
  resources :devices do
    resources :scans
  end
  resources :images do
    resources :scans
  end
  resources :campaigns do
    resources :images
  end
  resources :agent_brands
  resources :brands do
    resources :brand_api_keys
    resources :ir_service_buckets do
      resources :ir_service_images
    end
    resources :agent_brands
    resources :campaigns
  end
  resources :agents do
    resources :brand_api_keys
    resources :agent_brands
  end
  root 'agents#index'
end
```
nginx, passenger only showing root page, all other routes fail
```nginx
location ~ Histats {
    return 404;
}
```

PS: `if` is evil.
I want to return 404 for all requests to URLs that contain the string `Histats`. How can I do that?
How do I redirect all requests that contain a certain string to 404 in nginx?
Turns out CodeIgniter sets its own max size. I haven't figured out how to limit that, but changing nginx won't change anything, unfortunately. Thanks for all the help, VBart and gsharma.
"upstream sent too big header while reading response header from upstream"I keep getting this when I try and do an authentication from facebook. I've increased my buffers:proxy_buffer_size 256k; proxy_buffers 8 256k; proxy_busy_buffers_size 512k; fastcgi_buffers 8 256k; fastcgi_buffer_size 128k;But it doesn't seem to help. Any thoughts as to why this might occur?nginx.conf file:user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { include /etc/nginx/mime.types; proxy_buffer_size 256k; proxy_buffers 8 256k; proxy_busy_buffers_size 512k; fastcgi_buffers 8 256k; fastcgi_buffer_size 128k; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }/etc/nginx/sites-enabled/defaultserver { listen 80 default; server_name localhost; access_log /var/log/nginx/localhost.access.log; location / { root /var/www/nginx-default; index index.html index.htm; } location /doc { root /usr/share; autoindex on; allow 127.0.0.1; deny all; } location /images { root /usr/share; autoindex on; } }
nginx big header response
```nginx
# initialize mobile
set $mobile "";
if ($request_uri !~* "^/mobile.*" ) {
    set $mobile Y;
}
if ($http_user_agent ~* (iPhone|iPod|android|blackberry) ) {
    set $mobile "${mobile}E";
}
if ( $http_referer !~* "xxx\.org" ) {
    set $mobile "${mobile}S";
}
if ( $host ~* "xxx\.org" ) {
    set $mobile "${mobile}S";
}
if ($mobile = YESS) {
    rewrite ^ $scheme://$host/mobile$request_uri;
}
```
I am trying to set up my nginx to redirect all requests from mobile devices to /mobile/$uri. I came up with this solution but it doesn't seem to work. Is it a syntax problem or a misunderstanding of the whole redirecting concept?

```nginx
if ($http_user_agent ~* '(iPhone|iPod|android|blackberry)') {
    rewrite ^(.*) http://xxxx.org/mobile/$1 permanent;
}
```

When I use my Android phone I am getting something like xxx.org/mobile/mobile/mobile/mobile... Any ideas? Any suggestions?
nginx: redirect mobile requests to /mobile/$uri
Actually I think it is much easier to change the nginx rewrite rules than to write middleware for Django to do this. After reading up on how nginx processes its location matching (most exact → least exact), I created locations for /media and /download as well as a catch-all location for /. I then moved the rewrite rule under the / location and simplified it; as I'm no longer worried about checking for files (this entire location is passed to Django), the rule becomes:

```nginx
set $subdomain "";
set $subdomain_root "";
if ($host ~* "^(.+)\.domain\.com$") {
    set $subdomain $1;
    set $subdomain_root "/profile/$subdomain";
    rewrite ^(.*)$ $subdomain_root$1;
    break;
}
```

and would probably be even simpler if my nginx/regex scripting was better :)
I currently have the following (hacky) rewrite rule in my nginx.conf to allow dynamic sub-domains to be redirected to one Django instance:

```nginx
set $subdomain "";
set $subdomain_root "";
set $doit "";
if ($host ~* "^(.+)\.domain\.com$") {
    set $subdomain $1;
    set $subdomain_root "/profile/$subdomain";
    set $doit TR;
}
if (!-f $request_filename) {
    set $doit "${doit}UE";
}
if ($doit = TRUE) {
    rewrite ^(.*)$ $subdomain_root$1;
    break;
}
```

I'm sure there is a more efficient way to do this, but I need to change this rule so that any requests to `*.domain.com/media/*` or `*.domain.com/downloads/*` go to `domain.com/media/*` and `domain.com/downloads/*`.
Complex nginx rewrite rules for subdomains
No, Nginx cannot.
Is it possible to configure nginx to run a unix command based on a URL? For example:

```
http://localhost/list/usr/local
```

This runs:

```bash
ls /usr/local
```

Then returns the results?
Can nginx run a unix command based on url?
After searching, I created a file outside the container called `client_max_body_size.conf` with the contents `client_max_body_size 25m;` (or whatever) and bind-mounted it into the nginx-proxy container (the host path below is wherever you created the file):

```bash
docker run -d --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock \
  -v /path/to/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro \
  -p 80:80 jwilder/nginx-proxy
```
In my architecture I use jwilder/nginx-proxy as a proxy server in my Docker setup, and then I installed 3 WordPress websites with MySQL. They are working well, but jwilder/nginx-proxy has a default upload limit of 2MB and my WordPress template is about 20MB. When I try to upload this template:

```
413 Request Entity Too Large nginx/1.13.6
```

In addition, I used the code below to install jwilder/nginx-proxy:

```bash
docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
```

How can I configure the upload limits? Regards
How can I change the docker jwilder/nginx-proxy upload limits?
I managed to figure this out myself. Nginx really does cache the knowledge that the upstream server is inaccessible. To fix this I changed my cache config from

```nginx
proxy_cache_valid any 5m;
```

to

```nginx
proxy_cache_valid 5m;
```

Removing `any` implies you only want to cache 200, 301, and 302 responses.
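If you prefer to keep an explicit `proxy_cache_valid` line, something like the following should be equivalent to the fix above (a sketch, listing the statuses explicitly rather than relying on the shortened form):

```nginx
# Cache only 200/301/302 responses; 502s from an unreachable upstream are not cached.
proxy_cache_valid 200 301 302 5m;
```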
I'm using Nginx proxy_cache to cache responses from a Node server. When that server is offline Nginx returns a custom 502 page. All going well so far...

When the Node server comes back online, Nginx keeps returning the 502 page for 5 mins (my cache time). If I delete all the files in the cache dir it makes no difference. If I restart Nginx it busts the cache and starts serving real content again.

Is it normal for Nginx to cache 502s? Note that it's not caching a 502 response from the backend server; it's caching the fact that the server isn't accessible. What can I do to stop this?
Nginx proxy_cache caches 502 errors
Convention: You should keep location and server declarations in virtual host files (`/etc/nginx/conf.d/*.conf` and `/etc/nginx/sites-enabled/*`, as you can see from the nginx conf). Files in `/etc/nginx/sites-available/` are typically symlinked into `/etc/nginx/sites-enabled/` in order to become "enabled".

Some things to try: See my blog post, which has a setup similar to yours. Try moving your `index index.html index.php` directive outside of a `location {}` block.
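A minimal sketch of that last suggestion, assuming the rest of the server block from the question stays as-is (the fastcgi location is unchanged):

```nginx
server {
    # index moved out of the location block; it is inherited by all locations below
    index index.html index.php;

    location / {
    }

    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```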
I'm new to nginx and I just can't determine why my nginx config doesn't work as expected. All I want to do is to make nginx prioritize index.html over index.php for every web root (/) request. This is my nginx config:

```nginx
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    server {
        location / {
            index index.html index.php;
        }

        location ~ \.php$ {
            fastcgi_pass localhost:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    keepalive_timeout 15;
    keepalive_requests 100000;
    types_hash_max_size 2048;

    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    client_max_body_size 300M;

    server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ----------------- cut ---------------

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

Where's my error? What's the correct way to write this nginx config?
nginx location index directive not working
Because you already have an Ingress in place with path `/`, there is no way of disabling the basic auth on your `https://externalprovider/oauth2/auth`. For the best explanation, please refer to the answer provided in the linked question. To do that, you need to set up another Ingress and configure it to disable basic auth. You can also check this question on Stack Overflow: Two ingress controllers on same K8S cluster, and this one: Kubernetes NGINX Ingress: Disable external auth for specific path.
I would like to be able to disable external authorization for a specific path of my app. Similar to this SO question: Kubernetes NGINX Ingress: Disable Basic Auth for specific path. The only difference is that I'm using an external auth provider (OAuth via Microsoft Azure), and there is a path that should be reachable by the public: `/MyPublicPath`.

My ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myIngressName
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://externalprovider/oauth2/sign_in
    nginx.ingress.kubernetes.io/auth-url: https://externalprovider/oauth2/auth
    nginx.ingress.kubernetes.io/auth-request-redirect: https://myapp/context_root/
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User, X-Auth-Request-Email, X-Auth-Request-Access-Token, Set-Cookie, Authorization
spec:
  rules:
  - host: myHostName
    http:
      paths:
      - backend:
          serviceName: myServiceName
          servicePort: 9080
        path: /
```

Can I have it not hit the `https://externalprovider/oauth2/auth` URL for just that path? I've tried using `ingress.kubernetes.io/configuration-snippet` to set `auth_basic` to the value "off", but that appears to be tied to the basic auth directives, not the external ones.
Kubernetes NGINX Ingress: Disable external auth for specific path
TL;DR: This scary error message proves the conversion(s) are actually being received by Twitter, so thankfully there's not much to worry about.

I'm getting the same console error when implemented as per Twitter's Google Tag Manager instructions. Clearing out cookies didn't help in my instance. In fact, the same error shows up on Twitter's own help pages!

Here's the offending function in the minified uwt.js script, exposed as `twttr.conversion.buildPixel()`:

```javascript
buildPixel: function(e) {
    var t = new Image;
    t.src = e
}
```

User agents will queue up a request for the `Image` as soon as its `src` property is set, and often expect a valid image in response. Twitter's servers however provide `content-type: text/html; charset=utf-8` as a response header.

The latest version of Chrome obviously doesn't like loading text/html into instances of `Image`, but could probably log a nicer error message, especially since the response also includes a `Content-Length: 0` header indicating there's nothing to see.
I'm trying to load Twitter's tracking pixel on my Meteor/NodeJS website. The code they provide is:

```javascript
!(function(e, t, n, s, u, a) {
  e.twq ||
    ((s = e.twq = function() {
      s.exe ? s.exe.apply(s, arguments) : s.queue.push(arguments);
    }),
    (s.version = "1.1"),
    (s.queue = []),
    (u = t.createElement(n)),
    (u.async = !0),
    (u.src = "//static.ads-twitter.com/uwt.js"),
    (document.body.appendChild(u)));
})(window, document, "script");
twq("init", "MY-TRACKING-ID");
twq("track", "PageView");
```

It loads fine but returns the following error in the console:

```
Refused to execute script from 'https://analytics.twitter.com/i/adsct?p_id=Twitter...' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
```

This is the exact same problem as https://twittercommunity.com/t/analytics-tracking-pixel-error-was-blocked-due-to-mime-type-mismatch-x-content-type-options-nosniff/83583/2, but while that thread is unresolved, he's running the Twitter tracking pixel on that site now, which suggests it's a server configuration issue.

Looking at the code, the uwt.js file from Twitter requests a script from https://analytics.twitter.com/i/adsct, which Chrome is preventing from running. This answer suggests it's either a MIME type config issue (I'm running Nginx) or a header issue, but removing `X-Content-Type-Options: nosniff` and restarting Nginx had no effect.

Any idea how to fix or better troubleshoot this?
Twitter tracking pixel causing MIME type error
I needed to add this to my nginx block:

```nginx
proxy_set_header X-Forwarded-Proto https;
```

🙈
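In context, the location block from the question would look roughly like this (a sketch; using `$scheme` instead of the hard-coded `https` is an assumption that also works when the same block serves plain HTTP):

```nginx
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;  # what Express's 'trust proxy' reads to set req.protocol
    proxy_pass http://127.0.0.1:1234;
    proxy_redirect off;
}
```

Note that `app.enable('trust proxy')` (or an equivalent `trust proxy` setting) still needs to be active on the Express side for `req.protocol` to honor the header.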
I'm trying to recognise whether my Express app is serving over an https protocol. Using nginx to handle the certification and encryption (on the same machine) and forward requests, `req.protocol` evaluates to `http` even when https is being used and working fine.

I've tried both of the following (individually):

```javascript
app.set('trust proxy', 'loopback');
```

and

```javascript
app.enable('trust proxy');
```

Yet `req.protocol` still reports `http`. What gives?

Here's `req.headers`:

```javascript
{ 'x-real-ip': '196.38.239.10',
  'x-forwarded-for': '196.38.239.10',
  host: 'idwork.co',
  'x-nginx-proxy': 'true',
  connection: 'close',
  'content-length': '0',
  'cache-control': 'no-cache',
  origin: 'file://',
  'content-type': 'application/json',
  'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Postman/4.3.2 Chrome/47.0.2526.73 Electron/0.36.2 Safari/537.36',
  'postman-token': 'redacted',
  accept: '*/*',
  'accept-encoding': 'gzip, deflate',
  'accept-language': 'en-US' }
```

Here are my relevant(?) nginx forwarding rules:

```nginx
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:1234;
    proxy_redirect off;
}
```
req.protocol never gives https behind nginx proxy
Edit the file {kibana-directory}/config/kibana.yml. Find this line:

```yaml
port: 5601
```

and change it to:

```yaml
port: 80
```
I have Elasticsearch 1.4 and Kibana 4 running on an Amazon EC2 instance running RHEL7. Kibana 4 is running as a standalone process and is not deployed in a web container such as nginx. It is listening on port 5601 (the default port). I would like to have Kibana listen on port 80. Can this be achieved without using nginx? If yes, how?
Kibana4 to listen on Port 80 instead of Port 5601
I ended up with this solution: you simply start several php-cgi processes and bind them to different ports, and you need to update the nginx config:

```nginx
http {
    upstream php_farm {
        server 127.0.0.1:9000 weight=1;
        server 127.0.0.1:9001 weight=1;
        server 127.0.0.1:9002 weight=1;
        server 127.0.0.1:9003 weight=1;
    }
    ...
    server {
        ...
        fastcgi_pass php_farm;
    }
}
```

For the sake of convenience, I created simple batch files. start_sandbox.bat:

```bat
@ECHO OFF
ECHO Starting sandbox...
RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9000 -c php\php.ini
RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9001 -c php\php.ini
RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9002 -c php\php.ini
RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9003 -c php\php.ini
RunHiddenConsole.exe mysql\bin\mysqld --defaults-file=mysql\bin\my.ini --standalone --console
cd nginx && START /B nginx.exe && cd ..
```

and stop_sandbox.bat:

```bat
pstools\pskill php-cgi
pstools\pskill mysqld
pstools\pskill nginx
```

As you can see, there are 2 dependencies: pstools and RunHiddenConsole.exe.
I'm currently using nginx and PHP FastCGI, but that arrangement suffers from the limitation that it can only serve one HTTP request at a time (see here). I start PHP from the Windows command prompt by doing:

```
c:\Program Files\PHP>php-cgi -b 127.0.0.1:9000
```

However, there is another way to run PHP known as "FastCGI Process Manager" (PHP-FPM). When running on Windows 7 behind nginx, can PHP-FPM handle multiple simultaneous HTTP requests?
Can Windows PHP-FPM serve multiple simultaneous requests?
This is how I route EVERYTHING to index.php, including sub-directory requests, HTTP args, etc.

```nginx
location / {
    try_files $uri $uri/ /index.php?$args;  # if it doesn't exist, send it to index.php
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_intercept_errors on;
    # By all means use a different server for the fcgi processes if you need to
    fastcgi_pass 127.0.0.1:9000;
}
```

So for example, these get sent to index.php:

```
http://foo.bar/something/
http://foo.bar/something/?something=1
```

While these go directly to files:

```
http://foo.bar/someotherphp.php
http://foo.bar/assets/someimg.jpg
```
I'm migrating my server from Apache to Nginx and have this very simple .htaccess rule:

```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]
```

The idea behind it is to direct every request to a front controller (index.php). I'm trying to do the same with Nginx. I used an online converter to make this Nginx location block:

```nginx
location / {
    if (!-e $request_filename){
        rewrite ^(.*)$ /index.php break;
    }
}
```

but when I add it to my site's configuration, Nginx just spits out the source code of the PHP file as a download. For reference, here's the entire configuration file: http://pastebin.com/tyKtM1iB

I know PHP works, as if I remove the location block and make a file with `<?php phpinfo();` it works correctly. Any help would be appreciated.
Routing requests through index.php with nginx [closed]
I've deployed a couple of simple applications on Linode and found their documentation to be excellent. In particular they have step-by-step tutorials tailored to specific environments. For example, in my case (like you) I wanted to use nginx, and I was using Ubuntu 10.04, so I followed this guide: http://library.linode.com/frameworks/ruby-on-rails-nginx/ubuntu-10.04-lucid

If it's your first time setting up on a VPS there will be some hurdles certainly, but I found the experience to be very rewarding.

Regarding hosting your code, you have a number of options, but keep in mind that this is really a separate issue from deploying your app. You deploy your app on Linode, but you don't have to host your code there, although you certainly can.

In general terms, if you're okay with making your code open, then certainly GitHub is a good choice. If you want to keep the code private but still have access online (rather than just on one computer), you can take advantage of your Linode machine and host your code there.

If you will have a number of other people contributing to the codebase, you might consider setting up gitosis or gitolite, which make it easy to do this. Alternatively, if you will be the main user contributing to the codebase, you can set up a simpler configuration over HTTP, explained here: http://dev.bazingaweb.fr/2011/02/23/how-to-set-up-git-over-http.html

Linode also has documentation on setting up a remote git repository: https://library.linode.com/linux-tools/version-control/git

If you're choosing between gitosis and gitolite, I'd go with gitolite, since gitosis appears to have been abandoned and is no longer being actively maintained.

Other references on deploying on Linode:

http://infinite-sushi.com/2011/01/deploying-a-rails-app-to-a-linode-box/
http://blog.chris-spencer.co.uk/from-zero-to-git-deployment-on-linode
I'm planning to host a Rails application on Linode, but I'm still unsure about the requirements and process of deploying. I'm only getting the 512 plan since I'm expecting relative small traffic for the site.My question is, do I need to get a repository such as Github to store my code? I'm also a bit concerned about how long it takes to set the server up and the deployment process. I've browsed through the Linode library but I'm not entirely clear on how to deploy Rails apps. I'm planning to use nginx as my server and passenger for deploying. Does anyone know where I can learn to deploy Rails applications on a Linode machine? A step-by-step tutorial with detailed explanation would be great. Thanks!
Hosting a Rails Application on Linode
If you don't know why you need Nginx or Apache on top of Node.js, then you don't need it. Nginx does a few things faster (and in some cases easier to configure) than Node.js: proxying, URL rewriting, HTTP caching, redirection, static file serving, and load balancing. If you find that your Node.js code for any of these roles is growing complex, or turns out to be a performance bottleneck, it's worth investigating. Until then, no need to bother.
Possible duplicate: Why do we need Apache under the Node.js Express web framework?

I wonder why I should install a server such as Nginx or Apache with Node.js. I used to think that the server could help me handle cache control or something more. But I found out that the Connect static middleware already does that, right?
Why install server (Nginx, Apache...) with Node.js? [duplicate]
You need to add `proxy_set_header Authorization "Basic ....";` where the `....` is the base64 of `user:pass`.
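A concrete sketch with hypothetical credentials (`user:pass` is a placeholder; the base64 value is just `echo -n 'user:pass' | base64`):

```nginx
upstream supportbackend {
    server support.yadayada.com;
}

location /deleteuser {
    # "dXNlcjpwYXNz" is base64 of the placeholder "user:pass"
    proxy_set_header Authorization "Basic dXNlcjpwYXNz";
    proxy_pass http://supportbackend;
}
```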
I want to pass a request to an upstream server. The original URL is not password protected, but the upstream server is. I need to inject a Basic auth username/password into the request, but I get errors when doing:

```nginx
upstream supportbackend {
    server username:password@support.yadayada.com;
}
```

and

```nginx
upstream supportbackend {
    server support.yadayada.com;
}

location /deleteuser {
    proxy_pass http://username:password@supportbackend;
}
```
Nginx proxy_pass to a password protected upstream
OK, for me it worked to use

```bash
docker build . -f Dockerfile
```

instead of

```bash
docker build - < Dockerfile
```

(which was the suggestion from the official Docker documentation, by the way: https://docs.docker.com/engine/reference/commandline/build/#tarball-contexts). The solution was taken from GitHub: https://github.com/moby/moby/issues/34986#issuecomment-343680872
When I try to build the following (simple) NGINX Docker container, https://github.com/MarvAmBass/docker-nginx-ssl-secure, it always fails with the following error:

```
stat /var/lib/docker/tmp/docker-builder00Whatever/basic.conf: no such file or directory
```

My directory looks like this:

```
├── basic.conf
├── Dockerfile
├── entrypoint.sh
├── LICENSE
├── README.md
└── ssl.conf
```

Running the command `sudo docker build - < Dockerfile` as root user doesn't change a thing. Does anyone have a solution here?
Docker build from Dockerfile shows "ADD failed - No such file or directory"
You should run this:

```bash
user@user ~ $ sudo netstat -tulpn | grep --color :80
```

It will show you the process ID:

```
tcp6  0  0 :::80  :::*  LISTEN  2063/apache2
```

Here `2063/apache2` is PID/process name.
I'm trying to start nginx as follows:

```
kurt@kurt-ThinkPad:~$ which nginx
/usr/sbin/nginx
kurt@kurt-ThinkPad:~$ sudo /usr/sbin/nginx
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] still could not bind()
```

Following this EasyEngine tutorial, I tried to kill the process using port 80 with `fuser -k`:

```
kurt@kurt-ThinkPad:~$ sudo fuser -k 80/tcp
80/tcp:  31924 31925 31926
```

However, after re-running `sudo /usr/sbin/nginx` I get exactly the same error message. I've tried a couple of other 'diagnostics' described here, using `fuser`, `lsof -i`, and `netstat`:

```
kurt@kurt-ThinkPad:~$ fuser 80/tcp
kurt@kurt-ThinkPad:~$ lsof -i :80 | grep LISTEN
kurt@kurt-ThinkPad:~$ netstat -tulpn | grep --color :80
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp   0   0 0.0.0.0:80      0.0.0.0:*   LISTEN   -
tcp   0   0 127.0.0.1:8080  0.0.0.0:*   LISTEN   -
tcp   0   0 0.0.0.0:8060    0.0.0.0:*   LISTEN   -
```

Only the `netstat` command gives a result, but I wasn't able to infer a process ID from it. Any ideas on how to get nginx to work?
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) even after killing the process using port 80
Unfortunately the Nginx API gateway is featured only in Nginx Plus. But there is Kong, an alternative that is built on top of Nginx and is also open source. There is even a post on Nginx's official blog about it: https://www.nginx.com/blog/nginx-powers-kong-api-management-solution/

Edit: The blog post has been replaced by an Nginx Plus one. The original post can still be accessed through web.archive.org: https://web.archive.org/web/20160413082936/https://www.nginx.com/blog/nginx-powers-kong-api-management-solution/
Does NGINX (open source, not Nginx Plus) support an API gateway? Please help me find valid documentation or information. Thank you!
Does NGINX (open source, not Nginx Plus) support an API gateway?
You need to check what you have set in the /etc/php5/fpm/pool.d/www.conf file on the `request_terminate_timeout` line. I had:

```ini
request_terminate_timeout = 300s
```

This is why it always stopped working after 5 min (300s = 5 min). After I changed it to

```ini
request_terminate_timeout = 3600s
```

my problem was gone. I now have 60 min to complete my Ajax request :)

PS: Make sure that you remove the `;` before that line, because it is used to comment the line out.
I had a problem with Ajax. It always stopped working 5 min after the request. I didn't know what was causing it. I looked at many pages to find a solution, but none provided a good one. What can I do?
Why am I getting "recv() failed (104: Connection reset by peer) while reading response header from upstream" during an Ajax request?
Your first `server` block needs a `root` directive to resolve local files. See this document for more.
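A sketch of what that could look like, assuming the HTTP block should share the same document root as the HTTPS block:

```nginx
server {
    listen 80;
    server_name domain.com www.domain.com;
    root /var/www/domain.com/public_html;   # added so /.well-known/acme-challenge/* resolves on port 80

    location ~ /.well-known {
        allow all;
    }

    location / {
        rewrite ^(.*)$ https://domain.com$1 permanent;
    }
}
```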
Trying to get Let's Encrypt set up using the webroot method, which creates and needs to access files in the ./.well-known/acme-challenge/ directory. Everything there (including the manual test file I added) shows up as 404. Going kind of crazy, as I've tried variants of:

```nginx
location ~ /.well-known {
    allow all;
}

location /.well-known/acme-challenge {
    default_type text/plain;
}

location /.well-known {
    try_files $uri $uri/ =404;
}
```

with no luck. I've also checked permissions on the folders and even set them to 777. I'm pretty new to setting up nginx config, so I'm sure there's an existing condition that's throwing it off:

```nginx
server {
    listen 80;
    server_name domain.com www.domain.com;

    location / {
        rewrite ^(.*)$ https://domain.com$1 permanent;
    }

    location ~ /.well-known {
        allow all;
    }
}

server {
    listen 0.0.0.0:443 ssl;
    root /var/www/domain.com/public_html;
    index index.php index.html index.htm;
    server_name domain.com www.domain.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        client_max_body_size 32m;
    }

    location ~ /.well-known {
        allow all;
    }
}
```
Nginx permission issue (404)
I would use the `$uri` variable and `if` in a `location` block to achieve this:

```nginx
location / {
    if ( $uri !~ ^/(index\.php|css|images|core|uploads|js|robots\.txt|favicon\.ico) ) {
        rewrite ^ /server/index.php last;
    }
}
```

Also, as for the pathinfo security problems (discussion), it's good practice to add

```nginx
try_files $uri =403;
fastcgi_split_path_info ^(.+\.php)(.*)$;
```

to the `location ~ \.php$` block.
First of all, I have tried to search for similar questions, but the solutions to those questions were specific lines of code that I couldn't customise to fit my needs.

I have a CodeIgniter installation, and I'm trying to migrate from Apache to nginx. However, in Apache the .htaccess was pretty simple: it would take a whitelist, and rewrite everything else to index.php:

```apache
RewriteEngine on
RewriteCond $1 !^(index\.php|css|images|core|uploads|js|robots\.txt|favicon\.ico)
RewriteRule ^(.*)$ /index.php/$1 [L]
```

However, in nginx I have tried out the `if` and `try_files` directives, as well as messing around with locations, to no avail. I'm still new to how nginx reads the server config, and the tutorials online were somewhat confusing to follow.

Additionally, index.php will not be in the web root, but in a subdirectory `server`. Because of this, I also need to make sure even URI requests beginning with /server do not go to the directory, but to index.php.

This is my nginx virtual host configuration so far:

```nginx
server {
    listen 80;
    server_name example.com;
    root /home/example/public_html;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        index index.htm index.html index.php;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/example.sock;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~* ^.*(/|\..*) {
        try_files $uri $uri/ /server/index.php;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```

This helps to redirect requests to index.php, but doesn't have a whitelist. I would appreciate it if anyone could produce a working example with a brief explanation of what each part does.
nginx rewrite all to index.php except whitelist
I think it is not possible to use a symlinked public directory. The only workaround I can imagine is to symlink every file and directory inside of the public dir:

```bash
# public folder: /data/public
# app folder: /webapp/
mkdir -p /webapp/public && ln -sf /data/public/* /webapp/public/
```

For every new file or directory in /data/public you have to run this command again.
I'm putting together a Rails deployment where the public directory is a symlink to another directory on the system. This is with Passenger 3 on nginx 0.8. It doesn't seem to like that setup. Nginx always follows symlinks by default, so AFAIK it's not a matter of doing the equivalent of Apache's +FollowSymLinks.

Update: Looks like this is covered here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#application_detection

"Note that Phusion Passenger for Nginx does not resolve any symlinks in the root path. So for example, suppose that your root points to /home/www/example.com, which in turn is a symlink to /webapps/example.com/public. Phusion Passenger for Nginx will check for /home/www/config/environment.rb, not /webapps/example.com/config/environment.rb. This file of course doesn't exist, and as a result Phusion Passenger will not activate itself for this virtual host, and you'll most likely see some output generated by the Nginx default directory handler such as a Forbidden error message. Detection of Rack applications happens through the same mechanism, except that Phusion Passenger will look for config.ru instead of config/environment.rb."

So I wonder if some proper symlinking of config.ru might do the trick.
Can my /public directory be a symlink with rails 3 + passenger 3 + nginx 0.8?
Try the following:

```bash
echo "deb http://security.ubuntu.com/ubuntu bionic-security main" | sudo tee -a /etc/apt/sources.list.d/bionic.list
sudo apt update
apt-cache policy libssl1.0-dev
sudo apt-get install libssl1.0-dev
```
I'm trying to install nginx on an Ubuntu 20.04 AWS EC2 server by doing:

```bash
sudo apt update
sudo apt upgrade
sudo apt install nginx
```

However, the last command fails:

```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 nginx : Depends: libssl1.0.0 (>= 1.0.2~beta3) but it is not installable
E: Unable to correct problems, you have held broken packages.
```

Any ideas on how to resolve this? Thanks in advance.
Problem installing nginx on ubuntu 20.04 AWS EC2 node
You could try using Terraform's helm provider:

```hcl
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.your_cluster.kube_config.0.host
    client_key             = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.client_key)
    client_certificate     = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.client_certificate)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.cluster_ca_certificate)
  }
}

data "helm_repository" "stable" {
  name = "stable"
  url  = "https://kubernetes-charts.storage.googleapis.com"
}

resource "helm_release" "nginix_ingress" {
  name       = "nginx_ingress"
  repository = data.helm_repository.stable.metadata.0.name
  chart      = "stable/nginx-ingress"
  namespace  = "kube-system"
}
```

If your cluster is already created, you will have to import it as well using a data source. `helm_release` also supports custom values. Here is the link if you need more information.
How can I create an nginx ingress in Azure Kubernetes using Terraform? Earlier, in this link, I remember seeing some steps as a mandatory installation for all setups; right now it seems to be removed, and there is a specific way of installing for AKS in this link. Should I rewrite all of these to adapt to Terraform, or is there any other smart way of installing the nginx ingress for AKS through Terraform?
How to create nginx ingress in terraform - aks
The instructions you are referring to are for a compiled installation. Assuming you want to add the module to your existing NGINX install, below are the generic steps that will get things running (condensed into a command sketch at the end of this answer):

1. Fetch the exactly matching version of NGINX, as the one you have installed, from nginx.org onto your system and extract it to, say, /usr/local/src/nginx
2. `git clone` the NGINX module's source code onto your system, to e.g. /usr/local/src/nginx-module-foo
3. `cd /usr/local/src/nginx`. This is where you will find the `configure` script. You will basically configure NGINX with the location of the specific module in question, thus the next step:
4. `./configure --add-dynamic-module=../nginx-module-foo --with-compat`
5. `make`

As a result of the compilation you will have the module's .so file somewhere in the objs directory of your NGINX sources. You will then copy it over to e.g. the /usr/lib64/nginx/modules/ directory. To make your existing NGINX load the module, add `load_module modules/foo.so;` at the very top of /etc/nginx/nginx.conf.

You can see the many downsides to the whole compiled approach: one is having compilation software (gcc) on a production system, another is having to re-do all those steps any time you upgrade NGINX or the module. For those reasons, you might want to search for a packaged install of third-party modules.

For CentOS/RHEL systems, you might want to look at the GetPageSpeed repos (subscription-ware, and I'm biased to mention it because I'm the maintainer; but it is free for CentOS/RHEL 8 at the time of this writing). Installing the module you want goes down to a couple of commands:

```bash
yum -y install https://extras.getpagespeed.com/release-latest.rpm
yum -y install nginx-module-substitutions
```

For Debian-based systems, there are probably alternative PPAs providing the same.
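Condensed into a command sketch (the version number and paths are illustrative; match the tarball to the output of `nginx -v` on your system):

```bash
cd /usr/local/src
wget https://nginx.org/download/nginx-1.18.0.tar.gz
tar xzf nginx-1.18.0.tar.gz
git clone https://github.com/yaoweibin/ngx_http_substitutions_filter_module.git
cd nginx-1.18.0
./configure --add-dynamic-module=../ngx_http_substitutions_filter_module --with-compat
make modules
cp objs/ngx_http_subs_filter_module.so /usr/lib64/nginx/modules/
# then at the very top of /etc/nginx/nginx.conf:
#   load_module modules/ngx_http_subs_filter_module.so;
```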
When running `nginx -t` I get this error:

```
nginx: [emerg] unknown directive "subs_filter_types" in /etc/nginx/sites-enabled/my.site.com.conf:285
nginx: configuration file /etc/nginx/nginx.conf test failed
```

So I need to install the substitutions filter module, per the nginx documentation: https://www.nginx.com/resources/wiki/modules/substitutions/#subs-filter-types

which says to run these commands:

```bash
git clone git://github.com/yaoweibin/ngx_http_substitutions_filter_module.git
./configure --add-module=/path/to/module
```

The problem is I don't have the configure script anywhere in my nginx installation, nor in the git repository. I really don't understand. At the very least I want to know the content of that nginx configure script.
How to install a module on nginx?
Can Lambda do it? Yes. Should Lambda do it? No. Why? Cost.

First, let's say you do handle 20k requests/second, every second, for an entire day. That equates to 1.728 billion requests in that day. In the free tier you get 1 million requests free, so that drops the billable requests down to 1.727 billion. Lambda charges $0.20 per million requests, so:

    1.727 billion requests * $0.20 / million requests = $345.40

I'm pretty sure your cost for EC2 is lower than that per day. Taking the m4.16xlarge instance with on-demand pricing, we get:

    $3.20 / hour * 24 hours = $76.80

See the difference? But Lambda also charges for compute time!

Let's say you include the C++ executable in your Lambda function (called from Python or Node, so we won't take into account the performance hit going from C++ to an interpreted language). Since Lambda charges in 100-millisecond blocks, rounded up, for this estimate we will assume that all requests finish within 100 milliseconds.

Say you use the smallest memory size, 128 MB. That gives you 3.2 million seconds within the free tier, or 32 million requests free, given that they all finish under 100 milliseconds. That still leaves you with 1.696 billion billable requests. The cost for the 128 MB size is $0.000000208 per 100 milliseconds. Given that each request finishes under 100 milliseconds, the cost for the execution time will be:

    $0.000000208 / 100 ms * 1.696 billion 100-ms units = $352.77

Adding that to the cost of the requests, you get:

    $345.40 + $352.77 = $698.17

EC2: $76.80
Lambda: $698.17

Note, this is just using the 20k requests/second number that you gave, and is for a single day. If the actual number of requests differs, the requests take longer than 100 milliseconds, or you need more memory than 128 MB, the cost estimate will go up or down accordingly.

Lambda has its place, but so does EC2. Just because you can put it on Lambda doesn't mean you should.
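For what it's worth, the arithmetic above can be reproduced in a few lines (2017-era prices; purely illustrative):

```python
# Back-of-the-envelope Lambda cost for one day at 20k req/s, 128 MB, <= 100 ms per request.
requests = 20_000 * 86_400                            # 1.728 billion requests/day
request_cost = (requests - 1_000_000) / 1e6 * 0.20    # $0.20 per million after 1M free
billable_units = requests - 32_000_000                # free tier: 3.2M seconds = 32M 100-ms units at 128 MB
duration_cost = billable_units * 0.000000208          # $ per 100-ms block at 128 MB
ec2_cost = 3.20 * 24                                  # m4.16xlarge on-demand, per day
print(f"Lambda: ${request_cost + duration_cost:,.2f}/day vs EC2: ${ec2_cost:,.2f}/day")
```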
On a project I'm working on, there are a number of web services implemented on AWS. The services that are relatively simple (DynamoDB insert or lookup) and will be used relatively infrequently have been implemented as Lambdas, which were perfect for the task. There is also a more complex web service that does a lot of string processing and regex matching and needs to be highly performant. It has been implemented in C++ (roughly 5K LOC) as an Nginx module and can handle in the region of 20K requests/s running on an EC2 instance (the service takes in a small JSON payload, does a lot of string processing and regex matching against some reference data that sits in static data files on S3, and returns a JSON response under 1KB in size).

There is a push from management to unify our use of AWS services and have all the web services implemented as Lambdas.

My question is: can a high-performance web service such as the C/C++ nginx compiled module running on EC2, which is expected to run continuously and handle 20K to 100K req/s, actually be converted to AWS Lambda (in Python) and be expected to have the same performance, or is this better left as-is on EC2? What are the performance considerations to be aware of when converting to Lambda?
Converting a high performance web service from Nginx on AWS EC2 to AWS Lambda
This is the configuration that I'm using; it's working OK:

```nginx
server {
    listen 0.0.0.0:20007;
    index index.html;
    root /full/path/to/site;

    # pass the request to the node.js server with the correct headers
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://your_app/;
        proxy_redirect off;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
I'm having trouble getting go.net/websocket to work behind nginx. It works if the application is accessed directly, but with nginx I get an EOF error from Receive. What am I doing wrong?

Nginx version: 1.5.10. This is my nginx configuration:

```nginx
location /wstest/ {
    proxy_pass http://localhost:7415/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade "websocket";
    proxy_set_header Connection "Upgrade";
    proxy_buffering off;
}
```

Go code:

```go
func main() {
    http.HandleFunc("/", home)
    http.Handle("/sock", websocket.Handler(pingpong))
    http.ListenAndServe(":7415", nil)
}

func home(w http.ResponseWriter, r *http.Request) {
    homeTmpl.Execute(w, nil)
}

func pingpong(conn *websocket.Conn) {
    var msg string
    if err := websocket.Message.Receive(conn, &msg); err != nil {
        log.Println("Error while receiving message:", err)
        return
    }
    if msg == "ping" {
        websocket.Message.Send(conn, "pong")
    }
}

var homeTmpl = template.Must(template.New("home").Parse(` WS Test Pinging... `))
```
Unable to get go.net/websocket working behind nginx
Maybe this works:

```nginx
server {
    listen 80;
    server_name www.domain.net domain.net;

    location / {
        rewrite "^$" http://www.domain.com/ permanent;
    }
}
```
I'd like to do an Nginx rewrite where I have two domains, domain.com and domain.net, with the following rules:

1) If a user goes to http://www.domain.net/, he will be redirected to http://www.domain.com/
2) If a user goes to http://www.domain.net/anything_else.html, the rewrite will not occur.

This is my failed attempt:

```nginx
server {
    listen 80;
    server_name www.domain.net domain.net;

    location / {
        rewrite / http://www.domain.com/ permanent;
    }
}
```

The correct format would be much appreciated!
Nginx rewrite only when root domain
I found the solution. The problem was with my requests: I was using localhost in the URL, and with that I hit the wrong pod IP. I just changed the requests to use the service IP directly, and that sorted out my problem.
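For anyone hitting the same thing: inside the cluster, the proxy target has to be something that resolves to the API's Service, not localhost. A sketch (service name, namespace, and port are hypothetical placeholders, not taken from the question):

```nginx
location /api {
    # Target the Kubernetes Service rather than localhost. The service DNS name
    # (resolved by kube-dns) is generally more robust than a raw ClusterIP.
    proxy_pass http://my-api.default.svc.cluster.local:8080;
}
```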
I'm trying to deploy my Angular application with Kubernetes, inside a container with nginx. I created my Dockerfile:

```dockerfile
FROM node:10-alpine as builder
COPY package.json package-lock.json ./
RUN npm ci && mkdir /ng-app && mv ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN npm run ng build -- --prod --output-path=dist

FROM nginx:1.14.1-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

My nginx config:

```nginx
server {
    listen 80;

    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass https://my-api;
    }
}
```

If I launch this image locally it works perfectly, but when I deploy this container inside a Kubernetes cluster the site loads fine while all API requests show the error ERR_CONNECTION_REFUSED. I'm trying to deploy in GCP: I build the image and then publish it through the GCP dashboard. Any ideas about this ERR_CONNECTION_REFUSED?
Connection Refused with nginx and kubernetes
According to the nginx documentation, the default access log format is:

```nginx
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```

Applied to your log line:

```
$remote_addr     = 172.68.244.173
-                  (literal string for compatibility reasons)
$remote_user     = - (from Auth header)
$time_local      = [24/Aug/2018:12:14:04 +0000]
$request         = "\x16\x03\x01\x00\xEC\x01\x00\x00\xE8\x03\x03\x8A?\xB5\xFA\x17?\x8A\x9B\x04T>yK\x1A\xF6\x8F_\xBE:.\xF9\xED\xF6\xEE\xFCM\xD0\x88Ji\xDD\xF5 \xFF\xBDm\x98@mo:U\xA6\x0E\xB7\x93\x02sm`\xC6\xD1s0vV*\x88y\xDA&\xFCfZ\xF4\x00\x16\x13\x01\x13\x02\x13\x03\xC0+\xC0/\xC0\x13\x00\x9C\x00/\xC0(\x005\x00"
$status          = 400
$body_bytes_sent = 173
$http_referer    = "-"
$http_user_agent = "-"
```

To summarize: your server received a request from the address 172.68.244.173 with no User-Agent header sent, and the request consisted mostly of non-printable characters. There is a slight possibility this is a broken client sending a bad request; more likely it's an attempt to discover a vulnerability in your web server or application. This will happen often to any server on the internet.
My server is built with Docker. The Nginx container is built from a standard image. I want to read nginx's access.log, but I see this kind of content:

```
172.68.244.173 - - [24/Aug/2018:12:14:04 +0000] "\x16\x03\x01\x00\xEC\x01\x00\x00\xE8\x03\x03\x8A?\xB5\xFA\x17?\x8A\x9B\x04T>yK\x1A\xF6\x8F_\xBE:.\xF9\xED\xF6\xEE\xFCM\xD0\x88Ji\xDD\xF5 \xFF\xBDm\x98@mo:U\xA6\x0E\xB7\x93\x02sm`\xC6\xD1s0vV*\x88y\xDA&\xFCfZ\xF4\x00\x16\x13\x01\x13\x02\x13\x03\xC0+\xC0/\xC0\x13\x00\x9C\x00/\xC0(\x005\x00" 400 173 "-" "-"
```

How do I read such a log? What does this mean?
How to read nginx access.log?
If all the permissions under the myproject_app folder are correct, and the centos user or nginx group has access to the files, I would say it looks like a Security-Enhanced Linux (SELinux) issue.

I had a similar problem, but with RHEL 7. I managed to solve it by executing the following command:

```bash
sudo semanage permissive -a httpd_t
```

It's related to the security policies of SELinux: you have to add httpd_t to the list of permissive domains. This post from the NGINX blog may be helpful: NGINX: SELinux Changes when Upgrading to RHEL 6.6 / CentOS 6.6.

Motivated by a similar issue, I wrote a tutorial a while ago on How to Deploy a Django Application on RHEL 7. It should be very similar for CentOS 7.
I'm working on a Django project deployment. I'm working on a CentOS 7 server provided by EC2 (AWS). I have tried to fix this bug in many ways, but I can't understand what I am missing. I'm using nginx and gunicorn to deploy my project. I have created my /etc/systemd/system/myproject.service file with the following content:

```ini
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=centos
Group=nginx
WorkingDirectory=/home/centos/myproject_app
ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application

[Install]
WantedBy=multi-user.target
```

When I run `sudo systemctl restart myproject.service` and `sudo systemctl enable myproject.service`, the django.sock file is correctly generated in /home/centos/myproject_app/.

I have created my nginx conf file in the folder /etc/nginx/sites-available/ with the following content:

```nginx
server {
    listen 80;
    server_name my_ip;
    charset utf-8;
    client_max_body_size 10m;
    client_body_buffer_size 128k;

    # serve static files
    location /static/ {
        alias /home/centos/myproject_app/app/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/centos/myproject_app/django.sock;
    }
}
```

After that, I restart nginx with the following command: `sudo systemctl restart nginx`. If I run the command `sudo nginx -t`, the response is:

```
nginx: configuration file /etc/nginx/nginx.conf test is successful
```

When I visit my_ip in a web browser, I'm getting a 502 Bad Gateway response. If I check the nginx error log, I see the following message:

```
1 connect() to unix:/home/centos/myproject_app/django.sock failed (13: Permission denied) while connecting to upstream
```

I really have tried a lot of solutions changing the sock file permissions, but I can't understand how to fix it. How can I fix this permissions bug? Thank you so much.
Nginx: Permission denied to Gunicorn socket on CentOS 7
Yes, it's a buffering issue. If you are using several workers, each worker has its own buffer.

Ways to improve (see the sketch after this list):

- disable buffering
- decrease the buffer size (1)
- add the `flush` option, if flushes to disk are still rare
- create your own log collector with sorting (nginx can speak the syslog protocol, for example)

But usually you don't need to care about the order of log records. Log analytics systems will sort them themselves.

(1) For Linux systems the buffer size must not exceed the size of an atomic write to a disk file. In modern Linux it's 64k. Well, I'm not 100% sure about this size because the available information is very inconsistent. But if you find broken lines in the log, decrease this size.
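As a sketch, the first three mitigations map onto the `access_log` directive like this (values are illustrative):

```nginx
# smaller buffer plus a bounded flush interval
access_log /var/log/nginx/access.log main buffer=64k flush=1s;

# or no buffering at all (one write per completed request)
# access_log /var/log/nginx/access.log main;
```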
Records in my nginx log file are out of order. (Edit: by "out of order" I mean chronologically, e.g. log lines for 2017-02-21 09:13:26 will often be before lines for 2017-02-21 09:13:45.) Perhaps a certain amount of out-of-order records is to be expected, because they are logged after a request is completed, not when received. But this is a far higher number of requests being logged out of order, including known short (fast) requests for small static files.

Is this a known side effect of using buffered logging, or can this be improved? For a more complete picture, here are some other config params.

In nginx.conf:

```nginx
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
```

In the config file for the virtual host:

```nginx
server {
    # The backlog parameter matches the sysctl net.core.somaxconn setting. Default value is 511 on Ubuntu.
    listen 80 backlog=30000;
    server_name www.example.com;
    access_log /var/log/nginx/access.log main buffer=128k;
    error_log /var/log/nginx/error.log;
    root /var/www/html/website;
    ...
}
```
nginx logs are out of order, probably due to buffered logging
Multiple `fastcgi_param` statements (at the same block level) setting the same parameter will silently use the value from the last statement. This includes statements read via an `include` directive. Always declare `fastcgi_param` statements after the `include fastcgi_params;` statement to avoid any ambiguity in your configuration files.
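Applied to a typical PHP location block, the safe ordering looks like this (socket path is illustrative):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    include fastcgi_params;                  # include first...
    fastcgi_param SERVER_NAME example.com;   # ...so this statement wins over the one in fastcgi_params
}
```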
I have nginx.conf with the following structure:

```nginx
http {
    [ ... ]
    server {
        [ ... ]
        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SERVER_NAME $host;
            fastcgi_read_timeout 3000;
            include fastcgi_params;
        }
    }
}
```

This nginx runs inside Docker, so it has no idea which domain is linked to it (there is an nginx reverse proxy on the hosting system). But I have a problem: when I try to access `$_SERVER['SERVER_NAME']` from PHP, it's empty... How can I set it to a constant value? When I tried

```nginx
fastcgi_param SERVER_NAME example.com;
```

it's still empty. Please note that I have to use SERVER_NAME, because it's used in third-party code.
Set constant SERVER_NAME with nginx
You need to set a document root with the `root` directive, either within your `location ~ \.php$` block or inherited from the outer `server` block. The solution may be to move the `root c:/Users/Youri/PhpstormProjects;` line out of your `location /` block into a position above it.

Usually `fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;` is the correct method to specify the full path to the script, whereas SCRIPT_NAME is usually just the last element.

Like this:

```nginx
server {
    ...
    root c:/Users/Youri/PhpstormProjects;

    location / {
        index index.html index.htm index.php;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```
Running php-cgi on port 9000. Netstat gives me:

```
TCP  127.0.0.1:9000  DESKTOP-xxxxxxx:0  LISTENING  [php-cgi.exe]
```

nginx.conf: http://pastebin.com/wkfz8wxw

Every PHP file gives me this "No input file specified." error... I changed SCRIPT_FILENAME to SCRIPT_NAME with no success. I am on Windows 10 Home x64.
Nginx, fastcgi PHP Windows, No input file specified
The `client_header_buffer_size` directive is not available within the "location" context. You'll also need to move `large_client_header_buffers`. Move them into the "server" context and it'll work:

```nginx
server {
    client_header_buffer_size 1k;
    large_client_header_buffers 2 1k;

    location / {
        client_body_buffer_size 10K;
        client_max_body_size 8m;
        proxy_pass http://127.0.0.1:8000;
    }
}
```

Ref: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
I have the nginx server running fine with this config:

```nginx
server {
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```

But when I try to modify the buffer sizes it fails:

```nginx
server {
    location / {
        client_body_buffer_size 10K;
        client_header_buffer_size 1k;
        client_max_body_size 8m;
        large_client_header_buffers 2 1k;
        proxy_pass http://127.0.0.1:8000;
    }
}
```

I get this error:

```
Reloading nginx configuration: nginx: [emerg] "client_header_buffer_size" directive is not allowed here
```

Any suggestions?
Trying to increase nginx buffer
Did you take a look at Server-Sent Events? Since you are initiating your request via Ajax, you aren't performing bidirectional communication; you only want the server to push you updates when it has them.
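A minimal sketch of the SSE approach in PHP, assuming progress is stored in the database keyed by job ID as described in the question (`fetchProgressFromDb()` is a hypothetical helper):

```php
<?php
// progress.php — the client subscribes with: new EventSource('/progress.php?job=42');
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

$jobId = isset($_GET['job']) ? (int) $_GET['job'] : 0;
while (true) {
    $progress = fetchProgressFromDb($jobId);  // hypothetical: reads percent complete from MySQL
    echo "data: {$progress}\n\n";
    @ob_flush();
    flush();
    if ($progress >= 100) {
        break;                                // job finished; the browser can close the stream
    }
    sleep(1);
}
```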
Scenario

I am developing some sort of web-based cloud storage service. One feature is that the user can initiate transcoding of video files (so that they can be streamed on different devices). This takes some time and I want to display a progress bar to the user.

My plan is to submit the job using Ajax, where it is written into a database. The Ajax call returns the ID of the job in the database, and this ID will be used as the channel for notifications. So when the job has been submitted, the client subscribes to the channel "job-databaseID" on some self-hosted WebSocket server. Transcoding workers then periodically select pending jobs from the database table and process them. While processing, they push their progress to the WebSocket server on the same channel the client is listening on.

The front-end application should be a website with JavaScript and jQuery. The back end should be programmed in PHP and MySQL with an Apache or nginx web server.

Question

Is this a proper way of using WebSockets? Usually I see WebSockets employed in a one-to-many notification scenario. Here it is a one-to-one notification scenario. Are there maybe better alternatives for this kind of one-way information flow?

Also, I often see channels in WebSocket scenarios being more or less long-lived. Here they are very short-lived. Would it maybe make more sense to create one channel per user?

What would be a good WebSocket server for that kind of use? Ideally the channels would be auto-removed once no client is connected to them any more, and auto-created the same way, so I don't need to take care of that.
Are websockets the right technology to be used to update progress bars for the client and how to implement it? [closed]
I just ran into this issue, too. As long as I did not escape the spaces and used single or double quotes, I was able to use a root path with spaces.

The following works:

```nginx
root "/directory/with spaces not escaped/will work";
```

This does not work:

```nginx
root "/directory/with\ escaped\ spaces/will\ not\ work";
```
I have tried escaping (and not escaping), with (and without) single quotes and double quotes, but I always end up on the 404 page. Is it even possible? I tried searching for it, but only landed on https://serverfault.com/questions/361915/how-can-i-make-nginx-recognise-directories-with-spaces-in-its-name; I already tried that, and it didn't work. If this is just me, then please prove it to me :)
Is it possible to have a root path containing spaces in my nginx.conf
Add this to your `server` block:

```nginx
port_in_redirect off;
```

E.g.

```nginx
server {
    listen 80;
    server_name localhost;
    port_in_redirect off;
}
```

Documentation reference.

You should also change server_name to myName; `server_name` should be your domain name. You should also be listening on port 80, and then use proxy_pass to redirect to whatever is listening on port 8000. The finished result should look like this:

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name www.myweb.com;

        location / {
            proxy_pass http://localhost:8000/;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```

Comments were removed for clarity.
I'm trying to set up a domain for my Node project with nginx (v1.5.11). I have successfully redirected the domain to the web, but I need to use port 3000, so now my web location looks like http://www.myweb.com:3000/ and of course I want to keep only the "www.myweb.com" part, like this: http://www.myweb.com/

I have searched for and tried many configurations, but none seem to work for me, and I don't know why. This is my local nginx.conf file; I want to change the http://localhost:8000/ text to http://myName/. Remember that the redirect is working; I only want to "hide" the port in the location.

```nginx
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 8000;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            proxy_pass http://localhost:8000/;
            proxy_redirect http://localhost:8000/ http://myName/;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```

PS: I'm trying to fix it on my local Windows 8 machine, but if another OS is required, my remote server runs Ubuntu 12.04 LTS. Thanks, all.
Can't hide location's port with nginx
According to the nginx docs, add these lines to your configuration file:
access_log /path/to/your/logs/nginx_access.log;
error_log /path/to/your/logs/nginx_error.log info;
To log with supervisor, you can add lines to your configuration file like this:
[program:program]
command=/virtualenv/python /path/to/django/source/manage.py run_gunicorn --log-file /path/to/your/logs/gunicorn.log
stdout_logfile=/path/to/your/logs/supervisor.log
As you see, the gunicorn log is specified in the log-file parameter. Finally, in the django settings you can set up logging according to the docs.
I am logging all the caught errors in the django app in the django logger. Where do the errors that do not get caught go? They should go to the supervisor log file, in my opinion. But that is empty.
Error Logging in Nginx+Gunicorn+Supervisor+Django
If this if is created inside the location / block, for example, create a separate location /some-page; this way the if won't be executed when the URI is /some-page.
EDIT: OK, let me explain what I understood, and you tell me if I'm right or wrong:
Good IP (yours): serve the page as it is
Bad IP (not yours): redirect to /some-page
The problem is, when a Bad IP is redirected to /some-page, it still redirects to /some-page again because it's still a Bad IP, so it passes the if test.
My solution: remove the /some-page location from the / block:
location / {
    # bla bla
    if ($remote_addr != 127.0.0.1) {
        rewrite ^ http://www.example.com/some-page;
    }
    # rest of bla bla
}
location /some-page {
    try_files index.html index.php; # or whatever
}
When a Bad IP is forwarded to /some-page it will no longer execute the if condition, so that will end the infinite redirection loop.
Second EDIT: You could set the permissions in nginx itself, let me demonstrate:
location / {
    error_page 403 = @badip;
    allow 127.0.0.1;
    deny all;
    #rest of bla bla
}
location @badip {
    return 301 $scheme://example.com/some-page;
}
I want all nginx requests that aren't made by my IP address to be redirected to /some-page. I'm currently using this nginx config:
if ($remote_addr != 127.0.0.1) { rewrite ^ http://www.example.com/some-page; }
This works for me as I'm not redirected, but anyone else is stuck in a redirect loop since the block doesn't check if the request is for /some-page.
How can I fix this? I'm not sure how to check the request path.
Redirecting all requests that aren't from my IP with nginx
The following should do what you want:
server {
    listen 80;
    root /var/www/mysite;
    location = / { try_files /index.html =404; }
    location / { rewrite ^ / permanent; }
}
I'm closing down a website and I need nginx to redirect all users to the root, not just show them the same page on all website urls. For now I have this:
server { listen 80; root /var/www/mysite; rewrite ^.*$ /index.html last; }
However this doesn't redirect, but rather shows the index.html content everywhere. How do I do a redirect so that mysite.com/somepage would redirect to mysite.com which, in turn, would show the index.html page?
How to tell nginx to redirect all pages of the website to root?
No, error log writes aren't buffered: nginx writes each error log entry as it occurs. (Access log writes, by contrast, can be buffered explicitly with the buffer= parameter.)
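For reference, a minimal sketch of the contrast (paths, format name and sizes are illustrative):
# access log entries may be held in a buffer and flushed periodically:
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
# error_log has no buffer parameter; each message is written immediately:
error_log /var/log/nginx/error.log warn;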
We're looking to watch nginx error logs for modifications but are having some difficulty accounting for edge cases such as file truncations, etc. It would be helpful to know if nginx writes its error log files on the fly or if it buffers writes to error logs. Buffering wouldn't make a lot of sense for error logs, but could still be the case to ensure high performance in nginx. We know that nginx buffers access log writes but can't currently find evidence that it does the same for error logs.
Does nginx buffer its error logs?
With gzip_disable MSIE [1-6].(?!.*SV1); you've disabled gzip for almost any browser which has digits in its User-Agent, as there are two separate regular expressions: "MSIE" and "[1-6].(?!.*SV1)". Add quotes around it, or better, use this instead: gzip_disable msie6;
See the docs for details.
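For context, a minimal sketch of the gzip section with the quoted form (only these directives shown; values are illustrative):
gzip on;
gzip_comp_level 6;
# quote the pattern so nginx parses it as a single argument;
# "msie6" is a built-in shortcut matching old MSIE user agents:
gzip_disable "msie6";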
This is the relevant portion of my nginx.conf, but I'm not sure why, when I check with a gzip compression checker or the HTTP headers, the content is not compressed. https://pasify.com
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 0;
    #keepalive_requests 5;
    #keepalive_timeout 65;
    send_timeout 10m;
    # output compression saves bandwidth
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
    gzip_buffers 16 8k;
    # Disable gzip for certain browsers.
    gzip_disable MSIE [1-6].(?!.*SV1);
    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
    ## Detect when HTTPS is used
    map $scheme $fastcgi_https {
        default off;
        https on;
    }
    client_max_body_size 20M;
}
May I know what the problem is?
Nginx server content gzip compression not working
Create two server entries with different listen and ssl_certificate(_key) directives, using the different IP addresses but the same root where your shared web pages are stored. For example:
server { listen 1.2.3.4:443; server_name first-domain.example; root /srv/html/shared_domain_data; ssl on; ssl_certificate /etc/nginx/ssl/first_domain.pem; ssl_certificate_key /etc/nginx/ssl/first_domain_key.pem; }
server { listen 9.8.7.6:443; server_name second-domain.example; root /srv/html/shared_domain_data; ssl on; ssl_certificate /etc/nginx/ssl/second_domain.pem; ssl_certificate_key /etc/nginx/ssl/second_domain_key.pem; }
It's called nothing special.
I would like to have two domains, each with their own SSL cert, each SSL cert has its own IP of course, to point to the same web site on one physical server. The server will have to have two IPs too of course. What is this called? How is this done with nginx? OS is Linux. Thanks!
Multiple IPs+domains+SSL certs for one web site
I had a similar issue. Each of your applications' nginx config files should point to the correct port number that the .NET Core application is set to run on.
This is determined in each of your .NET Core applications' program.cs in the .UseUrls() extension, e.g.
public static IWebHost CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseContentRoot(Directory.GetCurrentDirectory()) .UseUrls("http://0.0.0.0:2001") .UseStartup<Startup>() .Build();
Each application will need to have a different port number and have this reflected in its nginx config files, like so:
server { listen 80; server_name domain; location / { proxy_pass http://localhost:2001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection keep-alive; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } }
Hope this helps.
I am trying to host multiple ASP.NET Core sites with different domains on Linux, Ubuntu 18.04, using nginx as a reverse proxy. These are the steps:
1) Creating new .conf files in /etc/nginx/sites-available
2) Creating folders in /var/www/ and uploading the .NET app
3) Creating new .service files for each .conf file
The default nginx .conf is unchanged. The .conf files look like this:
server { listen 80; server_name domain; location / { proxy_pass http://localhost:5000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection keep-alive; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } }
The .service files look like this:
[Unit]
Description=Event Registration Example
[Service]
WorkingDirectory=/var/www/example
ExecStart=/usr/bin/dotnet /var/www/example/example.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
[Install]
WantedBy=multi-user.target
With this configuration, even if I deploy a few sites, all of them are redirected to the same content. My goal is to host multiple .NET Core apps on the same server. What should the configuration look like?
Hosting multiple ASP NET Core sites on Ubuntu with nginx as reverse proxy
Now servers require the use of SNI for https connections, like almost all modern webservers. You need to add proxy_ssl_server_name on; to your configuration.
The smallest location block would be the following:
location / {
    proxy_set_header Host my-app.now.sh;
    proxy_ssl_server_name on;
    proxy_pass https://alias.zeit.co;
}
I have several application servers running several Node applications (via PM2). I have one NGINX server, which has the SSL certificate for the domain and reverse-proxies to the Node applications. Within the NGINX configuration file I set the domains with their location blocks like this:
server { listen 443 ssl; server_name geolytix.xyz; ssl_certificate /etc/letsencrypt/live/geolytix.xyz/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/geolytix.xyz/privkey.pem; location /demo { proxy_pass http://159.65.61.61:3000/demo; proxy_set_header HOST $host; proxy_buffering off; } location /now { proxy_pass https://xyz-heigvbokgr.now.sh/now; proxy_set_header HOST $host; proxy_buffering off; } }
This only works for the application server. The proxy to the Zeit Now deployment yields a bad gateway. The application itself works as expected if I go to the Zeit Now address of my deployment.
Does anybody know whether I might be missing some settings to proxy to Zeit Now?
NGINX proxy to a Zeit Now deployment
You can install the required version 3 from the bionic repository if you don't depend on other software that already uses version 4:
sudo apt install libcurl3/bionic
This will ask you to remove curl itself, libcurl4, and dependent software. Consider carefully whether you need those before removing them. You can still roll back by installing libcurl4 and the removed software again.
System: Ubuntu 18.04 LTS
Passenger: 5.3.1
nginx with libnginx-mod-http-passenger
$ sudo nginx -t
Returns:
PassengerLoggingAgent: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by PassengerLoggingAgent) 2018/06/04 02:28:40 [alert] 10411#0: Unable to start the Phusion Passenger watchdog because it encountered the following error during startup: Unable to start the Phusion Passenger logging agent: it seems to have crashed during startup for an unknown reason, with exit code 1 (-1: Unknown error)
Nginx+Passenger Error - libcurl.so.4: version `CURL_OPENSSL_3' not found
You should capture the rest of the URL and then use it:
location ~* /app1/(.*) { proxy_pass http://localhost:8080/$1$is_args$args; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; }
I've got a site called http://example.com, with an app running that can be accessed at http://example.com/app1. The app1 is sitting behind an nginx reverse proxy, like so:
location /app1/ { proxy_pass http://localhost:8080/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; }
Adding the trailing slash to the proxy_pass field lets me "remove" the /app1/ part of the URL, at least as far as the app is concerned. So app1 thinks that it's getting requests to the root url (as in, I have a route in app1 that sits on '/', not '/app1').
However, I'd like to have nginx make this case-insensitive. So whether I go to http://example.com/App1 or http://example.com/APP1, it should still just forward the request to app1, and remove the /app1/ part of the url.
When I attempt to use nginx's case insensitive rules, it does not forward the rest of the URI to app1:
location ~* /app1/ { proxy_pass http://localhost:8080/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; }
That gives me an nginx configuration error.
My goals are two fold:
Match /app1/ case-insensitively
Remove the /app1/ part of the url when "passing" the url over to the app
I've tried rewriting the url, but it won't let me add the rest of the URI to proxy_pass. Any help would be appreciated!
Nginx case insensitive proxy_pass
Update your config file to this:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
You need the mainline repository.
I used yum install nginx on my ECS server, but the version is not high enough to support http2. After googling around, I added a config file: /etc/yum.repos.d/nginx.repo
With the content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
Then I ran yum update nginx, which gave me version 1.8.1, which is still not high enough to support http2.
Can anyone help me update my nginx to version 1.9.5 or higher, please?
How to update my Nginx with yum on CentOS 7
It is now possible to get Cloudflare to respect your web server's headers instead of overriding them with the minimum described in the Browser Cache TTL setting.
First navigate to the Caching tab in the Cloudflare dashboard. From here, scroll down to the "Browser Cache Expiration" setting, where you can select the "Respect Existing Headers" option in the dropdown.
Further reading:
Does CloudFlare honor my Expires and Cache-Control headers for static content?
Caching Anonymous Page Views
How do I cache static HTML?
Note: If this setting isn't chosen, Cloudflare will apply a default 4-hour minimum to Cache-Control headers. Once this setting is set, Cloudflare will not touch your Cache-Control headers (even if they're low or not set at all).
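Once "Respect Existing Headers" is enabled, the origin's headers drive the TTL, so the origin has to send one. A minimal nginx sketch for the origin (the location and max-age are illustrative):
location / {
    # Cloudflare will now honor this header instead of imposing its 4-hour minimum:
    add_header Cache-Control "public, max-age=3600";
}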
I am trying to get some html pages to be cached, the same way images are automatically cached via CloudFlare, but I can't get CloudFlare to actually hit its cache for html. According to the documentation (Ref: https://support.cloudflare.com/hc/en-us/articles/202775670-How-Do-I-Tell-CloudFlare-What-to-Cache-), it's possible to cache anything with a Cache-Control set to public with a max-age greater than 0.
I've tried various combinations of headers on my origin Nginx server without success: from a simple Cache-Control: public, max-age=31536000 to more complex headers including s-maxage=31536000, Pragma: public, ETag: "569ff137-6", Expires: Thu, 31 Dec 2037 23:55:55 GMT, without any results.
Any ideas to force CloudFlare to serve the html pages from its cache?
PS: I am getting CF-Cache-Status: HIT on the images and it works fine, but on the html pages nothing, not even CF-Cache-Status: something. With a CloudFlare page rule for html pages it seems to work fine, but I want to avoid using one, mainly because it's too CloudFlare-specific. I am not serving cookies or anything dynamic from these pages.
Cache-Control Headers not respected on CloudFlare
smtp.send uses LuaSocket's socket.protect function for handling internal errors. This function is implemented in C and doesn't allow yielding in the current releases (the version in git HEAD now allows yielding on Lua 5.2+, see the discussion here). Apparently something tries to yield from within it. In etc/dispatch.lua in the LuaSocket package (better use the git HEAD version) there is a replacement function for socket.protect that should allow yielding on all Lua versions (at the cost of an extra temporary coroutine). You can try replacing the C function with that Lua function like so:
local socket = require("socket")
local base = _G
-- paste modified socket.protect function here
-- continue with your own code:
local smtp = require("socket.smtp")
-- ...
When I use the following script:
local smtp = require("socket.smtp")
local from = "from@host"
local rcpt = "rcpt@host"
local msg = { headers = { to = rcpt, subject = "Hi" }, body = "Hello" }
smtp.send{from = from, rcpt = rcpt, source = smtp.message(msg)}
I'm getting an error message: lua entry thread aborted: runtime error: attempt to yield across C-call boundary.
I'm using the newest luasocket installed from luarocks with Lua 5.1, using nginx compiled with LuaJIT 2.1. What is causing this error message and how do I fix it?
Luasocket + nginx error - lua entry thread aborted: runtime error: attempt to yield across C-call boundary
It seems to me as if you haven't installed the right module: ngx_lua (http://wiki.nginx.org/HttpLuaModule). You mention OpenResty. Did you configure it with lua? If not, the guide is here (http://wiki.nginx.org/HttpLuaModule#Installation).
Quick summary:
The ngx_openresty bundle can be used to install Nginx, ngx_lua, either the standard Lua 5.1 interpreter or LuaJIT 2.0, as well as a package of powerful companion Nginx modules. The basic installation step is a simple ./configure --with-luajit && make && make install.
You can also manually compile ngx_lua into nginx; the full guide is in the link too.
After comment-discussing, I removed the irrelevant part of the answer.
I am very new to nginx and lua. I have installed OpenResty. Below is my code in the nginx.conf file.
server{ location /hellolua { default_type 'text/plain'; content_by_lua ' local name = ngx.var.arg_name or "Anonymous" ngx.say("Hello, ", name, "!") '; } }
When I run sudo service nginx start I am getting the error:
Starting nginx: nginx: [emerg] unknown directive "content_by_lua" in /etc/nginx/nginx.conf:24 nginx: configuration file /etc/nginx/nginx.conf test failed
Please let me know what I am missing.
getting error while using lua with nginx
As stated in the documentation, http://nginx.org/en/docs/http/ngx_http_limit_req_module.html, nginx uses the "leaky bucket" algorithm, which is simple and pretty common in the networking area. You can read about it on Wikipedia: http://en.wikipedia.org/wiki/Leaky_bucket
As for your question (rate=1r/s burst=5 nodelay), in practice it would be something like this:
Req.# | Time (sec) | Response
1       0.0          200 OK
2       0.1          200 OK
3       0.2          200 OK
4       0.3          200 OK
5       0.4          200 OK
6       0.5          200 OK
7       0.6          503
8       0.7          503
9       0.8          503
10      0.9          503
11      1.0          200 OK
12      1.1          503
13      1.2          503
14      1.3          503
15      1.4          503
16      1.5          503
17      1.6          503
18      1.7          503
19      1.8          503
20      1.9          503
21      2.0          200 OK
22      2.1          503
23      2.2          503
24      2.3          503
25      2.4          503
26      2.5          503
27      2.6          503
28      2.7          503
29      2.8          503
30      2.9          503
31      3.0          200 OK
32      3.1          503
33      3.2          503
34      3.3          503
35      3.4          503
36      3.5          503
37      3.6          503
38      3.7          503
39      3.8          503
40      3.9          503
41      4.0          200 OK
42      4.1          503
43      4.2          503
44      4.3          503
45      4.4          503
46      4.5          503
47      4.6          503
48      4.7          503
49      4.8          503
50      4.9          503
Say I set
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; server { location / { limit_req zone=one burst=5 nodelay; }
Then for 5 seconds, I send 10 requests per second. Which requests should see a 200 and which should see a 503? Would it be the first of each 10 requests? Or does nginx keep track of bad users continuously sending requests, in which case only the first of the 50 requests would get a 200?
Nginx#ngx_http_limit_req_module: For how long is 503 returned once exceeding the rate?
Oh, I was mistaken about the port for the management plugin: it's 15672, not 5672. All good.
server { listen xxx.xxx.xxx.yy:80; server_name xxxxxxxxxx access_log acces.log; error_log error.log; location / { client_body_buffer_size 128k; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_connect_timeout 30s; proxy_pass http://xxx.xxx.xxx.xx:15672; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } }
I have a load balancer and a VM with a RabbitMQ broker. RabbitMQ has port 5672 open, with the management plugin. When I create a proxy to RabbitMQ I receive:
curl: (52) Empty reply from server
I can connect with telnet to the RabbitMQ server and get a callback:
curl: (56) Recv failure: Connection reset by peer
Nginx config:
server { listen xxx.xxx.xxx.yy:80; server_name xxxxxxxxxx access_log acces.log; error_log error.log; location / { client_body_buffer_size 128k; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_connect_timeout 30s; proxy_pass http://xxx.xxx.xxx.xx:5672; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } }
RabbitMQ management Nginx proxy
Assuming you also want to serve static files, you could use something like this:server { server_name example.com; # Set the docroot directly in the server root /var/www; # Allow index.php or index.html as directory index files index index.html index.php; # See if a file or directory was requested first. If not, try the request as a php file. location / { try_files $uri $uri/ $uri.php?$args; } location ~ \.php$ { # If the php file doesn't exist, don't pass the request to php, just return a 404 try_files $uri =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass your_php_backend_address; } }
What rule would I use for nginx so my default file extension is .php? I currently access pages using www.mywebsite.com/home.php, but I want to just use www.mywebsite.com/home. Thanks
Nginx: Setting a default file extension
You can make use of the ngx_real_ip_module: http://nginx.org/en/docs/http/ngx_http_realip_module.html
With this you can specify the Cloudflare CIDRs that are allowed to override binary_remote_addr with the value from X-Forwarded-For. Make sure you have this check in place. The config could look like:
set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.21.244.0/22; set_real_ip_from 103.22.200.0/22; set_real_ip_from 103.31.4.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 197.234.240.0/22; set_real_ip_from 198.41.128.0/17; set_real_ip_from 162.158.0.0/15; set_real_ip_from 104.16.0.0/12; set_real_ip_from 172.64.0.0/13; set_real_ip_from 131.0.72.0/22; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; set_real_ip_from 2405:b500::/32; set_real_ip_from 2405:8100::/32; set_real_ip_from 2a06:98c0::/29; set_real_ip_from 2c0f:f248::/32; real_ip_header X-Forwarded-For; real_ip_recursive off;
Cloudflare IPs can change; this command will automatically update them in cloudflare_ips.conf:
cat /dev/null > cloudflare_ips.conf && curl -s https://www.cloudflare.com/ips-v4 | while read ip; do echo "set_real_ip_from $ip;" >> cloudflare_ips.conf; done && curl -s https://www.cloudflare.com/ips-v6 | while read ip; do echo "set_real_ip_from $ip;" >> cloudflare_ips.conf; done && printf "real_ip_header X-Forwarded-For;\nreal_ip_recursive off;\n" >> cloudflare_ips.conf
Your rate limit config can use the binary_remote_addr variable. If the client comes via Cloudflare, CF's IP will be replaced with the IP from the header. If the client connects directly, the client IP will be used. If a client tries to send an X-Forwarded-For header itself, it will not be accepted, as the client's IP does not match any CIDR from your cloudflare_ips.conf file.
I am aware of the headers CF-Connecting-IP, $binary_remote_addr, http_x_forwarded_for.
I want to make a setting:
limit_req_zone $http_x_forwarded_for zone=k_request_limit_per_ip:10m rate=400r/s; limit_conn_zone $http_x_forwarded_for zone=k_connection_limit_per_ip:10m;
But Cloudflare isn't the only place that this machine is going to be accessed from, so I want to limit direct access too. Is there a way to write something like:
if(header == `X-Forwarded-For`) { limit_req_zone $http_x_forwarded_for zone=k_request_limit_per_ip:10m rate=400r/s; } else { limit_req_zone $binary_remote_addr zone=k_request_limit_per_ip:10m rate=400r/s; }
Or would something like this work:
limit_req_zone $http_x_forwarded_for zone=http_zone:10m rate=400r/s; limit_req_zone $binary_remote_addr zone=binary_zone:10m rate=400r/s;
An alternative would be to fully allow all Cloudflare IP addresses and limit the non-Cloudflare IP addresses.
Good source: NGINX rate limiting doesn't work when using Cloudflare. I can bring down my site with a simple `ab` command
Nginx check if Cloudflare forward or direct IP and limit accordingly
Worked. The problem was in the "/etc/nginx/nginx.conf" file. After a lot of reading and trying, I found that inside the file it serves static HTML (instead of my Node.js web server). I changed the "root /path_to_ws" line, restarted Nginx, and it worked. Thank you for the help!
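For anyone hitting the same issue, a sketch of what the corrected server block could look like, assuming (as stated in the question) that the Node.js web server listens on port 9090; names and paths are illustrative:
server {
    listen 80;
    location / {
        # proxy to the Node.js web server instead of serving the default static root:
        proxy_pass http://localhost:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}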
I've created an environment in AWS which includes an EC2 instance with a Node.js web server and Nginx installed, behind a self-signed application load balancer. My ALB gets requests on HTTPS (443) and forwards them over HTTP (80) to Nginx. Nginx should get the requests from the ALB (on port 80) and forward them to port 9090 (which is used by the Node.js web server). However, I'm having issues with translating the requests from Nginx to the application. When entering the URL with the ALB DNS over HTTP, I get to the page shown above (instead of my web server's application page). My default.conf file is attached above. All my security groups are open to test the problem (on 443, 80, 9090), so ports are not the problem, but the Nginx configuration is. Also, my target group is presented above. What could be the problem / what further configuration should I do? Thank you.
Nginx as a reverse proxy behind AWS ALB (self-signed)
Check the owner of the public directory, for example using this command:
ls -l /var/www/deebaco.com/html
nginx often runs as nobody. Instead, it should run with the same user as the owner. Edit /etc/nginx/nginx.conf to set the same user. For example, if the directory is owned by www-data, add the following:
user www-data;
After saving the config, validate that it is correct:
sudo nginx -t
If the command above confirms that the syntax is ok, reload the nginx configuration:
sudo systemctl reload nginx.service
This should solve the problem.
I have hosted a React app on nginx. When I try to access any file with an extension (e.g. favicon.ico), nginx throws a 403 Forbidden error, although it works fine for basic app routing, i.e., for files without extensions. I'm pasting my nginx config below; the static files that I'm trying to access are in /var/www/deebaco.com/html. Do I need to write another location block to serve files with extensions?
server { listen 80; server_name deebaco.com www.deebaco.com; return 301 https://www.deebaco.com$request_uri; } server { listen 443 ssl; server_name deebaco.com; ssl_certificate /root/deebaco.com.chained.crt; ssl_certificate_key /root/deebaco.com.key; return 301 https://www.deebaco.com$request_uri; } server { listen 443 ssl http2; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; server_name www.deebaco.com; root /var/www/deebaco.com/html; ssl_certificate /root/deebaco.com.chained.crt; ssl_certificate_key /root/deebaco.com.key; #location / { #try_files $uri $uri/ /index.html?$args; #} location / { try_files $uri /index.html; autoindex on; autoindex_exact_size off; } }
Nginx showing 403 forbidden
Here's my solution, a catch-all route whose render prop redirects any hash URL (roughly):
<Route render={({ location }) => location.hash.startsWith('#/') ? <Redirect to={location.hash.replace('#', '')} /> : null } />
Only routes that have /# will match /. One could easily add a conditional if needed to render something on root.
We're replacing react-router-dom's HashRouter with BrowserRouter and would like to redirect old routes to the new ones. I tagged nginx because I don't mind making the redirects there. :)
So say we have old routes /#/users and /#/users/:id: they should match and redirect to /users and /users/:id.
So far I tried (react-router-dom v5.0.1): the first route matches and redirects fine. The second one (with the id) is problematic. When I navigate to /#/users/123 it redirects to /users/:id. It's replacing the actual 123 with :id.
Those are two examples of routes. We have more with params to redirect as well.
How to redirect old react-router HashRouter (with the #) to BrowserRouter?
As you say, you do not need to use a regular expression to match a single string. You can use the = and != operators with the if directive. See this document for details.
For example:
if ($host != foo.example.com) { ... }
The correct regular expression for the above requires two anchors (^ and $) and an escape for the ., which would otherwise represent any character.
For example:
if ($host !~* ^foo\.example\.com$) { ... }
The use of quotes is optional, unless the string contains reserved characters (such as { and }).
See this link for more on regular expressions.
I have as below in nginx but it doesn't do what I intend:location / { if ($host !~* blah.blah.com) { return 307 https://$host/$request?uri; } }I basically need to do a 307 when $host doesn't match blah.blah.com in regex or it could really just be a plain string. I tried putting them in double quotes or /.../ but nothing seems to work. How do I specify this in nginx's location block?
nginx how to do if not regex
Do you mean you want to use nginx as a reverse proxy server for django with an additional level of authorization? You simply move your auth_basic and auth_basic_user_file directives from the location block to the server block:
upstream gunicorn_server { server unix:...; } server { listen ...; server_name ...; auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; ... # other parameters location / { try_files $uri @gunicorn; } location @gunicorn { proxy_pass http://gunicorn_server; } }
Update
Assuming there is an "admin" area which includes both /admin.html and /admin/any/other/uri, to additionally protect this area with HTTP Basic Auth you can use the following configuration:
upstream gunicorn_server { server unix:...; } server { listen ...; server_name ...; ... # other parameters location / { try_files $uri @gunicorn; } location /admin { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri @gunicorn; } location @gunicorn { proxy_pass http://gunicorn_server; } }
To protect a single file admin.html, replace location /admin { with location = /admin.html {.
I want to use http auth but also a reverse proxy using gunicorn.
For http auth I use:
location = /admin.html { auth_basic 'Login Required'; auth_basic_user_file /etc/nginx/.htpasswd; }
For the gunicorn reverse proxy I found:
try_files $uri @gunicorn;
How can I combine both?
Nginx with gunicorn with double authorization
This should solve the problem:location / { if ($request_uri ~ ^([^.\?]*[^/])$) { return 301 $1/; } try_files $uri $uri/ /index.php$is_args$args; }
I want to redirect URLs without a slash to the path with a trailing slash. So /some-url to /some-url/. And the rest of the URLs, like:
/some-url.xml
/some-url?
/some-url?q=v
/some-url/
should stay without redirection.
I found this article, https://www.ateamsystems.com/tech-blog/nginx-add-trailing-slash-with-301-redirect-without-if-statements/, in which the author suggests using the following rule:
location ~ ^([^.\?]*[^/])$ { try_files $uri @addslash; } location @addslash { return 301 $uri/; }
Unfortunately this doesn't really work, because the url /some-url?q=v gets redirected to /some-url/.
Could you suggest how to change the regular expression to make it work?
How to configure redirects to url with trailing slash in nginx?
We usually run quite small applications, rarely more than 2000 requests per minute. But anyway, it is hard to compare different applications. This is what we use in production.
Recommendation by the documentation (haha):
harakiri = 20      # respawn processes taking more than 20 seconds
limit-as = 256     # limit the project to 256 MB
max-requests = 5000   # respawn processes after serving 5000 requests
daemonize = /var/log/uwsgi/yourproject.log   # background the process & log
uwsgi_conf.yml:
processes: 4
threads: 4
# This part might be important too; this way you limit the log file to 200 MB and
# rotate it once
log-maxsize : 200000000
log-backupname : /var/log/uwsgi/yourproject_backup.log
We use the following project for deployment and configuration of our django apps. (No documentation here, sorry... just used it internally.)
https://github.com/iterativ/deployit/blob/ubuntu1604/deployit/fabrichelper/fabric_templates/uwsgi.yaml
How can you tell if you configured it correctly? Since it depends a lot on your application, I would recommend using some monitoring tools, such as newrelic.com, and analysing the results.
I have a mysql + django + uwsgi + nginx application, and I recently had some issues with uwsgi's default configuration, so I want to reconfigure it, but I have no idea what the recommended values are.
Another problem is that I couldn't find the default settings that uwsgi uses, and that makes debugging really hard.
Using the default configuration, the site was too slow under real traffic (too many requests stuck waiting for the uwsgi socket). So I used a configuration from some tutorial which had cpu-affinity=1 and processes=4, and that fixed the issue. The configuration also had limit-as=512, and now the app gets MemoryErrors, so I guess 512 MB is not enough.
My questions are:
How can I tell what the recommended settings are? I don't need it to be super perfect, just to handle the traffic in a decent way and not crash from memory errors etc. Specifically, the recommended value for limit-as is what I need most right now.
What are the default values of uwsgi's settings?
Thanks!
Recommended settings for uwsgi
Try this:
server_name "~^(www.)?ucwebapi(-uccore)?(\d{1,3}-\d{1,3})?\.testme\.net";
It looks like there are some missing characters between your regex101 page and what ended up in your config.
I've also tuned it a bit so that it will NOT match:
ucwebapi-uccore999.testme.net
ucwebapi-uccore-.testme.net
ucwebapi-uccore-999.testme.net
server_name in nginx does not match. I want to match FQDNs like the ones below, so I came up with:
server_name "~^(www.)?ucwebapi-uccore(\d{0,3})-(\d{0,3})\.testme\.net";
To match:
ucwebapi.testme.net
ucwebapi-uccore.testme.net
ucwebapi-uccore1-0.testme.net
ucwebapi-uccore999-999.testme.net
Validated with https://regex101.com/r/tAwEp9/2
Tested with
server_name "~^(www.)?ucwebapi-uccore(\d{0,3})-(\d{0,3})\.testme\.net ucwebapi1.testme.net";
to see if the ucwebapi1.testme.net server is reachable at all.
Is there any restriction I'm not aware of? Thank you.
nginx server_name regex
Nginx will not run the gzip filter if a Content-Encoding header is found in the upstream response. So, you can set the Content-Encoding: identity header on the backend, and nginx will pass the response (and header) to the client without gzip processing. identity means "not encoded".
I am trying to stop nginx from gzipping a single PHP request. I already have the following:
@ini_set('zlib.output_compression', 'Off'); @ini_set('implicit_flush', 1); header('X-Accel-Buffering: no');
According to everything I have found, that X-Accel-Buffering header alone should disable gzip; however, when I load this page from a browser, I can still see the header: Content-Encoding: gzip
I'm using php7-fpm, nginx 1.10.1, debian 8.
EDIT: I did a test using sleep() to delay the output. It looks like header('X-Accel-Buffering: no'); IS working; however, it only prevents buffering, not gzipping. I guess gzipping is working as a stream somehow.
I can see that if I output 1,000 bytes, looping over an echo statement with 1 char each, the browser receives about 11kb. If I echo a str_repeat x 1000, then much less data is sent. There must be some overhead there.
Regardless, I need to disable gzip so that I can send a large amount of content and measure the download time. If it's gzipped, I don't know what the actual throughput is.
How can I disable nginx gzip from PHP?
Pipelines are quite different in logstash and fluentd, and it took some time to build a working Kubernetes -> Fluentd -> Elasticsearch -> Kibana solution.
The short answer to my question is to install the fluent-plugin-parser plugin (I wonder why it doesn't ship within the standard package) and put this rule after the kubernetes_metadata filter (the named capture groups and filter tag below were lost to HTML stripping and are reconstructed; the group names typed in the types line are fixed by that line, the rest follow nginx log-field conventions):
<filter kubernetes.**>
  type parser
  format /^(?<host>[^ ]*) (?<domain>[^ ]*) \[(?<x_forwarded_for>[^\]]*)\] (?<server_port>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+[^\"])(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")? (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?:\[(?<proxy_upstream_name>[^\]]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*)$/
  time_format %d/%b/%Y:%H:%M:%S %z
  key_name log
  types server_port:integer,code:integer,size:integer,request_length:integer,request_time:float,upstream_response_length:integer,upstream_response_time:float,upstream_status:integer
  reserve_data yes
</filter>
A long answer with lots of examples is here: https://github.com/kayrus/elk-kubernetes/
I'd like to parse ingress nginx logs using fluentd in Kubernetes. That was quite easy in Logstash, but I'm confused regarding fluentd syntax.
Right now I have the following rules:
<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  read_from_head true
  keep_time_key true
</source>
<filter kubernetes.**>
  type kubernetes_metadata
</filter>
And as a result I get this log, but it is unparsed:
127.0.0.1 - [127.0.0.1] - user [27/Sep/2016:18:35:23 +0000] "POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1475000747571 HTTP/2.0" 200 37593 "http://localhost/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Centos Chromium/52.0.2743.116 Chrome/52.0.2743.116 Safari/537.36" 951 0.408 10.64.92.20:5601 37377 0.407 200
I'd like to apply filter rules to be able to search by IP address, HTTP method, etc. in Kibana. How can I implement that?
Parse nginx ingress logs in fluentd
So the solution was to restart puma.cap production deploy:restartEvery time I reboot the server, I need to restart puma as well.
I keep getting this error in the nginx.error.log (newlines added for your convenience):
2016/06/06 20:14:02 [error] 907#0: *1 connect() to unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock failed (111: Connection refused) while connecting to upstream, client: 50.100.162.19, server: , request: "GET / HTTP/1.1", upstream: "http://unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock:/", host: "appname.com"
This is my nginx.conf:
upstream puma { server unix:///home/user/apps/appname/shared/tmp/sockets/appname-puma.sock; } server { listen 80 default_server deferred; # server_name example.com; root /home/user/apps/appname/current/public; access_log /home/user/apps/appname/current/log/nginx.access.log; error_log /home/user/apps/appname/current/log/nginx.error.log info; location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; } try_files $uri/index.html $uri @puma; location @puma { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://puma; } error_page 500 502 503 504 /500.html; client_max_body_size 10M; keepalive_timeout 10; }
What am I doing wrong? I followed Digital Ocean's tutorial to set up Capistrano, Nginx and Puma.
Rails/Nginx/Capistrano/Puma: (111: Connection refused) while connecting to upstream
Okay, finally I used:location @return_204 { return 204; } location / { proxy_pass http://zzz; proxy_intercept_errors on; error_page 500 502 503 504 = @return_204; }
I need my nginx front-end to return a 204 when the back-end replies with a 5xx or times out. Is it possible? Thanks
Have nginx return a 204 instead of 5xx
Adding resolver 127.0.0.1; helps, but it's very strange...
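Put together, a sketch of the dynamic-port location with the resolver added (the regex and rewrite are taken from the config in the question; nginx resolves the hostname at request time whenever proxy_pass contains a variable):
location ~* ^/id_([0-9]*)/ {
    # lets nginx resolve "localhost" at request time:
    resolver 127.0.0.1;
    rewrite ^/id_([0-9]*)/(.*)$ /$2 break;
    proxy_pass http://localhost:$1;
}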
I'm trying to set up redirections using nginx. The idea is to redirect the URI /id_1234/ to localhost:1234, for certain ports. The redirection for a fixed port:
location /id_1234/ { rewrite ^/id_1234/(.*) /$1 break; proxy_pass http://localhost:1234; proxy_redirect http://localhost:1234/ $scheme://$host/id_1234/; }
It works just fine. Now I try to change 1234 to any port:
location ~* ^/id_([0-9]*)/ { rewrite ^/id_([0-9]*)/(.*)$ /$2 break; proxy_pass http://localhost:$1; proxy_redirect http://localhost:$1/ $scheme://$host/id_$1/; }
With this config, I get a 502 error, with the following error in the log: no resolver defined to resolve localhost
If I change $1 to an actual port after localhost:, it works fine for the specified port. How can the redirection port be specified using a regex?
Thanks in advance!
nginx proxy redirection with port from uri
What you have to do is this: let's say you have 3 node instances running on ports 3000, 5000 and 7000. Now you have to point 3 subdomains to the same IP: if you have a domain example.com, then ex1, ex2 and ex3 will all point to the same IP. Now create 3 separate files in /etc/nginx/sites-enabled/, say ex1.example.com, ex2.example.com and ex3.example.com, configure the server blocks in these files to point to the respective node applications (one such file is sketched below), and restart nginx. Now you have three node applications on the same server with three different access links.
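For illustration, one of those three files (say /etc/nginx/sites-enabled/ex1.example.com, pointing at the instance on port 3000) might look roughly like this:
server {
    listen 80;
    server_name ex1.example.com;
    location / {
        # forward requests for this subdomain to the node instance on port 3000:
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}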
I'm stuck trying to set up several Node apps on different domains on one Digital Ocean droplet. I followed the "Host Multiple Node.js Applications On a Single VPS with nginx, forever, and crontab" article exactly.
I have the domains all pointed correctly and A records set.
I can't seem to get apps to run (with forever) on any other port besides the default express 3000.
I changed the Nginx settings like it asked:
I uncommented server_names_hash_bucket_size 64; (like it says)
I created /etc/nginx/conf.d/example.com.conf files for the apps (they are different domains; I put one on port 3000 and the other on 4000).
example:
server { listen 80; server_name your-domain.com; location / { proxy_pass http://localhost:{YOUR_PORT}; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; }}
I don't understand the difference between when Nginx is running the app and when forever is. Where does "npm start" come into play? How many potential servers are working at the same time?
I can't seem to get more than 1 app running at once. I can't figure out how to properly assign a Node app folder to a port and keep it alive forever with forever.
How to host 3 node apps with 3 different domains on one VPS?
Apparently my coworker has better Google-fu than I.
This is apparently a known issue with VirtualBox and nginx that has to do with nginx's sendfile. You can simply add "sendfile off;" in either your server or location blocks in the nginx config (see the sketch below). Here's a blog post about it: nginx virtualbox static files
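A sketch of where the directive goes (paths are illustrative):
server {
    listen 80;
    root /vagrant/myapp/public;
    # disable sendfile to avoid stale/corrupted files served from VirtualBox shared folders:
    sendfile off;
}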
I have an Ubuntu VirtualBox that's set up by Vagrant. It's running NGINX to serve some static files and a Django app.
I have the source folder synced via Vagrant to the repo on my host (Windows). I can make changes to a JavaScript file in Windows and verify that the changes are made to my file in the VM by SSH'ing in and opening the file in nano.
However, when I make the changes remotely, NGINX seems to serve up the unchanged version with "illegal" characters added to the end (which really freaks out browsers). I get the same file when I curl localhost while SSH'd into the VM.
EDIT: It actually does the same thing when I edit the file via SSH.
I can reload the VM via Vagrant (which re-syncs the folders) and it works fine until the next remote change. Restarting nginx and gunicorn doesn't help.
Does Vagrant lock the files so that nginx has to rely on a cache? What might be going on here?
Thanks!
NGINX not picking up changes in Vagrant Synced folder
Gotya! It's a Chef 11 feature. The issue exists solely in chef-solo :)
To make a quick summary, the difference is:
chef.add_recipe() - loads the entire cookbook context (all the files, e.g. recipes, definitions, attributes...)
include_recipe "" - files (attributes, definitions etc.) that are not in the expanded run list are not loaded.
There are at least 4 ways to solve the issue (put files in the run list):
include_attribute - include the desired attribute file explicitly.
metadata.rb -> dependency - if your cookbook is using a recipe from another cookbook, put that cookbook in metadata.rb's dependency section, and all its files will be loaded.
chef.add_recipe() - load the recipe via the Vagrantfile. (Mentioned here just for reference.)
Berkshelf - you may use this cookbook manager to solve the issue as well. Here's the Stack Overflow thread about this exact problem and some docs.
For those who are interested in further reading, Chef 11 introduced dependency-based cookbook loading for non-recipe files. The new loading logic means that files belonging to cookbooks which exist in the cookbook_path but are not in the expanded run_list or dependencies of the cookbooks in the expanded run_list will no longer be loaded. REF: Opscode breaking changes documentation, and if you need a signature of the error I got, here's exactly the same one, even for the same cause.
I just ran the nginx::source recipe on my vagrant box, and I see very unusual behaviour.
When I include a recipe from the Vagrantfile (as below), everything works like a charm:
chef.add_recipe("project::nginx")
chef.add_recipe("nginx::source")
(The project::nginx recipe is very simple; I'm using it to override default attributes of the nginx cookbook.)
But if I include the recipe at the very end of project::nginx (mentioned above), everything falls apart:
node.default['nginx']['server_names_hash_bucket_size'] = 128
include_recipe "nginx::source"
Until now I didn't know there's any difference in behaviour between those two invocations. Does anybody here know what the difference is?
"include_recipe" vs. Vagrantfile "chef.add_recipe". What's the difference?
You'll want to run nginx in foreground mode by adding the following to your nginx.conf:
daemon off;
You can point nginx at a custom nginx.conf with the -c argument; a minimal sketch follows.
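A minimal sketch of such a config (port and paths are illustrative); the Procfile line would then become something like nginx: /home/ubuntu/nginx/sbin/nginx -c /home/ubuntu/nginx/conf/nginx.conf:
# keep nginx in the foreground so Foreman can supervise it:
daemon off;
events {
    worker_connections 1024;
}
http {
    server {
        listen 8080;
    }
}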
I'm trying to use Foreman (version 0.31.0) to manage our application's processes, but I'm not having much luck with nginx (nginx/1.0.10 + Phusion Passenger 3.0.11).
Here's the relevant line from my Procfile:
nginx: sudo /home/ubuntu/nginx/sbin/nginx
When I start the app, Foreman reports that nginx is started and then immediately terminated:
$ foreman start
21:18:28 nginx.1 | started with pid 27347
21:18:28 nginx.1 | process terminated
21:18:28 system | sending SIGTERM to all processes
However, nginx is actually running, even though Foreman reports otherwise.
Similarly, if I export to Upstart:
rvmsudo foreman export upstart /etc/init -a my_app -u ubuntu
and run sudo start my_app, nginx starts properly. But sudo stop my_app does not stop nginx; it continues running.
Is there a trick to getting nginx to work with Foreman?
Note: I found this issue with Foreman and I'm wondering if it's related.
Foreman not working with NGINX
If you don't need HTTP access to your subversion repository, all you need to do is just install subversion and create a repository like this:svnadmin create /path/to/repositoryThen you can check out local copies directly:svn co /path/to/repository /path/to/my/checkoutOr over ssh:svn co svn+ssh://server/path/to/repositoryIf your packaging system is trying to install Apache with subversion, that is a packaging issue. However in Ubuntu, the subversion package does not require apache. Its requirements are:Depends: libsvn1 (= 1.6.12dfsg-1ubuntu1), libapr1 (>= 1.2.7), libc6 (>= 2.4), libsasl2-2 Suggests: subversion-tools, db4.8-util, patch
I'm using Ubuntu 10.10 and I would like to install Subversion. I don't need http access to the files and I would like to use SSH. The majority of the examples I've seen on how to install Subversion use Apache. I don't want to install Apache on my server since I'm using NGINX. Can I just install Subversion without installing Apache? If yes, how? Thank you!
Install Subversion on Ubuntu with NGINX, not Apache
I think adding this task to your deploy.rb file should do it:
namespace :deploy do
  task :restart do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
Basically this will run touch tmp/restart.txt in the Rails root directory, which will restart Passenger.
I've finally gotten Capistrano to work on my website; however, I cannot seem to get the restart part of the application to work. What I want to do is set up Capistrano to restart the mongrel cluster that is running the Rails app after a deploy has gone through. Since I used Passenger to install everything, I have no clue how to restart the mongrel cluster.
Does anyone know how to do this? In each tutorial that I've read, it mentions that there should be a restart.txt file in the /tmp folder of the app; however, I cannot find anything that explains how to restart it... or what to put in the file.
Capistrano + NGINX Passenger Restart Rails App
if ($host ~* www\.(.*)) {
    set $host_without_www $1;
    rewrite ^(.*)$ http://$host_without_www$1 permanent; # $1 contains '/foo', not 'www.mydomain.com/foo'
}
Answer from Server Fault: https://serverfault.com/questions/139579/nginx-subdomain-rewrite
Yet another nginx rewrite rule question: How can I do a rewrite from http://www.*.domain.com to http://*.domain.com?
nginx subdomain rewrite
This is easily possible! You could base it off Pion's rtp-to-webrtc example. This allows you to easily get media from ffmpeg into the browser.
The ffmpeg command you would run instead is like this one:
ffmpeg -re -i rtmp://localhost:1935/$app/$name -vn -acodec libopus -f rtp rtp://localhost:6000 -vcodec copy -an -f rtp rtp://localhost:5000 -sdp_file video.sdp
I would consider transcoding to VP8, since not all browsers support H264.
If you want sub-second playback in the browser, I would check out Project Lightspeed; that's your best option today IMO.
I'd like to use OBS to stream via RTMP to an nginx server, and then locally send the RTMP fragments to WebRTC, so that they can be transmitted to the client via a MediaStream. I think this is possible, as it is essentially described here. I'm doing this because the multi-second latency of HLS is not appropriate for what I'm trying to do.
I'm having trouble extracting the RTMP fragments from nginx; the only plausible command I could find for doing this in the documentation was pull rtmp://.... When I tried this, I did not see any files appearing in my root folder, where I would normally find the HLS files if I were using hls on. Does anyone know how to accomplish what I'm trying to achieve above?
Thanks!
send nginx rtmp fragments to WebRTC
Option 1: add the www-data user to my-user's group:
sudo adduser www-data my-user
Option 2: change the user php-fpm runs as to my-user (ref): find the user and group options in www.conf, and change them to:
user = my-user
group = mygroup
I have followed this upvoted answer and did the following:
sudo chown -R my-user:www-data /var/www/domain.com/
sudo find /var/www/domain.com/ -type f -exec chmod 664 {} \;
sudo find /var/www/domain.com/ -type d -exec chmod 775 {} \;
sudo chgrp -R www-data /var/www/domain.com/storage /var/www/domain.com/bootstrap/cache
sudo chmod -R ug+rwx /var/www/domain.com/storage /var/www/domain.com/bootstrap/cache
Everything works fine, but whenever a directory (within the storage directory) is created by my-user and not by the www-data user, the webserver can't write to it, or vice versa, unless I rerun those commands after the directory has been created.
Notes: sometimes I run commands as my-user that create directories, and sometimes the www-data user creates directories (within the storage directory). Also, my-user is already in the www-data group.
How can I avoid permission errors without running all those commands again?
How to setup laravel file permission once and for all
According to this blog post, it's not a "No" but more of a "We can't be sure" (emphasis mine):NGINX tests and verifies that NGINX Plus operates correctly when it is run on a FIPS‑enabled OS that is running in FIPS mode.NGINX cannot make similar statements for NGINX Open Source...https://www.nginx.com/blog/achieving-fips-compliance-nginx-plus/#FIPS-Compliance-with-NGINX-Open-SourceThey can't make claims for the OS you compile on or the flags that you use to build. There's a lot going on in an OpenSSL build.https://wiki.openssl.org/index.php/Compilation_and_InstallationAnd any deviation from the "trusted path" or "validated" build steps may invalidate your installation.https://www.openssl.org/docs/fips/UserGuide-2.0.pdf
I am investigating FIPS compliance for our platform. nginx is one of the components and we use nginx 1.15.1. I found the documentation about nginx plus being FIPS compliant.When NGINX Plus is executed on an operating system where a FIPS‑validated OpenSSL cryptographic module is present and FIPS mode is enabled, NGINX Plus is compliant with FIPS 140-2 with respect to the decryption and encryption of SSL/TLS and HTTP/2 traffic.https://docs.nginx.com/nginx/fips-compliance-nginx-plus/Does this apply to open source nginx as well? I did not find any documentation for the open source version. I have posted the query in nginx forum as well but checking it here as well in case folks have already done FIPS compliance with the open source version.
Is Nginx open source FIPS compliant?
I just figured it out. As stated in the docs, when variables are used in proxy_pass:
location /name/ { proxy_pass http://127.0.0.1$request_uri; }
In this case, if a URI is specified in the directive, it is passed to the server as is, replacing the original request URI.
So the problem was that I specified a request URI in the variable (the trailing /). After removing this / everything worked fine.
Here's the working config:
location /monitoring/prometheus/ { set $prometheusUrl http://prometheus.monitoring.svc.cluster.local:9090; proxy_set_header Accept-Encoding ""; proxy_pass $prometheusUrl; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; sub_filter_types text/html; sub_filter_once off; sub_filter '="/' '="/monitoring/prometheus/'; sub_filter 'var PATH_PREFIX = "";' 'var PATH_PREFIX = "/monitoring/prometheus";'; rewrite ^/monitoring/prometheus/?$ /monitoring/prometheus/graph redirect; rewrite ^/monitoring/prometheus/(.*)$ /$1 break; }
I have the following nginx.conf:
location /monitoring/prometheus/ { resolver 172.20.0.10 valid=5s; set $prometheusUrl http://prometheus.monitoring.svc.cluster.local:9090/; proxy_set_header Accept-Encoding ""; proxy_pass $prometheusUrl; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; sub_filter_types text/html; sub_filter_once off; sub_filter '="/' '="/monitoring/prometheus/'; sub_filter 'var PATH_PREFIX = "";' 'var PATH_PREFIX = "/monitoring/prometheus";'; rewrite ^/monitoring/prometheus/?$ /monitoring/prometheus/graph redirect; rewrite ^/monitoring/prometheus/(.*)$ /$1 break; }
When I navigate to https://myHost/monitoring/prometheus/graph I get redirected to /graph (https://myHost/graph).
When I don't use the variable and place the URL directly in proxy_pass, everything works as expected: I can navigate to https://myHost/monitoring/prometheus/graph and see Prometheus.
location /monitoring/prometheus/ { resolver 172.20.0.10 valid=5s; proxy_set_header Accept-Encoding ""; proxy_pass http://prometheus.monitoring.svc.cluster.local:9090/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; sub_filter_types text/html; sub_filter_once off; sub_filter '="/' '="/monitoring/prometheus/'; sub_filter 'var PATH_PREFIX = "";' 'var PATH_PREFIX = "/monitoring/prometheus";'; rewrite ^/monitoring/prometheus/?$ /monitoring/prometheus/graph redirect; rewrite ^/monitoring/prometheus/(.*)$ /$1 break; }
Can anyone explain to me why using the variable leads to different behaviour in terms of routing? I need to use variables to force nginx to resolve the DNS name on each request.
nginx proxyPass to prometheus using variable
You can do it like below. Basically you capture the responses from the other services and then combine them:
location /services/refALL { content_by_lua_block { local respA = ngx.location.capture("/services/refA") local respB = ngx.location.capture("/services/refB") local respC = ngx.location.capture("/services/refC") ngx.say(respA.body .. respB.body .. respC.body) } }
I have nginx/OpenResty and some other services running on one VM. Basically the VM accepts requests on OpenResty, which then forwards the requests to the appropriate service, e.g. the requests below get forwarded to ServiceA, ServiceB and ServiceC respectively. It is working fine.
http://server:80/services/refA
http://server:80/services/refB
http://server:80/services/refC
Now I need to expose a new endpoint which gets the responses from all services A, B and C, and then returns one consolidated response. I cannot use multiple proxy_pass directives in my location; could someone suggest how I can achieve that? E.g.
http://server:80/services/refALL --> returns a consolidated response from the A, B and C services.
Is it possible to consolidate multiple responses and send one response in NGINX
Your dev workstation's /etc/hosts file is read only when you open a browser (or otherwise reach the network) from your dev workstation. If you reach the network from your mobile, the dev workstation's /etc/hosts will be ignored.

If you want to reach 192.168.1.11 as site1.dev from your mobile or tablet too, you should create an /etc/hosts entry on each device (same as on your workstation).

Another possibility is to use your router, if it supports a local DNS function.

Or a complicated solution (a very ugly hack) is to install a DNS server on your workstation and set the router's DNS to your workstation's IP.
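For either of the DNS options, dnsmasq is the usual lightweight choice. Assuming dnsmasq is installed on the workstation (or on a router firmware that ships it, such as OpenWrt), a single line in its config answers the name for every device that uses it as resolver:

# /etc/dnsmasq.conf — resolve site1.dev (and any subdomain) to the workstation IP
address=/site1.dev/192.168.1.11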
I would like to access a virtual host on my dev workstation (Arch Linux) from a mobile or tablet connected to the same network. My nginx.conf virtual host spec looks like this:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 16M;

    # Domain site1.dev
    server {
        server_name site1.dev;
        listen 80;
        root /path/to/dir;
        location / {
            root /path/to/dir;
            index index.php;
            try_files $uri $uri/ /index.php?$args;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /path/to/dir/$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}

My /etc/hosts file contains:

127.0.0.1 localhost localhost.localdomain site1.dev
192.168.1.11 site1.dev

This works on localhost, but I can't reach site1.dev from a phone or tablet connected to the same network; it works only via the IP address 192.168.1.11. Is there some way to make it work using the site1.dev name?
Accessing NGINX virtual host in local network
First make sure the listen directive is present. Try the following:

server {
    listen 80;
    server_name internal-docs.mysite.com;
    root /var/www/docs-internal;
    index index.html;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
}

If that does not work, check:

- That the Nginx user has read permission to the site content. For example, if your Nginx user is www and you have root access, do the following:

  # su www
  $ cat /var/www/docs-internal/index.html

  If that fails, ensure the location has correct ownership and permissions. Note that for a user to be able to browse a directory, that directory must have the execute bit for that user or user group.

- That the Nginx user has read permission on the file ../sites-available/internal-docs.mysite.com (same cat test as above). Note: normally the Nginx master process runs as root and spawns sub-processes running as the Nginx user, so permissions on config files are unlikely to be the problem.

- That maybe your config file name should end with ".conf". On my server I have the line include conf.d/*.conf; which will NOT load any config file ending with ".com".

- That Nginx actually includes files from ../sites-available/ in its main config file. Maybe it does not, and looks in the conf.d directory instead (the default).

- That you can ping and nslookup the subdomain. If you cannot, then you have to fix that first (DNS, firewall...).
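If one of the permission checks above fails, here is a sketch of the usual fix, assuming the worker runs as www-data as on Debian/Ubuntu (adjust to whatever the user directive in nginx.conf says):

# hand the site tree to the nginx worker user and make it world-readable;
# capital X sets the execute bit on directories only, so they stay traversable
chown -R www-data:www-data /var/www/docs-internal
chmod -R u=rwX,go=rX /var/www/docs-internal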
I have some slate docs as a website and would like to serve them on the internal server through a subdomain, as follows: internal-docs.mysite.com. For the record, accessing mysite.com shows the "nginx is running properly" page.

I've created a config file with the following path and name, /etc/nginx/sites-available/internal-docs.mysite.com:

server {
    listen 80;
    server_name internal-docs.mysite.com;
    root /var/www/docs-internal;
    index index.html;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
}

And of course, I've put the files in /var/www/docs-internal. I then made a symlink in /etc/nginx/sites-enabled to the config file shown above:

internal-docs.mysite.com -> ../sites-available/internal-docs.mysite.com

Then I reload (nginx -s reload), but "this site can't be reached" is what I get when accessing the URL. The setup and configuration look correct to me (according to the guidelines I've followed), so that's why I'm at a dead end, sort of...
Serve static content through subdomain in nginx
After several weeks I found the problem in my wsgi.py. It is a common solution to use os.environ['ENV'] to pick the DJANGO_SETTINGS_MODULE, but with different users and permissions it doesn't work.

If you use something like this in your wsgi.py file:

os.environ["DJANGO_SETTINGS_MODULE"] = "config.settings." + os.environ["ENV"]

and have a problem with "no python application found" — split your wsgi file. I found that os.environ["ENV"] returned an empty string: I had added it for all my users, used source and so on, but uWSGI in Emperor mode doesn't see it. You should use a wsgi_dev.py and a wsgi_production.py, where you write something like os.environ["DJANGO_SETTINGS_MODULE"] = "config.settings.production". It's not so elegant, but it solves the problem fine.

To keep a single wsgi.py instead, you can write something like this (note that os.environ.get() returns a string or None, so checking "is True" would never match):

import os

from django.core.wsgi import get_wsgi_application

# environment variables are strings, so test for presence rather than `is True`
if os.environ.get("DEV"):
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.dev")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")

application = get_wsgi_application()
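An alternative to splitting the wsgi files: uWSGI can inject environment variables itself via the env option in the ini, so the workers see the value regardless of which shell started the Emperor. A sketch for the production vassal (module path taken from the project layout in the question):

[uwsgi]
# exported into the environment of every worker before the app loads
env = DJANGO_SETTINGS_MODULE=config.settings.production
module = config.wsgi:application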
I use Django 1.10 with uWSGI and nginx on Ubuntu 16.04 and deploy my app with Ansible. My project does not have the default structure, but one that is quite common (thanks "Two Scoops of Django" for this): I split dev and production settings and use a config folder instead of the 'name' project folder. It looks like this:

|-- config
|   |-- __init__.py
|   |-- settings
|   |   |-- __init__.py
|   |   |-- base.py
|   |   `-- dev.py
|   |-- urls.py
|   |-- wsgi_dev.py
|   `-- wsgi_production.py
|-- manage.py
`-- requirements.txt

My production.py is generated by Ansible with encrypted secrets and placed in config/settings.

With this config I get "no python application found check your startup logs" — uWSGI doesn't see my application. ({{ }} is Jinja2 syntax for Ansible.)

/etc/uwsgi/sites/{{ project_name }}:

[uwsgi]
chdir = {{ django_root }}
home = /home/{{ project_user }}/venvs/{{ project_name }}
module = config.wsgi_production:application
master = true
processes = 5
socket = /run/uwsgi/{{ project_name }}.sock
chown-socket = {{ project_user }}:www-data
chmod-socket = 660
vacuum = true
How solve"no python application found check your startup logs" error for Django + uWSGI + nginx stack
I was asking the same question myself, so I did some digging around, and here is what I found out.

Certbot mainly uses ports 80 or 443 for challenges (http-01 and tls-sni-01) to verify domain ownership, as described in the certbot docs:

"Under the hood, plugins use one of several ACME protocol challenges to prove you control a domain. The options are http-01 (which uses port 80), tls-sni-01 (port 443) and dns-01 (requiring configuration of a DNS server on port 53, though that's often not the same machine as your webserver). A few plugins support more than one challenge type, in which case you can choose one with --preferred-challenges."

Looking at the certbot_nginx plugin's implementation of the http-01 challenge, we can see that the plugin edits the nginx configuration to include an additional server block that is used to perform the challenge — so the allow/deny rules in your own virtual hosts never apply to it:

def _make_server_block(self, achall):
    """Creates a server block for a challenge.

    :param achall: Annotated HTTP-01 challenge
    :type achall:
        :class:`certbot.achallenges.KeyAuthorizationAnnotatedChallenge`

    :param list addrs: addresses of challenged domain
        :class:`list` of type :class:`~nginx.obj.Addr`

    :returns: server block for the challenge host
    :rtype: list
    """
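So whether your allow/deny lists matter depends on which challenge the plugin picks, and you can steer that explicitly. For example (flags from the certbot CLI, domain name is a placeholder):

# force the HTTP-01 challenge on port 80 for a specific domain
certbot --nginx --preferred-challenges http -d example.com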
I have had success using the certbot-nginx plugin. I know that it is open source and hosted on GitHub, but I do not have enough skill to analyze the code.

For example: I have several internal sites which are proxied by nginx. All virtual host configs have the following access restrictions for anonymous clients:

allow 192.168.1.0/24;
allow 192.168.0.0/24;
allow 10.88.0.0/16;
allow 127.0.0.1;
# gate1.example.com
allow X.X.X.X;
# gate2.example.com
allow X.X.X.X;
# other gates
# .......
deny all;

These access restrictions block the Let's Encrypt servers, as well as all other unlisted hosts. Yet certbot renew --nginx performs the certificate update normally.

How does it work? Is it secure?
Letsencrypt certbot-nginx plugin. How does it work?
You probably need to limit the capture to the first two path segments:

location ~ ^(/microsite/[^/]+) {
    try_files $uri $uri/ $1/;
}

The [^/] character class matches anything that is not a /.
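A worked trace of what the pattern does, for a hypothetical request path:

# request: /microsite/first/css/app.css
#   $1 = /microsite/first            ([^/]+ stops at the next slash)
# try_files then checks, in order:
#   /microsite/first/css/app.css    (the file itself)
#   /microsite/first/css/app.css/   (as a directory)
#   /microsite/first/               (fallback: the microsite's front page)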
Firstly I want to state that I'm rather new to nginx; I basically only know what I've learned over the last week.

That said, I currently have an Nginx server with a standard configuration of:

server {
    listen 80;
    server_name site.com;
    root /var/www/site.com/;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location /microsite/first {
        try_files $uri $uri/ /microsite/first/;
    }

    location /microsite/second {
        try_files $uri $uri/ /microsite/second/;
    }
    ...
}

This works fine, although every microsite I add to the existing ones requires a new location block referring to the path of the new microsite.

My question is: is it possible to set the location parameter dynamically, so that it catches and references whatever sub-directory exists within the microsite/ directory?

E.g. something along the lines of the rewrite rule

rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;

(taken from the nginx site) but applied to the location parameter, like:

location ~ ^/microsite/(.*)$ {
    try_files $uri $uri/ /microsite/$1/;
}

in which the $1 would catch the sub-directory name passed in (.*)? (I tried this snippet, which I built referring to the answer to (another) Nginx dynamic location configuration question, although it did not work.)

Also, I'm not a regex expert — I've tinkered with it in the past, but it was a while ago and I don't recall the precise terminology, so that may be part of the problem, perhaps?!

Anyway, all help is appreciated. Thanks in advance!!
Nginx dynamic location path configuration
Literally moments after I posted this question, I stumbled upon this while googling for how to whitelist IPs from a file in Nginx! Kind of funny considering I spent the last 2 hours googling for specific terms about rate limiting; talk about relevance, heh.

limit_conn_zone $server_name zone=servers:1m;
limit_conn servers 1;

This, in the http { } block, seems to do the trick.
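For completeness, a minimal sketch of one possible layout (the directives can sit at http, server or location level; keying the zone on $server_name makes the limit per-vhost rather than per-client):

http {
    # one shared 1 MB zone keyed by server name -> one counter per vhost
    limit_conn_zone $server_name zone=servers:1m;

    server {
        listen 80;
        # serve at most one concurrent connection for this vhost
        limit_conn servers 1;
        # status returned when the limit is hit (503 is the default)
        limit_conn_status 503;
    }
}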
I am looking for a way to limit the number of maximum concurrent connections to 1. I do not want a per-IP connection limit; I already know that is supported.

As far as I can see, max_conns would be exactly what I'm looking for, but unfortunately it's not available in the free version:

"Additionally, the following parameters are available as part of our commercial subscription"

Limiting worker_connections is not an option, as the minimum it accepts is 4, and it affects more than just the incoming requests.

My conf:

server {
    listen 80;
    server_name localhost;

    location / {
        rewrite_by_lua ' [some lua code] ';
        proxy_pass http://127.0.0.1:8080;
    }
}
Limit Nginx max concurrent connections
You need to add the intermediate certificate file to your nginx configuration.

Here is a powerful tool by zakjan to obtain the intermediate certificate files from your main certificate. Store the obtained crt file on your server and reference it in nginx.conf via ssl_certificate.
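If you would rather build the chain by hand than use the tool, the usual fix is simply concatenating your leaf certificate with the intermediate(s) and pointing ssl_certificate at the result. A sketch with hypothetical file names:

# leaf certificate first, then the intermediate(s) — order matters
cat your_domain.crt intermediate.crt > chained.crt

# then in nginx.conf:
#   ssl_certificate     /etc/nginx/ssl/chained.crt;
#   ssl_certificate_key /etc/nginx/ssl/your_domain.key;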
When I try to access my Ruby site from an Android mobile device I get the following error; can anyone help me solve this problem? The error shown is NET::ERR_CERT_AUTHORITY_INVALID, with the HTTPS marker displayed in red.
NET::ERR_CERT_AUTHORITY_INVALID https in red color
I have never run Nginx on Windows, but the official documentation explains how: http://nginx.org/en/docs/windows.html.

To run two node applications behind Nginx, it's necessary to create a proxy. This is an example of how to alter the nginx.conf file for this:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name localhost;
        access_log C:\var\log\nginx\access.log;

        location ~ ^/(javascripts|stylesheets|images) {
            root C:\app1\public;
            expires max;
        }

        location / {
            proxy_pass http://localhost:3000;
        }
    }

    server {
        listen 81;
        server_name localhost;
        access_log C:\var\log\nginx\access.log;

        location ~ ^/(javascripts|stylesheets|images) {
            root C:\app2\public;
            expires max;
        }

        location / {
            proxy_pass http://localhost:3001;
        }
    }
}

In this case, there are two node applications, one running on port 3000 and the other on port 3001 — Nginx works as a proxy in front of both.

Further documentation: https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/.

In your case, the configuration file is located at:

C:\nginx_v1_6\conf\nginx.conf

Back up the default file and update the content with what I posted. Finally, you can test the reverse proxy through localhost (port 80 by default) and localhost:81, provided the node servers and the Nginx server are running.
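For reference, the basic lifecycle commands from the official Windows docs, run from the directory where nginx.exe lives (here the asker's C:\nginx_v1_6):

cd c:\nginx_v1_6
start nginx          # launch nginx in the background
nginx -s reload      # re-read nginx.conf after editing it
nginx -s quit        # graceful shutdown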
I want to install Nginx on Windows and run two node applications behind it. How can I do this?

I've tried downloading Nginx 1.6.3, but I can't find anything relevant about how to run it on Windows — just for Linux. I think there should be some modules for node. Any advice will be useful!
How to run Nginx with Node.js on Windows?
After further googling, I came upon this solution:

location / {
    # Send 404s to B
    error_page 404 = @backendB;
    proxy_intercept_errors on;
    log_not_found off;

    # Try the proxy like normal
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://A;
}

location @backendB {
    # If A didn't work, let's try B.
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://B;

    # Any 404s here are handled normally.
}
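An alternative sketch, in case a single upstream group reads cleaner to you: proxy_next_upstream can treat a 404 as a reason to try the next server, with B marked as backup so it is only consulted after A (this assumes a 404 from A should always fall through to B):

upstream chain {
    server aa:8080;
    server bb:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://chain;
        # retry the request on the next server when A answers 404
        proxy_next_upstream error timeout http_404;
    }
}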
Given an Nginx configuration roughly like this:

upstream A {
    server aa:8080;
}

upstream B {
    server bb:8080;
}

server {
    listen 80;

    location @backendA {
        proxy_pass http://A/;
    }

    location @backendB {
        proxy_pass http://B/;
    }

    location / {
        # This doesn't work. :)
        try_files @backendA @backendB =404;
    }
}

Basically, I would like Nginx to try upstream A, and if A returns a 404, then try upstream B instead, and failing that, return a 404 to the client. try_files does this for filesystem locations, then can fall back to a named location, but it doesn't work for multiple named locations. Is there something that will work?

Background: I have a Django web application (upstream A) and an Apache/Wordpress instance (upstream B) that I would like to coexist in the same URL namespace, for simpler Wordpress URLs: mysite.com/hello-world/ instead of mysite.com/blog/hello-world/.

I could duplicate my Django URLs in the Nginx locations and use Wordpress as a catch-all:

location /something-django-handles/ {
    proxy_pass http://A/;
}

location /something-else-django-handles/ {
    proxy_pass http://A/;
}

location / {
    proxy_pass http://B/;
}

But this violates the DRY principle, so I'd like to avoid it if possible. :) Is there a solution?
How to configure Nginx to try two upstreams before 404ing?
"...BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: 10.0.2.2, server: 0.0.0.0:443"

This looks like someone checking whether the server supports TLS_FALLBACK_SCSV, which it does in your case. Nothing to worry about. On the contrary, this means that your server supports a useful security feature. For more information about TLS_FALLBACK_SCSV and how one can detect SSL downgrade attacks like POODLE this way, you might have a look at http://www.exploresecurity.com/poodle-and-the-tls_fallback_scsv-remedy/.

TLS_FALLBACK_SCSV is a fairly new option intended to detect SSL downgrade attacks. It needs support on both client and server. Older nginx/OpenSSL builds and older browsers simply did not have this option, so the problem could not have been detected — and thus not logged — in earlier versions. The message is logged as critical because it could indicate an actual SSL downgrade attack attempt against the client, defeated by this option. In practice it is probably some tool probing for support of the option, like SSLLabs.

For reference, the relevant code from ssl/ssl_lib.c, function ssl_bytes_to_cipher_list:

/* Check for TLS_FALLBACK_SCSV */
if ((n != 3 || !p[0]) &&
    (p[n-2] == ((SSL3_CK_FALLBACK_SCSV >> 8) & 0xff)) &&
    (p[n-1] == (SSL3_CK_FALLBACK_SCSV & 0xff)))
    {
    /* The SCSV indicates that the client previously tried a higher version.
     * Fail if the current version is an unexpected downgrade. */
    if (!SSL_ctrl(s, SSL_CTRL_CHECK_PROTO_VERSION, 0, NULL))
        {
        SSLerr(SSL_F_SSL_BYTES_TO_CIPHER_LIST, SSL_R_INAPPROPRIATE_FALLBACK);
        if (s->s3)
            ssl3_send_alert(s, SSL3_AL_FATAL, SSL_AD_INAPPROPRIATE_FALLBACK);
        goto err;
        }
    p += n;
    continue;
    }
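You can reproduce exactly this log line yourself. Assuming a reasonably recent openssl build (the flag shipped with the POODLE-era 1.0.1j release), simulate a client that is retrying at a lower protocol version with the SCSV set (hostname is a placeholder):

# pretend we already failed at TLS 1.2 and are now falling back;
# a server that supports the check replies "inappropriate fallback" and logs it
openssl s_client -connect example.com:443 -fallback_scsv -no_tls1_2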
I have a problem with my nginx on Ubuntu 14.04 LTS. From time to time I get a critical error:

2015/01/18 12:59:44 [crit] 1065#0: *28289 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: 10.0.2.2, server: 0.0.0.0:443

I've checked the version of my OpenSSL:

root@www:~# ldd `which nginx` | grep ssl
        libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f39e236b000)
root@www:~# strings /lib/x86_64-linux-gnu/libssl.so.1.0.0 | grep "^OpenSSL "
OpenSSL 1.0.1f 6 Jan 2014

I've searched for more information about it and found that it might be a problem with an old version of OpenSSL. So I've tried to compile the latest version:

wget https://www.openssl.org/source/openssl-1.0.1l.tar.gz && tar xzf openssl-1.0.1l.tar.gz && cd openssl-1.0.1l
./config && make && make install

I've also replaced the old OpenSSL binary with the new one via a symlink:

ln -sf /usr/local/ssl/bin/openssl `which openssl`

After that I have:

root@www:~# openssl version
OpenSSL 1.0.1l 15 Jan 2015

But nginx still links against the old version:

root@www:~# strings /lib/x86_64-linux-gnu/libssl.so.1.0.0 | grep "^OpenSSL "
OpenSSL 1.0.1f 6 Jan 2014

I couldn't find any newer libssl in Ubuntu after updating OpenSSL. How do I update libssl so that nginx can use the newest version?

P.S.1. Maybe the problem with the critical error isn't about the version of OpenSSL.
P.S.2. I think this critical error might affect my whole virtual machine: I also have a problem with the VM crashing "from time to time".

I've tried so many things and now I am hopeless. Stackoverflow please help!
nginx critical error with SSL handshaking
It should be checked, but as I understand it, the lifetime of the zone items relates to the active connections.

So zone=one:1m can hold up to 16 K unique IPs among the currently (simultaneously) active connections. (The total number of active connections at a given moment can exceed 16 K, because several connections can be opened from the same IP.)

So the zone size in MB should be >= (number of simultaneous connections from unique IPs) / 16 K.

Note that if users share a single IP behind NAT — rather common for ex-USSR providers — then you will be limiting the request frequency for a whole bunch of users at once, which can be very inconvenient for them. To handle this case you should set rate = (simultaneous users with the same IP) r/s.
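A worked example of that sizing rule, using the 16 K-states-per-megabyte figure from the docs: if you expect up to 160 000 simultaneously tracked client IPs, 160 000 / 16 000 per MB = 10 MB, so:

# ~16k 64-byte states per megabyte -> 10m covers roughly 160k concurrent unique IPs
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;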
According to the nginx documentation on limit_req_zone:

"One megabyte zone can keep about 16 thousand 64-byte states. If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests."

I wonder in what way these zones get cleared. For example, if we have something like

limit_req_zone $binary_remote_addr zone=one:1m rate=1r/s;

and the number of unique users per day exceeds 16000 — does that mean the zone will overflow and other users will start getting the 503 error for the configured location? Or is there a time frame of user inactivity after which the user-related zone memory is cleaned?

My main concern here is to set an optimal zone size without the risk of it getting exhausted under high load.
nginx: how does the limit_req_zone zone get cleared?
I eventually went back to the first approach, because it's not convenient to add a query parameter to a URL for this — it makes the client logic unnecessarily complex.

I found the solution to my first approach: the regex in the location statement was wrong. You need to put the captures into named groups, like this:

location ~ ^/fwd/(?<fwd_alias>\w+)/(?<fwd_path>.*)$

Then $fwd_alias will contain the alias, like foo or bar, and $fwd_path will contain the whole path after it. To pass on the full path including optional query parameters, you specify the proxy_pass as:

proxy_pass http://$repo_url$fwd_path$is_args$args;

That's it! So now, including the mapping from the first example and adding the resolver, it comes down to:

location ~ ^/fwd/(?<fwd_alias>\w+)/(?<fwd_path>.*)$ {
    resolver 8.8.8.8;
    add_header Access-Control-Allow-Origin "*";
    proxy_pass http://$repo_url$fwd_path$is_args$args;
    proxy_redirect off;
    access_log on;
}

And a request with the following path:

http://localhost:8080/fwd/foo/something/else?with=query

maps to:

http://foo.domain.nl/something/else?with=query
I'm trying to get the following pattern to work. I need to specify a dynamic path in my client-side code to be able to switch between a few predefined hosts. I map these hosts by appending their alias to a /fwd/ URL. The alias is mapped to the real server in nginx like this:

map $uri $repoUrl {
    default invalid;
    ~^/fwd/foo/.* http://foo.domain.nl/;
    ~^/fwd/bar/.* http://bar.domain.nl/;
}

Then, in the server config part, I catch any URL starting with /fwd/ and apply the mapped alias value. The remaining part of the URL, after the alias, should be appended as well:

location /fwd/(\w+)/(.*)$ {
    add_header X-FwdHost "$repoUrl$2";
    add_header Access-Control-Allow-Origin "*";
    proxy_pass "$repoUrl$2";
    proxy_redirect off;
    access_log on;
}

If I test this with:

curl -i http://localhost:8080/fwd/foo/something/else

I get:

X-FwdHost: http://foo.domain.nl/

But when I test the results from the regexp I get:

$1: foo
$2: something/else

So overall it seems to be working. The regex appears to be OK, but I can't get it to concatenate into one string. Any ideas, or is there an easier/better way to accomplish the same?

[EDIT] I found a possibly much easier way to do this, by using a query parameter named forward. First map the query parameter to the right host:

map $arg_forward $repo_forward {
    default http://invalid_repo_forward/;
    foo     http://foo.domain.nl/;
    bar     http://bar.domain.nl/;
}

Then use the parameter in the path to be forwarded:

location /fwd/ {
    add_header X-FwdHost $repo_forward;
    add_header Access-Control-Allow-Origin "*";
    proxy_pass $repo_forward;
    proxy_redirect off;
    access_log on;
}

I would expect a URL like:

http://localhost:8080/fwd/?forward=foo

to result in:

http://foo.domain.nl/

...but still this doesn't work. I get a 404 returned. What am I missing?
Dynamic proxy_pass with map and regexp
In general, I don't believe this is possible. But you might be able to hack something together based on this article. Quoting the relevant parts:

"2nd version is here: How to reference OS Environment Variables in nginx.conf
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,215269,215278#msg-215278"

and further:

"You can read system environment variables with ngx_lua enabled in your nginx build: http://wiki.nginx.org/HttpLuaModule

env PATH;
http {
    ...
    server {
        location /path {
            set_by_lua $path 'return os.getenv("PATH")';
            ...
        }
    }
}

BTW, to use the set_by_lua directive, you also need to enable the ngx_devel_kit module here: https://github.com/simpl/ngx_devel_kit (it'll be easier if you use the ngx_openresty bundle)."
Is there a way to specify, for example, that the root should be relative to the directory where the config file lives? Something like:

root $conf_path/www
nginx variable for path of config file
Try this (untested):

merge_slashes off;
rewrite (.*)//+(.*) $1/$2 permanent;

It might cause multiple redirects if there are multiple groups of slashes, though. Like this:

http://goout.cz/////cs/koncerty///praha/

might first go to:

http://goout.cz/cs/koncerty///praha/

and then finally to:

http://goout.cz/cs/koncerty/praha/
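You can verify the redirect chain yourself by letting curl follow each hop:

# -s silent, -I headers only, -L follow redirects: prints every 301 hop in turn
curl -sIL http://goout.cz/////cs/koncerty///praha/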
I am using nginx with my Java application, and my problem is that nginx merges consecutive slashes, so I am unable to redirect my website to the correct version.

For instance:

http://goout.cz/////cs/koncerty///praha/

is merged to:

http://goout.cz/cs/koncerty/praha/

and then I am unable to recognize the malformed URL and perform the redirection.

I tried to set merge_slashes off; and then:

rewrite (.*)//(.*) $1/$2 permanent;

But this has no effect — the // stays in the URL. How can I achieve this?
Nginx merge_slashes redirect
Option "-c" is to change dir, you have to use upercase "-C" to specify config file. LIkethin config -C /etc/thin/myapp.yml -c /var/
I am trying to use the following command to restart thin:

thin restart -c config/thin.yml

Here's the content of thin.yml:

rackup: /root/SEHabitat/config.ru
pid: /tmp/pids/thin.pid
wait: 30
timeout: 600
log: /root/SEHabitat/log/thin.log
max_conns: 1024
require: []
max_persistent_conns: 512
environment: production
servers: 3
daemonize: true
#chdir: /root/SEHabitat
socket: /tmp/thin.sock
#port: 3000

Here's the output:

/usr/lib/ruby/gems/1.8/gems/thin-1.2.11/lib/thin/runner.rb:171:in `chdir': Not a directory - /root/SEHabitat/config/thin.yml (Errno::ENOTDIR)
        from /usr/lib/ruby/gems/1.8/gems/thin-1.2.11/lib/thin/runner.rb:171:in `run_command'
        from /usr/lib/ruby/gems/1.8/gems/thin-1.2.11/lib/thin/runner.rb:151:in `run!'
        from /usr/lib/ruby/gems/1.8/gems/thin-1.2.11/bin/thin:6
        from /usr/bin/thin:19:in `load'
        from /usr/bin/thin:19
Error When Restarting Thin for my Ruby on Rails Application
You need to add proxy_set_header Cookie $http_cookie; to the location config. The variable $http_cookie holds the Cookie header from the user's request.
I have 3 Heroku apps:

- frontend — React
- backend — Node
- reverse-proxy — nginx

Calls to reverse-proxy/api/?(.*) are forwarded to the backend; all other calls to reverse-proxy are forwarded to the frontend.

The /etc/nginx/conf.d/default.conf code:

upstream frontend {
    server $FRONTEND_URL;
}

upstream backend {
    server $BACKEND_URL;
}

server {
    listen $PORT;

    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $FRONTEND_URL;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://backend;
        proxy_set_header Host $BACKEND_URL;
    }
}

Issue: I am using a cookie for authentication, but the cookie being set by the backend is not being "forwarded".

My code now works; the changes I made:

- Changing to secure: false in my Node app did it for me (will maybe add a TLS certificate later).
- The fix suggested by @mariolu — now it looks like:

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://backend;
    proxy_set_header Host $BACKEND_URL;
    proxy_set_header Cookie $http_cookie;
}

app.set("trust proxy", true);
nginx reverse proxy cookie forwarding