Well, I solved my problem! I ended up trying a different player with different settings and managed to get the latency down to a few frames (or no latency at all), so I guess this ffmpeg command is not introducing latency at all. Just for the record, I'm using Flowplayer with the following settings:

    $f("streams", "http://releases.flowplayer.org/swf/flowplayer-3.2.18.swf", {
        clip: {
            url: 'mystream',
            provider: 'rtmp',
            live: true,
            bufferLength: 0,
            bufferTime: 0,
        },
        plugins: {
            rtmp: {
                url: 'flowplayer.rtmp-3.2.3.swf',
                netConnectionUrl: 'rtmp://192.168.1.196:1935/myapp'
            }
        }
    });
I have a video stream that I want to broadcast via RTMP. I'm using ffmpeg to do so, with the following command:

    ffmpeg -i http://192.168.1.77:18000/stream.flv -c copy -f flv rtmp://localhost/myapp/mystream

As far as I know, transcoding the video stream would introduce some latency. So my question is: is it possible that I am introducing latency in the output stream by using this ffmpeg command (copy)?

Side note: I'm trying to redirect my live video stream to an nginx server in order to broadcast it (via RTMP) to several jwplayers. So far I have a delay of 1 second and some frames, and I am wondering if it is possible to reduce it.
Live video ffmpeg latency using RTMP
Found the reason. It happened when someone tried to access the server via IP over HTTPS, as in https:// followed by the server IP. The solution is to reject those requests. Solution for Nginx:

    server {
        listen 80;
        listen 443 ssl;
        ssl_certificate /etc/ssl/example.crt;
        ssl_certificate_key /etc/ssl/example.key;
        return 444;
    }
So Django is sending me mail with this info:

    [Django] ERROR: Invalid HTTP_HOST header: ''. You may need to add u'' to ALLOWED_HOSTS.
    No stack trace available
    Request repr() unavailable.

The problem is... you can't access my website via the server IP, because I only allow domains that are already in the ALLOWED_HOSTS setting. So... what should I do?

Edit: I am using Nginx -> Gunicorn. This error only happens sometimes, like 1-2 times per week.
Why is Django sending me Invalid HTTP_HOST header with my server IP?
The behavior of your browser is correct. The Location: response header has no meaning for status code 404 (compare 14.30 Location and 10.4 Client Error 4xx). Therefore the response's hypertext body is displayed in the browser - as specified by the HTTP specs for code 404 (see 10.4.5 404 Not Found). To allow changing the HTTP response code from 404 to a different one (e.g. 302 for the temporary redirect), you have to configure the Nginx error_page directive accordingly:

    error_page 404 = /404.php;

Without the equal sign (=), it is not possible to change the status code with your PHP script:

    header("Location: http://google.com"); # PHP sets 302 status code

SAPI: php5-fpm
I'm currently moving to an nginx server. I tried putting this in my 404 ErrorDocument named 404.php:

If I now try to access http://mydomain.com/404.php, this works as expected: it redirects me to Google. But once I try to access http://mydomain.com/iDoNotExist, the 404 ErrorDocument is shown without redirecting me to Google. This behavior seems weird to me. Is there any way I can fix this?

Edit: Here's what curling the page gives me:

    curl -I mydomain.com/404.php
    HTTP/1.1 302 Moved Temporarily
    Server: nginx/1.2.1
    Date: Sun, 05 Jan 2014 11:31:15 GMT
    Content-Type: text/html
    Connection: keep-alive
    X-Powered-By: PHP/5.4.4-14+deb7u7
    Location: http://google.com/

    curl -I mydomain.com/iDoNotExist
    HTTP/1.1 404 Not Found
    Server: nginx/1.2.1
    Date: Sun, 05 Jan 2014 11:33:49 GMT
    Content-Type: text/html
    Connection: keep-alive
    X-Powered-By: PHP/5.4.4-14+deb7u7
    Location: http://google.com/

Edit 2: As asked by hakre, I'm coming from an Apache setup, and yes, I'm using Chromium. As for Apache, this used to work for all the Gecko and WebKit browsers and even console-based browsers such as Lynx. It would probably have worked for Internet Explorer as well, but I've never tried that (no Windows around here, phew).
Redirect instead of 404 Error page - Status Code not working (Nginx)
I think the regex dollar forms only apply to the most recent regular expression. So you cannot combine the $1 of the if with the $1 of the rewrite without using set. However, there are simpler solutions for your scenario.

Firstly, if you know the host name (for example example.com), you can do the following:

    server {
        server_name www.example.com;
        return 301 $scheme://example.com$request_uri;
    }
    server {
        server_name example.com;
        ...
    }

On the other hand, if you don't have a specific host name in mind, you can use the following catch-all solution:

    server {
        server_name ~^www\.(?<domain>.+)$;
        return 301 $scheme://$domain$request_uri;
    }
    server {
        server_name _;
        ...
    }

You can find out more about this second form here. I don't recommend catch-all solutions because it is only meaningful to have at most one catch-all server block. If possible, use the named server solution. Also, note that you can achieve the above redirection using the rewrite ^ destination permanent; form. All these solutions avoid using the poorly regarded if directive.
I have this working code in my nginx config:

    if ($http_host ~* ^www\.(.+)$) {
        set $host2 $1;
        rewrite (.*) http://$host2$1;
    }

I think the line set $host2 $1; may be omitted and $1 used in the rewrite statement without defining any variables. But rewrite has its own $1..$9 params. How may I use the if's $1 in the rewrite statement?
How to use nginx if's param $1 in rewrite statement
This works fine for me:

    http {
        server {
            listen 80;
            server_name service1.domain.com;
            location / {
                proxy_pass http://192.168.0.2:8181;
                proxy_set_header host service1.domain.com;
            }
        }
        server {
            listen 80;
            server_name service2.domain.com;
            location / {
                proxy_pass http://192.168.0.3:8080;
                proxy_set_header host service2.domain.com;
            }
        }
    }

Have a try?
I have 2 servers on my network:

    one linux machine (192.168.0.2) with a website listening on port 8181 for service1.domain.com
    one windows machine (192.168.0.3) with a website listening on port 8080 for service2.domain.com

I want to set up an nginx reverse proxy so that I can route requests like so:

    service1.domain.com --> 192.168.0.2:8181 with host header service1.domain.com
    service2.domain.com --> 192.168.0.3:8080 with host header service2.domain.com

I have tried the following config:

    ### General Server Settings ###
    worker_processes 1;
    events {
        worker_connections 1024;
    }

    ### Reverse Proxy Listener Definition ###
    http {
        server {
            listen 80;
            server_name service1.domain.com;
            location / {
                proxy_pass http://192.168.0.2:8181;
                proxy_set_header host service1.domain.com;
            }
        }
        server {
            listen 80;
            server_name service2.domain.com;
            location / {
                proxy_pass http://192.168.0.3:8080;
                proxy_set_header host service2.domain.com;
            }
        }
    }

But that doesn't seem to work. Is there anything blindingly obvious that I might be doing wrong here?
Nginx config for redirecting domain
It seems that nginx > 1.3 will drop the ETag from your application server's response if gzip is enabled in nginx. We didn't find a solution in nginx that would allow us to both pass through the ETags from the application server and gzip the response. I believe weak ETags might work for this, but nginx does not currently support them.
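As a workaround sketch (my addition, not from the original answer): if the ETag matters more than compression for a given endpoint, you can turn gzip off just for that location, so nginx leaves the upstream headers untouched:

    # Hypothetical workaround: disable gzip for the ETag-sensitive endpoint
    # so nginx does not modify (and therefore does not drop) the ETag header.
    location /etags {
        gzip off;
        proxy_pass http://app_backend;  # "app_backend" is an assumed upstream name
    }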
I have a very simple controller with e-tags:

    class EtagsController < ApplicationController
      before_filter :require_user

      def index
        if stale?(:etag => current_user)
          render :layout => false
        end
      end
    end

When I run this in my local development environment, the first request is a 200 with an ETag in the response. The second request submits back the ETag and I get a 304 response as expected. However, when this executes in my staging or production environments, there is no ETag in the response. Here are the request and response headers from my staging environment:

    Request URL: /etags
    Request Method: GET
    Status Code: 200 OK

    Request Headers
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Cache-Control: max-age=0
    Connection: keep-alive
    Cookie: __utma=169165539.1455374302.1372358226.1372358226.1372358226.1; __utmb=169165539.1.10.1372358226; __utmc=169165539; __utmz=169165539.1372358226.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); user_credentials=4ffa15df84112d22434f121eed06c59a5c32cb9ab72cf6bf1e952a3993201b5dec2917a028d20d4b63c70a84c6a290c4d5c4673ce967efec6f139c161850bc37%3A%3A101; _session_id=d21671b70349653406442ee0716633b2
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36

    Response Headers
    Cache-Control: must-revalidate, private, max-age=0
    Connection: keep-alive
    Content-Encoding: gzip
    Content-Length: 65
    Content-Type: text/html; charset=utf-8
    Date: Thu, 27 Jun 2013 18:37:10 GMT
    Server: nginx/1.4.1 + Phusion Passenger 3.0.19
    Status: 200
    X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.19
    X-Rack-Cache: miss
    X-Request-Id: 620c1ab99a1af7b6dde62cee77fc59fe
    X-Runtime: 0.205884
    X-UA-Compatible: IE=Edge,chrome=1

I'm stumped. Why is my staging environment not respecting the ETag set in my controller? The technologies involved:

    Phusion Passenger 3.0.19
    nginx 1.4.1
    Rails 3.2.13

Thanks
E-tags missing from response headers with rails 3.2 / nginx / phusion passenger
Try this code:

    location / {
        rewrite ^/test(/.*)$ http://example.com:3000$1 permanent;
        proxy_set_header Host $host;
    }

Updated: if you don't want to rewrite the URL, try this code:

    server {
        --------
        server_name www.example.com;
        location /test {
            proxy_pass http://example.com:3000;
        }
    }
If I have a domain, for example, http://www.example.com, and I would like to redirect all requests from http://www.example.com/test to http://www.example.com:3000, how do I perform it properly? I've tried the following:

    location /test {
        proxy_pass http://www.example.com:3000;
        proxy_set_header Host $host;
    }

But what it does is actually redirect http://www.example.com/test to http://www.example.com:3000/test, and that's not what I want. How can I do it properly?

UPDATE: While Krizna's answer worked, it redirects me to my domain as expected. But what I want now is my browser bar to show http://www.example.com/test instead of http://www.example.com:3000. If I understand right, I should set nginx to catch the response and send it back under the URL the user requested. How can I perform it?
Nginx - Redirect www.example.com/test to www.example.com:3000/
It tells you how to fix your issue in the link you provided. Change:

    domain->asciiName ?>:server->webserver->httpsPort : $VAR->server->webserver->httpPort ?>

to:

    domain->asciiName ?>

You have to fix this in your application rather than in Nginx. Your application is generating links that point to port 50493 like this. Unless you're going to set up a content filter that goes through all of the HTML and replaces example.com:50493/path/of/url with example.com/path/of/url, you have to fix it inside the application.
I'm having an issue where some redirects on a website have the proxy-pass port included, rendering them useless. My configuration is as follows.

Physical server 1:

    server {
        server_name example.com www.example.com;
        location / {
            proxy_pass http://1.1.1.1:50493;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host "example.com";
        }
    }

Physical server 2:

    server {
        listen 50493;
        server_name example.com www.example.com;
        set_real_ip_from 1.1.1.1;
        root /var/www/example.com;
        index index.php index.htm index.html;

        location / {
            if ($http_host !~ "^example.com"){
                set $rule_0 1$rule_0;
            }
            if ($rule_0 = "1"){
                rewrite ^/(.*) http://example.com/$1 permanent;
            }
            rewrite /[^/]+/([0-9]+)-[^/]+.html http://example.com/showthread.php?t=$1 permanent;
        }

        location ~ .php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/example.com$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_read_timeout 600s;
            fastcgi_buffer_size 512k;
            fastcgi_buffers 16 512k;
        }
    }

Generally, browsing works fine. The site is browsable as one expects. However some links that redirect (e.g. after a login action) redirect to a link with the port 50493 included. So we get http://example.com:50493/index.php for example. That will not work. My question is, how do I remove the port? From what I can tell, the forum software takes the port from the PHP session port variable. I've tried setting port_in_redirect off, but to no avail. If it helps, the issue highlighted here: http://kb.parallels.com/en/114425 is similar. Thank you.
Removing port from nginx redirect
The problem was in asset precompilation. It was solved by adding the following line to config/environments/production.rb:

    config.assets.compile = true
I am running the stack nginx + unicorn + rails 3.2. When I run

    bundle exec unicorn_rails -c config/unicorn.rb -E development

it is OK, and the site runs well. When I try to start the unicorn site in production mode,

    bundle exec unicorn_rails -c config/unicorn.rb -E production

I get the "We're sorry, but something went wrong." error.
Unicorn doesn't run in production mode
nginx is a web server; in the context of Ruby on Rails applications, it receives the request from the user and hands it off to an application server. Application servers that are popular now:

    thin
    webrick
    puma
    unicorn

In the context of Heroku, you have a choice of application servers, but not a choice of web servers. When hosted yourself, the use of unicorn versus thin can help give you zero-downtime deploys. However, Heroku can give you zero-downtime deploys on their own, using any of thin/puma/unicorn. For experimental support of zero downtime on Heroku: https://devcenter.heroku.com/articles/labs-preboot/
I am rather new to RoR development and currently I am using Heroku (with Thin) to run my web application. I have been reading up on zero-downtime deployment and I came across nginx and unicorn. Can anyone explain to me what exactly Nginx is, and whether it is used in conjunction with Heroku? Same goes for unicorn. What are the pros and cons of using it instead of Thin? Thanks so much in advance!
Nginx, unicorn and Heroku
It's not difficult to implement WebSocket in haproxy, though I admit it's not yet easy to find doc on this (hopefully this response will make one example). If you're using haproxy 1.4 (which I suppose you are) then it works just like any other HTTP request without having to do anything, as the HTTP Upgrade is recognized by haproxy.

If you want to direct the WebSocket traffic to a different farm than the rest of HTTP, then you should use content switching rules, in short:

    frontend pub-srv
        bind :80
        use_backend websocket if { hdr(Upgrade) -i WebSocket }
        default_backend http

    backend websocket
        timeout server 600s
        server node1 1.1.1.1:8080 check
        server node2 2.2.2.2:8080 check

    backend http
        timeout server 30s
        server www1 1.1.1.1:80 check
        server www2 2.2.2.2:80 check

If you're using 1.5-dev, you can even specify "timeout tunnel" to have a larger timeout for WS connections than for normal HTTP connections, which saves you from using overly long timeouts on the client side. You can also combine Upgrade: WebSocket + a specific URL:

    frontend pub-srv
        bind :80
        acl is_websocket hdr(Upgrade) -i WebSocket
        acl is_ws_url path /something1 /something2 /something3
        use_backend websocket if is_websocket is_ws_url
        default_backend http

Last, please don't use the stupid 24h idle timeouts we sometimes see; it makes absolutely no sense to wait for a client for 24h with an established session right now. The web is much more mobile than in the 80s and connections are very ephemeral. You'd end up with many FIN_WAIT sockets for nothing. 10 minutes is already quite long for the current internet. Hoping this helps!
I am working on a Tornado app that uses websocket handlers. I'm running multiple instances of the app using Supervisord, but I have trouble load balancing websocket connections. I know nginx does not support dealing with websockets out of the box, but I followed the instructions here http://www.letseehere.com/reverse-proxy-web-sockets to use the nginx tcp_proxy module to reverse proxy websocket connections. However, this did not work since the module can't route websocket urls (ex: ws://localhost:80/something). So it would not work with the URL routes I have defined in my Tornado app. From my research around the web, it seems that HAProxy is the way to go to load balance my websocket connections. However, I'm having trouble finding any decent guidance to set up HAProxy to load balance websocket connections and also be able to handle websocket URL routes. I would really appreciate some detailed directions on how to get this going. I am also fully open to other solutions as well.
Load balance WebSocket connections to Tornado app using HAProxy?
I have figured this out. I had a setting in nginx:

    add_header Strict-Transport-Security "max-age=7200";

This is a new feature supported by Chrome and Firefox 4: chromium.org/sts
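Worth noting (my addition, not from the original answer): browsers remember the HSTS state until max-age expires, so simply deleting the header won't stop the redirects immediately for clients that already saw it. Serving the header with a zero max-age over HTTPS tells the browser to forget it:

    # Expire the cached HSTS entry in clients that already saw
    # the old header (RFC 6797 behavior).
    add_header Strict-Transport-Security "max-age=0";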
I have a drupal site that runs on nginx and php-fpm with haproxy balancing between multiple servers. I have two services set up for haproxy: http and https. If I go to http://subdomain.domain.com, it works fine. If I go to https://subdomain.domain.com, it also works fine. If I then go back to http, it now redirects to https. This happens in Firefox and Chrome, but not in IE. Is there some setting somewhere that redirects to https automatically if it knows that it exists? Perhaps if a secure header is set? I tried looking at LiveHTTPHeaders, but it only shows the https portion at this point. I tried looking in Chrome, and it says this:

    t=1312233405229 [st= 0] +REQUEST_ALIVE [dt=192]
    t=1312233405229 [st= 0] URL_REQUEST_START_JOB [dt= 0]
        --> load_flags = 1114241 (ENABLE_LOAD_TIMING | MAIN_FRAME | VALIDATE_CACHE | VERIFY_EV_CERT)
        --> method = "GET"
        --> priority = 0
        --> url = "http://subdomain.domain.com/"
    t=1312233405229 [st= 0] +URL_REQUEST_START_JOB [dt= 0]
        --> load_flags = 1114241 (ENABLE_LOAD_TIMING | MAIN_FRAME | VALIDATE_CACHE | VERIFY_EV_CERT)
        --> method = "GET"
        --> priority = 0
        --> url = "http://subdomain.domain.com/"
    t=1312233405229 [st= 0] URL_REQUEST_REDIRECTED
        --> location = "https://subdomain.domain.com/"

It seems to be doing a redirect, but doesn't say why. I tried sniffing with Wireshark, but wasn't able to make any sense of it, as I can't get the SSL decryption to work (I have the key).
Chrome and Firefox automatically redirect to https on a certain site
Is this what you're looking for?

    rewrite ^(.*)$ index.html
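A fuller sketch of how this might sit in a config (my assumption of the surrounding context, using try_files instead of rewrite to the same effect):

    # Hypothetical server block: every path serves /index.html.
    server {
        listen 80;
        root /var/www/site;          # assumed document root
        location / {
            try_files /index.html =404;
        }
    }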
I have a static file, index.html. How would I configure nginx to serve it from every path on the domain?

    URL  | file
    -----------------
    /    | index.html
    /foo | index.html
    /bar | index.html
    /baz | index.html

Essentially, I want a wildcard match. (I realize this will be an unusual setup.)
How does one map many URLs to a single file using nginx?
For specifics: http://www.csc.villanova.edu/~mdamian/Sockets/TcpSockets.htm describes the C library for TCP sockets. I think the key is that after a process forks while holding a socket file descriptor, the parent and child are both able to call accept() on it. So here's the flow.

Nginx, started normally:

    Calls socket() and bind() and listen() to set up a socket, referenced by a file descriptor (integer).
    Starts a thread that calls accept() on the file descriptor in a loop to handle incoming connections.

Then Nginx forks. The parent keeps running as usual, but the child immediately execs the new binary. exec() wipes out the old program, memory, and running threads, but inherits open file descriptors: see http://linux.die.net/man/2/execve. I suspect the exec() call passes the number of the open file descriptor as a command line parameter.

The child, started as part of an upgrade:

    Reads the open file descriptor's number from the command line.
    Starts a thread that calls accept() on the file descriptor in a loop to handle incoming connections.
    Tells the parent to drain (stop accept()ing, and finish existing connections), and to die.
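For reference, the documented signal sequence that drives this upgrade looks roughly like the following (pid file paths are assumptions; they vary by build):

    # Sketch of nginx's documented binary-upgrade procedure via signals:
    kill -USR2 $(cat /run/nginx.pid)          # old master forks and execs the new binary;
                                              # its pid file moves to nginx.pid.oldbin
    kill -WINCH $(cat /run/nginx.pid.oldbin)  # old master gracefully shuts down its workers
    kill -QUIT $(cat /run/nginx.pid.oldbin)   # old master exits; new master keeps the socket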
According to the Nginx documentation:

    If you need to replace nginx binary with a new one (when upgrading to a new version or adding/removing server modules), you can do it without any service downtime - no incoming requests will be lost.

My coworker and I were trying to figure out: how does that work? We know (we think) that:

    Only one process can be listening on port 80 at a time
    Nginx creates a socket and connects it to port 80
    A parent process and any of its children can all bind to the same socket, which is how Nginx can have multiple worker children responding to requests

We also did some experiments with Nginx, like this:

    Send a kill -USR2 to the current master process
    Repeatedly run ps -ef | grep unicorn to see any unicorn processes, with their own pids and their parent pids
    Observe that the new master process is, at first, a child of the old master process, but when the old master process is gone, the new master process has a ppid of 1.

So apparently the new master process can listen to the same socket as the old one while they're both running, because at that time, the new master is a child of the old master. But somehow the new master process can then become... um... nobody's child? I assume this is standard Unix stuff, but my understanding of processes and ports and sockets is pretty darn fuzzy. Can anybody explain this in better detail? Are any of our assumptions wrong? And is there a book I can read to really grok this stuff?
How can Nginx be upgraded without dropping any requests?
We have had numerous issues with nginx's reverse proxy support and ultimately achieved a better architecture by putting HAProxy between Mongrel and nginx. So our architecture is:

    web => nginx => haproxy => Mongrels

What we saw earlier (before HAProxy) was that nginx would flood Mongrels with too many requests; Mongrel's request queue was not solid and it would quickly get stuck with too many queued requests. HAProxy's queue is much more stable and it balances all the requests between backends better than nginx does. nginx only offers round-robin balancing, when really an algorithm like least-connections is better. I don't know if Thin suffers from the same issue(s) as Mongrel.

In our new setup nginx just proxies to a single haproxy instance, and haproxy has all registered Mongrels configured. HAProxy has better support for upstream ok/fail detection and can also limit each app server to 1 connection (maxconn directive), which is key for Mongrel; I'm not sure about Thin. The maxconn directive is so key that EngineYard has a patch for nginx which makes it native to nginx, so you don't need to deploy HAProxy just to take advantage of it. See: nginx-ey-balancer
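A minimal sketch of what such a backend might look like (hostnames and ports are my assumptions, not from the original answer):

    # Hypothetical HAProxy backend: least-connections balancing and one
    # in-flight request per Mongrel, as described above.
    backend mongrels
        balance leastconn
        server m1 127.0.0.1:8001 check maxconn 1
        server m2 127.0.0.1:8002 check maxconn 1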
Our setup is standard nginx (ver 0.7.59) + thin upstream servers on Debian lenny. Right now we're on 1 beefy box for web/app and 1 db box. Recently we started noticing thins will eventually start "hanging", i.e. they will no longer receive requests from nginx. We have 15 thins running, and after 10-15 minutes, the first 1 or 2 will be hung. If left all day, those same few thins plus a few more will remain hung. The only fix we've seen so far is restarting nginx. After a restart, the hung thins begin receiving requests again right away. Because of this, it seems like those thins might have been taken out of the upstream pool.

If I understand the docs (http://wiki.nginx.org/NginxHttpUpstreamModule#server) correctly, with the defaults (which we have), if nginx can't "communicate" with a backend server 3 times within 10 seconds, it will set that upstream server to an "inoperative" state. It will then wait 10 seconds, then try that server again. That makes sense, but we're seeing the thin hang indefinitely. I tried setting max_fails to 0 for each of the thins, but that didn't help. I can't find out what would cause an upstream server to become permanently "inoperative". We've seen big growth rate increases recently, so we're not sure if it could be related to that, or just more apparent as a result of more traffic in a shorter period of time. Is there something else (a changeable directive or other conditions) in nginx that would cause it to take a server completely out of the pool?
Nginx Removing Upstream Servers From Pool
You need to configure nginx to forward the X-Forwarded-For and X-Forwarded-Proto headers. Example:

    server {
        listen 80;
        server_name example.com *.example.com;
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

Then set up the UseForwardedHeaders middleware. This middleware will update HttpContext.Connection.RemoteIpAddress using the X-Forwarded-For header value.

    var builder = WebApplication.CreateBuilder(args);

    // ...

    builder.Services.Configure<ForwardedHeadersOptions>(options =>
    {
        options.ForwardedHeaders =
            ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    });

    var app = builder.Build();

    app.UseForwardedHeaders();

    // ...

Check: Configure ASP.NET Core to work with proxy servers and load balancers
I have an ASP.NET Core 7.0 API application in a docker container hosted in Kubernetes behind an Nginx ingress controller. To get the client IP address I'm using context.HttpContext.Connection.RemoteIpAddress. For all user requests I get a private IP address like '10.244.0.1'. In such instances I'm expecting a public IP address.
HttpContext.Connection.RemoteIpAddress returns private address for asp.net core in kubernetes
Seems that NGINX does not do the auto recovery by default. Changing the config part from:

    upstream core {
        server core:3001;
    }

to:

    upstream core {
        server core:3001 max_fails=1 fail_timeout=1s;
        server core:3001 max_fails=1 fail_timeout=1s;
    }

did the trick. The duplication is not a mistake: Nginx tries to resolve the first line, and on failure it will try the second one (circularly).
Every time I restart the upstream server, my NGINX shows "bad gateway", which is OK, but later, when the upstream server is up again, nginx doesn't recover automatically and I need to restart it (the nginx) manually. Is there an option to make nginx check every few seconds whether the upstream is back to normal?

    upstream core {
        server core:3001;
    }

    server {
        server_name core.mydomain.com corestg.mydomain.com www.core.mydomain.com;
        #listen 80;
        #listen [::]:80;

        gzip on;
        gzip_static on;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
        gzip_proxied any;
        #gzip_vary on;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;

        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;
        server_tokens off;
        ssl_certificate /etc/ssl/domain.crt;
        ssl_certificate_key /etc/ssl/domain.rsa;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;

        location / {
            proxy_ssl_session_reuse off;
            proxy_pass http://core;
            proxy_buffers 8 24k;
            proxy_buffer_size 2k;
            proxy_http_version 1.1;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            proxy_ignore_headers Set-Cookie;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-NginX-Proxy true;
            # proxy_set_header Host $http_host;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_cache_bypass $http_upgrade;
            proxy_redirect off;
        }
    }
NGINX shows "bad gateway" when upstream server restarts and doesn't come back to normal
Part of the solution is the response of the user Esmaeil Mazahery, but a few more steps must be taken. First, I changed the Angular application Dockerfile (passing additional build parameters: base-href and deploy-url):

    RUN npm run ng build -- --prod --base-href /projects/sample-app1/ --deploy-url /projects/sample-app1/

Then, I changed the reverse proxy nginx.conf configuration from:

    location /projects/sample-app1 {
        # Angular app
        proxy_pass http://sample-app1:80;
    }

to:

    location /projects/sample-app1 {
        # Angular app
        proxy_pass http://sample-app1:80/;
    }

Redirection did not work properly without a slash at the end. The order in which nginx locations are matched is also important. Therefore, before the paths /projects/sample-app1 and /projects/sample-app2 I put the modifier ^~, which causes the given locations to be taken first. This nginx location match simulation tool also proved very useful: Nginx location match tester
I have created a reverse proxy using Nginx which redirects to various applications (ASP.NET APIs and Angular apps). Reverse proxy nginx.conf (the most important settings):

    ...
    server {
        listen 80;
        server_name localhost 127.0.0.1;

        location /projects/sample-app1/api {
            proxy_pass http://sample-app1-api:80;
        }

        location /projects/sample-app1 { # Angular app
            proxy_pass http://sample-app1:80;
        }

        location /projects/sample-app2 { # Angular app
            proxy_pass http://sample-app2:80;
        }

        location /api {
            proxy_pass http://random-api:80;
        }

        location / {
            proxy_pass http://personal-app:80;
        }
    }

All APIs are available and work properly because their path indicated by the location parameter is the same as in the controllers. An Angular application that runs on the url "/" also works, but the problem is with "sample-app1" and "sample-app2". When I type the url to go to these applications, I get an error similar to:

    Uncaught SyntaxError: Unexpected token < main.d6f56a1….bundle.js:1

My suspicion is that the URL leading to the application contains additional elements (/projects/sample-app1) and its default index path is simply "/". So I would have to rewrite to remove the redundant part of the URL, but how to do it? My attempts so far have not been successful and I have tried different ways from other threads on StackOverflow and Github.

Angular app nginx.conf:

    events{}
    http {
        include /etc/nginx/mime.types;
        server {
            listen 80;
            server_name localhost;
            root /usr/share/nginx/html;
            index index.html;
            location / {
                try_files $uri $uri/ /index.html;
            }
        }
    }
Nginx reverse proxy for Angular apps
According to this thread from the official nginx development forum, you can't (although this thread is almost 10 years old, SSL/TLS re-handshake is still not supported by nginx). The only workaround suggested by Igor Sysoev is to use optional client certificate verification,

    ssl_verify_client optional;

and then check the $ssl_client_verify variable value:

    location /verysecure/ {
        if ($ssl_client_verify != SUCCESS) {
            # deny client
            return 403;
            # or process the request on some internal location, see
            # http://nginx.org/en/docs/http/ngx_http_core_module.html#internal
            # rewrite ^ /internal last;
        }
        ...
    }

However, using this workaround the certificate chooser window will pop up (only for clients who have the correct certificate installed) on the initial TLS handshake, not only on visiting the /verysecure/ URI.
Using Apache I created an HTTPS site that contains a folder called secure (which I want to access with user and password) and another folder called verysecure (which I want to access with certificate authentication). When I access the site using https://www.example.com I get the default index.html file located in the root, as would be expected. When I access https://www.example.com/secure/ I provide the user and password and get the index.html file located in that folder. When I access https://www.example.com/verysecure/ the certificate popup window allows me to choose the certificate that I want to use, and upon doing so I get the index.html file located in that folder. How can I configure Nginx so that the certificate chooser popup window comes only when I access https://www.example.com/verysecure/ and not when I access https://www.example.com/ or https://www.example.com/secure/?
Nginx certificate authentication of a specific location
@petomalina's answer is the easiest if your Cloud Run service is public. If it's not, it won't work, as per this answer. If your service is internal, I tried OP's #1 option of putting an nginx reverse proxy in front and was getting the same error: 404. The issue is caused by the Host header containing the host of the proxy, not of the Cloud Run service. The fix is to configure nginx to override the Host header:

    upstream my-foo {
        server foo.run.app:443;
    }

    location /foo/ {
        proxy_pass https://my-foo;
        proxy_set_header Host foo.run.app;
        include /etc/nginx/proxy_params;
        proxy_ssl_session_reuse on;
    }
I have deployed 6 different Flask based applications to Google Cloud Run. They work perfectly fine when I access them through the autogenerated URL. Now, I want to unify all 6 services under one domain name with different routes. For example:

    mydomain.com/user -> https://custom-user-asdtgthyju-de.a.run.app
    mydomain.com/product -> https://custom-product-asdtgthyju-de.a.run.app

Things I have tried:

1. Nginx deployed in a separate VM with reverse proxy to the Cloud Run URLs. Not working; the same configuration and code deployed in regular VMs works, but for Cloud Run deployments it shows route "/user" not found.

2. Cloud Endpoints using ESPv2: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-run. Got this working as per my requirement, but not able to pass the custom headers; e.g. I use X-API-KEY for authentication, and it doesn't even get to Cloud Run. It is being stripped off by ESPv2 itself.

Please help: how can I configure a reverse proxy / API gateway in front of Cloud Run services? Has anyone tried external Nginx to Cloud Run mapping? Thanks
APIs in Cloud Run and Nginx reverse proxy in VM
NGINX also supports the somewhat superior Brotli compression (aside from gzip), via a 3rd party module. So having all compression done in NGINX makes more sense. TTFB should not be affected if you keep both (NGINX will figure out that the response is already compressed). But for that same reason (NGINX receiving an already compressed response), you won't be able to add Brotli compression support to it (if you keep compression in expressJS), because the Brotli compression module expects an uncompressed response to work with.
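For illustration, a minimal sketch of what enabling the third-party ngx_brotli module might look like (directive names are from that module's documentation; verify against your build):

    # Sketch assuming nginx was built with the ngx_brotli module.
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json application/javascript image/svg+xml;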
I have an app running on nodeJS/express and also using nginx. If I compress the served files on both systems, I suppose that slows the server response time. Therefore, when combining nginx with expressJS, do you use compression in express or compression in nginx? Or does it simply not matter? I know it may be opinion based, but I really wanted some feedback on this. Thanks in advance
When combining nginx with expressJS, shall I use compression in express or nginx?
Yes. You will need a certificate for TLS/SSL and you can route the requests based on req.ssl_sni to the proper backend. I'm not sure if psql uses SNI; I think this is something you will have to check.

    frontend public_ssl
        bind :::9000 v4v6 crt /usr/local/etc/haproxy-certs
        option tcplog
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        use-server postgres-a if { req.ssl_sni -i postgres-a.example.com }
        use-server postgres-b if { req.ssl_sni -i postgres-b.example.com }

    backend postgres-a
        server postgres-a FURTHER SERVER PARAMS

    backend postgres-b
        server postgres-b FURTHER SERVER PARAMS

I have created a blog post with a picture for a more detailed description: https://www.me2digital.com/blog/2019/05/haproxy-sni-routing/
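One way to verify which backend a given SNI value reaches (my addition; the proxy hostname is a placeholder, the port is the one from the question):

    # Send a TLS handshake with an explicit SNI value; openssl s_client
    # sets SNI via -servername, so you can watch which backend answers.
    openssl s_client -connect proxy.example.com:9000 -servername postgres-a.example.com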
What I want to be able to do is connect to a postgres server like this:

    psql -h postgres-a.example.com -p 9000

That connection should be received by a proxy server (like nginx or haproxy) and redirected to database A because of the host name postgres-a.example.com. If I use postgres-b.example.com and the same port, it should go to database B. I have been researching this, but I am still not 100% sure of how this would work. I read that the only way to redirect a TCP connection (psql) based on host name is using the SNI header. But I still don't understand if we will need an SSL certificate for this, or if we will need to use https://postgres-a.example.com (that doesn't make any sense to me). How would it work? Can someone help me understand this?
Is it possible to redirect a TCP connection based on host name?
Your current config tells nginx to map URLs 1-to-1, which means that /next/what-ever will be mapped to http://localhost:3000/next/what-ever. To solve this you need to add an additional slash:

    server {
        ...
        location /next/ {
            # the trailing slash tells nginx to drop the /next/ prefix
            proxy_pass http://localhost:3000/;
        }
    }

For more info read https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/#passing-a-request-to-a-proxied-server
Recently I've been learning nextjs because I want to make a site which is SEO friendly using React. Everything looked great until I ran into a problem: how can I deploy a nextjs app into a directory which is not the root directory, for example '/next'? I use a simple config for nextjs, use the default node server listening on port 3000, and then use nginx as a reverse proxy.

next.config looks like:

nginx conf:

When I access localhost:8080/next I get the 404 page which is provided by nextjs, and the CSS and JS are also 404. It seems like the config of nextjs or nginx is wrong.
How to deploy nextjs into a directory which is not a root directory
Comment out the following line:

    # include /etc/nginx/conf.d/*.conf;

Why? Due to the line include /etc/nginx/conf.d/*.conf; the default.conf is loaded and your server config is ignored. In addition, you need to include the root information in your server (which previously was provided by default.conf).

How to reproduce: put the following 2 files in the same folder and execute

    docker build -t test . && docker run --rm -p 8080:80 test

Dockerfile:

    FROM nginx:1.16.0-alpine
    # COPY ./build /usr/share/nginx/html
    COPY nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

nginx.conf:

    user nginx;
    worker_processes auto;

    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        #tcp_nopush on;

        keepalive_timeout 65;

        #gzip on;

        #include /etc/nginx/conf.d/*.conf;

        server {
            location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
                try_files $uri $uri/ /index.html;
            }

            location ~ ^/$ {
                rewrite ^.*$ /index.html last;
            }

            listen 80;
            server_name localhost;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }
    }
I'm using the nginx:1.16.0-alpine image of Docker to serve a React app (which is built beforehand), and I want to redirect to the index.html page in all cases (whatever URL is requested). The nginx.conf file has the following content:

    user nginx;
    worker_processes auto;

    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        #tcp_nopush on;

        keepalive_timeout 65;

        #gzip on;

        include /etc/nginx/conf.d/*.conf;

        server {
            location / {
                try_files $uri $uri/ /index.html;
            }

            location ~ ^/$ {
                rewrite ^.*$ /index.html last;
            }
        }
    }

Actually the server section is added and the other lines are default! The Dockerfile content is as below as well:

    FROM nginx:1.16.0-alpine
    COPY ./build /usr/share/nginx/html
    COPY nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

To make sure, after building the container from the image and going to the shell inside it, the file /etc/nginx/nginx.conf has the above content. But the problem is: when I browse the url http://localhost:3000/login (for example), it doesn't redirect to http://localhost:3000/index.html. It shows:

    404 Not Found
    nginx/1.16.0

(The Docker container is run on output port 3000 and local port on 80.) Does anybody know why it is not working!? (I also searched similar ways, but no success!)

UPDATED: The page React-router and nginx doesn't solve the problem.
How to always redirect to index.html in Nginx Docker?
The core of the problem is actually Chromium. This thing only fails in Chromium from what I can see. The problem with Nginx is in the implementation of http2_push_preload. What Nginx seeks is a header of the form Link: <url>; as=type; rel=preload. It reads it and serves the files via push; unfortunately when the browser (I only tested Chrome actually) receives the document with the Link header as well as the push, they conflict, resulting in a significant slowdown, and the browser downloads the resources that were seen while parsing the document instead.

    # This results in HTTP/2 Server Push and the requests get duplicated
    # due to the `Link` headers that were passed along
    location / {
        proxy_pass http://localhost:3000;
        http2_push_preload on;
    }

    # This results in Resource Hints getting triggered in the browser.
    location / {
        proxy_pass http://localhost:3000;
    }

    # This results in regular HTTP/2 (no push)
    location / {
        proxy_pass http://localhost:3000;
        http2_push_preload on;
        proxy_hide_header link;
    }

    # This results in a valid HTTP/2 Server Push (proper)
    location / {
        proxy_pass http://localhost:3000;
        http2_push /commons.4e96503c89eea0476d3e.module.js;
        http2_push /index.module.js;
        http2_push /_app.module.js;
        http2_push /webpack-0b10894b69bf5a718e01.module.js;
        http2_push /main-3c17cb16bbbc3efc4bb5.module.js;
    }

It seems Nginx does not work well with this feature yet... If only I could remove the Link headers and use http2_push_preload... Anyway, I got it to work with the usage of H2O. H2O let me delete the headers while preserving HTTP/2 Server Push:

    # h2o.conf
    [...]
    proxy.reverse.url: "http://host.docker.internal:3000/"
    header.unset: "Link"

Works alright with H2O. I hope Nginx fixes the way http2_push_preload works and allows for more control. As an aside, I think that Chromium should deal with the issue anyway instead of downloading twice as many bytes.
A response for a document with the following headers enters Nginx:

    link: ; as=image; rel=preload
    link: ; as=script; rel=preload
    link: ; as=script; rel=preload
    link: ; as=script; rel=preload
    link: ; as=script; rel=preload
    link: ; as=script; rel=preload

With the help of HTTP/2 Server Push the requests are pushed to the client, but 5 out of the 6 requests download two times (once with the push and once triggered by the document). The Network tab in Chrome Dev Tools looks like this:

I've tested whether the Type is set properly and it seems alright. What could be the issue? Consecutive requests (chrome cache enabled) result in a similar way as well:

What could be wrong? I'm pretty sure the request should not duplicate.

@edit I tried doing the Server Push without Nginx (talking directly to the Node.js backend instead of the backend attaching Link headers for Nginx). It works without an issue. The problem pops up when I use Nginx. Btw. I do know that one should not push all the contents via Server Push, especially images, but I did it just for a clear test. If you look closer it seems that only the scripts get duplicated and the picture downloads only once.
HTTP/2 Server Push results in duplicate requests
Found out that I made two logical mistakes:

Sticky sessions don't work this way. I assumed that Kubernetes would look into the cookie and create some mapping of cookie hashes to pods. But instead, another session cookie is generated and appended to our http header. nginx.ingress.kubernetes.io/session-cookie-name is only the name of that generated cookie, so per default it's not required to change it.

Scope to the right object: the annotations must be present on the ingress, NOT the deployment (stupid c&p mistake).

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp-k8s-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    spec:
      tls:
      - hosts:
        - myapp-k8s.local
      rules:
      - host: myapp-k8s.local
        http:
          paths:
          - path: /
            backend:
              serviceName: myapp-svc
              servicePort: 80

This works as expected.
I'm trying to port an ASP.NET Core 1 application with Identity to Kubernetes. The login doesn't work and I get different errors like The anti-forgery token could not be decrypted. The problem is that I'm using a deployment with three replicas, so further requests are served by different pods that don't know about the anti-forgery token. Using replicas: 1 it works.

In the same question I found a sticky session documentation which seems a solution for my problem. The cookie name .AspNetCore.Identity.Application is from my browser tools.

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: myapp-k8s-test
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: ".AspNetCore.Identity.Application"
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: myapp-k8s
        spec:
          containers:
          - name: myapp-app
            image: myreg/myapp:0.1
            ports:
            - containerPort: 80
            env:
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Production"
          imagePullSecrets:
          - name: registrypullsecret

This doesn't work, either with or without the leading dot in the cookie name. I also tried adding the following annotations:

    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1

What is required to allow sticky sessions on Kubernetes with ASP.NET Core?
Sticky session for ASP.NET Core on Kubernetes deployment
It will work like this:

    $ CC_OPTS=--with-cc-opt='-O2 -g'
    $ ./configure "$CC_OPTS"

so that the expansion of $CC_OPTS is passed as a single argument to ./configure. But if you also wanted to pass, say:

    --with-ld-opt='-Wl,-gc-sections -Wl,-Map=mapfile'

through a variable, you would need:

    $ CC_OPTS=--with-cc-opt='-O2 -g'
    $ LD_OPTS=--with-ld-opt='-Wl,-gc-sections -Wl,-Map=mapfile'
    $ ./configure "$CC_OPTS" "$LD_OPTS"

because you need to pass two arguments to ./configure, and:

    ./configure "$CC_OPTS $LD_OPTS"

passes only one, and will fail.
I wish to call the configure command (to compile nginx) from a bash script like this:

    CONF_OPTS=' --with-cc-opt="-O2 -g"'
    ./configure ${CONF_OPTS}

but I got the following error:

    ./configure: error: invalid option "-g"

When I pass the options like:

    ./configure --with-cc-opt="-O2 -g"

I get no error.

To reproduce:

    curl -O http://nginx.org/download/nginx-1.14.2.tar.gz
    tar xfz nginx-1.14.2.tar.gz
    cd nginx-1.14.2
    OPTS='--with-cc-opt="-O2 -g"'
    ./configure ${OPTS}

Results:

    ./configure: error: invalid option "-g""

But:

    ./configure --with-cc-opt="-O2 -g"

is OK. I think it is not nginx related; it looks to me like a bash quote substitution issue.
autotools configure error when passing options using shell variable
I'm assuming that your backend is serving your assets, so I think the problem is that your location {} block doesn't have an upstream like the regular paths defined in the nginx ingress. There's a lot of lua code in the nginx.conf of your nginx-ingress-controller so it might take time to understand, but you can copy your nginx.conf locally:

    $ kubectl cp nginx-ingress-controller-xxxxxxxxx-xxxxx:nginx.conf .

Check the location {} blocks that are defined for your current services and copy them at the bottom of your server-snippet location {} block like this. I believe a server-snippet like this would do:

    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|mp4|ogg|ogv|webm|htc)$ {
        access_log off;
        expires 2M;
        add_header Cache-Control "public, max-age=5184000"; # 5184000 is 60 days

        # <== add what you copied here
        set $namespace "k8s-namespace";
        set $ingress_name "ingress-name";
        set $service_name "service-name";
        set $service_port "80";
        set $location_path "/images";
        ...
        ...
        ...
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout;
        proxy_next_upstream_tries 3;

        proxy_pass http://upstream_balancer;
        proxy_redirect off;
    }
I'm hoping someone can help me here because I'm stuck. I'm moving over our nginx config from a traditional nginx/node server setup whereby both nginx and the node server are on the same machine. In Kubernetes, the ingress controller (nginx) obviously lives in another container. Where I'm getting stuck is reimplementing our rules that disable access logging for images and assets using location blocks. Our configuration looks like:

    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|mp4|ogg|ogv|webm|htc)$ {
        access_log off;
        expires 2M;
        add_header Cache-Control "public, max-age=5184000"; # 5184000 is 60 days
    }

When I implement this same block in a server-snippet it matches, but all the assets throw a 404. I did some Googling and found an answer that might explain why here: https://stackoverflow.com/a/52711388/573616 but the suggested answer hints to use an if block instead of a location block, because the location interferes with the proxy upstream; however, disabling access logs is not possible from inside the if block, only from a location context. The rest of my ingress looks like (everything else is default):

    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    underscores_in_headers on;
    gzip_types text/css application/x-javascript application/javascript application/json image/svg+xml;
    client_max_body_size 5M;
    proxy_buffers 8 16k;
    proxy_set_header X-Request-Start "t=${msec}";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_redirect off;

The images live at /images/ on the upstream server path. So I'm back to trying to figure out how to get these location blocks working so I can actually disable the access logs for these images from a server-snippet. So can anyone tell me how to get the above location block to not throw 404's for assets in an ingress controller?
Using Location Blocks In Kubernetes Ingress Nginx server-snippet Causes 404
It is possible. Add the try_files directive to your location block; this will tell nginx to answer all requests that cannot be matched to a filesystem path with your index.html:

    try_files $uri /index.html;
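Put together with the config from the question, the result might look like this (a sketch; the root path is the one quoted in the question):

    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/src/app/angularjs/dist;
            # serve the file if it exists, otherwise fall back to index.html
            try_files $uri /index.html;
        }
    }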
I currently have the code below. I am wondering if it's possible to still serve this root even when I go to other pages like http://localhost/dog. The problem with my config below is that it will return 404:

    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/src/app/angularjs/dist;
        }
    }
nginx to always serve the root index.html in every path
If there is a percent sign in your URLs, just exchange the percent in your request with %25, like this:

    https://domain/testfile%253a

This then becomes the file https://domain/testfile%3a. The problem is not "Nginx decoding the URI" - you are trying to circumvent how normal URIs according to the RFC standard work, where the percent sign is always used to encode special characters. You might want to read the article about percent encoding and avoid all special characters which are used in URIs, because using those will all cause problems (characters like ?, &, #, and many more, and also :). For filenames it makes sense to avoid them completely, for example by replacing them with another character like _.
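To make the double-encoding concrete (my example, with a placeholder host):

    # %25 is the percent sign itself, so %253a decodes once to %3a -
    # which matches the literal "%3a" in the filename on disk.
    curl 'https://example.com/testfile%253a'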
I run an nginx server which serves static files. Some filenames contain strings like %3a:

    /var/www/testfile%3a

If I try to request those files, I get a 404 Not Found error. This seems to happen because nginx decodes the URL and replaces %3a with :, and then does not find a file named /var/www/testfile:. I inferred this from the following debug output from nginx:

    2018/06/21 10:03:21 [debug] 32523#0: *6 http process request line
    2018/06/21 10:03:21 [debug] 32523#0: *6 http request line: "GET /testfile%3a HTTP/1.1"
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'2F:/'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:1 in:'74:t'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'65:e'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'73:s'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'74:t'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'66:f'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'69:i'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'6C:l'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'65:e'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'25:%'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:4 in:'33:3'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:5 in:'61:a'
    2018/06/21 10:03:21 [debug] 32523#0: *6 s:0 in:'3A::'
    2018/06/21 10:03:21 [debug] 32523#0: *6 http uri: "/testfile:"

So far, I came up with 2 possible solutions:

    1. Rename all served files, so that %3a becomes : in the filename, and educate every person who uploads files here about this.
    2. Write a rewrite rule that escapes the % sign as %25. But I believe the rewrite phase comes after the URL has already been decoded. Currently there are no unescaped : characters in the filenames, so I could rewrite : to %253a and that might work. Though there might be other characters in those filenames where this isn't possible, as they might be in the URL both in encoded and unencoded form.

I think there might be a simpler solution that I'm overlooking. Is there a way to tell nginx to treat every URL literally, i.e. without decoding escaped characters?
Stop nginx from decoding URL
I was able to solve this by using cookie_nocache. Update the proxy_cache_bypass directive to:

    proxy_cache_bypass $cookie_nocache;

If you need to bypass the cache, set a cookie named "nocache" to true (any value that isn't empty or 0 will work). Since the browser will send the cookie on subsequent requests, this will work. To quickly test this, open the console and add the cookie like this:

    document.cookie = "nocache=true"
I have a single page app and am trying to use the query parameter nocache=true to bypass the Nginx cache for the first response (the HTML file) and ALL subsequent requests initiated by it (to get CSS, JS, etc). According to this, I can bypass my cache using the query parameter, but it is not working as expected.

Steps to reproduce the issue: use this minified generic configuration:

    http {
        ...
        proxy_cache_path /var/temp/ levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
        server {
            ...
            location / {
                # Using angularjs.org as an example
                proxy_pass https://angularjs.org;
                proxy_set_header Host angularjs.org;
                proxy_cache STATIC;
                proxy_cache_valid 200 10m;
                proxy_cache_bypass $arg_nocache;
                add_header X-Cache-Status $upstream_cache_status always;
            }
        }
    }

Expected: the response headers of the requests "http://servername" and "http://servername/css/bootstrap" (or any other subsequent requests initiated by http://servername?nocache=true) to bypass the cache, i.e. contain "X-Cache-Status: BYPASS".

Actual: the response header of "http://servername" contains "X-Cache-Status: BYPASS", but "http://servername/css/bootstrap" does not; instead the value of "X-Cache-Status" is HIT/MISS/etc depending on the cache status.

Am I using proxy_cache_bypass in a wrong way, or do I need to do more to achieve the expected behavior? Thanks!
Nginx: Punching a Hole Through My Cache Not Working as Expected
Easy peasy:

    location / {
        limit_req zone=admin burst=5 nodelay;
        limit_req_status 503;
        try_files $uri $uri/ /index.php?$query_string;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.1-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
    }

    location ^~ /.well-known/ {
        auth_basic off;
    }

I don't think it can be optimised.
I did some coding to get the nginx config file working. My objective is to allow the whole .well-known folder and its subfolders, leaving the rest with basic auth and limit_req, while staying Laravel compatible. The problem now with Let's Encrypt is that it is not renewing the cert, because the route .well-known/acme-challenge/wPCZZWAN8mlHLSQWr7ASZrJ_Tbk71g2Cd_1tPAv2JXM is asking for permission, probably affected by location ~ \.php$. So the question is: can I integrate one solo function? Like ~ / and \.php$ \.(?!well-known).* And if so, can I integrate the code of both all together?

    location ~ /\.(?!well-known).* {
        limit_req zone=admin burst=5 nodelay;
        limit_req_status 503;
        try_files $uri $uri/ /index.php?$query_string;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
Laravel + basic auth except one folder not working
I went ahead and reproduced your use case. Assuming the installation of the nginx ingress controller through helm went smoothly and everything seems fine when listing resources, you need to specify the paths in the ingress yaml file, as follows:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-resource
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
    spec:
      rules:
      - host: test.demo.com
        http:
          paths:
          - path: /path1
            backend:
              serviceName: s1
              servicePort: 8080
          - path: /path2
            backend:
              serviceName: s1
              servicePort: 8080
          - path: /path3
            backend:
              serviceName: s2
              servicePort: 80
      - host: demo.test.com
        http:
          paths:
          - backend:
              serviceName: s2
              servicePort: 80

Then:

    curl -I -H 'Host: test.demo.com' http://external-lb-ip/path1

for example, should return 200.
Running on Google Cloud platform / Container Engine - How do I set it up to point to this Ingress in the following?I have installed Nginx-ingress on Kubernetes with Helm and it works for thedefault backend - 404.I want to be able to use different http uri path, like/v1,/v2and others.For my own Chart that I want to use Ingress I have the following invalues.yaml:# Default values for app-go. # This is a YAML-formatted file. # Declare variables to be passed into your templates. replicaCount: 1 image: repository: gcr.io//app-go tag: latest pullPolicy: IfNotPresent service: type: ClusterIP port: # kubernetes.io/tls-acme: "true", ingress: enabled: true annotations: { kubernetes.io/ingress.class: "nginx", kubernetes.io/ingress.global-static-ip-name: "kube-ingress" } # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" path: / hosts: - tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. # limits: # cpu: 100m # memory: 128Mi # requests: # cpu: 100m # memory: 128Mi nodeSelector: {} tolerations: [] affinity: {}How do I specify annotations for Nginx-ingress for different paths.helm version Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"}
Nginx-ingress setting annotations to work with Kubernetes Helm install values.yaml
You have to add sanitizer flags like -fsanitize=fuzzer to your CFLAGS and your LDFLAGS. If they aren't passed to the linker but just to the compiler, you get tons of undefined-symbol errors for sanitizer runtime library functions (like the one you quoted in your question). Note that when using -fsanitize=fuzzer it makes sense to combine it with AddressSanitizer (i.e. -fsanitize=fuzzer,address). Also, with libFuzzer you have to provide your own fuzzer callback function LLVMFuzzerTestOneInput() and omit a main() function.
I can successfully compile Nginx with the following variables in the makefile:

CC = clang-6.0
CFLAGS = -pipe -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Werror -g

When attempting to use -fsanitize=fuzzer or -fsanitize=fuzzer-no-link and changing my Makefile to:

CFLAGS = -pipe -fsanitize=fuzzer-no-link -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Werror -g

I get numerous undefined references to __sancov_lowest_stack and to __sanitizer_cov_trace_const_cmp8. How would I fix this? Which libraries am I missing?
Clang libFuzzer Undefined Reference to `__sanitizer_cov_trace_const_cmp8'
When you request the URI /, nginx will process two requests. The first request (for the URI /) is processed by the location = / block, because that has highest precedence. The function of that block is to change the request to /index.html and restart the search for a matching location block. The second request (for the URI /index.html) is processed by the location / block, because that matches any URI that does not match a more specific location. So the final response comes from the second location block, but both blocks are involved in evaluating access. See this document for location syntax and this document on the index directive.
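To make the two-step processing concrete, here is the question's configuration again with comments tracing a request for / (a sketch only; the roots are the ones from the question):

location = / {          # step 1: exact match wins for the URI /
    root test1;
    index index.html;   # rewrites the request to /index.html and restarts the location search
}
location / {            # step 2: the restarted request for /index.html lands here
    root test2;
    index index.html;   # so test2/index.html is what gets served
}

This also explains the 403: uncommenting deny all in the first block denies the initial request for / before the internal redirect ever happens.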
When I study the nginx location configuration, I have some questions. Here is my example. The file structure is like this:

test1/index.html
test2/index.html

and the nginx.conf location part is like below:

location = / {
    root test1;
    index index.html;
    # deny all;
}
location / {
    root test2;
    index index.html;
}

The question is: when I issue curl -v http://host/, I get the page of test2/index.html, but when I get rid of the # in the location = / {} part, the result is 403 Forbidden. Can anyone explain why? When both location = same_uri {A} and location same_uri {B} are in the configuration file, which configuration will match, A or B? Thank you very much. http://nginx.org/en/docs/http/ngx_http_core_module.html#location
About priority of nginx location / {} and location = {}
Finally, I moved the X-Accel part from $request to $response, and just set the X-Accel-Redirect header. If we want to limit the download speed, we can use $response->headers->set('X-Accel-Limit-Rate', 10000); it works well, and the number is in bytes. Then I changed $response->headers->set('Content-Disposition', 'attachment;filename="'.$filename.'"'); to $response->setContentDisposition(ResponseHeaderBag::DISPOSITION_ATTACHMENT, $filename);. The final code is:

BinaryFileResponse::trustXSendfileTypeHeader();
$response = new BinaryFileResponse($file->getAbsolutePath());
$response->setContentDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    $filename
);
$response->headers->set('X-Accel-Redirect', '/protected-files/path/to/file');
return $response;

And in Nginx:

location /protected-files/ {
    internal;
    alias /var/www/html/files/;
}
I want to use Nginx X-Accel with Symfony. For the moment I have this code:

$request->headers->set('X-Sendfile-Type', 'X-Accel-Redirect');
$request->headers->set('X-Accel-Mapping', '/var/www/html/files/=/protected-files/');
$request->headers->set('X-Accel-Limit-Rate', '1k');

BinaryFileResponse::trustXSendfileTypeHeader();
$response = new BinaryFileResponse($file->getAbsolutePath());
$response->headers->set('Content-Disposition', 'attachment;filename="'.$filename.'"');
$response->headers->set('Cache-Control', 'no-cache');

return $response;

And the Nginx conf:

location /protected-files {
    internal;
    alias /var/www/html/files;
}

To test the code (to know if the file is really served by Nginx), I added an X-Accel-Limit-Rate of 1 kB/s, but a 2 MB file is downloaded instantly, so I'm sure it doesn't work correctly. I found this part of the code on the internet, because the Symfony doc doesn't really explain how to use it (http://symfony.com/doc/current/components/http_foundation.html#serving-files). Why do I need to return a BinaryFileResponse with the file, like without Nginx X-Sendfile, and add the X-Sendfile/X-Accel properties to the request? I just return the response, not the request, so how can it work?
How to use Nginx X-Accel with Symfony?
You're building something very un-HTTP-like. An HTTP request should not take more than a few milliseconds to answer, and HTTP requests shouldn't be interdependent; if you need to execute calculations which take rather long, either try to speed them up by changing your architecture/model/caching/whatever, or treat it explicitly as long-running jobs which can be controlled through an HTTP interface. That means that "a job" is a "physical resource" which can be queried through HTTP. You create the resource through a POST request:

POST /tasks
Content-Type: application/json

{"some": "parameters", "go": "here"}

{"resource": "/tasks/42"}

Then you can query for the status of the task:

GET /tasks/42

{"status": "pending"}

And eventually get results:

GET /tasks/42

{"status": "done", "results": [...]}

When you POST a new task which supersedes the old one, your backend can cancel the old task in any way it sees fit; the resource would then return a status of "cancelled" or similar. Your client would simply not query the old resource again after starting a new task. Even if your client queries the resource once every second, it will still use fewer resources on the server (one connection being open for a solid 10 seconds vs. 10 connections open for 200 ms within the same timeframe), especially if you apply some intelligent caching to it. This is also much more scalable, as you can scale the task backend independently of the HTTP frontend to it, and the HTTP frontend can be scaled trivially to multiple servers and load balancers.
We're building a REST API using aiohttp. Our app is designed so that users send requests more frequently than they receive responses (because of calculation time). For the user, only the result of the latest request matters. Is it possible to stop calculations on outdated requests? Thank you.
how to cancel previous requests when a new request comes from the same user in the same session
I found a solution that works. If you change the order in fastcgi.conf, the correct values are returned by PHP_SELF and SCRIPT_NAME: the line

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

is moved to the top of the file. fastcgi.conf:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
Why does $_SERVER['PHP_SELF'] return /index.php/index.php?

Request: http://example.com
Output: /index.php/index.php

index.php:

<?php echo $_SERVER['PHP_SELF'];

nginx.conf:

server {
    listen 80;
    server_name domain.com;
    root /var/www/public/www;

    # Add trailing slash
    rewrite ^([^.\?]*[^/])$ $1/ permanent;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include /var/ini/nginx/fastcgi.conf;
        fastcgi_pass php;
        fastcgi_param SCRIPT_FILENAME /var/www/public/www/index.php;
    }
}

fastcgi.conf:

fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
PHP_SELF returns /index.php/index.php
I would recommend using Linux ACLs and giving the PHP process rights to write into the directory. That way you don't need sudo. Also, you will need rights to reload the nginx process, and imho having a cronjob under the root user that reloads the configuration, if it has changed and is valid, is a much better option. You should read the relevant answers that suggest not having rights to do a sudo call from PHP, for example https://stackoverflow.com/a/35340819/602899 and https://stackoverflow.com/a/29520712/602899. Just don't do it. Find a workaround that doesn't require sudo permissions.
I am working on a Laravel project where I have to generate an Nginx configuration file and store it in the /etc/nginx/sites-available directory, which only grants write rights to the admin user. I have admin rights on the server. I just want to know if there is a way to do this using the Process Component of the Symfony stack. Thanks a lot and bests ;)
How to use sudo commands from Symfony Process Component?
First of all, you need to understand one thing: while nginx is decrypting the file, all other requests to that worker will be blocked. That's why nginx supports FastCGI but not plain CGI. If that is OK for you (for example, if this nginx instance is used only for update purposes), you can use the perl or lua extension: http://nginx.org/en/docs/http/ngx_http_perl_module.html, https://github.com/openresty/lua-nginx-module. Using these modules you can exec a shell. To access the uploaded file you need to set the client_body_in_file_only directive: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_in_file_only

Example for the perl module (untested):

location /upload {
    client_body_in_file_only clean;
    perl 'sub {
        my $r = shift;
        if ($r->request_body_file) {
            system("openssl smime -decrypt -binary -in ".$r->request_body_file." -inform DER -out /tmp/decrypted.zip -inkey private.key -passin pass:your_password");
        }
    }';
}

But it is much better to use FastCGI. You can use a light fastcgi wrapper for it, for example fcgiwrap: https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/
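For the fcgiwrap route, a minimal sketch of the nginx side might look like this; the socket path and the decrypt.cgi script are assumptions for illustration, not something from the question:

location /upload {
    client_body_in_file_only clean;
    include fastcgi_params;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;                # assumed fcgiwrap socket path
    fastcgi_param SCRIPT_FILENAME /usr/local/bin/decrypt.cgi;  # hypothetical wrapper around the openssl command
    fastcgi_param REQUEST_BODY_FILE $request_body_file;        # nginx variable: path to the buffered upload
}

The hypothetical wrapper script would read REQUEST_BODY_FILE from its environment and run the same openssl smime -decrypt command shown above.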
I have a small embedded Linux device that has 128 MB of flash storage available to work with as a scratchpad. This device runs an NGINX web server. To do a firmware update, the system receives an encrypted binary file as an HTTPS POST through NGINX to the scratchpad. The system then decrypts the file and flashes a different QSPI flash device to complete the update.

The firmware binary is encrypted outside the device like this:

openssl smime -encrypt -binary -aes-256-cbc -in plainfile.zip -out encrypted.zip.enc -outform DER yourSslCertificate.pem

The firmware binary is decrypted, after being received through NGINX, on the device like this:

openssl smime -decrypt -binary -in encrypted.zip.enc -inform DER -out decrypted.zip -inkey private.key -passin pass:your_password

I'd really like to decrypt the binary as it is received (on the fly) through NGINX, so that it appears on the flash scratchpad in its decrypted form. I've been unable to find any existing NGINX modules on Google that would do this. How might I accomplish this? Thanks.
Decrypt OpenSSL binary through NGINX as it is received (on the fly)
Here you are using the HTTP basic authentication option; after successful authentication against the nginx server, the browser sends a base64 encoding of username:password. Just use the Python base64 module to get the username & password:

>>> from base64 import b64decode
>>> authorization_header = "dXNlcm5hbWU6cGFzc3dvcmQ="  # value from flask request header
>>> details = b64decode("dXNlcm5hbWU6cGFzc3dvcmQ=")
>>> username, password = details.split(':')
>>> username
'username'
>>> password
'password'
>>>
I have an nginx http server which authenticates users and passes the authenticated request to a Flask app via wsgi. When I print the entire header from the flask app no user information is available.Is it possible to get nginx to include the username in the request header?Here is the server block with the authentication config...server { listen 80; server_name notes.example.org; return 301 https://$server_name$request_uri; } server { # SSL configuration listen 443; listen [::]:443; include snippets/ssl-example.org.conf; include snippets/ssl-params.conf; server_name notes.example.org; location / { auth_basic "Restricted Content"; auth_basic_user_file /path/to/passwd/file; include uwsgi_params; uwsgi_pass unix:/path/to/wsgi/socket; } }Request header as seen by the app running under wsgi...Authorization: Basic **************== Content-Length: 0 User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36 Dnt: 1 Host: notes.example.org Upgrade-Insecure-Requests: 1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Accept-Language: en-US,en;q=0.8 Content-Type: Accept-Encoding: gzip, deflate, sdch, br
How to configure nginx to pass user info to wsgi/flask
I am not sure this repo has been updated for PHP after the PPA migration (see https://github.com/oerdnj/deb.sury.org/wiki/PPA-migration-to-ppa:ondrej-php). Basically, in scripts/php.sh you need to replace the ppa line by

sudo add-apt-repository ppa:ondrej/php

(make sure to run sudo apt-get update if you're running this command directly from the VM after the initial provisioning), and to install PHP 5.6 you need to run

sudo apt-get install -qq libapache2-mod-php5.6

With this change, you now get:

vagrant@vaprobash:~$ php -v
PHP 5.6.28-1+deb.sury.org~trusty+1 (cli)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
    with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies

If you need additional PHP modules, make sure to install them by specifying 5.6 in the package name, such as sudo apt-get install -qq php5.6-fpm.
I have a Vaprobash Vagrantfile building an Ubuntu Nginx stack. In it I specify PHP v5.6:

php_version = "5.6" // Options: 5.5 | 5.6

However, after I run $ vagrant up and ssh into the box, $ php -v shows PHP 5.5.9-1ubuntu4.20 (cli) (built: Oct 3 2016 13:00:37). Why wasn't 5.6 installed?
Vagrantfile PHP v5.6 specified but v5.5 installed
Whilst it isn't Azure-specific, we've published a step-by-step guide to publishing ServiceStack .NET Core Docker Apps to Amazon EC2 Container Service which includes no-touch nginx virtual host management by running an instance of the jwilder/nginx-proxy Docker App to automatically generate new nginx virtual hosts for newly deployed .NET Core Docker Apps. The jwilder/nginx-proxy isn't AWS-specific and should work for any Docker solution; how it works is explained in its introductory blog post. Using nginx-proxy is a nice vendor-neutral solution for hosting multiple Docker instances behind the same nginx reverse proxy, but for scaling your Docker instances you'll want to use the orchestration features in your preferred cloud provider. E.g. in AWS you can scale the number of compute instances you want in your ECS cluster or utilize Auto Scaling, where AWS will automatically scale instances based on usage metrics. Azure's solution for managing Docker instances is Azure Container Service, which lets you scale instance count using the Azure acs command-line tool.
I'm wondering if anyone with bigger brains has tackled this.I have an application where each customer has a separate webapp in Azure. It is Asp.net MVC with a separate virtual directory that houses ServiceStack. The MVC isn't really used, the app is 99% powered by ServiceStack.The architecture works fine, but as we get more customers, we have to manage more and more azure webapps. Whilst we can live with this, the world of Containers is upon us and now that ServiceStack supports .net core, I have a utopian view of deploying hundreds of containers, and each request for any of my "Tenants" can go to any Container and be served as needed.I think I have worked out most of how to refactor all elements, but there's one architectural bit that I can't quite work out.It's a reasonably common requirement for a customer of ours to "Try" a new feature or version before any other customers as they are helping develop the feature. In a world of lots of Containers on multiple VMs being served by a nginx container (or something else?) on each VM, how can you control the routing of requests to specific versioned containers in a way that doesn't require the nginx container to be redeployed (or any downtime) when the routing needs changing - e.g. can nginx route requests based on config in Redis?Any advise/pointers much appreciated.G
ServiceStack Docker architecture
Ensure these 2 steps are in place.

1. Check your nginx configuration /etc/nginx/conf.d/example.conf and include the domain in the server_name, like so:

server_name example.com api.example.com;

2. Check that you have a route set up within the routes/api.php file. Using the sub-domain group is optional, but be sure that you have the correct routes.

Example of using a domain group:

Route::group(['domain' => 'api.example.com'], function () {
    Route::get('/v1/users', ['as' => 'api.users.index', 'uses' => 'UserController@index']);
});

Example without use of a domain group, allowing both URLs to point to the same controller (be sure to define separate route names via the 'as' key):

Route::get('/v1/users', ['as' => 'api.users.index', 'uses' => 'UserController@index']);
Route::get('/api/v1/users', ['as' => 'users.index', 'uses' => 'UserController@index']);

Update: Refer to the official Laravel 5.3 documentation regarding the use of sub-domain routes: https://laravel.com/docs/5.3/routing#route-group-sub-domain-routing
How do I route api.example.com to example.com/api, so I can just use api.example.com/v1/users rather than example.com/api/v1/users? I'm using nginx, thank you.
Laravel 5.3, using api.example.com to example.com/api
The official documentation is "Settings NGiNXd". Check if issue 1374 is relevant in your case:

gitlab_rails['registry_key_path'] = "/etc/gitlab/ssl/gitlab.example.com.key"
registry['rootcertbundle'] = "/etc/gitlab/ssl/gitlab.example.com.crt"

You do not need to specify these two, as per the documentation on enabling the Registry. These two are for internal communication and are auto-generated.
I have an Omnibus GitLab install. I am trying to set up an HTTPS URL with a self-signed cert. I am using Ubuntu 14.04 as my host OS. The steps I'm following are:

Modified gitlab.rb:

external_url 'https://gitlab.example.com'
nginx['redirect_http_to_https'] = true

Created a self-signed cert with the proper name and placed it under /etc/gitlab/ssl with permission 600:

-rw------- 1 root root 1289 Sep 5 08:38 gitlab.example.com.crt
-rw------- 1 root root 1679 Sep 5 08:38 gitlab.example.com.key

Then I did gitlab-reconfigure and restart. So when I try the new URL https://gitlab.example.com, the page doesn't load. Port 443 is open by default and I am able to netcat to it. I am following this blog for the setup: GitLab HTTPS with selfsigned. I don't see any errors under /var/log/gitlab. Is there any additional nginx config required for a self-signed cert? Can someone please let me know what logs I should be looking at and whether I am missing any steps?
Enable HTTPS self-signed cert for GitLab Community Edition for Omnibus installer
So, thanks to Tom, things started working out :) The steps he indicated were the following, and I suggest this approach for everyone, not just those who have errors.

1. Go to the Mozilla SSL Configuration Generator and generate a configuration for your server.
2. Modify the file accordingly for your needs.
3. Go to SSL Labs and do a server test for your newly created server.
4. Repeat steps 2-3 until your grade is A.
5. ???
6. Profit!

The important part is to read and understand as much as possible about every single configuration setting you are putting in your files. Good luck!
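For reference, the nginx part of the generator's output looks roughly like the snippet below; cipher suites and protocol lists change over time, so treat these values as illustrative rather than current recommendations:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;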
I am trying to find a solution for this error (the full error message is [crit] 556#0: *1940 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: xx.xx.xx.xx, server: 0.0.0.0:443). I have read multiple similar questions (like this one or this one), but they all treat the problem with browsers, and my clients are mobile apps. Also, they all talk about having/not having TLS_FALLBACK_SCSV enabled in openssl. After finding this tutorial about strong SSL security on nginx, I am even more baffled. In the tutorial it says that OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher. On my system (Ubuntu trusty) I get:

# openssl version
OpenSSL 1.0.1f 6 Jan 2014

I have tried upgrading to a newer version, but this seems to be the latest version for Ubuntu trusty. It is worth mentioning that these errors started occurring quite recently, and without apparent reason. The errors don't always show up (the behaviour is quite random, actually). However, this is worrying, since their frequency is getting higher and higher and important data is being lost because of these unsuccessful requests. Any help would be greatly appreciated. Thanks.
nginx SSL handshake fails on requests from mobile devices with "SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback"
There is now a log message when the server has exceeded max_fails. It has been added in 1.9.1. Log level is warning, the message says "upstream server temporarily disabled".
Where in the Nginx logs will it say that a server is unavailable because it failed x times in y seconds? I have a set of servers in an upstream block in nginx; each one has a fail_timeout and max_fails value set like so:

upstream loadbalancer {
    server ip1:80 max_fails=3 fail_timeout=60s;
    server ip2:80 max_fails=3 fail_timeout=60s;
}

If I intentionally bring down one of these servers (let's say ip1:80), NGINX gets back a 503, which I have marked as an invalid header. So I make sure NGINX hits that server three times in sixty seconds. I expect there to be something in the logs saying that the server is being marked as unavailable, i.e. that the fail_timeout has kicked in, but I can't find anything. Here is my logging config:

access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
How can I see unavailable servers in Nginx logs?
No problem, you just need a certificate for the user-facing host. As a side note, unless circumstances justify it, it is generally ill-advised to forward anything to a publicly available port and host. So, unless there is a reason not to do so, you should firewall port 81 on vyno.mx to accept connections only from the app.vno.mx server. If they are the same server, that's it, or perhaps using 127.0.0.1 is even better. If they are distant, however, you might wish to encrypt the internal connection as well; you can do that with a snakeoil (self-signed) certificate.
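If the backend also runs nginx, the restriction can even be expressed without a separate firewall; a sketch for the vyno.mx:81 server, where the allowed address is a placeholder that must be replaced with app.vno.mx's real IP (or dropped entirely in favour of 127.0.0.1 if both run on one machine):

server {
    listen 81;
    allow 127.0.0.1;       # same-machine proxy
    allow 203.0.113.10;    # placeholder for app.vno.mx's address
    deny all;              # everyone else gets 403
    root /var/www/vnoapp;  # assumed document root of the backend
}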
So I'm starting to learn about nginx and reverse proxies, and I have a question about SSL. The thing is, I have a reverse proxy server like this:

upstream vnoApp {
    server vyno.mx:81;
}

server {
    listen 80;
    server_name app.vno.mx;

    location / {
        proxy_pass http://vnoApp/;
        proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
        proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        proxy_http_version 1.1; # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

What this does, as you might expect, is listen on http://app.vno.mx and reverse-proxy it to http://vyno.mx:81, and everything works just fine. But now I want to add SSL support for the site, and my question is whether I have to add an SSL certificate to both vyno.mx and app.vno.mx (wildcard *.vno.mx), or whether it will work fine if I just add it to app.vno.mx. Thanks to all in advance!
If I'm using a reverse proxy on Nginx do I need an SSL certificate for the reverse proxy and the server?
Nginx is faster and lighter, but many people find it easier to work with Apache because of .htaccess support (Nginx does not have an analog, due to performance concerns). The typical scheme is the following: you bind Nginx on port 80, configure it to serve static files (jpg, png, js, css, ttf, etc.), and make it proxy to Apache on, say, port 8080 for non-static resources. Apache in turn has the abovementioned .htaccess support, which allows you to apply rewrite rules and other stuff without a webserver reload.
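Expressed as nginx configuration, that scheme is just two location blocks; the paths and the Apache port below are assumptions for illustration:

server {
    listen 80;
    root /var/www/html;  # assumed shared document root

    # static files answered directly by nginx
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf)$ {
        expires 30d;
    }

    # everything else handed to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;  # Apache's assumed port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}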
I have installed a popular control panel service called VestaCP (https://vestacp.com/) on my remote Linux server. By default it installed both Apache and nginx, but despite my best efforts I still can't work out why I need both. I'm familiar with Apache and how to configure it, but I've never used nginx before. It appears to be a faster, slimmer Apache. Why would you want both? Why not opt for a single one? In the VestaCP settings, it appears I can activate/deactivate nginx (Proxy Support NGINX) for a website, but I can't use nginx on its own without Apache. I've found I have an Apache conf and an nginx conf that are very similar (differently written, but the logic is identical). I'm not sure, but it suggests only one is actually listened to, not sure which though. I'm confused. Help.
Apache and Nginx together, why?
You can try this rule:

location ~ ^/!/[a-zA-Z0-9]*[^a-zA-Z0-9].*$ {
    return 301 $scheme://$http_host;
}
I have the following regex in my vhost conf:

location ~* ^/!/[^a-zA-Z0-9] {
    return 301 $scheme://$http_host;
}

But it only appears to match the first character:

# Redirects to https://shouttag.com correctly
https://shouttag.com/!/!pink

# Does not redirect as expected
https://shouttag.com/!/p!nk

Variations I have tried:

# Assume that $ is unnecessary b/c I don't know what the end of the url may be
location ~* ^/!/[^a-zA-Z0-9]$ {

# Only seems to work when capturing data via group syntax ()
location ~* ^/!/[^a-zA-Z0-9]+ {

Thanks.
NGINX - Regex - Search entire location for non alphanumeric
All console.log output goes to stderr, which is redirected to the global Nginx error log: https://www.phusionpassenger.com/library/admin/nginx/log_file/
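So the place to look (and where to raise verbosity if needed) is the error_log directive in nginx.conf; the path below is the common Debian/Ubuntu default and may differ on your system:

error_log /var/log/nginx/error.log info;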
Currently using nginx + Passenger to serve an Express app in production. I can get the error.log and access.log from nginx, but how can I see the console.log output that is set in the code?
How can I see console.log output in a node express app when using nginx + passenger
I was trying to run unicorn so I could fork my app to multiple instances. I guess the issue here was that I had set passenger_enabled on while actually running unicorn on 3000. So instead I ran Passenger:

passenger start -a 127.0.0.1 -p 3000 -d -e production

and my nginx conf looks like this:

server {
    listen 80;
    server_name www.APPNAME.com;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/APPNAME/current/public;
    index index.html index.htm;

    # Static assets are served from the mentioned root directory
    location / {
        # root /var/www/APPNAME/current;
        # index index.html index.htm;
        proxy_pass http://127.0.0.1:3000;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        # proxy_set_header X-Real-Port $server_port;
        # proxy_set_header X-Real-Scheme $scheme;
        proxy_set_header X-NginX-Proxy true;
    }

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /usr/local/rvm/gems/ruby-2.1.3/wrappers/ruby;
}

and everything works now!
Update:Currently i visit my app at domain.com:3000, but i would like to visit domain.com to see my appI have setup nginx at 80 to proxy my rails app at 3000. below is the configurationupstream railsapp { server 127.0.0.1:3000; } server { listen 80; server_name APP; # Tell Nginx and Passenger where your app's 'public' directory is root /var/www/APP/current/public; index index.html index.htm; # Static assets are served from the mentioned root directory location / { root /var/www/APP/current; index index.html index.htm; proxy_pass http://railsapp/; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; # proxy_set_header X-Real-Port $server_port; # proxy_set_header X-Real-Scheme $scheme; proxy_set_header X-NginX-Proxy true; } # Turn on Passenger passenger_enabled on; passenger_ruby /usr/local/rvm/gems/ruby-2.1.3/wrappers/ruby; }i referred to :https://stackoverflow.com/a/5015178/1124639this is located at/etc/nginx/sites-enabled/APP.confand is included in /etc/nginx/nginx.conf as below withinhttp{...}include /etc/nginx/sites-enabled/*;but my APP.com still shows 'Welcome to nginx on Ubuntu!' and APP.com:3000 shows my app. What am i doing wrong?What i am using:Ubuntu 14.04 ec2 instancenginx 1.8.0unicorn server at 3000
How to configure nginx to proxy to a Rails app, so that I don't have to use domain.com:port
You can disable adding or modifying the "Expires" and "Cache-Control" response headers using the expires parameter:

expires off;

nginx docs
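For example, to switch the headers off only for one location instead of the whole server, a sketch like this:

location / {
    expires off;
    try_files $uri $uri/ =404;
}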
I am trying to serve a website with nginx. I have noticed that when I make changes to my /etc/nginx/sites-available/game and run sudo service nginx restart, they are not reflected when I try to pull the site up in the browser. The browser just hangs, waits for a response, and then times out. However, it works perfectly fine if I make a curl request to my site on the command line; I get the normal basic nginx HTML file. Why is that? Here is the config (and yes, I have made a soft link from sites-enabled/game to sites-available/game):

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name my.site.uw.edu;

    location / {
        try_files $uri $uri/ =404;
    }
}

Also, I am using Ubuntu 14.04. I don't think this version of Linux uses SELinux, but could this be some sort of security-configuration-related deal? I have had trouble in the past with SELinux when deploying on CentOS machines.
Nginx configuration not updating for browser
I had the same issue. My solution was to simply comment out the offending line.public function bind($caller) { $this->_callers[]=$caller; // $this->_callers=array_unique($this->_callers); // LINE 9 }You may also find Magmi is getting a "500 hphp_invoke" error on /magmi/web/magmi_run.php. To get around this I added an exception handler within the first if statement. My magmi_run.php file now reads...setLogger(new FileLogger($fname)); } else { $mmi_imp->setLogger(new EchoLogger()); } $mmi_imp->run($params); } catch (Exception $e) { die("ERROR"); } } else { die("RUNNING"); } ?>
I'm trying to run magmi product import plugin on a Magento app which is running on an aws ec2 instance that has NGINX & HHVM on it. When I try to run the the magmi product import app on Magento I get the below server error in myhhvm error log./var/log/hhvm/error.log\nCatchable fatal error: Object of class Magmi_ProductImportEngine could not be converted to string in /var/www/qa-hoi/magmi-importer/inc/magmi_mixin.php on line 9This is themagmi_mixin.phpfile_callers[]=$caller; $this->_callers=array_unique($this->_callers); // LINE 9 } public function unbind($caller) { $ks=array_keys($this->_callers,$caller); if(count($ks)>0) { foreach($ks as $k) { unset($this->_callers[$k]); } } } public function __call($data,$arg) { if(substr($data,0,8)=="_caller_") { $data=substr($data,8); } for($i=0;$i_callers);$i++) { if(method_exists($this->_callers[$i],$data)) { return call_user_func_array(array($this->_callers[$i],$data), $arg); } else { die("Invalid Method Call: $data - Not found in Caller"); } } } }Any idea how I should go about solving this? Should I update my php.ini file?What could be causing the fatal error. It is not occurring on my local machine which has Apache.UPDATEI installed HHVM on my local machine and ran the xdebug. It seems that the$callerobject in the magmi file contains several arrays that cannot be evaluated. See screenshot below:
hhvm nginx toString server error with Magento Magmi
The problem was quite simple, yet disguised. It was the missing Python package psycogreen. It was not mentioned as a dependency, and installing im_chat didn't require such a package. So if you were running Odoo with --workers=0, then installed im_chat and later switched to, for example, --workers=2, Odoo would not throw any error and the longpolling port would never be opened. Installing this solved it:

pip install psycogreen==1.0
There seems to be a problem with Debian distributions (tested on both Wheezy and Squeeze) using Odoo's longpolling port: the longpolling port is never used. It is supposed to be used when the workers parameter is set greater than 0, but it is not used anyway. Testing the same thing on Ubuntu, the longpolling port is used normally. There is an original question (last comment of the issue): https://github.com/odoo/odoo/issues/3793. Checking the nginx log I see this (every time it tries to access longpolling through the reverse proxy):

2015/05/08 07:54:09 [error] 32494#0: *8 connect() failed (111: Connection refused) while connecting to upstream, client: IP address, server: _, request: "POST /longpolling/poll HTTP/1.1", upstream: "http://127.0.0.1:8072/longpolling/poll", host: "db.host.eu", referrer: "http://db.host.eu/web"

And when I try to connect to port 8072 via telnet:

$ telnet 127.0.0.1 8072
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

But trying, for example:

$ telnet 127.0.0.1 8069
Trying 127.0.0.1...
Connected to 127.0.0.1.

So it seems that port 8072 is not used? In my Odoo config it is set like this:

longpolling_port = 8072
xmlrpc_port = 8069
workers = 2
xmlrpc_interface = 127.0.0.1
netrpc_interface = 127.0.0.1
Odoo (on Debian) - longpolling port is never used/opened
Regex locations have precedence over prefixed location blocks in nginx request processing. Hereinafter are relevant excerpts of nginx's location directive documentation. I strongly encourage you to read them carefully, as many people don't and miss the basics.

A few examples first, to understand the keywords:

prefixed location: location /toto { [...] }
regular expression location: location ~ /toto { [...] }

[...] To find the location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations). Among them, the location with the longest matching prefix is selected and remembered. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used. If no match with a regular expression is found, then the configuration of the prefix location remembered earlier is used. [...] If the longest matching prefix location has the "^~" modifier, then regular expressions are not checked. [...] Also, using the "=" modifier it is possible to define an exact match of URI and location. If an exact match is found, the search terminates. [...]

A few other examples to illustrate the two operators that modify location lookup order:

location ^~ /toto { [...] }: prefixed location with higher priority than regex locations
location = /toto { [...] }: exact prefixed location (exact match, highest priority)

To sum things up, the priority list during location election for an incoming request URI is:

1. location = /toto
2. location ^~ /toto
3. location ~ /toto
4. location /toto

So the cleaner way to solve your issue is using:

location ^~ /app {
    alias /path/to/app/public/;
    try_files $uri $uri @app;
}
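As a self-contained illustration of that election order, the toy blocks below answer a request for /toto from block 1, and from the next block down each time the higher-priority blocks are removed (the return bodies are just labels for the demo):

location = /toto  { return 200 "1: exact match, search stops here\n"; }
location ^~ /toto { return 200 "2: prefix that suppresses regex checks\n"; }
location ~ /toto  { return 200 "3: first matching regex in file order\n"; }
location /toto    { return 200 "4: plain prefix, used only when no regex matches\n"; }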
upstream app { server localhost:3000; } server { ... # If I comment this location out, images are displayed on the website location ~* \.(?:jpg|jpeg|png|gif|swf|xml|txt|css|js)$ { expires 6004800; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } ... location /app { alias /path/to/app/public/; try_files $uri $uri @app; } location @app { rewrite /app(.*) $1 break; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $proxy_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://app; proxy_redirect http://app/ /app/; } ... }I'm struggling with this for some time. I have an express app in a sub folder under nginx. Above is the code in my nginx file in /sites-available/. When I remove the location for the static files, the images and css of the app are proxied, but if the static files cache is in the nginx file then the images and css files of the express app are not displayed on the website.Could someone help, please?
Express js app with nginx - a conflict with static files when serving a subfolder
Don't listen on port 80 for these domains. Return nginx's special HTTP code 444 in a default vhost:

server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    server_name ~^(?<subvar>sub1|sub2|sub3)\.example\.com$;
    return 301 https://$subvar.example.com$request_uri;
}
I have a few domains/subdomains, and I have a server block to properly redirect them to port 443. But what I'm also trying to do is, for a couple of those subdomains, not have them connect at all on port 80. Below is an example of the values I'm redirecting to port 443:

server {
    listen 80;
    server_name ~^(?<subvar>sub1|sub2|sub3|sub3)\.example\.com$;
    return 301 https://$subvar.example.com$request_uri;
}

I also have sub4.example.com, which I don't want to connect at all on port 80, but when I try to access it, I get the nginx 404 Not Found message. What I want to achieve is a "server not found" sort of message. Let me know if you'd like more information, or if I'm missing anything.
Reject certain subdomains on port 80 on nginx
CloudFlare allows you to enable specific page rules, one of which is to force SSL (by doing a hard redirect). This is a great thing to use in addition to django-sslify or django-secure. In addition to setting up your SSL redirect, you also need to tell Django to handle secure requests. Luckily, Django provides a decent guide for doing this, but there are a few things it doesn't mention that I've had to do with nginx.

In your Django settings, you need to tell Django how to detect a secure request:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

In your nginx configuration you need to set up the X-Forwarded-Protocol header (the X-Forwarded-For/X-Scheme headers are also useful):

proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

You also need to proxy the Host header down, so Django is able to read the correct host and port, which is used in generating absolute urls and CSRF, among other things:

proxy_set_header Host $http_host;

Note that I used the $http_host variable instead of $host or $host:$server_port. This will ensure that Django will still respect CSRF requests on non-standard ports, while still giving you the correct absolute urls. As with most things related to nginx and gunicorn, YMMV, and it gets easier after you do it a few times.
Intro: Cloudflare is providing SSL for free now, and I would be a fool to not take advantage of this on my site, and a downright dickhead to break everything in the process of trying to. I can code apps just fine, but when it comes to setting up or configuring https/nginx/gunicorn/etc/idon'tknowtheterminology, I know barely enough to follow Googled instructions.

Question: I would like to use django-sslify to force https on my Django web app. How may I achieve this without upsetting the balance in my life, given the following known facts?

Known facts:

- I'm using Django 1.7, running on a DigitalOcean server hooked up to a (free) Cloudflare DNS. Django is fitted (served?) with nginx and gunicorn. Basically followed this guide to get it all set up.
- Accessing my website currently defaults to a regular http://example.com header.
- Manually accessing https://example.com works with the green lock and all, but this breaks all form submissions with the error "(403) CSRF verification failed. Request aborted.".
- In my Cloudflare site settings, the domain is currently configured to "Flexible SSL".
- Trying to use django-sslify with my existing setup totally breaks everything, and the browser is unable to return a response.
- This info nugget tells me that I should use the "Full SSL" configuration setting when using django-sslify with Cloudflare's SSL.
- Cause for hesitation found here, where it is mentioned that a $20/mo Pro Cloudflare account is needed to handle SSL termination. So I really don't want to screw this up :/
- There was only 1 mention of "http" or "https" anywhere in my nginx and gunicorn configuration, specifically in my nginx config: location / { proxy_pass http://127.0.0.1:8001; ... }

Ok, I think that's all I have. Also, my server is providing a Django Rest Framework API for a Phonegap app; does that need to be taken into consideration? If I need to provide additional information do let me know and I'll get back to you. Thank you for taking a look at this! :)
How to use django-sslify to force https on my Django+nginx+gunicorn web app, and rely on Cloudflare's new free SSL?
Somewhere near uwsgi_pass, for example:

location / {
    uwsgi_pass unix:///tmp/uwsgi.sock;
    include uwsgi_params;
    uwsgi_param UWSGI_SCRIPT webapp;
    uwsgi_param UWSGI_CHDIR /usr/local/www/app1;
}
I want to be able to access environment variables (for passwords and such) in a Flask app. I'm running nginx and uWSGI. Where is the correct place to set them so they're available? Should I just add a uwsgi_param PARAM_NAME 'param_value'; line to the config for the site (in /etc/nginx/sites-enabled/mysite)?
Set Environment Variables - nginx + uWSGI
You are rewriting to index.php whenever the file doesn't exist, so the request never makes it to your try_files or error_page:

if (!-e $request_filename) {
    rewrite ^.*$ /index.php last;
}

That block should be removed, unless you have a specific purpose for it.
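With the rewrite block removed, the try_files line already present in the configuration does what was intended:

location / {
    index index.php;
    try_files $uri $uri/ =404;   # missing files now actually reach error_page 404
}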
My Nginx server is not displaying my 404 page. Instead, whenever trying to access a non-existent page or directory, it merely serves my index(.php) in the root of my web folder (without the corresponding stylesheet).Here's my own 'default' file under /etc/nginx/sites-available:server { listen 80; listen [::]:80 ipv6only=on; listen 443 ssl; listen [::]:443 ipv6only=on ssl; add_header Strict-Transport-Security max-age=15768000; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_protocols TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4'; ssl_prefer_server_ciphers on; root /usr/share/nginx/html; index index.html index.htm; location / { index index.php; } if (!-e $request_filename) { rewrite ^.*$ /index.php last; } try_files $uri $uri/ =404; error_page 403 404 405 /error/404.html; error_page 500 501 502 503 504 /error/50x.html; location ^~ /error/ { internal; root /usr/share/nginx/html; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } }This is undoubtedly down to me being new to this and fiddling with it too much, so there are likely other problems here too (on that note, a tip as to how - once the rest is fixed - I can force HTTPS connections would be swell!). Help and constructive input appreciated, thanks!
Nginx not displaying 404 page, instead serving index file in root
I was able to solve the above problem using the Nginx ngx_http_userid_module. The hardest part was actually finding the module; implementing the solution was quite trivial. I used their example configuration:

userid on;
userid_name uid;
userid_domain example.com;
userid_path /;
userid_expires 365d;
userid_p3p 'policyref="/w3c/p3p.xml", CP="CUR ADM OUR NOR STA NID"';

And then added the userid to my fastCGI cache key:

fastcgi_cache_key "$scheme$request_method$host$request_uri$uid_got";

Hopefully this answer helps someone discover this useful module quicker than I did.
I've implemented FastCGI caching on our site, and have seen great speed improvements. However, the FastCGI cache key does not seem to be unique enough. If I log in, my name appears in the header; however, the next person to log in still sees my name in the header, assuming the cache is still valid. Is there a way to make the cache key unique on a per-user basis, ideally using a unique identifier from the user's cookies or a PHP session? I tried implementing the answer below, but Nginx failed to restart: Log in value from Set-Cookie header in nginx. Note my cache key looks like this:

fastcgi_cache_key "$scheme$request_method$host$request_uri";

Update: My thought is that if I can parse the HTTP headers sent to Nginx, then I can grab the PHP session ID and use that. However, I cannot find an example of how to do this anywhere. Right now I have something like this, which doesn't work:

if ($http_cookie ~* "PHPSESSID=([0-9a-z]+)") {
    set $ses_id $1;
}
PHP 5.5 FastCGI Caching
Is it advisable to block these requests? If your application cannot serve anything meaningful without the host, then it's IMO advisable. Furthermore, I couldn't find anything in HTTP 1.1 which says applications have to be backward compatible.

What's the best way to block them? Answer them with 505 HTTP Version Not Supported.
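In nginx this can be expressed with the built-in $server_protocol variable; a minimal sketch at server level:

if ($server_protocol = "HTTP/1.0") {
    return 505;   # HTTP Version Not Supported
}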
I use $_SERVER['HTTP_HOST'] for absolute url paths in my website. But, often, I find nginx error logs of HTTP/1.0 requests with undefined HTTP_HOST. Is it advisable to block these requests? What's the best way to block them?
Should I block HTTP 1.0 request? [closed]
WSGI containers expect a callable/function to run; they do not execute your 'main' entry. With run:Eve you are asking uWSGI to execute (at every request) the "Eve" function in the "run" module, which is obviously wrong. Move app = Eve(auth=globalauth.TokenAuth) out of the __main__ check and tell uWSGI to use the 'app' callable in the "run" module with module = run:app
It is now time to move my Python Eve REST API into a production environment. There are several ways to do this, and the most common requirements are:

- Error logging
- Automatic respawn
- Multiple processes (if possible)

The best solution I found is to have an nginx server as the frontend server, with Python Eve running on the uWSGI middleware. The problem: I have a custom __main__ which is not called by uwsgi. Does anyone have this configuration running, or another proposal? As soon as it works, I will share a running configuration. Thank you.

Solution (Update): Based on the proposal below, I moved the Eve() call into __init__.py and run the app with a separate wsgi.py.

Folder structure:

webservice/
    __init__.py
    webservice/modules/...
    settings.py
    wsgi.py

Where __init__.py contains:

app = Eve(auth=globalauth.TokenAuth)
Bootstrap(app)
app.config['X_DOMAINS'] = '*'
...

and wsgi.py contains:

from webservice import app

if __name__ == "__main__":
    app.run()

wsgi.ini:

[uwsgi]
chdir=/var/www/api/prod
module=wsgi:app
socket=/tmp/api.sock
processes=1
master=True
pidfile=/tmp/api.v1.pid
max-requests=5000
daemonize=/var/www/api/logs/prod.api.log
logto=/var/www/api/logs/uwsgi.log

nginx.conf:

location = /v1 {
    rewrite ^ /v1/;
}
location /v1 {
    try_files $uri @apiWSGIv1;
}
location @apiWSGIv1 {
    include uwsgi_params;
    uwsgi_modifier1 30;
    uwsgi_pass unix:/tmp/digdisapi.sock;
}

start command:

uwsgi --ini uwsgi.ini
Running Python Eve Rest API in Production
location ~* ^/static/.+\.(png|whatever-else)$ {
    alias /var/www/some_static;
    expires 24h;
}

location / {
    # regular rules
}

Hand-written, may contain mistakes. If you want to extend the rules to match anything/something/static/*.png, just remove the ^ in the pattern.
Is there any way of serving static files by only some URL path? For example, the URL pattern http://host/static/*.png has a /static/ substring (path), and Nginx should serve any statics from there. In the web server documentation I found an example:

location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|js)$ { ...

and defined my Nginx config like this:

location / {
    try_files $uri $uri/ /index.html;
}

location /apib {
    #some proxy_pass
}

location /apim {
    #some proxy_pass
}

location /api {
    #some proxy_pass
}

I am trying to add an additional location for */static/*.* with root dir /var/www/some_statics.
Nginx: serving static files by URL path
Try changing default_type application/octet-stream; to default_type text/html;. Maybe your PHP script does not set a content MIME type itself, so the one sent comes from nginx's default.
I'm running on Windows 7 (64-bit), with PHP 5.4.12 and Nginx 1.5.8. I have read many tutorials on setting this up and troubleshooting this issue, which is that when requesting a PHP file from my localhost, it downloads it as a file instead of displaying the PHP page. Below is my nginx.conf file:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    server {
        listen 8081;
        server_name localhost;
        access_log C:/nginx/logs/access.log;
        error_log C:/nginx/logs/error.log;
        root C:/nginx/html;
        fastcgi_param REDIRECT_STATUS 200;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}

I'm running nginx.exe manually through the command prompt. I've also tried starting php-cgi.exe manually first at a separate command prompt, like so:

C:\php5.4.12\php-cgi.exe -b 127.0.0.1:9000

The PHP file I'm requesting is within C:/nginx/html, and I'm requesting it as http://localhost:8081/info.php, and it downloads it. The contents of this PHP file are <?php phpinfo(); ?>. How can I possibly get my PHP scripts to run in this environment? Anyone have experience with this?
Nginx and FastCGI downloads PHP files instead of processing them
Use $server_port instead of the $proxy_port part in your configuration. Change this line:

proxy_set_header Host $host:$proxy_port;

to:

proxy_set_header Host $host:$server_port;

Catalina's HttpServletResponse.sendRedirect implementation uses the getServerPort method to build an absolute redirect URL (the Location header value). getServerPort returns the part after the ":" from the request's Host header value, which has to be 8085 in your case.
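Applied to the proxy.conf from the question, only the one header line changes:

proxy_redirect off;
proxy_set_header Host $host:$server_port;   # was $host:$proxy_port
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;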
I am using tomcat and nginx together to serve my web application. nginx listens to port 8085 and forwards requests to tomcat which is running on port 8084.If I do a redirect like the following:@RequestMapping("/test") public String test() { return "redirect:/"; }the page gets redirected to port 8084 (Tomcat port) instead of nginx port(8085).How can i redirect to the desired port?Edit: My nginx configuration is similar to this:server { listen 8085; server_name www.mydomain.com; location /{ proxy_pass http://127.0.0.1:8084; include /etc/nginx/conf.d/proxy.conf; } }and contents of /etc/nginx/conf.d/proxy.conf:proxy_redirect off; proxy_set_header Host $host:$proxy_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 8m; client_body_buffer_size 256k; proxy_connect_timeout 120; proxy_send_timeout 120; proxy_read_timeout 120; proxy_buffer_size 4k; proxy_buffers 32 256k; proxy_busy_buffers_size 512k; proxy_temp_file_write_size 256k;
Spring MVC “redirect:/” prefix redirects with port number included
I would suggest avoiding if and working with different locations, making use of the precedence between the pattern-matching methods used (docs):

#blank url
location = / {
    return 302 http://subdomain.domain.tld/index.php;
}

#just /index.php
location = /index.php {
    include common_settings;
}

#anything starting with /img/
location ^~ /img/ {
    include common_settings;
}

#anything starting with /include/
location ^~ /include/ {
    include common_settings;
}

#everything else
location / {
    return 302 http://subdomain.domain.tld/index.php?url=$uri_without_slash;
}

And in a separate config file called common_settings:

include php.conf;
root /srv/http/somefolder/someotherfolder/;
index index.php;

EDIT: Added removal of the first slash in the url. In your conf, outside any server directive:

map $request_uri $uri_without_slash {
    ~^/(?P<trailing_uri>.*)$ $trailing_uri;
}
After about 2 hours of googling and trying various things out, I turn to you for help.Task: Rewrite the blank url to something, and everything else to something different in nginx.So, if I navigate to subdomain.somedomain.tld, I want to get served the index.php, and if I go to subdomain.somedomain.tld/BlAaA, I get redirected to index.php?url=BlAaA. Exceptions are files under /img, /include, and the index.php themselves. They don't get rewritten.The second part works already, as does the whitelist, but I can't figure out or find something to accomplish the whole idea.The working part:server { listen 80; server_name subdomain.domain.tld; location / { include php.conf; root /srv/http/somefolder/someotherfolder/; if ( $uri !~ ^/(index\.php|include|img) ){ rewrite /(.*) /index.php?url=$1 last; } index index.php; } }The answer provided by @pablo-b almost solved my problem. Only two problems persist with this approach: 1: PHP-FPM now needs to have the extensions of the files under /include/ (e.g. style.css, background.jpg) set in /etc/php/php-fpm.conf under security.limit_extensions. My original php.conf worked along the lines oflocation ~ \.php { #DO STUFF }which nginx doesn't like, since it kinda overwrites the location /index.php part from your suggestion. I can work around that, though, given enough time.2: $request_uri yields "/whatever", not "whatever" as value to my url= parameter. I can parse the "/" out in my php code, for sure, but my original solution didn't add the leading "/". Any elegant way to solve this?
nginx rewrite: everything but empty base url
Originally I said this should work well for low-traffic sites, but upon further thought, no, this is a bad idea. Each time you launch a Docker container, it adds a read-write layer to the image. Even if there is very little data written, the layer exists, and each request will generate one. When a single user visits a website, rendering the page will generate tens to thousands of requests, for CSS, for javascript, for each image, for fonts, for AJAX, and each of these would create those read-write layers. Right now there is no automatic cleanup of the read-write layers; they persist even after the Docker container has exited. By default, nothing is lost. So, even for a single low-traffic site, you would find your disk use growing steadily over time. You could add your own automated cleanup. Then there is the second problem: anything uploaded to the website would not be available to any other requests unless it was written to some out-of-container shared storage. That's pretty easy to do with S3 or a separate and persistent database service, but it does start showing the weakness in the "one new Docker container per request" approach. If you're going to have some persistent services, why not make the Docker containers more persistent and run them longer?
Is it recommended to launch a docker instance per request? I have either lighttpd or Nginx running on my web server as a reverse proxy. I support a number of subdomains with very low usage. When a request for a subdomain arrives, I want to start the docker instance. Preferably I'd like to launch them dynamically, so that if more than one user arrives I would launch one per user... and/or a shared instance (determined by configuration).
launching ad hoc docker instances: Is it recommended to launch a docker instance per request?
All you need to know:

https://stackoverflow.com/a/10460399/814470
https://stackoverflow.com/a/17839750/814470

Two answers from a duplicate question.
I run Flask on a server with uWSGI.

uWSGI config:

    /tmp/flask.sock
    /home/reweb/flask/
    publicist:app
    python27
    reweb
    /home/reweb/reload

nginx config:

    upstream flask_serv {
        server unix:/tmp/flask.sock;
    }

    server {
        listen 80;
        server_name some-domain.com;
        access_log /home/reweb/log/nginx-access.log;
        error_log /home/reweb/log/nginx-error.log;
        location / {
            uwsgi_pass flask_serv;
            include uwsgi_params;
        }
    }

But instead of the debugger page, nginx shows me a 502 error. All the Flask error tracebacks I can see in the uwsgi error log.

UPDATE: I found the old question nginx + uwsgi + flask - disabling custom error pages, but there is no answer there.
Flask + uwsgi + nginx + debug. 502 error instead of debugger page
If you are already using nginx, you should serve media and static files with nginx; there is no reason to serve them through uWSGI and Flask, that's too much overhead.

    # in case you have the structure /path/to/your/media_dir/media
    location /media {
        root /path/to/your/media_dir;
    }

    # in case you have the structure /path/to/your/media_dir
    location /media {
        alias /path/to/your/media_dir;
    }
I have a small Flask application destined for home network use. At the moment I have Flask running with uWSGI and nginx. The app basically scans a location and serves media files. Below is the code for rendering these files:

    @app.route('/get_media/<filename>', methods=['GET'])
    def get_media(filename):
        return send_from_directory('/media/', filename)

The send_from_directory works fine, but seems to be slow. I must admit I don't really understand the process behind it. Is python serving these files or is nginx? I want nginx to be able to serve these files, but I'm unsure how to configure the alias, as I want the app to walk the directories of the files but would like to hand off the serving to nginx.
Flask send_from_directory for media files
Finally I managed this by creating different PHP-FPM pools, each running with the same permissions as the corresponding user. This way I can keep different users separated from each other. And as a bonus, deploy.rb is simplified.
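As a minimal sketch of that setup (the pool directory and socket path vary by distro and are assumptions here), a per-user pool file would look something like:

    ; /etc/php/fpm/pool.d/didongo.conf
    [didongo]
    ; run this pool's workers as the deploy user, so files it writes
    ; (logs, cache) stay owned by that user
    user = didongo
    group = didongo
    listen = /var/run/php-fpm-didongo.sock
    ; let the nginx worker connect to the socket
    listen.owner = www-data
    listen.group = www-data
    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3

nginx would then point fastcgi_pass at unix:/var/run/php-fpm-didongo.sock for that user's vhost.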
I have a user didongo (user & group didongo), and the nginx server (user & group www-data). I've set up Capifony to log in as the didongo user: the first time I deploy, the setfacl command works ok (while the logs folder is empty). But after the web application, served by nginx, has generated some logs (prod.log), the very next deploy fails with a setfacl error.

I'm sure I'm making a noob error with the permissions between the user and the web server, but I don't see what error. I see that didongo should not be able to change permissions of a file he hasn't permissions to. But then, how am I supposed to configure the server or Capifony? Thanks!

Relevant (I hope) Capifony deploy.rb config:

    set :user, "didongo"
    set :webserver_user, "www-data"
    set :permission_method, :acl
    set :use_set_permissions, true
    set :shared_children, [app_path + "/logs", web_path + "/uploads", "vendor"]
    set :writable_dirs, ["app/cache", "app/logs"]

This is the Capifony error:

    $ setfacl -R -m u:didongo:rwx -m u:www-data:rwx /home/didongo/staging/shared/app/logs
    setfacl: /home/didongo/staging/shared/app/logs/prod.log: Operation not permitted

Some data on the ACLs:

    $ getfacl app/logs
    # file: logs
    # owner: didongo
    # group: didongo
    user::rwx
    user:www-data:rwx
    user:didongo:rwx
    group::rwx
    mask::rwx
    other::r-x
    default:user::rwx
    default:user:www-data:rwx
    default:user:didongo:rwx
    default:group::rwx
    default:mask::rwx
    default:other::r-x

    # file: logs/prod.log
    # owner: www-data
    # group: www-data
    user::rw-
    user:www-data:rwx    #effective:rw-
    user:didongo:rwx     #effective:rw-
    group::rwx           #effective:rw-
    mask::rw-
    other::r--
Capifony setfacl permissions: "Operation not permitted"
Use the X-Accel-Redirect header in combination with a special nginx location to have nginx proxy the remote file. Here is the location to add to your nginx configuration:

    # Proxy download
    location ~* ^/internal_redirect/(.*?)/(.*) {
        # Do not allow people to mess with this location directly
        # Only internal redirects are allowed
        internal;

        # Location-specific logging
        access_log logs/internal_redirect.access.log main;
        error_log logs/internal_redirect.error.log warn;

        # Extract download url from the request
        set $download_uri $2;
        set $download_host $1;

        # Compose download url
        set $download_url http://$download_host/$download_uri;

        # Set download request headers
        proxy_set_header Host $download_host;
        proxy_set_header Authorization '';

        # The next two lines could be used if your storage
        # backend does not support Content-Disposition
        # headers, which specify the file name browsers use
        # when saving content to disk
        proxy_hide_header Content-Disposition;
        add_header Content-Disposition 'attachment; filename="$args"';

        # Do not touch local disks when proxying
        # content to clients
        proxy_max_temp_file_size 0;

        # Download the file and send it to the client
        proxy_pass $download_url;
    }

Now you just have to set the X-Accel-Redirect header in your responses to nginx:

    # This header will ask nginx to download a file
    # from http://some.site.com/secret/url.ext and send it to the user
    X-Accel-Redirect: /internal_redirect/some.site.com/secret/url.ext

    # This header will ask nginx to download a file
    # from http://blah.com/secret/url and send it to the user as cool.pdf
    X-Accel-Redirect: /internal_redirect/blah.com/secret/url?cool.pdf

The full solution was found here. I suggest reading it before implementing.
Our site is an image repository of sorts. Each image has the notion of an external URL and an internal URL. External URLs are seen by clients and they change as we experiment with SEO. The internal URLs are permanent URLs that point to our image hosting service. We use our Ruby on Rails app to provide the URL translation. Here's an example of a request:

    --------           -----     -------     -------           ------------
    |      | --eURL--> |   | --> |     | --> |     | --iURL--> |          |
    |client|           |CDN|     |Nginx|     | RoR |           |Image Host|
    |      | <-------- |   | <-- |     | <-- |     | <-IMG---  |          |
    --------           -----     -------     -------           ------------

The architecture is working, but streaming the image through RoR is inefficient. I want to have Nginx do the proxying. That's what it's for. The proposed architecture would look something like this:

    --------           -----     -------          -------
    |      | --eURL--> |   | --> |     | ------>  | RoR |
    |client|           |CDN|     |Nginx| <-????-  |     |
    |      | <-------- |   | <-- |     |          -------
    --------           -----     |     |
                                 |     | -iURL-> ------------
                                 |     | <-IMG-- |Image Host|
                                 -------         ------------

What response can I send to Nginx to have it proxy the data? I don't mind adding Nginx modules to my infrastructure, and of course I'm open to changing my nginx.conf.

X-Sendfile is the closest thing I've found, but that only allows streaming from the local filesystem. Maybe there is some other obscure HTTP response header or status code I'm unaware of.
Nginx proxy redirect to another URI
As the other answer states, formidable is a very solid library for handling uploads. By default it buffers to disk, but you can override that behavior and handle the data as it comes in, if you need to. So if you want uploads to stream as they arrive, node.js + formidable would be a great way to write your own proxy.

You could also try node-http-proxy, but I'm not sure how it buffers, unfortunately. You should also consider that it hasn't been used anywhere near as much as Nginx, so I'm not sure how much I'd trust it exposed directly to the wild (not so much an issue with the library per se, but more with Node).

Have you taken a look at Nginx's client_body_buffer_size directive? It seems like setting it to a lower value would solve your memory issues.
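For reference, a sketch of what that last suggestion might look like (the sizes and temp path are assumptions, not tested against 200GB uploads): with a small client_body_buffer_size, request bodies larger than the buffer are spooled to a temp file on disk rather than held in memory.

    http {
        # allow huge genomics uploads through at all
        client_max_body_size    250g;
        # keep only a small slice of the body in RAM;
        # the rest spills to the temp path below
        client_body_buffer_size 16k;
        client_body_temp_path   /var/spool/nginx/client_temp;
    }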
I've already found Event loop for large files?, but it's mostly about downloads. The conclusion I take from that post is that node.js might be adequate for downloads, but Nginx is a battle-hardened solution that "ain't broke."

But what about uploads? We have enormous files being uploaded. We do genomics, and human genome datasets are as much as 200GB in size. As far as I've been able to determine, Nginx always buffers the complete request, header and body, before forwarding it to a back-end. We've run out of memory handling three uploads at the same time.

We have a swarm of small, "does one thing and does it well" servers running in our application, one of which handles the uploads (and type transformations to an in-house format) of the genomic data, and another of which provides socket.io handling to keep customers apprised of both upload progress and other events going on in our application's ecology. Others handle authentication, customer data processing, and plain ol' media service.

If I'm reading the code for node's http/https modules right, node.js would be an ideal tool for handling these issues: it speaks HTTP/1.1 natively, so the websockets passthrough would work, and it hands the (request, response) tuple to the handler function after processing the HTTP HEAD, but holds off on the BODY until the handler function binds request.on('data', ...) events to drain the BODY buffer.

We have a well-segmented, url-based namespace for our services: "/import", "/events", "/users", "/api", "/media", etc. Nginx only handles the last three correctly. Would it be difficult or inappropriate to replace Nginx with a node.js application to handle all of them? Or is there some obscure reverse proxy (Nginx, Pound, and Varnish all have similar limitations) that already does everything I want?
Replacing Nginx with node.js for the import of large files?
    location /app_a/ {
        # strip the /app_a/ prefix before handing the request upstream,
        # so the app sees / instead of /app_a/
        rewrite /app_a/(.*) /$1 break;
        proxy_set_header Host $http_host;
        proxy_pass http://app_a;
    }
I'm trying to route traffic across multiple upstream servers in nginx like so:

    upstream app_a {
        server unix:/tmp/app_a.sock fail_timeout=10;
        # For a TCP configuration:
        # server localhost:8000 fail_timeout=0;
    }

    server {
        #listen 80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default ipv6only=on; ## listen for ipv6

        index index.html index.htm;
        server_name localhost;
        root /home/ubuntu/app_a/www/staging/static;

        location ~ ^/app_a/(.*)$ {
            try_files $1 @proxy_to_app_a;
        }

        location @proxy_to_app_a {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_a;
        }
    }

Unfortunately the apps have no knowledge of full uris and expect to be sitting on root, which means I need to rewrite the uri when passing to the app. That is why I thought this might work:

    location ~ ^/app_a/(.*)$ {
        try_files $1 @proxy_to_app_a;
    }

The app works fine if the location is just / (because of the aforementioned root issue), but this regex-based solution doesn't seem to work. What do I need to do so the app gets / instead of /app_a in the url? Thanks
Multiple apps on nginx
I never got this working to my satisfaction with nginx. Depending on your specific needs, two solutions may be adequate:

1. If you can tolerate the stream being on a different port, pass it through using the port-forwarding feature of OpenWRT's built-in firewall.

2. Use the reverse-proxy capabilities of tinyproxy. The default package has those capabilities disabled by a build flag, so you need to be comfortable checking out and building it yourself. This method is definitely more fiddly, but it does work.

I'd still be interested to hear from anyone who gets this working with nginx.
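For the first option, a sketch of the OpenWRT firewall rule (all addresses and ports here are assumptions for illustration): in /etc/config/firewall, a redirect section forwards an alternate WAN port straight to the camera, bypassing nginx entirely.

    config redirect
        option name      'camera-mjpeg'
        option src       'wan'
        option src_dport '8081'
        option proto     'tcp'
        option dest      'lan'
        option dest_ip   '192.168.1.20'   # the IP camera
        option dest_port '80'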
I'm using nginx on OpenWRT to reverse-proxy a motion-jpeg feed from an IP camera, but I'm experiencing lag of up to 10-15 seconds, even at quite low frame sizes and rates. With the OpenWRT device removed from the path, the camera can be accessed with no lag at all.

Because of the length of the delay (and the fact that it grows with time), this looks like some kind of buffering/caching issue. I have already set proxy_buffering off, but is there something else I should be watching out for? Thanks.
How do I use nginx to reverse-proxy an IP camera's mjpeg stream?
So after a long back and forth with the Heroku support staff, we finally found the issue. I was using DataTables in various places around my site and using cookies to store user state settings. That cookie was getting longer and longer as a user navigated around the site, until my header surpassed the maximum header size allowed by nginx (8K).

The solution was to remove/simplify that cookie, or switch from the nginx-based Heroku stack (Bamboo) to a Heroku stack that doesn't use nginx (Cedar).
UPDATE: This bug appears to be specific to Chrome. I've clicked the same link about 50 times each in Firefox and IE and I can't seem to cause it. Also, once it is occurring, I can switch to FF or IE and it'll work fine in those two.

I have a particular page in my Rails 3 application on Heroku that loads fine for a while. I can click the same page and it loads without a problem. But after a certain number of loads, it suddenly starts to give me a 400 Bad Request error with nginx/0.7.67 below it.

After it occurs once, every time I load the page I get the 400 error. But if I leave the application alone for a while, overnight for example, the page works again in the morning for a short while. But if I click the page a few times, it begins giving me that error again.

It's not something that occurs locally, so it seems like it must be a Heroku issue. I also tried restarting Heroku but that doesn't help. The only thing that seems to help is giving it some time off.

The Heroku logs don't give me any new info as far as errors. Everything appears to be working fine and then I get a line that ends 727 | https | 400 and it just stops. I'm using https if that helps. The full error line from the Heroku log is:

    2011-07-02T15:25:59+00:00 heroku[nginx]: GET /matters/show/34 HTTP/1.1 | 10.212.125.194 | 727 | https | 400

Let me know what code from this page would help solve this problem, if you have an idea.
Sporadic 400 Bad Request Error nginx/0.7.67 with Heroku and Rails 3
Easy answer: nginx does not currently do HTTP/1.1 to upstreams, and thus definitely not websockets (nor does it have threads, but that's another story). A custom websockets proxy based on node.js is probably a good solution. You could also build something in Java; there are plenty of people building websockets services with it now.
I want to host MULTIPLE WEBSOCKET node servers (separate processes). There may be more than 1000 simultaneous connections. I also want to log and control each connection, and want to make it MEMORY efficient. Is it a good idea to write the reverse proxy in node.js? Is it worse in any way than Nginx, pure Erlang or Scala? Can Nginx even handle 1000+ websocket connections? Does 1 connection freeze 1 Nginx thread? Is it memory efficient?
Nginx vs Node.js - reverse proxy for multiple web-sockets servers
I would recommend taking a look at this project: http://github.com/STPeters/luafcgid

There are instructions there on how to use it with nginx.
I am currently trying to figure out ways to run Lua scripts using FastCGI with either lighttpd or Nginx. The only thing I was able to dig up yet was WSAPI of the Kepler project. But I wonder if there are other possibilities. Important for me is:

- should be as lightweight as possible
- should be stable enough to use in a production environment

Many thanks in advance.
Running Lua scripts using FastCGI
Here is a list of the nginx core directives: http://wiki.nginx.org/NginxHttpCoreModule

By skimming through these I can see there's probably more than one way to achieve this. I can't test it, but here's something that may work:

    error_page 403 /forbidden.html;

    location ^~ /upload/ {
        deny all;
    }

However, I would advise against this; your upload directory should never sit inside your web root. You should simply move the folder to achieve what you're asking.
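If the goal is only to stop /upload/ from being proxied as JSP (rather than to block it entirely), a sketch along these lines might fit better; note that the ^~ prefix match takes precedence over regex locations, and the root path here is an assumption:

    # serve uploads as plain static files; ^~ wins over regex locations
    location ^~ /upload/ {
        root /var/www/example;
    }

    # everything else ending in .jsp still goes to the servlet container
    location ~ \.jsp {
        proxy_pass http://127.0.0.1:86;
    }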
In my nginx conf file I use:

    location ~ \.jsp {
        proxy_pass http://127.0.0.1:86;
    }

to parse my jsp files. Now I want to exclude the directory /upload/. This directory is where users upload files, and it doesn't need JSP parsing (http://example.com/upload/).

How do I change my location ~ \.jsp block? I need *.jsp parsed but /upload/ and its subdirectories excluded. Thanks all :)
how to exclude a directory from nginx conf?
First of all: to secure the django admin a little bit, I always use a URL for the admin different from /admin/. A good idea would be to deploy the admin as a second application on another domain or subdomain.

You can limit the requests per minute to the whole webapp via IPTABLES/NETFILTER. A tutorial on how this is done can be found at debian administrator. That one is an example of how to secure the ssh port, but you can use the same technique for http.

You can also use Nginx's HttpLimitZone module to limit the number of simultaneous connections for the assigned session or, as a special case, from one IP address. Edit nginx.conf (from www.cyberciti.biz):

    ### Directive describes the zone, in which the session states are stored i.e. store in slimits. ###
    ### 1m can handle 32000 sessions with 32 bytes/session, set to 5m x 32000 sessions ###
    limit_zone slimits $binary_remote_addr 5m;

    ### Control maximum number of simultaneous connections for one session i.e. ###
    ### restricts the amount of connections from a single ip address ###
    limit_conn slimits 5;

The above limits remote clients to no more than 5 concurrently "open" connections per remote ip address.
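For the netfilter route, a sketch of the usual recent-module recipe (the port and thresholds are assumptions to adapt):

    # track new connections to port 80 per source address
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
    # drop a source that opens more than 30 new connections in 60 seconds
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
        -m recent --update --seconds 60 --hitcount 30 -j DROP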
I'm looking into the various methods of rate limiting the Django admin login to prevent dictionary attacks. One solution is explained here: http://simonwillison.net/2009/Jan/7/ratelimitcache/

However, I would prefer to do the rate limiting on the web server side, using Nginx. Nginx's limit_req module does just that: it allows you to specify the maximum number of requests per minute, and sends a 503 if the user goes over: http://wiki.nginx.org/NginxHttpLimitReqModule

Perfect! I thought I'd cracked it until I realised that Django admin's login page is not in a consistent place, e.g. /admin/blah/ gives you a login page at that URL, rather than bouncing to a standard login page. So I can't match on the URL. Can anyone think of another way to know that the admin page is being displayed (regexp the response HTML?)
Rate limiting Django admin login with Nginx to prevent dictionary attack
The nginx proxy at the office can be configured to pass the client's IP address using the proxy_set_header directive. The nginx reverse proxy docs show an example:

    location /some/path/ {
        proxy_set_header Host $host;
        # inject the original client address; PHP will then see it
        # as $_SERVER['HTTP_X_REAL_IP']
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8000;
    }

In the config block where the reverse proxy's proxy_pass directive is set, adding proxy_set_header X-Real-IP $remote_addr; tells the proxy to inject a new HTTP header, X-Real-IP, with the IP address of the client. In the Laravel app, you can use this to get the actual client IP. In your application, make sure only to use this header when it can be trusted[1].

This is necessary because the proxy takes the client's request and re-sends it, via the proxy server, from its own network address. The only way the application can get the original client IP for the request being proxied is if that IP address is passed in the headers by the proxy.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For#security_and_privacy_concerns
I am running NGINX as part of a Docker package. It is both a webserver and a reverse proxy, and the container has PHP bundled in with it. The front-end web application is built with Laravel. There are some instances where I want to get the client's IP address, and this seems a little problematic in some cases. This isn't for the proxy, but for the web application served with NGINX and PHP.

On my development system at home, I am getting an IP from the docker network when connecting with a browser from my local machine:

    $_SERVER['SERVER_ADDR'] = 172.19.0.10
    $_SERVER['REMOTE_ADDR'] = 172.19.0.1

On a Digital Ocean dev server it has:

    $_SERVER['SERVER_ADDR'] = 172.27.0.16
    $_SERVER['REMOTE_ADDR'] = 213.225.x.xx   (which is Austria, my IP)

And on a production server elsewhere I think it has:

    $_SERVER['SERVER_ADDR'] = ???
    $_SERVER['REMOTE_ADDR'] = the WAN IP for the office where the server is installed

That isn't the client IP, but the WAN address for the office itself. The server at that office is behind, I think, a WatchGuard or Fortigate router/firewall.

So, the Digital Ocean dev server is actually "OK": I have access to what I need. But the office setup probably isn't configured to forward the client IP to the server. It isn't critical currently, but it would be nice to be able to capture the public client IP in all cases.

I could do a little more investigation with log files, etc., but I'm pretty sure I'll have to:

- possibly configure something in my NGINX config, and/or
- have someone configure the firewall to pass through the client IP, because it seems like it isn't doing that currently, or I'm not capturing what it is sending.
How can I get the remote IP / client IP using NGINX in Docker? Also using Laravel
You don't need the equals sign here:

    location = /static/ {
        root /home/ubuntu/myprojectdir;
    }

Instead try this:

    location /static/ {
        root /home/ubuntu/myprojectdir;
    }
I have built and successfully deployed a django rest framework site using gunicorn and nginx on Ubuntu 18.04. However, the static files are not being pulled up.

[Screenshot: Django web app without loaded static files]

Here is my nginx configuration:

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name [my_server_domain_or_IP];
        #root /var/www/html;

        location = /favicon.ico { access_log off; log_not_found off; }

        location = /static/ {
            root /home/ubuntu/myprojectdir;
        }

        location / {
            include proxy_params;
            proxy_pass http://unix:/run/gunicorn.sock;
        }
    }

And in my settings.py file:

    STATIC_URL = '/static/'
    import os
    STATIC_ROOT = os.path.join(BASE_DIR, 'static/')

I have checked that DEBUG is set to False. I have also already run collectstatic, and the static files are located in a folder called static in /home/ubuntu/myprojectdir/static. After every change I made to the nginx configuration, I restarted nginx with sudo systemctl restart nginx.

I mainly followed this tutorial; the only difference is that I edited the default nginx configuration instead of creating a new one, because my django application wasn't being deployed that way. I can't seem to figure out what's wrong and have been trying to solve it for days. Am I missing something here?
Static files not loading for deployed django rest framework
Setting contentType: "text/plain" for the uploaded file solved the problem. Thanks @ofirule
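The question doesn't say how the files reach S3, so as one hypothetical sketch: if they were uploaded with boto3, the content type S3 serves back can be set explicitly at upload time (bucket and key names here are made up).

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        "report.manefiest",
        "my-bucket",
        "report.manefiest",
        # stored as the object's Content-Type, which S3 (and the nginx
        # proxy in front of it) passes on to the browser
        ExtraArgs={"ContentType": "text/plain"},
    )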
I want to open files from S3, served by nginx, in a browser. I was unable to get it working with the following config. The files in the S3 bucket are text files with the extension .manefiest.

    location /manefiest/ {
        proxy_pass http://my-bucket.s3-website-us-west-2.amazonaws.com/;
        types {
            text/html  manefiest;
            text/plain manefiest;
        }
    }

I want the browser to show the contents of the file, but with the above config it was downloading the file. What's wrong here?
Nginx to serve contents of S3 files in browser
Based on the comments, the issue was caused by duplicate locations in the nginx config file. This was due to deleting the nginx default path in .ebextensions while EB re-creates it. Since this seems to be a bug, an AWS support ticket was created.
I've been moving over to Elastic Beanstalk using Amazon Linux 2 and I'm having a problem overwriting the default nginx.conf file. I'm following the AL2 docs for the reverse proxy. They say, "To override the Elastic Beanstalk default nginx configuration completely, include a configuration in your source bundle at .platform/nginx/nginx.conf:"

[Screenshot: my app's folder structure]

When I run my deploy, though, I get the error:

    CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment: Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'.","timestamp":1598554657,"severity":"WARN"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1598554682,"severity":"ERROR"}]}]}

The main part of the error is "Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'." Which I'm confused about, because '.platform/nginx' is exactly where I've put the file/folder.

I've tried completely removing the .ebextensions folder and got the same error. I've tried starting from a completely fresh Beanstalk environment and still got that error. I'm not understanding how Beanstalk is managing this.
AWS ElasticBeanstalk Amazon Linux 2 .platform folder not copying NGINX conf
For web sockets to work over the TLS (wss) protocol, you need to generate ssl certificates. After generating the certificates, add the following line to the uwsgi.ini file and restart the server:

    https-socket = [ip]:[port],/path_to_server_certificate,/path_to_key

(Optionally you can also pass 2 more fields: [,ciphers,ca].) More details can be found here.

Alternatively, if your message broker is capable, you can expose it directly to the client using a messaging protocol like MQTT or STOMP.
WebSocket connection to 'wss://ip_address:8008/ws/events?subscribe-broadcast' failed: WebSocket opening handshake timed out

It times out only when I open the UI over HTTPS; over HTTP it's working. I have generated the certificate using OpenSSL on ubuntu.

My uwsgi configuration is:

    socket = /tmp/uwsgi.sock
    chmod-socket = 666
    socket-timeout = 60
    chdir =
    wsgi-file = /wsgi.py
    virtualenv =
    vacuum = true
    enable-threads = true
    threads = 500
    startup-timeout = 15
    graceful-timeout = 15
    http-socket = :8008
    http-websockets = true

My nginx configuration is:

    server {
        listen :80 default;
        listen :443 ssl http2 default_server;
        ssl_certificate /generate_crt.crt;
        ssl_certificate_key /generated_key.key;
        client_body_buffer_size 500M;
        client_body_timeout 300s;
        keepalive_timeout 5000;
        client_max_body_size 700M;
        access_log syslog:server=unix:/dev/log;
        root /tmp/MVM_APPS/angularjs/dist;
        index index.html index.htm;
        server_name localhost;

        location /api {
            uwsgi_pass unix:///tmp/uwsgi.sock;
            include uwsgi_params;
            uwsgi_read_timeout 120;
            uwsgi_send_timeout 1000;
        }

        location /ws/ {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection upgrade;
            proxy_pass http://:8008;
            proxy_read_timeout 86400;
        }

        location /static {
            alias //static;
        }

        location / {
            try_files $uri $uri/ /index.html;
        }
    }

I am using Django with the ws4redis package.
WebSocket opening handshake timed out in https
The answer is in your question already:

    the shiny web-application is accessible through localhost:3838 on the host machine

So start the URL with http://localhost:3838. If you need this to be accessible from other hosts, or you expect that the published port number might ever change, you'll need to pass in a configuration option that says what the external URL actually is, or stand up a proxy in front of the two other containers that can do path-based routing.

Ultimately, any URL you put in an <iframe>, an <img>, and so on gets interpreted in a browser, which does not run in Docker. That means that references in HTML content to things that happen to be running in Docker containers always need to use the host's hostname and the published port number; they can never use the Docker-internal hostnames.
I have two docker containers:

- web, containing nginx with some static html
- shiny, containing an R Shiny web application

When run, the shiny web-application is accessible through localhost:3838 on the host machine, while the static html site is accessed through localhost:80. My goal is to make a multi-container application through docker-compose, where users access the static html, and the static html occasionally fetches data visualisations from shiny via an <iframe>.

I can't figure out how to point the iframe to a url that originates within the docker-compose network. Most people tend to host their Shiny apps at urls that are accessible through the Internet (i.e. shinyapps.io), but as a learning project I wanted to figure out a way to host a containerized shiny server alongside nginx.

The desired result would be the ability to simply write an <iframe> pointing at app_x in the static html, and have it find app_x on the shiny server through the docker network. Is this something that can be sorted out through nginx configuration?
How to point iframe to a url within a docker network?
try_files needs two parameters, so you could use a dummy value to replace the file term. For example:

    try_files nonexistent /index.php$is_args$args;

See this document for details. But the neater solution is probably a rewrite...last statement:

    rewrite ^ /index.php last;

The rewrite directive will automatically append the query string. See this document for details.
I have done a load of searching for an answer to this and I cannot find a suitable one. Basically, I have a site built in SilverStripe running on nginx. It all works pretty well, but I want any files/images uploaded via the admin (to the assets folder) to be resolved via index.php in the site root (so we can check the file permissions set in the admin before returning files to the user).

I have a pretty simple nginx config (for my local docker instance):

    server {
        include mime.types;
        default_type application/octet-stream;
        client_max_body_size 0;
        listen 80;
        root /var/www/html;

        location / {
            try_files $uri /index.php?$query_string;
        }

        location ^~ /assets/ {
            try_files $uri /index.php?$query_string;
        }

        location /index.php {
            fastcgi_buffer_size 32k;
            fastcgi_busy_buffers_size 64k;
            fastcgi_buffers 4 32k;
            fastcgi_keep_conn on;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

The issue is these lines:

    location ^~ /assets/ {
        try_files $uri /index.php?$query_string;
    }

Unfortunately, try_files checks for a file's existence before handing the request over to php. Is there a way to stop this and hand all requests for the assets directory directly to PHP?
How to get NGINX to execute all URL's in a folder via index.php
UPDATE: Take a look at this blog post. It explains how to set up C++/FCGI/nginx quite thoroughly.

ORIGINAL ANSWER: Your C++ code should be a listener (when it's running, it should listen on a port and return responses to incoming requests). This part doesn't have anything to do with nginx. So first make sure that your code is working correctly; run your code, try to access the specified port, and see if you get the expected response.

Then you need to set up a proxy in your nginx configuration that redirects all of the traffic that you want to your C++ port (e.g. 9000). For example, you can set it up so that any url of the form https://your_domain.com/api/* is handed to your C++ program. This is pretty easy in nginx:

    location /api/ {
        proxy_pass http://127.0.0.1:9000/;
    }

But first test your C++ alone, and make sure it works fine.

Also, you'd better use something like runit, systemd, or similar tools to keep your C++ listener running (restarting it if it crashes).
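One assumption-laden way to wire that up: if the spawn-fcgi utility is installed, it can bind the FastCGI port and run the binary in the foreground (-n), which is the shape runit or a simple systemd service expects to supervise.

    # bind 127.0.0.1:9000 and run the app without daemonizing;
    # the supervisor restarts this command if it dies
    spawn-fcgi -a 127.0.0.1 -p 9000 -n -- ./hello_world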
I have written the lines below in a configuration file created in /etc/nginx/conf.d, named "helloworld.local.conf":

    server {
        listen 80 default_server;
        server_name hello_world;
        location / {
            root /var/www/helloworld;
            fastcgi_pass 127.0.0.1:9000;
        }
    }

There is an index.html file in /var/www/helloworld displaying the text "site coming soon". My c++ code looks like below:

    #include <iostream>
    #include "fcgio.h"
    using namespace std;

    int main(void)
    {
        cout<<"Content-type:text/html\r\n\r\n";
        cout<<"<html>\n";
        cout<<"<head>\n";
        cout<<"<title>Hello World- First CGI Program</title>\n";
        cout<<"</head>\n";
        cout<<"<body>\n";
        cout<<"<h2> hello world</h2>\n";
        cout<<"</body>\n";
        cout<<"</html>\n";
        return 0;
    }

I have the c++ binary, produced using the following command, which needs to be deployed on the NGINX server:

    g++ abc.cpp -lfcgi++ -lfcgi -o hello_world

I searched and tried different ways to run this from stackoverflow, but am still missing something. I also ran the command below to connect the c++ binary to the server:

    cgi-fcgi -start -connect 127.0.0.1:9000 ./hello_world

Now when I visit the address 127.0.0.1:9000 in the browser, I do not get the "hello world" text that is in the c++ code. Expected output: the response "hello world" from the c++ binary, displayed on the html page. What am I missing? Please help.

UPDATE: this is my config file now:

    server {
        server_name hello;
        location / {
            fastcgi_index index.cgi;
            root /var/www/helloworld;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_name;
            include fastcgi_params;
        }
    }
How to run c++ CGI script on NGINX server
If you use the aforementioned configuration, your backend will route all requests to index.html. Then, when the Vue Router is mounted, it will check the URL and render the corresponding component. The implementation described above will work.
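For reference, the catch-all the Vue Router docs suggest for nginx (the root path is wherever your build is deployed) looks like:

    location / {
        # serve the file if it exists, otherwise fall back to the SPA entry point
        try_files $uri $uri/ /index.html;
    }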
I'm using Vue Router history mode for my Vue.js app. My problem is that, when I try to refresh a page that is not the root page, or enter its URL in the browser address bar, a "page not found" 404 is displayed.

Now, in the Vue Router guide they warn about this (see https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations), and suggest the solution to "add a simple catch-all fallback route to your server. If the URL doesn't match any static assets, it should serve the same index.html page that your app lives in".

With this solution, if I try to access one of my non-root pages (with its corresponding URL) through the browser address bar, the root page will be displayed. Is this interpretation correct? My question: is there a way to achieve the behaviour such that I can access my different pages directly from the browser address bar, and upon refresh stay on the same page?
Vue Router: access page directly from browser address bar
As pointed out by @larsks, Ubuntu 16.04's own repositories carry nginx only up to version 1.10.3 (see the official wiki for more detail). So the best/safe option would be either to move your base OS to 18.04 or to use nginx 1.10.3.

Just for reference, here is how you can install nginx from source (note the ./configure step, which the build needs before make):

    wget https://nginx.org/download/nginx-1.14.0.tar.gz
    tar zxf nginx-1.14.0.tar.gz
    cd nginx-1.14.0
    ./configure
    make
    sudo make install
    sudo nginx

More detail here.
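A hedged alternative to building from source: nginx.org publishes its own apt repository for xenial, so a Dockerfile can pin a 1.14.x package from there. The exact version string below is an assumption; check apt-cache madison nginx inside the image for what the repo actually offers.

    FROM ubuntu:16.04
    # add nginx.org's xenial repo and its signing key, then pin the version
    RUN apt-get update \
     && apt-get install -y curl gnupg2 ca-certificates \
     && echo "deb http://nginx.org/packages/ubuntu/ xenial nginx" \
          > /etc/apt/sources.list.d/nginx.list \
     && curl -fsSL https://nginx.org/keys/nginx_signing.key | apt-key add - \
     && apt-get update \
     && apt-get install -y nginx=1.14.2-1~xenial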
    # 1. use ubuntu 16.04 as base image
    FROM ubuntu:16.04

    # defining user root
    USER root

    # OS update
    RUN apt-get update

    # Installing PHP and NginX
    RUN apt-get install -y nginx=1.4.* php7.0

    # Remove the default Nginx configuration file
    RUN rm -v /etc/nginx/nginx.conf

    # Copy a configuration file from the current directory
    ADD nginx.conf /etc/nginx/
    ADD web /usr/share/nginx/html/

    # Append "daemon off;" to the beginning of the configuration
    RUN echo "daemon off;" >> /etc/nginx/nginx.conf

    # Expose ports
    EXPOSE 90

    # Set the default command to execute
    # when creating a new container
    CMD service nginx start

This is my Dockerfile. I want to install 1.14.2 of nginx, but this error occurs:

    E: Version '1.4.*' for 'nginx' was not found.

How can I install a specific version of nginx inside docker this way?
How to install nginx 1.14.X inside docker?
After I did

    wget -S https://wellcode.com

I concluded that the problem was on the dns side, so in Cloudflare I changed SSL to Full, and the problem was solved.

Explanation: the -S flag will output headers and therefore show you the redirects. Example:

    HTTP/1.1 301 Moved Permanently
    Server: nginx
    Date: Tue, 05 Jan 2021 12:26:55 GMT
    Content-Type: text/html
    Content-Length: 162
    Connection: close
    Location: https://example.com/foo?bar=baz&dragons=probably

    HTTP/1.1 200 OK
    Server: nginx
    Date: Tue, 05 Jan 2021 12:26:55 GMT
    Content-Type: application/json; charset=utf-8
    Transfer-Encoding: chunked
    Connection: close
    Vary: Accept-Encoding
    X-Powered-By: PHP/7.4.13
    Expires: Tue, 05 Jan 2021 12:26:55 GMT
    Cache-Control: max-age=0
    X-Content-Type-Options: nosniff
    Strict-Transport-Security: max-age=16070400; includeSubDomains
I'm trying to redirect http to https. I use letsencrypt for the ssl certificates. My config looks like this:

    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://example.com$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        server_name example.com www.example.com;
        root /var/www/landing;

        location /.well-known/ {
            root /var/www/;
        }
    }

When I try to access example.com, I get a browser error saying that there were too many redirects. The error occurs for both http://example.com and https://example.com; the server block is reached when I go to http://www.example.com, because I get redirected to https://example.com and then I get the error above. How can I fix this?
Nginx too many redirects when redirecting http to https
It looks like your upstream definition is not correct: it's trying to connect to port 80 instead of port 9000. Try:

    upstream example {
        server mystack_app1:9000;   # Also tried with just 'app1'
        # server mystack_app2;
        keepalive 32;
    }

By the way, I suggest you use container_name in your docker-compose file.
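A sketch of that last suggestion (service and image names taken from the question):

    services:
      app1:
        image: my-app:latest
        # gives the container a fixed, predictable DNS name for nginx to use
        container_name: mystack_app1

One caveat worth noting: docker stack deploy (swarm mode) ignores container_name, so this only applies when running with plain docker-compose up.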
I have a docker stack running 2 containers: the first is nginx, the second an application. The problem is that nginx shows a Bad Gateway error. Here is the nginx conf:

    upstream example {
        server mystack_app1;   # Also tried with just 'app1'
        # server mystack_app2;
        keepalive 32;
    }

    server {
        listen 80;
        server_name example;

        location / {
            proxy_pass http://example;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 150;
            proxy_send_timeout 100;
            proxy_read_timeout 100;
            proxy_buffers 4 32k;
            client_max_body_size 8m;
            client_body_buffer_size 128k;
        }
    }

Here is docker-compose.yml:

    version: "3"
    services:
      app1:
        image: my-app:latest
        ports:
          - "9000:9000"
        networks:
          - webnet
      web:
        image: my-web:latest
        ports:
          - "81:80"
        networks:
          - webnet
        deploy:
          restart_policy:
            condition: on-failure
    networks:
      webnet:

I use the following command to deploy the docker stack:

    docker stack deploy -c docker-compose.yml mystack

So I can access the application from the host's browser at localhost:9000, and it works ok. Also, from the nginx container, I can ping mystack_app1. But when accessing localhost:81, nginx shows 502 Bad Gateway. Please help.
Nginx: 502 Bad Gateway within docker stack
Finally, I got nginx to work with the html5 fallback. Open /etc/nginx/sites-available/homestead.app, or whatever domain you specified in your Homestead.yaml file, and replace the "location" section with:

    location / {
        try_files $uri $uri/ /index.php;
    }

Then save, and open laravel's web.php (router) and put in this code:

    Route::get('/{vue?}/{vue2?}/{vue3?}', function () {
        // return view('welcome');
        if ( ! request()->ajax() ) {
            return view('index');
        }
    });

The code above prevents laravel from returning a 404 error. Now the vue html5 history fallback works without the # sign. Thanks everyone for trying to help; without you guys I may not have had any idea how to resolve this issue.
I understand that I should put this code in place to make the HTML5 History fallback work:

    location / {
        try_files $uri $uri/ /index.html;
    }

(https://router.vuejs.org/en/essentials/history-mode.html)

But into which file? I tried searching google; nothing works, and putting the above code in /etc/nginx/nginx.conf stops nginx from working. I'm using vagrant Homestead for laravel. Please help.
Where to put nginx configuration file?
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall, where it gave us max entity size errors from a security module. After discussing with Azure Support, it turns out this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would get this error. This happened even if all rules were disabled; I spent a lot of time experimenting with different rules on/off. The SQL injection rules didn't seem to like our ASP.NET Web Forms site.

I have now simulated 1,000 concurrent users split between two test agents, and the performance was good for our site, with average page load time well under a second.
I have a Visual Studio load test that runs through the pages on a website, but I have experienced big differences in performance when using a load balancer. If I run the tests going straight to Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, as an example. If I direct the same test at the load balancer with 2 web servers behind it, I get an average page load time of about 30 seconds; it starts quick but then deteriorates.

This is strange, as I now have 2 web servers load balanced instead of using 1 directly, so I expected to be able to handle more load. I am testing this with Azure Web Application Gateway now, on Azure VMs. I experienced the same problem previously with an Nginx setup; I thought it was due to that setup, but now I find the same on Azure. Any thoughts would be great.
Azure Web Application Gateway performance with load test
I have solved my problem by adding useHash to my router:

    RouterModule.forRoot(appRoutes, { useHash: true }),
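Note that useHash switches the app to hash-style URLs (e.g. /#/rss), which never reach nginx as paths. If you'd rather keep clean URLs, the usual server-side fix is a fallback in the static location; a sketch against the config from the question:

    location / {
        root /www;
        # hand unknown paths to the Angular entry point so the
        # client-side router can resolve them
        try_files $uri $uri/ /index.html;
    }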
I have an angular2 application. All works fine if I use the local webpack dev server. When I deploy the application on a server behind nginx, I can navigate using application links, but if I enter a URL in the browser address bar I get a 404 Not Found error.

Here is the nginx config for the site:

    server {
        listen 80;
        server_name mydomain;

        location /api {
            proxy_pass http://mydomain:4000;
        }

        location /token-auth {
            proxy_pass http://mydomain:4000;
        }

        location / {
            root /www;
        }
    }

Here are my application details:

    @NgModule({
        imports: [
            RouterModule.forRoot(appRoutes),

    export const appRoutes: Routes = [
        { path: 'login', component: LoginComponent },
        { path: 'rss', component: RssComponent, data: { section: 1 }, canActivate: [AuthGuard] },
        { path: '', redirectTo: '/login', pathMatch: 'full' }
    ];

    @Component({
        selector: 'my-app',
        template: `
            <a routerLink="/rss">RSS</a>
            <router-outlet></router-outlet>
        `,
        styleUrls: ['app.component.css']
    })

I am not sure whether it is an nginx configuration error or an error in my application. How can I fix it?
Routing behind nginx
I am assuming that your hard limit and soft limit are set properly, but you are getting this error because the vertx process is not able to utilize the full ulimits you have set. Check the maximum limit your vertx server can actually use with:

    cat /proc/PID/limits

The line that tells you is:

    Max open files            700000               700000               files

If you have set the soft limit high but this value still comes out low, then something in the app's startup (like the init files) is changing the soft limit. So find that init script and simply change the soft limit there. It will fix your problem.

https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
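For completeness, a sketch of making the limits stick across logins via pam_limits (the user name is an assumption; a service started by systemd would instead need LimitNOFILE= in its unit file):

    # /etc/security/limits.conf
    ubuntu  soft  nofile  700000
    ubuntu  hard  nofile  700000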
My sincere apologies if the question is stupid, but I am a novice here (a front end developer recently working on backend). I have my app running on an Amazon aws machine. What I want is to efficiently utilize my resources so that more requests are served.

I am running a Java vertx server that serves GET and websocket requests. I have created three instances of this server running on different ports and balanced the load using nginx. My aws disk is pretty much:

    lsblk
    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  100G  0 disk
    └─xvda1 202:1    0  100G  0 part /

My soft limit is unlimited (ulimit -S) and my hard limit is unlimited (ulimit -H). I am checking the total number of opened files as:

    sudo lsof -Fn -u root | wc -l
    13397

Why am I getting this exception?

    java.io.IOException: Too many open files

My ulimit -a is:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 128305
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 700000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 128305
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

What is the best way to check the number of available files, and the number of files that are in use? And how should I use the resources so that I can handle a large number of connections? Please let me know.
Too many open files exception on AWS machine with high configuration
Further to your comment, any URI beginning with /fetch that does not match a static file within the aliased path should be redirected to /fetch/index.php.

    location ^~ /fetch {
        alias /usr/share/nginx/html/another_folder/web;

        if (!-e $request_filename) {
            rewrite ^ /fetch/index.php last;
        }

        location ~ \.php$ {
            if (!-f $request_filename) {
                return 404;
            }
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass 127.0.0.1:9000;
        }
    }

We avoid using try_files with alias because of this long-term issue. See this caution regarding the use of if.
I've run into a problem configuring nginx for a yii2 basic app. Here is my server block file:

    server {
        listen 80;
        access_log /var/log/nginx/access-server.log;
        error_log /var/log/nginx/error-server.log;
        charset utf-8;

        location /fetch {
            root /usr/share/nginx/html/another_folder/web/;
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

My project is located in another folder, "another_folder", and I want nginx to serve files from that folder when a user goes to the url http://ip/fetch. My error log returns:

    2017/02/11 12:38:52 [error] 4242#0: *12 FastCGI sent in stderr: "Unable to open primary script: /usr/share/nginx/html/index.php (No such file or directory)" while reading response header from upstream

And the browser shows: No input file specified. Can you help me with this issue? Thank you!
Nginx Yii2 configuration in different folders
Your nginx config file is in the wrong location. Steps to fix:

    sudo docker-compose down

Delete the nginx image:

    sudo docker images
    REPOSITORY            TAG      IMAGE ID       CREATED              SIZE
    pythonserving_nginx   latest   152698f13c7a   About a minute ago   54.3 MB

    sudo docker rmi pythonserving_nginx

Now change the nginx Dockerfile:

    FROM nginx:1.11.8-alpine
    MAINTAINER geoheil
    ADD sites-enabled.conf /etc/nginx/conf.d/sites-enabled.conf

Please note the location of the nginx config. Now try this docker-compose file (using user-defined networks):

    version: '2'
    services:
      application:
        restart: always
        build: ./application
        command: gunicorn -w 4 --bind :5000 wsgi:application
        networks:
          - testnetwork
        expose:
          - "5000"
        ports:
          - "5000:5000"
      db:
        restart: always
        image: postgres:9.6.1-alpine
        networks:
          - testnetwork
        ports:
          - "5432:5432"
        environment:
          - POSTGRES_USER=d
          - POSTGRES_PASSWORD=d
          - POSTGRES_DB=d
        volumes:
          - ./postgres:/var/lib/postgresql
      nginx:
        restart: always
        build: ./nginx
        networks:
          - testnetwork
        expose:
          - 8080
        ports:
          - "8880:8080"
    networks:
      testnetwork:

And bring up the containers:

    sudo docker-compose up

Then browse to http://localhost:8880.
Playing around with flask, I would like to get a real setup up and running in docker. This means flask should be served via nginx and gunicorn. I set up a sample code repository https://github.com/geoHeil/pythonServing but so far can't get nginx to work properly.

Flask is served on application:5000; docker should resolve "application" to its respective address. The nginx config is as follows:

    server {
        listen 8080;
        server_name application;
        charset utf-8;

        location / {
            proxy_pass http://application:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

which looks good to me. So far I can't find the problem.

EDIT: the compose file is here. The commands to start were docker-compose build and docker-compose up.

    version: '2'
    services:
      application:
        restart: always
        build: ./application
        command: gunicorn -w 4 --bind :5000 wsgi:application
        links:
          - db
        expose:
          - "5000"
        ports:
          - "5000:5000"
      nginx:
        restart: always
        build: ./nginx
        links:
          - application
        expose:
          - 8080
        ports:
          - "8880:8080"
serving flask via nginx and gunicorn in docker
The answer is here: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml

But obviously the scc-ingress file needed to be changed to have a host such as foo.bar.com. Also, I needed to generate a self-signed SSL certificate using OpenSSL, as per this link: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls

Finally, I had to add a CNAME on Route53 from foo.bar.com to the DNS name of the ELB created.
So I built my Kubernetes cluster on AWS using KOPS. I then deployed SocketCluster on my K8s cluster using Baasil, which deploys 7 YAML files.

My problem is: the scc-ingress isn't getting any IP or endpoint, as I have not deployed any ingress controller. According to the ingress controller docs, I am recommended to deploy an nginx ingress controller. I need easy, explained steps to deploy the nginx ingress controller for my specific cluster.

To view the current status of my cluster in a nice GUI, see the screenshots below:

[Screenshots: Deployments, Ingress, Pods, Replica Sets, Services]
How can I deploy an ingress controller for my Kubernetes cluster
Looks like everything was okay. I tried some curl calls to make sure headers were being set correctly (credit to @RichardSmith for the recommendation). I also tested in different browsers. Everything worked! Turns out I needed to clear my primary browser's cache. Not sure why, but it resolved the issue!

For anyone interested in controlling how browsers cache 301 redirects done by nginx: https://serverfault.com/questions/394040/cache-control-for-permanent-301-redirects-nginx
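One common approach along those lines is to attach caching headers to the redirect itself; a sketch against the config in the question (the max-age value is an arbitrary example):

    server {
        listen 80;
        # add_header applies to 301 responses as well as 200s,
        # so clients will re-check the redirect after 5 minutes
        add_header Cache-Control "max-age=300";
        return 301 https://$host$request_uri;
    }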
I'm setting up a server using Docker. One container runs an nginx image with SSL configured. A second container runs a simple node app (on port 3001). I've got the two containers communicating with a --link docker parameter.

I need to redirect all HTTP requests to HTTPS. Looking at other threads and online sources, I found return 301 https://$host$request_uri. When I type http://localhost in the browser, I'm getting the upstream's name in the browser (https://node_app instead of https://localhost). How can I successfully redirect without defining a server_name or explicitly defining a domain?

Edit: I should clarify that accessing https://localhost directly in the browser works. HTTP does not.

Here's my nginx.conf file:

    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        upstream node_app {
            server node:3001;
        }

        server {
            listen 80;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443 ssl;
            ssl_certificate /etc/nginx/ssl/server.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                proxy_pass http://node_app/;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
            }
        }
    }
Docker nginx redirect HTTP to HTTPS