The second entry is wrong; it should be:

```
proxy_set_header X-Forwarded-Ssl on;
```

That will solve the issue.

UPDATE: Without being able to test, the only thing I see missing is this header:

```
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```

Besides that, everything seems correct.
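For reference, a minimal sketch of the proxied location block with both suggested headers applied (upstream port and paths are taken from the question's config; adjust to your setup):

```
location ~* (login|register)$ {
    proxy_pass http://localhost:9000;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Play! inspects these to decide request.secure
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```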
Configured Nginx as a reverse proxy in front of Play! and passing https with the following headers set:

```
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Ssl https;
```

login() [https://localhost/login] is being forwarded to Play! on port 9000 as 'http'. But request.secure in login() is still 'false'. Any idea?

UPDATE: here is the server conf:

```
server {
    listen 443;
    server_name localhost;
    ssl on;
    ssl_certificate /home/aymer/play/key/localhost.crt;
    ssl_certificate_key /home/aymer/play/key/localhost.key;
    ssl_session_timeout 5m;

    location ~ ^/(images|javascript|js|css|flash|media|static)/ {
        root /home/aymer/play/playapp/public;
        expires 30d;
    }

    location ~* (login|register)$ {
        proxy_pass http://localhost:9000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Ssl on;
    }

    location / {
        rewrite ^/(.*) http://$host/$1 permanent;
    }
}
```
(https) Nginx --> (http) Play!, but request.secure is false
It all comes down to how secure (or paranoid) you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.

As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. Details of how to do that depend on the software you're using on your backend.

If you do decide to implement SSL on the API backend, the setup would be similar to what you did to set up Nginx with SSL on the frontend, with the main difference being that you don't need to use a public certificate on the backend. It can be self-signed, since no one else besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
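If you go the self-signed route for the backend, a minimal sketch for generating the certificate with OpenSSL (the file names, hostname, and 365-day validity here are arbitrary choices, not from the answer):

```sh
# Generate a self-signed certificate and key for the internal API backend
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout api-backend.key -out api-backend.crt \
    -days 365 -subj "/CN=api-backend.internal"
```

The reverse proxy would then talk to https://api-backend.internal and be configured to trust (or explicitly pin) this certificate.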
I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data. I've set up nginx to work with TLS (LetsEncrypt) so that seems to be okay. However, requests from nginx to my API server are still insecure http requests (this is all happening across docker containers, by the way). Is it a best practice to also set up https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
Is HTTPS behind reverse proxy needed?
Based on production experience, it's better to go against the rule from the docker docs of "one container for one process". You're shipping a (micro-)service with a docker image, and if it's required to have nginx in it, you include it. So basically for a django app there are:

- nginx (e.g. for static files)
- gunicorn or uwsgi
- the django code itself

I don't see any performance issue in adding nginx to the container, but a little note on docker image size: on ubuntu:16.04/debian:jessie, adding nginx-full increases your image size by around ~100mb (some overhead on first pulling the image). That said, this doesn't rule out the second scenario either, because you can also put nginx in front of your docker image for balancing purposes (or proxy_pass management).
We are looking to move our current Nginx/Gunicorn/Django stack into Docker, and deploy it for high availability using Docker Swarm. One of the decisions we have been struggling with is whether or not to place Nginx in the same container as Gunicorn/Django. Here are the scenarios and how we view them:

Scenario 1: Place Nginx in the app's container. This goes against the "each service has its own container" methodology, but it allows Nginx to communicate with Gunicorn directly through a unix socket instead of a port. This obviously isn't huge but it's worth mentioning. The main advantages are listed below. A potential disadvantage here is having extra overhead from too many Nginx instances (please weigh in on this).

Scenario 2: Place Nginx in its own container. Though this follows the aforementioned methodology, it seems more flawed. In a Docker Swarm scenario, the distribution of Nginx and App containers will likely not be uniform. Some nodes may end up with more Nginx containers, while others have more app containers (and possibly even 0 Nginx containers). This means that Nginx would end up reverse-proxying an app container on a different host entirely.

Now I'm sure Docker Swarm supports special configurations that say at least one Nginx container must be running on each node, but this strikes me as an anti-pattern. Even in that instance, is it worth the effort over Scenario 1?
Should nginx be packed into the same container as Django when deploying with Docker Swarm?
nginx will definitely work faster than Apache. I can't tell about fastcgi since I never used it with nginx, but this solution seems to make more sense on several servers (one for static contents and one for fastcgi/PHP).

If you are really targeting performance (and even considering C/C++) then you should give a try to G-WAN, an all-in-one server which provides (very fast) C scripts. Not only does G-WAN have a ridiculously small memory footprint (120 KB), but it scales like nothing else. There's work ahead of you if you migrate from PHP, but you can start with the performance-critical tasks and migrate progressively. We have made the jump and cannot consider going back to Apache!
I currently have one server with nginx that reverse proxies to apache (same server) for processing php requests. I'm wondering, if I dropped apache and ran nginx/fastcgi to php, whether I'd see any sort of performance increase. I'm assuming I would since Apache's pretty bloated, but at the same time I'm not sure how reliable fastcgi/php is, especially in high-traffic situations.

My site gets around 200,000 unique visitors a month, with around 6,000,000 page crawls from the search engines monthly. This number is steadily increasing, so I'm looking at performance options.

My site is very optimized code-wise and there isn't any caching (don't want that either); each page has a max of 2 sql queries without any joins on other tables, and indexes are perfect as well.

In a year or so I'll be rewriting everything to use ClearSilver for the templates, and then probably use python or else c++ for extreme performance.

I suppose I'm more or less looking for any advice from anyone who is familiar with nginx/fastcgi and, if willing, to provide some benchmarks. My site runs on one server with 1 quad core xeon, 8gb ram, 150gb velociraptor drive.
nginx/apache/php vs nginx/php
I had the same problem and I solved it this way through the WKNavigationDelegate:

```swift
func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction,
             decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
    if navigationAction.navigationType == .linkActivated {
        guard let url = navigationAction.request.url else { return }
        webView.load(URLRequest(url: url))
    }
    decisionHandler(.allow)
}
```

Hope it helps.
I have a basic webview that loads a website that is fronted by an nginx reverse proxy that is just forwarding it to another site. I am able to load it using safari, chrome, firefox etc. on the device and emulator (as well as computer), but when I try to load it in the wkwebview it flashes a couple times then goes to a blank white screen. Note: this same app worked fine in iOS 10-11, but is now broken with iOS 12. Below is a simple code excerpt that shows what I'm doing:

```swift
import UIKit
import WebKit

class ViewController: UIViewController, WKUIDelegate {

    var webView: WKWebView!

    override func loadView() {
        let webConfiguration = WKWebViewConfiguration()
        webView = WKWebView(frame: .zero, configuration: webConfiguration)
        webView.uiDelegate = self
        view = webView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let myURL = URL(string: "https://test.com")
        let myRequest = URLRequest(url: myURL!)
        webView.load(myRequest)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
```

I've attempted adding the following keys to my Info.plist, which also did not work:

```
NSAppTransportSecurity
  NSAllowsArbitraryLoads
  NSExceptionDomains
    test.com
      NSExceptionAllowsInsecureHTTPLoads
      NSIncludesSubDomains
```

It also shows this in the logs in xcode:

```
[BoringSSL] nw_protocol_boringssl_get_output_frames(1301) [C1.1:2] . [0x7f82f8d0efc0] get output frames failed, state 8196
```

When I try to debug it using Safari Dev Tools it shows that it's trying to load about:blank, which is strange, because again - it works in all other browsers. On the nginx side all I'm doing is a simple proxy_pass rule, and when I hit the endpoint in the app I can see in the nginx access logs that it responds with a 200. Anyone have ANY ideas?
iOS 12 wkwebview not working with redirects?
It turns out that it was linking to /home/foo/public_html/~foo. So, a circular symlink from /home/foo/public_html/~foo back to /home/foo/public_html works like a charm. Thanks for all your help!
I have an NGINX server with fastcgi/PHP running on it. I need to add userdirs to it, but I can't get PHP to execute the files - it just asks me if I want to download them. It does work without the userdir (e.g. it works on physibots.info/hugs.php, but not physibots.info/~kisses/hugs.php).

Config:

```
server {
    listen 80;
    server_name physibots.info;
    access_log /home/virtual/physibots.info/logs/access.log;
    root /home/virtual/physibots.info/public_html;

    location ~ ^/~(.+?)(/.*)?\.php$ {
        fastcgi_param SCRIPT_FILENAME /home/$1/public_html$fastcgi_script_name;
        fastcgi_pass unix:/tmp/php.socket;
    }

    location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        autoindex on;
    }

    location ~ \.php$ {
        try_files $uri /error.html/$uri?null;
        fastcgi_pass unix:/tmp/php.socket;
    }
}
```
NGINX/PHP downloading instead of executing
I had the same issue; the problem is the nginx configuration. It defaults to a 1 minute read timeout for proxy_pass:

```
Syntax:  proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http, server, location
```

See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout

In my case I've increased the timeout to 10 hours:

```
proxy_read_timeout 36000s;
```
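A minimal sketch of where the directive goes in the WebSocket location block (the upstream address is taken from the accompanying question's config; the 10-hour value is just the answerer's choice):

```
location / {
    proxy_pass http://127.0.0.1:1234;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Keep idle WebSocket connections open longer than the 60s default
    proxy_read_timeout 36000s;
}
```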
I'm using Go (Golang) 1.4.2 with Gorilla WebSockets behind an nginx 1.4.6 reverse proxy. My WebSockets are disconnecting after about a minute of having the page open. Same behavior occurs on Chrome and Firefox. At first, I had problems connecting the server and client with WebSockets. Then, I read that I needed to tweak my nginx configuration. This is what I have.

```
server {
    listen 80;
    server_name example.com;
    proxy_pass_header Server;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:1234;
    }
}
```

My Go code is basically echoing back the client's message. (Errors omitted for brevity). This is my HandleFunc.

```go
var up = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

ws, _ := up.Upgrade(resp, req, nil)
defer ws.Close()

var s struct {
    Foo string
    Bar string
}

for {
    ws.ReadJSON(&s)
    ws.WriteJSON(s)
}
```

The JavaScript is pretty simple as well.

```js
var ws = new WebSocket("ws://example.com/ws/");
ws.addEventListener("message", function(evnt) {
    console.log(JSON.parse(evnt.data));
});
var s = { Foo: "hello", Bar: "world" };
ws.send(JSON.stringify(s));
```

Go is reporting websocket: close 1006 unexpected EOF. I know that when I leave or refresh the page ReadJSON returns EOF, but this appears to be a different error. Also, the unexpected EOF happens by itself after about a minute of having the page open. I have an onerror function in JavaScript. That event doesn't fire, but onclose fires instead.
Gorilla WebSocket disconnects after a minute
EDIT: See Ajeet's answer below for the correct solution.

I don't think mbstring (like OpenSSL) depends on an extension; it should just be built into PHP. I'm running Raspbian and NginX, and if I create a file with `<?php phpinfo(); ?>` and look at it, then I see mbstring listed in the output.
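For the actual install, a plausible sketch (an assumption on my part; package names vary with the PHP version and distribution, and on PHP 7.x systems mbstring ships as a separate package, while older stock PHP 5 builds often include it already):

```sh
# For PHP 7.x installs where mbstring is packaged separately
sudo apt-get install php7.0-mbstring
# Restart the FPM service so nginx's fastcgi backend picks it up
sudo service php7.0-fpm restart
```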
I need this php extension in order to use one of my Magento extensions. How do I install the php mbstring extension on my Nginx Ubuntu 14.04 setup?
How do I install the php mbstring extension into Nginx on Ubuntu
This is an NGINX configuration I've used with Laravel 4 and Laravel 4.1 that works.

```
server {
    listen 80;
    server_name sub.domain.com;
    set $root_path '/var/www/html/application_name/public';
    root $root_path;
    index index.php index.html index.htm;

    try_files $uri $uri/ @rewrite;

    location @rewrite {
        rewrite ^/(.*)$ /index.php?_url=/$1;
    }

    location ~ \.php {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index /index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~* ^/(css|img|js|flv|swf|download)/(.+)$ {
        root $root_path;
    }

    location ~ /\.ht {
        deny all;
    }
}
```
I am trying to set up my Laravel 4 project using nginx. Here is my nginx server block for laravel:

```
server {
    listen 80;
    root /home/prism/www/laravel/public;
    index index.php index.html index.htm;
    server_name example.com;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

But my problem is, it's showing a "404 not found" error for all other routes except the default one that comes with the default installation.
nginx configuration for Laravel 4
If you want to capture 4 to 6 characters, why don't you put the quantifier inside the capture parentheses? Something like this, perhaps:

```
location ~ "^/s/([0-9a-zA-Z]{4,6})$" {
    ...
```

Curly braces are used both in regexes and for block control, so you must enclose your regex in quotes (single or double) (<-- nginx wiki).
I am trying to set up a regex for the path /s/<4-6 character string here> where I capture the 4-6 character string as $1. I tried using the following two entries, but both fail:

```
location ~ ^/s/([0-9a-zA-Z]){4,6}+$ { ...
location ~ ^/s/([0-9a-zA-Z]{4,6})+$ { ...
```

The first one comes up with 'unknown directive' and the second comes up with 'pcre_compile() failed: missing )'.

EDIT

The following routes would be served by this location:

```
/s/1234    (and I would capture '1234' in $1)
/s/12345   (and I would capture '12345' in $1)
/s/123456  (and I would capture '123456' in $1)
/s/abcd    (and I would capture 'abcd' in $1)
/s/abcde   (and I would capture 'abcde' in $1)
/s/abcdef  (and I would capture 'abcdef' in $1)
/s/a1b2c   (and I would capture 'a1b2c' in $1)
```

The following routes would NOT be served by this location:

```
/s/1
/s/12
/s/123
/s/a
/s/ab
/s/abc
/s/abc1234
/s/12345678
```

etc...
nginx location regex - character class and range of matches
I just had the same problem, and what did work is setting the nginx conf like this:

```
error_page 500 502 503 504 /50x.html;

location = /50x.html {
    root /home/username/sites/myProject/current/errorPages;
}

location = /image.png {
    root /home/username/sites/myProject/current/errorPages/50x_files;
}
```

And then reference the image simply as src="image.png". The same should apply to your css and js!

Edit: I found a way to make it work for a bunch of files:

```
error_page 500 502 503 504 /errorPages/50x.html;

location /errorPages/ {
    root /home/username/sites/myProject/current/;
}
```

This way all the files in the errorPages folder will be available (e.g. src="/errorPages/image.png"), as nginx will try to match everything under "/errorPages/...". It is necessary to remove both the "=" after "location" (as it's not an exact match anymore) and the "internal;" in it (as the other resources will be called from the html and not internally by nginx).
I'm trying to add a custom error page for 503. I added these lines to the server conf in my nginx.conf file:

```
error_page 500 502 503 504 /50x.html;

location = /50x.html {
    root /home/username/sites/myProject/current/errorPages;
    internal;
}
```

It displays the custom page when uwsgi is down; however, this doesn't show any images. I tried many different configurations I can think of, but no luck. How can I display the image file and enable css for a custom error page?

I put my custom error page into /home/username/sites/myProject/current/errorPages and the file structure is:

```
errorPages/50x.html
errorPages/50x_files/50x.css
errorPages/50x_files/50x.js
errorPages/50x_files/image.png
```
nginx custom error page 502 with css and image files
If you need to increase the body size of files you upload via the ingress controller, you need to add an annotation to your ingress resource:

```
nginx.ingress.kubernetes.io/proxy-body-size: 8m
```

Documentation available here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
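A minimal sketch of what that looks like on an Ingress resource (the resource name, host, and backend service here are made up for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: upload-ingress
  annotations:
    # Raises nginx's client_max_body_size for this ingress ("0" would disable the check)
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  rules:
  - host: uploads.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: upload-service
            port:
              number: 80
```

Note the annotation belongs on the Ingress object, not on the ingress-nginx controller Service shown in the question.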
I am fairly new to GCP and I have a rest URI to upload large files. I have an ingress-nginx-controller service and want to change it to upload files larger than 1mb and set a limit.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.35.0","helm.sh/chart":"ingress-nginx-2.13.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-09-21T18:37:27Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    helm.sh/chart: ingress-nginx-2.13.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
```

This is the error it throws:

```
413 Request Entity Too Large
nginx/1.19.2
```
Kubernetes nginx ingress controller cannot upload size more than 1mb
After searching for roughly 7 hours, I was finally able to find a solution to this issue in the Nginx forum: "Nginx connect to .sock failed (13: Permission denied) - 502 bad gateway".

What I simply did was change the name of the user on the first line in the /etc/nginx/nginx.conf file. In my case the default user was www-data, and I changed it to my root machine username.
I'm deploying my Django application on a VPS and I'm following the steps in the link below to configure my app with Gunicorn and Nginx: "How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04".

Everything went well with the tutorial (gunicorn and nginx are running), but the issue is that when I'm visiting the VPS through the static IP it's showing a white screen that is always reloading. After checking the nginx log I found the following:

```
(13: Permission denied) while connecting to upstream, client: , server: , request: "GET / HTTP/1.1", upstream: "http://unix:/root/myproject/myproject.sock:/", host: "", referrer: "http:///"
```
Nginx (13: Permission denied) while connecting to upstream
You can block IP ranges using the CIDR notation. Have a look at the article 'Nginx Block And Deny IP Address OR Network Subnets'.

You can use IP range calculators (like this one) that do the math for you. For example, your range 43.249.64.0-43.249.85.255 can be expressed as:

```
43.249.64.0/20
43.249.80.0/22
43.249.84.0/23
```
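A minimal sketch of the resulting rules in an nginx server (or http) block:

```
# Deny 43.249.64.0 - 43.249.85.255 using the three covering CIDR blocks
deny 43.249.64.0/20;
deny 43.249.80.0/22;
deny 43.249.84.0/23;
allow all;
```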
Is it possible to deny a range like 43.249.64.0-43.249.85.255? Or only by mask, like 43.249.64.0/19, which covers everything up to 43.249.95.255 and is therefore not a good fit?
Is it possible to deny range of IPs on Nginx
You can slow the speed of localhost (network) by adding delay. Use the ifconfig command to see network devices: on localhost it may be lo, and on a LAN it's eth0.

To add delay, use this command (adding a 1000ms delay on the lo network device):

```
tc qdisc add dev lo root netem delay 1000ms
```

To change the delay, use this one:

```
tc qdisc change dev lo root netem delay 1ms
```

To see the current delay:

```
tc qdisc show dev lo
```

And to remove the delay:

```
tc qdisc del dev lo root netem delay 1000ms
```
I'm developing a facebook canvas application and I want to load-test it. I'm aware of the facebook restriction on automated testing, so I simulated the graph api calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1. I'm using jmeter to load-test the application and the simulation is working ok. Now I want to simulate slow graph api responses and see how they affect my application. How can I configure nginx so that it inserts a delay into each request sent to the simulated graph.facebook.com application?
Using nginx to simulate slow response time for testing purposes
This one always does the job for me... https://github.com/petewarden/ParallelCurl
I'm using cURL to get some rank data for over 20,000 domain names that I've got stored in a database. The code I'm using is http://semlabs.co.uk/journal/object-oriented-curl-class-with-multi-threading. The array $competeRequests is 20,000 requests to the compete.com api for website ranks. This is an example request: http://apps.compete.com/sites/stackoverflow.com/trended/rank/?apikey=xxxx&start_date=201207&end_date=201208&jsonp=

Since there are 20,000 of these requests I want to break them up into chunks, so I'm using the following code to accomplish that:

```php
foreach(array_chunk($competeRequests, 1000) as $requests) {
    foreach($requests as $request) {
        $curl->addSession( $request, $opts );
    }
}
```

This works great for sending the requests in batches of 1,000; however, the script takes too long to execute. I've increased the max_execution_time to over 10 minutes. Is there a way to send 1,000 requests from my array, then parse the results, then output a status update, then continue with the next 1,000 until the array is empty? As of now the screen just stays white the entire time the script is executing, which can be over 10 minutes.
cURL Multi Threading with PHP
No, you do not. They are not the same kind of compression. When you run rake assets:precompile, all you're really doing is joining a bunch of files into one file and dumping it to the disk. Actually, according to the official documentation, it is two files:

"When files are precompiled, Sprockets also creates a gzipped (.gz) version of your assets. Web servers are typically configured to use a moderate compression ratio as a compromise, but since precompilation happens once, Sprockets uses the maximum compression ratio, thus reducing the size of the data transfer to the minimum. On the other hand, web servers can be configured to serve compressed content directly from disk, rather than deflating non-compressed files themselves."

This is important for you, because it allows you to use gzip if you want, but it does not force you to do so. Gzip compression, which is real compression (not just concatenating files), reduces the amount of data you have to transfer, but at the expense of processor power (compressing and decompressing). It is likely to fairly dramatically improve your site, depending on page sizes and your (and your users') hardware.
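If you do want nginx to serve those precompiled .gz files directly from disk, a minimal sketch (this assumes nginx was built with ngx_http_gzip_static_module, which is standard in most distro packages):

```
location ~ ^/assets/ {
    # Serve the precompiled .gz file when the client accepts gzip
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}
```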
Do I have to configure nginx to compress assets (gzip set to on) if I have compressed rails assets with rake assets:precompile? I mean, does it make sense or not? Will performance be better or worse? Thank you!
Compressing rails assets and nginx gzip
The issue stems from the line:

```
proxy_set_header Host $host;
```

Your web server (WEBrick) in turn is including this when issuing the redirect response. You can change it to include the non-standard port:

```
proxy_set_header Host $host:$server_port;
```

which should resolve this.
I have nginx configured to proxy https traffic to an http server running on the same machine. Everything works fine when I configure nginx to listen on / proxy from https port 443. But I really want to listen on a non-standard port. When I configure a non-standard port, nginx receives the request and sends it to my http server, as it should, but the server is responding with an HTTP redirect back to the browser that tells it to redirect to 'https://server.com/someurl'. I mean, the redirect url looks good except it's missing the correct port. Am I missing an HTTP header that I need to be setting in the proxy?

Specifically I'm running an http instance of Tracks (http://getontracks.org), if it matters.

My (working on standard port) nginx server configuration:

```
location / {
    proxy_pass http://localhost:50000;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}
```
Nginx proxy https to http on non standard port?
Apache's .htaccess provides flexible configuration. This allows users on a shared host to customize certain settings of an apache server without having to alter the core apache configs. It is the standard server bundled in typical LAMP setups, although many services use other web servers in conjunction with it (for static files, video streaming, etc.). Since Apache is popular, it's easy to find a solution to any problem. Other than that, other solutions would probably be better.
Apache has been the de facto standard web server for over a decade, but recent years have brought us web servers that consume less RAM and handle many more requests per second using fewer threads and asynchronous i/o. In my opinion, I also find the configuration of these servers to be more straightforward and minimal.Why do people use Apache when asynchronous servers are so much more lightweight? Is there any clear benefit?
Why use Apache over NGINX/Cherokee/Lighttpd?
Tornado 4.0 introduced an, on by default, same-origin check. This checks that the origin header set by the browser is the same as the host header.

The code looks like:

```python
def check_origin(self, origin):
    """Override to enable support for allowing alternate origins.

    The ``origin`` argument is the value of the ``Origin`` HTTP header,
    the url responsible for initiating this request.

    .. versionadded:: 4.0
    """
    parsed_origin = urlparse(origin)
    origin = parsed_origin.netloc
    origin = origin.lower()
    host = self.request.headers.get("Host")
    # Check to see that origin matches host directly, including ports
    return origin == host
```

In order for your proxied websocket connection to still work, you will need to override check_origin on the WebSocketHandler and whitelist the domains that you care about. Something like this:

```python
import re
from tornado import websocket

class YouConnection(websocket.WebSocketHandler):

    def check_origin(self, origin):
        return bool(re.match(r'^.*?\.mydomain\.com', origin))
```

This will let the connections coming through from info.mydomain.com get through as before.
I have an older tornado server that handles vanilla WebSocket connections. I proxy these connections, via Nginx, from wss://info.mydomain.com to wss://mydomain.com:8080 in order to get around customer proxies that block non-standard ports. After the recent upgrade to Tornado 4.0, all connections get refused with a 403. What is causing this problem and how can I fix it?
Under tornado v4+ WebSocket connections get refused with 403
Here is a basic nginx config for the case where you go with the unicorn/thin solution:

```
upstream rack_upstream {
    server 127.0.0.1:9292;
}

server {
    listen 80;
    server_name domain.tld;
    charset UTF-8;

    location / {
        proxy_pass http://rack_upstream;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|css|js)$ {
        root /path/to/static/files;
    }
}
```

If you run nginx as root you can serve your site on port 80; otherwise change listen 80 to listen SOME-AVAILABLE-PORT. Replace domain.tld with your site name. You can also add extensions of files to be served by nginx in the (jpg|jpeg|gif|png|css|js) regex, delimiting them by |.

See more at:
http://wiki.nginx.org/DirectiveIndex
http://wiki.nginx.org/ServerBlockExample
http://wiki.nginx.org/FullExample
I want to deploy a simple Ruby Rack service with NGINX. I read various things on the internet, none of which were helpful enough. Let's say I have this (in reality it's a bit more complex, but still a < 200 lines of code service):

```ruby
require 'rack'

class HelloWorld
  def call(env)
    [200, {"Content-Type" => "text/plain"}, ["Hello world!"]]
  end
end

Rack::Handler::Mongrel.run HelloWorld, Port: 9292
```

I'd like to know what would be the best way to deploy this with NGINX. Maybe FCGI or something else?
How to deploy Ruby Rack app with NGINX
Are the errors coming from your backend? You may need to add proxy_intercept_errors on; alongside your proxy_pass.
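A minimal sketch of what that change looks like in the vhost from the question (everything else unchanged):

```
location / {
    proxy_pass http://127.0.0.1:8001;
    # Let nginx's error_page rules handle 4xx/5xx responses from the backend
    proxy_intercept_errors on;
}
```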
I have the following vhost entry:

```
server {
    listen 80;
    server_name example.com www.example.com;
    #access_log /var/log/nginx/nginx-access.log;

    location /media/ {
        root /home/luke/django/solentcms;
    }

    location /admin/media/ {
        root /home/luke/virts/django1.25/lib/python2.7/site-packages/django/contrib/admin/media;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /home/luke/django/solentcms/404;
        allow all;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/luke/django/solentcms/404;
        allow all;
    }
}
```

However, 404s and 50x errors are still being redirected to the horrible nginx default pages. Any ideas as to why? This syntax works on one of my other servers. Cheers.
Nginx error pages not working
Do it like this where you have your secondary location:

```
location / {
    try_files $uri $uri/ =404;
    root /path/to/your/www;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

These 2 parameters are the magic sauce:

```
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
```
Using nginx web server and php. nginx is working; I see 'Welcome to nginx!' but I get 'access denied' when trying to access a php page. I also installed php-fastcgi. Here is my nginx default conf:

```
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
    root /usr/share/nginx/html;
}

# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
#    proxy_pass http://127.0.0.1;
#}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
    root /usr/share/nginx/html;
    fastcgi_index index.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    include fastcgi_params;
}

# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
    deny all;
}
```

I activated security.limit_extensions = .php .php3 .php4 .php5 .html and listen = /var/run/php5-fpm.sock in /etc/php-fpm.d/www.conf, and cgi.fix_pathinfo = 0 in /etc/php5/fpm/php.ini. I restarted nginx and php5-fpm.

Thanks for helping.
access denied on nginx and php
You may use the root directive within a location block, like this:

```
server {
    server_name staging.example.com;
    root /some/other/location;

    location /siteA/ {
        root /var/www/;
    }
}
```

Then http://staging.example.com/foo.txt points to /some/other/location/foo.txt, while http://staging.example.com/siteA/foo.txt points to /var/www/siteA/foo.txt.

Note that the siteA directory is still expected to exist on the filesystem. If you want http://staging.example.com/siteA/foo.txt to point to /var/www/foo.txt, you must use the alias directive:

```
location /siteA/ {
    alias /var/www;
}
```
How can I map a URI of the form staging.example.com/siteA to a virtual server located at /var/www/siteA?

The main restriction is that I do not want to create a subdomain for siteA. All examples of nginx.conf I've seen so far rely on having a subdomain to do the mapping. Thanks
Mapping a url path to a server in nginx
```
location ~ ^/([a-zA-Z0-9\.\-]*)/(.*) {
    if ($http_referer !~ "^$1.*$") {
        return 403;
    }
}
```
Is there a way, in nginx, to allow access to a "location" only to clients with a referrer that matches the current location name? This is the scenario:

```
http://foooooo.com/bar.org/
http://foooooo.com/zeta.net/
```

etc etc. I want the contents of the bar.org location available only if the referrer is bar.org. The same goes for zeta.net. I know I can do this "statically", but there are a lot of those locations and I need to find a way to do this defining only one "dynamic" location. Sorry for my bad english.

SOLUTION

I've solved it this way:

```
location ~/([a-zA-Z0-9\.\-]*)/* {
    set $match "$1::$http_referer";
    if ($match !~* ^(.+)::http[s]*://[www]*[.]*\1.*$ ) {
        return 403;
    }
}
```
Nginx: allow access only to referrer that match location name
I managed to solve the problem by going to 127.0.0.1:80 in my browser, which brought me to a GitLab login page. I had forgotten that I had once installed GitLab but wasn't using it. After uninstalling GitLab, port 80 was no longer occupied.
I'm trying to update nginx using sudo apt-get install nginx, but it is giving me an error message related to port 80 being occupied. When I run sudo netstat -tlnp | grep 80 I get

```
tcp 0 0 0.0.0.0:80     0.0.0.0:* LISTEN 6845/nginx
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 1919/config.ru
tcp 0 0 0.0.0.0:8060   0.0.0.0:* LISTEN 6845/nginx
```

Although I wasn't able to easily understand what each column means from the --help function, I suppose that in this example 6845 is the process ID of nginx. If I try to kill it using sudo kill 6845 and run sudo netstat -tlnp | grep 80 again, I see

```
tcp 0 0 0.0.0.0:80     0.0.0.0:* LISTEN 10130/nginx
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 1919/config.ru
tcp 0 0 0.0.0.0:8060   0.0.0.0:* LISTEN 10130/nginx
```

In other words, it seems like nginx has immediately started listening on port 80 again under a different process ID. How can I stop nginx from running? (I've also tried sudo systemctl stop nginx but to no avail).
How to stop nginx from using port 80
The typical setup that we recommend is to put HAProxy in front of Jetty, configuring HAProxy to offload TLS and Jetty to speak clear-text HTTP/2.

With this setup, you get the benefits of efficient TLS offloading (done by HAProxy via OpenSSL), and you get the benefits of complete end-to-end HTTP/2 communication. In particular, the latter allows Jetty to push content via HTTP/2, something that won't be possible if the backend communication is HTTP/1.1.

Additional benefits include less resource usage, fewer conversion steps (no need to convert from HTTP/2 to HTTP/1.1 and vice versa), and the ability to fully use HTTP/2 features such as stream resetting all the way to the application. None of these benefits will work if there is a translation to HTTP/1.1 in the chain.

If Nginx is only used as a reverse proxy to Jetty, it is not adding any benefit and it is actually slowing down your system, having to convert requests to HTTP/1.1 and responses back to HTTP/2. HAProxy does not do any conversion, so it's way more efficient, and allows a full HTTP/2 stack with all the benefits that it brings with respect to HTTP/1.1.
So far all the tutorials tell me that I need to enable SSL on my server to have HTTP/2 support. In the given scenario, we have nginx in front of the backend Tomcat/Jetty server(s), and even though it would be worth enabling HTTP/2 on the backend performance-wise, the requirement to have HTTPS there as well seems to be overkill. HTTPS is not needed security-wise (only nginx is exposed), and it is a bit cumbersome from the operational perspective - we'd have to add our certificates to each of the Docker containers that run the backend servers. Isn't there a way around this that provides HTTP/2 support all the way (or at least similar performance), and is less involved to set up?
HTTP/2 behind reverse proxy
I send the real IP to django by setting a custom header:

```
proxy_set_header X-Real-IP $remote_addr;
```

Those headers are available in request.META.
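On the Django side, a minimal sketch of reading it back (the helper name is made up for illustration):

```python
def client_ip(request):
    # nginx's X-Real-IP header shows up in request.META as HTTP_X_REAL_IP;
    # fall back to the socket peer address if the header is absent.
    return request.META.get("HTTP_X_REAL_IP", request.META.get("REMOTE_ADDR"))
```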
I'm running a service on localhost at 127.0.0.1:8000 and I'm proxying it by using:

```
proxy_pass http://127.0.0.1:8000;
```

The problem is that I need to pass the user's IP address to the service. Any ideas?
How to pass the remote IP to a proxied service? - Nginx
As it turns out, the linux distro running the containerized nginx server was itself running a variation of nginx for any incoming request. Once we set client_max_body_size to 0 in the nginx configuration file that the OS itself ran, it worked.
I've deployed an on-prem instance of Nexus OSS that is reached behind a Nginx reverse proxy. On any attempt to push docker images to a repo created on the Nexus registry I'm bumping into a 413 Request Entity Too Large in the middle of the push. The nginx.conf file looks like so:

```
http {
    client_max_body_size 0;

    upstream nexus_docker {
        server nexus:1800;
    }

    server {
        server_name nexus.services.loc;

        location / {
            proxy_pass http://nexus_docker/;
            proxy_set_header Host $http_post;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

The nginx is deployed using docker, and I've successfully logged in to it using docker login. I've tried multiple other flags, such as the chunkin and such. But nothing seems to work.
Docker push nexus private repo fail, 413 Request Entity Too Large
I've found the reason and a solution.

Nginx detects if a variable is being used in proxy_pass (I don't know how it does that). If there is no variable, it resolves the hostname at startup and caches the IP address. If there is a variable, it uses a resolver (DNS server) to look up the IP at runtime. So the solution is to specify the Kube DNS server like this:

```
resolver kube-dns.kube-system.svc.cluster.local valid=5s;
set $service "service-1";
proxy_pass "http://$service.default.svc.cluster.local";
```

Note that the full local DNS name of the service must be used, which you can get by running nslookup service-1.
I'm running Nginx on Kubernetes. When I use the following proxy_pass directive it works as expected:

```
proxy_pass "http://service-1.default";
```

However, the following does not work:

```
set $service "service-1";
proxy_pass "http://$service.default";
```

I get an error saying no resolver defined to resolve service-1.default. As far as I can tell, proxy_pass is receiving the exact same string, so why is it behaving differently? I need to use a variable because I'm dynamically getting the service name from the URL using a regex.
Nginx proxy_pass directive string interpolation
Replace this:

```
location /api/v1/ {
    try_files $uri $uri/ /apiv1.php?$args;
}
```

with the following inside your server block:

```
rewrite ^/api/v1/([^/]+)/([^/]+)/?$ /apiv1.php?class=$1&method=$2? last;
```

Create a php file called apiv1.php and place it in the root directory of your web server, echoing the two captured parameters, along these lines:

```php
<?php
$class  = $_GET['class'];
$method = $_GET['method'];
echo $class;
echo '<br>';
echo $method;
```

Test by visiting the following link in your browser: http://myServer/api/v1/members/getInfo
I am a beginner with nginx and php, so please excuse my basic question. For a RESTful based API (nginx + php) I would need some help with nginx configuration. Here is the relevant snippet of the nginx configuration (as suggested here) for redirecting all /api/v1/* requests to my apiv1.php script:

```
server {
    server_name myServer;
    root /usr/share/nginx/html;

    location /api/v1/ {
        try_files $uri $uri/ /apiv1.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
```

Now the issue is that when I type http://myServer//api/v1/resource/GetInfo in my browser, the apiv1.php script doesn't seem to receive the "resource/GetInfo". Actually, $_GET and $_REQUEST are empty, but $_SERVER looks OK!

In my /etc/php5/fpm/php.ini, the following relevant config is enabled:

```
request_order = "GP"
variables_order = "GPCS"
register_argc_argv = Off
auto_globals_jit = On
```

Do you maybe know why the php $_GET and $_REQUEST are empty? Is this related to my php configuration only?

Best regards, M.
nginx configuration for a RESTful API
You can call them separately:

```sh
#!/bin/bash
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose-mongo.yml up -d
```

Or combine both nginx and mongo services in the same docker-compose.yml.
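A third option worth knowing (not mentioned in the answer, but a standard docker-compose feature): pass several -f flags to one invocation, and compose merges the files into a single project:

```sh
#!/bin/bash
# Both files are merged; the services share one project and default network
docker-compose -f docker-compose.yml -f docker-compose-mongo.yml up -d
```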
I am new to scripting and require some assistance. I am building docker containers using YML files. I have YML code written to automate my web server (docker-compose.yml) and database server (docker-compose-mongo.yml). Now I want to build a bash script that will call both yml files and run them together. I was wondering what commands I need to type within my shell script to call these two yml files and run them together. I initially just used

```sh
#!/bin/bash
run docker-compose.yml
```

But the above code didn't work.

Ps. below is my yml file for the web server:

```yaml
version: "3"
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.2"
          memory: 330M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
#    networks:
#      - webnet
# networks:
#   webnet:
```
Run docker-compose from bash script file
Escaping the braces in limiting quantifiers is necessary in POSIX BRE patterns, and NGINX does not use that regex flavor. Here, you should not escape the limiting quantifier braces, but you need to tell NGINX that you are passing the braces as part of the regex pattern string. Thus, you need to enclose the whole pattern in double quotes.

Use:

```
location ~ "/img/([0-9a-fA-F]{2})([0-9a-fA-F]+)$"
```

Here is a regex demo.

Note that in the current scenario, you can also just repeat the subpattern instead of using {2}:

```
/img/([0-9a-fA-F][0-9a-fA-F])([0-9a-fA-F]+)$
```
The web project has static content in the /content/img folder. The url rule is /img/{some md5}, but on disk the file lives in /content/img/{the first two hex digits}/.

Example:

```
url:      example.com/img/fe5afe0482195afff9390692a6cc23e1
location: /www/myproject/content/img/fe/fe5afe0482195afff9390692a6cc23e1
```

This nginx location works but is not very secure (the dot in the regexp matches too much):

```
location ~ /img/(..)(.+)$ {
    alias $project_home/content/img/$1/$1$2;
    add_header Content-Type image/jpg;
}
```

The next location is more correct, but does not work:

```
location ~ /img/([0-9a-f]\{2\})([0-9a-f]+)$ {
    alias $project_home/content/img/$1/$1$2;
    add_header Content-Type image/jpg;
}
```

Help me find the error and write a more correct nginx location.
How to use Nginx Regexp in the location
I found here that one needs to add the following setting to Django's configuration in settings.py:

```
FORCE_SCRIPT_NAME = '/exampleproject'
```

This seems to rewrite all paths for nested resources.
I am building an API with Django REST framework which is served via Gunicorn and Nginx. The project "exampleproject" has to run at a subpath such as https://100.100.100.100/exampleproject (example IP address). I do not have a domain name registered for the IP.

Currently, the start page renders as expected at https://100.100.100.100/exampleproject. However, the resource path for "products" does not work. Instead of https://100.100.100.100/exampleproject/products the start page displays https://100.100.100.100/products - which does not work.

I configured the subpath for exampleproject in /etc/nginx/sites-enabled/default as follows:

```
server {
    # ...
    location /exampleproject/ {
        proxy_pass http://localhost:8007/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
```

When I manually visit https://100.100.100.100/exampleproject/products, Nginx records the following in /var/log/nginx/access.log:

```
"GET /products/ HTTP/1.1" 404 151 "-"
```
How to host a Django project in a subpath?
You can use variables from the SSI module: $date_gmt and $date_local.

```
proxy_set_header THE-TIME $date_gmt;
```

http://nginx.org/en/docs/http/ngx_http_ssi_module.html#variables
I am trying to inject the time of an nginx server into an HTTP header. I am able to add to an HTTP header, like so:

```
proxy_set_header HELLO-WORLD 'something';
```

But now, I want to be able to inject the time into an HTTP header, something that looks like this:

```
proxy_set_header THE-TIME $time_var;
```

Or something like that. Would that be possible?
Is there a way to get the current time in nginx?
I added:

```
proxy_read_timeout 1200;
```

to nginx.conf. This increased the timeout from the default, which fixed the problem. I probably don't need to use 1200; it's just the first value I tried.
I am running a Django app on a Linux platform with gunicorn and Nginx. I allow users to upload a CSV file (approx 2MB) which the app processes and adds to the backend database. The problem is, for large files something seems to be timing out after around 2 or 3 minutes and a page entitled "404 Not Found nginx/0.7.6" is displayed. The URL does not change, however - i.e., it remains the URL of the file upload page of my app.

The Nginx error log shows:

```
2011/09/08 13:28:05 [error] 1349#0: *303 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 213.146.112.122, server: _, request: "POST /app/import_csv/ HTTP/1.1", upstream:
```

Any ideas what's happening? How can I increase this timeout?
Timeout when uploading a large file?
You just need a trailing slash for proxy_pass:

```
proxy_pass http://app_name/;
```

It helps you cut the "appname" prefix, so the config looks like:

```
upstream app_name {
    server unix:/path/to/socket/file.sock fail_timeout=10;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /webapps/;
    server_name my_hostname.com;

    location /appname/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_name/;
    }
}
```
I have the following problem: I'm trying to put a Django app with a gunicorn server on my VPS running Nginx. My nginx config looks like this:

```
upstream app_name {
    server unix:/path/to/socket/file.sock fail_timeout=10;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /webapps/;
    server_name my_hostname.com;

    location / {
        proxy_set_header Host $http_host;
    }

    location /appname/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_name;
    }
}
```

However, when I navigate to my_server.com/appname/ I constantly get a 404 error. I'm still new to Nginx; could someone point me in the right direction on how to set the proxy_pass for the /appname/ path? I should point out that when the location for /appname/ is replaced with /, the django app runs fine.
nginx location path with proxy_pass
I found a solution to this myself. At least for people using Ubuntu, there is a supported working version of nginx that supports Lua and many other things; you just have to do:

```
apt-get install nginx-extras
```

Instead of the regular:

```
apt-get install nginx
```

Extras is NOT an add-on package for nginx; it is a fully compiled server. You can go here to see other versions you might prefer:

http://www.cambus.net/nginx-packages-in-debian-stable/
https://wiki.debian.org/Nginx

Hope this helps you as much as it did me.
So it might just be me not being super bright, or being super unlucky when it comes to Google searches, but I can't actually find any way to run Lua in the Nginx config without having to recompile the entire server with LuaJIT. The thing is, we would like to make tiny edits to some variables without having to recompile our server on every build, which could be as often as several times a week; less complex = less stuff for us to fix. So my question is: is there a way to run Lua in Nginx configs without having to recompile the entire thing? We would like to keep Nginx updated by the system and not have it be another thing we have to maintain. I found Nginx-extras while searching for Lua, but I can't find anything confirming whether or not it enables the ability to use Lua.
Running Lua in Nginx config?
It's a bit hidden in the docs, but you can use any of the common variables. This includes $scheme.
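A minimal sketch of a log format that records it, declared in the http block (the format name is arbitrary; the rest mirrors nginx's standard "combined" format with $scheme appended):

```
log_format combined_scheme '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent" $scheme';

access_log /var/log/nginx/access.log combined_scheme;
```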
I was surprised that I couldn't find any information on logging the request protocol in an nginx access log. I usually share a server block for both HTTP (80) and HTTPS (443) traffic, and use a combined access log for both. I'd like each line in the access log to indicate whether the request was over HTTP or HTTPS. Is this possible, or do I need to use a separate server block for HTTPS and specify a separate access log for SSL?
Logging the request protocol in nginx?
Instead of running let's encrypt on the host, you should do everything inside Docker. And best of all, there is already a solution for that: https://hub.docker.com/r/nginxproxy/acme-companion

This enables the proxy to automatically obtain and renew certificates.
I have a Nginx server running on Docker on a Ubuntu host and I wanted to integrate Letsencrypt certificates on it. As I had the Nginx image already created with all the conf setup, after reading different articles I decided to install Letsencrypt on the host and mount the /etc/letsencrypt/ folder in a shared volume in the Nginx container. The problem I had is that symlinks belong to the file system itself and cannot be resolved by the container, which makes sense.

My question is then: what would be the best way to approach this? Should I add all the Letsencrypt setup inside my Nginx custom Dockerfile to get it up and running? Is it possible though to create a separate container which only has Letsencrypt and share a volume from there? Or is it possible somehow to resolve this via changes to my current solution?

Note that at the moment I'm creating a copy of the certificates and pasting them into the volume, which is fine, but I want to automate the renewal (using certbot renew --dry-run).

Any help is much appreciated!
Letsencrypt + Docker - the best way to handle symlink? [closed]
OK, it took long, but I found it all out:

- The Server Error (500) response comes from Django's django.views.defaults.server_error (if no 500.html template exists).
- The Internal Server Error from the bonus question comes from gunicorn's gunicorn.workers.base.handle_error.
- nginx logs the 500 error in the access log file, not the error log file; presumably because it was not nginx itself that failed.
- For /fail_now, gunicorn will also log the problem in the access log, not the error log; again presumably because gunicorn as such has not failed, only the application has.
- My original problem did actually appear in the gunicorn error log, but I had never searched for it there, because I had introduced the log file only freshly (I had relied on Docker logs output before, which is pretty confusing) and assumed it would be better to use the very explicit InternalErrorView for initial debugging. (This was an idea that was wrong in an interesting way.)

However, my actual programming error involved sending a response with a Content-Disposition header (generated in Django code) like this: attachment; filename="dag-wönnegården.pdf". The special characters are apparently capable of making gunicorn stumble when it processes this response.

Writing the question helped me considerably with diagnosing this situation. Now if this response helps somebody else, the StackOverflow magic has worked once again.
I have a Django app running on a gunicorn server with an nginx up front. I need to diagnose a production failure with an HTTP 500 outcome, but the error log files do not contain the information I would expect. Thusly:

- gunicorn has the setting errorlog = "/somepath/gunicorn-errors.log"
- nginx has the setting error_log /somepath/nginx-errors.log;
- My app has an InternalErrorView, the dispatch of which does an unconditional raise Exception("Just for testing.")
- That view is mapped to URL /fail_now
- I have not modified handler500

When I run my app with DEBUG=True and have my browser request /fail_now, I see the usual Django error screen alright, including the "Just for testing." message. Fine.

When I run my app with DEBUG=False, I get a response that consists merely of Server Error (500), as expected. Fine.

However, when I look into gunicorn-errors.log, there is no entry for this HTTP 500 event at all. Why? How can I get it? I would like to get a traceback.

Likewise in nginx-errors.log: no trace of a 500 or the /fail_now URL. Why?

Bonus question: When I compare this to my original production problem, I am getting a different response there: a 9-line document with Internal Server Error as the central message. Why?

Bonus question 2: When I copy my database contents to my staging server (which is identical in configuration to the production server) and set DEBUG=True in Django there, /fail_now works as expected, but my original problem still shows up as Internal Server Error. WTF?
Django with gunicorn and nginx: HTTP 500 not appearing in log files
Yes, you need to restart the uWSGI process. Python keeps the compiled code in memory, so it won't get re-read until the process restarts. The django development server (manage.py runserver) actively monitors files for changes, but that won't happen by default with other servers. If you want to enable automatic reloading in uWSGI, the touch-reload and py-auto-reload uWSGI arguments might help.
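A minimal sketch of what that could look like in a uWSGI ini file (the module name and paths are placeholders; note the second option is spelled py-autoreload in current uWSGI releases):

```ini
[uwsgi]
module = myproject.wsgi:application
# Reload workers whenever this file's mtime changes, e.g. via `touch /tmp/reload-me`
touch-reload = /tmp/reload-me
# Scan Python modules every 2 seconds and reload on change (development use only)
py-autoreload = 2
```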
I'm working on a Django webapp that's running under nginx and uWSGI. When I deploy new Django code (e.g., settings.py), do I need to restart uWSGI? If so, why?Background: I had a scenario where I updated settings.py and some other code and deployed it. I did not see the changes in the webapp behavior until I restarted uWSGI.
Does uWSGI need to be restarted when Django code changes?
To detect if the mod_xsendfile apache module is installed, you can try this code:

```php
// $filepath is a hypothetical variable: the file you intend to serve
if (function_exists('apache_get_modules') && in_array('mod_xsendfile', apache_get_modules())) {
    header("X-Sendfile: " . $filepath);
}
```

But this code only checks whether the module is installed, which can cause errors if it's installed but configured wrongly.

Another possible way to do this is to set up a server-wide variable through Apache's .htaccess:

```
<IfModule mod_xsendfile.c>
    XSendFile On
    XSendFileAllowAbove On
    SetEnv MOD_X_SENDFILE_ENABLED 1
</IfModule>
```

and check it from php code:

```php
if ($_SERVER['MOD_X_SENDFILE_ENABLED']) {
    // Header(...)
}
```

The common idea is the same for nginx: just pass the value of a status variable to the backend via an HTTP header or CGI/FastCGI variable.
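For the nginx side specifically, the usual pattern looks roughly like this (a sketch; the /protected/ prefix, file paths, and variable names are made-up examples). PHP emits an internal redirect header after doing its access checks:

```php
// In download.php, after verifying the request against the database.
// $filename is hypothetical and assumed to be validated elsewhere.
header('X-Accel-Redirect: /protected/' . $filename);
header('Content-Type: application/octet-stream');
exit;
```

and nginx maps that internal URI to the real file, refusing direct client access:

```
location /protected/ {
    internal;               # only reachable via X-Accel-Redirect
    alias /var/app/files/;  # real storage directory
}
```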
About the application: I am working on an e-commerce application in PHP. To keep URLs secure, product download links are kept behind PHP. There is a file, say download.php, which accepts a few parameters via GET and verifies them against a database. If all goes well, it serves the file using the readfile() function in PHP.

About the problem: Now the problem comes when the file to be passed to readfile() is larger than the memory limit set in php.ini. As this application will be used by many users on shared hosting, we cannot rely on altering php.ini settings.

In our effort to find workarounds, I first thought we could go for fread() calls in a while loop, but it seems that will impose problems as well, as highlighted in "Downloading large files reliably in PHP".

So my best option is to detect/check whether the server supports X-Accel-Redirect (in case of Nginx) / X-Sendfile (in case of Apache). If the server supports X-Accel-Redirect / X-Sendfile, I can use them, and in the else block I can make the system admin aware of the memory limit enforced by php.ini.

Ideally, I want to use server-side support like X-Accel-Redirect / X-Sendfile wherever possible, and if that doesn't work, I would like to have fallback code to read files without readfile(). I am not yet sure how readfile() and fread() in a while loop are different, but it seems the while loop will create problems, again, as suggested in "Downloading large files reliably in PHP".

Hope to get some help, suggestions, code, guidance. Thanks for reading.
How to detect X-Accel-Redirect (Nginx) / X-Sendfile (Apache) support in PHP?
Okay, I figured this out. Following the Digital Ocean guide for how to configure nginx, I was setting client_max_body_size 100M in the file /etc/nginx/nginx.conf. And for sure, changing things there definitely had an impact on what the server did, especially when I would mess something up in that file and the server stopped functioning.

However, I had forgotten that in "Deploying a Rails App on Ubuntu 14.04 with Capistrano, Nginx, and Puma", which was my main resource for setting up my server, it shows that these parameters get set up not in the above nginx.conf but rather in the file ~/my_app/config/nginx.conf inside my rails app, whose setup parameters already included the statement client_max_body_size 10M;

So I changed that statement as well as the one in the /etc/nginx/nginx.conf file on production. Voila! Now I can upload files up to 100M.
I'm using Rails and Nginx on Digital ocean and I've been trying to upload a 17.6 MB file and I'm still getting413 Request Entity Too Largeeven after settingclient_max_body_size 100Min my /etc/nginx/nginx.conf file.Here's the snippet from the file:http { ## # Basic Settings ## client_max_body_size 100M; sendfile on; tcp_nopush on; ... }After setting this I've usedsudo service nginx reload. When that didn't work I've even done a full reboot usingsudo shutdown -r nowand thencap production puma:startfrom my local machine. I've also triedclient_max_body_size 0;which, from what I understand should disable checking of file sizes entirely. Nothing works. Plus, in getting to this point, I've made some mistakes in the location of theclient_max_body_sizestatement and in those situations the server has failed to start correctly giving a "Something went wrong" error, so I'm pretty sure the changes I'm making are to the right file.Is there something I might be missing? Is there another place I'm missing to configure this? Is there something I'm missing in the way I'm currently configuring it? Any pointers would be greatly appreciated.
Still getting 413 Request Entity Too Large even after client_max_body_size 100M
In the nginx configuration (inside the location block), specify this:

proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;

The proxy_redirect directive tells nginx that, if the backend returns an HTTP redirect, it should leave it as is. By default, nginx assumes the backend is stupid and tries to be smart: if the backend returns an HTTP redirect that says "redirect to http://localhost:8000/somewhere", nginx replaces it with something similar to "http://yourowndomain.com/somewhere". But Django isn't stupid (or it can be configured to not be stupid).

Django does not know whether the request was made through HTTPS or plain HTTP; nginx knows that, but the request it subsequently makes to the Django backend is always plain HTTP. We tell nginx to pass this information with the X-Forwarded-Proto HTTP header, so that related Django functionality such as request.is_secure() works properly. You will also need to set SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') in your settings.py.
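Put together, a minimal sketch (the backend address assumes Gunicorn listening on its default local port; adjust to your setup):

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# settings.py
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')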
My server runs Django + Gunicorn + nginx. I have added an SSL certificate and configured nginx to redirect http to https. When an https request is received, nginx passes it to Gunicorn as http. My program sometimes returns HttpResponseRedirect, so the browser gets a redirect response and re-requests over http, and nginx then redirects to https. How can I avoid this? How can I configure the server so that the first redirection points directly to an https URL?
Django's HttpResponseRedirect is http instead of https
First, let's have a quick overview of what an Ingress Controller is in Kubernetes.

Ingress Controller: a controller that responds to changes in Ingress rules and changes its internal configuration accordingly.

So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these Ingress configuration changes and configure their own running server instances to route traffic as specified in the targeted Ingress rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.

For the most part, Nginx comes with more batteries included for serving web content, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.

The available configurations for HAProxy can be found here, and the available configuration methods for the Nginx ingress controller are here.

I would add that HAProxy is capable of doing TLS/SSL offloading (SSL or TLS termination) for non-HTTP protocols such as MQTT, Redis and FTP type workloads.

The differences go deeper than this, however, and these discussions go into more detail on them:
https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
HAProxy vs. Nginx
What is the difference between Nginx ingress controller and HAProxy load balancer in kubernetes?
Nginx ingress controller vs HAProxy load balancer
You could mount your custom nginx.conf into the container in development via e.g. --volume ./nginx/nginx.conf:/etc/nginx/nginx.conf and simply omit this parameter to docker run in production.

If using docker-compose, the two options I would recommend are:

- Employ the limited support for environment variable interpolation and add something like the following under volumes in your container definition: ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf
- Use a separate YAML file for production overrides.
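A minimal docker-compose sketch of the interpolation option (APP_ENV and the file names are assumptions; set APP_ENV=development or APP_ENV=production in the shell or an .env file):

services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # resolves to nginx.development.conf or nginx.production.conf
      - ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf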
I'm just getting started with Docker. With the official NGINX image on my OSX development machine (with Docker Machine as the Docker host) I ran up against the bug with sendfile and VirtualBox, which means the server fails to show changes I make to files. The workaround for this is to use a modified nginx.conf file that turns off sendfile. This guy's solution has an instruction in the Dockerfile to copy a customised conf file into the container. Alternatively, this guy maps the NGINX configuration to a new folder with a modified conf file. This kind of thing works OK locally. But what if I don't need this modification on my cloud host? How should I handle this and other differences when it comes to deployment?
Docker: how to manage development and production settings?
1.) Install nginx.

2.) Proxy-forward nginx to your node port. See Digital Ocean's How-To.

nginx.conf:

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

3.) Start app.js with node in your dist folder with the correct variables:

$ export NODE_ENV=production; export PORT=9000; node dist/server/app.js

4.) Browse to the hostname configured in nginx in step 2.

In case you get many 404s, you are most likely using angular.js in HTML5 mode and need to re-wire your routes to serve static angular.js content. I described this, and how to tackle many other bugs you may face, in my blog article: "Continous Integration with Angular Fullstack".
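Since the question mentions forever: to keep the node process running after you log out, one hedged option is the forever process manager (assumes npm is installed and global installs are allowed):

npm install -g forever
NODE_ENV=production PORT=9000 forever start dist/server/app.js
forever list                      # inspect running scripts
forever stop dist/server/app.js   # stop it again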
I want to deploy a simple angular projet made with angular fullstack.https://github.com/DaftMonk/generator-angular-fullstackI tried :yo angular-fullstack test grunt buildThen, in dist I got 2 folders: server and public.how to deploy them on a linux server ?with forever/node and nginx ??? I want to self host my project.thanks
how to deploy yeoman angular-fullstack project?
This tutorial looks good, but it's a bit brief.

I have Apache installed. If you don't: sudo apt-get install apache2.

cd /usr/lib/cgi-bin
# Make a file and let everyone execute it
sudo touch test.sh && sudo chmod a+x test.sh

Then put some code in the file. For example:

#!/bin/bash
# get today's date
OUTPUT="$(date)"
# You must add the following two lines before
# outputting data to the web browser from a shell script
echo "Content-type: text/html"
echo ""
echo "Demo"
echo "Today is $OUTPUT"
echo "Current directory is $(pwd)"
echo "Shell Script name is $0"

And finally open your browser and type http://localhost/cgi-bin/test.sh. If all goes well (as it did for me) you should see:

Today is Sun Dec 4 ...
Current directory is /usr/lib/cgi-bin
Shell Script name is /usr/lib/cgi-bin/test.sh
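If you would rather serve the same script with nginx, note that nginx has no built-in CGI support; a commonly used bridge is fcgiwrap. A sketch (the socket path is the Debian/Ubuntu package default and may differ on other systems):

# sudo apt-get install nginx fcgiwrap
location /cgi-bin/ {
    gzip off;
    root /usr/lib;    # so /cgi-bin/test.sh maps to /usr/lib/cgi-bin/test.sh
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}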
I am running Ubuntu 11 and I would like to setup a simple webserver that responds to an http request by calling a local script with the GET or POST parameters. This script (already written) does some stuff and creates a file. This file should be made available at a URL, and the webserver should then make an http request to another server telling it to download the created file.How would I go about setting this up? I'm not a total beginner with linux, but I wouldn't say I know it well either.What webserver should I use? How do I give permission for the script to access local resources to create the file in question? I'm not too concerned with security or anything, this is for a personal experiment (I have control over all the computers involved). I've used apache before, but I've never set it up.Any help would be appreciated..
How do I call a local shell script from a web server?
You can create multiple virtual hosts that allow you to host multiple sites, independent of each other. More info here: http://wiki.nginx.org/VirtualHostExample. There is a bit more detailed info here as well on how to set up virtual hosts: http://projects.unbit.it/uwsgi/wiki/RunOnNginx#VirtualHosting.
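On the uWSGI side, instead of copying init scripts per site, one hedged approach is Emperor mode: a single supervisor spawns one uWSGI instance ("vassal") per ini file in a directory (paths and module names below are illustrative):

# /etc/uwsgi/vassals/site1.ini
[uwsgi]
chdir = /srv/site1
module = site1_project.wsgi:application
socket = /tmp/site1.sock

# /etc/uwsgi/vassals/site2.ini -- same shape, different chdir/module/socket

# start the emperor watching that directory
uwsgi --emperor /etc/uwsgi/vassals

Each nginx virtual host then points uwsgi_pass at its own site's socket.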
Is it possible to run multiple Django sites on the same server using Nginx and uWSGI?I suppose it's necessary to run multiple uWSGI instances (one for each site). I copied /etc/init.d/uwsgi to uwsgi2 and changed the port number. But, I got the following error:# /etc/init.d/uwsgi2 start Starting uwsgi: /usr/bin/uwsgi already running.How is it possible to run multiple uWSGI instances?Thanks
How to run multiple Django sites on Nginx and uWSGI?
You can try nginx's third-party Strip module: http://wiki.nginx.org/NginxHttpStripModule

Any module you use is just going to remove whitespace. You'll get a better result by using a minifier that understands whatever you're minifying, e.g. Google's Closure javascript compiler. It's smart enough to know what a variable is and make its name shorter; a whitespace remover can't do that.

I'd recommend minifying offline unless your site is very low traffic. But if you want to minify in your live environment, I recommend using nginx's proxy cache. (Sorry, but I don't have enough reputation to post more than one link.)

Or you can look into memcached for an in-memory cache, or Redis for the same thing but with disk backup.
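A hedged sketch of the proxy-cache idea (zone name, sizes, and the backend that produces the minified output are all placeholders): nginx asks the backend once, then serves the cached minified asset:

http {
    proxy_cache_path /var/cache/nginx keys_zone=assets:10m max_size=100m;

    server {
        location ~* \.(js|css)$ {
            proxy_pass http://127.0.0.1:8080;   # backend that emits minified assets
            proxy_cache assets;
            proxy_cache_valid 200 1h;           # re-minify at most hourly
        }
    }
}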
Is there any way to automatically minify static content and then serve it from a cache automatically, similar to how mod_compress/mod_deflate work? Preferably something I could use in combination with compression (since compression has a more noticeable benefit). My preference is something that works with lighttpd, but I haven't been able to find anything, so any web server that can do it would be interesting.
Server-side auto-minify?
First, check your Angular app's <base href> tag: it needs to match the app's new location. So, for example, if you're hosting your app through nginx at https://localhost/dev/, your base tag will need to be:

<base href="/dev/">

You can find this tag in your app's index.html.

Second, nginx won't automatically proxy all the traffic that ng serve uses for live-reload functionality. You can proxy this extra traffic by adding this to your nginx.conf:

location ^~ /sockjs-node/ {
    proxy_pass http://127.0.0.1:4200;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
}

This assumes that ng serve hosts your app on port 4200 (which is the default).
I've been trying to research Nginx to configure a proxy with Angular 5 ng serve on localhost:4200; however, I've only come up with results for serving a project that's already been built. The configuration I've found from this research "somewhat" works, but results in a white page that isn't loading any data:

dev:12 GET http://192.168.1.84/inline.bundle.js net::ERR_ABORTED
dev:12 GET http://192.168.1.84/polyfills.bundle.js net::ERR_ABORTED
dev:12 GET http://192.168.1.84/styles.bundle.js net::ERR_ABORTED
dev:12 GET http://192.168.1.84/vendor.bundle.js net::ERR_ABORTED
dev:12 GET http://192.168.1.84/main.bundle.js net::ERR_ABORTED

It appears that it can't see the files served by ng serve, but is at least reaching the index.html page for the project. This is the configuration I am currently using:

location /dev {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    rewrite ^/(.*)$ /$1 break;
    proxy_set_header Host localhost;
    proxy_pass http://localhost:4200;
}

What should I add to the /dev config?
How can I properly configure nginx to work with NG Serve and Angular CLI?
Finally, I got it working! I had been trying various things for about a week... The 301 redirects were caused by nginx actually trying to redirect the browser to /cable/ instead of /cable. This is because I had specified /cable/ instead of /cable in the location stanza! I got the idea from this answer.
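For reference, a sketch of the corrected stanza (upstream name as in the question's attempts; note /cable with no trailing slash):

location /cable {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}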
I am trying to deploy an Action Cable -enabled-application to a VPS using Capistrano. I am using Puma, Nginx, and Redis (for Cable). After a couple hurdles, I was able to get it working in a local developement environment. I'm using the default in-process /cable URL. But, when I try deploying it to the VPS, I keep getting these two errors in the JS-log:Establishing connection to host ws://{server-ip}/cable failed. Connection to host ws://{server-ip}/cable was interrupted while loading the page.And in my app-specificnginx.error.logI'm getting these messages:2016/03/10 16:40:34 [info] 14473#0: *22 client 90.27.197.34 closed keepalive connectionTurning onActionCable.startDebugging()in the JS-prompt shows nothing of interest. Just ConnectionMonitor trying to reopen the connection indefinitely. I'm also getting a load of 301: Moved permanently -requests for /cable in my network monitor.Things I've tried:Using theasyncadapter instead of Redis. (This is what is used in the developement env)Adding something like this to my/etc/nginx/sites-enabled/{app-name}:location /cable/ { proxy_pass http://puma; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; }SettingRails.application.config.action_cable.allowed_request_originsto the proper host (tried "http://{server-ip}" and "ws://{server-ip}")Turning onRails.application.config.action_cable.disable_request_forgery_protectionNo luck. What is causing the issue?$ rails -v Rails 5.0.0.beta3Please inform me of any additional details that may be useful.
Rails 5 Action Cable deployment with Nginx, Puma & Redis
How to configure nginx to work with a Java server; in this example, Jetty is used.

Edit /etc/nginx/sites-available/hostname:

server {
    listen 80;
    server_name hostname.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }
}

Consider disabling external access to port 8080:

/sbin/iptables -A INPUT -p tcp -i eth0 --dport 8080 -j REJECT --reject-with tcp-reset

An example Jetty configuration (jetty.xml) might resemble the following (exact element names depend on your Jetty version; the buffer and header sizes match the stock file):

<New id="httpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
  <Set name="secureScheme">https</Set>
  <Set name="outputBufferSize">65536</Set>
  <Set name="requestHeaderSize">8192</Set>
  <Set name="responseHeaderSize">8192</Set>
</New>
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.ServerConnector">
      <Arg name="server"><Ref refid="Server"/></Arg>
      <Set name="host">127.0.0.1</Set>
      <Set name="port">8080</Set>
    </New>
  </Arg>
</Call>

This will cause Jetty to listen on localhost:8080 and nginx to redirect requests from domain.com:80 to the Jetty server.
I've been trying to set up nginx as proxy to jetty. I want to do something as explained inthis answerbut for Jetty not ring.I've created a.warand I placed it in~/jetty/jetty-dist/webapps/web_test-0.1.0-SNAPSHOT-standalone.warSay, I want to use the domain example.com with ip address 198.51.100.0.I've also copied/etc/nginx/sites-available/defaultinto the fileexample.comand I have it in the same directory.Can you help me configure nginx as proxy to jetty in my case? I know there are many references online about how to do this but they are all different and I got confused.What specific changes do I need to make in nginx? What changes do I need to make in jetty.xml? Do I need to make any other changes? Will my app be served at example.com/index.html?Current state of nginx is copied below:upstream jetty { server 127.0.0.1:8080 fail_timeout=0 } server { listen 80 default_server; #listen [::]:80 default_server ipv6only=on; root /usr/share/nginx/html; index index.html index.htm; server_name localhost; location / { proxy_pass http://jetty try_files $uri $uri/ =404; }EDITI was wondering if I need to use Jetty at all. Inthis setuphe just uses ring, which seems super easy? What do I gain by using jetty?
How do I configure nginx as proxy to jetty?
You don't even need a playbook to do this:

Restarting nginx:

ansible your_host -m service -a 'name=nginx state=restarted'

(see the service module)

Kill a process by process id:

ansible your_host -m command -a 'kill -TERM your_pid'

(adjust the signal, and use pkill/killall if you need to match a name; see the command module)

However, I wouldn't say that ansible shines if you're just using it for ad-hoc commands.

If you need a tutorial to get you started with playbooks, there is one over here.

Now if you want to put these (the official name for service, command, etc. is modules) in a playbook (let's call it playbook.yml), you can just:

- hosts: webappserver
  tasks:
    - name: Stops whatever
      command: kill -TERM your_pid
      notify:
        - Restart nginx
    - name: Another task
      command: echo "Do whatever you want to"
  handlers:
    - name: Restart nginx
      service: name=nginx state=restarted

Create an inventory file (hosts) containing:

# webappserver should resolve!
webappserver

Invoke with:

ansible-playbook playbook.yml -i hosts

and it should work.

This is all very basic and can be grasped easily by reading the docs or any tutorial out there.
I recently dived into Ansible for one of my servers, and found it really interesting and time saving. I am running an Ubuntu dedicated server and have configured number of web applications written on Python and a few on PHP. For Python I am using uwsgi as the HTTP gateway. I have written shell scripts to start/restart a few processes in order to run the instance of a specific web application. What I have to do everytime is, connect ssh and navigate to that specific application and run the script.WHAT I NEEDI've been trying to find a way to write Ansible playbook to do all that from my personal computer with one line of command, but I have no clue how to do that. I have'nt found a very explanatory (for a beginner) documentation or help on the internet.QUESTIONHow can I restart Nginx with Ansible playbook? How can I kill a process by process id?
Ansible Playbook to run Shell commands
If Django is accessed using uwsgi_pass, then in the appropriate location(s):

# All request headers should be passed on by default
# Make sure the "Token" response header is passed to the user
uwsgi_pass_header Token;

If Django is accessed using fastcgi_pass, then in the appropriate location(s):

# All request headers should be passed on by default
# Make sure the "Token" response header is passed to the user
fastcgi_pass_header Token;

If Django is accessed using proxy_pass, then in the appropriate location(s):

# All request headers should be passed on by default,
# but we can make sure the "Token" request header is passed to Django
proxy_set_header Token $http_token;
# Make sure the "Token" response header is passed to the user
proxy_pass_header Token;

These should help eliminate the possibility that nginx is not passing the header along.
Disclaimer: I'm working on a project where there is a "huge" webapp that has an API for mobiles, so changing the API is not an option. This application was developed some time ago and several developers have worked on it.

Having said that, the problem is this: in the mobile API of this site (just views that return JSON data), the code looks for a token, but does so in the headers of the request:

token = request.META.get('HTTP_TOKEN')

When I test this API locally it works fine, but in production it doesn't, so I tried to figure out what's going on and found this: Django converts headers, even custom headers, to keys in request.META. I used urllib2 and requests to test the API, and the problem on the production server is that request.META never has a key called HTTP_TOKEN. Doing a little debugging, I seriously think the problem is the way we serve the Django application.

We are using django 1.3, nginx, gunicorn, virtualenvwrapper, python 2.7.

My prime suspect is nginx. I think that in some way nginx receives the header but doesn't forward it to Django. I tried to do some research about this, but I only found info about security headers and custom headers from nginx; I couldn't find documentation or anything about telling nginx to allow that header and not remove it.

I need help here. The first thing is to test whether nginx receives the header, but I know only a little about nginx and I don't know how to tell it to log the headers of requests.

Thanks

Update: nginx conf file
Missing custom header with django, nginx and gunicorn
I solved my problem: the snippet http://flask.pocoo.org/snippets/35/ does work. I had stupidly put absolute URLs in my templates; I changed them to url_for() and now it works like a charm.
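In other words, the fix was in the templates, not the proxy configuration. For example (the endpoint name here is hypothetical):

<!-- before: a hard-coded absolute URL that ignores the /webapp prefix -->
<a href="/task/delete">Delete</a>
<!-- after: url_for() respects the SCRIPT_NAME set by the proxy snippet -->
<a href="{{ url_for('delete_task') }}">Delete</a>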
I have a Flask app running with gunicorn onhttp://127.0.0.1:4000:gunicorn -b 127.0.0.1:4000 webapp:appNow I would like to use nginx as a reverse proxy and forwardhttp://myserver.com/webapptohttp://127.0.0.1:4000in a way that everyhttp://myserver.com/webapp/subpathgoes tohttp://127.0.0.1:4000/subpath.The proxy/redirect works nicely when not using a subpath:upstream app { server 127.0.0.1:4000 fail_timeout=0; } server { listen 80 default; client_max_body_size 4G; server_name _; location / { proxy_pass http://app; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; } }How can I setlocation /webapp { #go to my gunicorn app, translate URLs nicely }This tip from the Flask developers didn't work:http://flask.pocoo.org/snippets/35/SOLVED: The snippethttp://flask.pocoo.org/snippets/35/works! I had a few absolute URLs in my templates (e.g./task/delete) and had to change everything tourl_for().Stupid ... but now it works like expected, I have my app on 'http://myserver.com/subpath'
Proxy a Flask app running on gunicorn to a subpath in nginx
You can use the root directive inside a location block.
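A minimal sketch for the layout in the question. Keep in mind that root appends the full URI to the path, while alias replaces the matched prefix, so the two behave differently here:

# /php_www/... maps to /www/domain.com/php_www/...
location /php_www/ {
    root /www/domain.com;
}

# /page1/content1/... maps to /www/domain.com/content1/...
location /page1/content1/ {
    alias /www/domain.com/content1/;
}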
I need some help configuring nginx to load files from a different folder. Here is my config:

index index.php;

server {
    server_name domain.com;
    root /www/domain.com/www/;

    location / {
        try_files $uri $uri/ /php_www/index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index /php_www/index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }

    error_page 404 /404.html;
    error_log /var/log/nginx/error.log;
}

The problem is that /php_www/ is not located inside the root defined in nginx. I have 4 different folders that I need to do this with; here is what my folder structure looks like:

/www/domain.com/www/
/www/domain.com/php_www/
/www/domain.com/content1/
/www/domain.com/content2/

What I'm trying to do is: when a visitor goes to domain.com/page1/content1/, I want to load content from the content1 folder, for example. The reason for this is I have several git projects with separate repos... this will enable me to push certain areas of the website to production without affecting anything else. I'd also like not to have all my files/content accessible in the /www folder, so URLs can't be brute-force attacked looking for content. Hopefully this makes sense!

Working solution (pulled from this comment):

location ^~ / {
    root /www/domain.com/php_www/;
    try_files $uri $uri/ /index.php;

    location ~* \.(?:php|html)$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
How do I configure nginx to load try_files from a different folder?
The nginx documentation says:

Server names are defined using the server_name directive and determine which server block is used for a given request.

That means in your case that you have to enter aridev-VirtualBox in your browser instead of localhost. To get this working you have to add aridev-VirtualBox to your local hosts file and point it to the IP of your VirtualBox PC. That entry would look something like:

192.168.1.1 aridev-VirtualBox
I installed gitlab using itsinstallation guide. Everything was OK, but when I open localhost:80 in the browser all I see it the messageWelcome to nginx!. I can't find any log file with any errors in it.I am running Ubuntu in VirtualBox. My /etc/nginx/sites-enabled/gitlab config file reads:# GITLAB # Maintainer: @randx # App Version: 3.0 upstream gitlab { server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket; } server { listen 192.168.1.1:80; # e.g., listen 192.168.1.1:80; server_name aridev-VirtualBox; # e.g., server_name source.example.com; root /home/gitlab/gitlab/public; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab unicorn) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } }
Installed gitlab, but only nginx welcome page shows
Apparently this question is still getting traffic, so I feel like I should update it. I'm no longer using the nginx ingress, so I can't verify this works. According to https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/:

The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).

The example they provide on the page would cover it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-3
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /foo/bar/bar
        backend:
          serviceName: test
          servicePort: 80
      - path: /foo/bar/[A-Z0-9]{3}
        backend:
          serviceName: test
          servicePort: 80

Original answer that no longer works:

It appears that the solution is ridiculously simple (at least with an nginx ingress controller): you just need to prepend the path with "~ ":

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: ~ /t[a-z]a
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
I'd like to use regex in the path of an Ingress rule, but I haven't been able to get it to work. For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

I tried putting /t[a-z]a for the first path, but then any path I tried that should match that regex took me to the default backend instead of the service I expected.

Note: I'm using an nginx ingress controller, which should be able to support regex.
How do I set up a Kubernetes Ingress rule with a regex path?
I found the solution. The problem is not on the uwsgi side; there is a Linux limitation: listen queues are only 128 connections long by default, so to enlarge the waiting queue you have to tune the kernel, i.e.:

echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
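Those echo commands only last until the next reboot; to make the change persistent on most distros, put the values in sysctl configuration:

# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.core.somaxconn = 3000
net.core.netdev_max_backlog = 3000

# apply without rebooting
sysctl -p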
I am trying django on nginx + uwsgi. It works very well (faster than apache mod_wsgi), but if I have more than 100 concurrent connexion ( ie : tested with ab -n 100000 -c 150http://localhost:8081/), I have some broken pipe on uwsgi logs :nginx.conf :user myuser; worker_processes 8; events { worker_connections 30000; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; upstream django { ip_hash; server unix:/home/myuser/tmp/uwsgi.sock; } server { listen 8081; server_name localhost; location / { uwsgi_pass django; include uwsgi_params; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } }uwsgi is started like that :/usr/local/bin/uwsgi -s /home/myuser/tmp/uwsgi.sock --pp /home/myuser/projects/django/est/nginx --module django_wsgi -L -l 500 -p 8And the error messages from uwsgi are :writev(): Broken pipe [plugins/python/wsgi_headers.c line 206] write(): Broken pipe [plugins/python/wsgi_subhandler.c line 235]version are : 1.0.6 for nginx and 0.9.9.2 for uwsgiDo you know how to solve these error messages ?
django with nginx + uwsgi
I figured out what the issue was. The .ebextensions folder was hidden in my file system and was not being included in my deployment ZIP when I published to AWS.
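If you build the deployment ZIP yourself, note that shell globs skip dotfiles, which is an easy way to lose .ebextensions. A hedged example, run from the project root:

# BAD: the * glob does not match hidden entries such as .ebextensions
zip -r app.zip *

# GOOD: zip the whole directory, excluding what you don't want
zip -r app.zip . -x "*.git*" "node_modules/*"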
I am trying to deploy a Node-based web service to Elastic Beanstalk but running into problems when posting too much data. The issue seems to be at the nginx layer, not the Node/Express layer. The message I get is:

413 Request Entity Too Large
nginx/1.6.2

Based on other answers on StackOverflow, I added a folder to the root of my project called .ebextensions and a file inside called nginx.config. The contents of this file are:

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 50M;

I deployed this along with my node application and even restarted the app server. So far it seems to have no effect. Am I doing something wrong?
AWS Elastic Beanstalk - Request Entity Too Large (413)
Your regex is wrong: you're assuming the server is in the request path. To match the request paths in the headers, use a regex like this one:

reqrep ^([^\ ]*)\ /lang/blog/(.*) \1\ /blog/lang/\2

You can use reqirep as well, but that is only useful if your servers actually serve /BLog/lAnG/ and the like.
I would like to ask how HAProxy can help in routing requests depending on parts of the URL.To give you an overview of my setup, I have the HAProxy machine and the two backends:IIS website (main site)Wordpress blog on NGINX (a subsite)The use-case:I'm expecting to route requests depending on the URL:www.website.com/lang/index.aspx -> main sitewww.website.com/lang/blog/articlexx -> blog subsiteThe blog access URL is "/server/blog/lang/articlexx" so I have to rewrite the original client request to that format--which is basically switching "blog" and "lang".From how I understood the configuration documentation and some posts on the net, I could use reqrep/reqirep to change the request HTTP headers before it gets passed to a backend. And if that's right, then this configuration should work:frontend vFrontLiner bind x.x.x.x:x mode http option httpclose default_backend iis_website # the switch: x/lang/blog -? x/blog/lang reqirep ^/(.*)/(blog)/(.*) /if\2/\1/\3 acl blog path_beg -i /lang/blog/ use_backend blog_website if blog backend blog_website mode http option httpclose cookie xxblogxx insert indirect nocache server BLOG1 x.x.x.x:80 cookie s1 check inter 5s rise 2 fall 3 server BLOG2 x.x.x.x:80 cookie s2 check inter 5s rise 2 fall 3 backupThe problem:The requests being received by the blog_website backend is still the original URL "x/lang/blog".I might have missed something on the regex part but my main concern is whether my understanding correct or not to use the reqirep in the first place. I would appreciate any help.Thanks very much.
HAProxy and URL Rewriting Configuration
I had a similar issue with an nginx and unicorn setup. Every day I saw this error in nginx's error.log:

failed (11: Resource temporarily unavailable) while connecting to upstream

The way I fixed it was to change the unix socket to a tcp socket. So instead of

upstream unicorn_app {
    server unix:/tmp/sockets/unicorn.sock fail_timeout=0;
}

I'm now using

upstream unicorn_app {
    server 127.0.0.1:3000 fail_timeout=0;
}

Hope it will help someone.
My unicorn server was running fine, but has stopped working and I can't figure out how to get it restarted.2011/04/18 15:23:42 [error] 11907#0: *4 connect() to unix:/tmp/sockets/unicorn.sock failed (111: Connection refused) while connecting to upstream, client: 71.131.237.122, server: localhost, request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/sockets/unicorn.sock:/", host: "tacitus"my config files are at:https://gist.github.com/926006any help as to what my troubleshooting options should be would be greatly appreciated.best,Tim
unicorn nginx upstream server not starting
One of the possible solutions is to start a pod on each cluster node using a DaemonSet that connects the S3 storage to a local directory using s3fs.

S3FS-FUSE: this is a free, open-source FUSE plugin and an easy-to-use utility which supports major Linux distributions and MacOS. S3FS also takes care of caching files locally to improve performance. This plugin simply exposes the Amazon S3 bucket as a drive on your system.

Here is a good article that gives you step-by-step instructions on how to do it: Kubernetes shared storage with S3 backend.

Then you can use this directory as a Volume in your Pods, for example as a directory with static content for your proxy server. (Or you can create a custom proxy server image with the s3fs tool inside and mount your S3 bucket directly into the Pod; check out this and this article for the details.

UPD: this doesn't work yet because of the limited support for FUSE in Kubernetes, see FUSE volumes #7890. There is a workaround, but it requires running a privileged container.)

There are two alternatives to s3fs available:

- ObjectiveFS: a commercial FUSE plugin which supports Amazon S3 and Google Cloud Storage backends.
- RioFs: a lightweight utility written in C. It doesn't support appending to files, doesn't support a fully POSIX-compliant file system interface, and can't rename folders.

Alternatively, you could try the Traefik ingress controller:

traefik.ingress.kubernetes.io/redirect-regex: ^http://localhost/(.*) - redirect to another URL for that frontend. Must be set with traefik.ingress.kubernetes.io/redirect-replacement.
traefik.ingress.kubernetes.io/redirect-replacement: http://mydomain/$1 - redirect to another URL for that frontend. Must be set with traefik.ingress.kubernetes.io/redirect-regex.
I am using this ingress controller and would like to set up an S3 proxy to some bucket. If I call the URL https://my-kube-server.org/img/dog.jpg in a browser, I expect to see/download the image at https://s3.eu-central-1.amazonaws.com/mybucket123/pictures/dog.jpg.

I can set up a rewrite rule and point to an external service as explained in this example:

kind: Service
apiVersion: v1
metadata:
  name: s3-proxy
spec:
  type: ExternalName
  externalName: s3.eu-central-1.amazonaws.com
  headers:
  - host: s3.eu-central-1.amazonaws.com

But I get errors from AWS because it's required to have "Host: s3.eu-central-1.amazonaws.com" in the header. I cannot set this header either in the s3-proxy service definition or in the ingress rule (a configuration-snippet doesn't work because it will add another Host header after it's already set in the pod's nginx.conf).

My solution is to take the whole location block for this ingress rule and include it as a server-snippet, which is pretty brute force. Another option is to have an nginx pod+service behind the ingress that takes care of setting the right headers, so the flow would be request -> ingress-controller -> nginx -> s3.

Does anybody have an idea how to proxy S3?
s3 proxy on kubernetes using Ingress
The problem turned out to be Nginx not accepting large files. Placing this in the location block of my nginx server config solved my issue:client_max_body_size 10M;
I have an app where the client makes a multipart request from example.com to api.example.com through https with Nginx, then api uploads the file to Amazon S3.It works on my machine but breaks when other people try it on a different network. Giving me this error:[Error] Origin https://example.com is not allowed by Access-Control-Allow-Origin. [Error] Failed to load resource: Origin https://example.com is not allowed by Access-Control-Allow-Origin. (graphql, line 0) [Error] Fetch API cannot load https://api.example.com/graphql. Origin https://example.com is not allowed by Access-Control-Allow-Origin.I'm using the cors npm package on the API like this:app.use(cors());All of this is going through an Nginx reverse proxy on DigitalOcean. Here this is my Nginx config:Individual server configs at/etc/nginx/conf.d/example.com.confand/etc/nginx/conf.d/api.example.com.conf, almost identical, just the addresses and names different:server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name example.com; ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; include snippets/ssl-params.conf; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_pass http://localhost:3000/; proxy_ssl_session_reuse off; proxy_set_header Host $http_host; proxy_cache_bypass $http_upgrade; proxy_redirect off; } }It works perfectly fine when I use it on localhost on my computer but as soon as I put it on DigitalOcean I can't upload. And it only breaks on this multipart request when I'm uploading a file, other regular cors GET and POST requests work.
CORS doesn't work despite headers set
I believe that the prerender example has the answer. If prerender is set to 1, it uses rewrite and then proxy_pass. So you would change this:

if ($prerender = 0) {
    rewrite .* /index.html break;
}

to this:

if ($prerender = 0) {
    rewrite .* /index.html break;
    proxy_pass http://[INTERNAL IP]:[PORT];
}

I would make further modifications since you are using Node and don't need some of the stuff set up for static files. Here is my final answer:

server {
    listen 80;
    server_name example.com;

    location / {
        try_files $uri @prerender;
    }

    location @prerender {
        #proxy_set_header X-Prerender-Token YOUR_TOKEN;
        set $prerender 0;
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator") {
            set $prerender 1;
        }
        if ($args ~ "_escaped_fragment_") {
            set $prerender 1;
        }
        if ($http_user_agent ~ "Prerender") {
            set $prerender 0;
        }
        if ($uri ~ "\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff)") {
            set $prerender 0;
        }
        # resolve using Google's DNS server to force DNS resolution and prevent caching of IPs
        resolver 8.8.8.8;
        if ($prerender = 1) {
            # setting prerender as a variable forces DNS resolution since nginx caches IPs and doesn't play well with load balancing
            set $prerender "service.prerender.io";
            rewrite .* /$scheme://$host$request_uri? break;
            proxy_pass http://$prerender;
        }
        if ($prerender = 0) {
            proxy_pass http://[INTERNAL IP]:[PORT];
        }
    }
}

I hope that helps. One thing I will add is that I wouldn't use a prerender engine; spiders can and do index links and pages that use javascript, and even PDFs. Just my two cents.
I'm trying to use prerender.io to get snapshots of angularjs pages. Currently I have a NodeJS instance for the web app, and an nginx reverse proxy redirects requests from port 80 to 4000.

According to the prerender nginx manual (https://gist.github.com/thoop/8165802) I can forward search-engine bot requests to the prerender URL, but because I already have a proxy for the NodeJS application, I don't know how to combine that with the try_files directive.

My question is: how can I use both the NodeJS application proxy and the prerender directive?
Using prerender with proxy in nginx
I've moved the most important points from the comments.

1) Yep, that's the normal behavior. Nginx's master process needs root privileges to manage listening sockets on the machine. This forum thread states that you can change it, but it may cause problems. However, nginx does allow you to change the owner of the worker processes.

2) It depends on how uWSGI was installed. If uWSGI was installed via apt-get you can start (stop, restart) it like this:

service uwsgi start|stop|restart

You installed uWSGI via pip, so the daemonize option will do the trick:

/path/to/uwsgi --daemonize /path/to/logfile

You can start it under any user you want, BUT if you decide to run it as root, you should specify the gid and uid options. uWSGI's best practices page says:

Common sense: do not run uWSGI instances as root. You can start your uWSGIs as root, but be sure to drop privileges with the uid and gid options.

Also take a look at the master-as-root option.

3) You can create as many processes and threads as you want, but it should depend on how many requests you're trying to process (concurrent or per second). You can read about this here. I would try different configurations and choose whichever works best.

3b) Basically, worker_processes helps to handle concurrent requests. See this question.

4) "WARNING: you are running uWSGI without its master process manager": you didn't specify the master option in your .ini file. While the master process isn't strictly necessary, it is very useful. It helps to effectively control workers and respawn them when they die.
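A hedged uwsgi.ini sketch pulling those points together for a single-core box (the worker and thread counts are starting points, not recommendations):

[uwsgi]
chdir = /myappdir
module = run
callable = app
virtualenv = /myappdir/myvirtualenv
socket = /tmp/uwsgi.sock
chmod-socket = 666
uid = pyuser
master = true        # enables the master process manager the warning complains about
processes = 2        # a couple of workers is a sane start on one core
enable-threads = true
threads = 2          # silences the "Python threads support is disabled" warning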
I successfully managed to install NGINX + uWSGI + Flask on a CentOS 6.x server, but I still have some doubts about the configuration:

1) I am running NGINX as a service: service nginx start/stop/restart. If I type "ps aux | grep nginx", I can see 2 processes:

- (by user root) master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
- (by user nginx) worker process

Is that OK?

2) I set up a virtualenv for Flask and installed the uWSGI package under that virtualenv. Currently I start uWSGI manually by typing "uwsgi /somedir/uwsgi.ini", where uwsgi.ini is as follows:

chdir = /myappdir
uid = pyuser
chmod-socket = 666
socket = /tmp/uwsgi.sock
module = run
callable = app
virtualenv = /myappdir/myvirtualenv

Is it possible to start uWSGI as a service, similarly to NGINX (as described in point 1)? In that case, should the user be root or non-root?

3) When I start uWSGI, I currently get the following warning:

*** Python threads support is disabled. You can enable it with --enable-threads ***

I realized that in the "uwsgi.ini" configuration file you can also configure a number of processes and threads. Considering the server I am running has just 1 core, can I set up multiple processes and threads? And if so, how many?

3b) In the NGINX configuration file "/etc/nginx/nginx.conf" it is also possible to specify "worker_processes", which defaults to 1. Can I increase that, or can it be higher than 1 only on multicore servers?

4) Besides the disabled threads support, when I start uWSGI I also get these warnings. What do they mean?

*** WARNING: you are running uWSGI without its master process manager ***
*** Operational MODE: single process ***
*** uWSGI is running in multiple interpreter mode ***
Python: uWSGI configuration for NGINX+FLASK
Nginx doesn't have its own queue; instead it pushes all requests to the application server, which has a listen socket:

#include <sys/types.h>
#include <sys/socket.h>

int listen(int sockfd, int backlog);

(http://linux.die.net/man/2/listen)

backlog defines the length of this queue. You can read the full conversation here.
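Both ends of that queue are tunable. On the nginx side, the listen directive accepts a backlog parameter, which is passed straight to listen() and capped by the kernel's net.core.somaxconn; a sketch:

server {
    # ask the kernel for a longer accept queue for this listening socket
    listen 80 backlog=4096;
    ...
}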
Consider the following situation: you are deploying an application that can serve 1 req./sec. What would happen if I send 10 requests in 1 second? I wrote a simple app to test that: https://github.com/amezhenin/nginx_slow_upstream. This test shows that your requests will be served _in_exact_same_order_ they were sent.

For now, this looks like Nginx has some kind of queue for requests, but my colleague (an administrator) said that there are no queues in Nginx. So I wrote another question about epoll here: Does epoll preserve the order in which fds were registered?. From that discussion I figured that epoll does preserve the order of requests.

I have two questions:

1) Is there any mistake in the reasoning/code above?
2) Does Nginx have some sort of queue for requests on top of epoll? Or does Nginx use pure epoll functionality?

Thank you, and sorry for my English :)
Does Nginx have separate queuing mechanism for requests?
It was my nginx configuration. Within /etc/nginx is a file called nginx.conf. I had

proxy_set_header Connection "upgrade";

when it should be

proxy_set_header Connection $http_connection;

This fixed my problem, and my database now works on the Ubuntu side of things.
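For reference, a sketch of the relevant fragment of the proxy configuration (the Kestrel address is an assumption; surrounding directives omitted):

location / {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    # forward the client's own Connection header instead of forcing "upgrade",
    # so ordinary POSTs with a body are not treated as upgrade requests
    proxy_set_header Connection $http_connection;
}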
I have a web application that is being developed on a Windows env and runs on Ubuntu 16.04. I have no issues POSTing info to my sqlite database file blog.db (located in the root directory of the project) in my Windows environment; however, when I try the same action on my Ubuntu server, I get the following error:

Microsoft.AspNetCore.Server.Kestrel[17]
Connection id "0HL8AR4JM7NOJ" bad request data: "Requests with 'Connection: Upgrade' cannot have content in the request body."
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Requests with 'Connection: Upgrade' cannot have content in the request body.
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Frame.ThrowRequestRejected(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.MessageBody.For(HttpVersion httpVersion, FrameRequestHeaders headers, Frame context)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Frame`1.d__2.MoveNext()

The problem is, I'm not sure what is causing this error to occur. I don't think it is an issue with my code, but it is possible. What do you think the problem is? Could this be caused by nginx? Or is this caused by asp.net?

Here is my Controller.cs:

private ApplicationDbContext ctx = new ApplicationDbContext();

[HttpPost]
public IActionResult Sent(string name, string info, string email)
{
    var message = new ContactMessage
    {
        username = name,
        message = info,
        email = email,
        date = DateTime.Now
    };
    ctx.messages.Add(message);
    ctx.SaveChanges();
    return View();
}

ApplicationDb.cs:

public class ApplicationDbContext : DbContext
{
    public DbSet<ContactMessage> messages { get; set; }
    public DbSet<Post> posts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder builder)
    {
        builder.UseSqlite("Filename=./blog.db");
    }
}
asp.net core 2.0 Unable to Post to database
After about 4 months of repetitive to-and-fro, Amazon support failed to resolve the issue. All problems still persist:

- The cache expires and misses after about 24 hours (my expiry is 1 year).
- All headers and AWS settings were verified by Amazon support themselves.

Unfortunately, the company is still paying for this awful experience due to lock-in.

------ After 24 hours ------ (screenshots showed the same cache misses recurring each day)

Concluding, the problem still stands unresolved and Amazon support seems to have given up. This is quite a strange experience, since AWS is something we generally take for granted. :(
I wish to serve images from an S3 bucket with Cloudfront as the CDN frontend. For that I tried the following:

What I wish to achieve (Attempt 2) -- misses the Cloudfront cache randomly.

I have the following setup to serve images: Cloudfront --> Nginx --> S3.

<<<<<<<< Sample S3 headers >>>>>>>>>>
<<<<<<<< Sample Nginx -> S3 headers (added Cache-Control) >>>>>>>>>>
<<<<<<<< Sample Cloudfront -> Nginx -> S3 headers >>>>>>>>>>

What I am currently working with (Attempt 1) -- hits Cloudfront as expected every time.

Cloudfront settings:

- Respects GET params to support URLs like http://cdn.example.com/abc.jpg?v=1
- Cache TTL set to 157680000 (fallback for Cache-Control)

What am I screwing up in Attempt 2 with my headers? (Cloudfront missing randomly.) Url(http://cdn.example.com/abc.jpg) and Url(http://cdn.example.com/abc.jpg?v=1) both will have the same ETag; is that fine?

Update: AWS followed up on forums.aws.amazon.com, still waiting for a reply: https://forums.aws.amazon.com/thread.jspa?threadID=144286&tstart=0

Update 2: a recent behavioral change in Cloudfront hits/misses without changing anything:

- Earlier the hits/misses were random with no fixed pattern.
- Now (with no change on my end) I get all hits one day and all misses the next day.
- This suggests a 24-hour cache, but the TTL and cache headers specify a 5-year expiry.

This is again weird and without any explanation. Hey AWS, can you see this???
Cloud-front backed with Nginx (which proxies to S3) randomly missing already cached items?
If you do not know anything about web server configuration, I am assuming you also do not know how/where to edit the config file. The nginx conf file is located at /etc/nginx/nginx.conf (verified on Ubuntu 12.04).

By default, nginx's gzip module is enabled, so first check whether it is already working using an online tool like this. If it is disabled, add this before the server {...} entry in nginx.conf:

# output compression saves bandwidth
gzip on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;

# make sure gzip does not lose large gzipped js or css files
# see http://blog.leetsoft.com/2007/07/25/nginx-gzip-ssl.html
gzip_buffers 16 8k;

# Disable gzip for certain browsers.
gzip_disable "MSIE [1-6].(?!.*SV1)";
I'm looking for "how to compress load-time js files" and I tried the solution from my question (I'm using Extjs). My friend suggested this too. But it uses Apache as the web server. Does anybody know how to do the trick in NGINX? My hosting uses nginx as the web server and I don't know anything about web server configuration. Sorry if my English is bad.
how to deflate js file in nginX?
Try this one:

location / {
    proxy_pass http://frontends;
    proxy_pass_header Server;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header REMOTE_ADDR $remote_addr;
}

Just add the proxy_set_header REMOTE_ADDR line and it should work well.

Tried with:

Django 1.5.4
Nginx 1.4.3
Tornado 2.2.1
So I got a simple setup with nginx for static media and load balancing and tornado as webserver for django (4 servers running). My problem is remote_addr not getting passed on to django so I'm getting a KeyError:article.ip = request.META['REMOTE_ADDR']The remote address is getting sent through as X-Real-IP (HTTP_X_REAL_IP) thanks to the nginx.conf:location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect false; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://frontends; }As HTTP is prepended to the META key I can't just do proxy_set_header remote_addr $remote_addr. What I could do is read the X-Real-IP if no remote addr key is found but I'm curious if there's a smarter solution.Thanks!
REMOTE_ADDR not getting sent to Django using nginx & tornado
If you do have gcc installed, the problem stems from /tmp being mounted as noexec. The error doesn't exactly help, but if you remount /tmp as exec you can install passenger properly.mount -o remount,rw,exec,nosuid /tmp
I'm trying to install Passenger and Nginx on my VPS. I followed these instructions and replaced all the source links with the current version. But when I ran the Phusion Passenger installer for Nginx, something went wrong with the gcc compiler:

Compiling and installing Nginx...
# sh ./configure --prefix='/opt/nginx' --with-http_ssl_module --with-http_gzip_static_module --with-cc-opt='-Wno-error' --add-module='/usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.17/ext/nginx'
checking for OS
 + Linux 2.6.32-220.el6.x86_64 x86_64
checking for C compiler ... not found
./configure: error: C compiler gcc is not found

What should I do?

OBS: My VPS runs CentOS 6.2 x64
C compiler gcc not found while installing passenger and nginx
Maybe the missing ; after fastcgi_pass?
I have this vhost conf:

server {
    # php/fastcgi
    listen 80;
    server_name trinityplex.com www.trinity.com;
    error_log /home/web/trinity_web/log/error.log;
    access_log /home/web/trinity_web/log/access.log;
    root /home/web/trinity_web/public;

    location / {
        index index.html index.htm index.php;
    }

    location ~ .php$ {
        fastcgi_pass 127.0.0.1:9000
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

When I restart nginx it dumps:

Starting nginx: 2011/04/16 18:56:34 [emerg] 2492#0: invalid number of arguments in "fastcgi_pass" directive in /usr/local/nginx/sites-enabled/trinityplex.com:14
nginx trouble loading index file [closed]
Adding nginx.ingress.kubernetes.io/ssl-redirect: "false" to annotations will disable the SSL redirect:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project_name-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: project_name
          servicePort: 80

Note that false is wrapped in quotation marks. I found it didn't work without this string casting.
An SSL redirect is enabled by default in a Kubernetes NGINX ingress. How can this be disabled? Current implementation below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project_name-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: project_name
          servicePort: 80
Disable SSL redirect for Kubernetes NGINX ingress
This is the solution I came up with:auth.lua-- Some logic goes here -- .... -- .... ngx.var.return_status = 200nginx.confhttp { lua_package_path .....; lua_package_cpath ....; rewrite_by_lua_no_postpone on; server { set $return_status 1; location /foo { rewrite_by_lua_file " format; return 200; } } } }
I'm trying to figure out how to do the following:Request is coming in.HttpLuaModuleperforms some action against the request. If request is valid than Lua will finish processing withngx.exit(202). But there are some conditions that may (and will) occur during the processing andnginxmight return 403 , 404, 503 Errors.What I want to do is to write to access logs only requests that have 200 Status code. Basically I would like to do something like this:location /foo { content_by_lua_file "/opt/nginx/lua/process.lua"; if (status == 200) { access_log "/path/to/the/access_log" }I'm very new to both nginx and lua so for me it's a bit of a challenge to figure out where to place and if statement (ether aftercontent_by_lua_fileor in side lua file) and what this if statement should look like.
How to write only logs with 200 status
You should be able to use ngx.var.arg_name, where name is the name of the query parameter you want to access. See the "Variables with Infinite Names" section in this tutorial for details on query parameter handling; you may also check my blog post for Lua nginx/openresty examples.

As an alternative, you can use ngx.req.get_uri_args() to retrieve all query parameters as one table. See this section in the same tutorial for a brief comparison between these methods.
I am trying to implement this: https://gist.github.com/MendelGusmao/2356310, a Lua/nginx-based URL shortener. The only change I want to implement is: when a shortened URL comes in with a query string parameter, I need to take that parameter and insert it into the long URL.

E.g. http://google.com?test=2 is shortened to http://abc.in/abc; when hitting http://abc.in/abc?test=3, I should get redirected to http://google.com?test=3.

For that I need to take the query string parameters from $request_uri. Can anyone help with some code?
how to get query parameter in lua or nginx?
The decoded URI can be found in ngx.var.uri. It does not contain the query string; if you need that, see ngx.var.query_string.

EDIT: if you cannot use this, here is a simple way to unescape a URL in Lua.

local hex_to_char = function(x)
  return string.char(tonumber(x, 16))
end

local unescape = function(url)
  return url:gsub("%%(%x%x)", hex_to_char)
end

Example usage:

local url = "/test/some%20string?foo=bar"
print(unescape(url)) -- /test/some string?foo=bar

But you should probably split the query string before using it.
When I use ngx.var.request_uri, I'm getting back a string that contains %20 in place of spaces. Is there a urldecode() function or similar to decode my string?
how to urldecode a request_uri string in Lua
Ok, I found the answer. I can't describe how grateful I am to @mike in the following post: Error In PHP5 ..Unable to load dynamic library. I ran

$ grep -Hrv ";" /etc/php5 | grep -i "extension="

and it returned a large list of files, one of which was newrelic.ini in /etc/php5/cli/conf.d/, which to be honest I wasn't even aware was a PHP directory. So I ran

sudo rm -rf /etc/php5/cli/conf.d/newrelic.ini

and restarted nginx and php5-fpm, and problem solved :)

Thanks @WayneWhitty for the suggestions! I am also going to let New Relic know that they should fix that in their uninstall script.
I am running Ubuntu 12.04 with Nginx and the latest PHP. The story goes like this: I tried to install the New Relic PHP agent per the instructions for Ubuntu:

wget -O - http://download.newrelic.com/548C16BF.gpg | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.newrelic.com/debian/ newrelic non-free" > /etc/apt/sources.list.d/newrelic.list'
sudo apt-get update
sudo apt-get install newrelic-php5
sudo newrelic-install install

And it doesn't work. After everything, the PHP agent simply can't start. I even whipped up a quick phpinfo.php page to see if the newrelic module was listed, and it's not. So then I googled "New Relic .deb" and came across this page: https://docs.newrelic.com/docs/server/server-monitor-installation-ubuntu-and-debian and followed the instructions. The install all goes through, but the agent also doesn't start. I like to keep my servers clean, so I decided: "OK, since it doesn't work, until New Relic support gets back to me and I can start fresh, I will remove the New Relic stuff that was installed." So once again I followed the instructions at that link. The removal seemed to work normally. However, if I execute the command "php" I get the following error:

root@MYHOSTNAME:/home# php
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20121212/newrelic.so' - /usr/lib/php5/20121212/newrelic.so: cannot open shared object file: No such file or directory in Unknown on line 0

I made sure there is no reference to newrelic in my /etc/php/fpm/php.ini file and double-checked whether there was anything in that folder. Nothing.

So my question is: how do I get rid of the error? How do I make PHP stop trying to load that newrelic.so module? Is there any reference to it somewhere that I might be missing?
PHP Startup: Unable to load dynamic library (NEW RELIC)
Like this:

if (preg_match('/(?<=net).*(?=\.php)/', $subject, $regs)) {
    $result = $regs[0];
}

Explanation:

(?<=     # Assert that the regex below can be matched, with the match ending at this position (positive lookbehind)
  net    # Match the characters "net" literally
)
.        # Match any single character that is not a line break character
  *      # Between zero and unlimited times, as many times as possible, giving back as needed (greedy)
(?=      # Assert that the regex below can be matched, starting at this position (positive lookahead)
  \.     # Match the character "." literally
  php    # Match the characters "php" literally
)
What would be the best regular expression for this scenario? Given this URL: http://php.net/manual/en/function.preg-match.php, how should I go about selecting everything between (but not including) http://php.net and .php, i.e. /manual/en/function.preg-match? This is for an Nginx configuration file.
Match the path of a URL, minus the filename extension
Whether you're starting nginx in a shell or using a daemon service (which is simply a wrapper around the command line API), the answer lies in the command line API. As you learned, the default location nginx looks in for the configuration file is /etc/nginx/nginx.conf, but you can pass in an arbitrary path with the -c flag (note that -c expects a file, not a directory). E.g.:

$ nginx -c /usr/local/nginx/conf/nginx.conf

A couple other notes:

I doubt there's any good reason to repeat "index.html" in your server block.
I would name your configuration file "nginx.conf" (you currently indicate that it's just named "conf"). It's the standard.
Familiarize yourself with another command line flag, -t, which just checks to make sure your configuration file works. Run nginx -t every time after modifying your configuration file and it will spit out any syntax errors. To reload the configuration after changes, use nginx -s reload.
I'm working on getting NGINX configured on a server and I've been able to get all of my files into /usr/local/nginx/html/. I've also created an nginx.conf file in /usr/local/nginx/conf. All it contains is:

server {
    root /usr/local/nginx/html;
    index index.html index.html;
}

I've been using /usr/local/ because that's the only thing I have permissions to write in. When I go to look at the site, I still get the Nginx index.html page with the message:

This is the default index.html page that is distributed with nginx on EPEL. It is located in /usr/share/nginx/html. You should now put your content in a location of your choice and edit the root configuration directive in the nginx configuration file /etc/nginx/nginx.conf.

I guess my question is, how can I configure my nginx.conf file correctly so that nginx uses that conf file and pulls from the correct location for the site files?
NGINX change config location
That's because requests for / are hitting the first location block, whose root is /src/grafana/dist, and the index file (index.html) is not found there; with directory listing disabled by default, nginx answers with 403.
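A minimal sketch of the two usual fixes, reusing the paths from the question; pick one depending on whether you actually want a listing:

location / {
    root  /src/grafana/dist;
    index index.html;   # fix 1: make sure /src/grafana/dist/index.html really exists in the container
    # autoindex on;     # fix 2: or serve a directory listing instead of the 403
}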
I'm trying to set up graphite to work with grafana in docker based on this project: https://github.com/kamon-io/docker-grafana-graphite and when I run my dockerfile I get a 403 Forbidden error from nginx. My configurations for nginx are almost the same as the project's configurations. I run my dockerfiles on a server and test them on my windows machine, so the configurations are not exactly the same... for example I have:

server {
    listen 80 default_server;
    server_name _;

    location / {
        root /src/grafana/dist;
        index index.html;
    }

    location /graphite/ {
        proxy_pass http://myserver:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host $host;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;

        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, OPTIONS";
        add_header Access-Control-Allow-Headers "origin, authorization, accept";
    }

But I still keep getting 403 Forbidden. Checking the error log for nginx says:

directory index of "/src/grafana/dist/" is forbidden

Stopping and running it again, it says the same:

directory index of "/src/grafana/dist/" is forbidden

I'm very new to nginx... was wondering if there's something in the configurations that I'm misunderstanding. Thanks in advance.
nginx 403 Forbidden error
Add the following to your nginx configuration stanza that proxies to NodeJS:

proxy_set_header X-Real-IP $remote_addr;

Now you can read the 'X-Real-IP' header in NodeJS.
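On the Node side, a minimal sketch; note that Node lower-cases all incoming header names, and the remoteAddress fallback here is just illustrative:

const http = require('http');

http.createServer((req, res) => {
  // nginx sets X-Real-IP; Node exposes it as req.headers['x-real-ip']
  const clientIp = req.headers['x-real-ip'] || req.socket.remoteAddress;
  res.end('client ip: ' + clientIp);
}).listen(8080, '127.0.0.1');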
I have nginx proxying to a nodejs server. I am trying to read the request client IP address/host name in my nodejs, but it's always ::ffff:127.0.0.1. Yet in my nginx access log I can see the client IP address printed, so I'm not sure why my nodejs server can't get it:

x.x.x.x - - [24/Aug/2017:14:28:01 -0700] "GET ...."
nginx how to get the request client IP address
If it's an internal task that takes too much time to process, use celery to run the task: http://docs.celeryproject.org/en/latest/userguide/tasks.html

If it's not purely an internal task (e.g. uploading a large file), then increase the Nginx client_body_timeout to greater than 60s. The problem is the default timeout in the nginx config. Edit the Nginx virtual host file and add the following line in the server {} section (see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout):

# default is 60 seconds; for a 300 second timeout:
client_body_timeout 300s;

Edit: uwsgi_read_timeout 300s; is also needed, but that's already in your config.
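For the celery route, a minimal sketch under assumed names (tasks.py, scrape_sites, and the "q" parameter are hypothetical): the view enqueues the slow multi-site search and returns immediately, so nginx never has to wait on it:

# tasks.py (hypothetical names, for illustration only)
from celery import shared_task

@shared_task
def scrape_sites(query):
    ...  # the slow multi-site search runs in a worker, not in the request cycle

# views.py
from django.http import JsonResponse
from .tasks import scrape_sites

def search(request):
    result = scrape_sites.delay(request.POST.get("q", ""))
    return JsonResponse({"task_id": result.id})  # client polls for the result later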
I'm trying to run my Django application using Nginx + uwsgi, but I receive a 504 Gateway Time-out after one minute of loading. My app takes time to do what's needed, as it searches for specific things on several websites. My nginx conf is the following:

upstream uwsgi {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name server_ip;
    root /opt/emails/subscriptions;
    index index.html index.htm index.php;

    location /emailsproject/ {
        root /opt/emails/subscriptions/;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://uwsgi;
        proxy_set_header Host $http_host;
        uwsgi_read_timeout 18000;
    }
}

My uwsgi script:

description "uWSGI server"

env PYTHONPATH=/opt/emails/subscriptions
env DJANGO_SETTINGS_MODULE=emailsproject.settings

start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec uwsgi_python --http-socket 127.0.0.1:8000 -p 4 --wsgi-file /opt/emails/subscriptions/emailsproject/wsgi.py

My nginx is giving me the following error message in error.log:

2015/09/28 02:15:57 [error] 4450#0: *19 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 37.235.53.246, server: my_server_ip, request: "POST /home/ HTTP/1.1", upstream: "http://127.0.0.1:8000/home/", host: "my_server_ip", referrer: "http://my_server_ip/home/"

Does anyone have any idea how I can get rid of this? I've tried tons of stackoverflow solutions but none worked for me.
504 Gateway Time-out uwsgi + nginx django application
I had the same issue, but the problem was in SQLAlchemy. Try adding this:

@app.teardown_request
def shutdown_session(exception=None):
    from extension import db
    db.session.remove()
I run my flask app and it works fine, but after a while the app stops, and in my uwsgi log I see:

probably another instance of uWSGI is running on the same address (127.0.0.1:9002).
bind(): Address already in use [core/socket.c line 764]

When I run touch touch_reload, the app is working again. I am not running anything else on the server that might take the socket. My conf:

nginx:

server {
    listen 80;
    ....
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9001;
    }
    ....
}
server {
    listen 80;
    ....
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9003;
    }
    ....
}

uwsgi:

chdir = /var/www/../
module = wsgihandler
socket = 127.0.0.1:9003
wsgi-file = app/__init__.py
callable = app
master = true
chmod-socket = 664
uid = root
gid = root
processes = 4
socket-timeout = 180
post-buffering = 8192
max-requests = 1000
buffer-size = 32768
logto = /var/www/.../log/uwsgi.log
touch-reload = /var/www/.../touch_reload
why do I get the error "Address already in use"?
You need to set underscores_in_headers on; in your NGINX config.
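For context, a minimal sketch of where the directive sits; the server name and PHP-FPM socket path are placeholders:

server {
    listen 80;
    server_name example.com;        # placeholder
    underscores_in_headers on;      # stop nginx silently dropping headers like api_key

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # adjust to your PHP-FPM socket
    }
}

With this in place, the header should reach PHP as $_SERVER['HTTP_API_KEY'].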
I have multiple apps in Google Play and the App Store. These send custom request headers, but the headers include an underscore, e.g. api_key. The server has now moved from PHP 5.2 on Apache to PHP 5.5 on nginx. On NGINX, apache_request_headers() and getallheaders() are not available. Is there any way to read custom request headers on an NGINX server without having to go and update all the apps to remove the underscore? Parsing the $_SERVER variable does not work either: any headers using an underscore are dropped.
Get headers with an underscore on NGINX
Add index index.php; in the server block. If it doesn't work then you need to remove the $uri/, because you don't want to do an autoindex on.

EDIT: Just noticed that you already figured out your problem, so I'll add the reasoning behind it. The reason you needed autoindex on; is that without it nginx will follow the try_files rules:

1. Check if there's a file called /, which of course fails.
2. Check if there's a directory called / (by adding root it would = /www/blog/); this check succeeds, so it tries to list the content of the folder.
3. Since you didn't specify autoindex on;, by default nginx forbids directory listing, so it returns a 403 forbidden error.

The rest of the site works fine because it fails the $uri/ test or doesn't reach it, because you probably don't have a folder called image.jpg or stylesheet.css etc.
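A minimal sketch of the fixed server block, reusing the paths from the question:

server {
    listen 80;
    server_name test.com www.test.com;
    root /www/blog;
    index index.php;    # lets "/" resolve via try_files to the index instead of a 403 listing

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}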
I'm setting up my blog on a new EC2 instance because one of the sites on the server that's currently hosting it is being DDoSed. I'm having some trouble with nginx, because I can either see all the pages fine but get a 403 on the index, or see the index but get a 404 on the pages (depending on the config I'm using). Here's my nginx config:

server {
    listen 80;
    server_name www.test.com;
    server_name test.com;
    root /www/blog;
    include conf.d/wordpress/simple.conf;
}

And simple.conf:

location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

location / {
    # This is cool because no php is touched for static content.
    # include the "?$args" part so non-default permalinks doesn't break when using query string
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
}

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}

If I change the try_files $uri $uri/ /index.php?$args; to index index.php, the front page will work fine and the rest will be 404. If I leave it like that, the front page is 403. Here's the error log:

2013/08/07 19:19:41 [error] 25333#0: *1 directory index of "/www/blog/" is forbidden, client: 64.129.X.X, server: test.com, request: "GET / HTTP/1.1", host: "www.test.com"

That directory is 755 on the nginx user:

drwxr-xr-x 6 nginx nginx 4096 Aug 7 18:42 blog

Is there anything obvious I'm doing wrong? Thanks!
403 forbidden on wordpress index with nginx, the rest of the pages work fine
Try this patch:

-proxy_pass http://node/;
+proxy_pass http://node;

With the trailing slash, nginx substitutes its normalized URI when proxying, which (among other things) collapses the double slash in socket.io's /xhr-polling// paths, as visible in your error log; without a URI part in proxy_pass, the request URI is forwarded to the upstream unchanged.
If I run my expressjs app like so:

coffee server.coffee

and navigate to localhost:8080, everything works just fine. However, when I reverse proxy 8080 with nginx with the following configuration:

server {
    listen 0.0.0.0:80;
    server_name localhost;
    access_log /var/log/nginx/nodetest.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://node/;
        proxy_redirect off;
    }
}

upstream node {
    server 127.0.0.1:8080;
}

I get the following error in the Chrome dev console:

GET http://184.73.217.204/socket.io/xhr-polling//1300750540040 502 (Bad Gateway)

and the following in nginx's error.log:

2011/03/22 13:07:59 [error] 10269#0: *18 upstream prematurely closed connection while reading response header from upstream, client: 168.229.58.68, server: localhost, request: "GET /socket.io/xhr-polling//1300799281533 HTTP/1.1", upstream: "http://127.0.0.1:8080/socket.io/xhr-polling/1300799281533", host: "184.73.217.204", referrer: "http://184.73.217.204/"

Any guidance appreciated!
502 Bad Gateway when using ExpressJS with nginx
The tilde (~) is an identifier for Nginx, letting it know that the location block is using a REGEX to match the location.

"~" = REGEX match, case-sensitive
"~*" = REGEX match, case-insensitive

Nginx Docs
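A minimal sketch contrasting the two modifiers:

# case-sensitive regex match: /img/Logo.PNG will NOT match here
location ~ \.(png|gif|jpg)$ {
    expires 30d;
}

# case-insensitive regex match: /img/Logo.PNG WILL match here
location ~* \.(png|gif|jpg)$ {
    expires 30d;
}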
What is the tilde (~) doing in an Nginx location directive? I.e.:

location ~* \.(png|gif|jpg)$ {
    [...configuration]
}
nginx location tilde
Flask vs Bottle comes down to a couple of things for me.

How simple is the app? If it is very simple, then bottle is my choice. If not, then I go with Flask. The fact that bottle is a single file makes it incredibly simple to deploy, by just including the file in our source. But the fact that bottle is a single file should be a pretty good indication that it does not implement the full WSGI spec and all of its edge cases.

What does the app do? If it is going to have to render anything other than Python -> JSON, then I go with Flask for its built-in support of Jinja2. If I need to do authentication and/or authorization, then Flask has some pretty good extensions already for handling those requirements. If I need to do caching, again, Flask-Cache exists and does a pretty good job with minimal setup. I am not entirely sure what is available for bottle extension-wise, so that may still be worth a look.

The problem with using bottle's built-in server is that it will be single process / single thread, which means you can only handle one request at a time. To deal with that limitation you can do any of the following, in no particular order:

1. Eventlet's wsgi wrapping the bottle.app (single threaded, non-blocking I/O, single process)
2. uwsgi or gunicorn (the latter being simpler), most often set up as single threaded, multi-process (workers)
3. nginx in front of uwsgi

3 is most important if you have static assets you want to serve up, as you can serve those with nginx directly. 2 is really easy to get going (esp. gunicorn), though I use uwsgi most of the time because it has more configurability to handle some things that I want. 1 is really simple and performs well... plus there is no external configuration or command line flags to remember.
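For option 2, a minimal sketch, assuming your bottle app lives in a module named app.py (the module and route names here are illustrative):

# app.py
import bottle

app = bottle.Bottle()

@app.route('/ping')
def ping():
    return {'status': 'ok'}   # bottle serializes dicts to JSON responses

# then run several worker processes behind gunicorn:
#   gunicorn -w 4 -b 127.0.0.1:8080 app:app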
I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).

We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.

I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.

I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:

1. Use Bottle only, as HTTP server + app
2. Use Bottle on top of uwsgi, with uwsgi as the HTTP server
3. Use Bottle with nginx/uwsgi

Questions:

1. If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
2. Would the uwsgi/bottle (or Flask) combination be considered production-ready?
3. Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one?
Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle
If you are willing to forgo the "permanent" redirect status, I believe a 307 redirect instead of a 301 will preserve the POST. There actually is a redirect that is permanent and preserves the POST, a 308, but it isn't yet well adopted by browsers and other user agents.
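A minimal sketch of the 307 variant of the question's redirect block, assuming nginx 1.1.16 or later (where return accepts 307):

server {
    listen 80;
    server_name example.org;
    # 307 keeps the method and body, so the POST is re-sent to the https URL
    return 307 https://$server_name$request_uri;
}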
I have a website set up that uses the redirect method...

server {
    listen 80;
    server_name example.org;
    return 301 https://$server_name$request_uri;
}

However, when a page is posted to "http://example.com" it redirects to "https://example.com" and in the process strips the POST. I recognize this is how it works; however, I need to somehow do one of the following:

1. Do a redirect from http -> https while keeping the POST variables intact
2. Convert the POST variables to GET variables during the redirect (which would work fine)
3. Redirect everything EXCEPT for one folder

Any suggestions? I'm a bit lost...
Nginx loses POST variable with http -> https redirect
The main idea: all your controller does is set the nginx x-accel-redirect header. Once your controller method returns (which will be very fast), nginx will look at the header your Rails app set. If x-accel-redirect is set, then nginx serves the static file. Your controller will look something like:

def show
  @attachment = Attachment.find(params[:id])
  # Do anything else you need for authentication, etc.
  head(:x_accel_redirect => '/files/' + @attachment.filename,
       :content_type => @attachment.content_type,
       :content_disposition => "attachment; filename=\"#{@attachment.filename}\"")
end

This alone won't do the trick. You also need to tell nginx about the files located at $RAILS_ROOT/files. Add this to the end of your nginx config, inside the server block:

location /files {
    root /path/to/rails_app;
    internal;
}

Put the static file into $RAILS_ROOT/files and it should work. No need for plugins or monkeypatching. Tested with Rails 2.3.2 and 2.3.14.
Let's say I have a Rails 2.3.2 application fronted by nginx and served by mongrel, in which I need to serve a large static file through Rails (to control access to it). I want the Rails app to delegate the transfer of the file to nginx, to avoid blocking the mongrel instance.

The available information seems contradictory and incomplete. This post shows how to do it with Apache, and hints that it can also be done with nginx, but gives no examples. This post and this post show how to do it using a plugin that Rails 2.3 apparently makes unnecessary. This post suggests that maybe there isn't support for x-sendfile with nginx after all.

I'd rather not muck around with plugins for things Rails can now do by itself. Has anybody gotten x-sendfile-like behavior to work using no plugins and Rails 2.3/nginx/mongrel? If not, what's the best documentation for getting it to work with a plugin (and/or monkeypatch) and Rails 2.3/nginx/mongrel?
Serving Large Files Through Nginx via Rails 2.3 Using x-sendfile
These rewrite rules made the scripts work:

rewrite ^/foo/([^?]*)(?:\?(.*))? /bar/index.php?title=$1&$2;
rewrite ^/foo /bar/index.php;
I'm looking to convert the following mod_rewrite rules to the Nginx equivalent:

RewriteRule ^foo/(.*)$ /bar/index.php?title=$1 [PT,L,QSA]
RewriteRule ^foo/*$ /bar/index.php [L,QSA]

So far I have:

rewrite ^foo/(.*)$ /bar/index.php?title=$1&$query_string last;
rewrite ^foo/?$ /bar/index.php?$query_string break;

The problem is (I think!) that the query string doesn't get appended. I haven't found a way to port the QSA argument to Nginx.
How do I convert mod_rewrite (QSA option) to Nginx equivalent?
According to Gunicorn, they suggest you use nginx to actually buffer clients and prevent slowloris attacks, so this buffering is likely a good thing. However, I do see an option further down on that link I provided where it talks about removing the proxy buffer; it's not clear whether this is within nginx or not, but it looks as though it is. Of course this is under the assumption you have Gunicorn running, which you do not. Perhaps it's still useful to you.

EDIT: I did some research, and that buffer disable in nginx is for outbound, long-polling data. Nginx states on their wiki site that inbound requests have to be buffered before being sent upstream:

"Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers."
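One hedged update to the above: since nginx 1.7.11 there is a directive for exactly this, so on a modern nginx a sketch like the following should stream uploads straight through (the backend address is a placeholder):

location /upload {
    proxy_pass http://127.0.0.1:9000;   # placeholder upload backend
    proxy_request_buffering off;        # forward the request body as it arrives
    proxy_http_version 1.1;             # needed for unbuffered/chunked request bodies
    client_max_body_size 0;             # don't cap the upload size (tune to taste)
}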
It seems that nginx buffers requests before passing them to the upstream server. While that is OK in most cases, for me it is very bad :)

My case is like this: I have nginx as a frontend server to proxy 3 different servers:

1. apache with a typical php app
2. shaveet (an open source comet server) built by me with python and gevent
3. a file upload server, built again with gevent, that proxies the uploads to Rackspace Cloud Files while accepting the upload from the client

#3 is the problem. Right now what I have is that nginx buffers the whole request and then sends it to the file upload server, which in turn sends it to Cloud Files, instead of sending each chunk as it gets it (which would make the upload faster, as I can push 6-7MB/s to Cloud Files).

The reason I use nginx is to have 3 different domains with one IP; if I can't do that I will have to move the file upload server to another machine.
disable request buffering in nginx
I use the following rewrite in the top level server clause:

rewrite ^(.*)/index\.html$ $1 permanent;

Using this alone works for most URLs, like http://example.com/bar/index.html, but it breaks http://example.com/index.html. To resolve this, I have the following additional rule:

location = /index.html {
    rewrite ^ / permanent;
    try_files /index.html =404;
}

The =404 part returns a 404 error when the file is not found. I have no idea why the first rewrite alone isn't sufficient (most likely because, for /index.html, the capture $1 is empty, which would leave an empty redirect target).
I've seen a few ways to rewrite the $request_uri and add the index.html to it when that particular file exists in the file system, like so:

if (-f $request_filename/index.html) {
    rewrite (.*) $1/index.html break;
}

but I was wondering if the opposite is achievable, i.e. when somebody requests http://example.com/index.html, they're redirected to http://example.com. Because the nginx regexp is perl compatible, I tried something like this:

if ( $request_uri ~* "index\.html$" ) {
    set $new_uri $request_uri ~* s/index\.html//
    rewrite $1 permanent;
}

but it was mostly guesswork. Is there any good documentation describing the rewrite module for nginx?
nginx : rewrite rule to remove /index.html from the $request_uri
As the error message suggests, and also from the documentation:

The file uploads by the file provisioner are done as the SSH or PowerShell user. This is important since these users generally do not have elevated privileges on their own. If you want to upload files to locations that require elevated privileges, we recommend uploading them to temporary locations and then using the shell provisioner to move them into place.

So the vagrant user (if not modified) is used to scp the file, but you can't access /etc/ with it. To make it work you need to upload the file to a temporary location and then use a shell provisioner to move it to the target directory:

config.vm.provision "file", source: "./bolt.local.conf", destination: "/tmp/bolt.local.conf"
config.vm.provision "shell", inline: "mv /tmp/bolt.local.conf /etc/nginx/conf.d/bolt.local.conf"

This works because the privileged option is true by default on shell provisioners. But it is a bit convoluted to have two provisioners just to copy a configuration file, right? Well, if the file is already inside your shared folder, you can just use a shell provisioner to copy it into the nginx directory, so you'll end up with something like this:

# This is the default and serves just as a reminder
config.vm.synced_folder ".", "/vagrant"

config.vm.provision "shell", inline: "cp /vagrant/bolt.local.conf /etc/nginx/conf.d/bolt.local.conf"
I am working with a Nginx server. I want to copy a configuration file to /etc/nginx/conf.d with the Vagrantfile. The command I use is:

config.vm.provision "file", source: "./bolt.local.conf", destination: "/etc/nginx/conf.d/bolt.local.conf"

The error I receive is:

Failed to upload a file to the guest VM via SCP due to a permissions error. This is normally because the SSH user doesn't have permission to write to the destination location. Alternately, the user running Vagrant on the host machine may not have permission to read the file.

I am using the bento/ubuntu-16.04 box. I tried to search for a way to change the permissions for the provision command, but I only found ways to change the owner for the config.vm.share_folder command. Do you know the answer?
Vagrant: config.vm.provision does not allow me to copy a file to etc/nginx/conf.d?
Ok, I was having the same problem, and after some more research and a couple of trial-and-error attempts I figured it out. Try adding the X-Forwarded-Proto header, as in the following example:

server {
    server_name example.com;

    proxy_set_header Host $host;    # You need this line
    proxy_set_header X-Forwarded-Proto $scheme;

    location ^~ /jenkins {
        proxy_pass http://localhost:8080/;
    }

    listen 443 ssl;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
}

The $scheme will handle https on the front end, and then you can call http on the backend using the location.
I have set up nginx as a reverse proxy on an ubuntu instance that is hosting jenkins and a few other applications. I'm using nginx to route to the various applications based on a relative path. All traffic from the client to nginx is over https. Behind the firewall, nginx routes everything over http to the configured paths and port numbers. It looks something like this:

client ---https---> [firewall] ---> nginx ---http---> jenkins

The relevant part of the nginx config file is this:

server {
    listen 443 ssl;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /jenkins {
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:6969;
    }
}

The problem is that jenkins uses simple authentication, and upon a successful login it sends a 302 redirect. Nginx correctly proxies the url and port, but not the scheme, so the client follows the redirect over http instead of https. In the browser I then get a 400 error:

400 Bad Request
The plain HTTP request was sent to HTTPS port

I know that there is a scheme variable: $scheme. But I don't know how to tell nginx to map the http redirect from jenkins to https. All the examples I've looked at on stackoverflow seem to address slightly different situations.
how to handle nginx reverse proxy https to http scheme redirect
Assuming you have nginx proxying to port 8001, you want to do this:

gunicorn -b 127.0.0.1:8001 your_project_name.wsgi:application

You need to run that from your project folder (where the manage.py file is).
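And the matching nginx side, as a minimal sketch; the server name is a placeholder, while 8001 must match gunicorn's -b address:

server {
    listen 80;
    server_name example.com;               # placeholder

    location / {
        proxy_pass http://127.0.0.1:8001;  # must match gunicorn's bind address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}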
I'm trying to deploy a django project with NGINX and gunicorn. I keep getting 502 Bad Gateway. I've been working nonstop on this for the past few days and I can't seem to get this deployed. I've gone through 3 tutorials on Digital Ocean, but they aren't correct, obviously. I keep getting 502 Bad Gateway, or if I try to use manage.py runserver, I get 400 Bad Request. I think my problem is with gunicorn. When I enter gunicorn -config, it says:

usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: No application module specified.

Every bit of documentation I can find says to simply type gunicorn wsgi:application, but when I do, it says "workers failed to boot." How do I set an application module?
Configuring Gunicorn: No application module specified
If I understand you correctly, you direct your browser to an IP address (https://xx.xx.xx.xx/) instead of a domain name and expect it to obey the HSTS rule? But RFC 6797 Appendix A explicitly excludes IP addresses:

HSTS Hosts are identified only via domain names -- explicit IP address identification of all forms is excluded.
I set up a cert for an IP address with nginx, and enabled HTTP Strict Transport Security:

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

The directive is present in the response headers:

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 17 Sep 2014 22:46:54 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Access-Control-Allow-Origin: *
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000; includeSubdomains;
X-UA-Compatible: IE=Edge,chrome=1

... but it's not respected by the browsers for the IP address (whereas they do respect it for FQDNs).
HTTP Strict Transport Security not respected for IP addresses