Here is the final solution that I got working on WebFaction:

```nginx
server {
    listen 12440;
    root /some/path/here/nginx/html/noahc/;
    server_name www.domain.net domain.net;
    port_in_redirect off;

    location / {
        error_page 404 = @foobar;
    }

    location @foobar {
        rewrite .* / permanent;
    }
}
```
What I want to do: whenever I get a 404 error on my domain, automatically 301 to the homepage. I have a lot of old blog posts that were linked to but are no longer on the blog, and if anyone happens to click through from another site, they should get kicked to the homepage. How can I do this inside nginx?

```nginx
server {
    listen 12680;
    root /home/noahc/webapps/nginx/html/noahc/;
    server_name www.noahc.net, noahc.net;
    error_page 404 @foobar;

    location @foobar {
        rewrite .* / permanent;
    }
}
```
Nginx: Return 301 Redirect on 404 Error
It looks like you didn't set the environment variable holding the API key that you're trying to read with `$apiKey = getenv(...);`. Please check the documentation, as it looks like you're using the example code. Just for a test you can use:

```php
$apiKey = 'add here your api key';
```

replacing the `getenv` call. It should work. Then you can set the API key in a config file or as an environment variable (depending on your application) so it isn't hardcoded into the script.
This is my code for sending email using SendGrid. I have the correct API key, but the browser still displays this error:

```
HTTP/1.1 401 Unauthorized
Server: nginx
Date: Thu, 14 Jul 2016 08:14:32 GMT
Content-Type: application/json
Content-Length: 88
Connection: keep-alive

{"errors":[{"message":"Permission denied, wrong credentials","field":null,"help":null}]}
```

My code:

```php
client->mail()->send()->post($mail);
echo $response->statusCode();
echo $response->headers();
echo $response->body();
}
?>
```
Wrong credentials when sending mail using SendGrid
Alright, I answered my own question. I was missing the `passenger_ruby` and `passenger_root` configurations in my nginx.conf file. Note that the `passenger_ruby` path needs to be the wrapper if you're using RVM.

```nginx
passenger_root /usr/local/rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9;
passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.2-p290/ruby;
```
I'm getting the following error in nginx (with a 403) when I visit .com:

```
[error] 5384#0: *1 directory index of "/u/apps//current/public/" is forbidden
```

I'm on Ubuntu 10.04 and I can't for the life of me get nginx, Passenger, Rails 3.1, and Capistrano to play nicely. I'm deploying to /u with Capistrano. Everything in /u is 755, owned by the app user. /u/apps//current/public/ has all my assets, the favicon, and everything else you'd expect. When I add `autoindex on` to nginx.conf I get a listing of the public/ directory, which leads me to believe that I don't have a permission problem. My nginx.conf file is default except for:

```nginx
server {
    listen 80;
    server_name .com;
    passenger_enabled on;
    root /u/apps//current/public/;
}
```

And my Capistrano deploy.rb file has nothing unusual. Any ideas why the Rails app doesn't seem to be starting?
Rails 3.1, nginx, Passenger directory index forbidden
PHP does not recognize `bcadd()` and gives that error. The `bcadd()` function is included in the bcmath PHP extension, so installing the relevant bcmath extension solves the issue:

```
sudo apt-get install php7.0-bcmath
```

Please note, you should install the correct version of the bcmath extension for your PHP version. Then restart Apache:

```
sudo service apache2 restart
```
After installing eduTrac SIS and accessing the dashboard I got this error. Setup: Ubuntu 16.04, PHP 7.0 (php7.0-fpm), Apache2, Nginx. The URL gives error 500 and nginx/error.log displays:

```
FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to undefined function PHPBenchmark\bcadd() in /var/www/html/eduTrac-SIS/app/src/vendor/phpbenchmark/phpbenchmark/lib/PHPBenchmark/Utils.php:18
```
Fatal error: Uncaught Error: Call to undefined function bcadd()
No answer so far, so I'm providing one. I verified it myself: the `real_ip` module changes the value of the connection origin internally, and for all intents and purposes, everything related to the source of the connection becomes that IP (taken from `X-Forwarded-For`, `X-Real-IP`, etc.), including the `$binary_remote_addr` variable. So it's safe to use it with the request-limit configuration. Note: on the other hand, nginx saves the connection's original IP in `$realip_remote_addr`.
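To make the interaction concrete, here is a minimal sketch combining the two modules, based on the configuration in the accompanying question (the trusted address range is a placeholder):

```nginx
http {
    # trust the CDN / load balancer addresses (placeholder range)
    set_real_ip_from 10.0.0.0/8;
    real_ip_recursive on;
    real_ip_header X-Forwarded-For;

    # $binary_remote_addr already holds the restored client IP here
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=5;
        }
    }
}
```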
I have an Nginx server pool behind a CDN + load balancer setup. The CDN caches HTTP "read" requests (GET, HEAD, OPTIONS) and bypasses "write" requests (POST). I'm using the `real_ip` module to get clients' IPs from the `X-Forwarded-For` header in a configuration like this:

```nginx
set_real_ip_from ...;
set_real_ip_from ...;
real_ip_recursive on;
real_ip_header X-Forwarded-For;
```

I can confirm it works. But I also want to limit the request rate per client (I will assume every IP is a distinct client), to avoid robots and attacks, so I'm using the `limit_req` module as follows:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    location / {
        limit_req zone=perip burst=5;
    }
}
```

So, my question is: will `$binary_remote_addr` hold the original client's IP, the real originator of the request, once I've configured `real_ip`, or does Nginx not override it internally as I'm expecting? Because if it doesn't, a configuration like this will certainly cause me serious problems. I suppose Nginx is smart enough for that, but since I couldn't find confirmation in the documentation and haven't had the chance to test it in a real, distributed scenario so far, I hope someone with previous experience can tell me. Thank you.
Nginx rate limit and real IP module
After days of screwing around I found an answer on Server Fault that suggested deleting the listening socket. So I ran:

```
rm ~/.config/valet/valet.sock
```

and immediately the tailed PHP log showed:

```
[08-Sep-2019 16:55:48] NOTICE: fpm is running, pid 10316
[08-Sep-2019 16:55:48] NOTICE: ready to handle connections
```

So I guess that's all there was to it!
I've upgraded Valet on my MacBook (running Catalina) and followed the Laravel docs, including re-running the `valet install` command, and am seeing unexpected 502 Bad Gateway errors. I was checking the logs and found:

```
[27-Aug-2019 20:39:06] ERROR: Another FPM instance seems to already listen on /Users/myuser/.config/valet/valet.sock
[27-Aug-2019 20:39:06] ERROR: Another FPM instance seems to already listen on /Users/myuser/.config/valet/valet.sock
[27-Aug-2019 20:39:06] ERROR: FPM initialization failed
[27-Aug-2019 20:39:06] ERROR: FPM initialization failed
[27-Aug-2019 20:39:17] ERROR: Another FPM instance seems to already listen on /Users/myuser/.config/valet/valet.sock
[27-Aug-2019 20:39:17] ERROR: Another FPM instance seems to already listen on /Users/myuser/.config/valet/valet.sock
[27-Aug-2019 20:39:17] ERROR: FPM initialization failed
[27-Aug-2019 20:39:17] ERROR: FPM initialization failed
```

It seems there are 3 `php-fpm` processes running, though they are all the same PHP version (7.3). Can anyone offer ideas on how to find where the other `php-fpm` process is being triggered from, and how to fix this issue?
Laravel Valet php-fpm already listening on valet sock
In order to block the specific user agent, I included this code in the `server` block:

```nginx
if ($http_user_agent = "Mozilla/5.0 (Linux; Android 4.2.2; SGH-M919 Build/JDQ39) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.169 Mobile Safari/537.22") {
    return 403;
}
```

and it worked as expected.
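If you need to block several agents, or match a substring rather than the exact string, nginx also accepts a regex comparison in the `if` condition; a minimal sketch, with placeholder bot names:

```nginx
# case-insensitive regex match against the User-Agent header
if ($http_user_agent ~* (badbot|scraper)) {
    return 403;
}
```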
How do I block a user agent using nginx? So far I have something like this:

```nginx
if ($http_user_agent = "Mozilla/5.0 (Linux; Android 4.2.2; SGH-M919 Build/JDQ39) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.169 Mobile Safari/537.22") {
    return 403;
}
```

This is from a similar thread on Stack Overflow. I run nginx as a reverse proxy for a CherryPy server. I intend to filter a certain user agent using nginx alone, but the above code doesn't work on my server. Is that the correct way to do this? It wasn't included in any block in the nginx config. Should I add it to the `http` block or the `server` block?
How to block a specific user agent in nginx config
After months, an answer is coming :) The GitHub configuration file seems wrong. The `set` directive is only valid in `server`, `location` and `if` blocks:

```
Syntax:  set $variable value;
Default: —
Context: server, location, if
```

http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set

Good luck!
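So the fix for the placement error is simply moving the declaration into one of those contexts; a minimal sketch, assuming the `$resp_body` variable from the question:

```nginx
server {
    location / {
        set $resp_body "";  # valid here: set is allowed in server, location and if
    }
}
```

(This addresses only the "not allowed here" error; the later `unknown "resp_body" variable` error means the variable is referenced at a point where nothing has defined it yet.)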
I am trying to follow this example: https://gist.github.com/morhekil/1ff0e902ed4de2adcb7a#file-nginx-conf but I'm getting this error:

```
"set" directive is not allowed here
```

What am I doing wrong? Note that I am using OpenResty and invoking nginx as:

```
nginx -p `pwd`/ -c conf/nginx.conf
```

The content of my nginx.conf matches https://gist.github.com/morhekil/1ff0e902ed4de2adcb7a#file-nginx-conf exactly. If I move the set variable to the server section, I no longer get that error but a new one:

```
nginx: [emerg] unknown "resp_body" variable
```
"set" directive is not allowed here
MaxMind no longer supports GeoLite Legacy, just GeoLite2: https://blog.maxmind.com/2018/01/02/discontinuation-of-the-geolite-legacy-databases/
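For nginx this means moving from `ngx_http_geoip_module` to the third-party `ngx_http_geoip2_module`. A minimal sketch, assuming that module is compiled in and a GeoLite2 database has been downloaded (MaxMind now requires a free account and license key for downloads; the path is a placeholder):

```nginx
# maps the client IP to a country code, exposed as $geoip2_data_country_code
geoip2 /etc/nginx/geoip/GeoLite2-Country.mmdb {
    $geoip2_data_country_code country iso_code;
}
```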
Starting a couple of days ago, I can't download the
http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz and
http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
databases, which I use to enable the `ngx_http_geoip_module` module. It was free and available all the time until now. Does anybody know anything about recent changes to this DB?
GeoIP.dat.gz and GeoLiteCity.dat.gz no longer available? Getting 404 trying to load them
This sends a permanent redirect to the client:

```nginx
server {
    listen 80;
    rewrite ^(/users/\w+)$ https://$host$1 permanent;
    ...
}
```

For a negative match you could use:

```nginx
if ($request_uri !~ "^/users/\w+$") {
    return 301 https://$host$request_uri;
}
```
I'm trying to redirect requests to https in nginx, unless the URI is of the form HOST/ANY_STRING_OF_CHARS/END_OF_URI, e.g.:

- http://host.org/about (no redirect)
- http://host.org/users/sign_in (redirects to https://host.org/users/sign_in)

This apparently works in Apache, but I don't understand how the bang works (ignore this if it doesn't really work):

```
RewriteRule !/([a-z]+)$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
```

How can I do this in an nginx rewrite rule? This is not working as I'd hoped:

```nginx
rewrite !/([a-z]+)$ https://$server_name$request_uri redirect;
```

This doesn't do the redirect either, in case I had the logic backwards:

```nginx
rewrite /([a-z]+)$ https://$server_name$request_uri redirect;
```

Help please?
nginx URL rewrite using negative regex?
The problem is that Node.js's HTTP request module isn't following the redirect you are given. See this question for more: "How do you follow an HTTP Redirect in Node.js?" Basically, you can either look through the headers and handle the redirect yourself, or use one of the handful of modules for this. I've used the "request" library and have had good luck with it myself: https://github.com/mikeal/request
I'm brand new to node.js, but I wanted to play around with some basic code and make a few requests. At the moment, I'm playing around with the OCW search (http://www.ocwsearch.com/), and I'm trying to make a few basic requests using their sample search request. However, no matter what request I try to make (even if I just query google.com), it returns:

```
301 Moved Permanently
nginx/0.7.65
```

I'm not too sure what's going on. I've looked up nginx, but most questions about it seem to be asked by people setting up their own servers. I've tried using an https request instead, but that returns an 'ENOTFOUND' error. My code below:

```js
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');

  var options = {
    host: 'ocwsearch.com',
    path: '/api/v1/search.json?q=statistics&contact=http%3a%2f%2fwww.ocwsearch.com%2fabout/',
    method: 'GET'
  };

  var req = http.request(options, function(res) {
    console.log("statusCode: ", res.statusCode);
    console.log("headers: ", res.headers);
    res.on('data', function(d) {
      process.stdout.write(d);
    });
  });
  req.end();

  req.on('error', function(e) {
    console.error(e);
  });
}).listen(8124);

console.log('Server running at http://127.0.0.1:8124/');
```

Sorry if this is a really simple question, and thanks for any help you can give!
Node.js Requests returning 301 redirects
You need to add the certificate for the domain you want to be redirected:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - backend:
          serviceName: foo-prod-front
          servicePort: 80
        path: /
  - host: www.foo.com
    http:
      paths:
      - backend:
          serviceName: foo-prod-front
          servicePort: 80
        path: /
  tls:
  - hosts:
    - foo.com
    - www.foo.com
    secretName: tls-secret
```

I am not completely sure whether `from-to-www-redirect` works with this setup, but you can replace it with the following lines, which do work:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'foo.com' ) {
    rewrite ^ https://www.foo.com$request_uri permanent;
  }
```
I have a Kubernetes setup that looks like this: nginx ingress -> load balancer -> nginx app. After getting an SSL certificate for www.foo.com, I've installed it in my nginx ingress as a secret, and it works as expected: traffic to www.foo.com gets redirected to the https version, and browsers display a secure connection indicator. Great.

What hasn't been easy, however, is getting the ingress to redirect non-www traffic to the www version of the site. I've tried using `kubernetes.io/from-to-www-redirect: "true"`, but it doesn't seem to do anything: navigating to foo.com doesn't redirect me to the www version of the URL, but either takes me to an insecure version of my site, or navigates me to `default backend - 404`, depending on whether I include foo.com as a host with its own path in my ingress.

I have been able to set up a patchy redirect by adding the following to my actual application's nginx config:

```nginx
server {
    listen 80;
    server_name foo.com;
    return 301 http://www.foo.com$request_uri;
}
```

UPDATE: `from-to-www-redirect` DOES work; you just have to reference it with `nginx.ingress.kubernetes.io` rather than `kubernetes.io` as I was. But this only works for foo.com; typing in https://foo.com explicitly causes browsers to display a security warning, and no redirect to the proper URL of https://www.foo.com occurs. Here's my current config for the nginx ingress itself:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
  - host: www.foo.com
    http:
      paths:
      - backend:
          serviceName: foo-prod-front
          servicePort: 80
        path: /
  tls:
  - hosts:
    - www.foo.com
    secretName: tls-secret
```
nginx k8s ingress - forcing www AND https?
If you want the best performance for your clients, just use a CDN. It will take care of gzipping for you and a lot of other stuff. If you need help you can use the `express-cdn` module.

If you don't like CDNs for some reason, your best bet is using nginx. I see it tagged in your question, but you didn't mention anything about it. nginx is way faster than node.js. For nginx, check its `gzip_static` module.

If you still want to use node.js, then `connect-gzip-static` is your best bet. It works almost the same as nginx's gzip_static module: if there's a `.gz` counterpart of the requested file, it will use that; otherwise it falls back to the normal connect static middleware.

Don't forget to compile the files beforehand. If you are using gulp, you might use `gulp-gzip`; if not, just use the gzip command.

There's also `gzippo`, which gzips on the fly but caches the result in memory, so each file is only gzipped once.
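To illustrate the nginx option named above, here is a minimal `gzip_static` sketch (paths are placeholders); a request for /static/app.js is answered with app.js.gz when that file exists and the client accepts gzip:

```nginx
location /static/ {
    root /srv/www/site;
    gzip_static on;   # serve a pre-compressed .gz neighbor when possible
    expires 30d;
}
```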
I don't want to use a library which gzips on the fly, because of the overhead. The website has some dynamic components implemented in node.js. I have some static js and css files as well as their gzipped counterparts. I want to serve the gzipped version only to browsers which support it.

I considered using the static middleware in Express to serve the static files, along with some URL-rewriting middleware to conditionally serve the gzipped files. However, I cannot find any conditional rewrite module. I cannot believe that no one has done this, or that it needs so many workarounds. What am I missing?

On a different note, is serving up static files via node.js too expensive? On the other hand, using Apache for static files and running node.js behind it seems bad as well. What is the least stupid configuration for AWS EC2 hosting?
Serve static gzip files using node.js
PHP-FPM is much better than the old FastCGI handling of PHP. As of PHP 5.3.3, PHP-FPM is in core and the old FastCGI implementation isn't available anymore.

My answer was just downvoted (after being online for quite some time) and I understand why, so here is a list of reasons why PHP-FPM is actually better than the old FastCGI implementation.

First of all, it was known in the PHP community for quite some time that the FastCGI implementation is bad. A page that documents this can be found at https://wiki.php.net/ideas/fastcgiwork where it says:

> php-cgi is not useful in a production environment without additional "crutches" (e.g. spawn-fcgi from the lighttpd distribution or the php-fpm patch). This project assumes integration of such "crutches" and extending php-cgi to support different protocols:
> - daemonization (detach, pid file creation, setup environment variables, setuid/setgid/chroot)
> - graceful restart
> - separate and improve transport layer to allow support for different protocols
> - support for SCGI protocol
> - support for subset of HTTP protocol
> - ...

Here is a list of the things that PHP-FPM does better, taken from http://php-fpm.org/about/:

- PHP daemonization: pid file, log file, setsid(), setuid(), setgid(), chroot().
- Process management: the ability to "gracefully" stop and start PHP workers without losing any queries. This allows gradually updating the configuration and binary without losing any queries.
- Restricting the IP addresses from which requests can come.
- Dynamic number of processes, depending on the load (adaptive process spawning).
- Starting the workers with different uid/gid/chroot/environment and different php.ini options (no need for safe mode).
- Logging STDOUT and STDERR.
- Ability to emergency-restart all the processes in the event of an accidental destruction of the shared-memory opcode cache, if using an accelerator.
- Forcing the completion of a process if set_time_limit() fails.
- Additional features: error header, accelerated upload support, fastcgi_finish_request(), slowlog with backtrace.
I am using this tutorial to install nginx, PHP and MySQL on my new web server. The tutorial uses ISPConfig 3, and there is an option to use either FastCGI or PHP-FPM. I am wondering which of the two is better. In terms of performance and speed, which of the two is best to use with nginx? BTW, I also have memcached and XCache enabled on my server.
FastCGI vs PHP-FPM using Nginx web server
Since you're using a proxy that translates https requests into http, you need to configure Django to allow POST requests from a different scheme (since Django 4.0) by adding this to settings.py:

```python
CSRF_TRUSTED_ORIGINS = ["https://yourdomain.com", "https://www.yourdomain.com"]
```

If this does not solve your problem, you can temporarily set `DEBUG = True` in production and try again. On the error page, you will see a "Reason given for failure" that you can post here.
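Separately, since the proxy block in the question does not forward the original scheme, it may also help to pass it along so Django can tell when a request arrived over https; a minimal sketch of the relevant block (the `web` upstream name is taken from the question), paired with Django's `SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")` setting:

```nginx
location / {
    proxy_pass http://web;
    proxy_set_header Host $host;
    # let Django see the scheme the client actually used
    proxy_set_header X-Forwarded-Proto $scheme;
}
```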
I'm running a simple Django application without any complicated setup (mostly defaults, plus django-allauth and Django REST Framework). The infrastructure for running it both locally and remotely is in a docker-compose file:

```yaml
version: "3"
services:
  web:
    image: web_app
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn my_service.wsgi --reload --bind 0.0.0.0:80 --workers 3
    env_file: .env
    volumes:
      - ./my_repo/:/app:z
    depends_on:
      - db
    environment:
      - DOCKER=1
  nginx:
    image: nginx_build
    build:
      context: nginx
      dockerfile: Dockerfile
    volumes:
      - ./my_repo/:/app:z
    ports:
      - "7000:80"
  ... # db and so on
```

As you see, I'm using Gunicorn to serve the application and Nginx as a proxy (for static files and the Let's Encrypt setup). The Nginx container has some customizations:

```dockerfile
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
```

And the nginx.conf file is a reverse proxy with a static mapping:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /app/my_repo/static/;
    }
}
```

Running this on the server after setting up Let's Encrypt in the Nginx container works without any issue, but locally I get the "CSRF verification failed. Request aborted." error every time I submit a form (e.g. creating a dummy user in Django Admin). I exposed the web port and used it to submit the forms, and that worked. Because of that, I deduce that something is missing in the Nginx config, or something needs to "tell" Django how to handle it. So, what am I missing, and how should I investigate this?
Django returning "CSRF verification failed. Request aborted. " behind Nginx proxy locally
The official Express.js site has a guide for this. Instructions:

1. `app.set('trust proxy', true)` in your JS.
2. `proxy_set_header X-Forwarded-For $remote_addr;` in nginx.conf.

You can then read off the client IP address from the `req.ip` property.
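A minimal sketch of the nginx side in context, assuming the app listens on port 3000:

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    # pass the real client address through to Express
    proxy_set_header X-Forwarded-For $remote_addr;
}
```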
I have NGINX running as a reverse proxy which forwards all http and https traffic to my node.js application, which listens on localhost:port. However, the node.js application sees all incoming requests as coming from ::ffff:127.0.0.1. How can I change the NGINX config so that the real IP is passed through and forwarded to the node.js application?

```nginx
server {
    listen 80;
    listen [::]:80;
    listen 443;
    listen [::]:443;

    root /var/www/example.com/html;
    index index.html index.htm;
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:myport;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # Requests for socket.io are passed on to Node on port x
    location ~* \.io {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:myport;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Edit: The express.js/node.js application processes req.ip and has `app.enable('trust proxy');` at startup.
How to forward request IP from NGINX to node.js application?
I cleared the cache completely using `sudo php artisan cache:clear`. Afterwards, the problem never occurred again. As opposed to Ismoil's answer: never make the Laravel storage folder 777. It poses a security risk.
I'm struggling with the Laravel cache, which is located in storage/framework/cache. I've got a job running that saves to a certain cache, but every time the job runs, this error occurs:

```
ERROR: file_put_contents(/var/www/html/---/storage/framework/cache/data/3c/c7/3cc7fd54b5a3cb08ceb0754f58371cec1196159a): failed to open stream: Permission denied
```

Details:

- When I save to the same cache (e.g. with the same key), there is no error.
- I am running on nginx.
- I have already run `sudo chown -R www-data:www-data storage` in the folder where the Laravel application is located, as well as `sudo chmod -R 775 /home///storage`.
- Performing `ls -lh /storage/framework/cache` returns the following: `drwsrwsr-x 55 www-data www-data 4.0K Jan 18 20:56 data`.

Now I'm just wondering what the full, correct Laravel permission set is and how to restore that setup. Any help is appreciated! Thank you in advance.
Laravel: file_put_contents() permission denied — correct storage/framework/cache permissions?
As there is no value for `sendmail_from`, you need to set one in php.ini:

```ini
sendmail_from = "[email protected]"
```

Or in the headers when you call mail():

```php
mail($to, $subject, $message, 'From: [email protected]');
```

The email address should follow RFC 2822, for example:

```
[email protected]
You <[email protected]>
```

Failing that, have you actually installed a working email system? If not, you can install Postfix with the following command:

```
sudo apt-get install postfix
```

See below for more information on configuring Postfix for use with PHP on Ubuntu: https://serverfault.com/questions/119105/setup-ubuntu-server-to-send-mail
I have searched everywhere for this and I really want to resolve it. In the past I just ended up using an SMTP service like SendGrid for PHP and a mailing plugin like SwiftMailer. However, this time I want to use plain PHP. Basically my setup (I am new to server setup; this is my personal setup following a tutorial):

- Nginx
- Rackspace Cloud
- PHP 5.3
- PHP-FPM
- Ubuntu 11.04

My phpinfo() returns this for the mail entries:

```
mail.log                     no value
mail.add_x_header            On
mail.force_extra_parameters  no value
sendmail_from                no value
sendmail_path                /usr/sbin/sendmail -t -i
SMTP                         localhost
smtp_port                    25
```

Can someone help me figure out why mail() will not work? My script works on all other sites; it is a normal mail command. Do I need to set up logs or enable some PHP port on the server? My sample script:

```php
Thanks, your message was sent and our team will be in touch shortly.
";
$headers = "MIME-Version: 1.0" . "\r\n";
$headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
$headers .= 'From: <[email protected]>' . "\r\n";
// SEND MAIL
mail($to,$subject,$message,$headers);
?>
```

Thanks
How can I use PHP Mail() function within PHP-FPM? On Nginx?
You'll very likely want to pass the URL (the URI) to the auth-request endpoint as well. You can do this in one go:

```nginx
location = /api/auth {
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-METHOD $request_method;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_pass http://;
}
```

Bonus: I also passed the method! :tada:
I'm trying to figure out whether it is possible to forward a query parameter from the original URL to the `auth_request` handler/service. Users should be able to add the API token as a query parameter like this:

```
https://example.com/api/user?token=237263864823674238476
```

and not via a header or cookie. Can I access the `token` parameter somehow in the auth service? Or write the `token` query parameter into a custom header with NGINX? Tried this so far:

```nginx
location = /api/user {
    auth_request /auth;
    proxy_set_header X-auth-token-from-query $arg_token;
    proxy_pass http://;
}
```

The /auth endpoint doesn't get the X-auth-token-from-query header, but after it returns a 200, the upstream proxy does get the header.
nginx auth_request: access original query parameter
That loop message suggests that /files/whatever/public/index.html doesn't exist, so the try_files in `location /` doesn't find $uri when it's equal to /index.html, so the try_files always internally redirects those requests to the @ location, which does the external redirect.

Unless you have a more complicated setup than you've outlined, I don't think you need to do so much. You shouldn't need external redirects (or even internal redirects) or server-side cookie sending for a one-file JS app. The regex match for app and api wasn't quite right, either.

```nginx
root /files/whatever/public;
index index.html;

location / {
    try_files $uri /index.html =404;
}

# Proxy requests to "/auth" and "/api" to the server.
location ~ ^/(auth|api) {
    proxy_pass http://application_upstream;
    proxy_redirect off;
}
```
I'm trying to build a single-page app that utilizes the HTML5 App Cache, which will cache a whole new version of the app for every distinct URL, thus I must redirect everyone to / and have my app route them afterward (this is the solution used on devdocs.io). Here's my nginx config. I want all requests to send a file if it exists, send requests to my API at /auth and /api, and redirect all other requests to index.html. Why is the following configuration causing my browser to say that there is a redirect loop? If the user hits location block #2 and his route doesn't match a static file, he's sent to location block #3, which will redirect him to "/", which should hit location block #1 and serve index.html, correct? What is causing the redirect loop here? Is there a better way to accomplish this?

```nginx
root /files/whatever/public;
index index.html;

# If the location is exactly "/", send index.html.
location = / {
    try_files $uri /index.html;
}

location / {
    try_files $uri @redirectToIndex;
}

# Set the cookie of the initialPath and redirect to "/".
location @redirectToIndex {
    add_header Set-Cookie "initialPath=$request_uri; path=/";
    return 302 $scheme://$host/;
}

# Proxy requests to "/auth" and "/api" to the server.
location ~* (^\/auth)|(^\/api) {
    proxy_pass http://application_upstream;
    proxy_redirect off;
}
```
Nginx config for single page app with HTML5 App Cache
Thanks to Sergey Moiseev's comment, the answer is quite simple. Go to your configuration file and add the following:

```nginx
types {
    text/plain sh;
}
```

This maps the extension .sh to the MIME type text/plain.
I have a directory index configured, and every time I click a file it gets downloaded. I want to tell nginx to show the content of text files instead of downloading them. I still want the download to work when I use wget on those text files. How can I do that?
How to configure nginx to show file content instead of downloading it?
I'm not quite sure what your hosting company means by their comment, but you won't be able to run BOTH Apache and Nginx on port 80. Once one is bound to port 80, the other will be unable to bind to it.

Probably the best configuration in your current situation would be to put Nginx on port 80 and Apache on 8000 or similar. Use nginx to serve static files (see try_files, because "if" is evil) and then proxy all requests for PHP to port 8000 using the HTTP proxy module (a sketch follows below).

The other common configuration for PHP with Nginx is to use PHP-FPM and proxy via FastCGI; just google "PHP-FPM Nginx {Your OS} tutorial" for a tutorial. There is much debate about the performance of PHP-FPM vs mod_php, but in my personal experience I have found PHP-FPM more performant.
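A minimal sketch of that split, assuming Apache has been moved to port 8000 on the same host (paths are placeholders):

```nginx
server {
    listen 80;
    root /var/www/site;

    # serve static files directly; hand everything else to Apache
    location / {
        try_files $uri $uri/ @apache;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```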
I'm trying to install Nginx on my current cloud CentOS server, which has Apache httpd installed and running. My hosting company tells me that Nginx and Apache can both run on port 80 at the same time, so my plan was to translate the .htaccess and Apache conf of my WordPress sites to Nginx after installing it via yum. I also Googled about this, and some people suggest using Nginx as a reverse proxy to serve static files only, but running Apache with PHP, because Apache has PHP embedded and would consume less memory, even though it doesn't support multiple concurrent requests like Nginx. My gut feeling is that switching everything over to Nginx would be beneficial, but I'm unsure at this stage. Also, is there anything I should watch out for when doing this switchover? What would you do if you were in this situation?
Apache and Nginx both on port 80 [closed]
Try `os.date("!%Y-%m-%dT%TZ")`, or `os.date("!%Y-%m-%dT%TZ", t)` if `t` has the date in seconds since the epoch.
How would you convert a timestamp to ISO 8601 format (such as 2009-01-28T21:49:59.000Z) in Lua? I'm specifically trying to do it using the HttpLuaModule in Nginx.
Timestamp to ISO 8601 in Lua
`uwsgi_param` sets a WSGI environ key of the given name for the application. You can use this for headers, which follow the CGI convention of using an `HTTP_` prefix. The equivalent of your `proxy_set_header` would be:

```nginx
uwsgi_param HTTP_X_GEOIP_COUNTRY $geoip_country_code;
```

Note that the header name must be in upper case, with dashes replaced by underscores, to be recognized as a valid header in WSGI.

Alternatively, it looks like the environ is accessible in Flask as `request.environ`, so you could keep your `uwsgi_param` the same but read it as `request.environ['GEOIP_COUNTRY_CODE']`. This is probably preferable, actually, since you can distinguish it from actual request headers that way.
I set up a python/flask/uwsgi+nginx web app and it works fine. I want to use GeoIP, and I set it up on the nginx side:

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/qbaka-visit.sock;
    ...
    uwsgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
}
```

But now I don't know how to read this property in Python. Prior to uwsgi I used the simple Flask built-in web server + nginx proxy_pass, in which case I used `proxy_set_header X-Geo-Country $geoip_country_code;` and read this argument using `request.headers`, but for uwsgi params I couldn't figure out how to read them.
How to read UWSGI parameters in python/flask passed from nginx
I did heaps of googling before coming here, but with some more just now, within 5 minutes I had my answer :P Seems I'm not the only person to have this issue:

```nginx
error_page 403 /e403.html;
location = /e403.html {
    root html;
    allow all;
}
```

http://www.cyberciti.biz/faq/unix-linux-nginx-custom-error-403-page-configuration/

Seems that I was right in thinking that access to my error page was getting blocked.
I'm trying to display the error page in /temp/www/error403.html whenever a 403 error occurs. This should happen whenever a user tries to access the site via https (SSL) and its IP is in the blockips.conf file, but at the moment it still shows nginx's default error page. I have the same code for my other server (without any blocking) and it works. Is it blocking the IP from accessing the custom 403 page? If so, how do I get it to work?

```nginx
server {
    # ssl
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/site.in.crt;
    ssl_certificate_key /etc/nginx/ssl/site.in.key;
    keepalive_timeout 70;
    server_name localhost;

    location / {
        root /temp/www;
        index index.html index.htm;
    }

    # redirect server error pages to the static page
    error_page 403 /error403.html;
    # location = /error403.html {
    #     root /temp/www;
    # }

    # add trailing slash if missing
    if (-f $document_root/$host$uri) {
        rewrite ^(.*[^/])$ $1/ permanent;
    }

    # list of IPs to block
    include blockips.conf;
}
```

Edit: Corrected the error_page code from 504 to 403, but I still have the same issue.
Return custom 403 error page with nginx
> Are these GET requests (all in the same TCP/IP connection) processed by the server in parallel or in sequence?

They are processed in sequence. This is called pipelining. Pipelining is part of HTTP/1.1, and it means that the client need not wait for the current request to complete before sending the next request over a persistent connection. It can send several requests over the same connection without waiting for responses to previous requests. The requests are processed in FIFO manner, i.e. the client can send several requests in sequence, and the server is supposed to send a response to each request in the same order the request was received. So if the server you are using is HTTP/1.1 compliant, the requests should be handled in sequence.
I get a lot of Googlebot requests. Googlebot requests up to 11 different files via 11 HTTP GET requests, all in one single TCP/IP connection. Are these GET requests (all in the same TCP/IP connection) processed by the server in parallel or in sequence? Or is it up to the server? In that case, how does Nginx handle this?
Multiple HTTP GET requests in one TCP/IP connection - processed parallel or sequential
By default, nginx outputs the directory index in UTC time. If you want it to display the time in your local timezone, set the `autoindex_localtime` directive to on:

```nginx
autoindex_localtime on;
```
How do I fix the Nginx timezone? I've configured nginx to serve a directory, but the datetime of creation is one hour behind my real time. I've added this to /etc/init.d/nginx:

```
export TZ='Europe/Bratislava'
```

then:

```
sudo service nginx reload
sudo service nginx restart
```

But it didn't help; there should be 14:19 instead of 13:19.

EDIT: I tried to change the Ubuntu default timezone, but the datetimes aren't changed:

```
sudo dpkg-reconfigure tzdata
```
Nginx shows wrong time/timezone
Use the rewrite directive within the proper location block. So, for example, you have a basic location which will handle all requests:

```nginx
location / {
    # your rules here
}
```

You will need to add another block which does the handling of the specific path for you:

```nginx
location /mypath {
    rewrite ^/mypath$ /real/path/to/file/thisfile.html;
}
```

Also, to make your server treat thisfile.html as the default in that block, you can use `try_files` with `thisfile.html`. It is all well explained on the official Nginx rewrite module page.
This should be really easy to do, but I'm hitting my head against the wall. If I get a request for www.mysite.com/mypath, I want to serve the content of www.mysite.com/myotherpath/thisfile.html. How can I do this with an nginx config?
nginx rewrite virtual directory to file
nginx can forward via the HTTP protocol, so just point it to the correct port and you're set:

```nginx
server {
    location /anything {
        proxy_pass http://localhost:8080/whatever;
    }
}
```
It seems that nginx is usually used with PHP, Ruby and Python. Does anyone have an example of how to set up nginx to work with Jetty/Tomcat in the backend? Thanks.
How to configure nginx to work with Jetty6 webserver?
OK, so I finally managed to fix my issue. Here are the different steps which made this work:

1. nginx: I don't really know if this is needed, but as my application is running with Unicorn, I added this to my nginx conf:

```nginx
upstream websocket {
    server 127.0.0.1:28080;
}

server {
    location /cable/ {
        proxy_pass http://websocket/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
```

And then in my config/environments/development.rb file:

```ruby
config.action_cable.url = "ws://my.app.com/cable/"
```

2. Allowed request origin: I then noticed that my connection was refused even though I was using `ActionCable.server.config.allowed_request_origins` in my config/environments/development.rb file. I am wondering if this is due to the development default of http://localhost:3000, as stated in the documentation. So I added this:

```ruby
ActionCable.server.config.disable_request_forgery_protection = true
```

I don't have a production environment yet, so I am not yet able to test how it will behave there.

3. Redis password: as stated in the documentation, I was using a config/redis/cable.yml, but I was getting this error:

```
Error raised inside the event loop: Replies out of sync: #
/var/www/webroot/ROOT/public/shared/bundle/ruby/2.2.0/gems/em-hiredis-0.3.0/lib/em-hiredis/base_client.rb:130:in `block in connect'
```

So I understood that the way I was setting the password for my Redis server was not good. In fact, you have to do something like this:

```yaml
development:
  <<: *local
  :url: redis://user:pwd@my.redis.com:6379
  :host: my.redis.com
  :port: 6379
```

And now everything is working fine, and ActionCable is really impressive. Maybe some of my issues were trivial, but I am sharing them and how I resolved them so everyone can pick something up if needed.
I am using Jelastic for my development environment (not yet in production). My application is running with Unicorn, but I discovered websockets with ActionCable and integrated it into my application. Everything works fine locally, but when deploying to my Jelastic environment (with the default NGINX/Unicorn configuration), I get this message in my JavaScript console, and I see nothing in my access log:

```
WebSocket connection to 'ws://dev.myapp.com:8080/' failed: WebSocket is closed before the connection is established.
```

I used to have this on my local environment and solved it by adding the needed `ActionCable.server.config.allowed_request_origins` in my config file, so I double-checked my development config for this and it is OK. That's why I was wondering if there is something specific to the NGINX config beyond what is explained on the ActionCable git page:

```
bundle exec puma -p 28080 cable/config.ru
```

For my application, I followed everything from the linked guide, but nothing is mentioned about NGINX configuration. I know that websockets with ActionCable are quite new, but I hope someone can give me a lead on this. Many thanks.
NGINX configuration for Rails 5 ActionCable with puma
In order for Sinatra to correctly assemble the URL used for redirects, it needs to be able to determine whether the request is using SSL, so that the redirect can be made using http or https as appropriate.

Obviously the actual call to Thin isn't using SSL, as this is being handled by the front-end web server, and the proxied request is in the clear. We therefore need a way to tell Sinatra that it should treat the request as secure, even though it isn't actually using SSL.

Ultimately, the code that determines whether the request should be treated as secure is in the `Rack::Request#ssl?` and `Rack::Request#scheme` methods. The `scheme` method examines the `env` hash to see if one of a number of entries is present. One of these is `HTTP_X_FORWARDED_PROTO`, which corresponds to the `X-Forwarded-Proto` HTTP header. If this is set, the value is used as the protocol scheme (http or https).

So if we add this HTTP header to the request when it is proxied from nginx to the back end, Sinatra will be able to correctly determine when to redirect to https. In nginx we can add headers to proxied requests with `proxy_set_header`, and the scheme is available in the `$scheme` variable. So adding the line `proxy_set_header X-Forwarded-Proto $scheme;` to the nginx configuration after the `proxy_pass` line should make it work.
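A minimal sketch of where that line goes, using the upstream from the question:

```nginx
location / {
    proxy_pass http://thin_cluster;
    # tell Rack/Sinatra which scheme the client actually used
    proxy_set_header X-Forwarded-Proto $scheme;
}
```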
I have a Sinatra app running in nginx (using Thin as a back proxy), and I'm using `redirect '/'` statements in Sinatra. However, when I access the site under https, those redirects send me to http://localhost/ rather than to https://localhost/ as they should. Currently, nginx passes control to Thin with the directive `proxy_pass http://thin_cluster;`, where `thin_cluster` is:

```nginx
upstream thin_cluster {
    server unix:/tmp/thin.cct.0.sock;
}
```

How can I fix this?
How to fix Sinatra redirecting https to http under nginx
The solution that works for me is to add the directives `proxy_intercept_errors` and `error_page` to the `location /` in NGINX:

```nginx
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
        proxy_intercept_errors on;
        error_page 404 = /index.html;
    }

    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
```

Now, NGINX will return the /index.html, i.e. the SPA from the `spa-server`, whenever an unknown URL is requested. Still, the URL is available to Angular, and the router will immediately resolve it within the SPA. Of course, now the SPA is responsible for handling "real" 404s. Fortunately, this is not a problem and is a good practice within the SPA anyway.

UPDATE: Thanks to @dan
I have NGINX set up as a reverse proxy for a virtual network of docker containers, itself running as a container. One of these containers serves an Angular 4 based SPA with client-side routing in HTML5 mode. The application is mapped to location / on NGINX, so that http://server/ brings you to the SPA home screen.

```nginx
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
    }

    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
```

The Angular router changes the URL to http://server/home or other routes when navigating within the SPA. However, when I try to access these URLs directly, a 404 is returned. This error originates from the `spa-server`, because it obviously does not have any content for these routes. The examples I found for configuring NGINX to support this scenario always assume that the SPA's static content is served directly from NGINX, and thus `try_files` is a viable option. How is it possible to forward any unknown URLs to the SPA so that it can handle them itself?
NGINX, proxy_pass and SPA routing in HTML5 mode
The problem was my Host header in the cloud upstream. I had:

```nginx
proxy_set_header Host $http_host;
```

But it needed to be:

```nginx
proxy_set_header Host my-domain.com;
```
I have an Nginx reverse proxy inside a docker container, which listens on port 3000 and is exposed on 3002:

```
docker run -p "3002:3000" ....
```

The idea is that this reverse proxy will proxy /my-app to the instance running on my laptop on port 8080, and /my-app/api to the cloud instance at https://my-domain. Here's the configuration:

```nginx
upstream my-laptop {
    server host.docker.internal:8080;  # this is a magic hostname for the laptop's IP address.
    keepalive 64;
}

upstream cloud {
    server my-domain.com:443;
    keepalive 64;
}

server {
    listen 3000;

    include ssl/ssl-certs.conf;
    include ssl/ssl-params.conf;

    location /my-app {
        proxy_pass http://my-laptop;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /my-app/api {
        proxy_pass https://cloud;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    ...
}
```

The issues are:

1. When I hit https://localhost:3002/my-app, I get a 301 response to /my-app/ (trailing slash). I don't know why that is. The local app instance is shown in the browser, so I guess I can let it slide for the moment?
2. When I hit https://localhost:3002/my-app/api/students, I get a 301 response to https://cloud/my-app/api/students. This causes CORS issues, of course, and the endpoint doesn't return data.

Now, I have configured reverse proxies a couple of times, so I am completely shocked that I'm not seeing what's wrong; this is not my first time. I have tried tweaking the upstreams and the proxy_set_header directives, and compared with another reverse proxy that I have for a different app; I'm out of ideas. What am I doing wrong?
Why is my Nginx reverse proxy doing a 301 redirect instead of proxying?
You can use the `map` directive to rewrite your header:

```nginx
map $upstream_http_locationafterlogon $new_location {
    ~regexp new_value;
}

proxy_hide_header LocationAfterLogon;
add_header LocationAfterLogon $new_location;
```

See the documentation: http://nginx.org/en/docs/http/ngx_http_map_module.html
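To make that more concrete for this case, here is a sketch of a map that rewrites the internal address to the public one (the addresses are placeholders; note that `map` must sit at `http` level, and the named capture carries the path across):

```nginx
map $upstream_http_locationafterlogon $new_location {
    default  $upstream_http_locationafterlogon;
    # rewrite the internal backend address to the public domain
    "~^http://192\.168\.\d+\.\d+:8080(?<path>/.*)$"  https://example.com$path;
}
```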
Background: I've got a server running a Tomcat application hidden behind an Apache proxy. The proxy provides a more user-friendly URL as well as SSL encryption with automatic redirects, so that the app is only accessible over https. I'm busy migrating this to an nginx proxy.

One of the issues I've had is that upon login, my app sends back a "LocationAfterLogon" header in the HTTP response, in the form of http://192.168.x.x:8080/myapp/index.jsp. That IP address is from the proxied server and not visible on the internet, so the browser gets a connection error trying to navigate to it. As a workaround, I've used these nginx directives:

- `proxy_hide_header`: to hide the LocationAfterLogon header coming back from the proxied server
- `add_header`: to add a new LocationAfterLogon URL

So my config looks as follows:

```nginx
# header for location after logon of demo app
add_header LocationAfterLogon http://example.com/demo/index.jsp;
# hide the real LocationAfterLogon
proxy_hide_header LocationAfterLogon;
```

The problem: I need to be able to do a regex replace or similar on LocationAfterLogon, because it won't always point to index.jsp, depending on which URL was intercepted by the login page. I am aware that I could also rewrite the Tomcat app to send back a relative URL instead, but I'd like to do it all in nginx config. I've also read about nginx `more_set_headers`; I haven't tried it yet. Does it allow me to edit headers? Apache has the `Header edit` directive which I was using previously, so I'm looking for something like that.

TL;DR: Is it possible to edit a header value using a regex replace or similar in nginx?
Edit a header value in nginx
Ultimately, your bottlenecks are not going to be in the particular routing mechanisms for requests, unless you really muck up the configuration. So it's arguably a waste of time to focus too much on basing decisions on things at that level. Go watch my talk from PyCon for some context on where the bottlenecks are really going to be: http://lanyrd.com/2012/pycon/spcdg/
I am experimenting with various setups for deploying Django apps. My first choice was a simple Apache server with mod_wsgi, which I had used before for private projects. Since the current deployment is for public use, I am looking at various options. Based on the information available online, it seems good to have nginx serve static content as well as act as a reverse proxy for a dynamic content server. Given my previous knowledge of Apache, I was considering using it for the dynamic content. But then I came across Gunicorn and later uWSGI. Currently I am implementing uWSGI. I see that it supports multiple protocols, including http.

What are the advantages of using one protocol over the other? I understand that my requirement of scaling the app over multiple servers means I cannot use Unix sockets, which seem to be recommended in some tutorials. So the remaining choices are a TCP socket speaking either the uwsgi protocol or http. Do they have much theoretical difference? I am not aware of the details of the uwsgi protocol and would like to know: would using it instead of the http protocol make things faster?
Is uwsgi protocol faster than http protocol?
Unfortunately, nginx doesn't support sub-domains on IP addresses like that. You would either have to modify the client's hosts file (which you said you didn't want to do)...

Or you can just set up nginx to route by path, like so:

```nginx
location /jenkins {
    proxy_pass http://jenkins:8080;
    ...
}

location /other-container {
    proxy_pass http://other-container:8080;
}
```

which would allow you to access Jenkins at 192.168.1.2/jenkins.

Or you can try to serve your different containers through different ports. E.g.:

```nginx
server {
    listen 8081;
    location / {
        proxy_pass http://jenkins:8080;
        ...
    }
}

server {
    listen 8082;
    location / {
        proxy_pass http://other-container:8080;
        ...
    }
}
```

And then access Jenkins from 192.168.1.2:8081/.
I'm looking for a way to configure Nginx to access hosted services through a subdomain of my server. Those services and Nginx are instantiated with Docker Compose. In short, when typing jenkins.192.168.1.2, I should reach Jenkins hosted on 192.168.1.2 and proxied by Nginx.

Here is a quick look at what I currently have. It doesn't work without a top-level domain name, so it works fine on play-with-docker.com, but not locally with, for example, 192.168.1.2.

```nginx
server {
    server_name jenkins.REVERSE_PROXY_DOMAIN_NAME;

    location / {
        proxy_pass http://jenkins:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

To see what I'm aiming for, have a look at https://github.com/Ivaprag/devtools-compose. My overall goal is to access remote docker containers without modifying clients' DNS service.
Subdomains, Nginx-proxy and Docker-compose
Replace:

```nginx
proxy_pass https://node_app_production;
```

with:

```nginx
proxy_pass http://node_app_production;
```

Restart nginx and you should be all set. See "nginx proxy pass Node, SSL?"
I am configuring my node.js app with nginx. It works fine for http, but it is not working for https. When I try to access the secure domain, I get this error:

```
502 Bad Gateway
nginx/1.4.6 (Ubuntu)
```

Here is my nginx conf file:

```nginx
upstream node_app_dev {
    server 127.0.0.1:3000;
}

upstream node_app_production {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name mydomain.com;

    access_log /var/log/nginx/dev.log;
    error_log /var/log/nginx/dev.error.log debug;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarder-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://node_app_dev;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name mydomain.com;

    access_log /var/log/nginx/secure.log;
    error_log /var/log/nginx/secure.error.log debug;

    ssl on;
    ssl_certificate certs/mycert.crt;
    ssl_certificate_key certs/mykey.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarder-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass https://node_app_production;
        proxy_redirect off;
    }
}
```
node.js app with nginx 502 bad gateway error
After struggling with this same issue for some days, I found that the problem was the firewall preventing the websocket from working. I had Panda Antivirus installed with its firewall enabled. When I turned it off, used the Windows firewall instead, and opened the incoming port, it started working. Hope it helps.
I'm new to front-end web app development. I'm receiving a WebSocket connection failure as follows:

```
WebSocket connection to 'ws://127.0.0.1:7983/websocket/' failed: Error in connection establishment: net::ERR_EMPTY_RESPONSE
```

I looked up this WebSocket error and was directed to the following pages:

- Shiny & RStudio Server: "Error during WebSocket handshake: Unexpected response code: 404"
- WebSocket connection failed with nginx, nodejs and socket.io
- Rstudio and shiny server proxy setting

I then downloaded nginx on my Windows 7 machine, added the following to nginx.conf, saved, and executed runApp():

```nginx
location /rstudio/ {
    rewrite ^/rstudio/(.*)$ /$1 break;
    proxy_pass http://localhost:7983;
    proxy_redirect http://localhost:7983/ $scheme://$host/rstudio/;
}
```

This didn't seem to solve the issue. I think I may need to add some extra stuff to the nginx.conf file or put it in a specific directory. Please assist. Thanks!

EDITED the nginx.conf script as follows:

```nginx
location /rstudio/ {
    rewrite ^/rstudio/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:5127;
    proxy_redirect http://127.0.0.1:5127/ $scheme://$host/rstudio/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```
Shiny Websocket Error
nginx is built to be efficient with memory, and its default configurations are also light on memory usage. Nothing will go wrong if you add more buffers, but nginx will consume more RAM.

Eight buffers was probably chosen as the smallest effective count that was a power of two. Four would be too few, and 16 would be greater than the default needs of nginx.

The "too many buffers" answer depends on your performance needs, memory availability, and request concurrency. The "good" threshold to stay under is the point at which your server has to swap memory to disk. The "best" answer is: as few buffers as are necessary to ensure nginx never writes to disk (check your error logs to find out if it does).

Here are the nginx configurations I use for a large PHP-FPM application on web hosts with 32 GB of RAM:

```nginx
client_body_buffer_size     2m;
client_header_buffer_size   16k;
large_client_header_buffers 8 8k;
fastcgi_buffers             512 16k;
fastcgi_buffer_size         512k;
fastcgi_busy_buffers_size   512k;
```

These configurations were determined through some trial and error and by increasing values from nginx configuration guides around the web. The header buffers remain small because HTTP headers tend to be lightweight. The client and fastcgi buffers have been increased to deal with complex HTML pages and an XML API.
Reading the nginx documentation, the `proxy_buffers` directive has this explanatory message:

> This directive sets the number and the size of buffers, into which will be read the answer, obtained from the proxied server. By default, the size of one buffer is equal to the size of page. Depending on platform this is either 4K or 8K.

The default is eight 4k or 8k buffers. Why did the authors of nginx choose eight, and not a higher number? What could go wrong if I add more buffers, or a bigger buffer size?
How many nginx buffers is too many?
You can still use it, but you will have to re-define/re-write it as shown here: http://www.php.net/manual/en/function.getallheaders.php#84262
I am trying to switch from Apache to nginx on my server. The only problem is the getallheaders() function I used in my PHP scripts, which does not work with nginx. I have tried the user-contributed notes on the PHP site for the getallheaders function, but they do not return all request headers. Please tell me how to solve this problem. I would really like to switch to nginx.
PHP getallheaders alternative
You can set your root to the common prefix of the two paths you want to use (in this case, it's /), then just specify the rest of the paths in the try_files args:

```nginx
location /static/ {
    root /;
    try_files /tmp$uri /srv/www/site$uri =404;
    expires 30d;
    access_log off;
}
```

It may seem disconcerting to use `root /` in a location, but the try_files will ensure that no files outside of /tmp/static or /srv/www/site/static will be served.
I have an Nginx config that works fine and serves static files properly:

```nginx
location /static/ {
    alias /tmp/static/;
    expires 30d;
    access_log off;
}
```

But what I want now is this: if a static file doesn't exist in /tmp/static, Nginx should look for the file in /srv/www/site/static. I am not sure how to achieve that. I have tried a few things with try_files, but I don't know how to use it properly.
Getting Nginx to serve static files from several sources
> Is a Linode 1GB enough?

Well, it'll all run on that. You don't say what sort of load you want to support, though. So here's what you want to do:

1. Add some basic monitoring into the mix: mem/cpu/disk/network traces, and record them.
2. Script your server so you can go from an empty VM to a working system automatically. There's all sorts of stuff out there: puppet/chef/vagrant. You're already using Python, so ansible might suit you.
3. Now test it. Fire up a local VM (or hire a Linode one by the hour) and stress-test it.
4. Rent a bigger one and test that too.

Now you know what size VM you need and when you'll need to switch.
I want to deploy a Django project with the following stack: Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL.

I was thinking of using a Linode 1GB server, which has:

- 1 GB RAM
- 1 CPU Core
- 24 GB SSD Storage
- 2 TB Transfer
- 40 Gbit Network In
- 125 Mbit Network Out

At the beginning I expect to have very low traffic. Is a Linode 1GB enough, or should I choose a better one with more RAM/cores? I would like to choose the minimum one that fits my needs now and upgrade as the traffic grows.

Bonus general question: How can I calculate the server requirements for a specific stack and traffic?
Minimum server requirements for a django project [closed]
The problem here is that both mlflow and nginx are trying to run on the same port.

First, let's deal with nginx:

1.1 In /etc/nginx/sites-enabled, make a new file (sudo nano mlflow) and delete the existing default.

1.2 In the mlflow file:

    server {
        listen YOUR_PORT;
        server_name YOUR_IP_OR_DOMAIN;

        auth_basic "Administrator's Area";
        auth_basic_user_file /etc/apache2/.htpasswd; # read the link below on setting a username and password in nginx

        location / {
            proxy_pass http://localhost:8000;
            include /etc/nginx/proxy_params;
            proxy_redirect off;
        }
    }

1.3 Restart nginx: sudo systemctl restart nginx

2. On your server, run mlflow:

    mlflow server --host localhost --port 8000

Now if you try to access YOUR_IP_OR_DOMAIN:YOUR_PORT in your browser, an auth popup should appear; enter your user and password and you are in mlflow.

There are two options to tell the mlflow client about the credentials:

3.1 Set the username and password as environment variables:

    export MLFLOW_TRACKING_USERNAME=user
    export MLFLOW_TRACKING_PASSWORD=pwd

3.2 Or edit, in your venv/lib/python3.6/site-packages/mlflow/tracking/_tracking_service/utils.py, the function:

    def _get_rest_store(store_uri, **_):
        def get_default_host_creds():
            return rest_utils.MlflowHostCreds(
                host=store_uri,
                username=...,  # replace with your nginx user
                password=...,  # replace with your nginx password
                token=os.environ.get(_TRACKING_TOKEN_ENV_VAR),
                ignore_tls_verification=os.environ.get(_TRACKING_INSECURE_TLS_ENV_VAR) == 'true',
            )

In the .py file where you work with mlflow:

    import mlflow

    remote_server_uri = "YOUR_IP_OR_DOMAIN:YOUR_PORT"  # set to your server URI
    mlflow.set_tracking_uri(remote_server_uri)
    mlflow.set_experiment("/my-experiment")
    with mlflow.start_run():
        mlflow.log_param("a", 1)
        mlflow.log_metric("b", 2)

A link to the nginx authentication docs: https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/
As I am logging my entire models and params into mlflow, I thought it would be a good idea to have it protected with a username and password.

I use the following command to run the mlflow server:

    mlflow server --host 0.0.0.0 --port 11111

This works perfectly: in my browser I type myip:11111 and I see everything (which, eventually, is the problem).

If I understood the documentation and this thread (https://groups.google.com/forum/#!topic/mlflow-users/E9QW4HdS8a8) correctly, I should use nginx to create the authentication.

I installed nginx open source and apache2-utils, and created a user and password:

    sudo htpasswd -c /etc/apache2/.htpasswd user1

I edited my /etc/nginx/nginx.conf to the following:

    server {
        listen 80;
        listen 443 ssl;
        server_name my_ip;
        root NOT_SURE_WHICH_PATH_TO_PUT_HERE, THE VENV?;
        location / {
            proxy_pass my_ip:11111/;
            auth_basic "Restricted Content";
            auth_basic_user_file /home/path to the password file/.htpasswd;
        }
    }

But no authentication prompt appears. If I change the config to listen 11111, I get an error that the port is already in use (of course - by the mlflow server...).

What I want is an authentication prompt before anyone can reach the mlflow stream with a browser. I would be happy to hear any suggestions.
How to run authentication on a mlFlow server?
When you docker exec, you can see you have several processes:

    / # ps -ef
    PID   USER     TIME   COMMAND
        1 root       0:00 nginx: master process nginx -g daemon off;
        6 nginx      0:00 nginx: worker process
        7 root       0:00 /bin/sh
       17 root       0:00 ps -ef
    / #

In Linux, each process has its own stdin, stdout, stderr (and other file descriptors), in /proc/pid/fd.

So with your docker exec (pid 7), you display something in /proc/7/fd/1. If you do ls -ltr /proc/7/fd/1, it displays something like:

    /proc/4608/fd/1 -> /dev/pts/2

which means output is being sent to a terminal.

Your nginx process (pid 1), on the other hand, displays its output in /proc/1/fd/1. If you do ls -ltr /proc/1/fd/1, it displays something like:

    /proc/1/fd/1 -> pipe:[184442508]

which means output is being sent to the docker logging driver.
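This also suggests a workaround if you want an exec'd command to show up in docker logs: write to PID 1's stdout instead of your own. A sketch (requires permission to write to that file descriptor, which the default root exec user has):

    docker exec nginx /bin/sh -c 'echo "Hello stdout" > /proc/1/fd/1'
    docker logs nginx    # should now include "Hello stdout"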
If I do:

    docker run --name nginx -d nginx:alpine /bin/sh -c 'echo "Hello stdout" > /dev/stdout'

I can see "Hello stdout" when I do:

    docker logs nginx

But when the container is running (docker run --name nginx -d nginx:alpine) and I do:

    docker exec nginx /bin/sh -c 'echo "Hello stdout" > /dev/stdout'

or when I attach to the container with:

    docker exec -it nginx /bin/sh

and then:

    echo "Hello stdout" > /dev/stdout

I can't see anything in docker logs. And since my Nginx access logs are redirected to /dev/stdout, I can't see them either.

What is happening here with this stdout?
docker run, docker exec and logs
You have to define an upstream explicitly - currently your nginx cannot proxy to your web application. See http://nginx.org/en/docs/http/ngx_http_upstream_module.html

    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com:8080;
        server unix:/tmp/backend3;

        server backup1.example.com:8080 backup;
        server backup2.example.com:8080 backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
I use Shiny Server to build a web app on port 3838. When I use nginx installed directly on my server, it works well. But when I stop nginx on the server and try to use the nginx docker image instead, the site returns a "502 Bad Gateway" error and the nginx log shows:

    2016/04/28 18:51:15 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, ...

I installed the nginx image with this command:

    sudo docker pull nginx

My docker command line is something like (indented for clarity):

    sudo docker run --name docker-nginx -p 80:80 \
        -v ~/docker-nginx/default.conf:/etc/nginx/conf.d/default.conf \
        -v /usr/share/nginx/html:/usr/share/nginx/html \
        nginx

I created a folder named 'docker-nginx' in my home dir, moved my nginx conf file into this folder, and then removed my original conf in the etc/nginx dir just in case.

My nginx conf file looks like this:

    server {
        listen 80 default_server;
        # listen [::]:80 default_server ipv6only=on;
        root /usr/share/nginx/html;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            proxy_pass http://127.0.0.1:3838/;
            proxy_redirect http://127.0.0.1:3838/ $scheme://$host/;
            auth_basic "Username and Password are required";
            auth_basic_user_file /etc/nginx/.htpasswd;
            # enhance the performance
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
docker nginx connection refused while connecting to upstream
You need at least two server blocks, and nginx will select the more specific server block to handle the request. See this document for details.

You will need a server block for xyz.example.com, such as:

    server {
        listen 80;
        server_name xyz.example.com;
        location / {
            proxy_pass http://$1.foo.com;
        }
    }

Then either a default_server or a wildcard server, such as:

    server {
        listen 80;
        server_name *.example.com;
        return 301 http://foo.com/;   # return needs a status code
    }

Or:

    server {
        listen 80 default_server;
        return 301 http://foo.com/;
    }
I'm using a wildcard in server_name. I want to redirect all subdomains of example.com (configured as *.example.com) to foo.com, except xyz.example.com.

I have the configuration as follows:

    server {
        listen 80;
        server_name *.example.com;
        location / {
            proxy_pass http://$1.foo.com;
        }
    }

I don't want to change any request coming to xyz.example.com.
How to exclude specific subdomains server_name in nginx configuration
http://agentzh.blogspot.co.uk/2011/03/how-nginx-location-if-works.html might be of interest to you in understanding how if works. In your case, when the if condition matches, the request is now being served within the if context, and try_files is not inherited by that context. Or as https://www.digitalocean.com/community/tutorials/understanding-the-nginx-configuration-file-structure-and-configuration-contexts says: "Another thing to keep in mind when using an if context is that it renders a try_files directive in the same context useless."

Also, if the try_files falls back to @cdn, then any headers you've added previously are forgotten; it starts again in the new location block, so the headers need to be added there.

As to how to fix it: you can set variables inside if, and add_header ignores an empty value, so something like this should work. (Note that nginx variable names cannot contain hyphens and set takes no "=", so the variables are written with underscores here; the map block must live at the http level.)

    set $access_control_output 0;

    location ~* \.(css|js|jpe?g|png|gif|otf|eot|svg|ttf|woff|woff2|xml|json)$ {
        set $access_control_output 1;
        try_files $uri @cdn;
    }

    set $acao "";
    set $acam "";
    if ($access_control_output) {
        set $acao $http_origin;
        set $acam "GET, OPTIONS";
    }

    # in the http block:
    map "$access_control_output:$request_method" $acma {
        "1:OPTIONS" 1728000;
        default     "";
    }

    location @cdn {
        add_header 'Access-Control-Allow-Origin' $acao;
        add_header 'Access-Control-Allow-Methods' $acam;
        add_header 'Access-Control-Max-Age' $acma;
        return 301 https://example.com$request_uri;
    }

Edit: You don't care about the headers in the @cdn fallback, in which case you should be able to have something like this:

    # in the http block:
    map $request_method $acma {
        "OPTIONS" 1728000;
        default   "";
    }

    location ~* \.(css|js|jpe?g|png|gif|otf|eot|svg|ttf|woff|woff2|xml|json)$ {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' "GET, OPTIONS";
        add_header 'Access-Control-Max-Age' $acma;
        try_files $uri @cdn;
    }

    location @cdn {
        return 301 https://example.com$request_uri;
    }
I have a simple location block in my nginx config which matches static files for my website. What I want to do is check if the file exists using try_files, and if it doesn't, redirect to a URL (in this case specified in the @cdn location block). I also want to set some CORS headers.

Below is the relevant configuration:

    location ~* \.(css|js|jpe?g|png|gif|otf|eot|svg|ttf|woff|woff2|xml|json)$ {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
        }
        try_files $uri @cdn;
    }

    location @cdn {
        return 301 https://example.com$request_uri;
    }

The problem is that I get a 404 response if the file does not exist, instead of a 301 redirect. The configuration worked fine before adding the CORS headers. If I remove the handling of the headers, everything works as intended, and I get a 301 response back.

Now, I have done a bit of reading about why the if directive is bad and should be avoided, but I still don't know why it breaks my configuration. If I understood correctly, it has something to do with either if or add_header being part of the rewrite module or something like that, and I guess that conflicts with try_files. Perhaps I am not accurate here, but either way I am not sure how to fix it.

Why does the presence of if and/or add_header make nginx give me a 404 instead of a 301 when a file could not be found, and how do I fix it? Thanks in advance!
if conditions break try_files in nginx configuration
Answer to your last question: in nginx, the listen directive is only allowed in the server context (that means per virtual host).

According to the manual:

The listen directive can have several additional parameters specific to socket-related system calls. These parameters can be specified in any listen directive, but only once for a given address:port pair.

So if you have more than one virtual host (server definition in the nginx config), you can use the reuseport option in only one of them. Non-socket-related options (like ssl or spdy) can still be set on more than one listen directive.

SIDE NOTE - what the reuseport directive really does:

Nginx from version 1.9.1 supports setting the SO_REUSEPORT TCP socket option. In a modern OS (Linux kernel since 3.9), this enables the kernel to have multiple socket listeners for each socket (ip:port). Without it, when a new connection arrives, the kernel notifies all nginx workers about it and all of them try to accept() it. With this option enabled, each worker has its own listening socket, and on each new connection the kernel chooses which one will receive it - so there is no contention.

More info about the benefits, drawbacks and benchmarks of the reuseport option can be read in this Nginx blog post.
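To illustrate (a sketch using hypothetical hosts): reuseport goes on exactly one listen directive per address:port pair, and the remaining server blocks for that pair just listen normally:

    server {
        listen 192.168.0.1:80 reuseport;   # socket options set here, once
        server_name server1;
    }

    server {
        listen 192.168.0.1:80;             # same pair, no socket options
        server_name server2;
    }

Both virtual hosts still benefit from SO_REUSEPORT, because the option applies to the listening socket for the address:port pair, not to an individual server block.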
Am I right in understanding that it is wrong to use "reuseport" for the same IP:PORT pair in different virtual hosts?

    http {
        server {
            listen 192.168.0.1:80 reuseport;
            server_name server1;
            ...
        }
        server {
            listen 192.168.0.1:80 reuseport;
            server_name server2;
            ...
        }
    }

This config gives me:

    nginx: [emerg] duplicate listen options for 192.168.0.1:80 in /etc/nginx/vhosts/server1.local.conf:66

or

    nginx: [emerg] listen() to 0.0.0.0:80, backlog 511 failed (98: Address already in use)

So do I have to use unique IP:PORT pairs for every virtual host? At the same time, a server-wide "listen 80 reuseport;" works just fine - is it doing the same as one per unique IP:PORT?
Nginx's "reuseport" for same IP:PORT pair on different virtual hosts
I know this isn't exactly what you asked, but it might help future people who search for this issue. As @yvoyer suggested, my issue was the trailing slash too. My server used nginx and fpm, and in nginx // does not equal /, so I had to do a bit of fixing on my virtual host conf, and it worked fine after that. I'll just paste the conf for whoever needs it, or for anyone who wants to suggest a better one:

    location / {
        try_files $uri @pass_to_symfony;
    }

    location ~ /app_dev.php/ {
        try_files $uri @pass_to_symfony_dev;
    }

    location @pass_to_symfony {
        rewrite ^ /app.php?$request_uri last;
    }

    location @pass_to_symfony_dev {
        rewrite ^ /app_dev.php?$request_uri last;
    }

    location ~ ^/app(_dev)?\.php($|/) {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock; # replace with your sock path
    }
I am getting this error message when I try to open /app_dev.php:

An error occurred while loading the web debug toolbar (404: Not Found). Do you want to open the profiler?

When I click OK, I then get a 404 Not Found at app_dev.php/_profiler/5053258a822e1.

I am using nginx. Thank you very much for your help.

EDIT: Here is the error log:

    [error] 18369#0: *9 open() "/var/www/Symfony/web/app_dev.php/_wdt/5056f875afc98" failed (20: Not a directory), client: 127.0.0.1, server: symfony, request: "GET /app_dev.php/_wdt/5056f875afc98 HTTP/1.1", host: "symfony", referrer: "http://symfony/app_dev.php"
    [error] 18369#0: *9 open() "/var/www/Symfony/web/404" failed (2: No such file or directory), client: 127.0.0.1, server: symfony, request: "GET /app_dev.php/_wdt/5056f875afc98 HTTP/1.1", host: "symfony", referrer: "http://symfony/app_dev.php"

EDIT 2: When I try to access app_dev.php, the page opens but without the toolbar, and when I try with app_dev.php/ I get the following error:

    Oops! An Error Occurred
    The server returned a "404 Not Found".
    Something is broken. Please e-mail us at [email] and let us know what you were doing when this error occurred. We will fix it as soon as possible. Sorry for any inconvenience caused.
Symfony 2: 404 Not Found error when trying to open /app_dev.php
$connection is a counter, not the total number of connections in use right now, so it is intended to grow.

Keepalive connections cannot be discarded, so the headroom for new connections is worker_processes * worker_connections minus the number of currently open keepalive connections.
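A worked example with the numbers from the question (illustrative only):

    # 2 worker_processes x 8192 worker_connections = 16384 connection slots
    # with keepalive_timeout 60, an idle client can hold a slot for up to 60s
    # if, say, 10000 slots are parked in keepalive at a given moment:
    #   16384 - 10000 = 6384 slots free for brand-new connections
    # so the limit bounds concurrent open connections, not requests per second

The 11-12 million figure in the log is consistent with this: it is the serial number of the connection since nginx started, not a count of simultaneously open ones.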
The nginx documentation says:

    max_clients = worker_processes * worker_connections

but how does keepalive factor into this? I have my configuration set up with 2 worker_processes and 8192 worker_connections; that means I can theoretically handle a maximum of 16384 concurrent connections. Pushing out 16384 streams of data concurrently is ginormous, but if I have a 60s keepalive_timeout, then with each client hogging a connection for 1 minute that number has a completely different meaning. Which is it?

Connected to all this is the $connection variable that can be used with the log_format directive. I defined the following log format so I could analyze the server's performance:

    log_format perf '$request_time $time_local $body_bytes_sent*$gzip_ratio $connection $pipe $status $request_uri';

That $connection variable is reporting around 11-12 million connections! I'm no math major, but obviously that number is way higher than worker_processes * worker_connections. So what is it supposed to represent?

In short, I'm trying to figure out how to determine a good value for worker_connections.
In nginx, what is the relationship between worker_connections, keepalive_timeout and $connection
I have made nginx serve static files without even passing those requests to node, by adding a location directive to the app's nginx configuration file (which is included in nginx.conf):

    location ~ /(img|js)/ {
        rewrite ^(.*)$ /public/$1 break;
    }

    location / {
        proxy_pass http://localhost:3000/;
        ...
    }

In case a request comes to the /img or /js directory, nginx serves files from the /public/img or /public/js directory respectively. All other requests are proxied to node.

You can add more directories if you need (like /css or /views, if you store templates there that you want to use both in node and in the browser) and have any directory structure inside those directories; nginx just prepends /public to them and gets the files from there without your node app even knowing about it.
I have several apps running behind an Nginx reverse proxy, one of which is a Node server with Express.js. I'm proxying domain.com/demo/app/ to localhost:7003/ using this Nginx config:

    http {
        ...
        server {
            listen 80;
            server_name domain.com;
            ...
            location /demo/app {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Scheme $scheme;
                rewrite ^/demo/app/?(.*) /$1 break;
                proxy_pass http://localhost:7003;
            }
            ...
        }
    }

This works great, and app receives requests as if it were rooted at /. The problem is that app handles its own static files and might make requests for routes such as css/app.css or images/image.jpg. But because of the reverse proxy, these actually exist at /demo/app/css/app.css and /demo/app/images/image.jpg respectively.

I've solved this by getting Nginx to pass to Node a custom header indicating the root path, which the Node server prepends to the URLs of all subsequent requests. But now my code is littered with these root path strings. For example, part of my back-end templates:

    link(rel='stylesheet', href="#{basePath}/css/base.css")
    link(rel='stylesheet', href="#{basePath}/css/skeleton.css")
    link(rel='stylesheet', href="#{basePath}/css/layout.css")

What's a more elegant way to handle this? Isn't there a way to get Nginx to recognize requests coming from an upstream server and automatically forward them to that server?
Nginx Reverse Proxying to Node.js with Rewrite
First you need to know the path to your nginx.exe file. Once you have that, right-click on your desktop and create a new text document. Then type or paste in the following text:

    c:
    cd c:\nginx
    start nginx.exe
    cmd /k

(Note the first line is just c: to switch drives, and the path should match where you installed nginx.)

Now save the file with whatever name you want to use, but add the .bat extension to it - for example, nginx.bat.

Now you can click on the file and it should open the command prompt, change the directory, start the server, and then leave the prompt open in the correct directory so you can run your commands.
I've installed Nginx web server on my machine under Windows 7 with php.When I start the "nginx.exe", the command prompt opens for a second and then closes automatically, so I can't control it through the command prompt. Couldn't find a solution anywhere.What I want is to open the "nginx.exe" and use various commands there.Otherwise the server is working.
Can't run the Nginx executable file
Following this answer, you can use this nginx configuration in order to proxy the Dropwizard application inside your server from port 8080 to port 80:

    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

For your Angular application, you can either serve static assets from Dropwizard or set up a virtual host via Nginx.

As a side note, remember to configure CORS in the main class of your Dropwizard application:

    @Override
    public void run(Configuration configuration, Environment environment) throws Exception {
        configureCors(environment);
        environment.jersey().register(new HelloWorldResource(template));
    }

    private void configureCors(Environment environment) {
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter(CrossOriginFilter.ALLOWED_METHODS_PARAM, "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter(CrossOriginFilter.ALLOWED_ORIGINS_PARAM, "*");
        filter.setInitParameter(CrossOriginFilter.ACCESS_CONTROL_ALLOW_ORIGIN_HEADER, "*");
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowCredentials", "true");
    }
I'm developing an application with an AngularJS frontend and a Dropwizard backend. I'm planning to use Nginx as a gateway for the backend Dropwizard server and as an asset server (images, and maybe the AngularJS application itself).

My question is: what is the best deployment strategy?

1. Bundling AngularJS with the Dropwizard backend and using Nginx as the frontend?
2. Deploying the AngularJS application on the Nginx server?

Thanks in advance,
How to deploy an angularjs application frontend with Nginx and dropwizard
"Access-Control-Allow-Origin" is a response header, not a request header. It is returned by a HTTP server when a HTTP client sends a request with an OPTION method. For example, the ajax API in browsers sends an OPTION request before trying a POST request when the targeted URL is not the current page URL (see Cross Origin Resource Sharing issue). This OPTION request contains the "Origin" header which holds the current page beginning of URL (scheme + domain). The Ajax API will send the POST request only if the response contains the header "Access-Control-Allow-Origin" with a URL matching the main page one.You only need to worry about such headers if you want to access dynamic content from another server than the one serving the current page. It doesn't seem to be your case here.For more information about CORS, see this wikipedia page:https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
I'm working on a web application (Angular + Rails) that serves assets through the CloudFront CDN. The application is served through nginx, which is correctly set up to set the "Access-Control-Allow-Origin" header. CloudFront is set up to forward the header.

The problem is that the header is missing on the first response for an Angular template, but it's correctly present on subsequent responses (if I refresh the page).

For example, if I clear all history and cache in Chrome and visit the page, the response for a template file will not have the "Access-Control-Allow-Origin" header. If I refresh the page, the response for the template will have the header. I noticed that if I clear all history and cache, but not cookies, it continues to work correctly.

It behaves similarly on Firefox: if I clear all history and cache, it doesn't work on the first response, but it works correctly on subsequent responses. After clearing all history and cache but not cookies, it continues to work correctly, unlike in Chrome. Also, if I open the development tools and disable the cache in Firefox, the header is missing on every response.

Do you know what the problem might be, or where I should look next? Thanks.
Access-Control-Allow-Origin missing on the first response
Recently I configured HTTP/2 for a NodeJS app on MAMP Pro with NGINX. I wrote a short article about it: https://www.linkedin.com/pulse/nodejs-http2-server-mamp-pro-nginx-sergei-iastrebov/ - I think it'll help you.
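The core of it, assuming MAMP Pro lets you edit its nginx template, is enabling http2 on the TLS listener. A sketch (the hostname and certificate paths are placeholders):

    server {
        listen 443 ssl http2;
        server_name mysite.test;              # placeholder
        ssl_certificate     /path/to/cert.pem;  # placeholder
        ssl_certificate_key /path/to/key.pem;   # placeholder
        ...
    }

Browsers only negotiate HTTP/2 over TLS, so the ssl part is required for testing in a real browser.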
Is there any way to enable HTTP2 support in MAMP Pro? I want to test and improve some of my local development websites with HTTP2 support.I've been searching for a while now, but haven't found a single solution.
HTTP2 support in MAMP Pro
Sounds right to me. If the agent (in this case Firefox) says 200 OK, it means the transfer happened.
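For completeness, a fuller set of directives for suppressing conditional responses during testing - a sketch building on the P.S. below (etag off requires nginx 1.3.3 or newer):

    if_modified_since off;
    etag off;
    add_header Last-Modified "";
    add_header Cache-Control "no-store, no-cache, must-revalidate";

Clearing Last-Modified and the ETag removes the validators the browser would otherwise use to trigger a 304.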
I'm trying to disable all caching in nginx for testing purposes. I've set the following line:

    add_header Cache-Control no-cache;

I see that the page itself is not cached, but the images, CSS, and JavaScript files are. I suspect that this is because Firefox is getting a "304 Not Modified" response. Is there a way to prevent it?

P.S.: I think I found it myself - Firefox shows '200 OK' all the time now. Is this the correct way? I've added:

    if_modified_since off;
    add_header Last-Modified "";
How to prevent "304 Not Modified" in nginx?
I just moved this block out to the server context and added vod hls; inside:

    location ~ \.m3u8$ {
        include cors.conf;
        vod hls;

        if ($secure_link = "") { return 403; }
        if ($secure_link = "0") { return 403; }
    }
I have a problem with my nginx config for HLS streaming. I use the kaltura nginx vod module and am trying to add ngx_http_secure_link_module to protect the stream. The strange thing is that I get a 404 error if I enable ngx_http_secure_link_module (logs below). I think that is because it can't find a file with index.m3u8 at the end, but if I comment out the secure link block it works fine.

I also tried to add an alias inside the location ~ \.m3u8$ {} block, but it didn't work. What am I doing wrong? How do I protect my stream?

My stream link:

    https://stream.example.com/hls/c14de868-3130-426a-a0cc-7ff6590e9a1f/index.m3u8?md5=0eNJ3SpBd87NGFF6Hw_zMQ&expires=1609448340

My NGINX config:

    server {
        listen 9000;
        server_name localhost;
        # root /srv/static;

        location ^~ /hls/ {
            # the path to the c14de868-3130-426a-a0cc-7ff6590e9a1f file
            alias /srv/static/videos/1/;
            # file with cors settings
            include cors.conf;
            vod hls;

            # 1. Set secret variable
            set $secret "s3cr3t";
            # 2. Set secure link
            secure_link $arg_md5,$arg_expires;
            secure_link_md5 "$secure_link_expires $secret";

            # if I comment this block everything works fine (but no security)
            location ~ \.m3u8$ {
                if ($secure_link = "") { return 403; }
                if ($secure_link = "0") { return 403; }
            }
        }
    }

NGINX logs:
NGINX open() failed (20: Not a directory) hls vod with secure link module
I ended up reverting back to the default rabbitmq.config file, then modified my nginx config block to the below, based on another stackoverflow answer that I can't find right now:

    location ~* /rabbitmq/api/(.*?)/(.*) {
        proxy_pass http://127.0.0.1:15672/api/$1/%2F/$2?$query_string;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~* /rabbitmq/(.*) {
        rewrite ^/rabbitmq/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:15672;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

Also, I had browser caching enabled for JS files, which was causing issues, and I have disabled that. I will try to re-enable SSL piece by piece, but I do have the example URL working for now: https://example.com/rabbitmq/
I'm trying to access the RabbitMQ interface over HTTPS/SSL with nginx, and I can't figure out what I'm missing.

Here's my rabbitmq.config file:

    [
      {ssl, [{versions, ['tlsv1.2', 'tlsv1.1']}]},
      {rabbit, [
        {reverse_dns_lookups, true},
        {hipe_compile, true},
        {tcp_listeners, [5672]},
        {ssl_listeners, [5671]},
        {ssl_options, [
          {cacertfile, "/etc/ssl/certs/CA.pem"},
          {certfile,   "/etc/nginx/ssl/my_domain.crt"},
          {keyfile,    "/etc/nginx/ssl/my_domain.key"},
          {versions, ['tlsv1.2', 'tlsv1.1']}
        ]}
      ]},
      {rabbitmq_management, [
        {listener, [
          {port, 15671},
          {ssl, true},
          {ssl_opts, [
            {cacertfile, "/etc/ssl/certs/CA.pem"},
            {certfile,   "/etc/nginx/ssl/my_domain.crt"},
            {keyfile,    "/etc/nginx/ssl/my_domain.key"},
            {versions, ['tlsv1.2', 'tlsv1.1']}
          ]}
        ]}
      ]}
    ].

Everything works OK when I restart rabbitmq-server.

My nginx file looks like this:

    location /rabbitmq/ {
        if ($request_uri ~* "/rabbitmq/(.*)") {
            proxy_pass https://example.com:15671/$1;
        }
    }

Now, I'm guessing there's something wrong with the nginx config not being able to resolve the HTTPS URL, as I'm getting 504 timeout errors when trying to browse https://example.com/rabbitmq/ (obviously, this is not the real FQDN, but the SSL cert works fine without the /rabbitmq/).

Has anyone been able to use the RabbitMQ Management web interface on an external connection over a FQDN and HTTPS? Do I need to create a new "server" block in the nginx config dedicated to port 15671? Any help would be much appreciated!
RabbitMQ Management Over HTTPS and Nginx
This command worked as expected: after running ng build --prod, run the following command in the dist/ folder:

    pm2 start /usr/bin/http-server -- -p 8080 -d false

Update - I have found a better solution:

    which ng

It will print /usr/bin/ng; then type this:

    pm2 start /usr/bin/ng -- serve --prod
How can I run ng serve --prod with pm2? (ng serve is from angular-cli, Angular 2.) I'm running on DigitalOcean.

I have tried to test with:

    http-server -p 4200 -d false

in the dist/ folder after ng build --prod.

When I request the domain https://www.unibookkh.com/, I get a 404 error. (I've already set up nginx to listen on port 4200.) I test with http-server because I think I can maybe run pm2 through this command:

    pm2 start my_app_process.json

where my_app_process.json is:

    {
        "apps": [
            {
                "name": "angular",
                "cwd": "~/angular2",
                "args": "-p 4200 -d false",
                "script": "/usr/bin/http-server"
            }
        ]
    }

Any better ideas of how to get it working with PM2?
How Can I run PM2 with Angular-Cli? - Angular2
The key is:

    error removing unix socket, unlink(): Permission denied [core/socket.c line 198]

You (very probably) previously ran a uwsgi instance as root, creating the unix socket file with root permissions. Now your instance (running instead as www) is not able to re-bind() to that socket, as it is not able to unlink it (no permissions). Just remove the socket file and retry.
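Concretely, with the socket path from your ini file (run as root or with sudo, since the stale socket is root-owned):

    sudo rm /var/www/lvpp/lvpp.sock
    uwsgi --ini file.ini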
I could not start the uwsgi process via the ini flag:

    uwsgi --ini file.ini

There are no uwsgi pids:

    ps aux | grep uwsgi
    root 31605 0.0 0.3 5732 768 pts/0 S+ 06:46 0:00 grep uwsgi

file.ini:

    [uwsgi]
    chdir = /var/www/lvpp/site
    wsgi-file = /var/www/lvpp/lvpp.wsgi
    master = true
    processes = 1
    chmod-socket = 664
    socket = /var/www/lvpp/lvpp.sock
    pidfile = /var/www/lvpp/lvpp.pid
    daemonize = /var/www/lvpp/logs/lvpp.log
    vacuum = true
    uid = www
    gid = www
    env = DJANGO_SETTINGS_MODULE=settingsfile

lvpp.log:

    *** Starting uWSGI 2.0.10 (32bit) on [Wed Apr 8 06:46:15 2015] ***
    compiled with version: 4.4.7 20120313 (Red Hat 4.4.7-11) on 17 March 2015 21:29:09
    os: Linux-2.6.32-431.29.2.el6.i686 #1 SMP Tue Sep 9 20:14:52 UTC 2014
    machine: i686
    clock source: unix
    pcre jit disabled
    detected number of CPU cores: 1
    current working directory: /var/www/lvpp
    writing pidfile to /var/www/lvpp/lvpp.pid
    detected binary path: /var/www/lvpp/site/env/bin/uwsgi
    setgid() to 503
    setuid() to 501
    chdir() to /var/www/lvpp/site/
    your processes number limit is 1812
    your memory page size is 4096 bytes
    detected max file descriptor number: 1024
    lock engine: pthread robust mutexes
    thunder lock: disabled (you can enable it with --thunder-lock)
    error removing unix socket, unlink(): Permission denied [core/socket.c line 198]
    bind(): Address already in use [core/socket.c line 230]

It worked earlier. But after I invoked kill -9 uwsgi.pid, I could not start the uwsgi process again. Why can't I start the uwsgi process again?
Could not start uwsgi process
It turns out the nginx configuration file described two servers, and I was adding the location snippet to the wrong one. When I added it to the correct one and reloaded nginx, the file was returned with the expected content type:

    HTTP/1.1 200 OK
    Server: nginx/1.10.1
    Content-Type: application/pkcs7-mime
    Content-Length: 245

    {
        "applinks": {
            "apps": [],
            "details": [
                {
                    "appID": "APPPREFIX.com.mycompany.app",
                    "paths": [ "/home*" ]
                }
            ]
        }
    }
In order to set up universal links for an iOS app, I have created an apple-app-site-association file and placed it in the /public directory of my Rails app.

I can curl it at the correct address, but it returns the wrong content type. Instead of application/json or application/pkcs7-mime, it returns application/octet-stream, as you can see in the response here:

    curl -i https://example.com/apple-app-site-association
    HTTP/1.1 200 OK
    Server: nginx/1.10.1
    Content-Type: application/octet-stream
    Content-Length: 245
    Last-Modified: Mon, 21 Nov 2016 12:45:00 GMT
    Strict-Transport-Security: max-age=31536000

    {
        "applinks": {
            "apps": [],
            "details": [
                {
                    "appID": "APPPREFIX.com.mycompany.app",
                    "paths": [ "/home*" ]
                }
            ]
        }
    }

I am attempting to specify a Content-Type in the nginx configuration, in /etc/nginx/sites-available/sitename:

    server {
        ...
        location /apple-app-site-association {
            default_type application/pkcs7-mime;
        }
    }

I have saved this change and restarted nginx. It doesn't make any difference to the response from curl. I've also tried location /public/apple-app-site-association {} and a few other variations, to no effect.

What is the correct way to set up nginx to deliver this file with the correct content type?
How to set correct content-type for apple-app-site-association file on Nginx/Rails
Try:

    supervisorctl reread
    supervisorctl reload

That should start the service. I did this as root under Ubuntu 13.04.

EDIT: Since posting this, I've had trouble SIGHUP'ing Supervisor processes. I would just like to share a little snippet I found elsewhere:

    sudo kill -HUP `sudo supervisorctl status | grep $APP_NAME | sed -n '/RUNNING/s/.*pid \([[:digit:]]\+\).*/\1/p'`

This sends a SIGHUP to the process running $APP_NAME, which is useful for Gunicorn graceful reloading.

Joe
If I run the command (to start the app) via supervisor:

    sudo supervisorctl start myapp

it throws this error:

    myapp: ERROR (no such process)

I created a file called myappsettings.conf:

    [program:myapp]
    command = /usr/local/bin/gunicorn -c /home/ubuntu/virtualenv/gunicorn_config.py myapp.wsgi
    user = ubuntu
    stdout_logfile = /home/ubuntu/virtualenv/myapp/error/gunicorn_supervisor.log
    redirect_stderr = true

What is the issue here? Thank you.
ERROR (no such process) Nginx+Gunicorn+Supervisord
You're mixing things up, so let me clarify.

Python's standard way of publishing applications via web servers is WSGI - you can think of it as Python's native CGI. uWSGI is a WSGI-compliant server that uses the uwsgi protocol to talk to other uWSGI instances or upstream servers. Usually the upstream server is nginx with HttpUwsgiModule, which allows it to communicate using the uwsgi protocol - with nginx you get an additional layer of protection for your app server, load balancing, and serving of static files. In most scenarios, You Should Be Using Nginx + uWSGI. To answer your question: uWSGI is installed and run separately from nginx, and they both need to be configured to communicate with each other.

Pure WSGI is pretty low-level, so you may want to use a WSGI-compliant framework; I guess the top two are Django and Flask. For a hello-world Flask setup, "Serving Flask With Nginx" seems to be a good article.
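To make that concrete, a minimal sketch - the port, socket path, and module name below are arbitrary choices, not requirements:

    # app.py - a minimal WSGI app via Flask
    from flask import Flask
    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello, world"

Run it under the uWSGI server, listening on a unix socket for nginx (depending on how uWSGI was installed, you may also need to load its Python plugin):

    uwsgi --socket /tmp/hello.sock --module app:app --processes 2

And point nginx at it using its uwsgi protocol support:

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/hello.sock;
    }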
I'm new to Linux development, and I'm a bit confused by the documentation I've read. My ultimate goal is to host a simple Python-backed web service that would examine an incoming payload and forward it to another server. This should be less than 30 lines of Python.

I'm planning to use nginx to serve the Python app. From my research, I also need a Python web framework, and I chose to go with uwsgi. I'm so confused: which one do I need, the nginx uwsgi module, or the uWSGI server? I don't want to pull in Django just for this simple purpose.

The nginx documentation mentions that:

Do not confuse the uwsgi protocol with the uWSGI server (that speaks the uwsgi protocol)

So, does that mean I don't need to install the uWSGI server separately? Do I just install nginx and start configuring? I'm using nginx 1.4.4.

Could someone share a step-by-step configuration procedure for how to configure uWSGI with nginx, along with sample Python code (hello world, maybe)? I can configure nginx just fine, but I don't know how to make it serve Python pages. All the docs I could find involve having Django on top.
difference between uwsgi module in nginx and uwsgi server
Someone answered with the correct solution here, but the post disappeared... You have to disable http2 for all server blocks on one IP address/port. If even one server block on that IP is configured to enable http2, it is enabled for all server blocks on that IP.
Due to this Safari issue with HTTP/2 and form POSTs, I wanted to stop serving one webpage via HTTP/2, so I just removed "http2" from the listen directive in the corresponding nginx server block:

    server {
        listen x.x.x.x:443 ssl;
        server_name xxxx;
        [...]
    }

But after I restarted nginx and opened the website in various browsers, the HTTP/2 protocol was still being used... What am I doing wrong? My nginx version is 1.10.1.

Greets, Jan
How to disable http2 in nginx
There's a full list of web servers etc. that support HTTP/2 at https://github.com/http2/http2-spec/wiki/Implementations

HTH
I have installed the SPDY Indicator Chrome extension. It detects some sites as SPDY-enabled and some as HTTP/2-enabled.

Which web servers currently support HTTP/2? I know nginx supports SPDY, but does it support HTTP/2? If it does, how can I enable it?

Update (thanks to GolezTrol): The answer was no at the time - yes as of September 2015.

Supported:

- IIS supports HTTP/2 in Windows 10 [50]
- OpenLiteSpeed 1.3.7 and 1.4.4 support HTTP/2 draft 16 [51]

SPDY, but no HTTP/2 (the following list is out of date and it's probably not worth maintaining):

- Nginx provides experimental support for SPDY (Draft 3.1) via a module [52]
- Apache doesn't support SPDY in the current 2.4.x version, but mod_spdy allows adding it [53]
- LiteSpeed Web Server currently supports SPDY/3.1 [54]
Which web servers support HTTP/2
The try_files directive only supports these syntaxes:

    try_files file ... uri;
    try_files file ... =code;

It doesn't support:

    try_files file ... uri =code;

The difference between file and uri here is that for file arguments, NGINX checks their existence before moving on to the next argument; for a uri, it doesn't. If the last argument has the form =code, then all arguments prior to it are files (checked for existence).

From this, you can draw the conclusion that with the request URI /foo/bar and this config:

    root /var/www;
    location /foo/ {
        try_files $uri $uri/ =404;
    }

... NGINX will not trigger a 404 error if any of these files exist:

- /var/www/foo/bar (a file)
- /var/www/foo/bar/ (a directory, if you have autoindex enabled)
- /var/www/foo/bar/index.html (or index.php, etc., due to the value of index)

Only when none of the above exist will NGINX trigger a 404 error.
I have an example web server with only one index.html file in a www directory. I can set up nginx with the following configuration:

    location /subfolder {
        alias /data/www;
        try_files $uri $uri/ /index.html;
    }

In the browser I see the correct response on my local domain at test.local/subfolder; test.local/subfolder/something also returns a default nginx page (which is normal because root is not set).

If I change the configuration to:

    location /subfolder {
        alias /data/www;
        try_files $uri $uri/ /index.html =404;
    }

the response at test.local/subfolder is still correct, but test.local/subfolder/something and all URIs with the /subfolder prefix return the index.html of the correct response, and the status is 200, not 404. If I remove /index.html from try_files, I get the same result.

I wonder how nginx processes a request with the =404 fallback, but I can't find any information, not even in the official docs.

UPDATE: I found out that an alias directive should end with a /, but I still don't get the =404 behavior - its functionality and purpose are unclear to me, because the status is still 200 OK.
How does nginx process the =404 fallback in try_files?
Updating one line in the default PHP-based config worked:

    server {
        listen 80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default ipv6only=on; ## listen for ipv6

        root /var/www/html/laravel/public;
        index index.html index.htm index.php;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

        location ~ /\.ht {
            deny all;
        }
    }

The change here was from try_files $uri $uri/ =404; to try_files $uri $uri/ /index.php?$query_string;
I have my Laravel 5.4 app set up on an Ubuntu 16.04 server with nginx and php7.0-fpm, and it gives "502 Bad Gateway".

Nginx virtual host config:

    server {
        listen 80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default ipv6only=on; ## listen for ipv6

        root /var/www/html/laravel/public;
        index index.php index.html;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            try_files $uri /index.php =404;
            include fastcgi_params;
            fastcgi_keep_conn on;
            fastcgi_index index.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/var/run/php7.0-fpm.sock;
        }
    }

I tried the following, but it is still not working:

1. Changed fastcgi_pass unix:/var/run/php7.0-fpm.sock; to fastcgi_pass 127.0.0.1:9000;
2. Changed try_files $uri $uri/ /index.php?$query_string; to try_files $uri $uri/ /index.php$is_args$args;

I restarted the services after each change:

    service nginx restart
    service php7.0-fpm restart

I can access only the main route with this config:

    server {
        listen 80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default ipv6only=on; ## listen for ipv6

        root /var/www/html/laravel/public;
        index index.html index.htm index.php;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            try_files $uri $uri/ =404;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

        location ~ /\.ht {
            deny all;
        }
    }
502 Bad Gateway for Laravel 5.4 with nginx and php7.0-fpm in Ubuntu
Seems like I just found the solution to my own question: allocating more outgoing ports via

    echo "10240 65535" > /proc/sys/net/ipv4/ip_local_port_range

solved the problem.
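To make such a change persist across reboots, the same value can go through sysctl (the key is the one the echo above writes to):

    # one-off, equivalent to the echo above
    sysctl -w net.ipv4.ip_local_port_range="10240 65535"

    # persistent: add this line to /etc/sysctl.conf, then apply with sysctl -p
    # net.ipv4.ip_local_port_range = 10240 65535

The error itself (99: Cannot assign requested address) is what you get when the proxy exhausts ephemeral ports for outgoing connections to the upstream, so widening the range raises that ceiling.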
An nginx/1.0.12 instance running as a proxy on Debian 6.0.1 starts throwing the following error after running for a short time:

    connect() to upstreamip:80 failed (99: Cannot assign requested address) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.1", upstream: "http://upstreamip:80/", host: "requesteddomain.com"

Not all requests produce this error, so I suspect that it has to do with the load on the server and some kind of limit it hit. I have tried raising ulimit -n to 50k and worker_rlimit_nofile to 50k as well, but that does not seem to help. lsof -n shows a total of 1200 lines for nginx.

Is there a system limit on outgoing connections that might prevent nginx from opening more connections to its upstream server?
nginx proxy: connect() to ip:80 failed (99: Cannot assign requested address)
Start with /etc/nginx/nginx.conf; all of the other files are included into it. See this document for details.

Use nginx -T to see the complete configuration as nginx sees it.
There are multiple nginx conf files in a single installation. Here is what I found:

- /opt/nginx/conf/nginx.conf
- /etc/nginx/nginx.conf
- /etc/nginx/sites-available/default
- more in /etc/nginx/conf.d
- more in /etc/nginx/sites-available

What's the use of those multiple conf files? What happens if there is a conflict? Which one is the master copy?
nginx: why multiple conf files?
If you go inside the container (docker exec -it <container> /bin/bash) and check the log location (ls -la /var/log/nginx/), you will see the following output:

    lrwxrwxrwx 1 root root 11 Apr 30 23:05 access.log -> /dev/stdout
    lrwxrwxrwx 1 root root 11 Apr 30 23:05 error.log -> /dev/stderr

Clearly, the logs are written to stdout. You can also try doing cat access.log inside the container, and it still doesn't show anything.

Now, the right way to get your logs is to go outside the container and do:

    docker logs <container>

Then you will see your logs. Hope this helps!
I've recently pulled an nginx image:

    docker pull nginx

I can run it successfully and go to http://server_name and see the "Welcome to nginx" page:

    docker run -d -p 80:80 nginx

But then when I try to check the logs:

    docker exec 6c79549e3eb4f6e5fc06f049b67814ac4560ce2cdd7cc6ae84b44b5ae09a9a05 cat /var/log/nginx/access.log

it just hangs and outputs nothing. Same with the error log. Now if I create a test.txt file in that same folder and use docker exec to cat that file, it executes without hanging or any issues.

Even if I try to run it in interactive mode, it just hangs:

    docker run -i -t -p 80:80 nginx

Once again the terminal hangs on the next line doing nothing, but it seems to work because I can access the nginx welcome page.

I'm really confused about what is going on. I've tried to search for this problem but have not found any solution so far. Without being able to view the logs, it is going to be pretty hard to debug :) Also, shouldn't the access logs be written to stdout in the nginx container, since by convention docker containers log to stdout?
Unable to use -it when running Nginx Docker or cat logs
Requests to /.env are, by all means, malicious.

Many apps (Laravel-based, for example) use .env files to keep very sensitive data like database passwords. Hackers (or their automation scripts) probe whether .env is publicly accessible. If they can read .env files in the first place, this indicates an improperly configured server, and a server admin who has set up the server in such a bad way should be deemed responsible for the consequences...

The consequences are typically one thing: the hacker, having obtained the .env data, has database credentials and, with a little sniffing, finds the URL to phpMyAdmin - because, typically, such a "bad configuration" also includes a publicly accessible phpMyAdmin. Next thing you know, they email you that your database is gone and they have a copy of it. The only way to get it back, unless you have a backup, is paying up some cryptocurrency.

What to do:

Ensure .env files are not in a publicly accessible directory in the first place. Even if they are, have NGINX deny access to them, e.g. by denying access to all hidden files:

    location ~ /\. {
        deny all;
    }

Whether you have any .env files on your system or not, you can be sure that the traffic requesting them on the web is malicious. To reduce CPU load and prevent further attempts to find website exploits, you can use the honeypot approach, e.g.:

    location ~ /\.env$ {
        include includes/honeypot.conf;
    }

... which will trigger an immediate firewall ban against an IP that tried to read .env files. This proves useful because .env probing can be just one out of many possible attacks, and since the related IP is blocked, it can try no more.
I was just looking through our logs after getting some intermittent 5xx errors on a Heroku-hosted site, and in there I discovered many errors that were emanating from localhost and were requests for hidden files - usually .env, but also things like ".well-known/assetlinks.json", and occasionally .env in non-existent subfolders.

The requests are not frequent (15-30 per day), but appear to have been going on for a week. They are also being met with an "access forbidden by rule", which as far as I can tell is nginx.

The requests look similar to:

    2020/09/28 14:37:44 [error] 160#0: *1928 access forbidden by rule, client: 10.45.153.152, server: localhost, request: "GET /.env HTTP/1.1", host: REMOVED

I don't have any .env files on the server, and nginx seems to be blocking the requests, so it doesn't feel like there is any harm. Restarting all the dynos seems to have killed the activity (based on a few hours having passed), but what worries me is that these appear to be "coming from inside the house". Is there something here that I should be concerned about? Is this a case of a bot exploiting a bug in a system that has local access?
Do these .env GET requests from localhost indicate an attack? [closed]
Yes. However, I figured it out by myself. Your service has to have externalTrafficPolicy: Local enabled. That means the actual client IP is used instead of the internal cluster IP.

To accomplish this, run:

    kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
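Equivalently, if you manage the Service manifest yourself, the same setting can live in the spec - a sketch with metadata trimmed:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local   # preserve the real client IP
      ...

With Local, traffic is only routed to nodes that actually run an ingress-controller pod, which is the trade-off for keeping the source address intact.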
I'm running Kubernetes on GKE and installed the Nginx ingress controller (latest stable release) using helm.

Everything works well, except that adding the whitelist-source-range annotation results in me being completely locked out of my service.

Ingress config:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: staging-ingress
      namespace: staging
      annotations:
        kubernetes.io/ingress.class: nginx
        ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y"
    spec:
      rules:
        - host: staging.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: staging-service
                  servicePort: 80

I connected to the controller pod, checked the nginx config, and found this:

    # Deny for staging.com/
    geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae {
        default 1;
        x.x.x.x 0;
        y.y.y.y 0;
    }

It looks like it is locking me out instead of whitelisting these IPs. But it is also locking out all other addresses - I get a 403 when going to the staging.com host.
Kubernetes whitelist-source-range blocks instead of whitelist IP
First of all, follow this best practice guide to build your Angular app structure. The index.html should be placed in the root folder; I am not sure if the following steps will work if it's not there.

To use nginx, you can follow this small tutorial: "Dockerized Angular app with nginx".

1. Create a Dockerfile in the root folder of your app (next to your index.html):

    FROM nginx
    COPY ./ /usr/share/nginx/html
    EXPOSE 80

2. Run docker build -t my-angular-app . in the folder of your Dockerfile.

3. Run docker run -p 80:80 -d my-angular-app, and then you can access your app at http://localhost
I have an AngularJS app that has this structure:

    app/
    ----- controllers/
    ---------- mainController.js
    ---------- otherController.js
    ----- directives/
    ---------- mainDirective.js
    ---------- otherDirective.js
    ----- services/
    ---------- userService.js
    ---------- itemService.js
    ----- js/
    ---------- bootstrap.js
    ---------- jquery.js
    ----- app.js
    views/
    ----- mainView.html
    ----- otherView.html
    ----- index.html

How do I go about creating my own image out of this and running it in a container? I've tried with a Dockerfile without success, and I'm relatively new to Docker, so apologies if this is simple. I just want to run it on an HTTP server (using nginx, perhaps?).

I've tried these for help, to no avail:

- https://www.quora.com/Can-I-have-an-Angular-app-on-Docker-container
- AngularJS and NodeJS app in Docker
- Dockerize your Angular NodeJS application
How to create a Docker container of an AngularJS app?
The net::ERR_CONTENT_LENGTH_MISMATCH is a caching issue. You're telling Nginx to bypass the cache if certain conditions are met (in your case, $http_upgrade).

You should have specified the caching location for nginx in a configuration file somewhere. A quick fix is to delete the contents of that folder, restart nginx, and then try accessing the site again. Another quick fix, at the expense of caching, is to remove the line proxy_cache_bypass $http_upgrade;

If you provide more details on your caching setup, perhaps this answer could be improved.
I'm developing an Express-driven site that goes through an nginx proxy. Sometimes when loading a page in the browser, I get this:

    GET http://myapp.local/css/bootstrap.css net::ERR_CONTENT_LENGTH_MISMATCH

If I refresh the page, it usually goes away. But if I refresh over and over and over, it will come up again.

What is the problem here? What can I do to narrow down the issue? Here is my nginx conf for this server:

    server {
        listen 80;
        server_name www.myapp.local;
        rewrite ^(.*) http://myapp.local$1 permanent;
    }

    server {
        listen 80;
        server_name myapp.local;

        access_log /vagrant/nginx/logs/myapp.local/access.log;
        error_log /vagrant/nginx/logs/myapp.local/error.log;

        location / {
            proxy_pass http://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

This is definitely something to do with the nginx proxy, because if I access the site using just the IP address and Node port (http://10.10.10.10:8080) I never get the error. But if I access it through the proxied vhost (http://myapp.local), I will eventually get the error (maybe a 1-in-10 chance of seeing it).
Express and nginx net::ERR_CONTENT_LENGTH_MISMATCH
Merging the files with cat certificate.crt ca_bundle.crt >> certificate.crt joins them without adding any newline character in between. After merging the files, open the newly created certificate.crt and you may see the file structured as follows:

    -----BEGIN CERTIFICATE-----
    certificate-1-text
    -----END CERTIFICATE----------BEGIN CERTIFICATE-----
    certificate-2-text
    -----END CERTIFICATE-----

If your certificate looks like this, you can fix it by adding a newline character just before the five hyphens of the second BEGIN CERTIFICATE, i.e. it should look as follows after editing:

    -----BEGIN CERTIFICATE-----
    certificate-1-text
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    certificate-2-text
    -----END CERTIFICATE-----
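To avoid hand-editing, one way is to merge with the separating newline from the start (file names taken from the question; writing to a new file also avoids appending a file to itself):

    cat certificate.crt > combined.crt
    echo >> combined.crt        # ensure a newline between the two PEM blocks
    cat ca_bundle.crt >> combined.crt

Then point ssl_certificate at combined.crt instead.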
I generated my SSL certificate from SSLForFree/ZeroSSL and, following the installation steps listed on their website (https://zerossl.com/help/installation/nginx/), I:

1. Downloaded the SSL files
2. Moved them to the server
3. Merged certificate.crt and ca_bundle.crt with cat certificate.crt ca_bundle.crt >> certificate.crt
4. Added the following lines to the nginx hosts file:

    ssl on;
    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;

5. Restarted the Nginx server with sudo service nginx restart
6. Received an error, and checked the details with journalctl -xe

The error was:

    nginx: [emerg] PEM_read_bio_X509_AUX(SSL: error:0908F066:PEM routines:get_header_and_data:bad end line)
SSL on Nginx throws error (SSL: error:0908F066:PEM routines:get_header_and_data:bad end line)
Cloudflare sets the CF-Connecting-IP and X-Forwarded-For headers on every request, so you can simply get the IP from their special header:

    let ip = req.headers['cf-connecting-ip']

If you expect requests from outside of Cloudflare as well, you can get the IP the following way:

    let otherIp = req.headers['x-forwarded-for'] || req.connection.remoteAddress

Be wary, though: other proxies (like Nginx) will also set the x-forwarded-for header.
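If nginx sits in front of your app, an alternative is to let nginx's realip module rewrite the client address before it reaches your code - a sketch (the set_real_ip_from entries must list Cloudflare's published IP ranges; only two are shown here, abbreviated):

    # in the http or server block
    real_ip_header CF-Connecting-IP;
    set_real_ip_from 173.245.48.0/20;    # one of Cloudflare's ranges; add them all
    set_real_ip_from 103.21.244.0/22;
    # ... remaining Cloudflare ranges ...

Restricting set_real_ip_from to Cloudflare's ranges matters: otherwise any client could spoof CF-Connecting-IP and impersonate an arbitrary address.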
Cloudflare changes the IP addresses of incoming requests because Cloudflare is a middleware between my website and the Internet - a proxy.

How do I get the original IP address of the request, not Cloudflare's IP address? I heard about mod_cloudflare, but does that plugin only update the IP address in my logs(?), and I couldn't find a version of it for Nginx.
Get client IP address of a request instead of Cloudflare's IP address
    location ~ .*files/projectX/files/users/.*jpg$ {
        expires -1;
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    }

This does the trick.
I have an - unfortunately Windows - Nginx server that I use for static content (like product photos and so on). Currently I have a global setting for caching, but now I need to change it a little.

I have a folder whose path looks something like this:

    E:\xampp\srv\project-files\projectX\files\users\user-hash\visualisator\views

As you can see, the path contains the user-hash variable, which changes. And in this folder I have *.jpg files that need to have caching disabled.

I have already tried something like this (located on top of the other (global) location settings):

    location ~ /users/ {
        alias "E:/xampp/srv/project-files/projectX/files/users";
        expires -1;
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    }

I was hoping that it would at least disable caching for all the files in this folder and below, but the only result I get from this is HTTP 403.

I can live with caching disabled from the users folder down if that works, but the best solution would be to disable caching for the whole path (with the user-hash variable included) and only for a specific file type (*.jpg).

Any idea or recommendation how to achieve this? PS: Nginx is new to me - I have spent maybe 8 hours with this technology so far, so sorry if this is a stupid question, but I can't figure it out or find it anywhere. Thank you in advance!
NGINX, Disable cache in specific folder for a specific file type
It is actually much simpler:

location = / {
    # Exact domain match, with or without slash
}

location / {
    # Everything except exact domain match
}

Because location = / is more specific, it is always preferred if only the domain name is requested (the order of the location blocks does not matter). You need regex in Nginx much less than you would think, because normal location blocks match every URL whose beginning matches, so location /bla matches every URL on the domain which starts with /bla (like /blabla, /bla/blu, or /bla.jpg). I would mainly recommend regex if you need capturing groups and do something with those.
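Applied to the setup in the question, a minimal sketch could look like the following (the proxy headers are trimmed to the essentials here; backendPro is the upstream name taken from the question):

location = / {
    root /var/www/prod/client/www;    # only http://example.com and http://example.com/
}

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://backendPro;     # every other URI
}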
I would like to create a location which catches both http://example.com and http://example.com/, and another location which catches everything else. The first one will serve static HTML and the other one is for the API and other stuff. I've tried this:

location ~ /?$ {
    root /var/www/prod/client/www;
}

location ~ /.+ {
    # https://stackoverflow.com/questions/21662940/nginx-proxy-pass-cannot-have-uri-part-in-location
    rewrite ^/(.*) /$1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_connect_timeout 1s;
    proxy_send_timeout 30s;
    proxy_read_timeout 300s;
    send_timeout 300s;
    proxy_redirect off;
    proxy_pass http://backendPro;
}

But it does not work. I've tried many options in the first location, but when it matches, the second one doesn't, and vice versa:

location ~ \/?$ {
location ~ ^/?$ {
nginx location with and without trailing slash
According to the Nginx docs: "This directive is available as part of nginx commercial subscription."
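The open-source build does ship passive health checking, which is often enough: nginx marks an upstream server as unavailable after a number of failed proxied requests. A minimal sketch (the server addresses are placeholders):

upstream backend {
    # After 3 failed attempts within 30s, skip this peer for 30s
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}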
I am new to nginx. I am using:

health_check uri=/some/uri

but on running the test with this command:

sudo /usr/sbin/nginx -t -c /etc/nginx/nginx.conf

I get the following error:

nginx: [emerg] unknown directive "health_check" in /etc/nginx/sites-enabled/abc.conf:121

Can someone tell me what is wrong here? I have used apache2 to do all this jing-bang, but I do not have much knowledge about nginx.
nginx unknown directive health_check
If you look at /etc/nginx/sites-enabled/ you will see two files, default.save and default. Just remove one of them:

sudo rm -rf /etc/nginx/sites-enabled/default.save
When I try to restart nginx and run this command in the console:

nginx -t

I get an error:

nginx: [emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/default.save:20
nginx: configuration file /etc/nginx/nginx.conf test failed

sites-enabled/default:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    passenger_enabled on;
    rails_env production;
    root /home/hh/public;
    access_log /var/log/nginx/host.access.log;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

nginx.conf:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

One hour ago everything worked fine, but after restarting nginx I have this issue.
nginx: [emerg] a duplicate default server
Why not move the images up and then deny all?

location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) {
    add_header Cache-Control public;
    add_header Cache-Control must-revalidate;
    expires 7d;
}

location / {
    deny all;
}

"there is no syntax for NOT matching a regular expression. Instead, match the target regular expression and assign an empty block, then use location / to match anything else." - From http://wiki.nginx.org/HttpCoreModule#location

Edit: Removed "=" from "location /". To quote the docs:

location = / {
    # matches the query / *only.*
}

location / {
    # matches *any query*, since all queries begin with /, but regular
    # expressions and any longer conventional blocks will be
    # matched first.
}

My bad.
I'm trying to set up nginx so "static.domain.com" can only serve images. This is what I have come up with, but I know it can be done more efficiently. I want to serve 403.html if someone tries to access any .htm or .php files, or directories (anything else I'm missing?), with the exception, of course, of the 403.htm and static.htm files. Any ideas how I can secure this properly?

server {
    listen xx.xx.xx.xx:80;
    server_name static.domain.com;
    root /www/domain.com/httpdocs;
    index static.htm;
    access_log off;
    error_log /dev/null crit;
    error_page 403 /403.html;

    # Disable access to .htaccess or any other hidden file
    location ~ /\.ht {
        deny all;
    }

    location ~* \.php {
        deny all;
    }

    # Serve static files directly from nginx
    location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) {
        add_header Cache-Control public;
        add_header Cache-Control must-revalidate;
        expires 7d;
    }
}
nginx - serve only images
Here is a solution using ngx_http_mirror_module (available since nginx 1.13.4):

server {
    location / {
        proxy_pass http://17.0.0.1:8000;
        mirror /s1;
        mirror /s2;
        mirror /s3;
    }

    location /s1 {
        internal;
        proxy_pass http://17.0.0.1:8001$request_uri;
    }

    location /s2 {
        internal;
        proxy_pass http://17.0.0.1:8002$request_uri;
    }

    location /s3 {
        internal;
        proxy_pass http://17.0.0.1:8003$request_uri;
    }
}

nginx will:

- send the same request to all servers
- wait for all of them to finish
- respond with the http://17.0.0.1:8000 response (and ignore the others)
The following fragment will pick one server at a time. Is there a way to hit them all at once?

upstream backend {
    server 17.0.0.1:8000;
    server 17.0.0.1:8001;
    server 17.0.0.1:8002;
    server 17.0.0.1:8003;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
Is there a way to configure Nginx to broadcast incoming requests to multiple upstream servers simultaneously?
I found a linked question with a post that solved this problem. The post instructed the following insertion prior to app.UseAuthentication():

var forwardedHeaderOptions = new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
};
forwardedHeaderOptions.KnownNetworks.Clear();
forwardedHeaderOptions.KnownProxies.Clear();
app.UseForwardedHeaders(forwardedHeaderOptions);

Couple this with your Nginx configuration; it must forward this information:

#
# Proxy WEB
#
location / {
    proxy_pass http://;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

To be thorough, I had already tried forwarding header information, to no avail. These two lines did the trick:

forwardedHeaderOptions.KnownNetworks.Clear();
forwardedHeaderOptions.KnownProxies.Clear();

Kudos to charlierlee over in this post.
I have an ASP.NET Core 2.0 app which authorizes users with Azure AD using the OpenIdConnect API. The callback URIs of the Azure app entry are defined as https://localhost:44369/signin-oidc and https://domain.tld/signin-oidc. When I deploy my app on localhost with IIS Express, everything works fine and I can authenticate users correctly. When I deploy my app to a Linux system with Nginx configured as a reverse proxy to the app, authentication doesn't work. Azure AD shows the following error message:

AADSTS50011: The reply address 'http://domain.tld/signin-oidc' does not match the reply addresses configured for the application. More details: not specified

Obviously my app tells Azure AD to redirect back to the http address and Azure AD refuses to do so (fortunately). I guess the problem is that my app thinks it uses http because it listens on http://localhost:5000/ for the reverse proxy.

public void Configure(string name, OpenIdConnectOptions options)
{
    options.ClientId = _azureOptions.ClientId;
    options.Authority = $"{_azureOptions.Instance}{_azureOptions.TenantId}";
    options.UseTokenLifetime = true;
    options.CallbackPath = _azureOptions.CallbackPath;
    options.RequireHttpsMetadata = true;
}

This is the code I use to configure OpenIdConnect. Specifying an absolute path for CallbackPath yields an exception. Is there any other way to tell OpenIdConnect to always use https for the CallbackPath? In case my Nginx is not configured correctly, this is part of my configuration:

location / {
    # redirect to ASP.NET application
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
}

Any help is highly appreciated!
AspNetCore Azure AD Connect Callback URL is http, not https
Kubernetes Ingress as a generic concept does not solve the issue of exposing/routing TCP/UDP services. As stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want that with the ingress controller. And please mind that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
I am new to Kubernetes and the Nginx Ingress tools, and now I am trying to host a MySQL service using a vhost in the Nginx Ingress on AWS. I have created a file something like:

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - name: http
              containerPort: 3306
              protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql
  labels:
    app: mysql
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mysql.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mysql
              servicePort: 3306

My load balancer (created by the Nginx Ingress) port configuration looks like:

80 (TCP) forwarding to 32078 (TCP)   Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)  Stickiness options not available for TCP protocols

mysql.example.com is pointing to my ELB. I was expecting that from my local box I could connect to MySQL with something like:

mysql -h mysql.example.com -u root -P 80 -p

which is not working out. Instead of NodePort, if I try with LoadBalancer, it creates a new ELB for me which works as expected. I am not sure if this is the right approach for what I want to achieve here. Please help me out if there is a way of achieving the same using the Ingress with NodePort.
How to access MySql hosted with Nginx Ingress+Kubernetes from client
Your header contains an underscore (_). By default, nginx treats headers with an underscore as invalid and drops them. You should enable the underscores_in_headers directive. Otherwise, consider changing the header name to one without underscores: GH-client will be perfectly valid and proxied to your backend server.
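A minimal sketch of where the directive would go in the configuration from the question (it is valid at the http or server level):

server {
    server_name localhost;
    underscores_in_headers on;   # stop nginx from silently dropping GH_client

    location / {
        proxy_pass http://localhost:8080;
    }
}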
I am a beginner at nginx. I have a simple webserver on 8080 that I want to pass all traffic to in this rather small environment. My proxy seems to work, except that a custom header is not there when it gets to my upstream server. The server block is below. What would I need to add to this to keep my custom header? In this case the custom header was set in AngularJS, but I don't think that has anything to do with it, as it works fine going directly to 8080 on the server. ($httpProvider.defaults.headers.common['GH_client'] = client_id();)

server {
    server_name localhost;

    location / {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_pass_header X-CSRF-TOKEN;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
    }
}

Thanks for any help.
simple nginx reverse proxy seems to strip some headers
One option is to use the auth_request module. It's not designed with your scenario in mind and is not a default Nginx module, so you need to build from source and compile it in with ./configure --with-http_auth_request_module.

auth_request is used to pre-authenticate Nginx requests via a remote HTTP call. As long as the response header is HTTP 200, the initial request is processed as normal. This could be used to send the request to your AdminService, and the response would be able to determine what happens next. Something like:

# Default location
location / {
    auth_request /AdminService;

    # Look for X_UpstreamHost: header in the response
    auth_request_set $x_upstreamhost $upstream_http_x_upstreamhost;

    # Use the value of the response header to choose the internal processing host
    proxy_pass http://$x_upstreamhost;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}

# Send requests for AdminService to the AdminService
# This expects AdminService to be listening on a path called AdminService
# and based at ##adminip##
location /AdminService {
    proxy_pass http://##adminip##;
}

This will send incoming requests first to the host defined by AdminService. This service must respond with a normal 200 header and also X_UpstreamHost: #internalHost#, where #internalHost# is the IP or domain name of the host you want to handle the request. Try it out, and if you run into issues, post your server {} block and somebody will take a look.
What: I want to make a request to a web service from within nginx for each request that goes through nginx, and apply some processing based on the response I get from the web service.

Application: I am using nginx as a reverse proxy with multiple web services that traffic is routed to. I want to add an additional web service (let's call it AdminService) that would act as an admin; this service would handle things like security, billing, and other traffic analytics and preprocessing. For every request that goes through nginx I need to make a request to AdminService; AdminService will then analyse the request, update some statistics and the like, and respond with some tags. nginx will then update some headers based on the returned tags and forward the request to the appropriate URL.

I've taken a look at the Lua module and it doesn't seem to do web service calls. I also see that there are Java, Groovy and Clojure modules available; is this perhaps what I should be looking at? Otherwise, what should I be looking at?
How do I make web service calls within nginx?
You can access the POST body via the FCGI_stdin stream. For example, you can read from it one byte at a time using FCGI_getchar, which is a short form for FCGI_fgetc(FCGI_stdin). You can read larger chunks of data in a single call using FCGI_fread. All of this I found by looking at the source. These sources often reference something called "H&S", which stands for "Harbison and Steele", the authors of the book C: A Reference Manual; the numbers refer to chapters and sections of that book.

And by the way, it's called "stdio" for "STanDard Input/Output", not "studio". The functions should mostly behave like their counterparts without the FCGI_ prefix, so for details look at the man pages of getchar, fread and so on.

Once you have the bytes in your application, you can write them to a file, either using normal stdio operations or files opened via FCGI_fopen. Note, however, that the input stream will not directly correspond to the content of an uploaded file. Instead, MIME encoding is used to transfer all form data, including files. You'll have to parse that stream if you want to access the file data.
I am using the library from http://fastcgi.com/ in a C++ application as a backend, with the nginx web server as a front-end. I am posting files from an HTML form successfully and can see the temporary files on the nginx server side. But I can't figure out how to access the body of a multipart POST request using fastcgi_stdio. My nginx conf file:

location /upload {
    # Pass altered request body to this location
    upload_pass @test;

    # Store files to this directory
    # The directory is hashed, subdirectories 0 1 2 3 4 5 6 7 8 9 should exist
    upload_store /www/test;

    # Allow uploaded files to be read only by user
    upload_store_access user:rw group:r all:r;

    # Set specified fields in request body
    upload_set_form_field $upload_field_name.name $upload_file_name;
    upload_set_form_field $upload_field_name.content_type "$upload_content_type";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";

    # Inform backend about hash and size of a file
    upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
    upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";

    upload_pass_form_field "^submit$|^description$";
    upload_cleanup 400 404 499 500-505;
}

include fastcgi.conf;

# Pass altered request body to a backend
location @test {
    fastcgi_pass localhost:8080;
}

Now, how can I handle/get the POST request body in my FastCGI C++ application, and how do I write it to a proper file on the FastCGI app side? Is there any better, faster module to achieve this? Thank you.
How to access the body of a POST request in a FastCGI C/C++ application
You have a few questions in there. For the "add nodes to haproxy without restarting it" part: what I do for a similar problem is prepopulate the config file with server names, e.g. web01, web02 ... web20, even if I only have 5 web servers at the time. Then in my hosts file I map those names to the actual IPs of the web servers. To add a new server, you just create an entry for it in the hosts file, and it will start passing health checks and get added.

For automated orchestration, it really depends on your environment and you'll probably have to write something custom that fits your needs. There are paid solutions (Scalr comes to mind) to handle orchestration too.
I'm very sure this problem has been solved, but I can't find any information anywhere about it... How do sysadmins programmatically add a new node to an existing, running load balancer? Let's say I have a load balancer running and already balancing my API server across two EC2 instances, and suddenly there's a traffic spike and I need a third node in the load balancer, but I'm asleep... It would be wonderful if I had something monitoring RAM usage and some key performance indicators that tells me when I should add another node, and even better if it could add a new node to the load balancer on its own.

I'm confident that this is possible and even trivial to do with node-http-proxy and distribute, but I'd like to know if this is possible to do with HAProxy and/or Nginx... I know Amazon's Elastic Load Balancing is probably my best bet, but I want to do it on my own (I want to spawn instances from Rackspace, EC2, Joyent and probably others as is convenient).

Once again, spawning a node is easy; I'd like to know how to add it to haproxy.cfg or something similar with Nginx without having to reload the whole proxy, and to do that programmatically. Bash scripting is my best bet for this, but it still has to reload the whole proxy, which is bad because it loses connections...
Programmatically add nodes to a load balancer like Haproxy?
The indentation was wrong: the certificate needed to be multiline. I had it as a single line when trying to fix it in the parser.

/etc/pki/tls/certs/server.crt:
  mode: "000400"
  owner: root
  group: root
  content: |
    -----BEGIN CERTIFICATE-----
    MIIFSDCCBDCgAwIBAgIJAOA6rlWwXuEJMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
    VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
To enable HTTPS on my Node.js app I have followed this page and included an .ebextensions\https-instance.config file in my app. I sent my CSR to GoDaddy and downloaded my cert file. When running the app on localhost I create my HTTPS server using the keys like below, and the app runs fine:

var options = {
    cert: fs.readFileSync('e03aae55b05ee109.pem'),
    key: fs.readFileSync('key.pem')
};

In my https-instance.config file I pasted the contents of my key and cert into the area specified in the link. I validated the YAML using an online parser. I'm getting a 503 Service Unavailable error when hitting the domain, and on checking the app logs it's throwing out:

[emerg] 21361#0: PEM_read_bio_X509_AUX("/etc/pki/tls/certs/server.crt") failed (SSL: error:0906D064:PEM routines:PEM_read_bio:bad base64 decode)

I have validated my key and cert as described, and they match. Any thoughts on what else could be wrong?
SSL: error:0906D064:PEM routines:PEM_read_bio:bad base64 decode
The uwsgi_params file is located inside the Nginx directory (/etc/nginx on my Linux machine):

me:~$ ls /etc/nginx | grep uwsgi
uwsgi_params

and it has no extension.
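Because a bare include uwsgi_params; is resolved relative to nginx's configuration prefix, you can also spell out the absolute path if your layout differs. A minimal sketch, reusing the root and socket address from the question:

location / {
    root /var/www/customersite3;
    include /etc/nginx/uwsgi_params;   # absolute path works from any prefix
    uwsgi_pass 127.0.0.1:3033;
}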
I am having trouble wrapping my head around how this file works. I seem to see it in every example. See below:

server {
    listen 80;
    server_name sivusto3.fi;
    access_log /var/log/customersite3/access_log;

    location / {
        root /var/www/customersite3;
        uwsgi_pass 127.0.0.1:3033;
        include uwsgi_params;
    }
}
Where should the uwsgi_params file be located and what is its extension?
I had the same issue using Gunicorn, Nginx, Django and Requests: every time I did

response = requests.get('http://my.url.com/here')

the workers would time out. I solved the problem by switching from synchronous (sync) workers to asynchronous (eventlet) workers. If you are launching from the command line, add:

-k 'eventlet'

If you are using a config file, add:

worker_class = "eventlet"
I have a Django application running in Gunicorn behind Nginx. Everything works fine, except for one strange thing: I have a "download" view and a RESTful JSON API. When the download view is called, I use urllib2 to access the JSON API to get information. And exactly when I try to do this HTTP GET request to the JSON API, the request times out with an error: HTTP Error 504: Gateway Time-out.

When I run the code with ./manage.py runserver everything works fine. The HTTP GET request to the JSON API also takes only a few milliseconds, so there is no danger of running into a timeout.

Here is the situation in pseudo code:

myproject/views.py (accessible as http://myproject.com/download):

1 def download(request, *args, **kwargs):
2     import urllib2
3     opener = urllib2.build_opener()
4     opener.open('http://myproject.com/api/get_project_stats')

The opener.open() call in line four runs into a timeout when running in Gunicorn; when running with ./manage.py runserver everything works fine (and the API call takes only a few milliseconds). Has anyone had the same problem? And more important: how have you solved it?
Gunicorn worker timeout
Eventually I gave up on trying to do this "neatly". The final solution was just to make a settings variable that I prefixed to STATIC_URL and to the URL patterns in the project's urls.py file. No SCRIPT_NAME or anything complicated on the nginx side.
I have already gone through some previous threads:

- How do I set subdirectory in nginx with Django
- how to deploy django under a suburl behind nginx
- Serving flask app on subdirectory nginx + uwsgi

The basic lesson is that you should only need to configure your site(s-available) to achieve this. I have now tried various permutations of:

server {
    listen 80;
    server_name www.example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /path/to/project;
    }

    location /project/ {
        root /path/to/project;
        include /etc/nginx/uwsgi_params;
        uwsgi_param SCRIPT_NAME /project;
        uwsgi_modifier1 30;
        uwsgi_param PATH_INFO "$1";
        uwsgi_pass unix:/tmp/project.sock;
    }
}

Everything runs perfectly when I define the location to be "/" (and remove SCRIPT_NAME, modifier1 and PATH_INFO; root doesn't matter). But trying to use a subdirectory always results in "Page not found (404)":

Request URL: http://www.example.com/project/project

(edit) It's ADDING a directory to the request. What am I not figuring out? (I have tried forced_script_name, which I shouldn't have to use and which gives other types of headaches, and the uwsgi config setting.)

EDIT:

location /project/ {
    root /path/to/project;
    include /etc/nginx/uwsgi_params;
    uwsgi_param SCRIPT_NAME /project;
    uwsgi_pass unix:/tmp/project.sock;
}

does not work... The socket is there and works when I configure for /; I just can't see what I'm missing.

UPDATE:

location ~ /project(?<path_info>/.*|$) {
    include /etc/nginx/uwsgi_params;
    uwsgi_pass unix:/tmp/project.sock;
    uwsgi_param PATH_INFO $path_info;
    uwsgi_param SCRIPT_NAME /project;
}

This loads up the site, but all links point to http://example.com/link/to/something instead of http://example.com/project/link/to/something.
nginx serving Django in a subdirectory through uWSGI
This issue turned out to be very specific to my setup (Nginx, PHP-FCGI, Symfony). There were a handful of issues at play that caused it:

- Symfony does not include a Content-Length nor a Connection: close header
- PHP-FCGI does not support the fastcgi_finish_request function
- Nginx buffers the response from PHP-FCGI because gzip is on

The solution was to switch from PHP-FCGI to PHP-FPM in order to get support for fastcgi_finish_request. Symfony internally calls this before executing the kernel terminate logic, thereby definitively closing the connection. Another way to solve this would be to turn off gzip in Nginx, but that wasn't really an option for me.
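On the nginx side, switching to PHP-FPM is mostly a matter of pointing fastcgi_pass at the FPM socket. A minimal sketch (the socket path is an assumption and varies by distribution):

location ~ \.php$ {
    # PHP-FPM provides fastcgi_finish_request(), which lets Symfony flush
    # the response before kernel.terminate listeners run
    fastcgi_pass unix:/var/run/php-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}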
My understanding of kernel.terminate is that it triggers after the response has been returned to the client. In my testing, though, this does not appear to be the case. If I put a sleep(10) in the function that's called on kernel.terminate, the browser also waits for 10 seconds. The processing seems to be happening before the response is sent. I have the following in config:

calendar:
    class: Acme\CalendarBundle\Service\CalendarService
    arguments: [ @odm.document_manager, @logger, @security.context, @event_dispatcher ]
    tags:
        - { name: kernel.event_subscriber }

My subscriber class:

class CalendarService implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return array(
            'kernel.terminate' => 'onKernelTerminate'
        );
    }

    public function onKernelTerminate()
    {
        sleep(10);
        echo "hello";
    }
}

UPDATE: This appears to be related to Symfony not sending a Content-Length header. If I generate it, the response returns properly:

// app_dev.php
...
$kernel = new AppKernel('dev', true);
$kernel->loadClassCache();
$request = Request::createFromGlobals();
$response = $kernel->handle($request);

// --- START EDITS ---
$size = strlen($response->getContent());
$response->headers->set('Content-Length', $size);
$response->headers->set('Connection', 'close');
// ---- END EDITS ----

$response->send();
$kernel->terminate($request, $response);
Response returned only after kernel.terminate event
Doing SSL -> SSL doesn't send the whole TCP packets to your web server: AWS decrypts the packets using the certificate and re-encrypts them, and your backend only receives the modified packets. The viable option is to change the listeners' protocol to TCP, but you will then need the nginx PROXY protocol patch for the HTTP headers to work properly. I'm having the same problem as well, and I'm waiting for either AWS to enable NPN negotiation on ELBs or for nginx to add the accept-proxy patch to its module.
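For reference, once nginx has PROXY protocol support compiled in, the receiving side looks roughly like this. A minimal sketch only: the spdy listen parameter matches the spdy-patched builds of that era, and the trusted network range is an example you would replace with your ELB-facing subnet:

server {
    # Accept the PROXY protocol header that ELB prepends on TCP listeners
    listen 443 ssl spdy proxy_protocol;

    # Recover the real client address from the PROXY header
    set_real_ip_from 10.0.0.0/8;        # the ELB-facing network (example)
    real_ip_header proxy_protocol;
}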
We've been using nginx compiled with the SPDY module for some time now and, despite it only implementing draft 2 of the spec, we are quite pleased with its performance. However, we now have the need to scale horizontally and have put our EC2 instances behind an Elastic Load Balancer. Since ELB doesn't support the NPN protocol, we have set the listeners to the following:

SSL 443 -> SSL 443

We have also enabled the new proxy protocol as described here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html

Everything works completely fine with this configuration; our app is successfully load balanced across our instances. However, when running http://spdycheck.org/ it reports that SPDY is not enabled. Yet if I point spdycheck at the elastic IP of a single instance, it correctly reports SPDY as being enabled. Any help would be greatly appreciated.
Has anyone managed to get SPDY to work behind an Amazon ELB?
If you are using Docker Machine on Windows, Docker has limited access to your Windows filesystem. By default, Docker Machine tries to auto-share your C:\Users (Windows) directory. So the folder .../Dev/docker/nginx-www/nginx/html/ must be located somewhere under the C:\Users directory on the host.

All other paths come from your virtual machine's filesystem, so if you want to make some other host folder available for sharing, you need to do additional work. In the case of VirtualBox, you need to make the host folder available as a shared folder in VirtualBox.
I just want to test Docker, and it seems something is not working as it should. When I have my docker-compose.yml like this:

web:
  image: nginx:latest
  ports:
    - "80:80"

and I open my docker.app domain (a sample domain pointed at the Docker IP) in the browser, I get the default nginx web page. But when I try something like this:

web:
  image: nginx:latest
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"

and run:

docker-compose up -id

then requesting the same URL in the browser gives me:

403 Forbidden
nginx/1.9.12

I'm using Windows 8.1 as my host. Am I doing something wrong, or maybe folders cannot be shared this way?

EDIT

Solution (based on @HemersonVarela's answer): the volume I was trying to pass was in the D:\Dev\docker location, so I was using /d/Dev/docker at the beginning of my path. But looking at https://docs.docker.com/engine/userguide/containers/dockervolumes/ you can read:

"If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory."

So what I needed to do was to create my nginx-www/nginx/html directory inside the C:\Users\marcin directory, ending up with:

web:
  image: nginx:latest
  volumes:
    - /c/Users/marcin/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"

and this works without a problem. Files are now shared as they should be.
Docker compose - share volume Nginx
Your config looks OK. I think the problem is this (correct me if I'm wrong): you have console.local.com listening on port 81, which means you need to access it as http://console.local.com:81/. When you access it as http://console.local.com/ (no explicit port, so it defaults to port 80), nginx will check, notice that nothing is listening on port 80 for that server_name, and consequently pass the request to the default server block. Since the default server block is the first one (in the absence of configuration to change it), you end up in the chat.local.com handling.

In all likelihood you want to change console.local.com to listen on port 80 as well, since:

- the server_name directive in both server blocks is enough to differentiate the requests
- it avoids having to add the :81 to the domain name in requests all the time
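In other words, a sketch of the suggested change, where both virtual hosts share port 80 and are told apart purely by server_name (the proxy settings are trimmed to the essentials here):

server {
    listen 80;
    server_name chat.local.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;                       # same port; server_name disambiguates
    server_name console.local.com;
    location / {
        proxy_pass http://127.0.0.1:8888;
    }
}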
I am trying to configure nginx on two ports with the same instance, for example on port 80 and port 81, but no luck so far. Here is an example of what I am trying to do:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name chat.local.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 81;
        server_name console.local.com;

        location / {
            proxy_pass http://127.0.0.1:8888;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
        }
    }
}

When I try to run console.local.com, it shows the content from chat.local.com. Is there a way to make nginx run on two ports? Thanks in advance!
How to run Nginx on multiple ports
You should upgrade to nginx 1.7.3 or higher. From the changelog: "Feature: weak entity tags are now preserved on response modifications, and strong ones are changed to weak."
If I set up nginx to use gzip, it removes any ETag header. The reasoning behind this is that the same resource cannot be byte-for-byte identical given that gzip has various compression levels. But nginx also removes a weak ETag, which just means that the resources are semantically equivalent. This seems like incorrect behavior by nginx. Am I missing something? If not, is there a way to fix this?
nginx - missing etag when gzip is used
In short: it's not possible, because PostgreSQL has its own handshake that precedes the SSL handshake. To avoid this you can simply set PostgreSQL to use SSL at its level and use Nginx's TCP stream as a pass-through (the communication is then encrypted end-to-end by PostgreSQL).

Source: https://www.postgresql.org/message-id/d05341b9-033f-d5fe-966e-889f5f9218e5%40proxel.se

"Sadly that confirms what I feared. Adding SNI to the PostgreSQL protocol won't help with solving your use case because the PostgreSQL protocol has its own handshake which happens before the SSL handshake, so the session will not look like SSL to HAProxy. Just like HAProxy does not support STARTTLS for IMAP [1], I do not think that it will ever support SSL for the PostgreSQL protocol, SNI or not. To solve your use case I recommend using something like stunnel, which does support SNI, to wrap the unencrypted PostgreSQL protocol in SSL."

Setting SSL at Nginx's TCP stream level will trigger the following errors (also reported here: https://www.postgresql.org/message-id/flat/15688-55463748a04474a5%40postgresql.org):

When trying to connect with psql you get:

psql: error: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.

When looking at the Nginx error log you see:

2021/02/01 19:18:01 [info] 6175#6175: *3 client 127.0.0.1:57496 connected to 127.0.0.1:5433
2021/02/01 19:18:01 [info] 6175#6175: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking, client: 127.0.0.1, server: 127.0.0.1:5433
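A minimal sketch of the pass-through variant, reusing the listen port and backend address from the question. Note there is no ssl parameter on the listen line; TLS is negotiated by PostgreSQL itself (ssl = on in postgresql.conf):

stream {
    server {
        listen 20000;                     # plain TCP: no "ssl" here
        proxy_pass 192.168.1.123:30000;   # PostgreSQL terminates TLS itself
    }
}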
I’ve been trying to use NGINX as a TLS terminator for my PostgreSQL database but without success.When I try to connect to the database I get the following error:server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.When I remove thessloption inlistenI can connect without any errors. I’ve tried running another service(Eclipse Mosquitto) with the same NGINX settings, TLS enabled, and it works fine.I’m using Postico as DB tool.Here are the NGINX settings I'm using.# nginx.conf stream { server { listen 20000 ssl; # Can’t connect with postgre but with mosquito # listen 20000; # Can connect with postgre and mosquitto proxy_pass 192.168.1.123:30000; include /home/custom/ssl_conf.conf; } } # ssl_conf.conf ssl_certificate /etc/nginx/fullchain.pem; ssl_certificate_key /etc/nginx/privkey.pem; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; ssl_dhparam /etc/nginx/ssl/dhparam.pem; ssl_protocols TLSv1.2; ssl_prefer_server_ciphers on;
NGINX TLS termination for PostgreSQL
nginx discards the "HTTP/1.1 304 Not Modified\r\n" status line; it uses (and eats) the Status header instead. If my FastCGI program returns the header "Status: 304\r\n", then I get this response:

HTTP/1.1 304
Server: nginx/1.6.2
Date: Sat, 21 May 2016 10:49:27 GMT
Connection: keep-alive

As you can see, there is no Status: 304 header; it has been eaten by nginx.
I have a custom FastCGI application behind Nginx and I'm struggling to get Nginx to return anything other than a 200 status code. I've tried the following:

- Setting fastcgi_intercept_errors on.
- Returning codes via ApplicationStatus in the EndRequest.
- Returning errors on the StdError stream.
- Sending any of the following headers:
  "Status: 404 Not Found"
  "HTTP/1.1 404 Not Found"
  "X-PHP-Response-Code: 404"
  "Status: 404 Not Found;"
  "HTTP/1.1 404 Not Found;"
  "X-PHP-Response-Code: 404;"

Any help would be great; I'm very stuck.
FastCGI and Nginx - Return HTTP Status
I've recently written a module doing exactly what you need. You might need something like HAProxy in front of nginx to support TLS tunneling.
Is there any solution or patch that would make nginx work with a SOCKS upstream? Something like this:

server {
    location / {
        proxy_pass socks5://ip:port/;
    }
}
socks5 proxy/tunnel for nginx upstream?