You can use proxy_pass to map requests to your Tomcat port. For example, if your Tomcat port is 8080, your conf/nginx.conf should be configured like this:

    http {
        ...
        server {
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }

Reload it with sbin/nginx -s reload; then, when you access http://127.0.0.1, the request is forwarded to Tomcat. The configuration file is commonly located at /etc/nginx/nginx.conf.
I am new to Nginx and I need your help. According to many forums I understood that all our static pages should be served by Nginx, and when a request needs dynamic data it should be passed to Tomcat, which generates the response. Currently I have just set it up so that every request is passed directly to Tomcat, which responds to it, but I don't think that is a good solution for performance. Can anyone help me?
NGINX with Tomcat configuration
Yes you can. In the configuration you are importing for the server, just add this:

    location /health/startup {
        add_header Content-Type text/plain;
        return 200 'healthy';
    }
I am using the Nginx container to host a SPA application in Kubernetes. Aside from the static files hosted for the SPA app, I also need to host the routes for health checks. So, the route /health/startup needs to return the text healthy when a GET request is sent to it. I suppose I could just make a folder called "health" and then put a file in it called startup with the text healthy in it. But that seems a bit odd to me, and I worry that the file structure may get changed and that my health checks will then start failing. Is there a way, using the Nginx container, to return a given value (i.e. "healthy") when a request comes to a specific route? (And not mess up the rest of my static file serving that is going on.)
Return a string using the Nginx container
There is no way to add extra server configuration without modifying nginx.conf first. The good news is that you will have to modify nginx.conf only once. Just add this line to your nginx.conf:

    include /etc/nginx/config.d/*.conf;

You can name the directory and path as you like. Create the directory and save your extra configuration there as extra.conf, with a .conf extension. Any file you save with a .conf extension in the directory /etc/nginx/config.d will be automatically added to your nginx.conf. You can even save multiple configurations, like extra1.conf and extra2.conf, for different uses, and delete one without affecting the others.
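As a sketch of the whole arrangement (the directory name follows the answer; the server block is the one from the question):

    # /etc/nginx/nginx.conf -- the one-time edit, inside the http { } block
    http {
        include /etc/nginx/config.d/*.conf;
    }

    # /etc/nginx/config.d/extra.conf -- picked up automatically from now on
    server {
        listen 0.0.0.0:8081;
        rewrite ^ https://$host$request_uri? redirect;
    }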
I want to add this extra config to my nginx.conf:

    server {
        listen 0.0.0.0:8081;
        rewrite ^ https://$host$request_uri? redirect;
    }

But as my app is deployed on a hosting service, I don't want to modify the already present nginx.conf; it can be problematic. Is there any way I can add this extra configuration without modifying nginx.conf?
Nginx: how to add extra server configuration without modifying nginx.conf
Use Nginx (and Nginx only) for SSL; that's the standard. As you set it up, Nginx works as a reverse proxy, so it will feed your program unencrypted data locally for the encrypted data it receives on port 443. It won't work if you also use SSL in your Node program.
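A minimal sketch of that split, reusing the certificate paths from the question and assuming the Node app is switched to a plain http.createServer on port 8443:

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /opt/projetos/nodejs-project-ssl/vectortowns-cert.pem;
        ssl_certificate_key /opt/projetos/nodejs-project-ssl/vectortowns-key.pem;

        location / {
            proxy_pass http://127.0.0.1:8443;  # plain HTTP to Node; TLS ends at nginx
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }
    }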
I'm using Nginx to publish static content on port 80 and a 443 redirect (with SSL) to NodeJS. The Nginx configuration is as follows:

    server {
        listen 443 ssl;
        ssl_certificate /opt/projetos/nodejs-project-ssl/vectortowns-cert.pem;
        ssl_certificate_key /opt/projetos/nodejs-project-ssl/vectortowns-key.pem;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;
        server_name 127.0.0.1:443;

        location / {
            proxy_pass https://127.0.0.1:8443;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

With NodeJS, Express, and EJS, I'm publishing dynamic content on port 8443, also configured to use HTTPS. See the JavaScript code below (only the parts that are important for the question).

    // other code...
    /* Enable https */
    var privateKey = fs.readFileSync('/opt/projetos/nodejs-project-ssl/vectortowns-key.pem');
    var certificate = fs.readFileSync('/opt/projetos/nodejs-project-ssl/vectortowns-cert.pem');
    var credentials = { key: privateKey, cert: certificate };
    // other code...
    /* Controllers */
    app.use(require('./controllers'));
    https.createServer(credentials, app).listen(
        configuration.server.port,
        configuration.server.address,
        function(){
            logger.info('Server started: ' + configuration.server.address + ':' + configuration.server.port);
        });

My questions are: Do I need to configure the SSL certificate and key in both Nginx and NodeJS, or just in one of them? If I only need one, which would be the better option (NodeJS or Nginx)? Thank you!!
Configuring HTTPS on Nginx and NodeJS
To get rid of the deprecated message, you need to use a different ppa: repository. You have to remove the existing packages and the deprecated repository. Then, add the new repository and install the packages you need:

    # Remove old ppa: and its packages
    sudo add-apt-repository ppa:ondrej/php5-5.6 --remove --yes
    sudo apt-get --purge remove php5-common

    # Add the new ppa:
    sudo add-apt-repository ppa:ondrej/php
    sudo apt-get update

    # If you are using it with Apache, run:
    sudo apt-get install libapache2-mod-php5.6

    # If you are using it with Nginx, run:
    sudo apt-get install php5.6-fpm

Subsequently, you have to make changes to the web server configuration, since some paths have changed in the PHP-FPM configuration, etc. More info here.
I installed PHP on Ubuntu 14.04 with Nginx, but the version installed was PHP 5.5.9. Since I wanted to upgrade it to PHP 5.6 I ran the commands below:

    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:ondrej/php5-5.6
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install php5

I got a message stating that the ppa is deprecated, but PHP 5.6 was installed and working fine, only it was showing as PHP 5.6.23-1+deprecated+dontuse+deb.sury.org~trusty+1 (cli). I later went on and entered the commands LC_ALL=C.UTF-8 add-apt-repository ppa:ondrej/php and sudo apt-get install php5.6, which again installed PHP 5.6 for me. Now when I do php -v I get PHP 5.6.23-1+deb.sury.org~trusty+2 (cli), and when I do php5 -v I get PHP 5.6.23-1+deprecated+dontuse+deb.sury.org~trusty+1 (cli). How do I remove the deprecated one?
Remove php 5.6.23-1+deprecated+dontuse+deb.sury.org~trusty+1
You also need to share the files with the php:fpm docker container. The answer is to run the docker php:fpm image with the same volume too:

    docker run -it -p 127.168.66.66:9000:9000 -v /var/www/html/:/var/www/html/ php:fpm
I can't configure Nginx with php-fpm correctly. When I request any PHP script, I get an Nginx 404 Not found error in the browser: File not found. In my php-fpm logs I get:

    172.17.42.1 - 28/Apr/2015:09:15:15 +0000 "GET /index.php" 404

for any PHP script call, and in the Nginx logs I get:

    [error] 28105#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.168.66.66:9000", host: "localhost"

My Nginx virtualhost config is:

    server {
        listen 80;
        root /var/www/html;
        index index.html;
        server_name localhost;

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to displaying a 404.
            try_files $uri $uri/ =404;
        }

        location ~* \.php$ {
            fastcgi_index index.php;
            fastcgi_pass 127.168.66.66:9000;
            #fastcgi_pass unix:/var/run/php5-fpm.sock;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        }
    }

I am running the php-fpm Docker image from the official php repository, started with:

    docker run -it -p 127.168.66.66:9000:9000 php:fpm

The docker ps command shows the following info:

    CONTAINER ID IMAGE   COMMAND   CREATED        STATUS       PORTS                        NAMES
    dbf9f7d1c6f9 php:fpm "php-fpm" 8 seconds ago  Up 7 seconds 127.168.66.66:9000->9000/tcp serene_curie

What's wrong with my config? P.S. Any static files (css, js, images) work on Nginx.
error 28105#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
Let me show you a common pattern for cross-application authentication you can use with Nginx:

1) Build a standalone service called auth_service that works independently from the web applications, as required.

2) Each subdomain app gets an individual location that proxies to the same authentication service:

    location = /auth {
        proxy_pass http://auth_service.localhost/authenticate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }

3) Each individual web app uses the "/auth" location to pass login/pass (based on POST data, headers or temporary tokens).

4) The standalone service's "/authenticate" handler accepts the web app's login/pass and returns 200, or 401 if authentication failed.

The root of this approach is that a "/auth" location sits on each subdomain-based application, and the server side dispatches the call to a single authentication endpoint which can be reused efficiently, so you avoid code duplication.

The Auth Request module is not built by default, but comes with the source code. Before use, compile Nginx with the --with-http_auth_request_module option. UPDATE: Since Nginx 1.5.4 this module comes in the standard distribution without requiring a separate compile.
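For completeness, the piece this answer implies but does not show is the auth_request directive in each location you want protected; a minimal sketch (the upstream name is hypothetical):

    location /app/ {
        auth_request /auth;             # fires a subrequest to the = /auth location above
        proxy_pass http://app_backend;  # hypothetical upstream serving the actual web app
    }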
I'm building an ecosystem of applications under a common domain, with each application under a separate subdomain. I have built an authentication application for the ecosystem, but it requires each other application to be specially configured to use it. Is there a way to configure nginx to manage user sessions, possibly forwarding user information as headers to the various applications?
How can I set up an automatic authentication layer in nginx?
The thing is that when you "rewrite" to a URI having protocol and hostname (that is, http://app.newexample.com/ in your case), Nginx issues a real HTTP redirect (I guess the code will be 301, aka "permanent redirect"). This leaves you only two mechanisms to transfer any information to the handler of the new URL: a cookie, or the URL itself. Since you are redirecting users to a new domain, a cookie is a no-go. But even in the case of a common domain I would choose the URL to transfer this kind of information, like:

    server_name app.example.com;
    location /app/ {
        rewrite ^/app/(.*)$ http://app.newexample.com/$1?from_old=yes;
    }

This gives you the freedom to process it either in Nginx or in the browser (using JavaScript). You may even do what you wanted initially, issuing a special HTTP header for JavaScript, in the new app server's Nginx configuration:

    server_name app.newexample.com;
    location /app {
        if ($arg_from_old) {
            add_header X-From-Old-Site yes;
        }
    }
Right now I am migrating the domain of my app from app.example.com to app.newexample.com using the following nginx config:

    server {
        server_name app.example.com;
        location /app/ {
            rewrite ^/app/(.*)$ http://app.newexample.com/$1;
        }
    }

I need to show a popup banner to notify the user of the domain name migration, and I want to do this based upon the referrer or some kind of other header at app.newexample.com. But how can I attach an extra header to the above rewrite, so that JavaScript can detect that header and show the banner only when it is present? A user going directly to app.newexample.com should not see the popup banner.
Sending extra header in nginx rewrite
Adding the following to nginx.conf fixed the issue for me:

    location / {
        ...
        include uwsgi_params;
        uwsgi_param HTTP_X_FORWARDED_PROTOCOL https;
        uwsgi_param UWSGI_SCHEME $scheme;
    }

along with adding the following to settings.py:

    SESSION_COOKIE_SECURE = True
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
    CSRF_COOKIE_SECURE = True
I've got Django running in uWSGI behind Nginx. When I try to access https://site/admin/ I get the expected login screen. Logging in via the form seems to succeed; however, I simply end up back at the login screen. Firebug shows a redirect to the plain http://site/admin/ URL, which is then redirected by Nginx back to the https URL. Help! I'm confused as to how to force the admin app to use only https URLs. Note this seems to be a related, unanswered question: https://example.com/admin redirects to https://admin in Django Nginx and gunicorn
Accessing Django Admin over HTTPS behind Nginx
Okay, I figured this out. The app ran fine in development mode, so I knew something production-specific was screwing it up. I went into config/environments/production.rb and changed these settings:

    # Full error reports are disabled and caching is turned on
    config.consider_all_requests_local = false # changed from true
    config.action_controller.perform_caching = true # changed from false

And then, after restarting Passenger, Rails showed me the error with a stacktrace. Turns out I had forgotten to precompile the asset pipeline!
I'm having a really tough time diagnosing a 500 error from my application running in production. I've had it working before, but after redeploying via Capistrano I am unable to get it going. Here are the facts:

- The server is set up with nginx + Passenger, and I'm using PostgreSQL.
- Static assets are working properly, as in I'm able to access them just fine in a browser.
- I can access the Rails console via RAILS_ENV=production bundle exec rails console and perform Active Record actions (like retrieving data from the db).
- Within the console, I can run app.get("/"), which returns a 500 error as well (after first showing the query that was run to load the model).
- The production.log file is never written to. I've set permissions 777 on it just for the hell of it. I've also set the log level to :debug with nothing to show for it.
- The nginx log (which Passenger also uses) shows no indication of errors; it just notifies about cache misses.

Because nothing of use is being logged, I have no idea what to do here. I've tried setting full permissions on the entire app directory with no help. Restarted the server multiple times, nothing. The database is there and Rails can clearly communicate with it. I'm not sure what I did to get it to run the first time around. I just don't know why Rails isn't outputting anything to the log.
How to properly diagnose a 500 error (Rails, Passenger, Nginx, Postgres)
First things first: a "web server" is just a piece of software that serves content over the http(s) protocol. That's the minimum functionality, so you threw around a lot of additional features... JBoss/Tomcat are not only "web servers": Tomcat provides the functionality to have a Java application respond to requests sent to the server, and JBoss is much more, providing special techniques "to deploy" your software into the production environment, and more... All these products have the "web server" functionality, but they are distinguished by what happens behind the http request, i.e. what generates the "answer". To confuse you a little more, you can run ASP.NET on an Apache web server (one that has been extended with facilities to "execute .NET code"). And of course you can build composites of all these products, since the http protocol can be used by proxies. For example, you can use an Apache web server as the client access point that authenticates against some database and then forwards the requests to a firewalled IIS server that only allows connections from the Apache. So you can implement authentication (or a load balancer) that may be unsupported on your Windows server... Hope that cleared some things up... rob
I have been a Java web application developer, and now I work on the .NET framework. When I worked in Java web development we used Tomcat/JBoss to deploy our applications, and I thought of Tomcat/JBoss as web servers. When I work in ASP.NET I use IIS to deploy the application, so I thought IIS was another kind of web server. These days I am learning Rails, and I heard of nginx; from Google, it is also a kind of web server. However, I found that some people say we can use nginx and IIS together, or other combinations. Now I am confused. In my opinion a web server should handle requests from the client and return the result, and each web server should have its own situation, for example Tomcat for Java, IIS for ASP.NET. But why Apache/nginx? BTW, I do not mean Apache/nginx are useless, I am just not familiar with them. I wonder if someone can explain it for me?
what is the difference between apache/nginx/IIS
The Django docs you linked to do not suggest you use Apache as a reverse proxy. They simply suggest you use a separate web server, so I'd say the docs are not clear on that subject -- they are not suggesting anything wrong. My initial answer assumed you had nginx as the reverse proxy, because port 80 is the HTTP port, the one used when a browser attempts to go to a URL with no port specified. There are numerous complete guides to setting up nginx + Apache via a quick google search, but here is the gist for setting up nginx:

    location / {
        # proxy / requests to apache running django on port 8081
        proxy_pass http://127.0.0.1:8081/;
        proxy_redirect off;
    }

    location /media/ {
        # serve static media directly from nginx
        root /srv/anuva_project/www/;
        expires 30d;
        break;
    }

All you need to do is remove the proxy lines from your Apache configuration and add the proxy statements to your nginx.conf instead. If you really want to serve your site from port 8081, you could potentially have nginx listen on port 8081 and have Apache listen on a different port. The point is, Apache sits on some obscure port, only serving requests sent to it from nginx, while static file serving is handled by nginx.
I've set up my Django application on Apache + mod_wsgi. To serve the static files I'm using Nginx, as suggested on Django's project website: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/ Apache is running on port 8081 and nginx is on port 80. Now some people have suggested that my configuration is wrong and I should reverse the roles of Apache and Nginx. I'm not sure why that should be. And if indeed my configuration is wrong, why would the Django website suggest the wrong method?
Configuration for Django, Apache and Nginx
Nginx only supports h2c (which is what HTTP/2 without HTTPS is called) via prior knowledge, so you cannot connect using HTTP/1.1 and then upgrade. In fact, if you try to connect using HTTP/1.1 then nginx will error, as it doesn't support HTTP/1.1 and HTTP/2 on the same port unless you are using HTTPS. So for curl you have to use this syntax to jump straight into HTTP/2:

    curl --http2-prior-knowledge localhost:80

I'm not sure if there is a similar config in Envoy, as I don't use it, but if the above works with curl and you still get the same error with Envoy, then at least you've got the answer to your Nginx question. However, I'm not at all sure it's necessary or even wise to enable h2c for your back-end server. For a start, it disables HTTP/1.1 as discussed above, and any other applications, services, or even curl test commands that use HTTP/1.1 will break. In addition, the benefits of HTTP/2 at the back end are questionable, and the main benefit is for client-to-server connections rather than server-to-server back-end connections.
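For reference, if you do enable h2c on the Nginx side despite those caveats, the change is just the http2 flag on the listen directive; a sketch based on the server block from the question:

    server {
        listen 443 http2;  # h2c: prior-knowledge HTTP/2 without TLS;
                           # plain HTTP/1.1 clients on this port will now fail
        server_name example.com www.example.com;
        root /var/www/html;
    }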
I have Envoy Proxy handling SSL termination. Nginx (1.17.0 in a docker container, compiled --with-http_v2_module) is one of several upstream services. As a result, Nginx receives traffic on port 443 but does not use the ssl module:

    server {
        listen 443;
        server_name example.com www.example.com;
        root /var/www/html;
        ...

This works fine, but if I try to add http2 to the end of the listen line I receive:

    curl: (1) Received HTTP/0.9 when not allowed

... not just for the example.com in question, but all servers. I would like Envoy to speak with Nginx via HTTP/2 for obvious performance reasons. Is there some trick to make nginx use http2 on port 443 without SSL termination? Edit: The core nginx.conf:

    user nginx;
    worker_processes 2;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        client_max_body_size 64M;
        sendfile on;
        keepalive_timeout 65;
        fastcgi_cache_path /etc/nginx-cache levels=1:2 keys_zone=multipress:100m inactive=60m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_use_stale error timeout invalid_header http_500;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        gzip on;
        include /etc/nginx/conf.d/*.conf;
    }

Note that I can successfully curl the current Envoy HTTP2 server with an explicit curl --http2 command. The problem is the HTTP2 connection between Envoy and Nginx.
Is it possible to run HTTP/2 on NGINX port 443 without ssl?
You are using the uwsgi module of nginx, but Uvicorn exposes an asgi API. Therefore you should use a "reverse proxy" configuration instead of a uwsgi configuration. You can get more info in the uvicorn documentation: https://www.uvicorn.org/deployment/#running-behind-nginx (see the proxy_pass line).
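A minimal sketch of that reverse-proxy shape, adapted from the uvicorn deployment docs to the socket path used in the question:

    events {
        worker_connections 128;
    }

    http {
        upstream uvicorn {
            server unix:/tmp/uvi.sock;
        }

        server {
            listen 0.0.0.0:8080;

            location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://uvicorn;  # plain HTTP proxying instead of uwsgi_pass
            }
        }
    }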
Files:

    # main.py:
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    def read_root():
        return {"Hello": "World"}

    # nginx.conf:
    events {
        worker_connections 128;
    }

    http {
        server {
            listen 0.0.0.0:8080;
            location / {
                include uwsgi_params;
                uwsgi_pass unix:/tmp/uvi.sock;
            }
        }
    }

    # Dockerfile
    FROM python:3
    COPY main.py .
    RUN apt-get -y update && apt-get install -y htop tmux vim nginx
    RUN pip install fastapi uvicorn
    COPY nginx.conf /etc/nginx/

Setup:

    docker build -t nginx-uvicorn:latest .
    docker run -it --entrypoint=/bin/bash --name nginx-uvicorn -p 80:8080 nginx-uvicorn:latest

Starting uvicorn as usual:

    $ uvicorn --host 0.0.0.0 --port 8080 main:app

works: I can access http://127.0.0.1/ from my browser. Starting uvicorn behind nginx:

    $ service nginx start
    [ ok ] Starting nginx: nginx.
    $ uvicorn main:app --uds /tmp/uvi.sock
    INFO: Started server process [40]
    INFO: Uvicorn running on unix socket /tmp/uvi.sock (Press CTRL+C to quit)
    INFO: Waiting for application startup.
    INFO: Application startup complete.

If I now request http://127.0.0.1/ then: Nginx responds with 502 Bad Gateway, and uvicorn responds with WARNING: Invalid HTTP request received. Hence a connection is established but something is wrong in the configuration. Any ideas?
Nginx reverse proxy on unix socket for uvicorn not working
For version pgAdmin 4 v3.0, until the issue is actually fixed, here's a quick command-line hack based on this:

    cat > quickfix.txt <<THE_END

    class ReverseProxied(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            script_name = environ.get("HTTP_X_SCRIPT_NAME", "")
            if script_name:
                environ["SCRIPT_NAME"] = script_name
                path_info = environ["PATH_INFO"]
                if path_info.startswith(script_name):
                    environ["PATH_INFO"] = path_info[len(script_name):]
            scheme = environ.get("HTTP_X_SCHEME", "")
            if scheme:
                environ["wsgi.url_scheme"] = scheme
            return self.app(environ, start_response)

    app.wsgi_app = ReverseProxied(app.wsgi_app)
    THE_END

    sudo sed -i '/app = create_app()/r quickfix.txt' /usr/local/lib/python3.5/dist-packages/pgadmin4/pgAdmin4.py
    rm quickfix.txt

The commands above insert a piece of code into the file /usr/local/lib/python3.5/dist-packages/pgadmin4/pgAdmin4.py, right after the line app = create_app(). Also, make sure the path to pgAdmin4.py on your system is correct; you may need to adjust the snippet above. Then configure nginx as follows:

    location /pgadmin-web/ {
        proxy_pass http://127.0.0.1:5050/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Script-Name /pgadmin-web;
    }

For reference, also have a look at pgAdmin4.py on GitHub.
I've got some trouble: pgAdmin works perfectly behind nginx at location /, but it won't work behind location /pgadmin. Works great:

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:5050;
    }

Won't work:

    location /pgadmin {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:5050;
    }

Maybe I need some specific rewrite?
pgadmin4 won't work in a specific location behind nginx
Use the include directive for such factorization. Create a file in the nginx config folder like /etc/nginx/conf.d/location_php.cnf (not .conf, to avoid auto-loading by nginx):

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm-client2.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

and then include it in the server blocks:

    server {
        listen 443;
        server_name client1.localhost.eu;
        ssl on;
        ssl_certificate ...;
        ssl_certificate_key ...;
        root /var/www/client1;
        include /etc/nginx/conf.d/location_php.cnf;
        # OR use relative path to nginx config root:
        # include conf.d/location_php.cnf;
    }
I have to configure multiple HTTPS websites, with a dedicated certificate for each website. It works fine like this:

    server {
        listen 443;
        server_name client1.localhost.eu;
        ssl on;
        ssl_certificate ...;
        ssl_certificate_key ...;
        root /var/www/client1;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm-client1.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }

    server {
        listen 443;
        server_name client2.localhost.eu;
        ssl on;
        ssl_certificate ...;
        ssl_certificate_key ...;
        root /var/www/client2;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm-client2.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }

Now I would like to factor out the "location" block, because it is always the same. Is that possible? (I have also tried to have only one server block, but it's not possible to put a variable in the ssl attributes.) Thanks a lot for your help. Eric
The same "location" rule for multiple "server" blocks
Nginx is a reverse proxy server: its role on your server is to accept HTTP requests and proxy them to another process on the same host. The "upstream" that the error message is talking about refers to the bit of nginx's configuration (part of which is the /etc/nginx/sites-available/default file) that tells it where to send incoming requests. The error message you're seeing indicates that nginx received a request but couldn't send it to the other process it was supposed to. When your server rebooted, the nginx process started back up, but your Rails process, the one that's meant to be listening on port 3001, did not! How you restart the Rails process depends on the way you started it before and the way your server is configured. It may be as simple as cd'ing into your Rails application's directory on the server and running:

    rails server -b 127.0.0.1 -p 3001 -e production -d

... but, to prevent problems like this from happening in the future (and to improve the performance of your Rails app!), it would be better to use some kind of production-ready Rails application server. I would recommend using Phusion Passenger because it's the most turn-key solution (their user's guide for nginx describes installation and configuration), but there are plenty of alternatives. There's a great writeup of what your options are, what they all mean, and how they relate in the top answer of this StackOverflow question.
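For orientation, the proxy part of /etc/nginx/sites-available/default usually looks something like this sketch (your actual file may differ):

    server {
        listen 80;

        location / {
            # this is the "upstream" from the error message; the error means
            # nothing was listening on 127.0.0.1:3001 when the request arrived
            proxy_pass http://127.0.0.1:3001;
            proxy_set_header Host $host;
        }
    }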
I am hosting my Rails app on Rackspace with the nginx web server. When calling any Rails API, I see this message in /var/log/nginx/error.log:

    *49 connect() failed (111: Connection refused) while connecting to upstream, client: 10.189.254.5, server: , request: "POST /api/v1/users/sign_in HTTP/1.1", upstream: "http://127.0.0.1:3001/api/v1/users/sign_in", host: "anthemapp.com"

What is the upstream block? What is /etc/nginx/sites-available/default? Is this where I can configure this? Why am I receiving the error above? I spent several hours with 5-6 different Rackspace tech people (they didn't know how to resolve this). This all started when I took the server into rescue mode and followed the steps here: https://community.rackspace.com/products/f/25/t/69. Once I came out of rescue mode and rebooted the server, I started receiving the error I am writing about. Tnx!
connect() failed (111: Connection refused) while connecting to upstream
The domain django.pommesky.com doesn't look like it's alive, so it's possible that Nginx is receiving requests with a wrong Host: field in the HTTP request header (sect. 14.23), and so Nginx serves a default catch-all page. You can disable the default Nginx site by removing the /etc/nginx/sites-enabled/default link and then restarting the daemon:

    sudo rm -v /etc/nginx/sites-enabled/default
    sudo service nginx restart

You can re-enable it by recreating the link:

    sudo ln -sf /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
    sudo service nginx restart

The other thing you can try is to set up Bind or another DNS daemon to serve a fake pommesky.com zone with all the subdomains you want, and use that DNS while you're developing your site. Of course you can also register that domain with a hosting provider, and then use the DNS zone editor in its control panel to set up your subdomains and all the PTRs you want to whatever public IP addresses you need.
I'm a complete beginner in Python and Django. However, I'm trying to create a server to deploy my application, but when I want to access my app I always get the default nginx page "Welcome to nginx". The server is on Ubuntu 12.04 (precise). I've installed the nginx, python, django and uwsgi packages with apt. Next I created a django project at /var/www/djangoApps and a django app at /var/www/djangoApps/testApp. This is my /etc/nginx/sites-available/djangoApps:

    server {
        listen 80
        server_name django.pommesky.com;
        rewrite ^(.*) http://www.django.pommesky.com/$1 permanent;
    }

    server {
        listen 80;
        server_name www.django.pommesky.com;
        access_log /var/log/nginx/djangoApps_access.log;
        error_log /var/log/nginx/djangoApps_error.log;

        location /media {
            alias /var/www/djangoApps/media/;
        }

        location /static {
            alias /var/www/djangoApps/static/;
        }

        location / {
            uwsgi_pass unix:///run/uwsgi/app/djangoApps/socket;
            include uwsgi_params;
        }
    }

And this is my /etc/uwsgi/apps-available/djangoApps.ini:

    env = DJANGO_SETTINGS_MODULE=djangoApps.settings
    module = django.core.handlers.wsgi:WSGIHandler()
    chdir = /var/www/djangoApps
    socket = /run/uwsgi/djangoApps/socket
    logto = /var/log/uwsgi/djangoApps.log

The uwsgi log doesn't show anything wrong; everything seems to run well, finishing with "spawned uWSGI worker ...". But /var/log/nginx/djangoApps_access.log and /var/log/nginx/djangoApps_error.log don't exist, which is very strange. I can't figure out what's wrong with my configuration. Please help me ...
Django + uwsgi + nginx redirect to default page "Welcome to NGINX"
I see NGINX has a ticket for this that has been closed, but the solution did not work for me. I did, however, get NGINX up and running again with Passenger by running a customized installation. It's obviously a compatibility issue with nginx versions 1.2 and up. First I just pulled down the NGINX source (1.0.15) into /usr/local:

    wget http://www.nginx.org/download/nginx-1.0.15.tar.gz

Untar it:

    tar -xvzf nginx-1.0.15.tar.gz

Then run the Passenger installation, choosing option 2 (Customized Installation):

    sudo passenger-install-nginx-module

It prompts for where the source is (/usr/local/nginx-1.0.15) and where you want it installed (/usr/local/nginx in my case). Everything worked fine from there. If anyone knows of a real fix for nginx 1.2, please let me know.
At one point I had everything running fine on my system with NGINX, Rails, and Passenger. Yesterday I did a fresh install of Passenger, and now passenger-install-nginx-module fails:

    /.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/passenger-3.0.13/ext/nginx/../common/libpassenger_common.a /.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/passenger-3.0.13/ext/nginx/../common/libboost_oxt.a -lstdc++ -lpthread -lm -lpcre -lssl -lcrypto -lz
    Undefined symbols for architecture x86_64:
      "_pcre_free_study", referenced from:
          _ngx_pcre_free_studies in ngx_regex.o
    ld: symbol(s) not found for architecture x86_64
    collect2: ld returned 1 exit status
    make[1]: *** [objs/nginx] Error 1
    make: *** [build] Error 2

This exact problem was posted yesterday on ServerFault, but I think it's more likely to be answered here (I apologize if that is a problem): https://serverfault.com/questions/399304/cannot-install-phusion-passenger-3-0-13-with-nginx-1-2-1 Thanks for any help.
Passenger NGINX module Failing
Phusion Passenger v4+ enables reading environment variables directly from the bashrc file. Make sure that the bashrc lives in the home folder of the user under which the Passenger process is executed (in my case it was ubuntu, for EC2 Linux and nginx). Here is the documentation, which goes into the details of bashrc.
We are running Ubuntu servers with Nginx + Phusion Passenger for our Rails 3.0.x apps. I have an environment variable set in /etc/environment on the test machines:

    MC_TEST=true

If I run a console (bundle exec rails c) and output ENV["MC_TEST"] I see 'true'. But if I put that same code on a page ( <%= ENV["MC_TEST"] %> ) it doesn't see anything; that variable does not exist. Which leads me to two questions: 1 - What is the proper way to get environment variables into Passenger with nginx (not Apache SetEnv)? 2 - Why does Passenger not have a proper environment?
phusion passenger not seeing environment variables?
In the end I found this excellent article which covers how to display a notification when an update is available. The user is then able to click the notification which updates the app.https://dev.to/drbragg/handling-service-worker-updates-in-your-vue-pwa-1pip
I have a Vue app that is used both in the browser and as a PWA. I would like to ensure users receive the latest version whenever updates have been pushed to the server. I am using Nginx, Django and vue-cli along with @vue/cli-plugin-pwa. Currently when I npm run build and then push the new version to the server, users get the old version of the app (in the browser as well as the PWA on their phones). To get the new version they do a hard refresh in the browser, or for the PWA they close the app and reopen it again. Is there a way to ensure a version check is done every time the app is loaded, so that the new version is retrieved?
Automatically update Vue site / PWA with new release
Redirection is not involved in your problem. The ingress controller is listening on both ports, 80 and 443. When you configure an ingress with only port 80, reaching port 443 redirects you to the default backend, which is expected behaviour. A solution is to add another nginx controller that will only listen on port 80. You can then configure your ingresses with kubernetes.io/ingress.class: myingress. When creating the new nginx controller, change the --ingress-class=myingress command of the daemonset; it will then handle only ingresses annotated with this class. If you use helm to deploy it, simply override the controller.ingressClass value.
I'm using the kubernetes ingress-nginx and this is my Ingress spec. http://example.com works fine as expected, but when I go to https://example.com it still works, pointing to the default backend with the Fake Ingress Controller certificate. How can I disable this behaviour? I want to disable listening on https at all for this particular ingress, since there is no TLS configured.

    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata:
      name: http-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - backend:
              serviceName: my-deployment
              servicePort: 80

I've tried the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation; however, this has no effect.
Kubernetes ingress-nginx - How can I disable listening on https if no TLS configured?
For RTSP<->WebRTC / RTMP<->WebRTC conversions, you need to run some kind of WebRTC gateway / media server software that works with all these formats/protocols and can transmux between all of them. Try Wowza / Unreal Media Server / Flashphoner: https://en.wikipedia.org/wiki/Comparison_of_streaming_media_systems So in your case you want to publish the screen from the browser to the media server via WebRTC (the H264 codec is a must) and then pull an RTMP stream from the media server to the nginx server with the nginx-rtmp module. Note that the opposite is possible too: you could push a stream to the media server via RTMP (for example, an OBS screen capture) and then send this stream from the media server to web browser(s) via WebRTC. The main issue in these conversions is codec compatibility: H264 must be used for video, but if you need audio then you will have to do Opus to AAC transcoding.
I am trying to build a service that streams your screen from a browser to clients (something like twitch). So far I have built a working nginx server with rtmp; I tested it using OBS and it works pretty well. My question is: how do I stream a screen from a browser (not from OBS or another broadcaster) using WebRTC to an nginx server with RTMP?
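On the nginx side, pulling the converted stream from the media server can be expressed with the nginx-rtmp relay directives; a sketch, where the application name and media server URL are hypothetical:

    rtmp {
        server {
            listen 1935;

            application live {
                live on;
                # pull the H264 RTMP stream produced by the media server
                pull rtmp://media-server.example:1935/live;
            }
        }
    }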
How to use WebRTC to stream video to RTMP?
Here is your answer: back-references to condition patterns are identified by {C:N} where N is from 0 to 9. Back-references to rule patterns are identified by {R:N} where N is from 0 to 9. Note that for both types of back-references, {R:0} and {C:0} will contain the matched string. For more detailed info you can have a look at: IIS URL Rewrite {R:N} clarification. Further info you can find at: https://learn.microsoft.com/en-us/iis/extensions/url-rewrite-module/url-rewrite-module-configuration-reference#using-back-references-in-rewrite-rules
At my job I was asked to migrate a PHP application running on IIS/Azure to running via nginx and FPM on a GNU+Linux machine. In the process I encountered a file named web.config containing entries, for example: ... or ... So far I thought that an nginx mapping like this:

    ^some_regex^ index.php?someaction=$1;

would do the job for both actions (somehow). But I still cannot understand the difference between {C:1} and {R:1} in regex matches. I understand that in my nginx config it would be something like $1 (a sub-regex match), but what is the difference between the {C:1} and {R:1} entries in web.config? I am asking because I may need to change nginx's sub-regex matches a bit depending on whether the match is {C:1} or {R:1}.
Difference between {R:1} and {C:1} on iis web.config file
The above setup works fine. My issue was with the DNS records: I added an A record directing dev.domain.com to the IP address of the server I'm running the node apps on.
Issue: I'm trying to set up nginx so I can have my domain, domain.com, run by a node web app on port 3000, and the subdomain dev.domain.com run by a second node web app on port 3001. With this configuration domain.com is connected to the right port, but dev.domain.com just gives a page that says the server can't be reached. Edit: If I go to IP_ADDRESS:3000 I get the same content as domain.com, but if I go to IP_ADDRESS:3001 I get what should be at dev.domain.com. Based on this it seems like the apps are running fine on the right ports, and I'm just not routing the subdomain correctly. Code: I edited /etc/nginx/sites-available/default directly so it has:

    server {
        listen 80 default_server;
        server_name domain domain.com www.domain.com;

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }

    server {
        listen 80;
        server_name dev.domain dev.domain.com www.dev.domain.com;

        location / {
            proxy_pass http://127.0.0.1:3001;
        }
    }

Other than that file everything else is a fresh install. My logic: I'm very new to nginx, but it seems like any requests for domain.com would get sent to port 3000, and requests for dev.domain.com would go to 3001. Any help or critique of what I've done so far would be greatly appreciated!
Configure nginx for two node apps, with one on a subdomain
If anyone is having this problem: I solved it by mounting the folders into the docker container. I mounted both /etc/letsencrypt and /etc/ssl into docker. Docker has the -v flag to mount volumes; don't forget to open port 443 for the container. Depending on how you mount things, it's possible to enable https in the docker container without changing the nginx paths:

    docker run -d -p 80:80 -p 443:443 -v /etc/letsencrypt/:/etc/letsencrypt/ -v /etc/ssl/:/etc/ssl/
I know how to configure Let's Encrypt for nginx. I'm having a hard time configuring Let's Encrypt with nginx inside a docker image. The Let's Encrypt certificates are symlinked in the /etc/letsencrypt/live folder, and I don't have permission to view the real certificate files inside /etc/letsencrypt/archive. Can someone suggest a way out?
How to configure Let's encrypt certificates for nginx inside a docker image?
You should be able to use the $host variable instead:

    server {
        listen 80;
        server_name api.example.com beta.example.com apibeta.example.com nodebeta.example.com app.example.com;
        return 301 https://$host$request_uri;
    }
I have this code. I just want each of the server_names in the list to redirect to its own name over https. But if I go to http://beta.example.com, it redirects to https://api.example.com (or whatever the first item in the list is):

    server {
        listen 80;
        server_name api.example.com beta.example.com apibeta.example.com nodebeta.example.com app.example.com;
        return 301 https://$server_name$request_uri;
    }
nginx redirect multiple servers to SSL
As the Apache regex has '^', we can put '^' to force matching from the start of the path too:

    location ~ ^/(xampp|security|phpmyadmin|licenses|webalizer|server-status|server-info) {
        proxy_pass http://127.0.0.1:8080$request_uri;
        .... allow/deny directives come here
    }

[EDIT] The string matched inside the brackets is stored in $1, so you may try http://127.0.0.1:8080/$1 if that's what you want. However, my understanding is that you want to pass the entire URI path to the Apache server; in that case, it's simpler to use the nginx variable $request_uri.
I am using Nginx as a reverse proxy for my Apache installation, and as a security feature Apache blocks access to phpmyadmin, webalizer etc. for everyone except localhost. But behind nginx, Apache thinks every request comes from localhost, so it displays these publicly for everyone:

    Order deny,allow
    Deny from all
    Allow from ::1 127.0.0.0/8 \
        fc00::/7 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 \
        fe80::/10 169.254.0.0/16
    ErrorDocument 403 /

I need to turn the above rules' pattern matching regex into the following:

    location /phpmyadmin {
        proxy_pass http://127.0.0.1:8080/phpmyadmin;
        allow 127.0.0.1;
        deny all;
    }

Help from anyone who is familiar with regex in Nginx would be much appreciated. The following method works but breaks normal search-engine-friendly site URLs such as domain.com/forums/server-info:

    location ~ /(xampp|security|phpmyadmin|licenses|webalizer|server-status|server-info) {
        deny all;
    }
Nginx Block/Deny Access to multiple locations regex
In my capistrano deploy.rb I have:

    desc "Zero-downtime restart of Unicorn"
    task :restart, :except => { :no_release => true } do
      run "kill -s USR2 unicorn_pid"
    end

This is well documented in "Lightning fast, zero-downtime deployments with git, capistrano, nginx and Unicorn".
I have a Ruby on Rails project, deployed with Unicorn on nginx on an Ubuntu server. I need to restart Unicorn if I change one of the configuration files, but doing so takes my site down while I kill Unicorn's master process and start it again with bundle exec. Is there any way to make Unicorn pick up the new files without killing the process and going down?
How do I reload Unicorn without killing the master process?
The probable reason why your proxy.conf is not being used is that you are using a current version of EB, which runs on Amazon Linux 2 (AL2). However, the proxy settings you are trying to use are for the old version of EB running on AL1. For AL2, the nginx settings should be placed in the .platform folder, not in .ebextensions, as shown in the docs. Thus you can try the following: a file .platform/nginx/conf.d/myconfig.conf with the content

    client_max_body_size 25M;

Please note that I can't verify the nginx setting itself. It still may not work, as it may be the wrong setting or have the wrong form. But the use of .ebextensions instead of .platform is definitely an issue on AL2 EB environments.
I have looked at all the possible Stack Overflow posts and have tried all the different approaches; none worked. There seems to be no official documentation on this either. Everything works fine in my local app and I can upload images of any size, but as soon as it's deployed to my Elastic Beanstalk I seem to have a limit of 1M per image upload. The problem: every time a user posts an image that is larger than 1MB, I receive the 413 error message from nginx. Elastic Beanstalk log:

    2020/07/28 17:22:53 [error] 10404#0: *62 client intended to send too large body: 2800500 bytes, client: 172.31.18.162, server: , request: "POST /comment/image_post/11003031 HTTP/1.1", host: "myapp.com", referrer: "https://myapp.com/11003031"

What I did to try to solve the problem: I created a .ebextensions folder in my Node.js application root folder, added the code below, called it proxi.config, and pushed it to my GitHub, which deploys to Elastic Beanstalk via a pipeline. I can see proxi.config in my repository, but for some reason it is automatically overwritten by the load balancer (I guess, from what I have been reading). proxi.config:

    container_commands:
      01_reload_nginx:
        command: "service nginx reload"

    files:
      "/etc/nginx/conf.d/proxy.conf":
        mode: "000755"
        owner: root
        group: root
        content: |
          client_max_body_size 25M;

If this is complicated to solve, is there no other way to increase the 1M limit?
413 Request Entity Too Large - Elastic Beanstalk + Load Balancer + Node.js application
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here. Different kinds of reverse proxies support ESI (Edge Side Includes) technology, which allows a developer to replace different parts of the response body with the content of static files or with response bodies from upstream servers. Nginx has such a technology as well; it is called SSI (Server Side Includes).

    location /file {
        ssi on;
        proxy_pass http://go.example.com;
    }

Your upstream server can produce a body containing an SSI include directive, and nginx will replace this in-body directive with the content of the file. But you mentioned streaming... It means that files will be of arbitrary sizes, and building the response with SSI would certainly eat precious RAM resources, so we need a Plan #B. There is a "good enough" method to feed big files to clients without showing the static location of the file to the client. You can use nginx's error handler to serve static files based on information supplied by the upstream server. The upstream server can, for example, send back a 302 redirect with a Location header field containing the real path to the file. This response does not reach the client and is fed into the error handler. Here is an example config:

    location /file {
        error_page 302 = @service_static_file;
        proxy_intercept_errors on;
        proxy_set_header Host $host;
        proxy_pass http://go.example.com;
    }

    location @service_static_file {
        root /hidden-files;
        try_files $upstream_http_location 404.html;
    }

With this method you will be able to serve files without overloading your system, while keeping control over whom you give the file to. For this to work, your upstream server should respond with status 302 and a typical "Location:" field, and nginx will use the Location content to find the file in the "new" root for static files. The reason this method is only "good enough" (instead of perfect) is that it does not support partial requests (i.e. Range: bytes ...).
I have two servers: NGINX (it exchanges a file id for a file path) and Golang (it accepts a file id and returns its path). Example: when a browser client makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server https://go.example.com/getpath?file_id=123, which will return this response to NGINX:

    {
        data: {
            filePath: "/static/..."
        },
        status: "ok"
    }

Then NGINX should take the value of filePath and return the file from that location. So the question is: how do I read the response (get filePath) in NGINX?
NGINX read body from proxy_pass response
You're looking for SSL pass-through. You'll set up your nginx to use TCP load balancing (even if you only have one server it's still thought of as load balancing) and SSL pass-through. Note that nginx will be unable to access any of the content, and that you will lose almost all of the advantages of using a proxy other than the ability to do load balancing. See these instructions for a specific configuration example.
I want to use Nginx to expose my NodeJS server listening on port 443. I don't want to manage the SSL certificate with Nginx; I would rather do that on the NodeJS server using the SNICallback option of https.createServer. How do I set up the nginx.conf to support this?
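A minimal sketch of such a pass-through, assuming nginx is built with the stream module and the Node server does its own TLS on port 8443:

    stream {
        server {
            listen 443;
            # raw TCP forwarding: the TLS handshake (including SNI) happens
            # end-to-end with the Node server, so nginx needs no certificate
            proxy_pass 127.0.0.1:8443;
        }
    }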
Forward HTTPS traffic thru Nginx without SSL certificate
The configuration you've written is correct. I'd give one caveat (assuming your config is otherwise standard): it will only output the X-Robots-Tag when the result code is 200, 201, 204, 206, 301, 302, 303, 304, or 307 (e.g. the content matches a disk file, a redirect is issued, etc.). So if you have an /archive/index.html, a hit to http://yoursite.com/archive/ will give the header. If the index.html does not exist (404), you won't see the tag. The always parameter will output the header for all response codes, assuming the location block is processed:

    location ~ .*/(?:archive|filter|topic)/.* {
        add_header X-Robots-Tag "noindex, follow" always;
    }

Another option will guarantee the header is output on a URI match. This is useful when there's a chance that a location block may not get processed (due to short-circuiting, e.g. with a return, or a last on a rewrite, etc.):

    http {
        ...
        map $request_uri $robot_header {
            default "";
            ~.*/(?:archive|filter|topic)/.* "noindex, follow";
        }

        server {
            ...
            add_header X-Robots-Tag $robot_header;
            ...
        }
I'm using the following Nginx configuration to prevent the indexing of content in some of my folders, using the x-robots tag:

    location ~ .*/(?:archive|filter|topic)/.* {
        add_header X-Robots-Tag "noindex, follow";
    }

The content remains indexed, but I can't debug the Nginx configuration. My questions: is the configuration I use correct, and should I just wait till googlebot re-crawls and de-indexes the content? Or is my configuration wrong?
Correct nginx configuration to prevent indexing of some folders
Create a log file for the upstream, to check which server a request is going to:

    http {
        log_format upstreamlog '$server_name to: $upstream_addr {$request} '
            'upstream_response_time $upstream_response_time'
            ' request_time $request_time';

        upstream backend {
            # ip_hash;
            server 1.2.3.4;
            server 5.6.7.8;
        }

        server {
            listen 80;
            access_log /var/log/nginx/nginx-access.log upstreamlog;

            location / {
                proxy_pass http://backend;
            }
        }
    }

and then check your log file:

    sudo cat /var/log/nginx/nginx-access.log

You will see log lines like:

    to: 5.6.7.8:80 {GET /sites/default/files/abc.png HTTP/1.1} upstream_response_time 0.171 request_time 0.171
I've set up nginx configuration for redirection and it works successfully, but now I want load balancing. For that I already created load-balancer.conf and added the server names like this:

    upstream backend {
        # ip_hash;
        server 1.2.3.4;
        server 5.6.7.8;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }

I did the same configuration in both instances, and by default it uses the round-robin algorithm, so requests are passed from one server to the other... but I couldn't verify that this was happening. Can anyone suggest how to check that a second request goes to the other server, 5.6.7.8, so I can confirm the load balancing? Thank you so much.
How to test load balancing in nginx?
From your logs:

    upstream: "http://127.0.0.1:5000/"

I see nginx is trying to connect to port 5000 on the same machine, and the connections are being refused. What is running on port 5000? You may need to look into that.
I'm trying to deploy my first app (back-end), but I'm getting a 502 Bad Gateway error:

    2016/05/03 14:46:14 [error] 2247#0: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.43.183, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "myHost.eu-west-1.elasticbeanstalk.com"
    2016/05/03 14:50:23 [error] 2566#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.8.36, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "myHost.eu-west-1.elasticbeanstalk.com"
    2016/05/03 14:55:04 [error] 2566#0: *61 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.43.183, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "myHost.eu-west-1.elasticbeanstalk.com"

I use the SparkJava framework for my back end, which launches on port 4567. Thus I extended the configuration of Nginx (nginx/1.8.1), but the problem still persists:

    server {
        listen 4567 default_server;
        listen [::]:4567 default_server ipv6only=on;
    }

For information: my back end communicates with a database (RDS aws amazon).
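If the app really listens on 4567, the nginx upstream needs to point there instead; a sketch (where exactly this lives depends on the Elastic Beanstalk nginx layout):

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:4567;  # must match the port Spark binds to
        }
    }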
connect() failed (111: Connection refused) while connecting to upstream. Java (SparkJava) amazon Elastic
You don't want nginx.conf in the project root, and it's not necessary. Also, you don't want direct changes to nginx.conf; instead you want site-specific files in /etc/nginx/sites-available, which you enable with a symlink (ln -s) in /etc/nginx/sites-enabled. As for the config:

    server {
        root /var/www/mysite/; # or wherever your site files are
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }

You are missing the root directive, which tells nginx where the site is located.
I'm writing an AngularJS single-page application served by nginx. I just switched from Apache to nginx, but I can't make my config file work. I'm trying to rewrite everything to index.html to let Angular do the routing. My nginx.conf is as follows:

    server {
        index index.html;

        location / {
            expires -1;
            add_header Pragma "no-cache";
            add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
            try_files $uri $uri/ /index.html;
        }
    }

Is it possible to have my nginx.conf file in the project root like my .htaccess did?
Nginx single application config
    location ~ ^(/[^/]+)(/.+)$ {
        root ...;
        if (!-d "$document_root$1") {
            return 404;
        }
        try_files $1$2 /default$2 =404;
    }
Given this folder structure:

    root folder
    + default
      + settings1.txt
      + settings2.txt
      ...
      + settingsN.txt
    + user00001
      + settings1.txt
      ...
    ...
    + userN
      + settings1.txt
      ...

and these example URLs:

    domain.com/user00009/settings1.txt
    domain.com/xavi/somefile.txt

I would like to write a rule that lets me do:

    folder exists ? check file : 404
    file exists ? serve user's file : serve default file

I tried using try_files, but I think I can only use $uri, which gives me the whole URL. It would be great if I could work with the slugs ($1 = user00009 and $2 = settings1.txt); then maybe I could put:

    location / {
        root /...
        try_files $1/$2 default/$2 =404;
    }

Any idea? Note: I know I could serve the files from outside nginx (in this case django), but I'm trying to speed things up.
Nginx try_files (folders + files) fallback
An HTTP 426 error means upgrade required: "The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol." In your situation, you need to check what version of the HTTP protocol you are using; it seems too low. Look at this thread; in that case, the fix was to upgrade from 1.0 to 1.1. You need to upgrade the HTTP protocol version in your NGINX config likewise. Quoting that thread: "This route is for a legacy API, which enabled the NGINX cache for performance reasons, but this route's proxy config missed a shared config, proxy_http_version 1.1, and so defaulted to HTTP 1.0 for all NGINX upstreams. And Envoy will return HTTP 426 if the request is HTTP 1.0."
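Applied to the configuration in the question, the fix would look something like this sketch:

    server {
        server_name dashboard.example.com;

        location / {
            proxy_http_version 1.1;  # nginx proxies with HTTP/1.0 by default,
                                     # which Envoy rejects with 426
            proxy_pass http://52.66.195.124:30408;
        }
    }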
When I access an Istio gateway NodePort from the Nginx server using curl, I get a proper response, like below:

    curl -v "http://52.66.195.124:30408/status/200"
    *   Trying 52.66.195.124:30408...
    * Connected to 52.66.195.124 (52.66.195.124) port 30408 (#0)
    > GET /status/200 HTTP/1.1
    > Host: 52.66.195.124:30408
    > User-Agent: curl/7.76.1
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < server: istio-envoy
    < date: Sat, 18 Sep 2021 04:33:35 GMT
    < content-type: text/html; charset=utf-8
    < access-control-allow-origin: *
    < access-control-allow-credentials: true
    < content-length: 0
    < x-envoy-upstream-service-time: 2
    <
    * Connection #0 to host 52.66.195.124 left intact

But when I configure the same thing through an Nginx proxy like below, I get HTTP ERROR 426 through the domain. Note: my domain is HTTPS, https://dashboard.example.com

    server {
        server_name dashboard.example.com;

        location / {
            proxy_pass http://52.66.195.124:30408;
        }
    }

Can anyone help me understand the issue?
Nginx returns 426
You are missing arootfor one of yourlocationblocks, but as they all share the sameroot, it should be moved to theservercontext anyway.You do not need both thetry_filesand theif...rewrite. The same functionality can be achieved usingtry_filesalone.The lastlocationblock is unnecessary as it uses the samerootaslocation /.server { listen 80; server_name sibdomain.domain.com; root /var/www/multiapp/dist; location ~* \.(?:css|js|map|jpe?g|gif|png)$ { } location / { index index.html index.htm; try_files $uri $uri/ /index.html?path=$uri&$args; } error_page 500 502 503 504 /50x.html; }Seethis documentfor more.
I am trying to serve a deployed Angular site (I have a dist directory with the index.html file) with Nginx. In that directory I have: index.html, JS files, CSS files, and assets. I don't have experience with Nginx, so I am trying to serve this site correctly. My configuration for the site is:

server {
    listen 80;
    server_name sibdomain.domain.com;
    location ~* \.(?:css|js|map|jpe?g|gif|png)$ { }
    location / {
        root /var/www/multiapp/dist;
        index index.html index.htm;
        try_files $uri $uri/ =404;
        if (!-e $request_filename){
            rewrite ^(.*)$ /index.html?path=$1 break;
        }
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/multiapp/dist;
    }
}

Right now the only file that loads is the index; the CSS and JS files return 404. In Apache this site was working correctly, where I had this rule:

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# do not rewrite css, js and images
RewriteCond %{REQUEST_URI} !\.(?:css|js|map|jpe?g|gif|png)$ [NC]
RewriteRule ^(.*)$ index.html?path=$1 [NC,L,QSA]

I used this site and it gave me this translation:

# nginx configuration
location ~* \.(?:css|js|map|jpe?g|gif|png)$ { }
location / {
    if (!-e $request_filename){
        rewrite ^(.*)$ /index.html?path=$1 break;
    }
}

I checked the Nginx logs and noticed that it is trying to load those files from another path; the error is:

2017/12/13 22:47:55 [error] 14096#0: *50 open() "/usr/share/nginx/html/polyfills.22eec140b7d323c7e5ab.bundle.js" failed (2: No such file or directory), client: xx.xxx.xxx.xx, server: subdomain.domain.com, request: "GET /styles.15b17d579c8773c5fdbd.bundle.css HTTP/1.1"

So, why does it try to load from /usr/share/nginx/html/ and not the root path that I configured? What should I modify in order to load the CSS and JS files? Thanks.
Load CSS and JS files with Nginx
After some digging, I think our problem was that our configuration did not have max_requests set, so the children were never recycled. We did have process_idle_timeout set, but some scripts running on cron were keeping the processes alive. So, for everybody else:

; number of requests a child handles before the process is recycled
pm.max_requests = 500

; max time alive as an idle process
pm.process_idle_timeout = 10s
Connect failed: php_network_getaddresses: getaddrinfo failed: System error

The "System error" part really throws me off. I've been battling this error for a few months; it is very sporadic. It appears to be coming from my database connector. Restarting php-fpm seems to alleviate the issue for ~24 hours until it starts acting up again. I had thought it was maybe hitting max children with php-fpm, but after checking php-fpm status, it is not. I've tried to correlate the error with the syslog and the nginx error log for the application, and I'm running out of ideas. Any ideas on how to troubleshoot this issue?
Connect failed: php_network_getaddresses: getaddrinfo failed: System error
You need to turn off the nginx proxy buffering:

location /delay {
    proxy_pass http://127.0.0.1:8080;
    proxy_buffering off;
}

and reload the config: nginx -s reload
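As a per-route alternative (not mentioned in the original answer), nginx also honors an X-Accel-Buffering response header, so the Flask app itself can disable buffering for just the streaming endpoint; a minimal sketch based on the route from the accompanying question:

import json
from time import sleep
from flask import Flask, Response

app = Flask(__name__)

@app.route('/delay')
def delay():
    def delay_inner():
        for i in range(10):
            sleep(5)
            yield json.dumps({'delay': i})
    resp = Response(delay_inner(), mimetype="text/event-stream")
    # tell nginx not to buffer this particular response
    resp.headers['X-Accel-Buffering'] = 'no'
    return resp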
I am using gunicorn and Flask for a web service. I am trying to get my head around running a streaming route (not sure if that is the correct terminology). My route looks like this:

@app.route('/delay')
def delay():
    from time import sleep
    def delay_inner():
        for i in range(10):
            sleep(5)
            yield json.dumps({'delay': i})
    return Response(delay_inner(), mimetype="text/event-stream")

I expect the server to yield the output each time delay_inner yields. But what I am getting is all the JSON responses at once, and only when delay_inner finishes execution. What am I missing here?

--EDIT-- I have fixed the issue for Flask and gunicorn: it runs as expected when I use the Flask dev server or go straight to the gunicorn port, and it streams the data as expected. However, and I should have mentioned this in the original post, I am also running behind nginx, and that is not set up correctly to stream. Can anyone help with that?
Streaming server issue with Gunicorn, Flask and Nginx
As you mentioned in the comments, your application runs behind a webserver, in this case Nginx. You are using something like linkTo(methodOn(MyController.class).myMethod(name)).withSelfRel()) to generate links. In this case, take a look at ControllerLinkBuilder. As you can see in line 190, Spring HATEOAS builds a link based on the current request. In addition, the request headers X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-Ssl are queried and used if available. That is what you missed configuring in order to build proper links with Spring HATEOAS.

Since you complain that only https is missing in your links, Nginx already sets X-Forwarded-For but skips X-Forwarded-Proto. I assume that Nginx and your application communicate over http; otherwise you wouldn't have this trouble. You can ignore X-Forwarded-Ssl: it is only relevant if Nginx and your application talk over https, and in that case you wouldn't see the issue either.

Below you find a complete Nginx location block for reference. X-Forwarded-Proto has been set to https in order to inform the proxied system that links have to contain https in any URLs (only if the backend system processes the aforementioned request header).

location /yourapp {
    proxy_pass http://localhost:8080/yourapp;
    proxy_redirect default;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
}

For further reading, consult the Nginx documentation for the http_proxy_module.
I am using Spring HATEOAS in my web application. My application runs behind an Nginx webserver. I am sending the following request over HTTPS:

GET https://national.usa.com/testapp-rest/api/user/654rtrtet-5grt-fgsdf-dfgs-765ytrtsdhshfgsh/newAuthentication
Status Code: 200 OK

Response headers:

Access-Control-Allow-Headers: x-requested-with, Accept, Content-Type, Origin, Authorization, X-Auth-Token
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Auth-Token
Access-Control-Max-Age: 3600
Cache-Control: no-cache, no-store, must-revalidate
Connection: keep-alive
Content-Type: application/json
Pragma: No-cache
Server: XXX/1.6.0
Strict-Transport-Security: max-age=31536000
Transfer-Encoding: chunked

Request headers:

Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate, sdch

But when I look at the response body, I see the HATEOAS links only use HTTP. How do I fix this issue? Please guide.

"links: [{rel: "self",…}]0: {rel: "self",…}href: "http://national.usa.com/testapp-rest/api/user/5435fdsg-45gfdgag-rewtdf43434-43543fsd"rel

Edit: Yes, I am using the following code to create links:

resource.add(ControllerLinkBuilder.linkTo(ControllerLinkBuilder.methodOn(TestController.class).getStudentResponse(response.getStudentId())).withSelfRel());
Spring HATEOAS links issue for HTTP and HTTPS
Nginx finds the longest matching location and processes it, but the return at the end of your server block was being processed regardless. This redirects everything except /exception/, which is passed upstream:

server {
    listen 127.0.0.1:80;
    access_log off;

    location / {
        return 301 https://localhost$request_uri;
    }

    location /exception/ {
        proxy_pass http://127.0.0.1:7080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Accel-Internal /internal-nginx-static-location;
    }
}
I would like to redirect all HTTP traffic to HTTPS with a handful of exceptions: anything with /exception/ in the URL should stay on HTTP. I tried the following, suggested by "Redirect all http to https in nginx, except one file", but it's not working. The /exception/ URLs are passed from nginx to Apache for some PHP processing in a Laravel framework, but that shouldn't matter. Any suggestions for improvement are much appreciated!

server {
    listen 127.0.0.1:80;
    location / {
        proxy_pass http://127.0.0.1:7080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Accel-Internal /internal-nginx-static-location;
        access_log off;
    }
    location /exception/ {
        # empty block, do nothing
        # I've also tried adding "break;" here
    }
    return 301 https://localhost$request_uri;
}
nginx redirect all http to https with exceptions
Every Rails site has:

meta content="authenticity_token" name="csrf-param"

Or it could have a submit button where name="commit". At least that's what I have consistently seen. Header responses are not reliable; here are three from various Rails sites:

Server: Apache/2.2.14 (Ubuntu)
Server: nginx
Server: thin 1.4.1 codename Chromeo

You know nginx and Thin are popular in the Rails community, but that's not conclusive enough to say there is Rails behind it. You would need to run a script that scrapes the site and looks for the meta tag above. BeautifulSoup is pretty good if your script is going to be in Python. The Mechanize gem is great if you are going with Ruby.
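A minimal sketch of that scraping approach in Python (assuming the requests and beautifulsoup4 packages are installed; the function name is just illustrative):

import requests
from bs4 import BeautifulSoup

def looks_like_rails(url):
    # fetch the page and look for the CSRF meta tag Rails emits by default
    html = requests.get(url, timeout=5).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "csrf-param"})
    return tag is not None and tag.get("content") == "authenticity_token"

print(looks_like_rails("http://example.com"))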
I am part of a team that manages a public facing cloud platform at my company. We have a large user base running VM's that face the internet. I would like to run an automated scan of our address space and see if anyone is running a Rails app so I can notify them to upgrade their version of Rails to avoid a critical security vulnerability that came out this week.I've noticed that in some Apache deployments, there is a Passenger Header that is useful:X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.0.3However, this is not reliable. I'm wondering if there is a reliable way to detect Rails running behind a web server either with response headers or some kind of a GET / POST that can be definitive. Thanks!
Detect if Rails is Running a Site
Use logging.StreamHandler as the logging handler: uWSGI's --daemonize option captures the process's stdout/stderr, so anything the logging module writes to the stream ends up in the uWSGI log.
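A minimal sketch of that setup (the logger name and format are just illustrative):

import logging
import sys

# stderr is redirected to /var/log/uwsgi.log by uWSGI's --daemonize option
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("this line shows up in the uWSGI log")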
I have a server running nginx + uWSGI + Python. uWSGI is running as a daemon with the flag --daemonize /var/log/uwsgi.log, which logs all application errors. I've noticed that if I use a Python print statement it will write to the log, but only on an error. The standard Python logging library doesn't seem to affect the log in any situation. How do I point the Python logging library at the uWSGI log?
How to write to log in python with nginx + uwsgi
The try_files directive automatically tries to find static files and serve them as static, prior to giving up and letting the request be served as a script.

http://nginx.org/r/try_files

Checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context. The path to a file is constructed from the file parameter according to the root and alias directives. It is possible to check a directory's existence by specifying a slash at the end of a name, e.g. "$uri/". If none of the files were found, an internal redirect to the uri specified in the last parameter is made.

Note that although you're already using try_files, it appears that your path handling isn't up to spec. As for your own answer with a temporary solution, there's nothing wrong with using a rewrite or two, but that said, it looks like you'd benefit from the alias directive.

http://nginx.org/r/alias

Defines a replacement for the specified location.

However, you've never explained why you're serving stuff out of /tmp. Note that /tmp is often automatically cleared by cron scripts; e.g., on OpenBSD, the /etc/daily script automatically finds and removes files older than about 7 days (on a daily basis, as the name suggests).

In summary, you should first figure out the appropriate mapping between the web view of the filesystem and your filesystem. Subsequently, if a prefix is found, just use a separate location for the assets, together with alias. Otherwise, figure out the paths for try_files to work as intended.
I have a problem with my Nginx configuration. I have two servers: one with Nginx and one with my web app in Symfony 3. My setup is:

location /portal/mysite/ {
    set $frontRoot /srv/data/apps/mysite-portal-stag/current/web;
    set $sfApp app.php; # Change to app.php for prod or app_dev.php for dev
    root /srv/data/apps/mysite-portal-stag/current/web;
    rewrite ^/portal/mysite/(.*)$ /$1 break;
    try_files $uri @sfFront;
}

location @sfFront {
    root /srv/data/apps/mysite-portal-stag/current/web;
    fastcgi_pass myserver:myport;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $frontRoot/$sfApp;
    fastcgi_param SCRIPT_NAME /portal/mysite/$sfApp;
}

The website works for all the PHP scripts, but all the assets (static files) come back as broken files. I don't understand enough about how Nginx works to indicate which files are static and "tell" my proxy that they aren't scripts.
Serving remote static files with symfony3
For "how nginx processes configuration files", a simple way to look at it would be:reading of configuration starts with/etc/nginx/nginx.confdirectives are read from top to bottomincludeing a file inserts it at the location of theincludesimilar to the way the C preprocessor doesa setting has a scope, such ashttp,server, orlocation, in order from wide to narrowsome settings can occur at multiple scope levelssetting a parameter at a narrower scope overrides a setting of the same parameter at a wider scopeconflicting settings within the same scope are rejectedFor detail, see the Debian wiki article on Nginx directory structure.
In my /etc/nginx/nginx.conf file I have this configuration:

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}

Now, I don't like to pollute the default nginx.conf file above, so I kept this configuration in /etc/nginx/conf.d/default.conf:

worker_processes 2;
events {
    worker_connections 2048;
}

My question is: in the above scenario, will nginx take the configuration for worker_processes and worker_connections from the default.conf file or the nginx.conf file? Also, I would like to know, in short, how nginx processes configuration files.
How does nginx pick the configuration order?
location = /login {
    default_type "text/html";
    alias /home/vagrant/own/base/assets/login.html;
}

I think the approach above is effective; there may be other answers.
I want to configure nginx to serve HTML files for viewing instead of downloading.

server {
    listen 5000;
    server_name localhost;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    #location / {
    #    root html;
    #    index index.html index.htm;
    #}
    location = / {
        root /home/vagrant/own/base/assets;
        index index.html;
    }
    location = /login {
        # root /home/vagrant/own/base/assets;
        alias /home/vagrant/own/base/assets/login.html;
    }
    location /index.html {
        root /home/vagrant/own/base/assets;
    }
}

This is my conf file. When I browse to /login, my browser offers the file for download instead of viewing it. Thanks for your help.
How to configure nginx to serve HTML files for viewing instead of downloading?
Since version 1.9.0, NGINX supports the ngx_stream_core_module; it must be enabled with the --with-stream build option. When the stream module is enabled, a TCP proxy for the SSH protocol is possible:

stream {
    upstream ssh {
        server localhost:22;
    }
    server {
        listen 80;
        proxy_pass ssh;
    }
}

https://www.nginx.com/resources/admin-guide/tcp-load-balancing/
I want to make the SSH server on port 22 available through a subdomain on port 80. I thought it should be something like this:

server {
    listen ssh.domain.tld:80;
    server_name ssh.domain.tld;
    location / {
        proxy_pass http://localhost:22;
    }
}

But it won't work. nginx accepts this and starts with this configuration, but I only get empty responses from ssh.domain.tld:80. What am I missing?
How to configure nginx to make an SSH server available via subdomain.domain.tld:80
Run echo $PATH. Does /usr/local/sbin appear? If not, try sourcing your ~/.bashrc file and see if it appears:

source ~/.bashrc

Run echo $PATH again. It should appear.
I run $ brew install nginx and get:

==> Downloading http://nginx.org/download/nginx-1.2.2.tar.gz
Already downloaded: /Library/Caches/Homebrew/nginx-1.2.2.tar.gz
==> Patching
patching file conf/nginx.conf
==> ./configure --prefix=/usr/local/Cellar/nginx/1.2.2 --with-http_ssl_module --with-pcre --with-ipv6 --with-cc-opt=-I/usr/local/include --with-ld-opt=-L/usr/local/lib --conf
==> make
==> make install
==> Caveats
In the interest of allowing you to run `nginx` without `sudo`, the default port is set to localhost:8080. If you want to host pages on your local machine to the public, you should change that to localhost:80, and run `sudo nginx`. You'll need to turn off any other web servers running port 80, of course. You can start nginx automatically on login running as your user with:
    mkdir -p ~/Library/LaunchAgents
    cp /usr/local/Cellar/nginx/1.2.2/homebrew.mxcl.nginx.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
Though note that if running as your user, the launch agent will fail if you try to use a port below 1024 (such as http's default of 80.)
Warning: /usr/local/sbin is not in your PATH
You can amend this by altering your ~/.bashrc file

I have this in my ~/.bashrc file:

export PATH=$PATH:/usr/local/sbin

When I run nginx -v or sudo nginx -t I get:

-bash: nginx: command not found

Have I not installed nginx properly?
NGINX brew install command not found
run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :host => s.hostshould actually be:run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :hosts => s.host
For the life of me I can't figure out how to make this work properly. The problem is similar to what others have had, such as: "How to do a rolling restart of a cluster of mongrels". We, however, are using Nginx/Passenger instead of Mongrel. The issue is that on a deploy, if we use this standard :restart task:

task :restart, :roles => [:app], :except => {:no_release => true} do
  run "cd #{deploy_to}/current && touch tmp/restart.txt"
end

it touches the restart.txt file across every web server, but any Passenger instances currently serving requests need to finish before the new ones are spawned, it seems. This creates a serious delay and causes our app to be unavailable for up to 2 minutes while everything is coming back up. In order to get around that, the plan is to do the following:

1. deploy code
2. go to server 1, remove it from the load balancer
3. restart nginx-passenger on server 1
4. wait 60 seconds
5. add server 1 back to the load balancer
6. go to server 2 (repeat steps 3 - 5)

To accomplish this, I attempted this (lb.txt is the file that the load balancer looks for):

task :restart, :roles => [:app], :except => {:no_release => true} do
  servers = find_servers_for_task(current_task)
  servers.map do |s|
    run "cd #{deploy_to}/current && echo '' > public/lb.txt", :host => s.host
    run %Q{rvmsudo /etc/init.d/nginx-passenger restart > /dev/null}, :host => s.host
    sleep 60
    run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :host => s.host
  end
end

This almost works; however, during the deploy it runs the loop through the servers once per server listed in the :app role. We currently have 6 app servers, so the loop runs 6 times, restarting nginx-passenger 6 times per server. I just need this loop to run through one time. I know it seems that eventually Passenger will get rolling restarts, but they do not seem to exist yet. If it helps, we are using Capistrano 2.x and Rails 3. Any help would be great. Thanks.
Nginx rolling restart of Rails app with capistrano
The reason for your error is that you're copying the nginx SSL configuration to a folder nginx does not load by default. After changing this line in the Dockerfile:

COPY default-ssl.conf /etc/nginx/sites-available/default

to this:

COPY default-ssl.conf /etc/nginx/conf.d/default-ssl.conf

I'm able to reach Nginx with https.
I want to be able to access an nginx Docker container via HTTPS at https://192.168.99.100. So far I have done the following:

Dockerfile:

FROM nginx
COPY certs/nginx-selfsigned.crt /etc/ssl/certs/
COPY certs/nginx-selfsigned.key /etc/ssl/private/
COPY default-ssl.conf /etc/nginx/sites-available/default
EXPOSE 443

I have the corresponding certificate files in the folder certs.

The default-ssl.conf:

server {
    listen 80;
    listen 443 ssl;
    server_name localhost;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

docker-compose.yaml:

version: '3'
services:
  nginx:
    image: mynamespace/nginx_pma
    container_name: nginx_pma
    build:
      context: .
    ports:
      - 443:443
      - 80:80

When I run this, I am able to access 192.168.99.100, which shows the NGINX welcome page, but I am unable to make it work on https://192.168.99.100. The host is Windows 7 with Docker Toolbox. Any suggestions?
Set up https access to nginx docker container
The command is executed inside the container: you are using a pulled fluentd container which does not have your start.sh file in it. You can either:

A. bind mount it into the container

# docker-compose.yml
fluentd:
  image: fluent/fluentd:latest
  volumes:
    - ./start.sh:/start.sh
  command: /start.sh

or B. build it into the image

# Dockerfile
FROM fluent/fluentd:latest
COPY start.sh /start.sh

# docker-compose.yml
fluentd:
  build: .
I'm following this post: http://eric-price.net/blog/centralized-logging-docker-aws-elasticsearch

This is what my docker-compose.yml looks like:

version: "2"
services:
  fluentd:
    image: fluent/fluentd:latest
    ports:
      - "24224:24224"
    command: start.sh
    networks:
      - lognet
  nginx:
    image: nginx-pixel
    ports:
      - "80:80"
    logging:
      driver: fluentd
    networks:
      - lognet
networks:
  lognet:
    driver: bridge

My start.sh is in the same directory as the yml file. When I run docker-compose up -d this is what I get:

ERROR: for fluentd Cannot start service fluentd: oci runtime error: exec: "start.sh": executable file not found in $PATH
ERROR: Encountered errors while bringing up the project.

My docker-compose info:

docker-compose version 1.8.0, build f3628c7
docker-py version: 1.9.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
docker compose oci runtime error, executable file not found in $PATH
The server doesn't answer because it is not defined as an upstream. Try this:

upstream my_server {
    server 172.17.0.2:10000;
}

server {
    listen 80;
    server_name landing.example.com;

    location / {
        proxy_pass http://my_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_redirect http:// $scheme://;
    }
}

Here you define the upstream server (your server, by IP or hostname) and make sure to forward the headers too, so the server answering knows who to answer to.
On landing.example.com:10000 I have a webserver that works fine; it is a Docker container that exposes port 10000, and its IP is 172.17.0.2. What I would like is an nginx reverse proxy on port 80 that sends visitors to different Docker containers depending on the URL they visit.

server {
    listen 80;
    server_name landing.example.com;
    location / {
        proxy_pass http://172.17.0.2:10000/;
    }
    access_log /landing-access.log;
    error_log /landing-error.log info;
}

When I do this, I get 502 Bad Gateway and the log says:

2016/04/14 16:58:16 [error] 413#413: *84 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: landing.example.com, request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:10000/", host: "landing.example.com"
How to set nginx reverse proxy?
Thanks to this .htaccess-to-nginx.conf converter, and some tricks and tests I've made, here is the corresponding nginx.conf file. I hope it helps people. ;)

EDIT: the link to my configuration is dead, but the converter is still available. As long as you have a valid Apache configuration, you're good to go.
I'm currently developing a RESTful API as a bridge between my iOS/web application and their shared database and content. I found my way to implementing a RESTful API in PHP on this blog. I started my development on my OVH Apache-based server. Unfortunately, they don't provide OAuth support on web hosting services and there is no way to install it; OVH told me I needed a dedicated server or a VPS for this. Now I'm going to work on dotCloud. It's a great alternative, I think, but their servers (seemingly Amazon EC2 ones) are nginx-based. This is the first time I've used an nginx server, and I need your help "translating" this .htaccess to an nginx.conf file. Before asking for your help, I tried to find an nginx.conf file for this, but none worked. When I pushed them to my dotCloud app, the http service of my app crashed and the dotCloud CLI said:

14:55:44 [www.0] WARNING: The service crashed at startup or is listening to the wrong port. It failed to respond on port "http" (80) within 30 seconds. Please check the application logs.

Thanks for any help in advance :)
nginx.conf for a restful api
Finally I figured out how to solve this problem. First we need to uncheck the "Enable security" option on the Manage Jenkins page. With security disabled we can trigger our jobs with requests like http://ci.your_domain.com/job/job_name/build.

If you want to add a token to the trigger URL, you need to enable security, choose "Project-based Matrix Authorization Strategy" and give Admin rights to the Anonymous user. After that, the Configure page of your project will have a "Trigger builds remotely" option where you can specify a token, so your request will look like JENKINS_URL/job/onru/build?token=TOKEN_NAME.

So with security disabled we need to protect http://ci.your_domain.com with nginx http_auth, except URLs like /job/job_name/build. And of course we need to hide port 8080 from external requests. Since my server runs Ubuntu, I can use the iptables firewall:

iptables -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

But! On Ubuntu (I am not sure about other Linux OSes) iptables rules disappear after a reboot, so we need to save them with:

iptables-save

And that is not the end. This command just prints the rules; on startup we need to load them, and the easiest way is the iptables-persistent package:

sudo apt-get install iptables-persistent
iptables-save > /etc/iptables/rules

Take a closer look at iptables if needed: https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables - and good luck with Jenkins!

And there is a good example for running Jenkins on a subdomain of your server: https://wiki.jenkins-ci.org/display/JENKINS/Running+Hudson+behind+Nginx
I installed Jenkins on my server and I want to protect it with nginx http auth, so that requests to:

http://my_domain.com:8080
http://ci.my_domain.com

will be protected, except one location needed to trigger builds:

http://ci.my_domain.com/job/my_job/build

I am kinda new to nginx, so I'm stuck on the config for that.

upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen x.x.x.x:8080;
    server_name *.*;
    location / {
        proxy_pass http://jenkins;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        auth_basic "Restricted";
        auth_basic_user_file /path/.htpasswd;
    }
}

I tried something like the above config, but when I visit http://my_domain.com:8080 there is no http auth.
Protect Jenkins with nginx http auth except callback url
I know this post is quite old, but adding the application/json MIME type to the nginx configuration file and restarting the server should work. When you request the JSON file, try to debug the response headers and check whether the Content-Type header was successfully changed to application/json.
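A minimal sketch of the two usual ways to get that mapping in place (inside the http context of nginx.conf); recent stock mime.types files already contain the json entry:

http {
    # option 1: use the bundled file, which maps "json" to application/json
    include mime.types;
}

# option 2: if you maintain your own list, the entry looks like:
types {
    application/json  json;
}

Then reload with: nginx -s reload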
Switching to nginx for a site, one issue I'm having is serving up static JSON files. I added this to the MIME types:

application/zip zip;
...
application/json json;
...

and restarted, but it still tried serving the file up as a download (i.e. http://domain.com/json-tmp/locations.json). What else would I need to configure? thx
configuring nginx to serve static json files
Confluence 7.3+ launches Companion with a custom protocol prefixed with atlassian-companion:. This is constructed using a hidden iframe to prevent the page from redirecting. Therefore, to resolve this issue, please add atlassian-companion: to your default-src or frame-src exclusions in your Content Security Policy. For example: frame-src atlassian-companion:;
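Applied to the nginx directive from the accompanying question, that would look roughly like this (a sketch: the https: source is kept in frame-src so regular HTTPS frames stay allowed):

add_header Content-Security-Policy "default-src https: wss: blob: goedit: 'unsafe-inline' 'unsafe-eval'; connect-src https://*.atlassian.com 'self' ws:; frame-src https: atlassian-companion:; img-src blob: https: data: 'unsafe-inline' *; font-src https: data:" always;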
We use the Confluence Companion tool to edit files from Confluence locally (https://confluence.atlassian.com/doc/edit-files-170494553.html), but since the last update of that tool it is no longer working. I found out that it is because of the CSP directive that we've set in NGINX, but no matter what changes I make, nothing works.

Original CSP directive:

add_header Content-Security-Policy "default-src https: wss: blob: goedit: 'unsafe-inline' 'unsafe-eval'; connect-src https://*.atlassian.com 'self' ws:; img-src blob: https: data: 'unsafe-inline' *; font-src https: data:" always;

Result:

Refused to frame '' because it violates the following Content Security Policy directive: "default-src https: wss: blob: goedit:". Note that 'frame-src' was not explicitly set, so 'default-src' is used as a fallback.

So I figured, let's add frame-src:

add_header Content-Security-Policy "default-src https: wss: blob: goedit: 'unsafe-inline' 'unsafe-eval'; connect-src https://*.atlassian.com 'self' ws:; frame-src 'self'; img-src blob: https: data: 'unsafe-inline' *; font-src https: data:" always;

But now it reports:

Refused to frame '' because it violates the following Content Security Policy directive: "frame-src 'self'".

I'm kinda lost here. In the first place, why does it load... nothing? Just '' - I'd expect a website there or something - but no matter what changes I make to frame-src, it keeps complaining.

What I tried:

frame-src 'self';
frame-src '*';
frame-src '';
frame-src 'self' data:;
frame-src '*.mydomain.com';
frame-src 'none';

I even tried to allow all frames via X-FRAME-OPTIONS, as well as adding frame-ancestors and combining all of the above in various ways, but the result is the same. Help is very much appreciated. Thanks!
Refused to frame '' because it violates the following Content Security Policy directive
You need to use a combination of proxy_pass_request_headers on and underscores_in_headers on, since your header contains an underscore. underscores_in_headers needs to be placed in your http block. See: http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers

OLD ANSWER: You are looking for proxy_pass_header. See: Module ngx_http_proxy_module
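A minimal sketch of that combination (the upstream address is illustrative):

http {
    # accept header names like "remote_user" instead of silently dropping them
    underscores_in_headers on;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;   # your app server
            proxy_pass_request_headers on;
        }
    }
}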
I have a firewall that is the SSL terminator and sets the remote_user header. This header should be passed onto an application, however we have an nginx proxy sitting in the middle.Browser over SSL -> Firewall proxy -> Nginx proxy -> AppI cannot for the life of me figure out how to pass the remote_user header to the App from the Firewall. Nginx seems to swallow it. $remote_user doesn't have it (which makes sense). $http_remote_user doesn't have it (which doesn't make sense). I tried to find it in $proxy_add_* but couldn't.How do I proxy pass the remote_user header when the SSL terminator isn't nginx?EDIT1: Other things I have tried:proxy_pass_request_headers on; proxy_pass_header remote_user; proxy_set_header other_user $http_remote_user; proxy_pass http://scooterlabs.com/echo;
Proxy pass remote_user with nginx
Websockets are not currently supported; we are working on adding support, and I will update here when it is available. Thank you.

Edit: Websocket support is now available in all regions; the annotation for it is:

annotations:
  ingress.bluemix.net/websocket-services: service-name
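Applied to the ingress resource from the accompanying question, the annotation sits in the metadata, roughly like this sketch:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.bluemix.net/websocket-services: service-name
spec:
  rules:
  - host: www.myhost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: 80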
When the client tries to connect to our ingress-defined endpoint via a wss:// request, the app returns 400 Bad Request, which according to the socket.io docs is due to missing headers removed by load-balancing proxies like nginx.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.org/websocket-services: service-name
spec:
  tls:
  - hosts:
    - www.myhost.com
  rules:
  - host: www.myhost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: 80

From the logs of the IBM-provided ingress controller, it seems to be a fork of this nginx ingress controller, which says that the annotation nginx.org/websocket-services adds support for websockets by adding directives to the generated nginx conf to pass the required headers. We have tried this as per the above, but to no avail. Has anyone had any success making this annotation work? Any workarounds for adding to the generated nginx conf? Do any IBM people know if this functionality was intentionally removed from the fork, and if there is any way to add support for websockets in the IBM version of Kubernetes?
How to add websocket support to an ingress resource in Kubernetes on IBM Bluemix?
There are a few formatters out there, such as:

- Nginx Formatter (Python) by 1connect, which has a nice locally runnable tool - works very well!
- Nginx Formatter (Python) at blindage.org - I didn't try that one, but it seems good judging by the example outputs.
- Nginx Beautifier (JavaScript), also available at nginxbeautifier.com as a tiny JS tool, just like jsbeautifier.com, and of course also open source on GitHub. You can run it locally too by:
  - installing it from npm (the Node.js package manager): npm install -g nginxbeautifier
  - installing it from the Arch AUR (Arch User Repository): pacaur -S nginxbeautifier
  - cloning my GitHub repository (git and GitHub): git clone https://github.com/vasilevich/nginxbeautifier.git

Instructions on how to use the program locally are available once you execute nginxbeautifier -h or nginxbeautifier --help, and also on the GitHub page itself.

Full disclosure: I am the developer and maintainer of nginxbeautifier.com and the relevant GitHub page; please report any issues there. Some of the code in nginxbeautifier was actually inspired by the first option mentioned.
I have got this messy config, for example:

server { listen 80 default; server_name localhost; location / { proxy_method $foo; proxy_pass http://foobar:8080; } }

and I would like to make it look like:

server {
    listen 80 default;
    server_name localhost;

    location / {
        proxy_method $foo;
        proxy_pass http://foobar:8080;
    }
}

How can I format Nginx configurations in a better way?
How can I apply consistent indentation and formatting to Nginx config files?
Pool directives are a PHP-FPM convention whereby multiple "pools" of child processes can be started, each with its own configuration. The default name for the pool directives file is www.conf. Take a look at this link for more information and sample configurations.
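For illustration, a stripped-down pool file; the values here are just the common defaults shipped on Debian/Ubuntu-style installs, not a recommendation:

; /etc/php/7.0/fpm/pool.d/www.conf
[www]
user = www-data
group = www-data

; how PHP-FPM listens for FastCGI requests from the web server
listen = /run/php/php7.0-fpm.sock

; process-manager settings for this pool's children
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3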
I know what php.ini is for, and it can be found in the /etc/php/7.0/fpm directory. What I can't find documentation for is what www.conf is designed for. It can be found in /etc/php/7.0/fpm/pool.d.
What is www.conf?
The documentationsuggests that it's the "prefix path":–prefix=pathdefines a directory that will keep server files.This same directory will also be used for all relative pathsset by configure (except for paths to libraries sources) andin the nginx.conf configuration file. It is set to the/usr/local/nginxdirectory by default.By contrast:–conf-path=pathsets the name of annginx.confconfiguration file. If needs be, NGINX can always be started with a different configuration file, by specifying it in the command-line parameter-c file. By default the file is namedprefix/conf/nginx.conf.However,this is a documentation bug, andyour include paths will in fact be relative to the "config path".
While I'm compiling NGINX, I get this message:

nginx path prefix: "/tmp/app"
nginx binary file: "/tmp/app/progs/nginx/sbin/nginx"
nginx configuration prefix: "/tmp/app/progs"
nginx configuration file: "/tmp/app/progs/nginx.conf"

Does NGINX use the path prefix or the configuration prefix for include directives in nginx.conf?
Which prefix does NGINX use for "include"?
For nginx, PHP is never JavaScript. Nginx can't distinguish between PHP which renders HTML and PHP which renders JavaScript (please correct me if I'm wrong). So the way to go would be either to set up a separate folder for the PHP files which generate all the JS (code is not tested!):

location ~ ^/normal_php/.+\.php$ {
    include /var/ini/nginx/fastcgi.conf;
    fastcgi_pass php;
    fastcgi_param SCRIPT_FILENAME /var/www/dyndev.dk/public/secure/index.php;
}

location ~ ^/js_php/.+\.php$ {
    expires 1y;
    add_header Cache-Control "public";
    include /var/ini/nginx/fastcgi.conf;
    fastcgi_pass php;
    fastcgi_param SCRIPT_FILENAME /var/www/dyndev.dk/public/secure/index.php;
}

...or send the header from PHP itself:

<?php
header('Expires: ' . gmdate('D, d M Y H:i:s \G\M\T', time() + (60 * 60))); // 1 hour
I have a problem with expires headers on JavaScript files which are generated by PHP. The website has two types of JavaScript files: some are static, and some are dynamically generated by PHP.

conf without expires headers - here no expires headers are added to the .js files (all files return HTTP 200):

location / {
    try_files $uri $uri/ /index.php;
}

location ~ \.php$ {
    include /var/ini/nginx/fastcgi.conf;
    fastcgi_pass php;
    fastcgi_param SCRIPT_FILENAME /var/www/index.php;
}

conf with expires headers - when adding a location for .js files, all dynamically generated files return HTTP 404:

location / {
    try_files $uri $uri/ /index.php;
}

location ~ \.php$ {
    include /var/ini/nginx/fastcgi.conf;
    fastcgi_pass php;
    fastcgi_param SCRIPT_FILENAME /var/www/dyndev.dk/public/secure/index.php;
}

location ~ \.(js|css)$ {
    expires 1y;
    add_header Cache-Control "public";
}

How do I handle both the static and the dynamically generated .js files with expires headers? All dynamically generated JavaScript files are named *-php.js.

File structure:

/var/www/public/index.php  # All non-static file requests are pointed to index.php
/var/www/public/js/main.js # Static files
/var/www/js-dynamically_generated.php # This file is outside the public www, but is routed by PHP since the file doesn't exist inside the public /js

PHP routing:

www.example.com/ -> index.php
www.example.com/js -> static content
www.example.com/js/dynamically_generated-php.js -> js-dynamically_generated.php
nginx with expires on javascript files (dynamically generated by PHP)
One lazy (yet recommended and professional) way of going about app updates is running an automation script, using a tool like Fabric or Ansible. However, if you wish to proceed the manual way (which is tedious), you might do something like:

1. Pull from git.
2. Run migrations: python manage.py migrate (this ensures changes you made locally to your models are reflected in the production DB).
3. Run static collection to ensure new statics are reflected in the server's /static/ folder, like so: python manage.py collectstatic
4. Then restart your Django server, not Nginx. So something like: sudo service your_django_server_running_instance restart

On DigitalOcean, for instance (when the One-Click Install is used), your Django server running instance is likely called gunicorn. Then you might want to look into automating your PostgreSQL DB as well.
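To give a flavor of the automation route, here is a minimal Fabric 1.x sketch of the same steps (the project path and service name are hypothetical):

# fabfile.py -- run with: fab deploy
from fabric.api import cd, run, sudo

def deploy():
    with cd("/srv/myproject"):                 # hypothetical project path
        run("git pull")
        run("python manage.py migrate")
        run("python manage.py collectstatic --noinput")
    sudo("service gunicorn restart")           # restart the app server, not nginx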
I am relatively new to Python/Django and have successfully deployed my first app. I want to update it now with some new changes, but I am not sure what the proper process is. My setup is Ubuntu/Nginx/Gunicorn/Postgres. At the moment I am taking the following steps:

1. Stop nginx: sudo service nginx stop
2. Stop gunicorn: sudo service gunicorn stop
3. Back up the DB? (not implemented; can't find it on the server)
4. Git pull
5. python manage.py migrate
6. python manage.py collectstatic
7. Restart gunicorn: sudo service gunicorn start
8. Restart nginx: sudo service nginx restart

This is working, but I would appreciate some guidance on whether this is the complete, most accurate and safest way to do it.
Updating Django App on server
proxy_cache_min_uses just counts the number of requests after which the response from the upstream will be cached. Requests are evicted from the cache when they are not accessed within an expiration time, or when the size of the cache exceeds a maximum value (using an LRU algorithm). You can tune the proxy cache via the proxy_cache_path directive (here is a nice doc with examples).
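A sketch of how the pieces fit together (the zone name, sizes and times are illustrative):

# entries unused for 60 minutes are evicted; total size is capped at 1g (LRU)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache mycache;
        proxy_cache_min_uses 3;        # start caching a key after its 3rd request
        proxy_pass http://127.0.0.1:8080;
    }
}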
The nginx proxy has a directive, proxy_cache_min_uses, but I can't find what time window is used, or how to set one. If it doesn't use any time window and just waits for the requests to reach some counter, then eventually all requests will be cached if you keep nginx running long enough. Or would a relatively rare request be quickly evicted from the cache because of the least-recently-used policy, so I shouldn't be too concerned about that? Thanks
proxy_cache_min_uses time window
When you hold down F5:

1. You've started hundreds of requests.
2. Those requests have filled your gunicorn request queue.
3. The request handlers are not culled as soon as the connection drops.
4. Your latest requests are stuck in the queue behind all the previous requests.
5. Nginx times out. For everyone.

Solutions:

- Set up rate-limiting buckets in Nginx, keyed on IP, such that one malicious user can't spam you with requests and DOS your site.
- Set up a global rate-limiting bucket in Nginx such that you don't overfill your request queue.
- Make Nginx serve a nice "Reddit is under heavy load"-style page, so users know that this is a purposeful event.

Or: replace gunicorn with uWSGI. It's faster, more memory-efficient, integrates smoothly with nginx, and most importantly: it will kill the request handler immediately if the connection drops, so F5 spam can't kill your server.
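For the per-IP rate-limiting suggestion, a minimal nginx sketch (the zone name, rate and burst values are illustrative):

# one bucket per client IP, allowing a sustained 10 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # absorb short bursts of 20 requests, reject the rest
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://127.0.0.1:8000;     # gunicorn
    }
}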
I'm trying to publish a Django application on a production server using Nginx + Gunicorn. When I do a simple stress test on the server (holding the F5 key for a minute), the server returns a 504 Gateway Time-out error. Why does this happen? Does this error only appear for the user making multiple concurrent requests, or does the system become fully unavailable to everyone?
Django Nginx Gunicorn = 504 Timeout
I use this directive in the pool configuration file (www.conf) for PHP-FPM:

catch_workers_output = yes

It redirects the workers' stdout and stderr into the main error log, so output that would otherwise be discarded gets logged.
FastCGI doesn't want to log PHP errors properly. Well, that's not entirely true: it logs errors fine, with a little fiddling; it just won't log anything else, such as warnings.

The notorious FastCGI -> Nginx log bug isn't necessarily an issue. Errors and warnings from php-fpm go straight to Nginx, but only if they're uncaught. That is, if set_error_handler successfully intercepts an error, no log entry is appended. This means that I can see parse errors, but that's about it.

php-fpm doesn't log PHP errors by itself (separately from nginx) without a bit of a hack. php-fpm's instance configuration file includes these two lines by default:

php_admin_value[error_log] = /mnt/log/php-fpm/default.log
php_admin_flag[log_errors] = on

I changed the error_log path, obviously. I had to add the following line to get it to actually log anything:

php_admin_value[error_reporting] = E_ALL & ~E_DEPRECATED & ~E_STRICT

Version note: the E_STRICT part is unnecessary, as I'm using PHP 5.3.27, but I plan on upgrading to 5.4 at some point. With this line, it logs errors, and only errors, to /mnt/log/php-fpm/default.log. Now, this sets error_reporting to the same value that I have set in php.ini, so something is obviously wrong here. In addition, it doesn't log caught errors: the behavior is identical to that of the nginx log. I tried using the numeric value (22527) instead, but still no luck.

I don't care in which log file the entries end up (nginx versus php-fpm), but I do need caught errors to be logged somewhere. I could resort to injecting my own error and exception handlers, but that's a bit hackish, so I'd rather avoid it.
Nginx + FastCGI + PHP (php-fpm) not logging caught errors/warnings
uWSGI supports keep-alive via the --http-keepalive option if you receive requests via http.

/tmp$ cat app.py

def application(env, start_response):
    content = b"Hello World"
    start_response('200 OK', [
        ('Content-Type', 'text/html'),
        ('Content-Length', str(len(content))),
    ])
    return [content]

Run with:

/tmp$ uwsgi --http=:8000 --http-keepalive -w app &> /dev/null

And we can see the connect calls via strace:

~$ strace -econnect wrk -d 10 -t 1 -c 1 http://127.0.0.1:8000
connect(3, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
Running 10s test @ http://127.0.0.1:8000
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    92.32us   56.14us   2.81ms   97.20%
    Req/Sec    11.10k   389.34    11.84k    68.32%
  111505 requests in 10.10s, 7.98MB read
Requests/sec:  11040.50
Transfer/sec:    808.63KB
+++ exited with 0 +++

See? Only one connection.
Is it possible to somehow work around the keep-alive limitation of uWSGI? If not, what is the best way to implement persistent connections? I'm using NGiNX + uWSGI (Python), and I want clients to receive asynchronous updates from the server.
uWSGI keepalive
Make sure fastcgi_intercept_errors is set to on, and use the error_page directive:

location / {
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_intercept_errors on;
    error_page 502 =503 /error_page.html;
    # ...
}
Is it possible to replace 502 errors in nginx.conf (php-fpm problems) with 503?

502 = bad gateway
503 = server overloaded

nginx: 502
googlebot: Hmmm, I don't like that... sorry but... penalized...

nginx: 503
googlebot: Hmmm, no problem, I will try again later...

nginx: thank you for your willingness to understand
How to replace nginx errors
I actually ran into the same problem and solved it (at least in my situation) by complete mistake... In the nginx walkthrough on the Mono project's site, it says to enter these lines in your nginx.conf file:

index index.html index.htm default.aspx Default.aspx;
fastcgi_index Default.aspx;

Well, I set this up in the exact same way (or so I thought) on two VMs. The problem was that one VM's root URL worked and one didn't. It turned out that I had forgotten the semicolon on the 'index' line on the VM that worked, so the 'fastcgi_index' line was interpreted as part of the 'index' line. So on the VM that didn't work, I removed that semicolon. And guess what? It worked. Then I added the semicolon back and entirely removed the 'fastcgi_index' line, and it still worked. So based on this anecdotal evidence and some guesswork, I'd say that the 'fastcgi_index' line should not be included in MVC applications. Well, at least MVC 3; I haven't tested anything else.
I'm trying to convert an ASP.NET MVC 2 app to run on nginx/Mono 2.8. So far it seems to work quite well, except that the default route doesn't work when the path is empty. I am proxying all requests through to the FastCGI server, and I get served an ASP.NET 404 Not Found page.

i.e. This doesn't work: http://mysite.com
But this does: http://mysite.com/home

My Global.asax.cs file looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MyProject
{
    public class MvcApplication : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // Default route
            routes.MapRoute(
                "Default", // Route name
                "{controller}/{action}/{id}", // URL with parameters
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults
                new string[] {"MyProject.Controllers"}
            );
        }

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
        }
    }
}

EDIT: Some more info on my setup. I am running OS X 10.6, if that makes any difference. Also, the same problem exists for the default route of any areas in the MVC project.
Mono MVC 2 home route doesn't work
Make sure that the user nginx runs as (in most cases 'nobody' or 'www-data') has permission to read the contents of your home directory /home/admin. Also, you can look into the nginx logs and read exactly what the error was.
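A quick way to check this from a shell (assuming the worker user is www-data; namei prints the permissions of every path component the worker must traverse):

# show permissions along the whole path
namei -l /home/admin/sintest/public

# or try listing the directory as the nginx user directly
sudo -u www-data ls /home/admin/sintest/public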
I followed this tutorial more or less... I installed the passenger gem, executed passenger-install-nginx-module, successfully installed nginx, and inserted this into the config:

server {
    listen 80;
    server_name localhost;
    root /home/admin/sintest/public; # <--- be sure to point to 'public'!
    passenger_enabled on;
}

In /home/admin/sintest I have an empty public folder, the config.ru:

require 'sinatra'
set :env, :production
disable :run
require './app.rb' # the app itself
run Sinatra::Application

and a test Sinatra app.rb:

require 'sinatra'
get '/' do
  "hello world!"
end

Now when I run nginx and open up http://localhost, what I get is: 403 Forbidden. What am I doing wrong? Have I missed something?
Sinatra on Nginx configuration - what's wrong?
It is not deprecated, no. The problem is that the packaged module you are trying to install was made for an older Nginx version, the one distributed through the system's default repository. This appears in the installation guide that you mentioned:

At this point we assume that you already have Nginx installed from your system repository.

What this means is that it is assumed you have the specific version of Nginx (1.14.0 in your case) installed for which the packaged module was built. This is emphasized in the new Passenger documentation:

If you want to use our packaged Nginx module, you must use your distro's provided Nginx package. If for example you have the repo provided by NGINX set up, you will instead need to compile a dynamic module compatible with that Nginx.

The link in the last quote brings you to the guide on how to compile a dynamic Passenger module and enable it in the Nginx configuration. I will not repeat the whole process, to keep the answer short, but the general approach is this:

1. Get the Passenger module for Nginx source code.
2. Get the Nginx source code for the version you have installed.
3. Compile Nginx with the Passenger module:

cd /path-to-nginx-source-dir
./configure --prefix=/opt/nginx \
    --with-some-configure-flag \
    --add-dynamic-module=$(passenger-config --nginx-addon-dir) \
    --add-module=/path-to-some-other-nginx-module
make
sudo make install

4. Make Nginx load the module by adding this line to nginx.conf:

load_module modules/ngx_http_passenger_module.so;

Personally, I would rather choose the 'nginx-behind-nginx' approach than building the module: you keep whatever Nginx version you like, but it runs as a reverse proxy for another Nginx with Passenger enabled (Passenger Standalone). With an unnoticeable penalty to performance, this is much easier to maintain (install, update). See this guide for details.
I updated nginx from version 1.14 to 1.18 on Ubuntu 18.04. Doing so appeared to break Passenger. So I uninstalled it and attempted to reinstall the open source Passenger version via the Passenger installation instructions for Ubuntu 18.04. I got to this line:

sudo apt-get install -y libnginx-mod-http-passenger

which throws this error:

libnginx-mod-http-passenger : Depends: nginx-common (< 1.14.1) but 1.18.0-3ubuntu1+bionic1 is to be installed

Update: I also tried the enterprise version. Following the enterprise installation instructions, I received a similar error message:

libnginx-mod-http-passenger-enterprise : Depends: nginx-common (< 1.14.1) but 1.18.0-3ubuntu1+bionic1 is to be installed

I did attempt to research the issue, and I found this issue on Phusion's GitHub as well as this more recent issue. It appears that what most people are doing is rolling their nginx version back to 1.14.
Is Passenger Deprecated for Nginx versions above 1.14?
A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without ingress. This seems to be a good example, using the nginx-ssl-proxy container.
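A rough sketch of that pod layout (the image names, ports and Secret name are placeholders; the nginx config itself would come from a ConfigMap and terminate TLS before proxying to the app inside the pod):

apiVersion: v1
kind: Pod
metadata:
  name: api-with-tls
spec:
  containers:
  - name: app
    image: my-dropwizard-api:latest     # the REST service, plain HTTP inside the pod
    ports:
    - containerPort: 8080
  - name: tls-proxy
    image: nginx:stable
    ports:
    - containerPort: 443
    volumeMounts:
    - name: certs
      mountPath: /etc/nginx/ssl
      readOnly: true
  volumes:
  - name: certs
    secret:
      secretName: tls-secret            # holds tls.crt / tls.key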
My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.How do I do this?Everything I'm seeing online involves Ingress and nginx-ingress in particular.But my deployment is not a website, it's a Dropwizard service with a REST API on one port and an admin service on another port. I don't want to map the ports to a path on port 80, I want to keep the ports as is. Why is HTTPS tied to ingress?I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?
How to get HTTPS on AKS without ingress
This is not a routing problem on nginx's part, but the browser trying to access an absolute URI from the root of your domain. Use relative URIs: remove the leading slash from the asset URLs in index.html.
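For an Angular CLI build served under a sub-path, the usual way to get this (not part of the quoted answer) is to set the base href at build time, which rewrites the <base> tag in index.html:

ng build --base-href /mywebsite/

# equivalently, index.html then contains:
# <base href="/mywebsite/">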
Hi, we're trying to get our website working on Kubernetes (running in a container using nginx). We use ingress to route to the site; here is our configuration.

nginx-conf:

server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}

Kubernetes deployment:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mywebsite
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
      - name: mywebsite
        image: containerurl/xxx/mywebsite:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  ports:
  - port: 82
    targetPort: 80
  selector:
    app: mywebsite

Ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myIngress
  annotations:
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - xxx.com
    secretName: tls-secret
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /mywebsite
        backend:
          serviceName: mywebsite
          servicePort: 82

When I go to xxx.com/mywebsite, the index.html loads, but the CSS and JS are not loaded (404), because the page tries to get the resources here (CSS as an example):

xxx.com/styles.30457fcf45ea2da7cf6a.css

...instead of:

xxx.com/mywebsite/styles.30457fcf45ea2da7cf6a.css

I've tried different things like nginx.ingress.kubernetes.io/add-base-url, nginx.ingress.kubernetes.io/app-root, etc., but nothing seems to work. Any ideas? Thanks for your help. Regards, Peter
Kubernetes Ingress Nginx loading resources 404
Nginx can handle SSL termination nicely, and this will offload SSL processing from your application servers. If you have a secure private network between your nginx and application servers, I recommend offloading SSL via an nginx reverse proxy: nginx listens on SSL (certificates are managed on the nginx servers) and then reverse proxies requests to the application servers over plain HTTP, so the application servers don't need certificates on them, with no SSL config and no SSL processing burden. If you don't have a secure private network between your nginx and application servers, you can still use nginx as a reverse proxy by configuring the upstreams with SSL, but you will lose the offloading benefits. CDNs can do this too; they are basically reverse proxy + caching, so I don't see a problem there. Good read.
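A minimal sketch of that termination setup (the hostname, certificate paths and app port are placeholders):

server {
    listen 443 ssl;
    server_name tulip.flower.com;

    ssl_certificate     /etc/nginx/ssl/flower.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/flower.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;       # plain HTTP to the Express app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}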
Scenario: I have an express.js server which serves variations of the same static landing page based on where req.headers.host says the user is coming from. Think of it as a sort of A/B testing.

GET tulip.flower.com serves pages/flower.com/tulip.html
GET rose.flower.com serves pages/flower.com/rose.html

At the same time, this one IP is also responsible for:

GET potato.vegetable.com serving pages/vegetable.com/potato.html

It's important that these pages are served FAST, so they are precompiled and optimized in all sorts of ways.

The server now needs to:

- Provide separate certificates for *.vegetables.com, *.fruits.com, *.rocks.net
- Optionally provide no certificate for *.flowers.com
- Offer HTTP/2

The problem is that HTTP/2 mandates a certificate, and there are now multiple certificates in play. It appears that it's possible to use multiple certificates on one Node.js (and presumably, by extension, Express.js) server, but is it possible to combine that with a module like spdy, and if so, how? Instead of hacking Node, would it be smarter to pawn the task of sorting out HTTP/2 and SSL off to nginx? Should a caching network like Imperva or Akamai handle this?
Multiple SSL Certificates and HTTP/2 with Express.js
You won't make mistakes like that if you lint your code, run under strict mode, and don't use global variables like that. Also, in Node.js web applications you generally want to make the server stateless and keep all the data in databases; this also makes for a more scalable architecture. In applications where security is super important, you can additionally throw heavy fuzz testing at it to find problems like that. If you do all this, plus have a strict code-review process, you won't have to worry about it at all.

FastCGI doesn't prevent this problem, as a single connection (or a few) is used to communicate with the server that processes the requests (Node.js in this case), and HTTP connections are multiplexed through it. A Node.js process will handle multiple requests at a time. You could potentially make a solution that launches a thread per request, but it would be a lot slower. If you are using Node.js for things that are required to be highly reliable or can't afford small mistakes (for example, health-related devices), Node.js is the wrong platform.
Just started dealing with NodeJS web apps and have a fundamental question. Since I came from the PHP realm, I know PHP has a built-in HTTP server but no one actually uses it; we used nginx as the HTTP server, and Apache in prehistoric projects. When I came to ExpressJS I found that all examples talk about listening on the HTTP server that ExpressJS opens (via the http NodeJS module, of course), but no one talks about using it via FastCGI (nginx -> FastCGI (e.g. node-fastcgi) -> my ExpressJS app) like I used to do with PHP (nginx -> PHP-fpm -> my PHP env), and I wonder why. As far as I understood, a NodeJS app is very fast, non-blocking I/O and so on, but there is a security hole in using the app the way everybody shows: since the service has common shared resources in the JavaScript environment, one user can by mistake (or not) share sensitive information with others. For instance, let's assume the developer made a mistake like this: router.post('/set-user-cc', function(res){ global.user = new User({ creditCard: req.param('cc') }); }); And another user makes a request like this: router.get('/get-user-cc', function(req, res){ res.json(global.user); }); At this point each user will get that user's CC info. Using my ExpressJS app via FastCGI would open a clean JavaScript environment for each HTTP request and users wouldn't hurt each other. It'd be nice to hear from experienced NodeJS (web) app developers why no one suggests the FastCGI solution (searched on Google and found almost nothing) and, if so, why it's so bad. (p.s. the example is just to demonstrate the problem; it's not something that someone actually did, but as we know lots of stupid people exist in the universe :) Thank you!
Use ExpressJS app via FastCGI
Actually, your configuration should work. You can check it using curl: # curl -i -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1" http://localhost/i.php HTTP/1.1 403 Forbidden Server: nginx/1.3.6 Date: Wed, 26 Dec 2012 10:05:34 GMT Content-Type: application/octet-stream Content-Length: 62 Connection: keep-alive Browser not supported. Please update or change to another one. The access log is also worth checking (by default the log_format includes the $http_user_agent variable). By the way, what's in /etc/nginx/fastcgi.conf? Another thing: if you want people with real MSIE 6 browsers to see your message, you'd better do something like this: location ~ \.php$ { ... if ($http_user_agent ~ "MSIE 6" ) { return 403; } error_page 403 /old_browser.html; } and create old_browser.html with your message. Please note that this file should be larger than 512 bytes in order to ensure that MSIE will display its contents instead of some standard IE 403 message. Tools like http://browsershots.org are perfect for debugging such cases. :)
I'd like nginx to return error 403 if the user-agent is MSIE 6 and to display a custom error message. I used this code and everything worked for the first few minutes. Then it just returned the error without the message! Don't know why... Here's the code (I tried to put ' instead of ", to have plain text without ' or ", still no luck): location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; if ($http_user_agent ~ "MSIE 6" ) { return 403 "Browser not supported. Please update or change to another one."; } } EDIT: Forgot to say that it's in the php block because I want to block MSIE 6 only for PHP requests.
Nginx: return error 403 and display a message
Turns out my little test server was quite competitive with nginx once I told it to read files in binary mode instead of list mode. I think a lot of the discussion in the rest of this thread may be confusing for someone unfamiliar with erlang and erlang server design. I didn't want to delete the thread since there is good information about nginx in it (and I can't, it has answers already), but I encourage anyone looking into making an erlang-based server to do their own research and write lots of tests, and not rely only on what you've read here.
I'm interested in this from an academic standpoint rather than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top google response for this is this thread, but it merely links to a cryptic slideshow and a general covering of different io strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple erlang server to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given erlang's lightweight processes and underlying aio structure I thought it would compete, yet nginx still wins by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
Nginx's speed, and how to replicate it
From here: We do not plan on supporting this. Phusion Passenger Standalone is meant to be its own web server, designed to handle the most basic use cases very very well at the expense of customizability. That it happens to be using Nginx under the hood today is an implementation detail that the user should not bother with; it may very well happen that we one day swap Nginx with something else. If you want any kind of complex customization you should use Phusion Passenger for Nginx instead.
In older passenger versions (3.0.0) it was possible to configure the standalone nginx passenger (passenger start). In the .passenger dir there was a complete nginx installation (3.0.0-x86_64-ruby1.9.2-macosx-10.6/nginx). In 3.0.2 there is only an sbin dir; the config directory is missing. Where can I find the config files?
configure nginx in passenger 3.0.2 stand alone
Let's Encrypt blocks Amazon AWS domains because the domain names are transient and are subject to change.https://community.letsencrypt.org/t/policy-forbids-issuing-for-name-on-amazon-ec2-domain/12692/4
I ran the command to get the certificate, but it gave me this error: An unexpected error occurred: The server will not issue certificates for the identifier :: Error creating new order :: Cannot issue for "ec2-34-237-242-160.compute-1.amazonaws.com": The ACME server refuses to issue a certificate for this domain name, because it is forbidden by policy
Can I use Nginx Certbot to put ssl in an aws default ec2 domain?
After long debugging, we found that nginx caches the IPs of test-service.internal, and AWS will change its internal load balancer's IPs. So the IPs nginx cached no longer exist, and we need it to resolve new ones. Solution: nginx provides a resolver directive: location /test { resolver 10.0.0.2 127.0.0.1 valid=30s; set $backend_servers test-service.internal; proxy_pass http://$backend_servers:38102; #proxy_pass http://test-service; proxy_set_header HOST $host; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_set_header Connection ""; } We changed two things: Added a resolver. Removed the upstream (a resolver is not supported with upstream in open-source nginx; nginx-plus supports upstream with a resolver). resolver 10.0.0.2 127.0.0.1 valid=30s; set $backend_servers test-service.internal; proxy_pass http://$backend_servers:38102; Now we are using the AWS DNS server 10.0.0.2 to resolve test-service.internal every 30s.
I know this question has been asked multiple times, but not in relation to aws. 2020/07/29 10:23:17 [error] 6#6: *37749 connect() failed (113: Host is unreachable) while connecting to upstream, client: I am facing this issue after deploying nginx in the aws cloud. location configuration: location /test { proxy_pass http://test-service; proxy_set_header HOST $host; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } Upstream section as below: upstream test-service { server test-service.internal:38102; keepalive 10; } Here test-service.internal is my route53 hosted zone entry and it points to an internal application load balancer in aws. When I deploy/restart the nginx server it works well, but after a few days (around two/three) it hangs on the proxy pass only. When I load html content it works perfectly, but the proxy pass call gets stuck. Any solution would be helpful. Thanks.
connect() failed (113: Host is unreachable) while connecting to upstream nginx for aws
You have to grant permissions for www-data user.sudo chown -R www-data:www-data .well-known
I'm trying to verify a file upload for an SSL certificate. The file needs to be at .well-known/acme-challenge/file. I have successfully placed the file as above, but when accessing it from the web at http://weburl.com/.well-known/acme-challenge/file, a 404 error comes up. When I place the same file in .well-known/, the file can be accessed successfully from the path http://weburl.com/.well-known/file. My nginx configuration: server { listen 80; server_name weburl.com; root /var/www/html; location ~ /.well-known { allow all; } location ~ /\.well-known/acme-challenge/ { allow all; root /var/www/html; try_files $uri =404; break; } }
.well-known/acme-challenge nginx 404 error
What kind of response are you getting? If there is an error in your response, you may need to add the always flag, or the header may not be added. http://nginx.org/en/docs/http/ngx_http_headers_module.html Syntax: add_header name value [always]; If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
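Applied to the configuration from the question, the fix might look like this sketch (nginx 1.7.5+ for the always parameter; the rewrite/403 pair is collapsed into a single redirect here, which seems to be the intent):
location / {
    if ($http_user_header_token = "") {
        return 302 https://loginsite.com/;
    }
    # `always` makes nginx add the header even on non-2xx/3xx responses
    add_header Set-Cookie "user_header_token=$http_user_header_token" always;
    root /usr/src/ui/;
    index index.html;
}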
I'm trying to get NGINX to check if the request header user_header_token is present. If it is not present, redirect to the login site. If it is present, set a cookie with the header's value. Currently the cookie is empty when it is set, instead of holding the $http_ variable I'm trying to set it to. Does anyone see what's preventing this cookie from being set to the header's value? http { include /etc/nginx/mime.types; server { listen 80; location / { if ($http_user_header_token = "") { rewrite ^.*$ https://loginsite.com/; return 403; } add_header Set-Cookie user_header_token=$http_user_header_token; root /usr/src/ui/; index index.html; } } }
NGINX set cookie based on value of a header
deny all will have the same consequence but leaves room for slip-ups: if you have auth_basic and/or allow in a parent block with a satisfy directive, requests satisfying those criteria will have access in an inheriting block that at face value is denying access. This is of no concern if you don't use this feature. The issue is illustrated in this answer, which suggests not using satisfy+allow+deny at the server{} level because of inheritance. I've come to the conclusion that a return 403 (or even a 404, as the rfc suggests for purposes of no information disclosure) is less error prone if I know the resource should under no circumstances be accessed via http, even if "authorized" in a general context.
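A sketch of the slip-up being described: with satisfy any inherited from the server block, deny all in a child location can still be bypassed by a request that satisfies auth_basic, while return 403 cannot.
server {
    satisfy any;                          # inherited by the locations below
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location /private/ {
        deny all;     # still reachable with valid basic-auth credentials
    }

    location /secret/ {
        return 403;   # blocked unconditionally
    }
}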
Disregarding best practices, does using return 403 achieve the exact same effect as deny all;? From the docs: Deny: Denies access for the specified network or address. Return: Stops processing and returns the specified code to a client. Does "denies access" mean the same as "stops processing and returns the specified code"? If not, what does "denies access" really mean?
Nginx: Difference between deny all; and return 403;
I was able to set the umask for the php5-fpm service by editing its unit service file as suggested here and here. The complete and working solution for Debian 8 is this: Manually edit the /etc/systemd/system/multi-user.target.wants/php5-fpm.service file and add a UMask=0002 line inside the [Service] section. Run the command systemctl daemon-reload. Run the command systemctl restart php5-fpm.service. Now the service file looks like this: [Unit] Description = The PHP FastCGI Process Manager After = network.target [Service] Type = notify PIDFile = /var/run/php5-fpm.pid ExecStartPre = /usr/lib/php5/php5-fpm-checkconf ExecStart = /usr/sbin/php5-fpm --nodaemonize --fpm-config /etc/php5/fpm/php-fpm.conf ExecReload = /bin/kill -USR2 $MAINPID ; Added to set umask for files created by PHP UMask = 0002 [Install] WantedBy = multi-user.target Note that: You can not use the systemctl edit php5-fpm.service command, as the edit option was introduced in systemctl version 218 but Debian 8 ships with version 215. Adding a *.conf file as suggested in the comments for this answer did not work for me, but maybe I messed up something (comments are welcome here, as editing the unit file is not something I feel comfortable with).
I'm running php5-fpm with nginx connected via port (not socket). It's stock Debian Jessie with all packages installed via apt-get. I'm trying to change the default umask for the www-data user that php5-fpm is using from 0022 to 0002 to allow group write permissions. I've tried: editing the /etc/init.d/php5-fpm init script and adding --umask 0002 to the start-stop-daemon call, but it was ignored; adding umask 0002 to /var/www/.profile as /var/www is a home directory for the www-data user, but it didn't help (I'm not surprised). I'm not using upstart so this solution is not for me. Also, no matter what I've tried, the command sudo -u www-data bash -c umask always returns 0022.
How to set umask for php5-fpm on Debian?
I solved it by adding Phusion Passenger. The nginx config is now: server{ listen 80; passenger_enabled on; passenger_app_env production; passenger_ruby /../ruby-2.3.0/ruby; root /path to application/public; client_max_body_size 4G; keepalive_timeout 10; [...] location /cable{ passenger_app_group_name websocket; passenger_force_max_concurrent_requests_per_process 0; } } You have to remove the default config/redis/cable.yml and keep that file under /config only. For SSL just enable the default ssl options and it will work :-) Thanks everyone for the help
Hello, I'm trying to serve a simple chat using ror 5.0.0 beta (with puma) working in production mode (in localhost there are no problems). This is my Nginx configuration: upstream websocket { server 127.0.0.1:28080; } server { listen 443; server_name mydomain; ssl_certificate ***/server.crt; ssl_certificate_key ***/server.key; ssl on; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/jenkins.access.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://localhost:3000; proxy_read_timeout 90; proxy_redirect http://localhost:3000 https://mydomain; location /cable/{ proxy_pass http://websocket/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $http_host; break; } } This is config/redis/cable.yml: production: url: redis://localhost:6379/1 development: url: redis://localhost:6379/2 test: url: redis://localhost:6379/3 and config/environments/production.rb: # Action Cable endpoint configuration config.action_cable.url = 'wss://mydomain/cable' # config.action_cable.allowed_request_origins = [ 'http://example.com', /http:\/\/example.*/ ] # Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies. config.force_ssl = false And this is the error I'm receiving: application-[...].js:27 WebSocket connection to 'wss://mydomain/cable' failed: Error during WebSocket handshake: Unexpected response code: 301 Any tips? :) Thanks
RoR 5.0.0 ActionCable wss WebSocket handshake: Unexpected response code: 301
In nginx: proxy_pass http://@squid; In squid: http_port 3128 vhost. And that's all you need to fix this error: https://i.stack.imgur.com/9FSB8.jpg
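Spelled out, a sketch of how those two lines fit into the files from the question (my reading: dropping the URI part from proxy_pass makes nginx pass the original request URI through unchanged, removing the stray leading "/", while vhost lets squid reconstruct the target from the Host header):
# nginx
upstream @squid {
    server localhost:3128;
}
server {
    listen 80;
    server_name relay.example.com;
    location / {
        proxy_pass http://@squid;        # no URI suffix appended here
        proxy_set_header Host $host;
    }
}

# squid.conf
http_port 3128 vhost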
Similar to this, I am trying to host a squid proxy behind nginx: example.com - the main site; relay.example.com - the squid server. So far, when I try to use the squid proxy, it complains about accessing an illegal page. For example, if I try to access http://www.google.com, I get an Invalid URL error saying that the URL is /http://www.google.com (note the preceding /). Could anyone suggest why this is happening, or a fix for nginx, or perhaps for the squid config? upstream @squid { server localhost:3128; } server { listen 80; server_name relay.example.com; location / { proxy_pass http://@squid/$scheme://$host$uri; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Request-URI $request_uri; proxy_redirect off; } } https://imgur.com/qtgrZI9 The log from squid gives: 1423083723.857 0 127.0.0.1 NONE/400 3530 GET /http://www.google.com/ - HIER_NONE/- text/html And nginx for the same request: 12.34.56.78 - - [04/Feb/2015:16:02:03 -0500] "GET http://www.google.com/ HTTP/1.1" 400 3183 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:35.0) Gecko/20100101 Firefox/35.0" "-"
Reverse proxy from nginx to squid
They are just two different ways of running WSGI applications. Have you tried googling for mod_wsgi nginx? Any WSGI-compliant server has that entry point; that's what the wsgi specification requires. Yes, but that's only how uwsgi communicates with Nginx. With mod_wsgi the Python part runs inside the web server itself; with uwsgi you run a separate app.
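For reference, the entry point both servers expect is just a callable with that signature; a minimal sketch of a WSGI app that either mod_wsgi or uWSGI can run:
def application(environ, start_response):
    """Minimal WSGI app: `environ` describes the request,
    `start_response` begins the HTTP response."""
    body = b"Hello, WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]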
There seems to be a mod_wsgi module in Apache and a uwsgi module in Nginx. And there also seems to be the wsgi protocol and the uwsgi protocol. I have the following questions. Are mod_wsgi and uwsgi just different implementations that provide WSGI capabilities to the Python web developer? Is there a mod_wsgi for Nginx? Does uwsgi also offer the application(environ, start_response) entry point to developers? Is uwsgi also a separate protocol apart from wsgi? In that case, how is the uwsgi protocol different from the wsgi protocol?
What is the difference between mod_wsgi and uwsgi?
A shared memory zone is a general term. Within the context of Nginx, a shared memory zone is defined so that worker processes can share stuff, for example, counters when you want to apply access limits.In case you're not familiar with worker processes, check this image.
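Two common examples of such zones in nginx configuration (names and sizes here are arbitrary). Each zone directive reserves a named region of memory that every worker process maps, so cache keys and request counters stay consistent across workers:
# cache metadata shared by all worker processes
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m;

# per-IP request counters shared by all worker processes
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location / {
        proxy_cache my_cache;
        limit_req   zone=per_ip burst=20;
        proxy_pass  http://127.0.0.1:8080;
    }
}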
According to the nginx documentation, the proxy_cache_path directive has a parameter called keys_zone. The documentation also refers to a concept of a "shared memory zone". In addition, all active keys and information about data are stored in a shared memory zone, whose name and size are configured by the keys_zone parameter. A one megabyte zone can store about 8 thousand keys. Is "shared memory zone" a general term or one specific to nginx? What does "shared" exactly mean?
What does the "shared memory zone" mean in nginx?
HAProxy does support that. HAProxy can offload TLS and forward to a backend that speaks h2c. Details on how to set up this configuration are available in this blog post.
We have a java web server which is able to serve content over h2c (HTTP/2 cleartext). We would like to reverse proxy connections established using h2 (i.e. standard HTTP/2 over SSL) to the java server as h2c. Enabling HTTP/2 on nginx is simple enough and handling incoming h2 connections works fine. How do we tell nginx to proxy the connection using h2c rather than http/1.1? Note: a non-nginx solution may be acceptable. server { listen 443 ssl http2 default_server; server_name localhost; ssl_certificate /opt/nginx/certificates/???.pem; ssl_certificate_key /opt/nginx/certificates/???.pk8.key.pem; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_pass http://localhost:8080/; ## <---- h2c here rather than http/1.1 } } CONCLUSION (June 2016): This can be done with haproxy using a configuration file as simple as the one below. Querying (HttpServletRequest) req.getProtocol() clearly returns HTTP/2.0. global tune.ssl.default-dh-param 1024 defaults timeout connect 10000ms timeout client 60000ms timeout server 60000ms frontend fe_http mode http bind *:80 # Redirect to https redirect scheme https code 301 frontend fe_https mode tcp bind *:443 ssl no-sslv3 crt mydomain.pem ciphers TLSv1.2 alpn h2,http/1.1 default_backend be_http backend be_http mode tcp server domain 127.0.0.1:8080
Reverse proxying HTTP/2 from h2 to h2c
Here is the solution described again, maybe a little more conveniently: to fix this problem, I changed the following directive in the nginx site configuration (/etc/nginx/sites-available): proxy_set_header Connection $connection_upgrade; became proxy_set_header Connection $http_connection; For me this solved the problem.
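In context, the relevant proxy block might look like this sketch (the Kestrel address is a placeholder). Presumably the difference is that $http_connection forwards whatever Connection header the client sent, while the usual $connection_upgrade map collapses non-upgrade requests to "close", which can break Blazor's long-polling fallback:
location / {
    proxy_pass         http://127.0.0.1:5000;       # placeholder Kestrel address
    proxy_http_version 1.1;
    proxy_set_header   Upgrade    $http_upgrade;
    proxy_set_header   Connection $http_connection; # instead of $connection_upgrade
    proxy_set_header   Host       $host;
}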
I am currently running a webserver with ASP.NET Core 3.1 and a Blazor project. Recently, when upgrading to .NET 6.0, I encountered (even with a blank Blazor project) some problems with a websocket error message in the browser, but only when deployed on my webserver (see messages below). Locally (on Windows 11 x64, VS 22 Preview 4) there are no error messages... Webserver: Debian 10 x64, .NET 6.0 SDK installed, running on NGINX with websockets enabled (reverse proxy). Am I missing something, or is it a problem with the current state of .NET 6.0 and NGINX? I already tried to access the webpage locally on the debian server and the same error message occurs. Help would be much appreciated! Greetings! Error messages in order: Information: Normalizing '_blazor' to 'http://192.168.178.35/_blazor'.blazor.server.js:1 WebSocket connection to 'ws://192.168.178.35/_blazor?id=wnPt_fXa9H4Jpia530vPWQ' failed: Information: (WebSockets transport) There was an error with the transport. Error: Failed to start the transport 'WebSockets': Error: WebSocket failed to connect. The connection could not be found on the server, either the endpoint may not be a SignalR endpoint, the connection ID is not present on the server, or there is a proxy blocking WebSockets. If you have multiple servers check that sticky sessions are enabled. Warning: Failed to connect via WebSockets, using the Long Polling fallback transport. This may be due to a VPN or proxy blocking the connection. To troubleshoot this, visit https://aka.ms/blazor-server-using-fallback-long-polling.
.NET 6.0: new Blazor project throws Websocket error
I've got the solution!!! And it is working perfectly for me! Thanks to @elad and the contributors. I've done an extensive amount of testing (more than 2 MONTHS!) and never had a problem. I'll not disrespect the author by explaining what the snippet does, as it has already been described well enough, line by line. It took me long enough because there were open issues on the repo and I had to be sure. And now I am sure that those issues are workable with an understanding of the different components. After all, this is not magic! Have a look and let me know if you still have doubts/questions.
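For readers without the link: the heart of that approach is the Redis adapter, which bridges events between the pm2 workers. A rough sketch of its usual shape (socket.io 2.x-era package names; the details here are my assumptions, not the linked author's exact snippet):
// app.js -- start with: pm2 start app.js -i max
const server = require('http').createServer();
const io = require('socket.io')(server);
const redisAdapter = require('socket.io-redis');

// every worker publishes/subscribes through the same Redis instance,
// so an emit in one worker reaches sockets held by the others
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('ping', () => socket.emit('pong'));
});

server.listen(3000);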
I have been looking at various solutions, but when I put it all together, it looks very confusing. I am trying to implement pm2 cluster mode for my application, which has a socket.io implementation. Now, I understand the concept that statelessness is required in order to make my app work properly in cluster mode. And socket.io is NOT stateless. The confusion is: 1) Our friend Cam says that just implementing socket.io-redis would work fine when we spawn on the maximum number of CPUs. 2) While socket.io says, and I quote, Note: sticky-session is still needed when using the Redis adapter. For 1), according to my research, the Internet should disagree that it would work. Maybe Mr. Cam got lucky with websocket as the transport method and never had to deal with polling. But at the same time I think it should work, since the redis adapter is what we are using to make it stateless. INFO: It worked for me with websocket as the transport method, but I couldn't test it with polling. For 2), I think we can combine Mr. Joni's advice to run it with "pm2" IN "cluster" but on different ports. And then our beloved nginx's upstream group with ip_hash would give us kind of the same effect. Additionally, I want to make my application elastic, NOT just at the cluster level, but able to scale both up and out. What are the best practices, given that my application includes a socket.io implementation and session token management in redis? Am I missing something or am I totally wrong here? Which would be the best way to scale?
How to make socket.io work properly with pm2 cluster mode?
Add this to your Capfile: require 'capistrano/puma' It helped me.
I used an ansible script for the server setup: playbook.yml, Gemfile. And when I deployed my application to the server, I saw this in nginx/error.log: 2016/09/30 20:43:07 [crit] 1352#0: *1 connect() to unix:/home/deploy/applications/spa_backend/shared/tmp/sockets/puma.sock failed (2: No such file or directory) while connecting to upstream, client: *, server: , request: "GET / HTTP/1.1", upstream: "http://unix:/home/deploy/applications/spa_backend/shared/tmp/sockets/puma.sock:/", OS: Ubuntu 14.04.5
puma: puma.sock No such file or directory
It seems that the module ngx_http_proxy_module is not installed. Run nginx -V to view how nginx is configured. If it is configured with the option --without-http_proxy_module, then nginx doesn't have the proxy module and should be recompiled.
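A quick way to check, assuming nginx is on the PATH (note that nginx -V prints to stderr):
nginx -V 2>&1 | tr ' ' '\n' | grep proxy
# if this prints --without-http_proxy_module, rebuild nginx without that flag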
I'm having a problem with my nginx configuration. When I set the configuration like this: server { server_name redmine; listen 80; location / { proxy_pass http://172.16.0.70:33000; } } I get this error: nginx: [emerg] unknown directive "proxy_pass". My nginx version is nginx/1.8.0. Does anyone know what I'm missing or what I'm doing wrong? Thanks.
Nginx unknown directive "proxy_pass"
Finally I found the problem. I'm using the Nginx web server; when I changed the nginx config so that sendfile on; became sendfile off; my images were not corrupted anymore. So it's not a php or curl problem. Interesting article: http://technosophos.com/node/172
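For anyone hitting the same thing, the change is a single directive, typically in the http block of nginx.conf:
http {
    # serve files via read()/write() instead of the sendfile() syscall,
    # which avoids stale/truncated responses on some (virtualized) filesystems
    sendfile off;
    ...
}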
I'm trying to download a zip file using curl from one virtual host to another on the same server. The zip file contains *.php and *.jpg files. The problem is: sometimes the JPG files get corrupted, like this: Here is my code: $out = fopen(ABSPATH.'/templates/default.zip','w+'); $ch = curl_init(); curl_setopt($ch, CURLOPT_FILE, $out); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_URL, 'http://share.example.com/templates/default.zip'); curl_exec($ch); curl_close($ch); $zip = new ZipArchive; if ($zip->open(ABSPATH.'/templates/default.zip') === TRUE) { if($zip->extractTo(ABSPATH.'/templates')) { echo 'OK'; } $zip->close(); } //$zip->close(); I don't understand what happens to my jpg. I also tried using pclzip.lib.php, but no luck. How can I solve this problem? Thanks in advance
Corrupt image when extract from zip
You can create a new group for each virtual host and add www-data and the other granted users to it. Then set that group as the owner of your files (chown). By specifying an appropriate permission (like 775) you will be there.
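A sketch of that setup for one site ("site1" and "someuser" are placeholder names, the path follows the question):
sudo groupadd site1                          # one group per virtual host
sudo usermod -aG site1 www-data              # let the web server in
sudo usermod -aG site1 someuser              # ...and each granted sftp user
sudo chown -R someuser:site1 /website/public_html
sudo chmod -R 775 /website/public_html
sudo chmod g+s /website/public_html          # new files inherit the site1 group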
I currently set up a single user on my virtual host like this: sudo useradd -d /website/ -m user -s /usr/bin/rssh sudo chown root:root /website/ -R #Don't get why I need this part but doesn't work without! sudo chmod 755 /website/ sudo chown -R user:www-data /website/public_html sudo chmod 755 /website/public_html This works for user to add and edit folders and files within /website/public_html. I now want to be able to add other users with the ability to add and edit folders and files within /website/public_html. The issue is that if I get into using groups, add the users to the group www-data and change the chmod to 775, the users will then be able to edit other virtual hosts' websites, for example /website2/public_html. All users (as you can see above) can only access the server through sftp (-s /usr/bin/rssh). Users are also locked to their home directories with the help of settings from sshd_config†. Because of that, I suppose I could add all the users to the same group (www-data) and chmod 775 the directory, but is that safe? For example, here someone mentions that giving the virtual hosts 775 permissions may allow users to insert php scripts that could delete everything. But without it being 775, php can't create files either. †: Match user user ChrootDirectory /website/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no
How to securely have many to many users on virtual hosts
Shared libraries are .so (or in Windows .dll, or in OS X .dylib) files. All the code relating to the library is in this file, and it is referenced by programs using it at run-time. A program using a shared library only makes reference to the code that it uses in the shared library. Static libraries are .a (or in Windows .lib) files. All the code relating to the library is in this file, and it is directly linked into the program at compile time. A program using a static library takes copies of the code that it uses from the static library and makes it part of the program. [Windows also has .lib files which are used to reference .dll files, but they act the same way as the first one]. There are advantages and disadvantages to each method: Shared libraries reduce the amount of code that is duplicated in each program that makes use of the library, keeping the binaries small. They also allow you to replace the shared object with one that is functionally equivalent but may have added performance benefits, without needing to recompile the program that makes use of it. Shared libraries will, however, have a small additional cost for the execution of the functions, as well as a run-time loading cost, as all the symbols in the library need to be connected to the things they use. Additionally, shared libraries can be loaded into an application at run-time, which is the general mechanism for implementing binary plug-in systems. Static libraries increase the overall size of the binary, but they mean that you don't need to carry along a copy of the library that is being used. As the code is connected at compile time there are no additional run-time loading costs. The code is simply there. Personally, I prefer shared libraries, but use static libraries when needing to ensure that the binary does not have many external dependencies that may be difficult to meet, such as specific versions of the C++ standard library or specific versions of the Boost C++ library.
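For a concrete feel of the two project types, the classic gcc workflow looks like this (libfoo/foo.c/main.c are placeholders):
# static library: archive of object files, copied into the binary at link time
gcc -c foo.c -o foo.o
ar rcs libfoo.a foo.o
gcc main.c -L. -lfoo -o app_static

# shared library: position-independent code, resolved when the program starts
gcc -c -fPIC foo.c -o foo.o
gcc -shared foo.o -o libfoo.so
gcc main.c -L. -lfoo -o app_shared
LD_LIBRARY_PATH=. ./app_shared    # the loader must be able to find libfoo.so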
What is the difference between static and shared libraries? I use Eclipse and there are several project types, including Static Libraries and Shared Libraries. Does one have an advantage over the other?
What is the difference between static and dynamic modules in nginx? [duplicate]
This is happening most likely because your passenger user does not have permissions to run your application or your application itself is not starting up properly.
I have rvm, passenger, ruby 1.9.3 and nginx, but I now get this error: Cannot spawn application '/path/to/my/app': Could not read from the spawn server: Connection reset by peer (104) I have passenger_root set to the output of passenger-config --root and ruby-1.9.3-p125 for passenger_ruby. I did have to do rvmsudo passenger-install-nginx-module because passenger kept trying to install with 1.8.7 support rather than 1.9.3. I've even set spawn mode to conservative. Is there anything I'm missing?
Cannot spawn application
Try this:location ~ ^/(php_status|ping)$ { # access_log off; allow 127.0.0.1; allow MY_IP_ADRESS; deny all; include fastcgi_params; # This is important fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php5-fpm.sock; }
I've been using Nginx 1.2.1 for a while now, and because of security issues, I decided to upgrade to 1.9.2. The problem is: the php-fpm status page is now serving me a fully blank page. The HTTP response code says 200 OK, but content = 0 bytes. What I tried: Checking the Nginx user/group: it's www:www (as it was before). Checking the Php-FPM user/group: it's www:www (as it was before). During the aptitude upgrade, I chose to keep my config files. tail /var/log/nginx/error.log says: nothing. tail /var/log/nginx/mywebsite-error.log says: nothing. tail /var/log/php-fpm/php5-fpm.log says: nothing, except some process traces, but nothing relevant. I've been using this code before the upgrade, no problem: location ~ ^/(php_status|ping)$ { # access_log off; allow 127.0.0.1; allow MY_IP_ADRESS; deny all; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } Therefore, I tried using the syntax fastcgi_pass 127.0.0.1:9000; but that leads to a 502 from nginx and I don't think the issue is there. I'm running out of options... Thank you for your help.
PHP-FPM status page is blank after nginx update from 1.2.1 to 1.9.2