If you have a script with a name that is part of your public interface, then you need to start versioning that script explicitly and keeping old versions around for older clients, e.g. /assets/script.1.0.js, /assets/script.1.1.js, etc. The key part is that you keep the old versions around, and the code never changes without the name changing explicitly. The Rails asset pipeline can't do this for you, since it usually only keeps the very latest version of the script current. As with all public interfaces, you will need to spend more time managing this process than you would for an internal-only script.
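For the nginx side of that, a minimal sketch of serving the explicitly versioned files with long cache lifetimes - the root path is illustrative, not from the question:

```
# Serve explicitly versioned public scripts with long cache lifetimes.
# /assets/script.1.0.js, /assets/script.1.1.js, ... are real files kept on
# disk, so old clients keep working and new code always means a new name.
location ~ ^/assets/script\.\d+\.\d+\.js$ {
    root /var/www/myapp/public;   # hypothetical app root
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```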
I have a script that 3rd-party websites are using: /assets/script.js. For obvious reasons, I can't ask them to change the link every time I deploy to point to the latest fingerprinted version of the script. I got a few caching issues where users still see old versions of /script.js. Are there any ways to make the cache go away directly for script.js instead of script-9dc5afea3571ba2a883a72b0da0bb623.js?

More information: Rails on Passenger + Nginx. I'm looking for ways to serve the script.js file instead of the fingerprinted file and invalidate the cache on every deployment. I thought about adding ETags based on the deployment git revision, but have no idea how to do this. Nginx has no built-in ETag support; there are unsupported old third-party modules that do this. I can use add_header Etag="something" for this, but how do I get the git version in there? Any other ideas and options? Thanks!
3rd Party Script Caching in Rails 3.1
I've found one possible solution with the suggestion of a co-worker. I'm now passing the URI as a query parameter in nginx, so my config is now this:

```
location / {
    try_files $uri $uri/ /index.html?uri=$uri;
}
```

Then in my router configuration in VueJS:

```
const routes = [
  {
    path: '/',
    component: Landing,
    beforeEnter: (to, from, next) => {
      const { uri } = to.query;
      if (uri != null && uri != '/') {
        next(false);
        router.push(uri);
      } else {
        next();
      }
    }
  },
  ...
```

This seems to do the trick, although it looks a bit dodgy.
I'm trying to set up vue-router on my nginx server. The issue I'm having is that my route doesn't work if I enter the URL directly in the browser: myapp.com/mypath. I've tried the server configuration described in the vue-router docs as well as similar configurations suggested on Stack Overflow. My current nginx location configuration is as follows:

```
location / {
    try_files $uri $uri/ /index.html;
}

location /subscribe {
    proxy_pass http://127.0.0.1/subscribe; # express API app
}
```

All that does is redirect any path to my root component (path: /) and not /mypath. This does make sense, as location seems to only redirect to the index file. How can I redirect a direct link to myapp.com/mypath to the /mypath route in my VueJS app? Here is how my vue routes are set up now:

```
...
const routes = [
  { path: '/', component: Landing },
  { path: '/mypath', component: MyPath }
];

const router = new VueRouter({
  mode: 'history',
  routes
});

new Vue({
  el: '#app',
  router,
  render: h => h(App)
});
```
vue-router, nginx and direct link
Nginx is a web server and is concerned with web server stuff, not with how to run Python programs. uWSGI is an application server and knows how to speak WSGI with Python (and other languages now). Both Nginx and uWSGI speak the uwsgi protocol, which is an efficient protocol over UNIX sockets. Nginx deals with HTTP requests from / responses to the outside world (possibly load balancing, caching, etc.). Your Flask application deals with WSGI requests/responses. uWSGI knows how to start your application (possibly with multiprocessing and/or threading) and bridge the gap between HTTP and WSGI. There are other HTTP servers besides Nginx, and other WSGI servers besides uWSGI, but they all use the same workflow: the HTTP server passes to the WSGI server, which manages your application process and passes back to the HTTP server. This setup is known as a reverse proxy. It allows each tool to do what it's good at and not be concerned with the other parts of the process. There is nothing particularly inefficient about it until you get to truly massive scales.
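A minimal sketch of that wiring, assuming a Flask app object in a hypothetical myapp.py and a socket path of /tmp/uwsgi.sock (both illustrative):

```
# nginx: terminate HTTP and hand requests to uWSGI over a UNIX socket
server {
    listen 80;
    location / {
        include uwsgi_params;              # standard WSGI request variables
        uwsgi_pass unix:/tmp/uwsgi.sock;   # speak the uwsgi protocol to the app server
    }
}
```

```
; uWSGI: start the Flask app and listen on the socket nginx points at
[uwsgi]
module = myapp:app        ; hypothetical module:callable
master = true
processes = 4             ; illustrative worker count
socket = /tmp/uwsgi.sock
chmod-socket = 660
vacuum = true             ; remove the socket file on exit
```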
So from the Python/Flask docs, they both recommend not running the Flask web server as the production web server, which makes sense. My question is: am I then able to run my Flask application on top of an Nginx server? Why do all the guides on the internet recommend wrapping Flask in uWSGI, Tornado, or some other WSGI server? What does it mean for something to be WSGI? Isn't Flask WSGI-compliant? I am particularly lost because here, the first response states: "Apache and Nginx are both HTTP servers. They can serve static files like (.jpg and .html files) or dynamic pages (like a Wordpress blog or forum written in a language like PHP or Python)." However this post states: "Nginx is a web server. It serves static files, however it cannot execute and host Python applications. uWSGI fills that gap." It just seems inefficient for my application to be handled by a server (ex: uWSGI) and then another server (ex: Nginx).
Why does running Flask with Nginx require a WSGI wrapper?
The best way (IMHO) is using Apache + mod_wsgi. Both uWSGI and Gunicorn are not Windows-friendly (albeit uWSGI has Cygwin support, so you can eventually try it [if you are brave enough]). Another approach would be installing a Linux VM (VirtualBox, VMware...) on the Windows machine to host your application and proxying requests from the local IIS to the virtual system.
I have a Django project and have installed the nginx server. I want to run nginx along with Django on a Windows machine. I have tried a few blogs (Nginx Django uWSGI), but all of them required uWSGI, and uWSGI does not install on Windows and gives a "uname" error. Is there any way to install and run nginx and Django on Windows? Any pointers will be very helpful. Thanks.
How to run django with nginx on a windows machine?
You should read http://wiki.nginx.org/WordPress, e.g.

```
location /blog {
    try_files $uri $uri/ /blog/index.php?$args;
}

location ~ \.php$ {
    fastcgi_split_path_info ^(/blog)(/.*)$;
}
```
I'm migrating from Apache 2 to nginx. I can't figure out how to rewrite these rewrite rules for WordPress. This is actually my configuration file:

```
server {
    listen 80;
    root /usr/share/nginx/blog.com/public_html;
    index index.html index.htm index.php;
    server_name blog.com www.blog.com;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }

    location ~ .php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME /usr/share/nginx/blog.com/public_html$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```

I'm using php5-fpm. And this is the rule I would like to add:

```
# BEGIN WordPress
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
# END WordPress
```

Can you please help me? Thanks :)
From apache to nginx: wordpress rewrite rule
I suggest you try supervisord in this case: http://supervisord.org/

Edit: Here is a dockerized example of httpd plus an SSH daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
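A minimal sketch of the idea for the Dockerfile in the question, assuming nginx and the gulp watcher are the two processes to keep alive - program names and paths are illustrative:

```
; supervisord.conf - run both long-lived processes under one supervisor
[supervisord]
nodaemon=true              ; keep supervisord in the foreground as PID 1

[program:nginx]
command=nginx -g "daemon off;"

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes
```

The Dockerfile then runs supervisord as its single CMD, e.g. CMD ["supervisord", "-c", "/etc/supervisord.conf"], instead of the blocking RUN/CMD pair.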
I have to execute two commands in the Dockerfile, but both of these commands attach to the terminal and block the execution of the next one.

Dockerfile:

```
FROM sinet/nginx-node:latest

RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:[email protected]/joaocromg/front-web-alferes.git

WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root

COPY default.conf /etc/nginx/conf.d/

RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
```

Does someone know how I can solve this?
How to run two commands in a Dockerfile?
You should be able to create an nginx-app.conf file in the same directory as your app.yaml file. There is an example of using the nginx configuration file in a Flex environment located here: https://github.com/GoogleCloudPlatform/getting-started-php/tree/master/4-auth. This same file is referenced in Google's documentation here: https://cloud.google.com/appengine/docs/flexible/php/runtime#customizing_nginx. Once you have that file created, you should be able to add any property you need and then rebuild your project to see the changes take effect.
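For the 413 in the question, a minimal sketch of such a file, assuming the runtime merges it into the generated nginx configuration as the linked docs describe:

```
# nginx-app.conf - merged into the App Engine Flex nginx configuration.
# Raise the request-body limit so large audio uploads stop returning 413.
client_max_body_size 80M;
```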
How can I edit the Google App Engine NGINX configuration? There doesn't seem to be much support in the Google docs regarding the NGINX configuration for apps running in the Google App Engine flexible environment. My app is running fine, but I get this 413 error when I try to upload an audio file (.wav or .mp3):

413 Request Entity Too Large -- nginx

My app is running Django (Python 3), with Cloud Postgres SQL and Cloud Storage enabled. I researched the error, and it seems I can set an nginx.config file so that it includes "client_max_body_size 80M" - but like I said, there is no documentation regarding how to manually configure NGINX on deploy. Any suggestions?
How can I edit the NGINX configuration on Google App Engine flexible environment?
Inside a Docker container, localhost and 127.0.0.1 refer to the container itself. In order to access the host machine running dockerd from your container, you must refer to the host by its public hostname/IP, as if it were another machine on the network.
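A minimal sketch for the config in the question, assuming the Windows host's LAN IP is 192.168.1.5 (an illustrative address; recent Docker for Windows/Mac builds also expose the host as host.docker.internal):

```
upstream mysite {
    # Point at the Docker host's address, not 127.0.0.1 (which is the container).
    server 192.168.1.5:8090;
    # server host.docker.internal:8090;  # alternative on recent Docker Desktop
}
```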
Need help. I have been trying to find a solution to this issue and could not see an answer, or rather I have not come across any. I have a Docker container with NGINX acting as a reverse proxy. Docker for Windows version 1.12.5 (9503).

```
upstream mysite {
    server 127.0.0.1:8090;
    #server localhost:8090; (have also tried this option)
}

server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_pass http://mysite;
    }
}
```

In the above code, localhost:8090 is the URL of a website that is hosted on IIS on my host machine. When I access the URL through NGINX, I get the following error:

```
2016/12/27 08:11:57 [error] 6#6: *4 no live upstreams while connecting to upstream, client: 172.17.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "http://googlesite/", host: "localhost"
172.17.0.1 - - [27/Dec/2016:08:11:57 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:50.0) Gecko/20100101 Firefox/50.0" "-"
```

I tried to access the URL on the host machine (a simple single-page HTML site hosted on IIS, with anonymous access granted to all):

curl localhost:8090

and got the following error:

curl: (7) Failed to connect to localhost port 8090: Connection refused

I am new to Docker and NGINX. I would like to know if it is possible to access URLs on the host machine, and if yes, where am I going wrong? The same configuration works if I use google.co.in instead of 127.0.0.1:8090. Thanks.
curl: (7) Failed to connect to localhost port 8090: Connection refused
You can have multiple listen directives per server:

```
server {
    listen 5005 ssl;
    listen 6006 ssl;
    server_name ;
    ssl_certificate ;
    ssl_certificate_key ;

    location /tags.txt {
        add_header 'Access-Control-Allow-Origin' '*';
    }
}
```
I am new to nginx. I am having trouble with my setup: I want my server to run with multiple ports open to the public. For example:

```
server {
    listen 443 ssl;
    server_name ;
    ssl_certificate ;
    ssl_certificate_key ;

    location /tags.txt {
        add_header 'Access-Control-Allow-Origin' '*';
    }
}
```

With the above setup I am now able to access it perfectly. But what if I have http://localhost:6006 and http://localhost:5005 - multiple ports on my localhost - and I want to publish them? I tried to access them using https - mydomainname : port 6006 and https - mydomainname : port 5005, but it fails. Should I make a setup for another port? Like for port 6006:

```
server {
    listen 6006 ssl;
    server_name ;
    ssl_certificate ;
    ssl_certificate_key ;

    location /tags.txt {
        add_header 'Access-Control-Allow-Origin' '*';
        proxy_pass http://localhost:6006;
    }
}
```

and port 5005:

```
server {
    listen 5005 ssl;
    server_name ;
    ssl_certificate ;
    ssl_certificate_key ;

    location /tags.txt {
        add_header 'Access-Control-Allow-Origin' '*';
        proxy_pass http://localhost:5005;
    }
}
```

How do I fix it?
NGINX: How to set up multiple ports in one server or domain name?
Basically, connections are established so that requests can be made over them. So, for instance, an endpoint for a given key may accept 5 connections per hour from a given IP address. But that doesn't mean only 5 requests can be made - it can be many more, as long as the connection is not closed after each request (from HTTP/1.1 onward, connections are kept alive by default).

E.g. an endpoint accepts 5 connections and 10 requests from a given IP address. If a connection is established for every request, only 5 requests overall can be made. If connections are kept alive, a single client may make all the requests. If there are 5 clients and each establishes a connection and keeps it alive, each can make roughly 2 requests - though one client could make all of them if it is fast enough.
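A minimal sketch of the two modules side by side, with illustrative zone names and limits:

```
http {
    # At most 5 simultaneous connections per client IP...
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
    # ...and at most 10 requests per second per client IP, no matter how
    # those requests are spread across connections.
    limit_req_zone $binary_remote_addr zone=perip_req:10m rate=10r/s;

    server {
        location / {
            limit_conn perip_conn 5;
            limit_req zone=perip_req burst=20;
        }
    }
}
```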
While configuring my nginx, I found two modules: ngx_http_limit_conn_module and ngx_http_limit_req_module. One is for limiting connections per defined key, and one for limiting requests. My question is: what is the relationship (and difference) between an HTTP connection and a request? It seems that multiple HTTP requests can use one common HTTP connection - what is the principle behind this?
What is the relationship between an HTTP connection and a request?
Maybe I am too late. As a complement to your own answer, there is a solution that avoids having to add the nginx user to the node group. Create a directory only for the socket file, assign it to the node user and the www-data group (or whatever group nginx runs as), and set the group-id bit (SGID) on that directory:

```
mkdir -p /var/lib/yourapp/socket
chown nodeuser:nginxgroup /var/lib/yourapp/socket
chmod g+rxs /var/lib/yourapp/socket
```

All files created inside this directory will automatically be owned by the nginxgroup group.
I am running an nginx server and a Node Express web server, using daemontools, set up to communicate over Unix domain sockets. There are just a few problems:

- The socket file stays present on shutdown, so I have to delete it when bringing the server back up, otherwise I get the EADDRINUSE error.
- The nginx server runs as the nginx user, and the node server runs as the node user.
- The socket file gets created by Express when the server starts up, and umask sets the permissions on the socket file to 755.
- The setuidgid application sets the group to the default group of the user - both the node username in this case.
- The deployment scripts for the application and daemontools' run script execute before the node server instance gets launched, so there's no way to set the permissions on the file, as it has to get recreated during the launch process.

If I chgrp and chmod g+w the socket file, everything works fine. Is there a way to set this up so that the node application's socket file gets generated with the correct permissions for nginx to be able to write to it, without compromising the security independence of one application or the other? I would even be okay with adding nginx to the node user's group, if there was still a way to set the permissions on the socket file so that it would be group-writable.
Node Express Unix Domain Socket Permissions
Nginx adds its header alongside the origin server's, so you will have both:

```
cache-control: public, max-age=10
cache-control: public, max-age=60
```

and the origin header will win over the nginx header. The solution? Use nginx (e.g. v1.4.3) built with the headers-more module, which provides more_set_headers and more_clear_headers to replace or clear the headers coming from the origin. You can download the module from here. Here is how to download nginx 1.4.3 and how to install it. Here is how to use the directives.
When you use the add_header directive in nginx, the header is added to the response coming from the origin server. Say the origin server returns cache-control: public, max-age=60, but in the nginx reverse-proxy location you set something like:

add_header cache-control public, max-age=10

What does this do exactly? There are 2 different scenarios I can think of:

1) Nginx respects the cache-control header from the origin server and stores the content in its cache with an expiration of 60 secs, then passes on the response with an overwritten header, causing the client to store the resource in its cache with an expiration of 10 secs.

or..

2) Nginx overwrites the response headers first and then interprets them. It stores the resource with an expiration of 10 secs and passes the response to the client, which also caches it with an expiration of 10 secs.
Nginx add_header and cache control
No language. It is primarily designed as a static-file and front-end proxy server. The server itself is written in C and supports C-compatible plug-ins, but the plug-in architecture is heavily geared towards interfacing with other servers on the back end, not towards adding, e.g., PHP support.
Which server-side languages does the nginx web server support? For example, Apache Tomcat is for Java and WAMP is for PHP. Secondly, it is installed on my PC; I need to know how I can access it via HTTP and in which directory I need to put my applications.
Which server-side languages does the nginx web server support?
After I checked "How to set up Apache2 and PHP-FPM via unix socket?", I changed my docker-compose.yml to:

```
version: '2'
services:
    web:
        image: nginx:latest
        ports:
            - "8018:80"
        volumes:
            - ./code:/code
            - ./site.conf:/etc/nginx/conf.d/default.conf
            - /private/var/log/nginx:/var/log/nginx
            - "phpsocket:/var/run"
        networks:
            - code-network
    php:
        image: php:fpm
        volumes:
            - ./code:/code
            - ./php-fpm.conf:/usr/local/etc/php-fpm.conf
            - ./www.conf:/usr/local/etc/php-fpm.d/www.conf
            - ./zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
            - "phpsocket:/var/run"
        networks:
            - code-network
networks:
    code-network:
        driver: bridge
volumes:
    phpsocket:
```

and overrode zz-docker.conf with:

```
[global]
daemonize = no

[www]
listen = /var/run/php7-fpm.sock
listen.mode = 0666
```

Finally, when I visited http://localhost:8018, the phpinfo page showed up!
Here is my docker-compose.yml:

```
version: '2'
services:
    web:
        image: nginx:latest
        ports:
            - "8018:80"
        volumes:
            - ./code:/code
            - ./site.conf:/etc/nginx/conf.d/default.conf
            - /private/var/log/nginx:/var/log/nginx
            - /private/var/run/php7-fpm.sock:/var/run/php7-fpm.sock
        networks:
            - code-network
    php:
        image: php:fpm
        volumes:
            - ./code:/code
            - ./php-fpm.conf:/usr/local/etc/php-fpm.conf
            - ./www.conf:/usr/local/etc/php-fpm.d/www.conf
            - /private/var/run/php7-fpm.sock:/var/run/php7-fpm.sock
        networks:
            - code-network
networks:
    code-network:
        driver: bridge
```

And in site.conf I write this:

fastcgi_pass unix:/var/run/php7-fpm.sock;

I also changed the listen address to listen = /var/run/php7-fpm.sock in www.conf. On my Mac, there is a file named php7-fpm.sock in the folder /private/var/run with mode 666. After I ran docker-compose up -d, the containers were running successfully. But when I visited http://localhost:8018, it returned 502. After I checked the nginx error log, I found this:

```
2017/11/01 13:08:39 [error] 6#6: *1 connect() to unix:/var/run/php7-fpm.sock failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7-fpm.sock:", host: "localhost:8018"
```

Btw, before trying unix socket mode, I succeeded in visiting http://localhost:8018 with TCP/IP mode.
how to connect nginx to php-fpm using unix socket in docker
Setting it to 64 makes the error go away. The message was probably reporting the current value (32), which was not enough - not suggesting 32 as the value to set.
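A minimal sketch of the change in the http block of nginx.conf (64 is the next power of two above the reported value; setups with long domain names sometimes need 128):

```
http {
    # The error message reports the *current* size (32); raise it, don't repeat it.
    server_names_hash_bucket_size 64;
}
```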
When I run nginx -t I get the error:

nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32

Then I go and update the nginx.conf file, setting the line server_names_hash_bucket_size to 32. Then I run service nginx reload, then run nginx -t again, and I am getting the same error. Why does it not take effect, and how can I fix this?
Nginx: you should increase server_names_hash_bucket_size: 32 - did this but no effect
Make one server block a default server and give the other server block the one true server_name:

```
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    ssl_certificate ...;
    ssl_certificate_key ...;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate ...;
    ssl_certificate_key ...;
    ...
}
```

The default server for https requires a valid certificate. Assuming you have a wildcard certificate, most of the ssl_ statements can be moved into the outer block and be inherited by both server blocks. For example:

```
ssl_certificate ...;
ssl_certificate_key ...;
ssl_...;

server {
    listen 80 default_server;
    listen 443 ssl default_server;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ...
}
```

See this document for more.
I have my nginx config below. I'm trying to redirect everything to https://www regardless of what comes in, for example http://example.com, http://www.example.com or https://example.com. I've looked at numerous topics on SO and tried a couple of things, but am still stumped - I can't ever get https://example.com to redirect to the https://www pattern!?

```
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_session_timeout 30m;
    ssl_session_cache shared:SSL:10m;
    ssl_buffer_size 8k;
    add_header Strict-Transport-Security max-age=31536000;

    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
```
Nginx: Redirect non-www to www https
This should work:

```
server {
    listen 80;
    server_name vmdk;

    access_log /var/log/nginx/localhost.access.log;

    root /srv/vmdk/public;

    location / {
        deny all; # deny by default

        location ~ "\.(vmdk|vmx)$" {
            allow all; # allow vm disk images, etc.
        }

        location ~ "/$" {
            allow all;
            autoindex on; # allow listing directory contents
        }
    }
}
```
I need to serve ONLY .vmdk and .vmx files on a virtual server, no matter what directory level. This is my current configuration (right now it serves everything, but if I uncomment the lower part it serves nothing):

```
server {
    listen 80;
    server_name vmdk;

    access_log /var/log/nginx/localhost.access.log;

    root /srv/vmdk/public;

    location ~ (./?|\vmdk|\vmx)$ {
        autoindex on;
    }

    #location / {
    #    deny all;
    #}
}
```

How do I achieve this? If it is easier to achieve with Apache, then an example Apache configuration is also appreciated.
NGINX: serve ONLY specific file types in all directories
```
server {
    listen 80;                    ## Listen on port 80 ##
    server_name example.com;      ## Domain Name ##
    index index.html index.php;   ## Set the index for site to use ##
    charset utf-8;                ## Set the charset ##

    location ^~ /forum/showPost {
        rewrite ^/forum/showPost(.*)$ $1 permanent;
    }

    location ^~ /business/showDetails {
        rewrite ^(.*)business/showDetails(.*)$ $1classifieds$2 permanent;
    }
}
```
I have redesigned a website and changed the URL formats too. Now I need to redirect the old URLs to the new ones. Here is my old URL:

http://www.example.com/forum/showPost/2556/Urgent-Respose

The new URL will be:

http://www.example.com/2556/Urgent-Respose

How do I redirect to the new URL using nginx by removing /forum/showPost from the URL?

Edited: Also this URL:

http://www.tikshare.com/business/showDetails/1/Pulkit-Sharma-and-Associates,-Chartered-Accountants-in-Bangalore

New URL:

http://www.tikshare.com/classifieds/1/Pulkit-Sharma-and-Associates,-Chartered-Accountants-in-Bangalore

The first link is a complete removal, whereas this one replaces business/showDetails with classifieds.
NGINX: remove part of URL permanently
```
location ~* ^/just_test/(.+)$ {
    root /some/path/to/web/root;
    try_files /just_test/1/$1 /just_test/2/$1 /just_test/3/$1 @missing;
}
```
Because of the way our git repos are set up, I have some static content that might be in one directory and other content that might be in another directory. How can I ask nginx to search in two places for a static file like a stylesheet? I originally thought that try_files had my answer, but I can't seem to get it to work:

try_files $uri /dir1/static/$uri /dir2/static/$uri @missing;
Nginx - Search for static content in multiple directories?
There are a couple of ways to achieve this. What you are referring to is usually called service discovery and comes in many forms. I'll describe two of them that I have used before.

The first and simplest one (which works fine for single servers or for discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.

Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
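A minimal sketch of the first approach in Compose, with an illustrative service name and VIRTUAL_HOST value:

```
# docker-compose.yml - nginx-proxy watches the Docker socket and regenerates
# its upstream config whenever containers with VIRTUAL_HOST start or stop.
version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  api:
    image: mycompany/api        # hypothetical application image
    environment:
      - VIRTUAL_HOST=api.local  # all containers sharing this host name are
                                # grouped into one upstream block
```

Scaling with docker-compose scale api=3 then simply adds more servers to that generated upstream.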
I'm running Docker Compose (v2) and have a Node service (the website) and a Python-based API deployed, with nginx sitting in front of them. One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers which Docker makes available. However, the problem is that I want the nginx upstream config to be dynamic, e.g. if I add another Docker container, it simply appends the location of the container to the list of IPs in the upstream block. My idea was to create a script which will automatically append the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
Automatically append docker container to upstream config of nginx load balancer
Basically, you do the same thing you did to get your first application running, minus the Nginx installation. So, however you got your Unicorn instance for your first application running, do it again for your next application. You can then just add another server block to your Nginx config with an upstream that points to that new Unicorn instance. One Nginx running for the entire machine will do fine, with one Unicorn running per application. Hope this helps some. Here is a sample of the additional server block you would need to add for Nginx to serve additional applications:

```
upstream unicorn_app_x {
    server unix:/path/to/unicorn/socket/or/http/url/here/unicorn.sock;
}

server {
    listen 127.0.0.1:80;
    server_name mysitehere.com aliasfor.mysitehere.com;
    root /path/to/rails/app/public;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://unicorn_app_x;
            break;
        }
    }
}
```
How can I host multiple Rails apps with nginx and Unicorn? I currently have one site up and running thanks to "Deploying to a VPS". I have searched, but I need a step-by-step guide to get this working; the results I found are not explained well enough to help me understand how to accomplish it.
How can I host multiple Rails apps with nginx and Unicorn?
Three ways to do this, really.

1. Create an alias in .bashrc to always run composer with the corresponding version. Something like:

alias ncomposer='/path/to/php /path/to/composer.phar'

2. Specify the path to the PHP version inside composer.phar itself. This is specified at the start of the file: #!/path/to/php. Then composer should run with composer.phar. NB! The line will disappear upon self-update, so it's not a reliable solution.

3. Move the path with the newest PHP version up. If you place C:\nginx\php first, it should be used by default when using composer.

Hope this helps!
I already use WAMP 2.5 with PHP 5.5.12, and with Composer. That PHP is at: C:\wamp\bin\php\php5.5.12. For a new project, I need to use nginx and have installed PHP 7. That PHP is at: C:\nginx\php. Now, using GitBash MINGW32, I tried to install Laravel 5.3 using Composer create-project, but it said:

[InvalidArgumentException]
Could not find package laravel/laravel with version 5.3 in a version installable using your PHP version 5.5.12.

I have already put both C:\wamp\bin\php\php5.5.12 and C:\nginx\php on the Windows system PATH variable. How do I change the PHP version used by Composer?
Change PHP version used by Composer on Windows
Solved! The problem was that a long, long time ago I installed pow (a super simple automated Rails server which runs applications on an app_name.local domain). And this beast left behind a LaunchAgent script which updates pf to forward port 80 to the pow port.
In my current job we have a development environment made with docker-compose. One container is nginx, which provides routing to the other containers. Everything seems fine and works for my colleagues on Windows and OS X. But on my system (OS X El Capitan), there is a problem with accessing the nginx container on port 80. Here is the setup of the container from docker-compose.yml:

```
nginx:
  build: ./dockerbuild/nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  ... and more
```

In ./dockerbuild/nginx there is nothing special, just an nginx config as we know it from everywhere. When I run everything with docker-compose create and docker-compose start, docker ps gives me:

```
3b296c1e4775        docker_nginx        "nginx -g 'daemon off"   About an hour ago   Up 47 minutes       0.0.0.0:80->80/tcp, 443/tcp   docker_nginx_1
```

But when I try to access it, for example via curl, I get an error:

curl: (7) Failed to connect to localhost port 80: Connection refused

If I run the container on port 81, everything works fine. The port is really bound by docker:

```
22:47 $ sudo lsof -i -n -P | grep TCP
...
com.docke 14718 schovi   38u  IPv4 0x6e9c93c51ec4b617      0t0    TCP *:80 (LISTEN)
...
```

The firewall in OS X is turned off and I have no other security software.
Can't access docker container on port 80 on OSX
According to the nginx documentation (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver): "Name servers are queried in a round-robin fashion." So they use round-robin.
How does nginx pick a resolver if you define several, like:

resolver 108.x.x.x 120.x.x.x 19.x.x.x valid=30s;

Is it in a round-robin fashion, or is there some failover logic in there?
How does nginx pick a resolver when there are multiple defined?
I just had to start PM2 with bin/www instead of app.js - Express generator and everything...
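A minimal sketch, assuming an express-generator project layout where bin/www is the script that actually calls listen() and app.js only exports the app (the process name is illustrative):

```
# app.js only exports the Express app; bin/www binds the HTTP server,
# so that's the script PM2 has to run.
pm2 start ./bin/www --name my-app
```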
I've been trying to deploy my Node project on a brand new DO droplet, but I'm having some problems with PM2. My steps are as follows:

- Node came installed on the droplet image (Ubuntu, Node v4.4.4)
- Installed PM2 globally
- Set up Nginx to reverse proxy 127.0.0.1:3000
- Cloned my project and did npm install

All I get is Nginx complaining about a 502 Bad Gateway. If I look at the Nginx error.log I get this:

connect() failed (111: Connection refused) while connecting to upstream, client: client.ip, server: my.server, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "my.server"

PM2 doesn't have much to say about anything: nothing in pm2 logs, and the status is online. I tried skipping PM2 and just doing npm start, which worked perfectly. I also tried setting up a dummy hello-world application instead and using that with PM2 - it also worked. So this is currently where I'm at:

- My project + PM2: doesn't work.
- My project without PM2: works.
- Hello World app + PM2: works.

I'm not really sure where to go from here. I could just skip PM2 and use node, but I do want the features of PM2. Any ideas?
PM2 and Nginx: 502 Bad Gateway
The Nginx open source version supports the hash directive, which may work similarly (though not exactly the same) to the sticky-session mechanism provided by the commercial version:

"The generic hash method: the server to which a request is sent is determined from a user-defined key which may be a text, variable, or their combination. For example, the key may be a source IP and port, or URI:"

```
upstream backend {
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}
```

https://www.nginx.com/resources/admin-guide/load-balancer/

So how do you use all 4 octets of an IPv4 address with the hash method? Let's find how to get the client IP from the Embedded Variables section (http://nginx.org/en/docs/http/ngx_http_core_module.html#variables):

$remote_addr - client address

So the code looks like:

```
upstream backend {
    hash $remote_addr consistent;
    server backend1.example.com;
    server backend2.example.com;
}
```

UPDATE: If you take a look at the Stream module (TCP proxy), the very first example shows exactly the same approach:

```
upstream backend {
    hash $remote_addr consistent;

    server backend1.example.com:12345 weight=5;
    server backend2.example.com:12345;
    server unix:/tmp/backend3;
}

server {
    listen 12346;
    proxy_pass backend;
}
```
I'm currently running two back-end servers on my network and load balancing with Nginx on Windows. I am load testing the system at the moment; however, all of my traffic is directed at one server. This is because the ip_hash algorithm sorts traffic by the first 3 octets, i.e. 111.222.333.XXX. This is a problem because all of the traffic I am aiming at the server has the same base address (the same first 3 octets), therefore none of my traffic is going to the other server. Does anyone know a way to patch or change the ip_hash algorithm to hash on all 4 octets? Thanks
Patching Nginx to ip_hash 4 octets instead of 3
There's no standard "Linux config file" syntax - it is totally dependent on what program is reading the file and how that program parses it. Nginx recognizes only lines beginning with # as comments: http://nginx.org/en/docs/beginners_guide.html#conf_structure
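So for nginx, the only option is a # at the start of each line; most editors can prefix a whole selection in one keystroke. For example:

```
# location /old {
#     try_files $uri $uri/ /old/index.php?$args;
# }
```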
Given: code in /etc/nginx/sites-available/mySite.conf. We know that # code means it will get ignored. But what if I want to comment out:

code
code
code
code

without using #? I have seen people use ''' in Python to comment out multiple lines. Does this work for Linux config files as well?
How to comment out multiple lines in a Linux config file?
You either need a slash before it or an escaped slash:

```
location ~ (category/(?!paid)) { .. }

location ~ (category\/(?!paid)) { .. }
```
I'm trying to match /category/anything, except /category/paid, in an nginx location. I have the following regex, but it's not working. Google tells me that I can use lookaheads in nginx. Am I doing something wrong?

location ^/category(?!/paid)/ { }
lookahead regex in nginx location
Try:

```
upstream app1_server {
    server app1:8501;
}

upstream app2_server {
    server app2:8501;
}

server {
    listen 80;
    listen [::]:80;
    server_name trace.devops.rightsense.ai;

    location / {
        proxy_pass http://app1_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_http_version 1.1;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name demo.devops.rightsense.ai;

    location / {
        proxy_pass http://app2_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_http_version 1.1;
    }
}
```

Each domain listens on the same port and reverse-proxies to the local network on the ports you specify. To differentiate between hosts, specify the server_name field.
I installed Nginx on my server (my server uses WHM), and this server has two accounts. Each account will run a NextJS site, and each account has its own domain.

- Site 1 will run on port 3000
- Site 2 will run on port 3004

What I want to do is: when I access domain1, I see the content of my site1 in NextJS that runs on localhost:3000; and when I access domain2, I see the content of my site2 in NextJS running on localhost:3004. I tried an Nginx configuration for site1, but when I accessed it I saw a cPanel screen, and the URL was dominio1/cgi-sys/defaultwebpage.cgi. Here's the Nginx configuration I tried:

```
server {
    listen 80;
    server_name computadorsolidario.tec.br www.computadorsolidario.tec.br;

    location / {
        proxy_pass http://localhost:3004;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
```

So how do I write this configuration so that nginx has this behavior? And am I changing the correct file? Note: I created the configuration file in /etc/nginx/conf.d/users/domain1/domio1.conf, and within /etc/nginx/conf.d/users there are several configuration files with the names of the accounts on the server (they are already implemented).
Nginx - Redirect domain to localhost:port content
In principle, the NGINX ingress controller is indeed scalable - it pulls its entire configuration from the Kubernetes API server and is in itself basically stateless.

In practice, this depends very much on how your ingress controller is set up. First of all, the ingress controller will not auto-scale by itself. If you have deployed it using a Deployment controller, you can use horizontal pod autoscaling as described in the documentation. If you have deployed it using a DaemonSet, the ingress controller will automatically scale up and down with your cluster (maybe even automatically, if you're using the cluster autoscaler).

In both scenarios, you're going to need a Service definition (possibly of type NodePort or LoadBalancer, to allow for external traffic) that matches all pods created by the Deployment/DaemonSet, to distribute traffic among them.
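A minimal sketch of the Deployment + HPA route, with illustrative names, namespace, and thresholds:

```
# Scale the ingress controller Deployment on CPU load.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller      # hypothetical Deployment name
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```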
When the Ingress Nginx controller reaches its full capacity, does it auto-scale? Is the Kubernetes Ingress even scalable?
Can kubernetes Ingress Nginx be autoscaled?
I believe you do need to move the SSL termination to the ingress controller, because I am having the same issue and appear to be stuck in a permanent-redirect loop. Traffic comes into the NLB on 443, is terminated, and is sent to the backend instances over port 80. The ingress sees the traffic on port 80, redirects to https://, and thus begins the infinite loop.
I've tried the following to get HTTP to redirect to HTTPS. I'm not sure where I'm going wrong.

The ingress-nginx object:

```
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http
```

The my-ingress object:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - app.example.com
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
```

I get a 308 Permanent Redirect on HTTP and HTTPS. I guess this makes sense, as the NLB is performing the SSL termination and therefore forwarding HTTP to the Nginx service? I guess I would need to move the SSL termination from the NLB to the Nginx service? Thanks
How to redirect HTTP to HTTPS with Nginx Ingress Controller, AWS NLB and TLS certificate managed by AWS Certificate Manager?
It should be as simple as adding http2 at the end of your listen directive. Example:

```
server {
    listen 80 http2;
```

However, keep in mind that most browsers do not support unencrypted HTTP/2, and so will still be served content over HTTP/1.1.
Is there a way to enable h2c, aka HTTP/2 cleartext, in Nginx 1.9.5 onward? I've tried using h2 over TLS on https://chronic101.xyz and it works; however, I would like to implement h2c on port 80 as well. Thanks, chrone
How to enable h2c in Nginx?
It seems that my problem was that I did not create the CA properly and wasn't signing keys the right way. A CA cert needs to be signed, and if you pretend to be a top-level CA, you self-sign your CA cert:

```
openssl req -new -newkey rsa:2048 -keyout ca.key -out ca.pem
openssl ca -create_serial -out cacert.pem -days 365 -keyfile ca.key -selfsign -infiles ca.pem
```

Then you use the ca command to sign requests:

```
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
openssl ca -out server.pem -infiles server.csr
```
OK, I am trying to use client certificates to authenticate a Python client to an Nginx server. Here is what I have tried so far.

Created a local CA:

```
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
```

Created a server key and certificate:

```
openssl genrsa -des3 -out server.key 1024
openssl rsa -in server.key -out server.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
```

Used a similar procedure to create a client key and certificate:

```
openssl genrsa -des3 -out client.key 1024
openssl rsa -in client.key -out client.key
openssl req -new -key client.key -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
```

Added these lines to my nginx config:

```
server {
    listen 443;
    ssl on;
    server_name dev.lightcloud.com;
    keepalive_timeout 70;
    access_log /usr/local/var/log/nginx/lightcloud.access.log;
    error_log /usr/local/var/log/nginx/lightcloud.error.log;
    ssl_certificate /Users/wombat/Lightcloud-Web/ssl/server.crt;
    ssl_certificate_key /Users/wombat/Lightcloud-Web/ssl/server.key;
    ssl_client_certificate /Users/wombat/Lightcloud-Web/ssl/ca.crt;
    ssl_verify_client on;

    location / {
        uwsgi_pass unix:///tmp/uwsgi.socket;
        include uwsgi_params;
    }
}
```

Created a PEM client file:

```
cat client.crt client.key ca.crt > client.pem
```

Created a test Python script:

```
import ssl
import http.client

context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.load_verify_locations("ca.crt")
context.load_cert_chain("client.pem")
conn = http.client.HTTPSConnection("localhost", context=context)
conn.set_debuglevel(3)
conn.putrequest('GET', '/')
conn.endheaders()
response = conn.getresponse()
print(response.read())
```

And now I get a "400 The SSL certificate error" from the server. What am I doing wrong?
Doing SSL client authentication in python
Your problem is in the uwsgi limit-post parameter. Look at the source. This variable can be overridden by other configs; for example, on Debian, the config from /usr/share/uwsgi/conf/default.ini is also loaded.
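A minimal sketch of the fix in the app's own uWSGI ini, assuming the distro default.ini is what is clamping the limit and the app config takes precedence (values are in bytes):

```
[uwsgi]
; Allow request bodies up to 20 MB; 0 would disable the check entirely.
; This must win over /usr/share/uwsgi/conf/default.ini, otherwise the
; distro default keeps rejecting large CONTENT_LENGTH values.
limit-post = 20971520
```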
Stack: Flask 0.10 + uWSGI 1.4.5 + nginx 1.2.3.

I can upload small files (<100k) through my application, but larger ones fail. The uwsgi log shows:

Invalid (too big) CONTENT_LENGTH. skip.

The nginx log does not show anything useful. I tried the following, without success:

- [nginx conf] client_max_body_size 0 or 20M
- [uwsgi conf] limit-post: 0 or 20000000
- [flask conf] MAX_CONTENT_LENGTH = 20000000

So my questions: Is there a conf somewhere else I can change? Is there a way of verifying the options in use at runtime for uwsgi/nginx?
Upload large file nginx + uwsgi
The difference is that in the case of uWSGI there is no "real" load balancing: the first free process will always respond, so this approach is way better than having nginx load-balance between multiple instances (this is obviously true only for local instances). What you need to take into account is the "thundering herd problem"; its implications are described here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html. Finally, all of the uWSGI features are multithread/multiprocess (and green-thread) aware, so the caching (for example) is shared by all processes.
I've noticed that you can start multiple processes within one uWSGI instance behind nginx:

uwsgi --processes 4 --socket /tmp/uwsgi.sock

Or you can start multiple uWSGI instances on different sockets and load-balance between them using nginx:

```
upstream my_servers {
    server unix:///tmp.uwsgi1.sock;
    server unix:///tmp.uwsgi2.sock;
    #...
}
```

What is the difference between these two strategies, and is one preferred over the other? How does load balancing done by uWSGI (in the first case) differ from load balancing done by nginx (in the second case)? nginx can front servers on multiple hosts. Can uWSGI do this within a single instance? Do certain uWSGI features only work within a single uWSGI process (i.e. shared memory/cache)? If so, it might be difficult to scale from the first approach to the second one...
Multiple server processes using nginx and uWSGI
If you used certbot, you will get these files: README, cert.pem, chain.pem, fullchain.pem, privkey.pem.

- ssl_certificate should point to fullchain.pem
- ssl_certificate_key should point to privkey.pem
- ssl_trusted_certificate should point to chain.pem

From what I see, the Porkbun-generated files are just renamed and map like this:

- fullchain.pem -> domain.cert.pem
- privkey.pem -> private.key.pem
- chain.pem -> intermediate.cert.pem
- cert.pem -> public.key.pem

So you would do this for the files given by Porkbun:

- ssl_certificate should point to domain.cert.pem
- ssl_certificate_key should point to private.key.pem
- ssl_trusted_certificate should point to intermediate.cert.pem

Basically, fullchain.pem is just cert.pem + chain.pem concatenated together. See here for more information: Generate CRT & KEY ssl files from Let's Encrypt from scratch.

Personally, I would NOT use their generated ones, because you would have to manually replace them every 90 days. Best if you use another option like certbot, which lets you renew automatically, or do it 'manually' via some cronjob. Good luck!
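Putting that mapping into config form - a minimal sketch, with illustrative paths and domain:

```
server {
    listen 443 ssl;
    server_name example.com;                                        # illustrative

    ssl_certificate         /etc/ssl/porkbun/domain.cert.pem;       # = fullchain
    ssl_certificate_key     /etc/ssl/porkbun/private.key.pem;       # = private key
    ssl_trusted_certificate /etc/ssl/porkbun/intermediate.cert.pem; # = chain, used for stapling
}
```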
I'm trying to figure out how to use the Porkbun Let's Encrypt files with Nginx. They have generated a zip file with the following files for me to use: domain.cert.pem, intermediate.cert.pem, private.key.pem, public.key.pem. From this site (https://wbxpress.net/install-porkbun-ssl-nginx-wordpress/) I've worked out that:

- ssl_certificate is domain.cert.pem
- ssl_certificate_key is private.key.pem

But for my needs I have to specify ssl_trusted_certificate as well. Can anybody point me in the right direction?
How to use Porkbun SSL Certificate Files with Nginx?
You have a small typo in your gunicorn.service file. Change it to:

WantedBy=multi-user.target

Also, you may want to change to:

Restart=always
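One likely follow-up: the [Install] section is only read when the unit is enabled, so after fixing the target the unit needs re-enabling for the boot symlink to be created:

```
sudo systemctl daemon-reload       # re-read the edited unit file
sudo systemctl enable gunicorn     # create the multi-user.target.wants symlink
sudo systemctl start gunicorn
```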
I'm running a Debian web server with nginx and gunicorn running a Django app. I've got everything up and running just fine, but after rebooting the server I get a 502 Bad Gateway error. I've traced the issue back to gunicorn being inactive after the reboot. If I start the service, the problem is fixed until I reboot the server again.

Starting the service:

systemctl start gunicorn.service

After the reboot, here is my gunicorn service status:

```
{username}@instance-3:~$ sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled)
   Active: inactive (dead)
```

Contents of my /etc/systemd/system/gunicorn.service file:

```
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User={username}
Group={username}
WorkingDirectory=/home/{username}/web/{projname}
ExecStart=/usr/local/bin/gunicorn {projname}.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi.user.target
```

Any ideas why the gunicorn service isn't starting after reboot?

Edit: Could the issue be that gunicorn.conf has a different dir in chdir and the exec than the working directory?

```
{username}@instance-3:~$ cat /etc/init/gunicorn.conf
description "Gunicorn application server handling {projname}"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
setuid {username}
setgid {username}
chdir /home/data-reporting/draco_reporting

exec {projname}/bin/gunicorn --workers 3 --bind unix:/home/{username}/data-reporting/{projname}/{projname}.sock {projname}.wsgi:application
```
gunicorn does not start after boot
You can't combine proxy_pass with try_files in the way that you have attempted. As the comment in your configuration describes, the try_files directive causes nginx to look for a file that matches the URI and then look for a directory that matches the URI; if it doesn't find either, it responds with a 404. You can read more about try_files in the nginx documentation. It's not clear from your question that you need to use try_files at all, so the simplest way to fix your configuration is to remove the try_files line.
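What the location block might look like after that fix - a sketch assuming the same upstream name as in the question:

```
location / {
    proxy_pass http://jetty;   # no $scheme/$request_uri needed; nginx forwards the URI
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}
```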
I have a Spring Boot + MVC app up and running on my server, bound to http://localhost:8000. There is an nginx proxy (or is it a reverse proxy? not sure about the name) that listens to the outside world on ports 80 and 443. The root (/) resolves correctly, but anything under it does not and results in a 404 error (/someControllerName/action, /images/, /css/). I have this as my configuration:

```
upstream jetty {
    server localhost:8000;
}

server {
    listen 80;
    server_name domain.com;
    return 301 http://www.domain.com$request_uri;
}

server {
    listen 443;
    server_name domain.com;
    ssl_certificate /etc/nginx/ssl/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;
    return 301 https://www.domain.com$request_uri;
}

server {
    listen 80 default_server;
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name www.domain.com localhost;

    #ssl on;
    ssl_certificate /etc/nginx/ssl/ssl-unified.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        proxy_pass $scheme://jetty/$request_uri;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        try_files $uri $uri/ =404;
    }
}
```

Any help is very much appreciated.
Running a Spring Boot app behind nginx
Turns out that the code above works absolutely perfectly; there was another problem in my variables YAML file.
I'm setting up an automated provisioning process for a web server using Ansible. For this, I have an array containing dictionaries with vhosts to set up:

```
vhosts:
  - name: 'vhost1'
    server_name: 'domain1.com'
  - name: 'vhost2'
    server_name: 'domain2.com'
```

I prepared a template with some generic nginx vhost configuration:

```
server {
    listen 80;
    server_name {{ item.server_name }};
    root /home/www/{{ item.name }}/htdocs;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}
```

Finally, I use the following task to copy the prepared template to the target host:

```
- name: Setup vhosts
  template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.name }}
  with_items: vhosts
```

The task iterates over the vhosts variable as expected. Unfortunately, Ansible does not pass the current item from the iterator to the template; instead, the template has access to all currently valid variables. Is there any way to pass the current item from the iterator to the template?
How to loop over array containing template variables with ansible?
This exact situation took me forever to figure out, but OSS is like that, I guess. This post is a year old, so maybe the original poster figured it out or gave up. Anyway, the problem for me, at least, was caused by a few things:

- IIS expects the realm string to be the same as what it sent to Nginx, but if your Nginx server_name is listening on a different address than the upstream, then the server-side WWW-Authenticate is not going to be what IIS was expecting, and it will be ignored.
- The built-in headers module doesn't clear the other WWW-Authenticate headers, particularly the problematic WWW-Authenticate: Negotiate. Using the headers-more module clears the old headers and adds whatever you tell it to.

After this, I was finally able to push SharePoint 2010 through Nginx. Thanks, Stack Overflow.

```
server {
    listen 80;
    server_name your.site.com;

    location / {
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        #proxy_pass_header Authorization; # This didn't work for me
        more_set_input_headers 'Authorization: $http_authorization';

        proxy_set_header Accept-Encoding "";
        proxy_pass https://sharepoint/;
        proxy_redirect default;

        # This is what worked for me, but you need the headers-more mod
        more_set_headers -s 401 'WWW-Authenticate: Basic realm="intranet.example.com"';
    }
}
```
I am trying to set up nginx as a reverse proxy in front of several IIS web servers that authenticate using Basic authentication. (Note: this is not the same as nginx providing the auth using a password file - it should just be marshalling everything between the browser and server.) It's working, kind of, but I'm getting repeatedly prompted for auth by every single resource (image/CSS etc.) on a page.

```
upstream my_iis_server {
    server 192.168.1.10;
}

server {
    listen 1.1.1.1:80;
    server_name www.example.com;

    ## send request back to my iis server ##
    location / {
        proxy_pass http://my_iis_server;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass_header Authorization;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
Nginx reverse proxy - passthrough basic authentication
I can't speak for Flask, but I can for CherryPy. That looks like the "proper way"... mostly. That line about a MethodDispatcher is a no-op, since it only affects CherryPy applications, and you don't appear to have mounted any (just a single Flask app instead). Regarding point 3, you have it right. CherryPy allows you to run multiple Server objects in the same process in order to listen on multiple ports (or protocols), but it doesn't have any sugar for starting up multiple processes. As you say, multiple cherryd commands with varying config files is how to do it (unless you want to use a more integrated cluster/config-management tool like eggmonster).
Per suggestions on SO/SF and other sites, I am using CherryPy as the WSGI server to launch multiple instances of a Python web server I built with Flask. Each instance runs on its own port and sits behind Nginx. I should note that the below does work for me, but I'm troubled that I have gone about things the wrong way and it works "by accident".

Here is my current cherrypy.conf file:

```
[global]
server.socket_host = '0.0.0.0'
server.socket_port = 8891
request.dispatch: cherrypy.dispatch.MethodDispatcher()
tree.mount = {'/':my_flask_server.app}
```

Without diving too far into my Flask server, here's how it starts:

```
import flask
app = flask.Flask(__name__)

@app.route('/')
def hello_world():
    return "hello"
```

And here is the command I issue on the command line to launch with cherryd:

cherryd -c cherrypy.conf -i my_flask_server

My questions are:

1. Is wrapping Flask inside CherryPy still the preferred method of using Flask in production? https://stackoverflow.com/questions/4884541/cherrypy-vs-flask-werkzeug
2. Is this the proper way to use a .conf file to launch CherryPy and import the Flask app? I have scoured the CherryPy documentation, but I cannot find any use cases that match what I am trying to do here specifically.
3. Is the proper way to launch multiple CherryPy/Flask instances on a single machine to execute multiple cherryd commands (daemonizing with -d, etc.) with unique .conf files for each port to be used (8891, 8892, etc.)? Or is there a better "CherryPy" way to accomplish this?

Thanks for any help and insight.
Using CherryPy/Cherryd to launch multiple Flask instances
Answered it myself - hopefully this is of some use to others, too.
We are using nginx for HTTPS traffic offloading, proxying to a locally installed JasperServer (5.2) running on port 8080:

internet ---(https/443)---> nginx ---(http/8080)---> tomcat/jasperserver

When accessing JasperServer directly on its port, everything is fine. When accessing the service through nginx, some functionality is broken (e.g. editing a user in the JasperServer UI) and the JasperServer log has entries like this:

CSRFGuard: potential cross-site request forgery (CSRF) attack thwarted (user:%user%, ip:%remote_ip%, uri:%request_uri%, error:%exception_message%)

After some debugging we found the cause: in its standard configuration, nginx does not forward request headers that contain underscores in their names. JasperServer (and the OWASP framework), however, default to using underscores for transmitting the CSRF token (JASPER_CSRF_TOKEN and OWASP_CSRFTOKEN respectively).

The solution is to either:

- nginx: allow underscores in headers:

```
server {
    ...
    underscores_in_headers on;
```

- or jasperserver: change the token configuration name in jasperserver-pro/WEB-INF/esapi/Owasp.CsrfGuard.properties

Also see here:

- header variables go missing in production
- http://wiki.nginx.org/HttpCoreModule#underscores_in_headers
Running jasperserver behind nginx: Potential CSRF attack
Your PHP code is being displayed directly because it is never sent to the PHP engine: the location block is matched and the PHP file is served as a plain file, because the request isn't being captured by the PHP block. So your problem is in the PHP block.

In that block you have two fastcgi_pass directives, one with a port (9000) and the other with a unix socket. You can't have both together, but since you've tagged your question with fastcgi I'll assume you are using FastCGI over TCP, so try commenting out this line:

#fastcgi_pass unix:/var/run/php5-fpm.sock;
I'm using this configuration on a fresh install of php5-fpm and nginx on ubuntu 13.04:server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /usr/share/nginx/html; index index.php index.html index.htm; server_name localhost; location / { try_files $uri $uri/ /index.html; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } error_page 404 /404.html; location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } }But, my web browser is seeing php as text instead of the executed results. Where should I look to troubleshoot?
My nginx + fastcgi configuration downloads php files instead of executing them
The servers you proxy to behind an Nginx front-end web server are referred to as upstream servers. You will want to refer to the documentation for the HttpUpstreamModule. It's very similar to what you are familiar with. If you don't need load-balancing, you just set up the one upstream server in the configuration and it will serve your purpose.
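In practice that pattern looks something like the sketch below; the address is a placeholder for your corporate proxy, not something from the question. Whether this fully reproduces Apache's ProxyRemote semantics depends on what the corporate proxy accepts, so treat it as a starting point rather than a drop-in equivalent:

upstream corporate_proxy {
    server 10.0.0.1:8080;    # hypothetical corporate proxy host:port
}

server {
    listen 80;

    location /localStackOverflow/ {
        # every request under this path is handed to the single upstream
        proxy_pass http://corporate_proxy;
    }
}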
Is there an equivalent of Apache'sProxyRemotedirective for NginX?So the scenario is I am behind a corporate proxy and I want to do proxy passes for various services with NginX. I would do it in Apache with the following:ProxyPass /localStackOverflow/ https://stackoverflow.com/ ProxyPassReverse /localStackOverflow/ https://stackoverflow.com/ ProxyRemote https://stackoverflow.com/ http://(my corporate proxy IP)I know I need theproxy_passdirective in Nginx but can't find what I would use for theProxyRemote.
How to configure Nginx behind a corporate proxy
In my gunicorn settings, setting workers=2 solved this issue.

When I was sending a request to the external URL, the external application would send a request back. This new request would occupy the one and only worker in the application, so the original request I sent out was left without a worker and got stuck. With 2 workers, I am able to simultaneously send out a request and receive another request.
I am sending a post request from a method inside a web application running on django+nginx+gunicorn. I have no issues receiving a 200 response from the same code when executed on django's own server (using runserver).

try:
    response = requests.post(post_url, data=some_data)
    if response.status_code == OK and response.content == '':
        logger.info("Request successful")
    else:
        logger.info("Request failed with response({}): {}".format(response.status_code, response.content))
    return response.status_code == OK and response.content == ''
except requests.RequestException as e:
    logger.info("Request failed with exception: {}".format(e.message))
    return False

I checked the server logs at post_url, and it is indeed returning a 200 response with this data. However, when I run the app behind gunicorn and nginx, I am not able to receive the response (although the request is being sent). The code gets stuck at the first line after the try block, and the gunicorn worker times out (after 30 seconds).

This is the apache server log at the post_url:

[14/Sep/2016:13:19:20 +0000] "POST POST_URL_PATH HTTP/1.0" 200 295 "-" "python-requests/2.9.1"

UPDATE: I forgot to mention, this request takes less than a second to execute, so it is not a timeout issue. Is something wrong with the configuration? I have the standard nginx+gunicorn setup, where gunicorn is set as the proxy_pass in nginx. I am guessing, since I am behind an nginx proxy, should I be doing something different when sending a post request from the application?
Making a POST request to an external URL from a django + gunicorn + nginx setup
I was having similar issues while setting up an nginx reverse proxy for Storm-UI. After digging for some time, I got it working.

server {
    listen 80;
    server_name example.com;

    location ^~ /css/ {
        rewrite /(.*) /storm-ui/$1;
    }
    location ^~ /js/ {
        rewrite /(.*) /storm-ui/$1;
    }
    location ^~ /templates/ {
        rewrite /(.*) /storm-ui/$1;
    }
    location ^~ /api/ {
        rewrite /(.*) /storm-ui/$1;
    }
    location ~ ^/topology(.*) {
        rewrite /(.*) /storm-ui/$1;
    }

    location /storm-ui/ {
        proxy_redirect / /storm-ui/;
        # proxy_pass http://<host>:<port>/;
        proxy_pass http://10.14.23.10:8080/;
    }
}
I have a 3rd-party ui server running in a docker container, exposed on port 8080.It seems to expect to load resources with an absolute path:http://localhost:8080/index.html,http://localhost:8080/js/some_jsfilesetc.I want to create a reverse proxy to it so it looks like it is coming from a different path:https://myserver.com/stormui/index.html,https://myserver.com/stormui/js/...first I triedlocation /stormui/ { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; #rewrite ^/stormui/(.*) /$1 break; proxy_pass http://127.0.0.1:8080/; }The index.html page loads, but the browser still tries to load the refered content without the additional path, so I get a 404 on all the javascripts etc referenced from index.html.Then I tried to use referer to do the rewrite location / {if ($http_referer ~ "^/stormui/.*") { rewrite ^/(.*) /stormui/$1 break; } root /usr/share/nginx/html; index index.html index.htm; ... }That didn't work, either. Is there a way to do this?
nginx reverse proxy to a set of pages with an additional path in the URL based on http referer?
The way that SSEs are built is by the client opening a connection to the server, which is then left open until the server has some data to send. This is part of the SSE spec, and not a thing specific to ActionController::Live. It's effectively the same as long-polling, but with the connection not being closed after the first bit of data is returned, and with the mechanism built into the browser.

As such, the only way it can be implemented is by having multiple open client connections to the webserver which sit there indefinitely. As to what resources are required to deal with them, I'm not sure, as I've not yet tried to benchmark this, but it'll need enough servers for Puma to keep open thousands of connections if you have that many users with a page open.

The default limit for Puma is 16 concurrent connections. Several blog posts about setting up SSEs for Rails mention upping this to a larger value, but none that I've found suggest what this higher value should be. They do mention that the number of DB connections will need to be the same, as each Rails thread keeps one running. Sort of sounds like an expensive way to run things. "Run a benchmark" is the only answer really.

I can't comment on reverse proxying as I've not tried it, but as SSEs are done over standard HTTP, I shouldn't think it'll need any special setup.
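On the nginx side, the commonly suggested settings for long-lived SSE connections are to disable response buffering so events reach the client immediately. A sketch (the upstream address is an assumption, not from the question):

location /events {
    proxy_pass http://127.0.0.1:3000;   # hypothetical Puma upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";     # allow a persistent upstream connection
    proxy_buffering off;                # flush each event instead of buffering
    proxy_read_timeout 24h;             # don't cut idle streams after the default 60s
}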
I'm experimenting with Rails 4 ActionController::Live and Server Sent Events. I'm using MRI 2.0.0 and Puma.

From what I can see, each connected client keeps an active connection to the server. I was wondering if it is possible to leverage SSEs without keeping all response streams running.

Puma manages multiple connections using threads, and I imagine there is a limit to the number of concurrent connections. What if I want to support a real-world scenario with thousands of clients registering to my Rails app for SSE events? Is there any example?

Also, I usually run Rails app servers behind an nginx reverse proxy. Would it require any particular setup?
Server Sent Events and Rails Streaming
To get rid of everything nginx related (configs etc.) do:

sudo apt-get purge nginx
I've gone pretty badly wrong and I want to just uninstall and then reinstall a fresh copy to start over. I've tried

sudo apt-get nginx uninstall

which didn't work, as well as

cd /usr/local/src
wget http://nginxcp.com/nginxadmin2.3-stable.tar
tar xf nginxadmin2.3-stable.tar
cd publicnginx
./nginxinstaller uninstall

with no luck. Can someone help me out please? Running ubuntu 12.04 server edition, long term support.
uninstalling nginx?
On Windows, you are using Docker Toolbox, and the IP you need is 192.168.99.100 (which is the IP of the Docker Toolbox VM). The IP you got is the IP of the container inside the VM, which is not accessible directly from Windows.
I'm running an nginx container on a windows 10 machine. I've stripped it down to a bare minimum - an nginx image provided in the Docker hub. I'm running it using:docker run --name ng -d -P nginxThis is the output ofdocker ps:b5411ff47ca6 nginx "nginx -g 'daemon off" 22 seconds ago Up 21 seconds 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp ngAnd this is the IP I'm getting when doingdocker inspect ng: "IPAddress": "172.17.0.2"So, the next thing I'm trying to do is access the Nginx server from the host machine by openinghttp://172.17.0.2:32771in browser of the host machine. This is not working (host not found etc).Please advise
Can not access nginx container on a local windows machine
Found the solution to my issue by searching for Nginx used as a reverse proxy for any other application with basic_auth. The solution was the answer found here: https://serverfault.com/questions/511846/basic-auth-for-a-tomcat-app-jira-with-nginx-as-reverse-proxy

The line I was missing from my nginx configuration was:

# Don't forward auth to Tomcat
proxy_set_header Authorization "";

By default, it appears that after basic auth Nginx will additionally forward the auth headers to Jenkins, and this is what was leading to my issue. Jenkins receives the forwarded auth headers and then thinks it needs to authorize itself too?!

If we set our reverse proxy to not forward any authorization headers, as shown above, then everything works as it should. Nginx will prompt for basic_auth, and after successful auth we explicitly clear (reset?) the auth headers when forwarding to our reverse proxy.
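Putting it together, the relevant location block ends up roughly like this (a sketch assembled from the config in the question plus the fix; adjust names and paths to your setup):

location ^~ /jenkins/ {
    auth_basic "[....] Please confirm identity...";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    # Don't forward nginx's basic-auth credentials to Jenkins
    proxy_set_header Authorization "";

    proxy_pass http://app_server;
}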
Below is my nginx configuration file for Jenkins. Most of it is exactly as per the documentation.

Config file:

upstream app_server {
    server 127.0.0.1:8080 fail_timeout=0;
}

server {
    listen 80;
    listen [::]:80 default ipv6only=on;
    server_name sub.mydomain.net;

    location ^~ /jenkins/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server;
            break;
        }

        auth_basic "[....] Please confirm identity...";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}

When navigating to http://sub.mydomain.net/jenkins I get prompted for my basic auth with Server says: [....] Please confirm identity.... This is correct, but as soon as I enter the proper credentials I then get PROMPTED AGAIN for basic auth, but this time: Server says: Jenkins.

Where is this second hidden basic_auth coming from?! It's not making any sense to me.

Hitting CANCEL on the first prompt, I then correctly receive a 401 authorization required error. Hitting CANCEL on the second basic auth ("Server says: Jenkins") I get:

HTTP ERROR 401
Problem accessing /jenkins/. Reason: Invalid password/token for user: _____
Powered by Jetty://

Does anyone know what's possibly going on?
Jenkins/Nginx - Double prompted for basic auth, why? Why is there an internal Jenkins auth?
Better solution? Siege. A more accurate benchmarking tool than ab.
I am planning to set up nginx as a reverse proxy. I will have apache deliver my dynamic content, and nginx will deliver the static content.

The configuration I have now is just Apache with fastCGI. This gives me no configuration problems and runs great. After I have set up nginx I want to run some benchmarks to see if I really got some performance increases, else I will switch back.

Does anyone know how I can benchmark this type of setup? Or maybe someone did this already and has some canned results; I would be glad to hear them.

PS. I know this is more a serverfault type of question, but I have seen numerous posts about apache and nginx so I thought I'd give it a try
How to benchmark apache/nginx setup
You have configured nginx as an HTTP reverse proxy, however rabbitmq is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).

In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP, for example http://www.rabbitmq.com/web-stomp.html.

Of course, this may have a ripple effect, because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
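If you do switch to the Web-STOMP plugin, nginx then proxies ordinary HTTP/WebSocket traffic to it. A sketch (15674 is Web-STOMP's default port; verify against your installation):

server {
    listen 80;

    location /stomp/ {
        proxy_pass http://127.0.0.1:15674/;      # Web-STOMP endpoint (assumed default port)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # required for WebSocket upgrades
        proxy_set_header Connection "upgrade";
    }
}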
I am trying to set up rabbitmq so it can be accessed externally (from non-localhost) through nginx.

nginx-rabbitmq.conf:

server {
    listen 5672;
    server_name x.x.x.x;

    location / {
        proxy_pass http://localhost:55672/;
    }
}

rabbitmq.conf:

[
  {rabbit,
    [
      {tcp_listeners, [{"127.0.0.1", 55672}]}
    ]
  }
]

By default the guest user can only interact from localhost, so we need to create another user with the required permissions, like so:

sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"

However, when I attempt a connection to rabbitmq through pika I get a ConnectionClosed exception:

import pika

credentials = pika.credentials.PlainCredentials('my_username', 'my_password')
pika.BlockingConnection(
    pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)

--[raises ConnectionClosed exception]--

If I use the same parameters but change host to localhost and port to 5672 then I connect ok:

pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)

I have opened port 5672 on the GCE web console, and communication through nginx is happening: the nginx access.log file shows

[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"

which shows a 400 status code response (bad request). So by the looks of it the request fails when going through nginx, but works when we request rabbitmq directly. Has anyone else had similar problems / got rabbitmq working for external users through nginx? Is there a rabbitmq log file where I can see each request, to help further troubleshooting?
RabbitMQ connection through Nginx
To kill the nginx process: if you are sure nginx is actually running, you just need to kill the nginx.exe process and re-run apache. Open Run (Windows key + R) or a command prompt (cmd.exe) and paste the command below:

taskkill /F /IM nginx.exe

To find which process is holding port 80, here is the netstat command & output:

C:\> netstat -n -a -o | findstr "0.0.0.0:80"
  TCP    0.0.0.0:80    0.0.0.0:0    LISTENING    1588

Here, 1588 is the PID of the process holding port 80. So, below is a sample command to get the process name from PID 1588:

C:\> tasklist /svc /FI "PID eq 1588"
Image Name    PID    Services
============= ====== ========
nginx.exe     1588   N/A

So it shows that nginx.exe is holding port 80.
I've run nginx once and now I cannot get rid of it. When I run apache on my server, localhost still points to that "Welcome to nginx" page and I don't know why. I'm on windows 7.
Nginx won't leave! How to remove it [closed]
This will likely be one of two reasons:

You are using anti-virus software and it is MITM-ing your traffic and so downgrading you to HTTP/1.1. Turn off https traffic monitoring on your AV to connect directly to the server. You can check if this is the case by using an online tool to test your site for HTTP/2 support.

You are using older TLS ciphers, and specifically one that Chrome disallows for HTTP/2 (https://http2.github.io/http2-spec/#BadCipherSuites), as per Step 5 of the above guide. Scan your site using https://www.ssllabs.com/ssltest/ to check your TLS config and improve it.

The third reason is lack of ALPN support in your SSL/TLS library (i.e. you are using openssl 1.0.1 and need to be on 1.0.2 or later, for example), but you have already confirmed you have ALPN support, so I am skipping that for this answer.
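For reason 2, the fix is to pin a modern cipher list in your nginx TLS config. A sketch, not a complete policy; validate any change against the SSL Labs test above:

server {
    listen 443 ssl http2;

    ssl_protocols TLSv1.2;
    # AEAD (GCM) suites avoid the cipher blacklist Chrome enforces for HTTP/2
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
}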
I setup my Nginx conf as perDigital Ocean paper, and now http2 is available.But in Chrome (Version 54.0.2840.98 (64-bit)) Dev tool, it's always on HTTP 1/1:NAME METHOD STATUS PROTOCOL shell.js?v=xx.. GET 200 http/1/1My server is running Ubuntu 16.04 LTS which supports both ALPN & NPN, and the openssl version shipped with it is 1.0.2g.I checked http2 support withthis tool siteand the result is:Yeah! example.com supports HTTP/2.0. ALPN supported...Also checking with curl is OK:$ curl -I --http2 https://www.example.com HTTP/2 200 server: nginx/1.10.0 (Ubuntu) date: Tue, 13 Dec 2016 15:59:13 GMT content-type: text/html; charset=utf-8 content-length: 5603 x-powered-by: Express cache-control: public, max-age=0 etag: W/"15e3-EUyjnNnyevoQO+tRlVVZxg" vary: Accept-Encoding strict-transport-security: max-age=63072000; includeSubdomains x-frame-options: DENY x-content-type-options: nosniffI also checked with is-http2 cli from my console:is-http2 www.amazon.com × HTTP/2 not supported by www.amazon.com Supported protocols: http/1.1 is-http2 www.example.com ✓ HTTP/2 supported by www.example.com Supported protocols: h2 http/1.1Why doesn't Chrome recognise it?How can I check it also with Safari (v 10.0.1)?
Why doesn't Chrome browser recognize my http2 server?
Whendefaultis not specified in a map block, the default resulting value will be an empty string. So, in your case, whatever value$storecodeis set with in the first map block, it is replaced with an empty string in the second one.Since map variables are evaluated when they are used, you cannot set$storecodeas the default value in the second map block, because that will cause an infinite loop.So the solution is to introduce a temporary variable in the first map block and then use it as the default value in the second block:map $host $default_storecode { default dom_nl; domain.com dom_nl; domain.de dom_de; store.com str_de; } map $host$uri $storecode { default $default_storecode; ~^store.com/en.* str_en; ~^store.com/fr.* str_fr; }Alternatively, you can merge these two map blocks into one:map $host$uri $storecode { default dom_nl; ~^domain.com.* dom_nl; ~^domain.de.* dom_de; ~^store.com/en.* str_en; ~^store.com/fr.* str_fr; ~^store.com.* str_de; }
we have a multisite set-up and need to map domains and domains/subfolders to a variable. This way the programming knows which version to load.We have stores that have separate domains and that can be captured by$http_hostbut also domain.com/-string-locale-here- and are captured by$http_host$uriand a match commandSomehow the below is not working. Can this be because there are two map commands, both mapping towards the same variable$storecodeOr what might be going wrong?map $http_host $storecode { default dom_nl; domain.com dom_nl; domain.de dom_de; store.com str_de; } map $http_host$uri $storecode { ~^store.com/en.* str_en; ~^store.com/fr.* str_fr; }
nginx conf /w multiple map(s) to same variable
I got in contact with the guys who made the AMI and found out there are additional configuration files that override the php.ini. There are two files which hold settings:

/etc/php-fpm.d/www.conf   # This is the file which holds upload_max_filesize and post_max_size, among others
/etc/php-fpm.conf

Obviously the locations may differ on different configurations, but hopefully this will help give someone an idea of what else to look for.
Bear in mind, I am no sysadmin, I am just a developer. I cannot find anyone with theexactproblem as me, just similar, and none of their "fixes" seem to work.I am currently running an Amazon EC2 instance running.CentOS 6.2 Nginx 1.2.2 PHP 5.3.16 with APC Percona 5.5.24 // not currently using this as I am using an RDSI have set my php.ini (/etc/php.ini) settings to the followingupload_max_filesize=10M post_max_filesize=20MAfter reloading the config, usingphp -ivia ssh, these settings seemed to be loaded. Showingupload_max_filesize=10M, etc.When usingphpinfo()orini_get, both options are returned as4Mphpinfo()indicates that the file I am editing is the one loaded (/etc/php.ini).I have also runphp -i | grep "\.ini"to check which files are loaded, and there are no unnecessary loaded configs. I even went through each loaded file individually to check they didn't have the settings inside.Additionally, I have been suggested to try using a.user.iniconfig file. This did not change the values either.ini_set()does not work either.I'm at a bit of a loss.EDIT: not sure this will help, but I am using this AMIhttp://megumi-cloud.com/
Cannot change upload_max_filesize or post_max_size in php.ini
It is really easy to host different apps on one host with Nginx and Unicorn. You get the separation by defining a different socket file name for each application. Of course you should point to the right current/public directories in the server section of nginx.conf.

The last touch is in the unicorn_init.sh file: at the top of it you should change APP_ROOT to the full path of the current/public directory of your application. If your setup is similar to the RailsCast's one, all the other things are done by capistrano.
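As an illustration of the socket separation, the nginx side might look like this per app (names are made up; each app's unicorn config must point at its matching socket):

upstream unicorn_app_one {
    server unix:/tmp/unicorn.app_one.sock fail_timeout=0;
}

server {
    listen 80;
    server_name app-one.example.com;
    root /var/www/app_one/current/public;   # this app's current/public

    try_files $uri/index.html $uri @unicorn_app_one;
    location @unicorn_app_one {
        proxy_pass http://unicorn_app_one;
    }
}

A second server block for the other app repeats the pattern with its own socket, server_name and root.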
I successfully setup a rails site using the Screencast 335 deploy to a VPS tutorial. Now I want to add another rails app on a new domain but I am confused about the steps required.In the above setup, there are no changes to sites-available or /etc/nginx/nginx.conf. The only configuration is in unicorn.rb, unicorn_init.sh and nginx.conf in my apps config directory. The nginx.conf file looks like this:-upstream unicorn { server unix:/tmp/unicorn.my_app.sock fail_timeout=0; } server { listen 80 default deferred; # server_name my_app.com.au www.my_app.com.au; root /var/www/my_app/current/public; location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; } try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; }In my Capistrano recipe I have this linesudo "ln -nfs #{current_path}/config/nginx.conf /etc/nginx/sites-enabled/#{application}"Is adding a second domain merely a matter of removing default deferred after listen and un-commenting the server_name section then repeating this config file with a different upstream socket name and server name for the second app? Will that work or do I need to transfer this file to sites-available and create a symbolic link to sites-enabled?
multiple rails apps on nginx and unicorn
With nginx you don't need rewrites at all.

upstream domain_server {
    server localhost:8000 fail_timeout=0;
}

proxy_set_header Host domain.com;
proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;

server {
    listen 80 default_server;
    location / {
        proxy_pass http://domain_server/userdomain/$http_host;
    }
}

server {
    listen 80;
    server_name domain.com;
    root /var/www/domain.com;
    location / {
        try_files $uri @backend;
    }
    location @backend {
        proxy_pass http://domain_server;
    }
}

server {
    listen 80;
    server_name ~^(?<subdomain>.+)\.domain\.com$;
    location / {
        proxy_pass http://domain_server/website/$subdomain$request_uri;
    }
}

http://nginx.org/r/proxy_pass
http://wiki.nginx.org/IfIsEvil
http://wiki.nginx.org/Pitfalls
http://nginx.org/en/docs/http/converting_rewrite_rules.html
I need these two types of rewrites:

subdomain.domain.com => domain.com/website/subdomain
otherdomain.com => domain.com/userdomain/otherdomain.com

My problem is that I want the user to see subdomain.domain.com and otherdomain.com, not the redirected version. My current rewrite in nginx works, but the user's URL shows the rewrite, and I want this to be transparent to the user. Any ideas?

upstream domain_server {
    server localhost:8000 fail_timeout=0;
}

server {
    listen 80;
    root /var/www/domain.com;
    server_name domain.com ~^(?<subdomain>.*)\.domain\.com$ ~^(?<otherdomain>.*)$;

    if ( $subdomain ) {
        rewrite ^ http://domain.com/website/$subdomain break;
    }
    if ( $otherdomain ) {
        rewrite ^ http://domain.com/userdomain/$otherdomain break;
    }

    location / {
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
        if (!-f $request_filename) {
            proxy_pass http://domain_server;
            break;
        }
    }
}
nginx subdomain and domain rewrite w proxy pass
I just needed to set expires off; within my proxy location block.
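For context, that means something like the following inside the proxy configuration (a sketch; the upstream address stands in for the node.js server):

location / {
    expires off;                       # stop nginx adding Expires/Cache-Control headers here
    proxy_buffering off;               # pass streamed responses straight through
    proxy_pass http://127.0.0.1:8000;  # hypothetical node.js upstream
}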
From what I understand, proxy_cache can only be disabled by changing the incoming request headers to something like Cache-Control: 'no-cache'. This does not seem to be working for me; is there any way to completely disable caching for that proxy? proxy_cache off didn't work either. The response headers always come back like this:

Cache-Control       max-age=86400
Connection          keep-alive
Content-Type        text/plain
Date                Mon, 19 Mar 2012 19:42:28 GMT
Expires             Tue, 20 Mar 2012 19:42:28 GMT
Server              nginx/0.7.65
Transfer-Encoding   chunked

Also, the requests I am proxying are coming from a node.js server, so I need to enable "streaming". Thanks
Nginx: how to completely disable proxy caching
You need to add the trailing slash:

location /panel/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:8082/;
}
To remove the use of ports on several of the applications running on this server, I've been using nginx's proxy_pass. However, for some reason the actual URL path is being passed to the application. Is there a way so that the application thinks /panel is really just /?

location /panel {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:8082/;
}
Removing start of path from nginx proxy_pass
Hopefully this helps with your question about the nginx config files. You can find the nginx configuration for your sites by running cd ~/.config/valet/Nginx in your terminal. To get to the base nginx config for valet use cd /usr/local/etc/nginx/valet. You should then see valet.conf; inside, you can update the following lines to put the log files where you want them:

access_log "/Users/[user_id]/.config/valet/Log/access.log";
error_log "/Users/[user_id]/.config/valet/Log/nginx-error.log";

Make sure to run valet restart after you make changes to the valet.conf file.
I'm using laravel valet to serve sites in my local dev env, which is great. However, there's only one file in the expected location of~/.valet/Log:➜ ls ~/.valet/Log nginx-error.logI've tinkered with php-fpm log settings and the nginx log settings, but I'm not sure that I'm even using the right config files, since I suspect that valet installs its own version of PHP and nginx.Can any one tell me where the php / nginx config files for valet would be found, and what specific settings to change to drop the PHP error / log files where they're supposed to be written?
Laravel Valet logs
Most likely you have another service listening on 8080; I think the omnibus install has some service hooking 8080. Just use 8081 instead.

Edit: I just did a quick search and found that it's the unicorn server that is listening on 8080 with the original omnibus installer.

Note: You will only need to change the external_url in gitlab.rb; no other config file should have to be edited for this.

Edit #2: As @emeraldjava stated, there is an option in the configuration file for using another unicorn port:

#unicorn['port'] = '8080'
I'm currently in the process of trying to get Gitlab omnibus installed on my private Debian server, and it works perfectly on port 80. The problem is I also have an Apache server listening on port 80, so I'm trying to get Nginx listening on port 8080, but for some reason I'm getting a "502 Gitlab is not responding" error. I have edited both "external_url" in gitlab.rb and also the port number under the server block in the nginx.conf file, and no joy. If someone could help me that would be great!
Gitlab on port 8080
You need to re-specify passenger_enabled in the location block.
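In other words, something along these lines (a sketch based on the location block from the question):

location = / {
    passenger_enabled on;   # must be repeated inside this location
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
}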
I want to protect my newly deployed Rails 3 app with basic http authentication. It's running on the latest Nginx/Passenger and I'm using the following Nginx directive to protect the web root directory:

location = / {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
}

The htpasswd file was generated using the Apache htpasswd utility. However, after entering the correct username and password I'm getting transferred to the 403 Forbidden error page. Analyzing the Nginx error log revealed this:

directory index of "/var/www/mysite/public/" is forbidden, client: 108.14.212.10, server: mysite.com, request: "GET / HTTP/1.1", host: "mysite.com"

Obviously, I don't want to list the contents of the mysite/public directory. How can I configure this properly so the Rails app starts after I enter my login info?
Password protecting Rails site running on Nginx and Phusion Passenger
Looks like the request headers may have exceeded the default uwsgi maximum buffer size of 4k. Try increasing the buffer size by adding buffer-size=32768 to your uwsgi.ini file.
While I was adding content to my django web site in the admin panel, I got an error. After I added 10-15 pieces of content, the site gives the error "The page you are looking for is temporarily unavailable."

I analysed the nginx and uwsgi logs. The nginx log contains the line below:

2012/06/02 22:02:53 [error] 5203#0: *602 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 92.10.214.1, server: server.com, request: "POST /admin/hdduyuru/duyurular/add/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:1235", host: "127.0.0.1", referrer: "http://127.0.0.1/admin/hdduyuru/duyurular/add/"

And the uwsgi log contains the line below:

invalid request block size: 4169 (max 4096)...skip

I'm using the line below to deploy my site on uwsgi+nginx:

/usr/bin/uwsgi --socket 127.0.0.1:1245 --master --workers 5 --harakiri 30 --disable-logging --daemonize /tmp/daemonize.log --pidfile /tmp/pidfile.txt --vacuum --gid 500 --uid 500 --ini /home/uwsgi.ini

/home/uwsgi.ini:

[uwsgi]
chdir=/home/
module=hdblog.wsgi:application
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/tmp/hdblog.log
Django Admin Panel Content Posting Error
error_page 404 =200 @empty_json;

location @empty_json {
    return 200 "{}";
}

Reference:
http://nginx.org/r/error_page
http://nginx.org/r/return
http://nginx.org/r/location
We've got an API running on Nginx, supposed to return JSON objects. This server is under a lot of load, so we did a lot of performance improvements.

The API receives an ID from the client. The server has a bunch of files representing these IDs. If the ID is found as a file, the contents of that file (which is JSON) will be returned by the backend. If the file does not exist, no backend is called; Nginx simply sends a 404 for that, so we save performance (no backend system has to run).

Now we stumbled upon a problem. Due to old systems we still have to support, we cannot hand out a 404 page for clients, as this will cause problems. What I came up with is to return an empty JSON string instead ({}) with a 'fake' 200 status code. This needs to be a highly performant solution to still be able to handle all the load. Is this possible to do, and if so, how?
Nginx return an empty json object with fake 200 status code
Actually, what bothers me the most is theException NotificationsI am getting from Rails when a bot hits my site with an unknown HTTP method. (Sometimes I get a dozen or so exception emails à laAn ActionController::UnknownHttpMethod occurred in ...within a matter of seconds.)So in myproduction.rbI added one extra line:config.middleware.use ExceptionNotification::Rack, :ignore_exceptions => ['ActionController::UnknownHttpMethod'] + ExceptionNotifier.ignored_exceptions, # I added this line :email => { :email_prefix => "[ERROR]", :sender_address => %{"Error Notification" <[email protected]>}, :exception_recipients => %w{[email protected]} }Let's see how that works out. I will post updates in a few weeks.
On my Rails production website I sometimes get a dozen or so errors along the lines of:An ActionController::UnknownHttpMethod occurred in #:TRACK, accepted HTTP methods are OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK, VERSION-CONTROL, REPORT, CHECKOUT, CHECKIN, UNCHECKOUT, MKWORKSPACE, UPDATE, LABEL, MERGE, BASELINE-CONTROL, MKACTIVITY, ORDERPATCH, ACL, SEARCH, MKCALENDAR, and PATCHI guess this is when bots hit my site with HTTP methods that Rails can't handle (in my case it's mostlyOPENVAS,TRACK,DEBUG,TRACK, andINDEXbut also weird methods likeWMIXLVXM).Is there a way tosilencethese messages in any way? I am still unsure as to whether this is a Rails issue or an Nginx issue.I am using a custom controller to render custom error pages to the user:Rails.application.routes.draw do %w(404 500).each do |status| match status, :to => 'errors#show', :status => status, :via => :all end ... endclass ErrorsController < ApplicationController def show status = params[:status] || 500 @title = "Error" render(:status => status, :template => "errors/show.html.erb") end endBut my custom controller is probably not causing the errors?Thanks for any help.
How to silence ActionController::UnknownHttpMethod errors?
You can do it through OpenResty + lua-openssl by parsing the raw certificate. Refer to this: https://github.com/Seb35/nginx-ssl-variables/blob/master/COMPATIBILITY.md#ssl_client_s_dn_x509

Just like this:

local variableName = string.match(require("openssl").x509.read(ngx.var.ssl_client_raw_cert):issuer():oneline(), "/C=([^/]+)")
I have an Nginx server which clients make requests to with a Client certificate containing a specific CN and SAN. I want to be able to extract the CN (Common Name) and SAN (Subject Alternative Names) fields of that client cert.rough example config:server { listen 443 ssl; ssl_client_certificate /etc/nginx/certs/client.crt; ssl_verify_client on; #400 if request without valid cert location / { root /usr/share/nginx/html; } location /auth_test { # do something with the CN and SAN. # tried these embedded vars so far, to no avail return 200 " $ssl_client_s_dn $ssl_server_name $ssl_client_escaped_cert $ssl_client_cert $ssl_client_raw_cert"; } }Using the embedded variables exposed as part of thengx_http_ssl_modulemodule I can access the DN (Distinguished Name) and therefore CN etc but I don't seem to be able to get access to the SAN.Is there some embedded var / other module / general Nginx foo I'm missing? I can access the raw cert, so is it possible to decode that manually and extract it?I'd really rather do this at the Nginx layer as opposed to passing the cert down to the application layer and doing it there.Any help much appreciated.
Nginx - how to access Client Certificate's Subject Alternative Name (SAN) field
(I eventually tracked down a solution myself... hopefully this is of some use to others.)

Install Tomcat, then install the WAR version of TeamCity, which is in the download area above the Java EE Container tab. This exposes TeamCity under a base URL that you can choose at the time you install the WAR.

The simplest approach is to copy the .war file into Tomcat's webapps directory, giving it a name that matches the desired base URL. For instance, installing teamcity.war into $TOMCAT_HOME/webapps will load TeamCity under the url http://localhost:8080/teamcity (assuming the default Tomcat install). Proxying from https://public.address.com/teamcity to this internal address should be fairly straightforward in nginx.

I had trouble getting it to run immediately after I installed the .war file, but after restarting Tomcat, it all came good.
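The nginx side then becomes a plain proxy with matching base URLs on both ends. Roughly (assuming Tomcat on its default port 8080):

location /teamcity/ {
    # same /teamcity/ prefix upstream, so no link rewriting is needed
    proxy_pass http://127.0.0.1:8080/teamcity/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}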
I am trying to set up TeamCity behind nginx. I'd likehttps://public.address.com/teamcity/... to redirect tohttp://127.0.0.1:8111/..., but even though nginx does this successfully, the login page comes back with references that look like this:Obviously, this won't do, and fiddling with therootURLsetting (Server URL:inServer Configuration) doesn't make any difference.How do I run TeamCity behind a proxy under a non-root URL?FWIW, here's the relevant portion of my nginx config:location /teamcity/ { proxy_pass http://127.0.0.1:8111/; proxy_redirect http://127.0.0.1:8111/ https://$host/teamcity/; }
TeamCity behind nginx proxy
https://calomel.org/nginx.htmlBlock most "referrer spam" -- "more of an annoyance than a problem"nginx.conf## Deny certain Referers (case insensitive) ## The ~* makes it case insensitive as opposed to just a ~ if ($http_referer ~* (babes|click|diamond|forsale|girl|jewelry|love|nudit|organic|poker|porn|poweroversoftware|sex|teen|video|webcam|zippo)) { return 403; }
I'm running two mongrels under an Nginx server. I keep getting requests for a nonexistent file. The IP addresses change frequently but the referring URL stays the same. I'd like to resolve this.
How to block referral spam using Nginx?
I run the following command to install json for php7 and it worked perfectly fine.[root@server dbs]# sudo yum install php70u-json
My server configuration:

[root@server ~]# php -v
PHP 7.0.22 (cli) (built: Aug 7 2017 16:18:27) ( NTS )
[root@server ~]# nginx -v
nginx version: nginx/1.10.2

OS: CentOS 7.3.1611 (Core)

Details of my YUM installation:

[root@server ~]# yum list installed | grep php
php70u-cli.x86_64 7.0.22-2.ius.centos7 @ius
php70u-common.x86_64 7.0.22-2.ius.centos7 @ius
php70u-fpm.x86_64 7.0.22-2.ius.centos7 @ius
php70u-fpm-nginx.noarch 7.0.22-2.ius.centos7 @ius
php70u-mysqlnd.x86_64 7.0.22-2.ius.centos7 @ius
php70u-pdo.x86_64 7.0.22-2.ius.centos7 @ius

Here are the details of the investigation. I am trying to execute the following code in test2.php:

<?php
$arr = array('a' => 1, 'b' => 2, 'c' => 3, 'd' => 4, 'e' => 5);
echo json_encode($arr);
?>

And I am getting the following error:

[root@server ~]# php /tmp/test2.php
PHP Fatal error: Uncaught Error: Call to undefined function json_encode() in /tmp/test2.php:3

How do I resolve this?
In PHP 7.0 Fatal error: Uncaught Error: Call to undefined function json_encode()
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application, and give you some real feedback about what effect your nginx configuration changes cause.

from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist {}'.format(request.url), 404

So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I've got a flask app daemonized via supervisor. I want to proxy_pass a subfolder on the localhost to the flask app. The flask app runs correctly when run directly, however it gives 404 errors when called through the proxy. Here is the config file for nginx:upstream apiserver { server 127.0.0.1:5000; } location /api { rewrite /api/(.*) /$1 break; proxy_pass_header Server; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://apiserver; proxy_next_upstream error timeout http_502; proxy_buffering off; }For instance, when I go tohttp://127.0.0.1:5000/me, I get a valid response from the app. However when I go tohttp://127.0.0.1/api/meI get a 404 from the flask app (not nginx). Also, the flaskSERVER_NAMEvariable is set to127.0.0.1:5000, if that's important.I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
Flask app gives ubiquitous 404 when proxied through nginx
Turns out I fixed my own problem: I had misunderstood how Nginx worked. :D

server {
    listen 1234;                  # port that Nginx listens on
    server_name xxx.xx.xx.xx;     # the actual IP of the server; it has a public IP address
    access_log /home/lilo/textImageSite/access.log;
    error_log /home/lilo/textImageSite/error.log;

    location /static {
        root /home/lilo/textImageSite/imageSite;
    }

    location / {
        proxy_pass http://127.0.0.1:8888;   # the port that Gunicorn uses
    }
}

So in my case, if I have my Gunicorn instance running on port 8888, then going to xxx.xxx.xx.x:8888/textImageSite would load the page, but without any static content. If I access it using xxx.xxx.xx.x:1234, then the page will load the static content (images, css style sheets etc). It's my first time using Gunicorn and Nginx (and first time writing a Django app too) so hopefully this will help someone who's confused :)
Right now, I'm trying to follow this tutorial:http://honza.ca/2011/05/deploying-django-with-nginx-and-gunicornThe template site loads correctly, but the images don't load. Here is part of my config.py file for my application:# Absolute filesystem path to the directory that will hold user-uploaded files. # Example: "/home/media/media.lawrence.com/media/" MEDIA_ROOT = '' # URL that handles the media served from MEDIA_ROOT. Make sure to use a # trailing slash. # Examples: "http://media.lawrence.com/media/", "http://example.com/media/" MEDIA_URL = '' # Absolute path to the directory static files should be collected to. # Don't put anything in this directory yourself; store your static files # in apps' "static/" subdirectories and in STATICFILES_DIRS. # Example: "/home/media/media.lawrence.com/static/" STATIC_ROOT = '/home/lilo/textImageSite/imageSite/static/' # URL prefix for static files. # Example: "http://media.lawrence.com/static/" STATIC_URL = '/static/' # Additional locations of static files STATICFILES_DIRS = ( # Put strings here, like "/home/html/static" or "C:/www/django/static". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. )My nginx config file (located at /etc/nginx/sites-enabled):server { listen 80; server_name xxx.xx.xx.xx; #the actual IP of the server; it has a public IP address access_log /home/lilo/textImageSite/access.log; error_log /home/lilo/textImageSite/error.log; location /static { root /home/lilo/textImageSite/imageSite; } location / { proxy_pass http://127.0.0.1:8888; } }My gunicorn_conf file:bind = "0.0.0.0:8888" logfile = "/home/lilo/textImageSite/gunicorn.log" workers = 3And right now in my template, this is how I'm accessing the image: Here is what the generated HTML looks like: Sorry for the wall of text, but I can't figure out what's wrong with my setup...
How exactly do I serve static files with nginx and gunicorn for a Django app?
I seem to have found a workaround that fixed my problem. After some additional Google research, I added the following lines to my Nginx config:

proxy_buffers 8 16k;
proxy_buffer_size 32k;

However, I still don't know why this worked, and why only Firefox seemed to have problems. If anyone can shed light on this, or offer a better solution, it would be much appreciated!
I am running a website locally; all the traffic is routed through NGinx, which then dispatches requests for PHP pages to Apache and serves static files itself. It works perfectly in Chrome, Safari, IE, etc.

However, whenever I open the website in Firefox I get the following error:

502 Bad Gateway
nginx/0.7.65

If I clear out cache and cookies, and then restart Firefox, I am able to load the site once or twice before the error returns. I've tried both Firefox 3.6 and 3.5 and both have the same problem.

Here is what my Nginx config looks like:

worker_processes 2;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name local.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://local.mysite.amc:8080;
        }

        include /opt/local/etc/nginx/rewrite.txt;
    }

    server {
        include /opt/local/etc/nginx/mime.types;
        listen 80;
        server_name local.static.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
    }
}

And here are the errors that Firefox generates in my error.log file:

[error] 11013#0: *26 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream
[error] 11013#0: *30 upstream sent too big header while reading response header from upstream
[error] 11013#0: *30 no live upstreams while connecting to upstream

I am completely at a loss why a browser would cause a server error. Can someone help?
Nginx 502 Bad Gateway error ONLY in Firefox
Fixed by adding more headers in Nginx (X-Forwarded-Ssl on, X-Forwarded-Port 443, X-Forwarded-Host "your hostname", and X-Forwarded-Proto https). The problem was actually in the new way CSRF tokens are checked by ActionController (it compares the request.base_url with the origin header).
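Concretely, the proxy block gains headers along these lines (my sketch of the fix described above; the upstream address is a placeholder):

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;  # lets Rails rebuild the original base_url
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header X-Forwarded-Host $host;
    proxy_pass http://127.0.0.1:3000;          # hypothetical Rails upstream
}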
I've got a really weird issue with Rails 5 (beta1) and it's preventing me from safely submitting any forms.When running in production mode behind an Nginx (1.4.6 on Ubuntu 12.04) reverse proxy which decrypts SSL, Rails is rejecting my CSRF tokens saying they are invalid despite the fact that the correct token is being submit by the form.Everything works fine when I turn SSL off in Nginx.Any help would be appreciated.
InvalidAuthenticityToken in Rails 5 behind Nginx using SSL
In Nginx, each cookie is available in the embedded variable $cookie_CookieName. In case you want to check a cookie with name mycookie, you can do it using this configuration snippet:

if ($cookie_mycookie != "foobar") {
    return 401;
}

From the nginx manual for the if command, a condition may be (among others):

Comparison of a variable with a string using the = and != operators;

Matching of a variable against a regular expression using the ~ (for case-sensitive matching) and ~* (for case-insensitive matching) operators. Regular expressions can contain captures that are made available for later reuse in the $1..$9 variables. Negative operators !~ and !~* are also available. If a regular expression includes the } or ; characters, the whole expression should be enclosed in single or double quotes.
There are countless tutorials on checking if a cookie exists and contains my content, in this case foobar. How do I do the following, assuming mycookie is the cookie that I want set?

if ($cookie_mycookie does not equal "foobar") {
    return 401;
}

I have tried the following to no avail:

if (!$http_mycookie ~* "foorbar" ) {
    return 401;
}

Thank you!
Check if cookie does NOT contain specified content NGINX
Is it possible to define a common location for all servers? No.

You could make a separate file and include it into all your servers.

/etc/nginx/error-location.inc:

location ^~ /error/ {
    internal;
    root /var/www/nginx/errors;
}

And then:

server {
    ...
    include error-location.inc;
}

server {
    ...
    include error-location.inc;
}
Is it possible to define a common location for all servers? Fromnginx locationdocumentation I've seen that location depends on server. I would like to do something like this:... http { error_page 404 /error/404.html; error_page 500 501 502 503 504 /error/50x.html; location ^~ /error/ { internal; root /var/www/nginx/errors; } server { ... } server { ... } ... }I've tried setting:http { ... root /var/www/nginx/errors; # also with root /var/www/nginx ... }with no success: always showing nginx default error page.
nginx error location for all servers
set $request_url $request_uri; if ($request_uri ~ ^/example(.*)$ ) { set $request_url /module/controller/action; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9090; #include fastcgi.conf; fastcgi_param REQUEST_URI $request_url; #fastcgi_param REQUEST_URI $request_uri; }
We get information from $_SERVER['REQUEST_URI'], not from $_GET or $_POST. I want to define $request_uri to change /example to /module/controller/action. Please note that I do not want to trigger a redirect. I tried the code below to do this, but it doesn't work:

location /example {
    rewrite /module/controller/action;
}
How to change $request_uri in nginx?
I presume you get this when you run:

php bin/symfony_requirements

This is just a warning and you can safely ignore the message. I've responded to similar questions on this; see this URL for more details: https://github.com/symfony/symfony/issues/15007
I have a problem installing Symfony 3.1 on php7, nginx and ubuntu 16.04. I get this error:

intl ICU version installed on your system is outdated (55.1) and does not match the ICU data bundled with Symfony (57.1). To get the latest internationalization data upgrade the ICU system package and the intl PHP extension.

How can I solve this issue? Can I make Symfony use ICU 55.1 instead of ICU 57.1?
ICU version compatibility Symfony 3.1
For starters, don't use lein to run things in production. You can use lein uberjar to create a jar file with all your deps ready to run, and java -jar to run the app from the resulting jar. There is also the option of running lein ring uberwar to create a war archive to be run inside tomcat, which provides some other conveniences (like log rotation and integration with /etc/init.d as a service etc. on most Linux systems).

nginx sits in front of your app, on port 80. It will serve up the content by proxying your app. This is useful because nginx has many capabilities (especially regarding security) that you then don't need to implement in your own app, including optional integration with https and selinux integration. Using nginx in front of your app also prevents you from needing to run java as root (typically only the root user can use port 80). Furthermore you can let nginx serve static assets directly, rather than having to serve them from your app.
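With the uberjar running on port 3000, the nginx config from the question mostly stands as-is; the static-asset location just needs a root so nginx can serve those files itself. A sketch reusing the question's paths:

location ~ ^/(assets|images|javascript|stylesheets|system)/ {
    root /home/a/guestbook/resources/public;  # serve static assets from disk, bypassing the app
    expires max;
    add_header Cache-Control public;
}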
This is a follow-up to my question here. I've set up a home server (just my other laptop running ubuntu and nginx) and I want to serve clojure files.

I am asking for help understanding how this process works. I am sorry, at this point I am confused and I think I need to start over. I am asking a new question because I want to use nginx, not lein ring server, as suggested in the answer to that question.

First I started a project guestbook with leiningen, I ran lein ring server, and I see "Hello World" at localhost:3000. As far as I understand this has nothing to do with nginx! How does nginx enter this process? At first I was trying to create a proxy server with nginx and that worked too, but I did not know how to serve clojure files with that setup.

This is what I have in my nginx.conf file, adapted from this answer:

upstream ring {
    server 127.0.0.1:3000 fail_timeout=0;
}

server {
    root /home/a/guestbook/resources/public;

    # make site accessible from http://localhost
    server_name localhost;

    location / {
        # first attempt to serve request as file
        try_files $uri $uri/ @ring;
    }

    location @ring {
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_pass http://ring;
    }

    location ~ ^(assets|images|javascript|stylesheets|system)/ {
        expires max;
        add_header Cache-Control public;
    }
}

So I want to use my domain example.com (not localhost); how do I go about doing this?

EDIT

As per @noisesmith's comment I will opt to go with the lein uberjar option. As explained here, it appears very easy to create one:

$ lein uberjar
Unpacking clojure-1.1.0-alpha-20091113.120145-2.jar
Unpacking clojure-contrib-1.0-20091114.050149-13.jar
Compiling helloworld
[jar] Building jar: helloworld.jar
$ java -jar helloworld.jar
Hello world!

Can you also direct me to the right documentation about how I can use this uberjar with nginx?
Can I use Clojure with nginx?
http://web.archive.org/web/20180812021847/https://blog.martinfjordvald.com/2011/02/nginx-primer-2-from-apache-to-nginx/

Everything is inside. No more .htaccess, no more complex rules; use try_files.

EDIT: And if it is not obvious, do not trust online converters.
I've got the following .htaccess file for my apache:

Options +FollowSymlinks
# Options +SymLinksIfOwnerMatch
RewriteEngine On
RewriteBase /
RewriteRule ^$          index.php       [L]
RewriteCond %{REQUEST_FILENAME}         !-f
RewriteCond %{REQUEST_FILENAME}         !-d
RewriteRule (.*)        index.php?page=$1  [QSA,L]

Suddenly I had to change my webserver to nginx and I don't know why, but the mod_rewrite rules are not working. I used an online 'converter' to convert them, so I've got the following:

location / {
    rewrite ^/$ / index.php break;
    if ($request_filename ~ !-f){
        rewrite ^(.*)$ / index.php?page=$1 break;
    }
}

Could you help me figure out what's wrong? Thanks in advance, Marcell
Converting .htaccess to nginx (mod_rewrite)
If you can use gnu-awk you can make use of FPAT to specify the column data:

awk -v FPAT='\\[[^][]*]|"[^"]*"|\\S+' '{
    for(i=1; i<=NF; i++) {
        print "$"i" = ", $i
    }
}' file

The pattern matches:

\\[[^][]*]   Match from an opening [ till closing ] using a negated character class
|            Or
"[^"]*"      Match from an opening till closing double quote
|            Or
\\S+         1 or more non whitespace chars

Output:

$1 = ::1
$2 = -
$3 = -
$4 = [12/Oct/2021:15:26:25 +0530]
$5 = "GET / HTTP/1.1"
$6 = 200
$7 = 1717
$8 = "-"
$9 = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36"
This is an nginx access.log. It is delimited by 1) white space, 2) [ ], and 3) double quotes.

::1 - - [12/Oct/2021:15:26:25 +0530] "GET / HTTP/1.1" 200 1717 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36"
::1 - - [12/Oct/2021:15:26:25 +0530] "GET /css/custom.css HTTP/1.1" 200 202664 "https://localhost/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36"

After parsing, it is supposed to look like:

$1 = ::1
$4 = [12/Oct/2021:15:26:25 +0530] or 12/Oct/2021:15:26:25 +0530
$5 = "GET / HTTP/1.1"
$6 = 200
$7 = 1717
$8 = "-"
$9 = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36"

I tried some options like awk -F'[],] *' and awk -f [][{}], but they don't work with the full line. The nginx access.log shared here is just an example. I am trying to understand how to parse with a mix of such delimiters for usage in other complex logs.
How to parse logs ( nginx/apache access.log ) with mix of delimiters i.e. square bracket, space and double quotes? and optionally convert to json
This is how I solved my problem.# Add nginx config COPY .docker/nginx/prod.conf /temp/prod.conf RUN envsubst /app < /temp/prod.conf > /etc/nginx/conf.d/default.conf
I'm setting up a docker image from nginx to serve a Vue app as static files. My Vue app uses Vue-Router and it works perfectly on another server. My nginx config is just like this: https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations

And now I want to migrate to docker. This is my Dockerfile:

# build stage
FROM node:9.11.1-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:1.15.8-alpine as production-stage
COPY docker/nginx/prod.conf /etc/nginx/nginx.conf/prod.conf # [*]
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

And this is docker/nginx/prod.conf:

server {
    listen 80;
    server_name _ default_server;
    root /usr/share/nginx/html;
    location / {
        try_files $uri $uri/ /index.html;
    }
}

It works for the home page, for example http://192.168.1.10, but I get a 404 on other URLs, such as http://192.168.1.10/settings. The prod.conf was copied to the nginx folder; I do see it. Do you have any idea, or am I missing something?
How to config nginx for Vue-router on Docker
Yes, this behaviour is expected, although the docs also say:

If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:

location /some/path/ {
    proxy_pass http://127.0.0.1;
}

Nginx engineers say the same: https://serverfault.com/questions/459369/disabling-url-decoding-in-nginx-proxy

However, if you append $request_uri to proxy_pass (and strip the locale prefix beforehand), it may work, as said by an Nginx engineer:

set $modified_uri $request_uri;

if ($modified_uri ~ "^/([\w]{2})(/.*)") {
    set $modified_uri $2;
}

proxy_pass http://example$modified_uri;
I have an nginx location directive whose purpose is to "remove" the localization prefix from the URI for the proxy_pass directive. For example, to make the URI http://example.com/en/lalala use proxy_pass http://example.com/lalala:

location ~ '^/(?<locale>[\w]{2})(/(?<rest>.*))?$' {
    ...
    proxy_pass http://example/$rest;
    ...
}

This way the rest variable will be decoded when passed to the proxy_pass directive. It seems to be expected behavior. The problem is when my URI contains an encoded space %20 passed from the client:

http://example.com/lala%20lala

nginx decodes the URI to http://example.com/lala lala. I can see it in my error.log.

The question is: is it possible to use the encoded rest variable somehow, as it is passed from the client? If I am doing something completely wrong, please suggest the right way. Thank you.
Nginx - encoding (normalizing) part of URI
It's not just a matter of performance, but a matter of compliance with the TLS specifications.

"I guess that most browsers can parse through these files and figure out what the correct order of the chain should be."

Some browsers may be tolerant, but the TLS specification explicitly says that you MUST present the certificate chain in the right order:

certificate_list: This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it. Because certificate validation requires that root keys be distributed independently, the self-signed certificate that specifies the root certificate authority MAY be omitted from the chain, under the assumption that the remote end must already possess it in order to validate it in any case.

I suppose some servers could re-arrange the certificate chain in the right order when reading their configuration before sending their cert chain (in which case there might still be a performance issue), but this isn't always the case.

I haven't tried to configure Nginx with a chain in the wrong order, but I know Apache Httpd will send the chain exactly as configured (so in the wrong order if it's configured in the wrong order). In doubt, I'd suggest configuring your server with the chain in the right order to make sure it's compliant with the TLS specification.
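Concretely, for the files listed in the question, a leaf-first concatenation would be (a sketch; per the spec quoted above, the self-signed root could also be left out entirely):

cat domain_com.crt \
    COMODORSADomainValidationSecureServerCA.crt \
    COMODORSAAddTrustCA.crt \
    AddTrustExternalCARoot.crt > domain.crt

# inspect what the server actually sends, and in which order
# (example.com stands in for the real domain)
openssl s_client -connect example.com:443 -servername example.com -showcerts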
From what I can tell, I can stack the "add trust" certificates in any order when I put them together into my domain.crt file before installing it on the server. I guess that most browsers can parse through these files and figure out what the correct order of the chain should be. But in terms of performance, is there a correct way to stack them that will cause the browsers to take less time to analyze the certificate?

For example, a certificate I just installed had the following files that needed to be combined:

domain_com.crt
COMODORSADomainValidationSecureServerCA.crt
COMODORSAAddTrustCA.crt
AddTrustExternalCARoot.crt

Is this the best order to concatenate them to the file, assuming the contents of the first filename show up at the top of the file?
Performance: Does SSL trust chain order matter? [closed]
I just had this exact same problem. I was using Ubuntu 12.04 and Linux Mint 14, so a different OS but likely to have the same issues.

A couple of issues may be happening. Firstly, you need to have php5-fpm installed (FastCGI Process Manager). I was trying to run it with my standard version of PHP but it was not working: http://www.php.net/manual/en/install.fpm.php

I also had Apache installed, and even if it weren't running it must have had some conflict, because once I uninstalled Apache I was able to execute the PHP files.

I would also look at this line:

fastcgi_pass 127.0.0.1:9000;

And consider changing it to:

fastcgi_pass unix:/var/run/php5-fpm.sock;

Here is a detailed guide to the installation of Nginx and PHP5-FPM for RHEL (and other OSes): http://www.if-not-true-then-false.com/2011/install-nginx-php-fpm-on-fedora-centos-red-hat-rhel/
I've set up an Nginx PHP server on a Linux RHEL machine. When accessing HTML files all goes well, but when trying to access a PHP file, the file is downloaded instead of being executed.

This is my nginx.conf:

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}

...and this is the server block:

server {
    listen 80;
    server_name {mywebsitename};
    #access_log logs/host.access.log main;

    location / {
        root /usr/share/nginx/html/{mywebsitename}/;
    }

    location /ngx_status_2462 {
        stub_status on;
        access_log off;
        allow all;
    }

    location ~ \.php$ {
        # fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html/{mywebsitename}$fastcgi_script_name;
        include fastcgi_params;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Nginx downloads php instead of running it
As Dag Nabbit stated, a Minecraft server does not talk HTTP. You would typically do this via NAT. A proxy server needs to know the protocol, because as the name suggests, it acts on behalf of the client. Nginx knows various protocols, not just HTTP, but Minecraft is not one of them. You can however write a proxy module for this protocol and use the existing nginx infrastructure. Since I'm not familiar with the protocol, I can't comment on whether this would have any advantages over NAT.
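As a side note (an assumption beyond the original answer: it requires nginx 1.9+ built with the stream module), newer nginx versions can forward raw TCP without understanding the protocol. A minimal sketch, reusing the port numbers from the question:

stream {
    server {
        listen 25565;
        proxy_pass 127.0.0.1:25500;
    }
}

This still cannot route two subdomains on one port, because plain TCP proxying never sees a hostname; each backend would need its own listening port or IP.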
I'm trying to run two Minecraft servers on the same machine on two different ports. I want to reference them based on subdomains:

one.example.com -> :25500
two.example.com -> :25501

I have used nginx for things like this before, but it's not working with Minecraft. It's responding with HTTP status 400. Here is a sample from my log:

192.168.0.1 - - [21/Apr/2013:17:25:40 -0700] "\x02<\x00\x0E\x00t\x00h\x00e\x00s\x00a\x00n\x00d\x00y\x00m\x00a\x00n\x001\x002\x003\x00\x1C\x00t\x00e\x00s\x00t\x00.\x00r\x00y\x00a\x00n\x00s\x00a\x00n\x00d\x00y\x00.\x00i\x00s\x00-\x00a\x00-\x00g\x00e\x00e\x00k\x00.\x00c\x00o\x00m\x00\x00c\xDD" 400 173 "-" "-"

Here is my nginx config:

upstream mine1 {
    server 127.0.0.1:25500;
}

upstream mine2 {
    server 127.0.0.1:25501;
}

server {
    listen 25565;
    server_name one.example.com;

    access_log /var/log/nginx/one.access;
    error_log /var/log/nginx/one.error;

    location / {
        proxy_pass http://mine1;
    }
}

server {
    listen 25565;
    server_name two.example.com;

    access_log /var/log/nginx/two.access;
    error_log /var/log/nginx/two.error;

    location / {
        proxy_pass http://mine2;
    }
}

If I'm reading this correctly, nginx is responding with 400. My guess is the Minecraft client is not sending valid HTTP headers and Nginx is tossing out the request. But I'm totally at a loss. Any help would be appreciated.
Nginx proxy_pass to Minecraft server
The best way is to use the nginx server to serve your static files and let your node.js server handle the dynamic content.

It is usually the most optimized solution, reducing the number of requests on your node.js server, which is slower at serving static files than nginx, for example.

The configuration to achieve this is very easy if you already set up a reverse proxy for your nodejs app. An nginx configuration could be:

root /home/myapp;

# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;

server_name _;

location /public/ {
    alias /home/myapp/public/;
}

location / {
    proxy_pass http://IPADRESSOFNODEJSSERVER:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    #try_files $uri $uri/ =404;
}

Every request with /public/ as the first part of the URL will be handled by nginx, and every other request will be proxied to your nodejs app at IPADRESSOFNODEJSSERVER:NODEJSPORT; usually the IPADRESSOFNODEJSSERVER is localhost.

The docs section of express tells the same: http://expressjs.com/en/advanced/best-practice-performance.html#proxy

"An even better option is to use a reverse proxy to serve static files; see Use a reverse proxy for more information."

Moreover, nginx lets you easily define caching rules, so for static assets that don't change it can speed up your app with one line:

location /public/ {
    expires 10d;
    alias /home/myapp/public/;
}

You can find a lot of articles that compare both methods on the internet, for example: http://blog.modulus.io/supercharge-your-nodejs-applications-with-nginx
I already use nginx as a reverse proxy to serve my node.js webapps (3000 <-> 80, for example). Actually, I serve my assets from the node app, using the express.static middleware.

I read again and again that nginx is extremely efficient at serving static files.

The question is, what is best? Serving assets as I already do, or configuring nginx to serve the static files itself directly?

Or is it almost the same?
Which is most efficient: serving static files directly by nginx or by node via nginx reverse proxy?
Resolved after finding https://confluence.jetbrains.com/display/YTD65/Configuring+Proxy#ConfiguringProxy-IISreverseproxy
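For reference, a minimal sketch of what such a configuration typically looks like with IIS ARR + URL Rewrite (an assumption, not copied from the linked page; the backend port and rule name are placeholders, and HTTP_X_FORWARDED_PROTO must first be added to URL Rewrite's allowed server variables):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="ReverseProxyToYouTrack" stopProcessing="true">
        <match url="(.*)" />
        <serverVariables>
          <!-- sent to the backend as the X-Forwarded-Proto request header -->
          <set name="HTTP_X_FORWARDED_PROTO" value="https" />
        </serverVariables>
        <action type="Rewrite" url="http://localhost:8080/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>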
What is the IIS equivalent of this configuration in NGINX?

proxy_set_header X-Forwarded-Proto https;

I am running JetBrains YouTrack on a Windows server, using IIS as a terminating SSL proxy, and get this error when trying to log in:

HTTP ERROR 405
Problem accessing /hub/auth/login. Reason: HTTP method POST is not supported by this URL
Powered by Jetty://

My web.config looks like this: I am trying to follow the solution from this source: https://confluence.jetbrains.com/display/YTD65/Linux.+JAR+in+Nginx+Web+Server, but for IIS.
IIS Equivalent of "proxy_set_header X-Forwarded-Proto https;"
Unicorn was not designed to handle "slow clients". You can read more about this in the PHILOSOPHY help file:

Most benchmarks we've seen don't tell you this, and unicorn doesn't care about slow clients... but you should.

A "slow client" can be any client outside of your datacenter. Network traffic within a local network is always faster than traffic that crosses outside of it. The laws of physics do not allow otherwise.

Persistent connections were introduced in HTTP/1.1 to reduce latency from connection establishment and TCP slow start. They also waste server resources when clients are idle.

Persistent connections mean one of the unicorn worker processes (depending on your application, it can be very memory hungry) would spend a significant amount of its time idle, keeping the connection alive and not doing anything else. Being single-threaded and using blocking I/O, a worker cannot serve other clients while keeping a connection alive. Thus unicorn does not implement persistent connections.

If your application responses are larger than the socket buffer or if you're handling large requests (uploads), worker processes will also be bottlenecked by the speed of the client connection. You should not allow unicorn to serve clients outside of your local network.
I'm a bit confused about this architecture. On one of the projects I'm working on, Unicorn was chosen as the Rails server, and it is put behind the Nginx web server. As I understand it, Unicorn is a fully functional web server, and we don't plan to host any other Rails applications on the same server instance.

So my question would be: what are the benefits of having the additional layers in the chain:

client -> nginx -> unicorn -> unicorn worker
Is it necessary to put Unicorn behind Nginx (or Apache)?
Big idea: To gain full control of your nginx configuration, you need to override the default settings in the .platform/nginx/nginx.conf file in your project directory.

The problem: When I ssh'd into my EB instance, I found that the file /etc/nginx/nginx.conf still includes the default setting gzip off. For some reason my extension of this file did not overwrite this setting. I suppose it's because in Amazon Linux 2, the proxy configurations should be under the .platform/nginx directory.

Solution: I used ssh to obtain a copy of nginx.conf, added it to my project directory under .platform/nginx, commented out the original settings for gzip, and added the new gzip settings. Below is a snippet of my updated nginx.conf file:

# Original settings
#gzip off;
#gzip_comp_level 4;
#gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# New settings
gzip on;
gzip_static on;
gzip_comp_level 9;
gzip_proxied any;
gzip_types application/javascript application/rss+xml application/vnd.ms-fontobject application/x-font application/x-font-opentype application/x-font-otf application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml application/json font/opentype font/otf font/ttf image/svg+xml image/x-icon text/css text/html text/javascript text/plain text/xml;

After deploying, it finally worked! Hope this will help others with the same question.

Thanks to @Marcin's suggestion to ssh into my instance, which helped me figure out what's going on.
I have an AWS EB environment with Python 3.7 running on Amazon Linux 2/3.1.2, using Nginx as the proxy server. I'm trying to add gzip compression for my application. I tried out several tutorials online but none of them appear to work for me. I'm also new to AWS, so I might not be familiar with some of its services.

Currently, I have a directory tree like this:

-- .ebextensions
-- .platform
   -- nginx
      -- conf.d
         -- gzip.conf
-- (other files)

I tried adding a config file in .ebextensions to create a .conf to enable gzip compression, but it didn't seem to work. I also tried switching the proxy to Apache, but no luck.

This tutorial says that for the latest version of Amazon Linux 2, the nginx config files should be placed in the .platform folder, so I did as noted. However, my gzip.conf file still didn't seem to work: files are still rendered in their original formats.

Currently my gzip.conf:

gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/html text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";

EDIT: I SSH'd into my eb instance and found this file at /etc/nginx/conf.d/gzip.conf, and the content is the same as what I uploaded. Would this path be correct to enable gzip?

Any help will be appreciated!
Using gzip in AWS ElasticBeanstalk Nginx
First, mTLS and TLS/SSL termination are not exactly the same thing. mTLS is mutual authentication 🤝, meaning the client authenticates the server and the server authenticates the client.

Typically the SSL termination takes care of the client authenticating the server, but it takes extra client and server support for the server to be able to authenticate the client.

Also, the Certificate Authority and what I believe you are referring to as a certificate manager are two different things.

For Kubernetes, you can set up TLS/SSL termination on an Ingress using an ingress controller like Nginx. You can totally use a self-signed certificate with your own Certificate Authority with this. The only thing is that your requests will not be verified by your client/browser unless the CA (Certificate Authority) is added as a trusted entity.

Now with respect to mTLS, you don't necessarily care if you use exactly the same CA, cert, and key to authenticate both ways. However, you would have to force your ingress to authenticate the client, and with the Nginx ingress controller you can do it with these annotations:

nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-secret: "default/mycerts"

You would create the above secret in K8s with something like this:

kubectl create secret generic mycerts --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt

Some more details in this blog.

Note: Service meshes like Istio, Linkerd, and Consul support mTLS out of the box between your services. ✌️
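Putting those pieces together, a sketch of a full Ingress manifest (the hostname, service name, and TLS secret name are hypothetical placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    # require and verify a client certificate (the mTLS step)
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/mycerts"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - echo.example.com
      secretName: echo-tls      # server cert/key used for TLS termination
  rules:
    - host: echo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-service
                port:
                  number: 80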
I have a Kubernetes cluster (AKS) that is hosting a REST echo service. The service runs fine via HTTP. I am using NGINX ingress to route traffic. I now want to set up this service via HTTPS and with mTLS, so forcing the client to specify a certificate to be able to communicate with the echo service. This is a POC, so I am using a self-signed cert.

What Kubernetes components do I need to set up to be able to pull this off? I read NGINX documentation but wasn't able to understand if I need to create a Certificate Authority/cert-manager in the Kubernetes cluster and use that to configure an ingress service to perform the mTLS step. I am OK with terminating the SSL at ingress (after mTLS has been performed) and allowing an unsecured channel from ingress to the echo-service.

I am hoping someone with experience with this kind of setup can provide some guidance.

Thanks!
mTLS setup using self-signed cert in Kubernetes and NGINX
It fails because you need to use the FQDN to resolve the name.

Using just the hostname will usually work because in Kubernetes the resolv.conf is configured with search domains, so that you don't usually need to provide a service's FQDN. However, specifying the FQDN is necessary when you tell nginx to use a custom name server, because it does not get the benefit of these domain search specs.

In nginx.conf, added at the server level:

resolver kube-dns.kube-system.svc.cluster.local valid=10s;

Then use a FQDN in proxy_pass:

proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;
So, I would like to have nginx resolve hostnames for backends at request time. I expect to get HTTP 502 Bad Gateway when the back-end service is down, and I expect a service response when it's up.

I use the nginx:1.15-alpine image for nginx, and here is what I have in its config:

server {
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    server_name mysystem.com;
    listen 80;
    client_max_body_size 20M;

    location = /nginx_status {
        stub_status on;
        access_log off;
    }

    # Services configuration
    location ~ /my-service/ {
        set $service_endpoint http://my-service.namespace:8080;
        proxy_pass $service_endpoint$request_uri;
        include includes/defaults-inc.conf;
        include includes/proxy-inc.conf;
    }
}

So, when I make the request to nginx, I get a 502 Bad Gateway response. Nginx's log says the name is not found:

2018/06/28 19:49:18 [error] 7#7: *1 my-service.namespace could not be resolved (3: Host not found), client: 10.44.0.1, server: mysystem.com, request: "GET /my-service/version HTTP/1.1", host: "35.229.17.63:8080"

However, when I log into the container with a shell (kubectl exec ... -- sh) and test the DNS resolution, it works perfectly.

# nslookup my-service.namespace kube-dns.kube-system.svc.cluster.local
Server:    10.47.240.10
Address 1: 10.47.240.10 kube-dns.kube-system.svc.cluster.local

Name:      my-service.namespace
Address 1: 10.44.0.75 mysystem-namespace-mysystem-namespace-my-service-0.my-service.namespace.svc.cluster.local

Moreover, I can wget http://my-service.namespace:8080/ and get a response.

Why can't nginx resolve the hostname?

Update: How I managed to resolve it: In nginx.conf, at the server level, I added a resolver setting:

resolver kube-dns.kube-system.svc.cluster.local valid=10s;

Then I used a FQDN in proxy_pass:

proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;
nginx won't resolve hostname in K8S [duplicate]
Python sets __name__ to "__main__" when the script is the entry point for the Python interpreter. Since Gunicorn imports the script it is running, that script will not be the entry point and so will not have __name__ set to "__main__".
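A quick illustration of that behavior (the file name mod.py is just a placeholder):

# mod.py
print(__name__)

$ python mod.py          # mod.py is the entry point
__main__
$ python -c "import mod" # mod.py is imported, which is what Gunicorn does
mod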
I have this in /home/myname/myapp/app.py:

from flask import Flask

app = Flask(__name__)

print __name__

@app.route('/')
def index():
    return "Hello world!"

if __name__ == '__main__':
    print 'in if'
    app.run()

When I run:

$ gunicorn app:app -b 127.0.0.2:8000

It says:

2013-03-01 11:26:56 [21907] [INFO] Starting gunicorn 0.17.2
2013-03-01 11:26:56 [21907] [INFO] Listening at: http://127.0.0.2:8000 (21907)
2013-03-01 11:26:56 [21907] [INFO] Using worker: sync
2013-03-01 11:26:56 [21912] [INFO] Booting worker with pid: 21912
app

So the __name__ of the app is app, not __main__ like I need it to be to run the if statement.

I tried putting an empty __init__.py in the directory. Here is my nginx sites-enabled default:

server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /home/myname/myapp;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.2:8000;
    }
}

Edit... While this app does print 'Hello world' when I visit the site, the point is that I need __name__ to equal '__main__'. I also just want to know why it doesn't and how to make it equal '__main__'.

Edit 2... I just had the epiphany that I do not need to run app.run() since that is what Gunicorn is for. Duh. But I would still like to figure out why __name__ isn't '__main__'.
Flask Gunicorn app can't get __name__ to equal '__main__'
Sure you can. Take a look at thenginx.conf example on tornado's homepage.The relevant bits in your case would be:http { # Enumerate all the Tornado servers here upstream frontends { server 127.0.0.1:8000; server 127.0.0.1:8001; server 127.0.0.1:8002; server 127.0.0.1:8003; } ... server { ... # for your "static" website location ^~ /static/ { root /var/www; if ($query_string) { expires max; } } # for your tornado's app location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect false; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://frontends; } ... } ... }
I have a static website served up by nginx right now, and I want to develop an app with Tornado on the same server.The Tornado documentation mentions that wsgi doesn't support non-blocking requests.Is there a way for me to get them to work together (on the same server)?
running Tornado and Nginx on same server
The webserver needs a Unix domain socket to connect to the FastCGI application, but the socket can't be created. Most likely the directory you want it to be in doesn't exist (the socket file itself is created automatically when you bind, but its parent directory must already exist).
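A sketch of the fix, assuming a hypothetical socket path of /var/run/nginx/perl_cgi-dispatch.sock (substitute whatever path the wrapper script actually uses, and the user nginx runs as):

# create the missing directory and let the wrapper's user write to it
mkdir -p /var/run/nginx
chown www-data:www-data /var/run/nginx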
So I am following this guide: http://technotes.1000lines.net/?p=23 and I am going through the steps. I have a VPS (slicehost.com) with Debian Etch, serving a website (static so far) with nginx. I used wget to download FastCGI and I did the usual make / make install routine.

So I guess since FastCGI can't normally run CGI scripts, you have to use some type of Perl wrapper to interpret the Perl.

Now I run this script http://technotes.1000lines.net/fastcgi-wrapper.pl and I run into the exact same problem that a person ran into on the page where the script was submitted: http://www.ruby-forum.com/topic/145858 (I'm not a Ruby person and there is nothing Ruby-oriented in there).

I keep getting a

# bind/listen: No such file or directory

and I have no idea how to proceed. I would appreciate any help and can give any more details that anyone would need.
How can I run Perl scripts using FastCGI on Nginx?
PythonAnywhere developer here: yes, that's right -- we do have nginx and uWSGI installed. When you create a website on the "Web" page on our site, what happens under the hood is (simplifying a bit) that we generate the appropriate nginx/uWSGI configuration files for you and start everything up so that you only need to work on the Django code.The reason that those tools (or similar ones like Apache and mod_wsgi) are necessary is that Django's built-in webserver is not designed for production use. You can run its "manage.py runserver" command to make it serve up pages on your own machine, but the system it uses for that is not designed for security or efficiency -- it just provides an easy way for you to get something running for debugging purposes. The same is true for the built-in webservers for the other Python web frameworks like Flask and web2py.nginx is designed to be fast, efficient, and secure, so it's a better choice to handle incoming web requests when your website is on the public Internet and thus subject to large amounts of traffic (if you're lucky and your site takes off) and also to abuse from hackers. That's not to say that it automatically makes your site fast and secure, of course, but at least it means that you're starting with the right system. It's also much better at serving static files (like your CSS, JavaScript, images, and so on) than Django is, because that's what it was built for.uWSGI is designed to receive incoming web requests and rapidly and efficiently delegate processing them to multiple worker processes, then collate the responses and send them back to nginx.Of course, all of that could in theory be built into Django instead -- but it would be a lot of work for the Django team to do that, and it would be a waste of time for them to re-invent the wheel rather than focusing on the areas where Django provides its real benefits, of making it easy to quickly develop complex websites.
I have experience hosting simple Django projects on the PythonAnywhere platform (I didn't have to install nginx and uWSGI). Many people use nginx + uWSGI with Django; why would that be required?

As I understand it, nginx is a web server, load balancer, mail proxy, and HTTP cache, and uWSGI is a web server gateway interface.

Are all those things included by default in the Heroku / PythonAnywhere platforms?
Django, why would I need nginx and uWSGI during hosting?
Well, actually you are right that http is the problem, but not exactly the one in your code block. Let's explain it a bit.

In your nginx.conf file you have something similar to this:

http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

So everything you write in your conf files is inside this http block/scope. But RDP is not HTTP; it is a different protocol.

The only workaround I know of for nginx to handle this is to work at the TCP level. So inside your nginx.conf, and outside the http block, you have to declare the stream block like this:

stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}

With the above configuration you are just proxying to your backend at the TCP layer, with a cost of course. As you may notice, it's missing the server_name attribute; you can't use it in the stream scope. Plus you lose all the logging functionality that comes at the http level.

For more info on this topic check the docs.
I'm using the below config in nginx to proxy an RDP connection:

server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}

but the connection doesn't go through. My guess is that the problem is the http in proxy_pass. Googling "Nginx RDP" didn't yield much.

Does anyone know if it's possible, and if yes, how?
How to proxy RDP via Nginx
After having some discussions in this post and on Twitter, it looks like there is no easy way to achieve what I want via Webpack. The files are only served as static files at runtime, and it is not possible to exclude a file at build time and include it at runtime.

So I decided to go with the solution/workaround I had in mind: changing the static files when starting up the docker container.

I create my docker image by doing:

npm run build:prod
docker build -t angularapp .

I am using the official nginx docker image as my base image, and the Dockerfile looks like:

FROM nginx:1.11.1
COPY dist /usr/share/nginx/html
COPY run.sh /run.sh
CMD ["bash", "/run.sh"]

The run.sh is used to modify the config file via sed and to start nginx afterwards:

#!/bin/sh
/bin/sed -i "s|http://localhost:8080|${BASE_URL}|" /usr/share/nginx/html/api.config.chunk.js
nginx -g 'daemon off;'

This allows me to configure the BASE_URL via an environment variable in my docker-compose.yml file (simplified):

version: '2'
services:
  api:
    image: restapi
  frontend:
    image: angularapp
    environment:
      BASE_URL: https://api.test.example.com

With this solution/workaround I can deploy the docker image created by my Jenkins job for a specific version to all my environments (development, staging, production), configuring the REST API endpoint via an environment variable when starting the docker container.
We want to deploy our Angular 2 app using Docker images in different environments (staging/test, production, ...).

When developing locally we connect to the backend REST API via http://localhost:8080, but when we deploy to the different environments we want to use the same Docker image and connect to a different REST API endpoint.

What would be the preferred way to inject this configuration into the Docker container at runtime?

Is there a way to do this via environment variables? Can we do this via a plain text file containing something like:

{ "BASE_URL": "https://api.test.example.com" }
configure Angular 2 Webpack App in Docker container environment specific
If you check the docs for proxy_pass, proxy_pass needs to be in a location, "if in location", or limit_except block. You have it in a server block.

Try replacing your usage of proxy_pass with:

location / {
    proxy_pass ...
}
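Applied to the first server block from the question, that would look something like this (a sketch; proxy_set_header is actually valid at the server level, but it is shown inside the location here for clarity):

server {
    listen 80;
    server_name server1.com www.server1.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1003;
    }
}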
I am totally new to Nginx and need your help. Basically I have a single server with a single IP address, but I want to host two different web applications on the server with different domain names. So, basically, for each domain name, I want it to redirect to a different port number. I tried the below and got an error:

[root@mysvr nginx]# nginx -t -c /etc/nginx/nginx.conf
nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/nginx.conf:41
nginx: configuration file /etc/nginx/nginx.conf test failed

Following is the Nginx setting. Line 41 is where the proxy_pass is.

server {
    listen 80;
    server_name server1.com www.server1.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:1003;
}

server {
    listen 80;
    server_name server2.com www.server2.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://192.168.1.1:1004;
}

Thank you!
Nginx reverse proxy setting
The original example cannot be used directly, because the main configuration is at /etc/nginx/nginx.conf. /etc/nginx/nginx.conf has the http directives, which include the sites-enabled/* includes. The only changes to be made in /etc/nginx/nginx.conf are:

worker_processes 4;
worker_connections 1024;

Also, remove text/html from it because it's already gzipped by default.

The end result: your nginx.conf for the app should have no http directives, just upstream and server.
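A sketch of the resulting split (values taken from the question; the sites-enabled file name is a placeholder):

# /etc/nginx/nginx.conf (relevant parts only)
user deployer sudo; # for systems with a "nogroup"
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
}

http {
    ...
    include /etc/nginx/sites-enabled/*;
}

# /etc/nginx/sites-enabled/myapp
upstream abc {
    ...
}

server {
    ...
}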
I'm using nginx 1.4.1. After copying unicorn's example nginx.conf, I found out the settings must be moved to different directives. I still couldn't manage to place the following settings in the nginx.conf file: worker_processes, user, pid, and the events block. When I place them as they are now, the log shows "directive is not allowed here". What should I fix?

worker_processes 1;
user deployer sudo; # for systems with a "nogroup"
pid /run/nginx.pid;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
}

upstream abc {
    ...
}

server {
    ...
}

Update 1: I know about this post, but it's weird that whatever I am doing is not working. I couldn't find anything about it in the nginx docs.
nginx directive is not allowed here in unicorn's example nginx.conf