You haven't configured PHP in the server section, so PHP files will obviously be sent as plain text. How are you planning to run PHP? As FastCGI?

Update: The configuration you have shown here still does not include anything at all about PHP or FastCGI. Try something like this:

```
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_index index.php;
    include fastcgi_params;
}
```
I have this nginx vhost file (for domain trinityplex.com):

```
server {
    # php/fastcgi
    listen 80;
    server_name trinityplex.com www.trinity.com;
    access_log /home/web/trinity_web/log/access.log;
    root /home/web/trinity_web/public;

    location / {
        index index.html index.htm index.php;
    }
}
```

But if I go to trinityplex.com, nginx displays 502 Bad Gateway and throws the index file at me: Chrome downloads index.php like a normal download. It's ridiculous; I have never seen that. If I ask PHP for its version, it dumps:

```
PHP 5.3.5-0.dotdeb.0 with Suhosin-Patch (cli) (built: Jan 7 2011 00:30:52)
Copyright (c) 1997-2009 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
    with Suhosin v0.9.32.1, Copyright (c) 2007-2010, by SektionEins GmbH
```

Do you have any ideas how to fix that? Here is the nginx config file:

```
user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server_names_hash_bucket_size 128;
    gzip on;
    include /usr/local/nginx/sites-enabled/*;
}
```
nginx trouble loading index file
Red Hat and related distributions (Fedora, CentOS) keep their source RPMs in a highly regular directory tree. For RHEL5 you want:

ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/

For other releases, you can browse the FTP server until you find what you want. Otherwise, google for the exact version of nginx you have (rpm -q nginx).

Assuming you can find the SRPM, install it with rpm:

```
rpm -ivh nginx-xxxx.src.rpm
```

This'll put the sources and build files in /usr/src/redhat/{BUILD,SPEC,SRC,SOURCES}. You can modify the .spec file in /usr/src/redhat/SPEC to build the module you want along with the rest of nginx, or you can build nginx manually.

Which module do you want to build? In Fedora's nginx.spec, several modules are specified when configure is run. This may be as simple as adding a line here:

```
./configure \
    [snip...]
    --with-http_realip_module \
    --with-http_addition_module \
    --with-http_sub_module \
    --with-http_dav_module \
    --with-http_flv_module \
    --with-http_gzip_static_module \
    --with-http_stub_status_module \
    --with-http_perl_module \
    [snip...]
```

After adding whatever changes to nginx.spec, you can build the final RPM with rpmbuild:

```
rpmbuild -ba nginx.spec
```

Assuming the package builds without error, rpmbuild will leave it in /usr/src/redhat/RPMS/.

Update: yum will want to replace your nginx package as updates become available. You will probably want to rebuild each new package as it becomes available, using the same process as above. However, if security is not a concern, you can simply exclude nginx from the update list by adding the following to your yum config (probably /etc/yum.repos.d/${repo}.repo or similar; be sure to associate it with the right repo):

```
exclude=nginx*
```

Or run yum with the --exclude option:

```
yum --exclude=nginx*
```
Hello, I asked this question on Super User but I did not get a good answer there, and I really need the answer. I know some of you here can answer this question.

I have installed nginx via yum. Now I want to add a module, but I have to compile the source again and include the new module. The problem is that I can't find the source. Does someone know what I have to do to recompile the source and get the module in?

Update: I did everything in the answer from Patrick and it worked out great. However, now when I run yum update, it wants to update the installed rpm with the same version. Can I just let it update, or should I specify that it is already up to date?
Can this package be recompiled
This way will work:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  auth_request_set $email $upstream_http_x_auth_request_email;
  add_header X-Auth-Request-Email $email;
```

The only downside is that it will add the header to all the HTTP requests, even for CSS/JS files.
I'm facing an issue with oauth2-proxy and Ingress Nginx (with the latest versions) in a Kubernetes cluster where the X-Auth-Request headers are not being passed through to the client during the standard OAuth authentication flow. I'm specifically using Azure as the auth provider.

Here's the relevant portion of my OAuth proxy configuration:

```
pass_access_token = true
pass_authorization_header = true
pass_user_headers = true
set_xauthrequest = true
```

When I explicitly call /oauth2/auth, I get the headers as expected. However, during the standard OAuth2 auth flow, none of the headers are returned with any request.

This situation is somewhat similar to another question here: "Oauth2-Proxy do not pass X-Auth-Request-Groups header", but in my case, I'm not receiving any of the X-Auth-Request headers, except when I call /oauth2/auth directly.

I've also tried adding the following snippet to my application Ingress configuration with no luck:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  auth_request_set $email $upstream_http_x_auth_request_email;
  access_by_lua_block {
    if ngx.var.email ~= "" then
      ngx.req.set_header("X-Auth-Request-Email", ngx.var.email)
    end
  }
```

I've gone through multiple configurations, read numerous blog posts, and scoured GitHub issues, but haven't been able to resolve this issue. Does anyone have any insights into what could be causing this behavior?
oauth2 proxy with Ingress nginx not passing X-Auth-Request headers during standard auth flow
You need to add the APPLICATION_ROOT parameter to your Flask app:

```python
from flask import Flask, url_for
from werkzeug.serving import run_simple
from werkzeug.wsgi import DispatcherMiddleware

app = Flask(__name__)
app.config['APPLICATION_ROOT'] = '/cn'
```

If you need to host more than one application on your server, you can configure nginx to redirect all requests to your specific Flask app served by gunicorn like this (it is not necessary if your server hosts only one application). Find out more about gunicorn and nginx here: https://docs.gunicorn.org/en/stable/deploy.html

```
server {
    listen 8000;
    server_name example.com;
    proxy_intercept_errors on;
    fastcgi_intercept_errors on;

    location / {
        include proxy_params;
        proxy_pass http://unix:/path_to_example_flask_app_1/app.sock;
    }

    location /cn/ {
        include proxy_params;
        proxy_pass http://unix:/path_to_example_flask_app_cn/app.sock;
    }
}
```

Serve the Flask app with gunicorn (a complete example here: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04):

```
# move into project directory
/path_to_gunicorn/gunicorn --workers 3 --bind unix:app.sock -m 007 run:app
```

If you are using flask_restful instead, you can also specify the root path in the following way:

```python
from flask import Flask
from flask_restful import Api

app = Flask(__name__)
app.debug = False
api = Api(app, prefix='/cn')
api.add_resource(ResourceClass, '/example_path')  # served when the resource is requested at /cn/example_path

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8000)
```
I have a Flask app which I want to host on a subfolder of a website, like example.com/cn. I configured my nginx like:

```
location /cn {
    proxy_pass http://localhost:8000/;
}
```

So if I access example.com/cn, it will redirect to the index page of Flask. However, I have written the routes of other pages in Flask like app.route('/a'). So if I click the link of page a, the URI is example.com/a, and then nginx cannot redirect it to the right page.

I think I could rewrite all the routes in Flask like app.route('/cn/a'), but it's complex. And if someday I want to deploy it on example.com/en, I think I would need to rewrite all the routes again. Does anyone have other methods?
How to host a Flask app on a subfolder / URL prefix with Nginx?
Maybe remove http:// because it is a TCP connection (not an HTTP connection), and add so_keepalive=on to listen 5432; so the connection stays open.

Maybe you have to use a stream block instead of the http block: https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
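For what it's worth, a minimal sketch of that stream suggestion, using the ports from the question (untested; note the stream block sits at the top level of nginx.conf, alongside http, not inside it):

```
stream {
    server {
        # plain TCP: there is no server_name here, the listener is per-port
        listen 5432 so_keepalive=on;
        proxy_pass 127.0.0.1:15432;
    }
}
```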
I have a PostgreSQL server started on a remote machine on port 15432. I want to configure NGINX to make the database available remotely via host db.domain.my and port 5432. The configuration I tried is:

```
server {
    listen 5432;
    server_name db.domain.my;
    location / {
        proxy_pass http://127.0.0.1:15432/;
    }
}
```

When I try to connect to the database remotely with psql I get the error:

```
$ psql -h db.domain.my -U myuser
psql: received invalid response to SSL negotiation: H
```

I also tried to add the ssl word after listen 5432, without any success. How do I configure NGINX correctly?
How do I configure proxy_pass on NGINX for PostgreSQL?
In your NGinx container you only need the statics, and in your PHP-FPM container you only need the PHP files. If you are capable of splitting the files, you don't need any file in both places.

"Why isn't it enough to add it just only to my webserver? A web server is a place that holds the files and handles the request..."

NGinx handles requests from users. If a request is for a static file (configured in the NGinx site), it sends the contents back to the user. If the request is for a PHP file (and NGinx is correctly configured to use FPM in that place), it sends the request to the FPM server (via socket or TCP), which knows how to execute PHP files (NGinx doesn't know how to do that). You can use PHP-FPM or whatever other interpreter you prefer, but this one works great when configured correctly.
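To make the split concrete, a hypothetical location block for the nginx container that hands PHP requests to the FPM container (the service name fpm, the internal port 9000 and the /var/www/src mount come from the question's docker-compose file; the rest is an assumption):

```
location ~ \.php$ {
    # path of the script inside the *FPM* container, where the sources are mounted
    fastcgi_param SCRIPT_FILENAME /var/www/src$fastcgi_script_name;
    # "fpm" resolves to the PHP-FPM service on the compose network
    fastcgi_pass fpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
```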
I am pretty new with all of this Docker stuff and I have this docker-compose.yml file:

```yaml
fpm:
  build:
    context: "./php-fpm"
    dockerfile: "my-fpm-dockerfile"
  restart: "always"
  ports:
    - "9002:9000"
  volumes:
    - ./src:/var/www/src
  depends_on:
    - "db"

nginx:
  build:
    context: "./nginx"
    dockerfile: "my-nginx-dockerfile"
  restart: "always"
  ports:
    - "8084:80"
  volumes:
    - ./docker/logs/nginx/:/var/log/nginx:cached
    - ./src:/var/www/src
  depends_on:
    - "fpm"
```

I am curious why I need to add my project files to the fpm container as well as to the nginx one. Why isn't it enough to add them just to my webserver? A web server is a place that holds the files and handles the requests... I believe that this information would be useful to other Docker newbies as well. Thanks in advance.
Why do we need to map the project files in both PHP-FPM and web server container?
So the problem is that getInitialProps gets executed on the server, and axios cannot run there. Use https://www.npmjs.com/package/isomorphic-fetch instead.
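As a rough sketch of that suggestion (untested; the absolute base URL is an assumption, since on the server a relative path like /api/ps/1 has no origin to resolve against; the action types are the ones from the question):

```js
import 'isomorphic-fetch'; // provides a fetch that also works server-side

export const fetchP = id => async dispatch => {
  dispatch({ type: FETCH_P_DETAILS_STARTED });
  try {
    // hypothetical absolute URL; replace with the real host nginx proxies for
    const res = await fetch(`https://example.com/api/ps/${id}`);
    const body = await res.json();
    dispatch({ type: FETCH_P_DETAILS_SUCCESS, payload: body.data });
  } catch (err) {
    dispatch({ type: FETCH_P_DETAILS_FAILURE, payload: err.message });
  }
};
```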
I am using Next.js, and when trying to call an API endpoint (dispatching a redux action) in getInitialProps, I am getting a 404 error. My /api/ is proxied by the nginx server; all other routes work very well, only this route is causing the problem. I have tried changing the API fetch function call to async, but I still get the same error.

```js
static async getInitialProps({ store, query: { id } }) {
  await store.dispatch(fetchP(id));
  console.log('GetIntial');
  return { id };
}
```

My action creator:

```js
export const fetchp = id => async dispatch => {
  dispatch({ type: FETCH_P_DETAILS_STARTED });
  await API.get(`ps/${id}`)
    .then(res => {
      console.log(res);
      dispatch({
        type: FETCH_P_DETAILS_SUCCESS,
        payload: res.data.data,
      });
    })
    .catch(err => {
      console.log(err);
      if (!err.response) {
        // network error
        err.message = 'Error: Network Error';
        console.log(err.message);
      } else if (err.code === 'ECONNABORTED') {
        console.log('Timeout');
      } else {
        err.message = err.message;
      }
      dispatch({
        type: FETCH_P_DETAILS_FAILURE,
        payload: err.message,
      });
    });
};
```

My nginx config:

```
location /api/ {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```

Error response:

```
Error: Request failed with status code 404
    at createError (/home/ubuntu/mainWebApp/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/home/ubuntu/mainWebApp/node_modules/axios/lib/core/settle.js:18:12)
    at IncomingMessage.handleStreamEnd (/home/ubuntu/mainWebApp/node_modules/axios/lib/adapters/http.js:202:11)
    at IncomingMessage.emit (events.js:198:15)
    at endReadableNT (_stream_readable.js:1139:12)
    at processTicksAndRejections (internal/process/task_queues.js:81:17)
```
How to fix "Error: Request failed with status code 404" in axios with Next.js
Just add:

```
rewrite (.*)/$ $1/index.html last;
rewrite (.*)/..$ $1/../index.html last;
```

It should work.
Using nginx as a reverse proxy, I'd like to mimic the index directive with proxy_pass. Therefore I'd like nginx to query /index.html instead of /, and /sub/index.html instead of /sub/. What would be the best approach to do this?

Not sure if it's relevant, but the proxied server does answer HTTP 200 on /, and I'd still like to rewrite it to /index.html. As the / request leaks some information by listing the directory content, I'd also like to be sure that no one will be capable of accessing it (by doing something like /sub/..). Thanksies
nginx rewrite all trailing / to /index.html with proxy pass
You can use something called a named location. It can't be accessed from the outside at all, but inside your config you can refer to it in some cases:

```
location @nginxonly {
    proxy_pass http://example.com/$uri$is_args$args;
}
```

After creating your named location you can refer to it in some other places, like the last item in a try_files directive.
Can I create a location which can be accessed by any other location in the nginx config and cannot be accessed directly from outside? I can use a deny directive, but it will also deny access to the locations defined in the nginx config. Here's my config:

```
server {
    listen *:80;
    server_name 127.0.0.1;

    location = /auth {
        set $query '';
        if ($request_uri ~* "[^\?]+\?(.*)$") {
            set $query $1;
        }
        # add_header X-debug-message "Parameters being passed $is_args$args" always;
        proxy_pass http://127.0.0.1:8080/auth?$query;
    }

    location /kibana/ {
        rewrite ^/kibana/(.*) /$1 break;
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        auth_request /auth;
    }

    location ~ (/app/|/app/kibana|/bundles/|/kibana4|/status|/plugins|/ui/|/api/|/monitoring/|/elasticsearch/) {
        internal;
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        rewrite /kibana4/(.*)$ /$1 break;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
```

So, I need the last location to be accessible from location /kibana/ only, but with internal; it's throwing a 404 error; without it, everything works fine. I actually need to protect Kibana with nginx, but as it stands I will effectively end up exposing it without any authentication anyway.
Can I create a "private" location in Nginx?
I have found the solution for this: SSL needs to be enabled in nginx in order to get HTTP/2. I tried it and it's working well. I found the answer at this link: How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04. Now my config file is as below.

```
server {
    listen 443 ssl http2;
    server_name localhost;

    ssl_certificate /etc/nginx/ssl/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx-selfsigned.key;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
```
I am working on enabling HTTP/2 in an nginx Docker container. I am getting this error when calling localhost. My nginx configuration file is as below.

```
server {
    listen 2020 http2;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
```

The HTTP/2 listen port is 2020. I exposed it as 2020 when the container was started:

```
docker run -d --name nginx -p 80:80 -p 2020:2020 -v /home/pubudu/work/nginx/nginx-tms/config/conf.d:/etc/nginx/conf.d/:ro nginx:latest
```

This is the tutorial link: How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04.

Do I need to use port 443? Is it mandatory? (I didn't use SSL for this.)

If I curl this URL, the response is like below:

```
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
```

I know that the page is transferred as binary in HTTP/2. I want to get the nginx index page as a response. Your help will be really appreciated.
localhost sent an invalid response. ERR_INVALID_HTTP_RESPONSE
You can achieve this by using JavaScript to check the HTTP status code for the current page, and refresh the page when the server is back up (i.e. returns the 200 OK status code). To avoid hammering the server when many users encounter the 502 error page at once, I'd recommend using the truncated binary exponential backoff algorithm: the time between retries doubles each time up until a preset maximum, which lowers the overall load on your server.

The page checks the current URL's HTTP status over AJAX until it returns 200 OK, in which case it refreshes to get the live version. It retries if a 502 is encountered, starting at an 8-second interval, then 16, 32, ..., 4096 seconds, then with unlimited subsequent retries at 4096-second intervals (about 68 minutes). If any code other than 502 or 200 is encountered, the retry process is silently aborted (though you could change this with more case statements if desired). The visible part of the page reads "Site currently unavailable (code 502): this page will refresh when the site is back", with a noscript fallback of "Your browser doesn't support javascript. Please try refreshing the page manually every few minutes."

If you're using nginx, you can add the following to your configuration file to use the page:

```
error_page 502 /502.html;
location = /502.html {
    alias /path/to/502.html;
}
```
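A sketch of such a page, reconstructed from the behaviour described above (the markup and variable names are illustrative, not the answer's original code; the 8-second start and 4096-second cap match the description):

```html
<!DOCTYPE html>
<html>
<head><title>Currently unavailable</title></head>
<body>
  <h1>Site currently unavailable (code 502)</h1>
  <p>This page will refresh when the site is back.</p>
  <noscript>Your browser doesn't support javascript. Please try
    refreshing the page manually every few minutes.</noscript>
  <script>
    var delay = 8000;          // first retry after 8 seconds
    var maxDelay = 4096000;    // cap retries at 4096 seconds (~68 minutes)

    function check() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', window.location.href);
      xhr.onload = function () {
        if (xhr.status === 200) {         // server is back: load the live page
          window.location.reload(true);
        } else if (xhr.status === 502) {  // still down: back off and retry
          delay = Math.min(delay * 2, maxDelay);
          setTimeout(check, delay);
        }                                 // any other code: silently abort
      };
      xhr.send();
    }

    setTimeout(check, delay);
  </script>
</body>
</html>
```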
When I’m doing maintenance on my site and restart the server, sometimes NGINX returns a 502 Bad Gateway error. The same thing sometimes happens under heavy load. This is confusing to my visitors who don’t realize the issue is probably temporary. Is there any way I can have visitors automatically refresh the page when the site is back?
Automatically refresh page on 502 Bad Gateway error
You want to rewrite the URI and redirect. You can achieve it using location and return directives, but a rewrite directive would be the simplest approach:

```
rewrite ^/new(.*)$ https://new-service.company.com$1 permanent;
```

See this document for more.

BTW, the problem with your location block solution was the regular expression capture: there wasn't one (a prefix location performs no regex capture, so $1 was never set). Use:

```
location ~ ^/new(.*)$ {
    return 301 https://new-service.company.com$1$is_args$args;
}
```

See this document for more.
I am currently facing a small problem using nginx to redirect to another host. I want to, for example, redirect https://service.company.com/new/test.html to https://new-service.company.com/test.html.

For now I have the following configuration, which redirects me to https://new-service.company.com/new/test.html:

```
server {
    # SSL
    ssl_certificate /etc/nginx/cert/chained_star_company.com.crt;
    ssl_certificate_key /etc/nginx/cert/star_company.com.key;

    listen 443;
    server_name service.company.com;

    location /new/$1 {
        return 301 $scheme://service-new.company.com/$1;
    }
}
```

I also tried the following, with the same result:

```
return 301 $scheme://service-new.company.com/$request_uri
```
Nginx return under location
In my project, uwsgi runs in a Docker container.

```
socket = :8000
```

did not work, but changing it to

```
http-socket = :8000
```

works. Hope that will be helpful.
uWSGI is NOT working with the .ini file, but works directly from the command line. For this project, I'm using Python with Django, NGinX and uWSGI. When running the server configuration with parameters directly on the command line, it works, but it does not work when using the .ini file.

In my NGinX configuration, I have this uwsgi_pass command:

```
uwsgi --socket ifbAMPdatabase/ifbAMPdatabase.sock --module ifbAMPdatabase.wsgi --chmod-socket=666
```

.ini file:

```ini
[uwsgi]
project = ifbAMPdatabase
base = /home/ampdbvenv/ifbAMPdb

home = %(base)/pyVenvIFBAMPDB/
chdir = %(base)/%(project)/
#module = %(base)/%(project).wsgi
module = %(project).wsgi:application
wsgi-file = %(base)/%(project)/wsgi.py

master = true
processes = 4
socket = %(base)/%(project)/%(project).sock
chmod-socket = 666
vacuum = true
; plugins=python
enable-threads = true
uid = www-data
gid = www-data
log-date = true
```

OBS: Some of these parameters I have added just for testing, but they didn't change a thing (it wasn't working with a simple .ini file like in the documentation).

nginx site file:

```
location / {
    uwsgi_pass django;
    include /etc/nginx/uwsgi_params;
}
```
uWSGI NOT working with .ini file
Corrected:

The image exposes 80 as the httpd port: https://github.com/nginxinc/docker-nginx/blob/11fc019b2be3ad51ba5d097b1857a099c4056213/mainline/jessie/Dockerfile#L25

So using -p 80:80 should work, and does work for me:

```
docker run -p 80:80 nginx
172.17.0.1 - - [22/Aug/2016:17:26:32 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" "-"
```

So most probably you either run an httpd container on the host, so the port cannot be bound (this should be visible during the start), or you have an issue with localhost. Does 127.0.0.1 work? You might have an issue with IPv6 then.

Or, better, use a docker-compose.yml file:

```yaml
version: '2'
services:
  webserver:
    image: nginx
    ports:
      - '80:80'
```

and start it with docker-compose up. You can then easily add other services, like a Tomcat, a Puma server, or an FPM upstream, whatever app you might have.
Mac 10.11.5 here. I am specifically trying to install Docker for Mac (not Docker Toolbox or any other offering). I followed all the instructions on their Installation page, and everything was going fine until they ask you to try running an nginx server (Step 3. Explore the application and run examples).

Running docker run hello-world worked beautifully with no problems at all. I was able to see the correct console output that was expected for that image. However, they then ask you to try running an nginx instance:

```
docker run -d -p 80:80 --name webserver nginx
```

I ran this, and got no errors. Console output was as expected:

```
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
51d229e136d0: Pull complete
bcd41daec8cc: Pull complete
Digest: sha256:0fe6413f3e30fcc5920bc8fa769280975b10b1c26721de956e1428b9e2f29d04
Status: Downloaded newer image for nginx:latest
ae8ee4595f47e7057e527342783d035b224afd17327b057331529e2820fe2b61
```

So then I ran docker ps:

```
CONTAINER ID  IMAGE  COMMAND                 CREATED         STATUS         PORTS                        NAMES
ae8ee4595f47  nginx  "nginx -g 'daemon off"  12 seconds ago  Up 10 seconds  0.0.0.0:80->80/tcp, 443/tcp  webserver
```

So far so good. But then I open up my browser, point it to http://localhost, and I get an error page in Chrome (screenshot of the error omitted). Any ideas where I'm going awry? I waited 5 minutes just to give nginx/docker ample time to start up, but that doesn't change anything.

For tracking purposes, the related GitHub issue: https://github.com/docker/for-mac/issues/393
Docker for Mac nginx example doesn't run
I ended up also asking the same question on the Letsencrypt forums, where I got an answer. Basically, when you have created a certificate with the --standalone plugin, just regenerate it with --webroot; from then on it can be renewed with --webroot.

```
sudo ./letsencrypt-auto certonly -a webroot --renew-by-default -w <webroot-path> -d <domain>
```
I have already set up certificates using the --standalone flag, which is working great, but the problem is that I have to stop the Nginx server every time I renew the certificates, because the --standalone option requires port 80 to be free. The --webroot method does not require stopping the server and essentially taking down all the sites on it.

So is it possible to renew certificates using --webroot which were installed using the --standalone flag?
Letsencrypt - Change installed certificates to use --webroot for renewal instead of --standalone
Use:

```
proxy_ssl_verify on;
proxy_ssl_trusted_certificate /path/to/your_selfsigned_ca_cert.pem;
```

For additional details you can refer to the nginx proxy docs here.
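In context, the directives would sit next to the proxy_pass in the question's server block; a sketch (note these directives appeared in nginx 1.7.0, newer than the 1.6.3-8 mentioned in the question, so an upgrade may be needed):

```
location / {
    proxy_pass https://upstream.example.com;
    # reject upstreams whose certificate doesn't chain to this CA/cert
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/security/upstream.example.com.crt;
}
```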
I am trying to institute TLS at every layer of a proxying path. What I'm seeing is Nginx allowing an upstream to have a self-signed certificate. Is there any way to lock down the authorities that are accepted when passing traffic to an upstream?

```
end-user --1--> nginx01 --2--> nginx02 --N--> nginxN
```

nginx01 has a trusted cert and the end user connects without issue. nginx02 has a self-signed cert, and when I proxy_pass to https://nginx02 I don't see any complaints in the end user's browser or in the nginx01 logs. I would expect a rejection. If I curl nginx02 from nginx01, as expected I get the SSL rejection. Is there any way to force nginx01 to validate nginx02's certs?

CentOS 7 running nginx 1.6.3-8.

```
/etc/hosts
10.21.10.99 upstream.example.com

curl https://upstream.example.com                                       # ssl rejection
curl https://upstream.example.com --cacert ./upstream.example.com.crt   # works fine (200)
```

```
# nginx configuration
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/security/full_chain.crt;
    ssl_certificate_key /etc/nginx/security/ingress.example.com.key;
    server_name ingress.example.com;

    location / {
        proxy_pass https://upstream.example.com;
    }
}
```
Force nginx to verify upstream certs
I just read Matt Stauffer's blog post and finally found a way to do it in a Laravel route, like the following:

```php
Route::get('user/{vue_capture?}', function() {
    return View::make('user.index');
})->where('vue_capture', '[\/\w\.-]*');
```

It does not return 404 when a user directly visits a deep link on the site.
I read the following note from the vue-router documentation:

"Note: when using the history mode, the server needs to be properly configured so that a user directly visiting a deep link on your site doesn't get a 404."

So, I tried to configure my nginx like the following:

```
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/laravel/public/;
    index index.php index.html index.htm;

    server_name working.dev;

    location /user {
        rewrite ^(.+)$ /index.php last;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

And the following is my Vue configuration:

```js
var router = new VueRouter({
    hashbang: false,
    history: true,
    linkActiveClass: "active",
    root: '/user'
});
```

But I still get a 404 when a user directly visits a deep link on my site.

Edited: I use Laravel routing too. The following is my Laravel routing:

```php
Route::get('user', function() {
    return View::make('user.index');
});
```
Vue router server configuration for history mode on nginx does not work
One approach is to use rewrite ... break, for example:

```
location / {
    try_files $uri @proxy;
}

location @proxy {
    rewrite ^ /demo$uri break;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://django_app_server;
}
```

See this document for details.
I am running a Django application through gunicorn via a unix socket, and my nginx configuration looks like this:

Current NGINX config file:

```
upstream django_app_server {
    server unix:/django/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name demo.mysite.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://django_app_server;
            break;
        }
    }
}
```

So my Django is running on a unix socket here. If it were running on localhost instead, it would have URLs which look like:

```
http://127.0.0.1:8000/demo/app1
http://127.0.0.1:8000/demo/notifications
```

Main goal: what I want is that when someone visits http://demo.mysite.com/app1, they access, via proxy pass, the URL http://127.0.0.1:8000/demo/app1.

It would have been really easy if I were running Django on a localhost TCP port; I could have simply done this and it would have worked for me:

```
server {
    listen 80;
    server_name demo.mysite.com;

    location / {
        proxy_pass http://127.0.0.1:8000/demo/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

How do I achieve this with my current nginx configuration?
How to change request_uri in nginx proxy_pass?
You need to make sure that the root defined in the server section matches what you want, or define a root under your regex location location ~* \.(jpg|jpeg|png|gif|swf|svg|ico|mp4|eot|ttf|otf|woff|woff2|css|js)$. Without a root defined there, nginx may be reverting to the global definition under the server section.
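Applied to the question's config, that could look like the following sketch (the root path is copied from the working location / block):

```
location ~* \.(jpg|jpeg|png|gif|swf|svg|ico|mp4|eot|ttf|otf|woff|woff2|css|js)$ {
    # without this, nginx falls back to its compiled-in default root
    root /usr/share/nginx/html;
    add_header Cache-Control "max-age=86400, must-revalidate, s-maxage=2592000";
}
```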
I have the following configuration to run an angular.js app, which works fine:

```
location / {
    root /usr/share/nginx/html;
    index index.html;
    expires -1;
    add_header Pragma "no-cache";
    add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
    try_files $uri $uri/ /index.html =404;
}
```

But when I add caching:

```
location ~* \.(jpg|jpeg|png|gif|swf|svg|ico|mp4|eot|ttf|otf|woff|woff2|css|js)$ {
    add_header Cache-Control "max-age=86400, must-revalidate, s-maxage=2592000";
}
```

the files are no longer accessible:

```
[error] 5#5: *1 open() "/etc/nginx/html/scripts/vendor-1568b32f3e.js" failed (2: No such file or directory), client: 87.63.75.163, server: localhost, request: "GET /scripts/vendor-1568b32f3e.js HTTP/1.1", host: "myapp", referrer: "http://myapp/auth/login"
```

I'm not sure why it is trying to get the resources from /etc/nginx/html/ when the root path is /usr/share/nginx/html.
Nginx configuration to cache angular app files
You can do it just like this, NOT with '^~':

```
location ~ /api/v2/stocks/accounts/([^/]+)/tradings {
    proxy_pass http://localhost:4001/api/v2/stocks/accounts/$1/tradings;
}
```

as described by nginx.org. The ^~ modifier is more often used to match a directory, like this:

```
location ^~ /dir/ {
    # some actions
}
```
I have this REST API URL:

http://localhost:4000/api/v2/stocks/accounts/1162/tradings

I want it to proxy_pass to the URL:

http://localhost:4001/api/v2/stocks/accounts/1162/tradings

where 1162 is a URL parameter which can be some other value. I have the following:

```
location ^~ /api/v2/stocks/accounts/([^/]+)/tradings {
    proxy_pass http://localhost:4001/api/v2/stocks/accounts/$1/tradings;
}
```

But it doesn't work (404 Not Found). I have googled similar problems, without much help, like this one: "Get arguments Nginx and path for proxy_pass", or this one: "trouble with location regex and redirection in nginx". Is there any way to achieve what I want using nginx? Thanks in advance for any help.

ADDED: I also added a parameter capture:

```
location ~ /api/v2/stocks/accounts/([^/]+)/tradings/(.*) {
    proxy_pass http://localhost:4001/api/v2/stocks/accounts/$1/tradings/$2$is_args$args;
}
```
Nginx location regex capture variable
From what I understand, the answer is no; it's not a security vulnerability. A CRIME/BREACH attack injects chosen plaintext to uncover original plaintext; in your case this would be CSS and JavaScript, which carry no security value. (Presumably, you serve them over HTTPS to avoid mixed content warnings in the browser.)

The attack cannot uncover your per-session symmetric key, so it cannot affect your sensitive content, assuming that content does not use gzip/deflate. Of course, if you wish to be 100% sure, you can also consider chunked encoding in addition to gzip, as per this article: https://community.qualys.com/blogs/securitylabs/2013/08/07/defending-against-the-breach-attack
As I understand it, gzipping opens up a security vulnerability (BREACH/CRIME) if I use it with SSL/HTTPS. What if I only use it on my CSS and JS files? Is it still a security vulnerability if those files are served off my server over HTTPS?
HTTPS + gzip: Is it a security vulnerability if I only gzip non-sensitive files?
OK, since you already defined the two subdomains, you just need to add the server_name to the nginx blocks:

```
upstream app1 {
    server unix:/tmp/unicorn.app1.sock fail_timeout=0;
}

upstream app2 {
    server unix:/tmp/unicorn.app2.sock fail_timeout=0;
}

server {
    listen 80;
    server_name app1.domain.com;
    root /var/www/app1/public;
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
        proxy_pass http://app1;
    }

    error_page 500 502 503 504 /500.html;
}

server {
    listen 80;
    server_name app2.domain.com;
    root /var/www/app2/public;
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
        proxy_pass http://app2;
    }

    error_page 500 502 503 504 /500.html;
}
```
I have two Rails apps and I want to host them with just one domain name, like this:

```
app1.example.com
app2.example.com
```

I have a VPS on DigitalOcean and I have already run one app with Nginx and Unicorn. This is my nginx configuration file:

```
upstream app1 {
    server unix:/tmp/unicorn.app1.sock fail_timeout=0;
}

#upstream app2 {
#    server unix:/tmp/unicorn.app2.sock fail_timeout=0;
#}

server {
    listen 80;
    root /var/www/app1/public;
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
        proxy_pass http://app1;
    }

    error_page 500 502 503 504 /500.html;
}
```

It seems I need another server block to host the other app, but I don't know how to let nginx differentiate the two server blocks since I only have one domain. Any ideas?
One domain name for multiple Rails apps with Nginx and Unicorn
You need to specify a minimal events block:

```
events {
    worker_connections 1024;
}
```

Also, root needs to be declared with an absolute path on the filesystem.
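Putting both points together, a minimal self-contained config might look like this sketch (the root path is a placeholder for your static folder):

```
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        # absolute filesystem path rather than "."
        root /home/me/static-site;
    }
}
```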
So I want to serve static files from a specific folder on a specific port with Nginx:

```
$ nginx -c `pwd`/nginx.conf
```

My local nginx.conf goes like:

```
http {
    server {
        root .;
        listen 8080;
    }
}
```

But I'm getting this error:

```
nginx: [emerg] no "events" section in configuration
```

I can see the events section in /etc/nginx/nginx.conf, but I don't want to have to duplicate all the configuration. There has to be a way to use the default settings and just a different port number for a specific folder. What am I missing to get this to work?
configuring nginx to serve static files from a custom directory
The Expires header (and some other headers) is honoured by nginx to determine whether a response is cacheable, but it's not used to determine how long to cache it.

By default, inactive cache entries will be deleted after 10 minutes. Could you increase that number to see if it makes a difference?

```
proxy_cache_path path [levels=levels] keys_zone=name:size [inactive=time] [max_size=size] [loader_files=number] [loader_sleep=time] [loader_threshold=time];
```

"Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness. By default, inactive is set to 10 minutes."

Reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path
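Applied to the question's directive, that would mean something like the following (the one-year value is only an example, chosen to match the backend's Expires horizon):

```
uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=cache:15M max_size=5G inactive=1y;
```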
I am using nginx to cache requests to my uwsgi backend using:

```
uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=cache:15M max_size=5G;
```

My back-end is setting a very long Expires header (1 year+). However, as my system runs, I see the cache topping out at 15M. It gets up to that level, then prunes down to 10M. This causes a lot of unnecessary calls to my back end. When I change the keys_zone size, it seems to control the size of the entire cache. It seems to ignore max_size and instead substitute the keys_zone size. (*)

Can anyone explain this behavior? Is there a known bug in this version? Am I missing the point? I don't want to allocate 5G to the cache manager.

```
# nginx -V
nginx version: nginx/1.2.0
built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
TLS SNI support enabled
configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --user=www-data --group=www-data --with-http_ssl_module --with-http_stub_status_module
```

(*) Update: I guess this was my overactive imagination trying to find a pattern in the chaos.
nginx limiting the total cache size
Generally, this is done in the application itself, not at the webserver level. The webserver generally only knows what to serve, and from where. Both PHP and Rails have the ability to do what you're describing above, but again, that's within the application itself.

From what I can tell, this article is a good step-by-step walkthrough which is very similar to what you're asking for, but again, it involves application changes. If you search Google for "nginx css versioning" you'll find other articles which discuss the nginx config, but all that I looked at involved application changes as well.
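If you do go the application route, the nginx side of such schemes is often just a location that strips the version back out, along the lines of this sketch (the style.1290242282.css naming pattern is an assumption about what the application emits):

```
# map /css/style.1290242282.css back to /css/style.css on disk,
# so versioned URLs can be cached aggressively without duplicating files
location ~* ^(.+)\.\d+\.(css|js)$ {
    try_files $1.$2 =404;
    expires max;
}
```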
I have a setup where nginx is serving all static content (CSS/JS). Our problem is that when we update the static content, the browser doesn't necessarily update it immediately, causing problems when we're pushing new versions.

I would like an nginx plugin that basically replaces all calls to CSS/JS and adds a version number, like this:

Before: <link href="style.css">

After: <link href="style.1290242282.css">

And does this automatically based on the last-changed date of the style.css file itself, so I don't have to update the HTML. Is there a plugin for this? I know Google's mod_pagespeed does similar things in their Apache2 module, but I can't find anything for nginx.
Auto-versioning CSS/JS in nginx
When you see that error log entry in your php-fpm error log, it's actually providing a helpful stack trace of the slow PHP process.

In your php-fpm configuration file (e.g. /etc/php-fpm.d/www.conf), take a look at the request_slowlog_timeout and slowlog settings. The first defines how many seconds until a request is considered "slow", and the latter defines the file that stack traces will be written to.

If you look at the php-fpm slowlog file, you'll get a better idea of exactly where in the method call stack your processes are hanging up.
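For reference, the two settings in a pool configuration look roughly like this (paths and the 5-second threshold are examples; the threshold matches the 5.068781 sec warning in the question's log):

```ini
; /etc/php-fpm.d/www.conf (pool configuration)
[www]
; consider a request "slow" after this long, and log its backtrace
request_slowlog_timeout = 5s
; file the stack traces are written to
slowlog = /var/log/php-fpm/www-slow.log
```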
I need some tips on how to debug a new server config that hangs. The site itself is a very big instance of Drupal. Big as in 45+ MB of PHP memory per page load with APC functioning.

The site does run on another server with nginx/php-fpm/APC. The new server I'm setting up has a custom PHP 5.3 build. nginx is configured to listen on port 80 and passes the fastcgi request to 127.0.0.1:9000. This works.

In the Drupal root directory, I have a plain PHP file with phpinfo(); in it. I can load this PHP file directly and confirm the PHP build looks good. There is no nginx error, but the php-fpm error log shows this as the page hangs:

```
[22-Dec-2012 17:41:16] WARNING: [pool www] child 19760, script '/var/www/mysite/public_html/index.php' (request: "GET /index.php") executing too slow (5.068781 sec), logging
```

Besides this error, there's nothing. So I'm looking for advice on ways to debug this, considering a normal PHP script loads fine, but loading the Drupal app (directly via index.php, not even trying clean URLs) hangs.
How do I debug nginx/php-fpm for site that hangs?
Phusion Passenger author here. Use passenger_intercept_errors off.
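Placed in the question's config, that would look something like the sketch below (the placement inside the location block is an assumption):

```
server {
    listen 80;
    server_name localhost;
    root /var/www/store/public;

    error_page 500 /500.html;

    location / {
        passenger_enabled on;
        rails_env production;
        passenger_use_global_queue on;
        # the directive recommended in the answer above
        passenger_intercept_errors off;
    }
}
```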
I'm using Rails 3.2 with Passenger + nginx. I want to display a nice custom 500 page when the db server is down. I want to show something when my Rails app cannot be started. Here is my nginx config:

```
server {
    listen 80;
    server_name localhost;
    root /var/www/store/public;

    error_page 500 /500.html;
    # root

    location / {
        passenger_enabled on;
        rails_env production;
        passenger_use_global_queue on;
    }
}
```

The above configuration doesn't work at all. When it happens, it shows only:

```
Internal Server Error (500)
```

Any idea?
Passenger+Nginx show custom 500 page
Non-encrypted traffic on port 443 can work, but if you want compatibility with networks with paranoid and not-quite-competent security policies, you should assume that somebody has "secured" themselves against it.

Regardless of silly firewalls, you should use SSL-encrypted WebSockets, because the WebSocket protocol is not compatible with HTTP (it's masquerading as such, but that's not enough) and will not work reliably with HTTP proxies.

For example, O2 UK (and probably many other mobile ISPs) pipes all non-encrypted connections through their proxy to recompress images and censor websites. Their proxy breaks WebSocket connections, and the only workaround for it is to use SSL (unless you're happy with Socket.IO falling back to JSONP polling...).
According to the following post, some networks only allow connections to ports 80 and 443: "Socket IO fails to connect within corporate networks".

Edit: For clarification, the issue is when the end user is using a browser at work behind a corporate firewall. My server firewall setup is under my control.

I've read about Nginx using proxy_pass to Socket.io listening on another port (which I've read has disadvantages), and also a reverse proxy using nodejitsu/node-http-proxy to pass non-node traffic to Nginx (which has other disadvantages). I am interested in considering all possible options.

After much searching, I did not find any discussion about the possibility of socket.io listening on port 443, like this:

```js
var io = require('socket.io').listen(443);
```

where the client would connect like this:

```js
var socket = io.connect('http://url:443/', {secure: false, port: '443'});
```

Aside from forfeiting the use of https on that server, are there any other drawbacks to this? (For example, do corporate networks block non-SSL communications over port 443?)
Is it a bad idea to use port 443 for Socket.IO?
The solution is sticking to the naming scheme of Kohana: all file names lowercase only. Windows by default is not case sensitive, and Linux is; you can't "solve" that.
There is an issue/bug/feature/whatever on Linux + NGinx + Kohana: we have to make sure that we keep all our file names in lowercase only. We can't have anything like "setUserServer.php". It simply doesn't work, and we have no idea why. If we give the same file the name "setuserserver.php", it runs. This problem doesn't exist on Windows + Apache + Kohana.

If anyone has an idea how to solve this, please do chime in on this thread. Thanks.
Case sensitivity in URL issue on Linux + NGinx + Kohana + php
Does nginx run on Windows? I think you'd get a much better result using an existing library that includes a good HTTP server. My first choice would be libevent.
The application runs on Linux, Windows, and Macintosh. Also, if yes, how much effort is required?
Is it possible to embed nginx in a C/C++ application
This is how I solved this. First, I added this to application.rb:

```ruby
# Monit support
if defined?(PhusionPassenger)
  require 'pidfile_manager'

  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      # We're in smart spawning mode.
      PidfileManager.write_pid_file
    else
      # We're in conservative spawning mode. We don't need to do anything.
    end
  end

  PhusionPassenger.on_event(:stopping_worker_process) do
    PidfileManager.remove_pid_file
  end
end
```

and then I implemented the PidfileManager:

```ruby
module PidfileManager
  extend self

  BASENAME = '/var/tmp/rack.*.pid'

  def write_pid_file
    pid = Process.pid
    count = 1
    pidfile = nil

    go_over_pid_files do |file, saved_pid|
      file_id = file[/(\d+)/, 1].to_i
      # Increase counter only if we met the same file id
      count += 1 if file_id == count
      # We're already there
      return if saved_pid == pid
      # Check if the process is alive
      res = begin
        Process.kill(0, saved_pid)
      rescue Errno::ESRCH
        nil
      end
      # It's dead, reuse
      unless res
        pidfile = file
        break
      end
    end

    pidfile ||= BASENAME.sub('*', count.to_s)
    File.open(pidfile, 'w') { |f| f.write(pid.to_s) }
  end

  def remove_pid_file
    pid = Process.pid
    go_over_pid_files do |file, saved_pid|
      if pid == saved_pid
        File.unlink(file)
        break
      end
    end
  end

  private

  def go_over_pid_files
    Dir[BASENAME].each do |file|
      saved_pid = File.read(file).to_i
      yield file, saved_pid
    end
  end
end
```

And then you just tell monit to monitor each instance using /var/tmp/rack.X.pid as a pidfile.
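The monit side could then be one stanza per worker slot, along these lines (a sketch; the process name and thresholds are placeholders):

```
# one entry per Passenger worker slot, matching /var/tmp/rack.X.pid
check process rack1 with pidfile /var/tmp/rack.1.pid
    if totalmem > 300 MB for 3 cycles then alert
    if cpu > 80% for 5 cycles then alert
```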
I have several Rails applications deployed with nginx + Passenger. I want those applications to be monitored using monit. How can I monitor those applications with monit? Should I monitor nginx as well?
How to monitor nginx passenger with monit
We had this problem a while back when writing HTML pages out to disk. The solution for us was to write to a temporary file and then atomically rename the file. You might also want to consider using fsync.

The full source is available here: staticgenerator/__init__.py, but here are the useful bits:

```python
import os
import stat
import tempfile

...

fd, tmpname = tempfile.mkstemp(dir=directory)
os.write(fd, content)

# Flush the data to disk before renaming
# See http://docs.python.org/library/os.html#os.fsync
os.fsync(fd)
os.close(fd)

# Ensure it is webserver readable
os.chmod(tmpname, stat.S_IREAD | stat.S_IWRITE | stat.S_IWUSR | stat.S_IRUSR |
         stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

# Rename is an atomic operation in POSIX
# See: http://docs.python.org/library/os.html#os.rename
os.rename(tmpname, fn)
```
I have a view in my Django application that automatically creates an image using PIL, stores it on the Nginx media server, and returns an HTML template with an img tag pointing to its URL.

This works fine, but I noticed an issue: for every 5 times I access this view, in 1 of them the image doesn't render. I did some investigation and I found something interesting. This is the HTTP response header when the image renders properly:

```
Accept-Ranges:bytes
Connection:keep-alive
Content-Length:14966
Content-Type:image/jpeg
Date:Wed, 18 Aug 2010 15:36:16 GMT
Last-Modified:Wed, 18 Aug 2010 15:36:16 GMT
Server:nginx/0.5.33
```

and this is the header when the image doesn't load:

```
Accept-Ranges:bytes
Connection:keep-alive
Content-Length:0
Content-Type:image/jpeg
Date:Wed, 18 Aug 2010 15:37:47 GMT
Last-Modified:Wed, 18 Aug 2010 15:37:46 GMT
Server:nginx/0.5.33
```

Notice the Content-Length equals zero. What could have caused this? Any ideas on how I could further debug this problem?

Edit: When the view is called, it calls the "draw" method of the model. This is basically what it does (I removed the bulk of the code for clarity):

```python
def draw(self):
    # Open/Creates a file
    if not self.image:
        (fd, self.image) = tempfile.mkstemp(dir=settings.IMAGE_PATH, suffix=".jpeg")
        fd2 = os.fdopen(fd, "wb")
    else:
        fd2 = open(os.path.join(settings.SITE_ROOT, self.image), "wb")

    # Creates a PIL Image
    im = Image.new(mode, (width, height))

    # Do some drawing
    .....

    # Saves
    im = im.resize((self.get_size_site(self.width), self.get_size_site(self.height)))
    im.save(fd2, "JPEG")
    fd2.close()
```

Edit 2: This is the website: http://xxxcnn7979.hospedagemdesites.ws:8000/cartao/99/. If you keep hitting F5, the image on the right will eventually render.
Django and dynamically generated images
There is no way to manually reset the browser cache on the user's side while the client does not request new content from the server. What can be useful in this case is access to any script that your SPA downloads without caching: you can change that script and make it force-reload the page (but be careful, you need a flag to prevent permanent force-reloading on every page load). For example, if you have GTM on the site, this can help.

UPD: I am not a JS specialist, but you need to add a GTM tag on all pages with a JS script like this:

```js
function getCookie(name) {
  let matches = document.cookie.match(new RegExp(
    "(?:^|; )" + name.replace(/([\.$?*|{}\(\)\[\]\\\/\+^])/g, '\\$1') + "=([^;]*)"
  ));
  return matches ? decodeURIComponent(matches[1]) : undefined;
}

let was_reloaded = getCookie('was_reloaded');
alert(was_reloaded);
if (was_reloaded != 'yes') {
  document.cookie = "was_reloaded=yes; path=/; max-age=3600;";
  location.reload();
}
```
During a server migration, a new nginx configuration was missing cache control directives. Hence, we ended up with a cached index.html, which is very bad for our SPA: it is no longer refreshed when we deploy new code. We need the index.html to not be cached.

This was our (bad) nginx config that was online for some days:

```
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```

We fixed our config:

```
server {
    listen 80;
    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    location ~* \.(js|jpg|jpeg|gif|png|svg|css)$ {
        add_header Cache-Control "max-age=31536000, public";
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```

QUESTION: Clients that have been to our webpage within the last days cached an old index.html. How can we force their browsers to drop the cached index.html?
nginx cached index.html force reload
From the documentation:

"break: stops processing the current set of ngx_http_rewrite_module directives as with the break directive;"

Both the rewrite and set directives are implemented by the ngx_http_rewrite_module. The statements are evaluated sequentially within the location block. The break (either on its own, or as part of a rewrite ... break) stops processing within the current context, so any set directives following it will be ignored.
I'm trying to find what the relation is between a rewrite statement in an nginx location block and a set-variable statement inside the location block. I'm asking because of the different behaviour in the two cases below.

What does not work (getting HTTP 500 as $url is not set): set comes after the rewrite statement:

```
location ~ ^/offer/ {
    log_by_lua_file lua/datadog/api_latency.lua;
    proxy_pass $url;
    proxy_read_timeout 60;
    rewrite ^((?U).*)(/?)$ $1 break;
    set $location_name offer;
    set $url https://example.com;
}
```

What works: set comes before the rewrite statement:

```
location ~ ^/offer/ {
    log_by_lua_file lua/datadog/api_latency.lua;
    proxy_pass $url;
    proxy_read_timeout 60;
    set $url https://example.com;
    rewrite ^((?U).*)(/?)$ $1 break;
    set $location_name offer;
}
```

In the nginx debug logs I can see the set variable being executed in the working case but not in the non-working case. I have searched the nginx documentation for any relation; the best I can find is that both of these are executed in the rewrite phase, but no other info regarding the cause of this behaviour. Any idea why this is happening?
relation between rewrite uri and set variable statements in nginx
Since ESNI (or ECH, as it's now called) is not supported by OpenSSL, it can't be supported by nginx, either.
I was looking around for a way, but I've only found that Nginx implements normal SNI and that's it. Can it be that ESNI is still a "not yet ready" feature for Nginx?
Is there a way to enable/setup ESNI in Nginx?
You need to split those host definitions into separate ingress rules. Then you can whitelist source ranges using the following annotation: nginx.ingress.kubernetes.io/whitelist-source-range. Something like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"
spec:
  rules:
  - host: app1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-service
          servicePort: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app2-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"
spec:
  rules:
  - host: app2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app2-service
          servicePort: http
```

You can also use a server snippet and add nginx config to the YAML. Something like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location / {
        # block one workstation
        deny 192.168.1.1;
        # allow anyone in 192.168.1.0/24
        allow 192.168.1.0/24;
        # drop rest of the world
        deny all;
      }
```
I have searched a lot and I didn't find the solution. I want to block/allow IPs in each host definition in the nginx-ingress, not per location. This is the ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: test1.test.com   # Blocking rules here, only affecting the test1.test.com domain
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpressA
          servicePort: 80
  - host: test2.test.com   # Blocking rules here, only affecting the test2.test.com domain
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpressB
          servicePort: 80
```

Many thanks for your time.
How to add blocking IP rules on each nginx-ingress host
Try adding the following headers; hope that this will help:

```
server {
    location / {
        proxy_pass http://webservers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Full official instructions on how to set up Django Channels + Nginx can be found here.
I have set up a Django application with Nginx + uWSGI. The application also uses django-channels with Redis. When deploying the setup on an individual machine, everything works fine.

But when I tried to set up the app on 2 instances with a common load balancer to coordinate the requests, the requests get properly routed to the daphne process and I can see the logs, but the status code returned from the daphne process is 200 instead of 101.

Load balancer nginx conf:

```
upstream webservers {
    server 10.1.1.2;
    server 10.1.1.3;
}

server {
    location / {
        proxy_pass http://webservers;
    }
}
```

Versions used:

```
daphne==2.2.4
channels==2.1.6
channels-redis==2.3.2
```

All the routing works fine and there are no errors; it's just that the status code returned is 200 instead of 101.
Django channels daphne returns 200 status code
You must turn proxy_pass http://wiki; into proxy_pass http://wiki/;.

As far as I know, Nginx treats proxy_pass with and without a URI part in two different ways. You may find more details about the proxy_pass directive on nginx.org.

In your case, a trailing slash (/) is essential as the URI to be passed to the server. You've already got the error message "Can't get /wiki"; in fact, this error message means that there is no /wiki on server wiki:3000, not in Nginx's scope. Getting to know the proxy_pass directive with/without a URI better would help you much. I hope this helps.
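To illustrate the difference (two alternatives, not both at once; names from the question):

```
# without a URI part: the original request URI is passed through unchanged
location /wiki {
    proxy_pass http://wiki;    # GET /wiki  ->  GET /wiki on wiki:3000
}

# with a URI part ("/"): the part matching the location is replaced by it
location /wiki {
    proxy_pass http://wiki/;   # GET /wiki  ->  GET / on wiki:3000
}
```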
I've got several Docker containers acting as web servers on a bridge network. I want to use Nginx as a proxy that exposes a service (web) outside the bridge network and embeds content from other services (i.e. wiki) using server side includes.

Long story short, I'm trying to use the configuration below, but my locations aren't working properly. The / location works fine, but when I add another location (e.g. /wiki) or change / to something more specific (e.g. /web), I get a message from Nginx saying that it "Can't get /wiki" or "Can't get /web" respectively:

```
events {
    worker_connections 1024;
}

http {
    upstream wiki {
        server wiki:3000;
    }

    upstream web {
        server web:3000;
    }

    server {
        ssi on;

        location = /wiki {
            proxy_pass http://wiki;
        }

        location = / {
            proxy_pass http://web;
        }
    }
}
```

I've attached to the Nginx container and validated that I can reach the other containers using curl; they appear to be working properly. I've also read the Nginx pitfalls and know that using hostnames (wiki, web) isn't ideal, but I don't know the IP addresses ahead of time and have tried to counter any DNS issues by telling docker-compose that the nginx container depends on web and wiki. Any ideas?
How do I map a location to an upstream server in Nginx?
The index directive should take care of most of that:

```
server {
    index index.php;
    ...
}
```

If your setup dictates using try_files, then this should work for you:

```
location / {
    try_files $uri $uri/ $uri/index.php?$query_string =404;
}
```

You can also capture the location and use it as a variable:

```
location ~ ^/(?<anyfolder>[^/]+) {
    # Variable $anyfolder is now available
    try_files $uri $uri/ /$anyfolder/index.php?$query_string =404;
}
```

Edit

I see from your comment that you want to try the subject folder's index.php file first and go on to the one in the root folder if there isn't one in the subject folder. For that, you could try something like:

```
location / {
    try_files $uri $uri/ $uri/index.php$is_args$args /index.php$is_args$args;
}
```

Note: $is_args$args is better than ?$query_string if there is a chance there might not be arguments.

Edit 2

Okay. Got the bounty but kept on feeling I was missing something and that your query was not actually addressed. After reading and rereading, I now think I finally fully understand it: you want to check for index.php in the target folder. If found, it will be executed. If not found, keep checking parent folders up the directory tree until one is found (which may be the root folder). The answer I gave in "Edit" above just jumps to the root folder, but you would like to check intervening folders first. Not tested, but you could try a recursive regex pattern:

```
# This will recursively swap the parent folder for "current"
# However it will only work up to "/directChildOfRoot/grandChildOfRoot"
# So we add another location block to continue to handle "direct child of root" and "root" folders
location ~ ^/(?<parent>.+)/(?<current>[^\/]+)/? {
    try_files /$current /$current/ /$current/index.php$is_args$args /$parent;
}

# This handles "direct child of root" and "root" folders
location / {
    try_files $uri $uri/ $uri/index.php$is_args$args /index.php$is_args$args;
}
```
In Apache it is possible to redirect everything to the closest index.php using .htaccess.

Example folder structure:

    /index.php
    /.htaccess
    /Subdir
    /Subdir/.htaccess
    /Subdir/index.php

If I access /something it will redirect to the root index.php, and if I access /Subdir/something it will redirect to Subdir/index.php.

Can this be done in nginx as well? It should be possible, because the nginx documentation says: "If you need .htaccess, you're probably doing it wrong" :)

I know how to redirect everything to the root index.php:

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

But how to check for index.php in every parent directory until /?

Edit: I found that these rules do what I want:

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location /Subdir {
        try_files $uri $uri/ /Subdir/index.php?$query_string;
    }

But is there a way to make it abstract, like

    location /$anyfolder {
        try_files $uri $uri/ /$anyfolder/index.php?$query_string;
    }

?
Cascade index.php in nginx "try_files"
To enable SSL you need to configure Nginx for it. As far as I can see in your code, you are still using the default Nginx config without any modifications. Here is an example of how to enable SSL on Nginx. The main components are:

    server {
        listen 443;
        server_name jenkins.domain.com;

        ssl_certificate /etc/nginx/cert.crt;
        ssl_certificate_key /etc/nginx/cert.key;

        ssl on;
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

Also, I see that you mount a volume (-v /etc/nginx/certs:/etc/nginx/certs) in the docker run command. This means that /etc/nginx/certs in the container will be the same as on the host, so make sure you have the correct certificates on the host machine!
I have a problem serving an Angular app using nginx on Docker. The problem only appears when I want to turn on SSL on the site. I'm using Bamboo for deployment.

Here is my Dockerfile:

    FROM node:8.6 as node
    WORKDIR /app
    COPY package.json /app/
    COPY ssl/certificate.crt /app/
    COPY ssl/ /app/ssl
    RUN npm install -g @angular/cli --unsafe
    RUN npm install
    COPY ./ /app/
    RUN ng build --prod --aot=false --env=prod

    FROM nginx
    RUN mkdir -p /ssl
    COPY --from=node /app/ssl/ /ssl/
    ADD ssl/certificate.crt /etc/nginx/certs/
    ADD ssl/private.key /etc/nginx/certs/
    RUN ls /etc/nginx/certs/
    COPY --from=node /app/dist/ /usr/share/nginx/html
    RUN ls /usr/share/nginx/html

Script to run:

    docker build -t test-app .
    docker run --name test-app-cont -v /etc/nginx/certs:/etc/nginx/certs -d -p 3010:443 test-app

Deployment runs successfully, but no app is served on the server.

Please have a look at this screen (screenshot): it lists what is in the /certs and /html directories. Everything seems to be good.

If I remove the lines dedicated to SSL, everything works fine, and on the server I can see my app, but only through http.

The certificates are valid, I checked.

What am I doing wrong?
Angular app + NGINX + Docker
See if you can follow this example:

    FROM nginx:alpine
    COPY default.conf /etc/nginx/conf.d/default.conf
    COPY index.html /usr/share/nginx/html/index.html

It uses a default.conf file which does specify the index.html used:

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

Change the listening port in default.conf from 80 to 8080, and EXPOSE it.

Or simply docker run with -p 8080:80 (hostPort:containerPort).
Mac OS here, running Docker Version 17.12.0-ce-mac49. I have the following super-simple Dockerfile:

    FROM nginx
    COPY index.html /usr/share/nginx/html

I create my Docker image:

    docker build -t mydocbox .

So far so good, no errors. I then create a container from that image:

    docker run -it -p 8080:8080 -d --name mydocbox mydocbox

And I see it running (I have confirmed it's running by issuing docker ps as well as SSHing into the box via docker exec -it bash and confirming /usr/share/nginx/html/index.html exists)!

When I open a browser and go to http://localhost:8080, I get an empty/blank/nothing-to-see-here screen, not my expected index.html page.

I don't see any errors anywhere and nothing to indicate configuration is bad or that firewalls are causing issues. Any ideas as to what the problem could be or how I could troubleshoot?
Dockerized nginx isn't serving HTML page
Compile with both modules: --with-stream --with-stream_ssl_preread_module

Create a stream block outside the http block:

    stream {
        upstream app {
            server IP1:Port;
            server IP2:Port;
        }

        map $ssl_preread_server_name $upstream {
            default app;
        }

        server {
            listen PORT;
            ssl_preread on;
            proxy_pass $upstream;
        }
    }

This worked for me. Let me know if this works for you too.
I am trying to implement "ssl_preread" in my nginx. My nginx is compiled with "--with-stream_ssl_preread_module" this module.I mentioned "ssl_preread on;" in server directive of nginx.conf. But i am getting below error.nginx: [emerg] "ssl_preread" directive is not allowed here in /opt/nginx/conf/nginx.conf:43I am following below doc.http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
"ssl_preread" is not working in NGINX
You already have the components of a correct solution. Use the scheme and hostname, together with the capture, to construct the destination URL:

    rewrite ^/(.*)/$ https://$host/$1 permanent;
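Put together, a sketch of the whole port-80 server block with that line in place (domain as in your question):

    server {
        listen 80;
        server_name www.example.com;

        # trailing slash present: strip it and switch to https in one hop
        rewrite ^/(.*)/$ https://$host/$1 permanent;

        # no trailing slash: plain http-to-https redirect
        return 301 https://$host$request_uri;
    }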
I want to redirect http to https and remove trailing slashes in nginx with one single redirect. The solution I have today is the following:

    server {
        listen 80;
        server_name www.example.com;
        rewrite ^/(.*)/$ /$1 permanent;
        return 301 https://$host$request_uri;
    }

The problem with this solution is that it gives two redirects:

    http://www.example.com/test/ --> http://www.example.com/test
    http://www.example.com/test  --> https://www.example.com/test

Is it possible to make a solution where you only get one single redirect, like below?

    http://www.example.com/test/ --> https://www.example.com/test

When I looked through the documentation of nginx's rewrite and return methods, I felt like it should be possible to do it with a single rewrite somehow:

    rewrite ^/(.*)/$ https://$host$request_uri permanent;

But nothing I have tried has given me the correct results.
Nginx redirect http to https and remove trailing slashes with one single redirect
The main issue with using the same domain across multiple apps is security in regards to cookies. If the apps are independent, you might want to ensure that a vulnerability in one app does not necessarily affect your other apps.

Otherwise, with nginx there is really no limitation on your setup, however you decide to go. You can use nginx to easily join or disjoin multiple domains and/or ports/servers into whatever setup you wish.

Whether you decide to go with multiple domains or multiple paths on a single domain has more to do with what kinds of apps you have in mind, and how logically separate they would appear to be from one another. With the help of the rewrite directive, even if you make a "wrong" choice initially, you could always fix it later on (preserving all existing links flawlessly), pretty much without any ill effect.
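For instance, a hedged sketch (domains are illustrative) of undoing an initial path-based layout in favour of subdomains while keeping old links alive:

    server {
        listen 80;
        server_name example.com;

        # old example.com/blog/... links permanently point to the new subdomain
        location /blog/ {
            rewrite ^/blog/(.*)$ http://blog.example.com/$1 permanent;
        }
    }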
So I am building a website using NodeJS where I will use Nginx as a reverse proxy to my app/apps. I will be using Jade, sharing some layouts between subdomains and displaying specific content according to the subdomain. I am trying to figure out, from a lot of research, the best method of structuring the app. Is the best way to run each subdomain as a separate app on the same server? Or can I link them as one app? Please share your ideas and suggestions so I can make a decision and begin my coding :)
Folder Structure for Nodejs Multi Subdomain site
Take a look into your .js file and make sure that you are using the right ajax URL (//your_site.com/handler instead of http://your_site.com/handler), for instance:

    $.ajax({
        url: '//your_site.com/handler',
        dataType: 'json',
        type: 'get',
        success: function(data){...},
        complete: function(xhr, textStatus){...}
    });
I'm using Nginx + flask-socketio + AWS ELB, and when the URL is loaded over https I'm getting the following error message, which seems to be something related to Nginx and the socket. Please help with this:

    socket.io.min.js:2 Mixed Content: The page at 'https://localhost/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/socket.io/1/?t=1477375737508'. This request has been blocked; the content must be served over HTTPS.
    d.handshake @ socket.io.min.js:2
    socket.io.min.js:2 XMLHttpRequest cannot load http://localhost/socket.io/1/?t=1477375737508. Failed to start loading.
Mixed content: page at https was loaded over https but requested an insecure
I changed the regular expression that tries to serve static resources directly:

    server {
        ...
        location ~* \.(js|css|gif|ico)$ {
            try_files $uri /wiki/index.php;
            expires max;
            log_not_found off;
        }
        ...
    }
I am trying to set up a wiki using Nginx.

When I use /wiki/File:image.jpg Nginx returns 404.

When I use /index.php?title=File:image.jpg it works correctly.

    server {
        listen 80;
        listen [::]:80 ipv6only=on;

        root /usr/share/nginx/mediawiki;
        index index.php index.html index.htm;
        ...
        location /wiki/ {
            index index.php;
            rewrite ^/wiki/([^?]*)(?:\?(.*))? /index.php?title=$1&$2 last;
        }

        location ~* /wiki/images/.*.(html|htm|shtml|php)$ {
            types { }
            default_type text/plain;
        }

        location ~* /wikiimages/ {
            try_files $uri /wiki/index.php;
        }

        location ~* \.(js|css|jpg|jpeg|png|gif|ico)$ {
            try_files $uri /wiki/index.php;
            expires max;
            log_not_found off;
        }

        location ~*\.php?$ {
            try_files $uri =404;
            # With php5-fpm:
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            include fastcgi_params;
        }

        location /wiki/.*\.php?$ {
            try_files $uri =404;
            # With php5-fpm:
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            include fastcgi_params;
        }
    }
MediaWiki File Not Found with File:example.jpg when using short urls
The initial problem is the root directive in the location /api block, which should not include the location component, as this gets appended as part of the URI. So:

    location /api {
        root /var/www/mysite/src;
        ...
    }

will result in a local path of /var/www/mysite/src/api/index.php when presented with the URI /api/index.php. See this document for details.

The try_files rule does not rewrite the URI as you specify in your example. If you really need the final path of the URI to be presented as a query string to /api/index.php, you will need to use rewrite.

The simplest solution (if you do not need to serve static content from that location) is to replace your try_files with:

    location /api {
        ...
        rewrite ^/api/(.*)$ /api/index.php?$1 last;

        location ~ \.php$ {
            ...
        }
    }

Otherwise, use a named location:

    location /api {
        ...
        try_files $uri $uri/ @rewrite;

        location ~ \.php$ {
            ...
        }
    }

    location @rewrite {
        rewrite ^/api/(.*)$ /api/index.php?$1 last;
    }

See this and this for details.
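As an aside, if you would rather not have the directory on disk mirror the location prefix, alias is the usual counterpart to root; a sketch with an illustrative on-disk path:

    location /api/ {
        # alias replaces the matched prefix instead of appending the URI:
        # /api/index.php is served from /var/www/mysite/src/backend/index.php
        alias /var/www/mysite/src/backend/;
    }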
I am trying to configure nginx to serve static and PHP files. The config I have isn't working. I want the following local folder structure:

    src/static/ -> contains HTML, CSS, JS, images etc
    src/api/    -> contains PHP files for a small REST service

If I visit http://mysite.local I want to be served files from the /static folder. If I visit http://mysite.local/api I want to be served the API PHP files. I want requests to the API to be rewritten and sent to an index.php file.

Some examples:

    http://mysite.local/test.html -> served from src/static/test.html
    http://mysite.local/images/something.png -> served from src/static/images/something.png
    http://mysite.local/css/style.css -> served from src/static/css/style.css
    http://mysite.local/api/users -> served from src/api/index.php?users
    http://mysite.local/api/users/bob -> served from src/api/index.php?users/bob
    http://mysite.local/api/biscuits/chocolate/10 -> served from src/api/index.php?biscuits/chocolate/10

The below config works for static files but not for the API files. I get a 404 error back if I visit one of the API paths.

    server {
        listen 80;
        server_name mysite.local;

        access_log /var/log/nginx/mysite.access.log main;
        error_log /var/log/nginx/mysite.error.log debug;

        location / {
            index index.html;
            root /var/www/mysite/src/static;
            try_files $uri $uri/ =404;
        }

        location /api {
            index index.php;
            root /var/www/mysite/src/api;
            try_files $uri $uri/ /index.php?$query_string;

            location ~ \.php$ {
                try_files $uri = 404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
    }
NGINX configuration for static and PHP files
You can use strconv.Unquote to convert the string to a normal one:

    package main

    import (
        "encoding/json"
        "fmt"
        "strconv"
    )

    func main() {
        // this is what your input string looks like...
        qs := "{\\x22device_id\\x22: \\x221234567890\\x22}"

        // now let's convert it to a normal string
        // note that it has to look like a Go string literal so we're
        // using Sprintf
        s, err := strconv.Unquote(fmt.Sprintf(`"%s"`, qs))
        if err != nil {
            panic(err)
        }
        fmt.Println(s)

        // just for good measure, let's see if it can actually be decoded.
        // SPOILER ALERT: It decodes just fine!
        var v map[string]interface{}
        if err := json.Unmarshal([]byte(s), &v); err != nil {
            panic(err)
        }
        fmt.Println(v)
    }

Playground
I've got a log file where each line is a JSON. Due to some Nginx security reasons, the logs are saved in a hexadecimal format (e.g. the char " is converted to \x22). Here is an example of a JSON line:

    { "body_bytes_sent": "474", "params": {\x22device_id\x22: \x221234567890\x22} }

My goal:

1. Read the file line by line.
2. Convert each line to a readable format:

    { "body_bytes_sent": "474", "params" : {"device_id": "1234567890"} }

3. Convert this string into a JSON object so I can manipulate it.

Any help will be appreciated.
Go - How to decode/convert a txt file containing hex chars into a readable string
Actually you have another error. I've checked your server block and got the following:

    $ sudo nginx -t
    nginx: [emerg] invalid URL prefix in /etc/nginx/sites-enabled/test:23
    nginx: configuration file /etc/nginx/nginx.conf test failed

This is an error about the missing protocol in the proxy_pass localhost:8000; line. After fixing it to proxy_pass http://localhost:8000; the config test passed.

Probably you're looking into an old (or wrong) error log.
I'm trying to direct all HTTP requests resembling / to a specific HTTP server running on the localhost. Below is the relevant location line in my nginx.conf:

    # nginx.conf
    upstream django {
        server unix:///app/django.sock; # for a file socket
    }

    server {
        access_log /var/log/access.log;
        error_log /var/log/error.log;

        listen 80;
        server_name 127.0.0.1;
        charset utf-8;
        client_max_body_size 75M;

        # Django media
        location /media {
            alias /app/media;
        }

        location /static {
            alias /app/static;
        }

        location ~* "[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$" { # matches UUIDv4
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass localhost:8000;
        }

        # Finally, send all non-media requests to the Django server.
        location / {
            uwsgi_pass django;
            include /app/conf/uwsgi_params;
        }
    }

When starting nginx, I get the following error:

    nginx: [emerg] unknown directive "8}-([0-9a-f]" in /etc/nginx/sites-enabled/nginx.conf:30

What gives?
Why is nginx complaining of an unknown directive?
I've added an entrypoint setting to the dockergen service and changed the command a bit:

    dockergen:
      image: jwilder/docker-gen:latest
      links:
        - nginx
      volumes_from:
        - nginx
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock
        - ./extra:/etc/docker-gen/templates
        - /etc/nginx/certs
      tty: true
      entrypoint: ["/bin/sh", "-c"]
      command: >
        "
        docker-gen
        -watch
        -only-exposed
        -notify-sighup $(echo $NGINX_NAME | tail -c +2)
        /etc/docker-gen/templates/nginx.tmpl
        /etc/nginx/conf.d/default.conf
        "

Container names injected by Docker linking start with '/', but when I send SIGHUP to containers with a leading slash, the signal doesn't arrive:

    $ docker kill -s SIGHUP /myproject_dockergen_1/nginx

If I strip it though, nginx restarts as it should. So the $(echo $NGINX_NAME | tail -c +2) part is there to remove the first char from $NGINX_NAME.
I have two services in my docker-compose.yml: docker-gen and nginx. Docker-gen is linked to nginx. In order for docker-gen to work, I must pass the actual name or hash of the nginx container so that docker-gen can restart nginx on change.

When I link docker-gen to nginx, a set of environment variables appears in the docker-gen container; the most interesting to me is NGINX_NAME, which is the name of the nginx container.

So it should be straightforward to put $NGINX_NAME in the command field of the service and get it to work. But $NGINX_NAME doesn't expand when I start the services. Looking through the docker-gen logs I see the lines:

    2015/04/24 12:54:27 Sending container '$NGINX_NAME' signal '1'
    2015/04/24 12:54:27 Error sending signal to container: No such container: $NGINX_NAME

My docker_config.yml is as follows:

    nginx:
      image: nginx:latest
      ports:
        - '80:80'
      volumes:
        - /tmp/nginx:/etc/nginx/conf.d

    dockergen:
      image: jwilder/docker-gen:latest
      links:
        - nginx
      volumes_from:
        - nginx
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock
        - ./extra:/etc/docker-gen/templates
        - /etc/nginx/certs
      tty: true
      command: >
        -watch
        -only-exposed
        -notify-sighup "$NGINX_NAME"
        /etc/docker-gen/templates/nginx.tmpl
        /etc/nginx/conf.d/default.conf

Is there a way to put an environment variable placeholder in command so it expands to the actual value when the container is up?
Use environment vars of container in command key of docker-compose
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.

But an option is to "Link via an Ambassador Container":

https://docs.docker.com/articles/ambassador_pattern_linking/

https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
I have an nginx docker container and a webapp container successfully running and talking to each other.

The nginx container listens on port 80 and uses proxy_pass to direct traffic to the IP of the webapp container:

    upstream app_humansio {
        server humansio:8080 max_fails=3 fail_timeout=30s;
    }

"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.

The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?

I've tried to do something like connecting them manually by using a common port (8001 on both), but since they actually reserve that port, the 2nd container cannot use it as well.

Thanks!
Restarting Containers When Using Docker and Nginx proxy_pass
I think you are really close. You have an extra ^ in your regex search string. ^ means "match from the beginning of the line".

    location ~ ^/(?<project>.+)/Content/(?<content>.+)$ {
        root /var/www/$project/Content/$content;
    }
I can't find any information on doing this specifically, but I am basically trying to catch a location like:

    http://domain.com/project/Content/Images/image.png

and I want it to point to root like so:

    /var/www/$project/Content/Images/image.png

This is what I tried to put together, but it doesn't seem to be working:

    location ~ ^/(?<project>.+)/Content/^(?<content>.+)$ {
        root /var/www/$project/Content/$content;
    }

It doesn't seem to be catching this location, as I get a 404 error, which is set up with a php page I have with try_files in a location for /. This makes me think the regexp is wrong, but I am not sure.
How do you dynamically set nginx root based on location?
As stated in the map directive documentation:

    The resulting value can be a string or another variable (0.9.0).

Update: This functionality has been added in version 1.11.2 of NGinx, as per comment #7 here: https://trac.nginx.org/nginx/ticket/663#comment:7
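So on 1.11.2 or newer, the configuration from the question should work as written; a minimal sketch (using the stock return directive instead of the third-party echo module):

    http {
        map $host $foo {
            # string values may interpolate variables since 1.11.2
            default "This is my host: $host";
        }

        server {
            listen 8080;
            location / {
                return 200 "$foo\n";
            }
        }
    }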
I am trying to map a variable inside the http directive in Nginx.

When left alone, the variable alone gets expanded; if I add anything else to the string, the expansion stops working.

    http {
        map $host $foo {
            #default "$host"; # - this works fine and returns 'localhost'
            default "This is my host: $host"; # - THIS DOESN'T WORK
        }

        server {
            location / {
                echo $foo;
            }
        }
    }

Do you have any suggestions to make the expansion work inside the map?
Variable interpolation inside Map directive
Celery and gunicorn are different things. Celery is an asynchronous task manager, and gunicorn is a web server. You can run both of them as background tasks (celeryd to daemonize celery); just feed them your django project.

A common way to run them is using supervisor, which will make sure they stay running after you log out from the server. The celery github repo has some sample scripts for using celery with supervisor.
I'm deploying a Django app with gunicorn, nginx and supervisor.

I currently run the background workers using celery:

    $ python manage.py celery worker

This is my gunicorn configuration:

    #!/bin/bash

    NAME="hello_app"                                  # Name of the application
    DJANGODIR=/webapps/hello_django/hello             # Django project directory
    SOCKFILE=/webapps/hello_django/run/gunicorn.sock  # we will communicte using this unix socket
    USER=hello                                        # the user to run as
    GROUP=webapps                                     # the group to run as
    NUM_WORKERS=3                                     # how many worker processes should Gunicorn spawn
    DJANGO_SETTINGS_MODULE=hello.settings             # which settings file should Django use
    DJANGO_WSGI_MODULE=hello.wsgi                     # WSGI module name

    echo "Starting $NAME as `whoami`"

    # Activate the virtual environment
    cd $DJANGODIR
    source ../bin/activate
    export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
    export PYTHONPATH=$DJANGODIR:$PYTHONPATH

    # Create the run directory if it doesn't exist
    RUNDIR=$(dirname $SOCKFILE)
    test -d $RUNDIR || mkdir -p $RUNDIR

    # Start your Django Unicorn
    # Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
    exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
      --name $NAME \
      --workers $NUM_WORKERS \
      --user=$USER --group=$GROUP \
      --log-level=debug \
      --bind=unix:$SOCKFILE

Is there a way to run celery background workers under gunicorn? Is it even referring to the same thing?
Difference between Celery and Gunicorn workers?
OK, I found a solution for me:

    location / {
        rewrite ^ http://web.noc.local/pwm/ last;
    }

    location /pwm {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_max_temp_file_size 0;
        proxy_buffering off;
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
        proxy_pass http://pwm_server;
    }
I have nginx on port 80 and a tomcat on port 8080 configured as upstream.

The war application in tomcat listens on /pwm.

I would like to configure nginx as a reverse proxy for tomcat and rewrite the url "/" to "/pwm".

Example: the user types "web.noc.local" in the browser, and nginx rewrites the url to web.noc.local/pwm and redirects to tomcat on port 8080.

My nginx config:

    upstream pwm_server {
        server 127.0.0.1:8080 fail_timeout=0;
    }

    server {
        listen 80;
        server_name web.noc.local;

        access_log /var/log/nginx/log/web.noc.local.access.log main;
        error_log /var/log/nginx/log/web.noc.local.error.log;

        location / {
            if ($is_args != "") {
                rewrite "^$" /pwm break;
                expires 7d;
                proxy_pass http://pwm_server;
            }
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_max_temp_file_size 0;
            proxy_buffering off;
            proxy_connect_timeout 30;
            proxy_send_timeout 30;
            proxy_read_timeout 30;
            proxy_pass http://pwm_server;
        }
    }

Now when I open the url, nothing happens, only a blank screen.

Thanks for the help.
nginx url rewrite for reverse proxy
You should set the environment variable ROOT_URL before executing meteor, i.e.:

    ROOT_URL=http://app.example.com meteor run
I'm currently developing on my server, not on my personal computer, but it seems to be impossible to tell Meteor that, as I'm trying to use Facebook login. The expected login url for app.example.com is

    https://www.facebook.com/dialog/oauth?client_id=&redirect_uri=http://app.example.com/_oauth/facebook?close&

But I always get

    https://www.facebook.com/dialog/oauth?client_id=&redirect_uri=http://localhost:3000/_oauth/facebook?close&

I'm using Nginx as a proxy for the Meteor server, so I should be able to access it pointing to app.example.com, but Meteor seems not to detect it. Where is it changeable?
Meteor + accounts-facebook redirecting to wrong url
Fixed by adding a pythonpath in my ini file, since I have my python files in an app subdirectory, and by using the filename as the module:

    pp=/home/user/projects/python/flask/project/app
    module=filename
I'm trying to set up NGINX, uWSGI and Flask. I'm currently getting a uWSGI error:

    Python application not found

I get some strange errors in my uwsgi error file, which you can find at the bottom of my post.

I'll get straight to it. This is on a fresh VPS running Ubuntu 13.04 64bit; these are the commands I ran:

    sudo apt-get update
    sudo apt-get install build-essential
    sudo apt-get install python-dev
    sudo apt-get install python-pip
    sudo apt-get install nginx
    sudo apt-get install uwsgi
    sudo apt-get install uwsgi-plugin-python
    sudo pip install virtualenv

I then created a virtual environment, activated it and ran pip install flask.

I then made a folder called app and placed a file called hello.py inside it:

    /project
        /app
            -hello.py
        /bin
        /include
        /lib
        /local

This is my NGINX file (the nginx error file is empty):

    server {
        listen 80;
        server_name project.domain.net;

        location / {
            try_files $uri @app;
        }

        location @app {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uwsgi.sock;
        }

        location ~ /\. {
            deny all;
        }
    }

This is my uWSGI ini file:

    [uwsgi]
    chdir = /home/user/projects/python/flask/project
    uid = www-data
    gid = www-data
    chmod-socket = 666
    plugin = python
    socket = /tmp/uwsgi.sock
    module = run
    callable = app
    virtualenv = /home/user/projects/python/flask/project

This is my hello.py file:

    from flask import Flask
    app = Flask(__name__)

    @app.route("/")
    def hello_word():
        return "Hello World!"

    if __name__ == "__main__":
        app.run()

This is my uWSGI error file: https://p.kk7.me/sepukinulu.applescript

It's quite long, so I figured I would paste it in a pastebin-style website. I can edit my post to include it here if this is not ok.

Any help would be greatly appreciated!
Flask, Nginx, uWSGI Python Application not found
"I have tried using post_action and ngx.location.capture, but both of them wait for the subrequest to finish to close the connection."

Take a look at the ngx.eof() documentation.

Update: http://wiki.nginx.org/HttpLuaModule#ngx.eof
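A rough sketch of the usual lua-nginx-module pattern: answer the client, call ngx.eof() to close the connection, and hand the slow work to a zero-delay timer that runs detached from the request (the location name and processing step are illustrative):

    location /image {
        content_by_lua_block {
            -- serve whatever is currently cached, right away
            ngx.say("cached image bytes")

            -- close the client connection now
            ngx.eof()

            -- continue the time-consuming transformation detached
            -- from the request via a zero-delay timer
            ngx.timer.at(0, function(premature)
                if premature then return end
                -- slow image transformation goes here
            end)
        }
    }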
I need to do some time-consuming processing on images that are served by NGinx, and I'd like to be able to respond quickly with partially processed images from cache.

Here are the steps I'd like:

1. User makes first request for image A
2. User gets image A without any processing
3. Connection is freed
4. Image A is put in cache (A0)
5. A "detached" subrequest is started (S1) [first image transformation]
6. Until subrequest S1 is done, all requests for image A get A0
7. When subrequest S1 is done, the cache value is replaced with its results (A1)
8. From now on, all requests for image A get A1
9. A "detached" subrequest is started (S2) [second image transformation]
10. Until subrequest S2 is done, all requests for image A get A1
11. When subrequest S2 is done, the cache value is replaced with its results (A2)
. . . and so on

I'm using the NGinx Lua module to process the images, and I'd like to be able to use the proxy_cache functionality (LRU clean up, TTL, etc.)

I have tried using proxy_pass, post_action and ngx.location.capture, but all of them wait for the subrequest to finish to close the connection. I've seen some solutions like Drupal Cache Warmer that issue an OS call to curl, but if possible I'd like not to do that.

This is my test case so far:

    server {
        listen 8080;

        location / {
            rewrite_by_lua '
                ngx.say(".")
                --res = ngx.location.capture("/internal")
                ngx.exit(ngx.OK)
            ';
            proxy_pass http://127.0.0.1:8080/internal;
        }

        location /internal {
            content_by_lua '
                ngx.log(ngx.ERR, "before")
                ngx.sleep(10)
                ngx.say("Done")
                ngx.log(ngx.ERR, "after")
                ngx.exit(ngx.OK)
            ';
        }
    }
Nginx detached subrequest
The two options are not so different after all. The only difference is that in option 2 you only have one copy of the code on your disk.

In any case, you still need to run different worker processes for each instance, as Redmine (and generally most Rails apps) doesn't support database switching for each request, and some data regarding a certain environment is cached in the process.

Given that, there is not really much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow the proper initialization for the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that, but it's (in my eyes) rather brittle and leads to a rather non-standard system.

But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
I'm studying the best way to have multiple redmine instances on the same server (basically I need a database for each redmine group).

Until now I have 2 options:

1. Deploy a redmine instance for each group
2. Deploy one redmine instance with multiple databases

I really don't know what the best practice is in this situation; I've seen some people doing this both ways.

I've tested the deployment of multiple redmines (3 instances) with nginx and passenger. It worked well, but I think with a lot of instances it may not be feasible. Each app needs around 100 MB of RAM, and with increasing requests it tends to allocate more processes to the app. This scenario seems bad if we had a lot of instances.

Option 2 seems reasonable; I think I can implement that with rails environments. But I think there are some security problems related to sessions (I think a user of site A would be allowed to perform actions on site B after an authentication on A).

Is there any good practice for this situation? What's the best practice to take in this situation?

Another requirement related to this: we must be able to create or shut down a redmine instance without interrupting the others (e.g. we should avoid server restarts).

Thanks for any advice and sorry for my english!

Edit: My solution: I used a redmine instance for each group. I used nginx+unicorn to manage each instance independently (because passenger didn't allow me to manage each instance independently).
Multiple redmine instances best practices
I managed to fix this problem after spending lots of time on it.

There's a 3rd party module for nginx, HttpSubsModule, which allows you to replace strings in the response body (e.g. html).

So the problem can be fixed by:

    location / {
        proxy_pass http://localhost:8080/webApp1;
        subs_filter_types text/html;
        subs_filter '/webApp1' '';
    }

It'll remove all the context '/webApp1' from the html responses.

Hope this helps others who encounter this problem too.

z.
I'm trying to deploy a web application on a tomcat server fronted with Nginx. The problem I encounter is that a tag in my jsp pages prints out an "incorrect" context path (it is correct from tomcat's point of view).

My web app on tomcat is deployed on context path /webApp1, with tomcat running on port 8080. So the web application is accessible via http://localhost:8080/webApp1.

My nginx is configured to proxy_pass as follows:

    location / {
        proxy_pass http://localhost:8080/webApp1;
    }

With this configuration, the web app is supposed to work with the url http://localhost.

This only works for the home page text. The home page loads successfully, but all the links on the home page have /webApp1 prefixed, as tomcat thinks it is running by itself and hence outputs the context path as a prefix for all links.

Has anyone fixed this before?

All answers are much appreciated.

z.
Context path for tomcat web application fronted with Nginx as reverse proxy
Your best option is to first have a standardized directory structure (e.g. c:\www\example.com). Then, in each site directory, have a directory for your root and for conf files. Then you'd use this in the http {} section of your main nginx.conf:

    include c:/www/*/conf/nginx.conf;

Each site's nginx.conf will then get loaded when you start nginx or issue a reload. So if you have these two paths for a site, you are set:

    c:\www\example.com\conf\
    c:\www\example.com\htdocs\

Web files go in htdocs and nginx.conf goes in conf. Simple as that.

I never used nginx on Windows, so I assume its conf uses forward slashes like *nix.
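Each per-site nginx.conf then just needs an ordinary server block; a sketch for the example.com layout above:

    # c:/www/example.com/conf/nginx.conf
    server {
        listen 80;
        server_name example.com;

        location / {
            root c:/www/example.com/htdocs;
            index index.html index.htm;
        }
    }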
I have installed nginx on Windows and put an nginx.conf in my http root directory, but it seems this path is not included. I can include it by including c:/http_default/nginx.conf, but I want nginx to automatically include any nginx.conf for the current working directory.

Example: for http://mydomain.com/test/index.php, I want c:/http_default/test/nginx.conf to be included.
How to config nginx to read nginx.conf in current working directory?
Sounds like MongoDB will be a good fit for this: fast updates with advanced operators, and M/R for batch offline processing. I think CherryPy behind Nginx should work well too. If you go the mod_wsgi route, just watch out for this issue.
This question is related to an older question:MySQL tracking system. In short: I have to implement a tracking system that will have high loads using Python. For the database part I've settled on mongoDB (which sounds like the right tool for this job). The development language will be Python.I was thinking of using several instances of a CherryPy application behind nginx. The reasoning behind this is that I don't want to handle all the wsgi part myself, but on the other hand I don't need a full blown web framework since the app will be simple and there's no need for ORM.My questions are:Should I use the CherryPy builtin server or should I use Apache with modwsgi (or another server altogether)?Does this sound like a reasonable approach (nginx, mongoDB)? If not what would you recommend?Thank you in advance.
Tracking system and real time stats analysis in Python
I found the following to be the minimal starting configuration that serves contents from the given html directory in the current $PWD directory.

Run nginx -p $PWD -e stderr -c nginx.conf with nginx.conf being:

    # Run nginx using:
    # nginx -p $PWD -e stderr -c nginx.conf

    daemon off; # run in foreground
    events {}
    pid nginx.pid;

    http {
        access_log /dev/stdout;

        # Directories nginx needs configured to start up.
        client_body_temp_path .;
        proxy_temp_path .;
        fastcgi_temp_path .;
        uwsgi_temp_path .;
        scgi_temp_path .;

        server {
            server_name localhost;
            listen 127.0.0.1:1234;
            location / {
                root html;
            }
        }
    }

I tested this with nginx version: nginx/1.22.0.

If you then create a file to serve, such as

    mkdir html
    echo hi > html/myfile

you can visit http://localhost:1234/myfile in the browser.

Explanations

You can see what the CLI flags do in nginx -h.

-e stderr is used because otherwise nginx tries to use its default error.log location even just to report errors in the config file.

-p $PWD is used because nginx requires absolute paths for its prefix directory.

When referring to a directory such as client_body_temp_path .; the . will effectively be inside the given -p prefix directory.
I wish to just run nginx on the command line, in the foreground, as my own user, with configs and files to serve from the current directory.

What is the minimal configuration and CLI invocation that will start nginx?
Minimal nginx configuration to start it from the command line
Use the HLS protocol (HTTP Live Streaming). Nginx knows how to serve HTTP perfectly, so you just need to create and update the playlist and fragments of the HLS stream, as well as handle the removal of old fragments. For this there is the nginx-rtmp-hls module. It is located in the hls directory, but it is not built by default since it requires the libavformat library included in the ffmpeg package. To build nginx with HLS support, you need to add this module explicitly during configuration:

    ./configure --add-module=/path/to/nginx-rtmp-module --add-module=/path/to/nginx-rtmp-module/hls

To generate HLS, just specify the following directives:

    application myapp {
        live on;
        hls on;
        hls_path /tmp/hls;
        hls_fragment 5s;
    }

And finally, in the http {} section, configure serving everything related to HLS:

    location /hls {
        root /tmp;
    }

To show the stream in a browser, create an html page with an HLS-capable player pointed at the generated playlist.

Update 1: You attached a link to an Nginx setup tutorial, so I'm referencing their "Compiling NGINX with the RTMP Module" step, with changes related to the HLS module:

    $ cd /path/to/build/dir
    $ git clone https://github.com/arut/nginx-rtmp-module.git
    $ git clone https://github.com/nginx/nginx.git
    $ cd nginx
    $ ./auto/configure --add-module=../nginx-rtmp-module --add-module=../nginx-rtmp-module/hls
    $ make
    $ sudo make install
I have followed along with the documentation/tutorial on how to set up the config file for RTMP streaming from here: https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/ and it is pretty straightforward. However, I am not sure how I can have my backend, built on Flask, redirect the stream to some HLS/DASH video player that is embedded in an HTML template sent in response to a client that requested a specific HTTP endpoint. The tutorial shows how to view the stream locally in a VLC media player, but not how to embed it in an HTML file that gets sent to the client. How would I go about doing this? For reference, I am hosting my website on Heroku, set up with its Nginx buildpack from here, https://github.com/heroku/heroku-buildpack-nginx, and I am not sure if I need to have Heroku install additional dependencies to set up an RTMP server and listen for a stream.
Nginx RTMP with Flask
You should check the ECS_Execution_Role_Policy. It should contain the logs permissions, like:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ecr:GetAuthorizationToken",
                    "ecr:BatchCheckLayerAvailability",
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:BatchGetImage",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "*"
            }
        ]
    }

You should also configure the ecs_agent's config for the awslogs driver. The config file path is /etc/ecs/ecs.config on the host. The file should look like:

    ECS_CLUSTER=test_ecs_cluster
    ECS_AVAILABLE_LOGGING_DRIVERS=["awslogs","json-file"]

See the ECS documentation for details.
My nginx Dockerfile:

    FROM nginx:1.15.12-alpine

    RUN rm /etc/nginx/conf.d/default.conf
    COPY ./nginx/nginx.conf /etc/nginx/conf.d

    # Forward request logs to Docker log collector
    RUN ln -sf /dev/stdout /var/log/nginx/access.log \
        && ln -sf /dev/stderr /var/log/nginx/error.log

    EXPOSE 80

    ENTRYPOINT ["nginx", "-g", "daemon off;"]

My container from my task definition for ECS:

    [
        {
            "name": "nginx",
            "image": "",
            "networkMode": "awsvpc",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "protocol": "http"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "mygroup",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "nginx"
                }
            },
            "essential": true
        }
    ]

Yet when the task is deployed, it fails, and in CloudWatch I see the following (screenshot).

I'm very new to ECS / CloudWatch. How can I see the NGINX errors from the failing container?
ECS Fargate NGINX container not showing errors in CloudWatch logs
If I understood correctly, you have static resources on the editor and dashboard upstreams, and in both cases the URL is the same:

    /static/some.resource

Because you cannot differentiate based on the URL, you could configure nginx to try whether the file exists on dashboard first and proxy the request to editor if not found:

    upstream editor {
        server editor:80;
    }

    upstream dashboard {
        server dashboard:80;
    }

    server {
        location /static {
            # Send 404s to editor
            error_page 404 = @editor;
            proxy_intercept_errors on;
            proxy_pass http://dashboard;
        }

        location @editor {
            # If dashboard does not have the file try with editor
            proxy_pass http://editor;
        }
    }

See also: nginx – try files on multiple named location or server.

Hope it helps.
I'm trying to use Nginx as a reverse proxy to serve two containers. Here is part of my Nginx conf file:

    upstream dashboard {
        server dashboard:80;
    }

    upstream editor {
        server editor:80;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://dashboard;
        }

        location /editor/ {
            rewrite ^/editor(.*)$ $1 break;
            proxy_pass http://editor;
        }

I'm getting 404 errors when I navigate to the /editor url in my browser, because the page submits requests for static resources that reside in the upstream container "editor".

I'm pretty new to Nginx, but I presume that when it receives a request with the url

    http://example.com/static/css/2.3d394414.chunk.css

Nginx has no way of knowing that the corresponding css is located inside the editor container. How can I amend the configuration to fix this problem? I've seen some configurations which provide a common path to any static assets, but I need a solution that can handle assets within docker containers.
Can't serve static assets from docker containers behind Nginx reverse proxy
The answer is like in this post: https://stackoverflow.com/a/52319161/3093499

The only change is putting the resolver and the set variable into the server body instead of the location.
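Applied to the configuration from the question, that looks roughly like this (Docker's embedded DNS address 127.0.0.11 and the ui upstream are taken from the question):

    server {
        listen 80;

        # resolver and the variable moved up into the server body
        resolver 127.0.0.11 valid=5m;
        set $ui http://ui:9000/backend;

        location / {
            proxy_pass $ui;
        }
    }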
I use an nginx container with this config:

    set $ui http://ui:9000/backend;
    resolver 127.0.0.11 valid=5m;
    proxy_pass $ui;

This is needed because the "ui" container won't necessarily be up when nginx starts. This avoids the "host not found in upstream..." error.

But now I get a 404 even when the ui container is up and running (they are both in the same network defined in the docker-compose.yml). When I proxy_pass without the variable, without the resolver, and start the ui container first, everything works.

Now I am looking for why Docker fails to resolve it. Could I maybe manually add a fake route to http://ui which gets replaced when the ui container starts? Where would that be? Or can I fix the resolver?
Docker Compose, Nginx, resolver not working
As per your configuration, service-worker.js must be in the root directory defined with the root nginx directive.

Please check whether the file is present there. If you are using express and express.static and have placed the file in the public/assets directory, it won't work. If you want a different location for this file, you can use the alias directive.
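A sketch of the alias variant (the on-disk path is illustrative; point it wherever your build actually puts the file):

    location = /service-worker.js {
        # serve the file from its real location, independent of the server root
        alias /var/www/app/build/service-worker.js;
        add_header Cache-Control "no-cache";
    }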
I am trying to set a cache header for the service worker through nginx in a create-react-app project. In the configuration, I tried:

    location /service-worker.js {
        add_header Cache-Control "no-cache";
        proxy_cache_bypass $http_pragma;
        proxy_cache_revalidate on;
        expires off;
        access_log off;
    }

However, when I load my page, the sw registration fails with the message:

    A bad HTTP response code (404) was received when fetching the script.
    registerServiceWorker.js:71 Error during service worker registration: TypeError: Failed to register a ServiceWorker: A bad HTTP response code (404) was received when fetching the script.

Can someone please suggest a way with nginx using create-react-app?
Create react app service worker nginx no cache configuration
I finally found the solution: I took inspiration from this script and created one using WEBROOT MODE.

I created a git repo to share this solution: https://github.com/SammyHam/LetsEncrypt-SSL-config-for-Elastic-Beanstalk
For my nodejs application in Elastic Beanstalk, without the Beanstalk load balancer, I want to set up a Let's Encrypt certificate and keep the classic domain provided by AWS: xxx.xxxx.elasticbeanstalk.com

After several searches I found two possible solutions:

1 - Using an .ebextensions file to install Certbot, get a Let's Encrypt certificate and configure Nginx. A great post about that: https://bluefletch.com/blog/domain-agnostic-letsencrypt-ssl-config-for-elastic-beanstalk-single-instances/

2 - From an ssh connection, install Certbot, generate a certificate and upload it to AWS IAM. AWS docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-ssl-upload.html

For both solutions I get the same error message during domain verification by Certbot. I think the directory generated by Certbot for the verification isn't accessible.

Error:

    To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.

So, my question is: what's the best way to set up an SSL certificate and get the green lock for a Node.js Elastic Beanstalk application without the Beanstalk load balancer?

Thank you for your help.
AWS Elastic Beanstalk - NodeJS : Get certificate SSL from Letsencrypt without Beanstalk Load Balancer
The 0|1 should be within parentheses or redefined as a character class.

The rewritten URI needs a leading /, as all nginx URIs have a leading /.

So all of these should be equivalent:

    rewrite ^/(.*)/page/(0|1)$ /$1 last;
    rewrite ^/(.*)/page/[01]$ /$1 last;
    rewrite ^(/.*)/page/[01]$ $1 last;

There's a useful website for testing regular expressions.
Sorry if this question has been asked many times. I can't make nginx do a proper rewrite. I need to remove the last part of the url. For example, these are the urls I have:

    https://mydomain.com/this/is/some/url/page/0
    https://mydomain.com/this/is/some/url/page/1

I need to rewrite both of these to:

    https://mydomain.com/this/is/some/url

This is what I have tried so far:

    location / {
        ...
        rewrite ^/(.*)/page/0|1$ $1 last;
        ...
    }

But it does not work. It seems correct to me. What is wrong with that? (I hate regex.)

Edit:

    location / {
        # Remove trailing double slashes.
        if ($request_uri ~ "^[^?]*?//") {
            rewrite "^" $scheme://$host$uri permanent;
        }

        # Remove trailing slashes.
        rewrite ^/(.*)/$ /$1 permanent;

        # Rewrite page/0 and page/1 from url.
        rewrite ^/(.*)/page/[01]$ /$1 last;

        proxy_pass http://backend_web;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
nginx rewrite url and remove last part
Now that you have your blog working at http:///blog, you need to fix a few more things:

    location ^~ /blog {
        proxy_pass http://<>/blog;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect http:/// https://$host/;
        proxy_cookie_domain $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

Also add the below to your wp-config.php:

    define('FORCE_SSL_ADMIN', true);
    if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] ) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
        $_SERVER['HTTPS']='on';
I have a Ruby on Rails application and a Wordpress blog hosted on separate EC2 instances.

I'm trying to make the Wordpress blog act like a subfolder of the Rails application (example.com/blog instead of blog.example.com) for better SEO.

The Rails application can be accessed through http and https (http redirects to https).

https://www.nginx.com/resources/admin-guide/reverse-proxy/

I tried using nginx's reverse proxy function and I think it's my best option right now, but my attempt was unsuccessful. The main page of the blog opens as expected (example.com/blog) but without css. A URL with arguments (example.com/blog/args) redirects me back to the Rails application (example.com/args).

I set the desired blog url in wp-config.php as the following:

    define('WP_SITEURL', 'https://www.example.com/blog');
    define('WP_HOME', 'https://www.example.com/blog');

This is the nginx configuration I use:

    location ^~ /blog {
        proxy_pass http://<>/blog;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

https://github.com/waterlink/rack-reverse-proxy

I also tried the rack-reverse-proxy gem but got the same result.

It's really important for the Rails application and the Wordpress blog to stay separated for auto-scaling, redundancy and deployment purposes.

If there's another way of achieving this, I'm open to suggestions.
nginx reverse proxy from rails to wordpress
Try to add this in your AppModule:

    import { HashLocationStrategy, LocationStrategy } from '@angular/common';

    @NgModule({
        // ...
        providers: [{provide: LocationStrategy, useClass: HashLocationStrategy}],
        // ...
    })
    export class AppModule {}
I have an Angular 4 application deployed on a remote server with Nginx, accessible at this address: http://xx.xx.xx.xx/app. The app works well, I can navigate in my website, but when I refresh a page, for example http://xx.xx.xx.xx/app/page1, it displays the index.html page of nginx.

Nginx's pages are located in /usr/share/nginx/html, and my app's pages are located in /var/www/app.

nginx.conf:

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        include /etc/nginx/default.d/*.conf;

        location /app {
            root /var/www;
            try_files $uri $uri/ /index.html;
        }
    }

It seems that the line root /var/www is not taken into account during a refresh.

I heard something about a bug with try_files, and it seems that I need to add a rewrite somewhere. Thanks for the help.
Angular 4 - Nginx / Refresh shows bad page
I managed to get this done largely using the ingress here: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx

The exception is that on the ingress service I added the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation pointing to my certificate ARN, and set the targetPort of both ports to 80.
I set up a kubernetes cluster in AWS using kops; now I want to set up an NGINX ingress controller and terminate TLS with an AWS managed certificate. The topology, in my understanding, is: the AWS ELB faces the internet and terminates TLS, then forwards unencrypted traffic to the ingress service, which then dispatches.

I've deployed the ingress controller from https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx except I used annotations as described on top of https://github.com/kubernetes/ingress/issues/71 to add the certificate.

I add the route in Route53 and open my browser to the https address, and I get a 400 response from NGINX with the message "The plain HTTP request was sent to HTTPS port".

What am I doing wrong?

This is my ingress resource:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
      name: dispatcher
      namespace: test
    spec:
      rules:
      - host: REDACTED
        http:
          paths:
          - backend:
              serviceName: REDACTED
              servicePort: 80
            path: /api/v0
How to set up kubernetes NGINX ingress in AWS and SSL termination
If you're able to move the blacklist outside of the module's context, perhaps to a system file, a KV store, or SHM, that would allow each process to consult a central blacklist. I believe shmat() and futexes will do the job, and the overhead will be negligible.
Let's suppose I wish to write an nginx module that blocks clients by IP. In order to do so, at the initialization stage I read a file with the IP addresses I have to block (a blacklist) and store it in the module's context.

Now I wish to update the blacklist without restarting nginx. One possible solution is to add a handler on a specific location, e.g. if the uri "/block/1.2.3.4" is requested, my handler adds the IP address 1.2.3.4 to the blacklist.

However, nginx runs several workers as separate processes, so only one particular worker will be updated.

What is a common pattern for coping with such problems?
How to update internal state of nginx' module runtime?
For troubleshooting this kind of problem, the --resolve option can be useful:

    curl -k -I --resolve www.example.com:80:192.0.2.1 https://www.example.com/

    Provide a custom address for a specific host and port pair. Using this, you can make the curl requests(s) use a specified address and prevent the otherwise normally resolved address to be used. Consider it a sort of /etc/hosts alternative provided on the command line. The port number should be the number used for the specific protocol the host will be used for. It means you need several entries if you want to provide address for the same host but different ports.

Especially if the site you're trying to fetch from uses SNI: in that case you can use the --resolve option to specify the server name that gets used in the TLS client hello.

One troubleshooting step to try: update curl or compile it yourself from the sources and retry. For one thing, some curl versions (e.g., macOS) supposedly don't send SNI for -k/--insecure.

If that's the issue you've hit and you can't replace curl, there's a workaround you can use that essentially involves creating your own CA, private keys and CSRs, plus tweaks to your haproxy.

After setting it up, then in place of specifying -k/--insecure, you use --cacert or --capath:

    curl https://example.com/api/endpoint --cacert certs/servers/example.com/chain.pem
    curl https://example.com/api/endpoint --capath certs/ca

If the issue you've hit is due to SNI, you may also troubleshoot it with a site like https://sni.velox.ch/:

    curl --insecure https://sni.velox.ch/

Otherwise, if it's not SNI, then I recall seeing somewhere that -k/--insecure may not work as expected with some proxy configurations. So if you are going through some kind of proxy from the client side and you could somehow test directly without the proxy, that might be worth exploring.
When I open a url using curl without -k, my request passes and I am able to see the expected result:

    $ curl -vvv https://MYHOSTNAME/wex/archive.info -A SUKU$RANDOM
    * Trying 10.38.202.192...
    * Connected to MYHOSTNAME (10.38.202.192) port 443 (#0)
    * TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    * Server certificate: *.MYCNAME
    * Server certificate: ProdIssuedCA1
    * Server certificate: InternalRootCA
    > GET /wex/archive.info HTTP/1.1
    > Host: MYHOSTNAME
    > User-Agent: SUKU19816
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.10.2
    < Date: Thu, 26 Jan 2017 01:08:40 GMT
    < Content-Type: text/html;charset=ISO-8859-1
    < Content-Length: 19
    < Connection: keep-alive
    < Set-Cookie: JSESSIONID=1XXXXXXXX3E58093E816FE62D81; Path=/wex/; HttpOnly
    < X-WebProxy-Id: 220ffb81872a
    <
    status=Running
    * Connection #0 to host MYHOSTNAME left intact

But when I open the same url with -k, it fails. This makes no sense to me, since in my understanding the purpose of -k is only to skip certificate verification:

    $ curl -vvv https://MYHOSTNAME/wex/archive.info -A SUKU$RANDOM -k
    * Trying 10.38.202.192...
    * Connected to MYHOSTNAME (10.38.202.192) port 443 (#0)
    * Server aborted the SSL handshake
    * Closing connection 0
    curl: (35) Server aborted the SSL handshake

Request flow:

1. SSL termination happens on the HAPROXY machine
2. HAPROXY forwards the request to nginx
curl with `-k` and without `-k`
I suggest you forget about Letsencrypt. The value proposition of that service is really focused on "getting that green lock in the browser", which you explicitly say you don't require.

Also, Letsencrypt requires access to your server to verify that the ACME challenge file is there, which means YES, you need every such server to have a publicly reachable domain. So you need to own the domain and have DNS pointing to your specific server, which sounds undesirable in a testing environment.

So in summary, I think you're trying to use the wrong tool for your needs. Try using regular self-signed certificates as described in this question. For that to work, the connecting clients must be set to not verify the certificates.

Or you can take it to the next level and create your own CA. For that to work, you need to make all your containers import that root cert so that they will trust it.

Of course, once you ship the containers/images into production, don't forget to undo these things and get real valid certificates. That's when Letsencrypt will be useful.
During development, test, and staging, we have a variety of docker servers that come and go as virtual machines. Eventually, the docker images under this process will get to a customer machine with well-defined host and domain names. However, until that point all the machines are only on our internal network. In the customer-deployed environment it is the intent that ALL 'http' communication, be it internal or external, is via HTTPS. Given this intent, it is highly desirable to wire all the containers up with useable/testable SSL certificates.

One, two, three, and on and on of MANY docker/letsencrypt/nginx tutorials describe how to do this at the end, but not during the development process. Does anyone know if such a focused setup is possible? Do I need to make the inner-most docker container (ours happens to house a Tomcat webapp) have a public domain? Or is this just completely impractical [even knowing this for certain will be a big help!]? If this usage is possible, might anyone know (or have) specifics on what needs to be done to get this functional?

UPDATE

In case it wasn't clear from the above: I want to ship Docker containers, one of which will probably be a letsencrypt/nginx proxy. There are many to choose from on Docker Hub. However, I can't figure out how to set up such a system for development/test where all the machines are on an internal network. The certificates can be 'test'; the need is to allow HTTPS/TLS, not a green lock in Chrome! This will allow for a huge amount of testing (i.e. HTTP properly locked down, TLSv1.0 turned off to avoid certain vulnerabilities, etc, etc).
Docker: LetsEncrypt for development of "Https everywhere"
Answering my own question. This issue is rather a configuration problem and was caused by my own fault.

Basically, I hadn't posted the ReplicationController of the Nginx-Ingress resource. It was missing port 2222, so now it looks like:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: {{ template "fullname" . }}
      labels:
        k8s-app: "{{ .Chart.Name }}"
        chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      replicas: 1
      selector:
        k8s-app: "{{ .Chart.Name }}"
      template:
        metadata:
          labels:
            name: {{ template "fullname" . }}
            k8s-app: "{{ .Chart.Name }}"
            chart: "{{.Chart.Name}}-{{.Chart.Version}}"
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
            name: "{{ .Chart.Name }}"
            imagePullPolicy: Always
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              timeoutSeconds: 1
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            ports:
            - containerPort: 80
              hostPort: 80
            # we do need to expose 2222 to be able to access this port via
            # the tcp-services
            - containerPort: 2222
              hostPort: 2222
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap-ssh
I have a small kubernetes (1.3) cluster (basically one node) and would like to install gogs there. Gogs is "installed" using Helm. I have the following templates in my helm chart:

- Deployment (using image gogs:0.9.97, having containerPort 3000 (http) as well as 2222 (ssh))
- Ingress (this is only for port 80)
- Service (port 80 (http) as well as 2222 (ssh))

The http side is configured correctly and I can access the container as well as the contained git repositories via http without any trouble. Now I would like to use ssh for the git connections as well. I tried the --tcp-services-configmap configuration of nginx-ingress, but to no avail. The log of the Ingress Controller states that the configured service does not have any active endpoints, which I find rather strange, since the http side is working.

UPDATE: I just ran nmap against the DNS name and port 2222 is not open, so this looks like a configuration problem. The port is open on the container (tested by connecting to the cluster IP from the node). My guess is that the problem is what the Ingress Controller log states: the configured service does not have any active endpoints.

My Service configuration is:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: ssh
    port: 2222
    targetPort: ssh
    protocol: TCP
  selector:
    app: {{ template "fullname" . }}

The ConfigMap is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-ssh
data:
  2222: "default/{{ template "fullname" . }}:2222"
Access Kubernetes Git Container via Ingress via HTTP as well as SSH
In general, security reasoning says that running as root is bad. If there were any kind of bug, for example a code execution bug that allows anybody to execute arbitrary code, an attacker would be able to destroy your entire system. If you don't run the process as root, a code execution vulnerability would need to be paired with a secondary privilege escalation vulnerability to do the same damage. In a docker container this is mitigated slightly, in that you can recover the old system relatively easily; however, it is still bad practice to let processes run as root, because a malicious attacker can and will steal whatever information exists on your server or turn your server into a malware delivery mechanism.
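For illustration, uWSGI itself can start as root (e.g. to bind a privileged port) and then drop privileges via its uid/gid options. This is a minimal sketch; the module name, user, and socket address are assumptions, not from the question:

[uwsgi]
# drop root privileges after startup; www-data is an illustrative user
uid = www-data
gid = www-data
# illustrative Flask entry point (app.py containing "app")
module = app:app
# uwsgi-protocol socket for an nginx upstream
socket = 127.0.0.1:8001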
I am reading through the uWSGI documentation and it warns toalways avoid running your uWSGI instances as root. What is the reason behind this?Does it matter if it is the only process (besides nginx) running in a docker container, serving up a flask application?
Why not run uwsgi instances as root
Nginx provides only the Purge method for invalidating its cache, which is just one of the four methods Varnish offers, and not even the best option for your scenario. Moreover, I strongly recommend Varnish over Nginx for caching web pages because it is a purpose-built caching tool. Nginx can be quite good at delivering static content, but it writes all cached content to disk, which is slower than Varnish keeping it in memory.
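For completeness, purging in open-source Nginx requires the third-party ngx_cache_purge module; a rough sketch of how such a setup usually looks, with the zone name, backend address, and purge location all illustrative:

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_cache app_cache;
        proxy_cache_key $uri$is_args$args;
        proxy_pass http://127.0.0.1:8080;
    }

    # requires the third-party ngx_cache_purge module to be compiled in
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        proxy_cache_purge app_cache $1$is_args$args;
    }
}

Note this still invalidates only one URL per request; it is not tag-based, which is exactly the gap in your scenario.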
We're about to set up a cache and reverse proxy for our site, and we're deciding whether to use Varnish or Nginx. We have complex cache-busting requirements, and we effectively require surrogate-key (or tag-based) cache invalidation. Varnish offers Hashtwo with this functionality. Does Nginx offer this in any form?
Nginx caching: tag-based cache-busting like Varnish Hashtwo
First I had to allow the Docker bridge of my Docker network to route traffic to the host. This is a bit cumbersome, as the id of the network bridge for my Docker network is generated by Docker, so I had to manually add a rule to iptables. Second, I am using Letsencrypt server certificates, and the Letsencrypt CA is not part of the default Java truststore; therefore I had to add it to the following truststore: $JAVA_HOME/jre/lib/security/cacerts. Works!
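For reference, the two steps look roughly like this; the bridge interface name and the CA file name are placeholders, since the real values depend on your Docker network and on the certificate you download:

# allow traffic from the Docker bridge to the host (bridge id is an example)
iptables -A INPUT -i br-1a2b3c4d5e6f -j ACCEPT

# import the Let's Encrypt CA into the JVM truststore
# (the default keystore password is "changeit")
keytool -importcert -trustcacerts \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit \
    -alias letsencrypt-ca \
    -file lets-encrypt-ca.pem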
I have problems with this specific container configuration and with making the Atlassian tools use their Application Links flawlessly. I have some Atlassian applications running inside docker containers: Jira, Confluence, Crowd. All containers are on the same server behind nginx: Nginx -> Confluence, Nginx -> Jira, Nginx -> Crowd. I access the containers over the nginx https proxy with the following subdomains:

https://confluence.example.com
https://jira.example.com
https://crowd.example.com

How do I have to set up the Docker network (or networking in general) so that Jira can access Confluence via the URL https://confluence.example.com and Confluence can access Jira via the URL https://jira.example.com?
Atlassian Application Links Inside Docker
You generally have one folder per Dockerfile, because:

- each one can use multiple other resource files (config files, data files, ...) when doing its respective docker build -t xxx .
- each one can have its own .dockerignore

My b2d project, for instance, has one Dockerfile per application.
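A sketch of what such a layout could look like (the folder names are illustrative):

repo/
├── node-app/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── src/...
├── nginx/
│   ├── Dockerfile
│   └── nginx.conf
└── redis/
    └── Dockerfile

Each image is then built from its own folder as the build context:

docker build -t myorg/node-app ./node-app
docker build -t myorg/nginx    ./nginx
docker build -t myorg/redis    ./redis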
What is the best way to structure a repository / project that has multiple Dockerfiles for provisioning services? e.g.

Dockerfile   # build Nodejs app server
Dockerfile   # build Nginx forward proxy
Dockerfile   # build Redis cache server

What are the best practices and the standard structure within a repository to contain this information?
Git repo structure with multiple docker files
Based on the Shiny docs this is a Shiny Server Professional feature only, and you need to use the whitelist_headers directive to get those headers:

4.9 Proxied Headers. Typically, HTTP headers sent to Shiny Server will not be forwarded to the underlying Shiny application. However, Shiny Server Professional is able to forward specified headers into the Shiny application using the whitelist_headers configuration directive, which can be set globally or for a particular server or location.

Update: I just tested the whitelist_headers option in a non-Pro Shiny Server install, and I can't get the custom headers to show. I did verify that the headers were sent on by nginx, by using netcat to show me the incoming data (nc -l 8080 and a quick change to proxy_pass in the nginx.conf file).

Update 2: I can't get nginx to log the HTTP_SEC_WEBSOCKET_KEY header (the Authorization header is logged after specifying it in the log specification) and I can't see it in the traffic between nginx and Shiny Server. I think it boils down to either getting Shiny Server Professional or modifying the Shiny source code to pass the Authorization header to the application.
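For reference, the Professional directive the docs describe would sit in /etc/shiny-server/shiny-server.conf roughly like this; a sketch based on the quoted documentation, with an illustrative header list, and, per the test above, it has no effect on the open-source edition:

server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    # Shiny Server Professional only
    whitelist_headers Authorization X-Forwarded-For;
  }
}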
I've successfully implemented an nginx reverse proxy for my shiny-server in order to have SSL and user authentication. However, there is still a gap I can't figure out: is there a way for my shiny app to determine which user is actually logged in?

Here's my /etc/nginx/sites-available/default:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name myserver.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/shiny.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:3838;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:3838 https://myserver.com;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
    }
}

With the last two lines of my location block I expected to get a header with the user name; I found that tip here. I found this, which allows me to see my header information, but none of the headers I see contain my user name.

Edit: With Bert Neef's citation, I see why the above didn't work. However, the server does have access to the HTTP_SEC_WEBSOCKET_KEY header, which is unique across sessions. It seems that if we can get nginx to log that value, the server can match the header up to an actual user. That said, I don't know if this is feasible, nor how to get nginx to log that value.
Can shiny determine the user who logged in to the nginx reverse proxy
As @Memes pointed out, I made a mistake in my location block:

location /demos/demo-magento2/ {
    set $MAGE_ROOT /mnt/storage/demo-magento2/;
    set $MAGE_MODE developers;
    include /mnt/storage/demo-magento2/nginx.conf.sample;
}

should be:

location /demos/demo-magento2/ {
    set $MAGE_ROOT /mnt/storage/demo/demo-magento2/;
    set $MAGE_MODE developers;
    include /mnt/storage/demo/demo-magento2/nginx.conf.sample;
}

The key is that I was missing the demo/ subfolder in the path. The nginx error was actually pretty self-explanatory!
Using Nginx 1.4.6 on Ubuntu, I'm trying to configure Magento 2 to run in a subfolder. I already have some other projects in /var/www that are set up like so:

server {
    server_name website.com;
    root /var/www/;

    location /p1/ {
        # config
    }

    location /p2/ {
        # config
    }
}

But now my Magento installation is located at /mnt/storage/demo/demo-magento2 and I can't find a way to include it in this server block. I tried to use their sample configuration for Nginx (https://github.com/magento/magento2/wiki/Nginx-Configuration-Settings-and-Environment-Variables), so I added this location block to my server block configuration:

location /demos/demo-magento2/ {
    set $MAGE_ROOT /mnt/storage/demo-magento2/;
    set $MAGE_MODE developers;
    include /mnt/storage/demo-magento2/nginx.conf.sample;
}

And Nginx keeps returning this error:

2015/10/19 18:15:04 [emerg] 6250#0: location "/setup" is outside location "/demos/demo-magento2/" in /mnt/storage/demo-magento2/nginx.conf.sample:27

I am quite new to Nginx, so can someone explain how to figure this out?
Magento 2 in subfolder Nginx
ngx.thread.spawn did not work for me; only this code worked:

access_by_lua '
    local socket = require "socket"
    local conn = socket.tcp()
    conn:connect("10.10.1.1", 2015)
    conn:send("GET /lua_async HTTP/1.1\\n\\n")
    conn:close()
';
How can I duplicate (or create and send) a request with the nginx web server? I can't use post_action, because it is a synchronous method. Also, I compiled nginx with Lua support, but if I try to use http.request with ngx.thread.spawn or coroutine, I find the request is still executed synchronously. How do I solve this?

location ~ /(.*)\.jpg {
    proxy_pass http://127.0.0.1:6081;
    access_by_lua_file '/var/m-system/stats.lua';
}

Lua script (with coroutine):

local http = require "socket.http"
local co = coroutine.create(function()
    http.request("http://10.10.1.1:81/log?action=view")
end)
coroutine.resume(co)
Asynchronous request duplication with nginx
It turned out that Gunicorn was the culprit; putting its configuration into a file resolved the issue.

gunicorn_config.py, put in the same folder as manage.py:

bind = "0.0.0.0:8000"
loglevel = "INFO"
workers = "4"
reload = True
errorlog = "/var/log/gunicorn/error.log"
accesslog = "/var/log/gunicorn/access.log"

And some changes in docker-compose.yml:

app:
  restart: always
  build: src
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes_from:
    - storage_files_1
  env_file: .env
  command: gunicorn --config=gunicorn_config.py barbell.wsgi:application

Now it works as it should.
I've been trying to set up an environment in docker-compose with several containers: Django, Nginx, Postgres, DbData, Storage. I've used the following configuration:

app:
  restart: always
  build: src
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes_from:
    - storage_files_1
  env_file: .env
  command: gunicorn barbell.wsgi:application -b 0.0.0.0:8000 -w 4

nginx:
  restart: always
  build: nginx
  ports:
    - "80:80"
    - "443:443"
  volumes_from:
    - storage_files_1
  links:
    - app:app

postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - storage_data_1
  ports:
    - "5432:5432"

My nginx sites-enabled config file looked like this:

server {
    listen 80;
    server_name localhost;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location /static {
        alias /static/;
        autoindex on;
    }

    location / {
        proxy_pass http://app:8000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}

And it doesn't work: nginx always returns 502, yet serves static files perfectly. I also tried the same setup with uwsgi, no luck. However, when I combine Django with nginx and serve everything from the same container, everything works (again, with both uwsgi and gunicorn). Any idea what I am missing?

Update: here are the nginx logs:

*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.42.1, server: 0.0.0.0, request: "GET / HTTP/1.1", upstream: "http://172.17.1.75:8000/", host: "localhost"
Docker-compose: nginx does not work with django and gunicorn
The docker container stops when the main process in the container stops. I set up a little Dockerfile and a start script to show how this could work in your case:

Dockerfile

FROM nginx
COPY start.sh /
CMD ["/start.sh"]

start.sh

#!/bin/bash
nginx &
sleep 20   # replace sleep 20 with your test of inactivity
nginx -s stop

Build container, run and test:

$ docker build -t ng .
$ docker run -d ng
$ docker ps
CONTAINER ID  IMAGE      COMMAND      CREATED        STATUS        PORTS            NAMES
3a373e721da7  ng:latest  "/start.sh"  4 seconds ago  Up 3 seconds  443/tcp, 80/tcp  distracted_colden
$ docker ps
CONTAINER ID  IMAGE      COMMAND      CREATED         STATUS         PORTS            NAMES
3a373e721da7  ng:latest  "/start.sh"  16 seconds ago  Up 16 seconds  80/tcp, 443/tcp  distracted_colden
$ docker ps
CONTAINER ID  IMAGE      COMMAND      CREATED  STATUS  PORTS  NAMES
$
I am trying to stop a Docker container running Nginx, but only after there has been no activity in the access.log of that Nginx instance for a period of time. Is it possible to stop a Docker container from inside the container? The other solution I can think of is a cron job on the host OS that checks /var/lib/docker/aufs/mnt/[container id]/, but I am planning on starting lots of containers and would prefer not to have to keep a list of IDs.
Stop a Nginx Docker container
auth_request is just for authentication. This little hack should work for you:

error_page 401 /auth;

After an auth error it will go to the /auth location again, this time as an ordinary request.
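Put together with the config from the question, the hack would look roughly like this; a sketch trimmed to the relevant directives:

location /api {
    auth_request /auth;
    # on a 401 from the subrequest, re-enter /auth as a normal request,
    # so the JSON body from the auth service is returned to the client
    error_page 401 /auth;
    proxy_pass http://localhost:8000;
}

location = /auth {
    proxy_pass http://localhost:8000/auth/user;
    proxy_set_header X-Original-URI $request_uri;
}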
I am using nginx as a single point of entry to my entire solution. My goal is to send some URLs to be authenticated before continuing. I have found a module named ngx_http_auth_request_module, which looks exactly right for my problem: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html

I am compiling my nginx like this: ./configure --with-http_auth_request_module

My config looks like this:

http {
    server {
        listen 8080 default_server;
        server_name _;

        location /api {
            auth_request /auth;
            proxy_pass http://localhost:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location = /auth {
            proxy_pass http://localhost:8000/auth/user;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }
    }
}

My problem is that if my server returns an error, like 401, a default page gets shown. What I want is to return the JSON that my authentication service produced; this is useless without it. What is the point of showing just a generic 401 page? I want to return my proxied JSON response. Please help.
nginx http auth request module will only return default error pages
Isn't your access_log also defined in a server block? Have a look at the default config in nginx/sites-enabled/. In that case the value in the http block is overridden by the one in the lower, more specific block.
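A minimal illustration of the shadowing, using the paths from the question:

http {
    # used only by servers that do not set their own access_log
    access_log /var/log/nginx/a.log;

    server {
        listen 80;
        # this more specific setting wins for requests handled by this server
        access_log /var/log/nginx/access.log;
    }
}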
I can't get any changes in the /etc/nginx/nginx.conf http block to be used. I'm starting with the simplest thing: I want to change the name of access.log to something else (i.e. a.log). It is a vanilla nginx install (no custom config files yet). Here's what I know:

- Changing a value in the head of nginx.conf does affect the configuration (changing worker_processes 4 to worker_processes 2 does change the number of workers).
- Making a syntax error in nginx.conf's http block does cause nginx to throw an error on restart.
- Changing access_log to access_log /var/log/nginx/a.log does not modify the location of the log; nginx in fact continues logging to /var/log/nginx/access.log.

Here is a snippet of my nginx.conf file:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/a.log;
    #....
}

Is it something as simple as I'm modifying an http block that gets overridden by some other config file? Thanks for the help.
Changing Nginx Log File Location
You should adjust your location block to use a combination of a rewrite and a proxy_pass directive, such that each request is rewritten dynamically. As your configuration stands, the variables don't appear to be resolved on a per-request basis; according to Nginx's documentation, only the server name, port, and URI can be specified by variables. What you need is something akin to this, rewriting the URI and specifying the proxy host statically:

location @fallback {
    resolver 8.8.8.8;
    rewrite ^.*$ /bucket/$arg_member/$arg_imgname break;
    proxy_pass http://s3.amazonaws.com;
    proxy_set_header Host s3.amazonaws.com;
}

To satisfy your switchable scheme (HTTP and HTTPS), the best option is likely two separate server blocks with two different @fallback locations, one for HTTP and the other for HTTPS. The proxy_pass location cannot resolve $scheme in this context. Details are somewhat sketchy on what is supported (and why), but Nginx's documentation is all we have to go on.
I have a local directory (uploads) and an S3 bucket set up. When the user uploads an image, the file is stored in the local directory: /uploads/member_id/image_name. After 30 minutes the system uploads the files to S3 under the same path: s3.amazonaws.com/bucket/member_id/image_name.

I have set up this rewrite rule on my nginx server, but it's not working. The rule should check whether the file exists locally and, if not, open a proxy to S3. Any idea how to fix it?

location /uploads/ {
    rewrite ^/uploads/(.*)/([A-Za-z0-9\._]*)$ /uploads/$1/$2?imgname=$2&member=$1;
    try_files $uri @fallback;
}

location @fallback {
    resolver 8.8.8.8;
    proxy_pass $scheme://s3.amazonaws.com/bucket/$arg_member/$arg_imgname;
    proxy_set_header Host s3.amazonaws.com;
}
nginx rewrite - proxy if file not exists
It is definitely possible to serve these files using Bottle; you simply serve them as static files. As far as authentication goes, I do not believe Bottle comes with built-in authentication support (as far as I know), so you would run your own check before returning the file. When it comes to performance, it really depends on how you deploy it. In a regular threaded environment, where each request gets its own thread, I highly doubt your server will comfortably serve hundreds of thousands of requests at the same time. However, the documentation notes that greenlets may let you overcome this issue.

Resources:
Bottle static file serving: http://bottlepy.org/docs/dev/tutorial.html#routing-static-files
Bottle greenlets: http://bottlepy.org/docs/dev/async.html#greenlets-to-the-rescue
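A minimal sketch of what that could look like; the session check, route, and file root are assumptions for illustration, not part of Bottle itself:

from bottle import Bottle, request, static_file, abort

app = Bottle()

def is_authenticated(req):
    # placeholder check: replace with your real session/cookie validation
    return req.get_cookie("session") is not None

@app.route("/files/<filename:path>")
def serve_file(filename):
    if not is_authenticated(request):
        abort(401, "Not authorized.")
    # static_file sets Content-Type/Length; download=True forces a download
    return static_file(filename, root="/srv/protected", download=True)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)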
I want to make a Bottle (Python) web service that serves binary files like PDFs, pictures, and executables, with authentication. Is it possible to serve all these files using Bottle? I'm having a hard time finding a tutorial for that. How about performance? Does Bottle handle hundreds of thousands of downloads simultaneously? I am planning to use it with nginx and uWSGI.
How bottle return binary files
It is not possible. Moreover, it is not recommended to blindly list all codes, because:

- nginx allows you to redefine all response codes, including ones you don't really want to redefine except in a few very specific situations (e.g. you don't normally want to redefine 304 (Not Modified), and probably not 302 (Found) unless there are very specific reasons);
- redefining some of the error codes might cause more harm than good (e.g. redefining 400 (Bad Request) is a bad idea).
When looking at the documentation for nginx's error_page directive, it seems that one has to manually list every possible status code that nginx (or an upstream server) could return. For example:

error_page 404 /404.html;
error_page 502 503 504 /50x.html;
error_page 403 http://example.com/forbidden.html;
error_page 404 = @fetch;

Is there any way of producing a wildcard for all status codes that are not specified directly? For example:

error_page 404 /404.html;
error_page 5xx /50x.html;

or

error_page 404 /404.html;
error_page 502 503 504 /50x.html;
error_page @catchall /5xx.html;
nginx: Is it possible to have an 'catch all' error_page?
set_by_lua + ngx.location.capture?
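For anyone landing here with a modern OpenResty: subrequests (ngx.location.capture) are disabled inside set_by_lua in current lua-nginx-module, so a sketch using access_by_lua with lua-resty-redis may be the safer route. Everything below (the addresses, and the assumption that $subdomain is already set as you describe) is illustrative:

location / {
    set $backend "";

    access_by_lua '
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1 second
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local addr, err = red:get(ngx.var.subdomain)
        if not addr or addr == ngx.null then
            ngx.exit(ngx.HTTP_NOT_FOUND)
        end
        ngx.var.backend = addr
    ';

    proxy_pass http://$backend;
}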
Is there any way to read a redis value and store it in an nginx variable? I want to use it for a multi-domain website, where subdomains will point to different IPs. All the subdomains will be stored in redis like this:

"subdomain" => "address_for_proxy_pass"

So what I need is to parse the subdomain (done), store it in a variable (done), then make a redis query (done) and store the result in a variable. How do I do this easily?
Nginx – read value from redis and store it into variable
The SSL protocol by itself (without the SNI extension) uses only the IP address of the server to select the SSL certificate. With SNI the client also passes the hostname (this doesn't work on Windows XP), but that shouldn't be relevant here. Server blocks are not an exact match; nginx picks the "closest" match. Your setup may appear to "work", but requests may be ending up in the wrong server block. It's hard to tell without more information, like the server root. The point is, something will always answer, since you appear to be using a single IP address.
I run nginx for static content and as a proxy to Apache/mod_wsgi serving Django. I have example.com and test.example.com as proxies to Apache/Django, and static.example.com, which serves all static files directly through nginx. I have a wildcard SSL cert so that each of these sub-domains can use SSL (and I only have one IP).

Why is it that when using listen 443 default_server ssl; in either test.example.com or example.com, SSL works for both, yet I have to explicitly listen on 443 for static.example.com?

ssl_certificate /etc/ssl/certs/example.chained.crt;
ssl_certificate_key /etc/ssl/private/example.key;

server {
    listen 80;
    listen 443;
    server_name static.example.com;
    # ... serves content ...
}

server {
    listen 80;
    listen 443 default_server ssl;
    server_name example.com;
    # ... proxy pass to http://example.com:8080 (apache) ...
}

server {
    listen 80;
    # why don't I need `listen 443;` here?
    server_name test.example.com;
    # ... proxy pass to http://test.example.com:8080 (apache) ...
}
Why does listen `443 default_server ssl` work for multiple server names in nginx?
I came across this in production.log:

Started GET "/assets/bg-linen-light.png" for ***** at 2011-08-28 11:04:42 +0400
Served asset /bg-linen-light.png - 304 Not Modified (102ms)

The problem is that the browser had requested that file before x_sendfile_header was changed to what it should be, and it seems that Rails (and/or the browser) does absolutely nothing after the setting is changed. The issue was resolved by going to rails console and typing Rails.cache.clear. A hard refresh after that solves the problem!

Started GET "/assets/bg-linen-light.png" for ***** at 2011-08-28 11:06:06 +0400
Served asset /bg-linen-light.png - 200 OK (4ms)
I've set up a production environment running Rails 3.1.0rc6, Thin and Nginx. For some reason, having set config.action_dispatch.x_sendfile_header = "X-Accel-Redirect" in config/environments/production.rb, Rails seems to have completely ignored it; assets aren't being served, and the response headers for one file are as follows:

Server: nginx/1.0.5
Date: Sun, 28 Aug 2011 00:26:08 GMT
Content-Type: image/png
Content-Length: 0
Cache-Control: no-cache
Last-Modified: Sat, 27 Aug 2011 23:47:35 GMT
Etag: "af4810c52cb323d9ed061d1db5b4f296"
X-UA-Compatible: IE=Edge,chrome=1
X-Sendfile: /var/www/***/app/assets/images/bg-linen-light.png
X-Runtime: 0.004595
X-Content-Digest: da39a3ee5e6b4b0d3255bfef95601890afd80709
Age: 0
X-Rack-Cache: stale, valid, store
200 OK

So it seems Rails is still setting the X-Sendfile header. I've tried adding the sendfile_header line to config/application.rb, but I'm guessing it's being overridden or ignored as well.

My yml file for Thin:

---
chdir: /var/www/***
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
daemonize: true

Nginx vhost:

upstream *** {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name ***;
    root /var/www/***/public;
    index index.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
        }
        if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
        }
        if (!-f $request_filename) {
            proxy_pass http://***;
            break;
        }
    }
}

I've already tried /etc/init.d/thin stop and then starting it again a few times, to no avail.
Rails ignoring config.action_dispatch.x_sendfile_header? Using Thin + Nginx
I'm answering this question myself, as I found the solution after a lot of "oh my ...... ... i can't believe this does not work". What was missing in my case was

error_page 403 = @backend;

in the main server block, as a request for / did not return an HTTP 404 (file not found) but an HTTP 403 (no access). The issue was submitted back to the project as issue nr. 5: https://github.com/rsms/ec2-webapp/issues#issue/5
I'm having a little issue with Rasmus Andersson's awesome node.js EC2 template: http://rsms.me/2011/03/23/ec2-wep-app-template.html

OK, the issue is: I would like the response for the root URL http://www.mydomain.com/ to be delivered by the node.js server (which listens on port 3000). nginx should still deliver everything static from /public/ (so nginx should look in /public/ first; if the file isn't there, pass the request to node.js on port 3000), i.e.:

- http://www.mydomain.com/favicon.ico should respond with the file from /var/mydomain/public/favicon.ico
- http://www.mydomain.com/ should be passed to node.js on port 3000
- http://www.mydomain.com/contentpage.html should be passed on to node.js on port 3000

This is my /etc/nginx/sites-available/mydomain-http config file. I know that I will have to rewrite the location / part, but I don't know what I should put in there. Thx a lot.

## Access over HTTP (but not HTTPS)
server {
    listen 80;
    listen [::]:80 default ipv6only=on;

    access_log /var/log/nginx/access.log;

    location / {
        root /var/mydomain/public;
        index index.html;
        error_page 404 = @backend;
    }

    location @backend {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Client-IP $remote_addr;
    }
}
node.js server to return /, static files from /public/ via nginx?
Have a look at the mod_proxy documentation; I think the ProxyPassMatch directive is of interest.
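An untested sketch of what the translation could look like, assuming mod_proxy and mod_proxy_http are enabled; it covers the path rewrite, the Host header, and the timeout, but not the buffering behaviour:

# strip the /chat prefix and proxy the remainder to the backend
ProxyPreserveHost On
ProxyPassMatch "^/chat(/.+)$" "http://localhost:8000$1" timeout=310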
I need to convert the following nginx rule to an Apache configuration. Can anyone help me?

location /chat {
    rewrite /chat(/.+)$ $1 break;
    proxy_pass http://localhost:8000;
    proxy_set_header Host $host;
    proxy_set_header Cookie $http_cookie;
    proxy_buffering off;
    proxy_send_timeout 310;
}
What is the equivalent Apache configuration
No. The request doesn't get to the PHP engine until after the file has been uploaded.
Can you run some script before the uploading of a file starts in PHP? For example, I'm POSTing to upload.php, and in that file I want to check their $_SESSION first, before I start wasting bandwidth on them and the file starts uploading to my server. I'm using PHP 5.2.11 on nginx.
Is there a way to run some script before the upload starts in php?
If you use fixed-length numbers, for example only three digits, you can use string concatenation instead of adding numbers for this particular case:

server {
    server_name "~^pr-(\d{3}).review-apps.example.com$";

    location / {
        set $port 50$1;
        proxy_pass "http://localhost:$port/test";
    }
}

This will give you exactly 50125 for the pr-125.review-apps.example.com hostname.

For a variable number of port digits you can use named regex capture groups:

server {
    server_name "~^pr-(?<pr1>\d).review-apps.example.com$"
                "~^pr-(?<pr2>\d{2}).review-apps.example.com$"
                "~^pr-(?<pr3>\d{3}).review-apps.example.com$";

    location / {
        if ($pr1) { set $port 5000$pr1; }
        if ($pr2) { set $port 500$pr2; }
        if ($pr3) { set $port 50$pr3; }
        proxy_pass "http://localhost:$port/test";
    }
}
I'm trying to set up Nginx so that it adds two numbers together:

server {
    server_name "~^pr-(\d*).review-apps.example.com$";

    location / {
        set $port 50000+$1;
        proxy_pass "http://localhost:$port/test";
    }
}

But this doesn't work 🙄; the result is a string, for example "50000+125". How can I add two numbers in nginx.conf? Is it even possible?
NGINX add two variable/arguments numbers
This is a rather open-ended question, but I will try to answer it.

"In terms of security could someone tell me if I need nginx, and my options please?"

You will need Nginx (or Apache) in both scenarios. With one or multiple servers, using Express or not, Express is only an application framework used to build routes; you still need a service that responds to network requests, which is what Nginx and Apache do. You could avoid using Nginx, but then your users would have to make requests directly to the port where you started Express, for example: http://my-site.com:3000/welcome. In terms of security, you would do better to hide the port number and use an Nginx reverse proxy, so that your users only need to browse http://my-site.com/welcome.

"my developer tells me I should use another server to host the front-ends, such as cloudflare"

Cloudflare does not offer hosting services as of today. It offers a CDN to host a few files, but not a full site; you would need another Digital Ocean instance for that. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare's basic service is a DNS provider, where you simply point to your existing host."

"I have read that nginX can enable hosting multiple sites on the same server"

Yes, Nginx (and Apache too) can host multiple sites on the same server, with different names or the same name, as domains (www.my-backend.com, www.my-frontend.com) or subdomains (www.backend.my-site.com, www.my-site.com).

"... but unsure if this is good practice"

It is definitely not a bad practice; it is very common to separate applications onto different subdomains. A few valid reasons to keep them on separate servers would be:

- you want the back-end API to keep working if the front-end fails;
- you want to balance network traffic;
- you simply want to keep them separated (the microservices architecture).

It would be a bad practice to abuse these microservices (i.e. a microservice just to get the value of pi).
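A minimal sketch of such a reverse proxy, with the domain and the Express port as illustrative values:

server {
    listen 80;
    server_name my-site.com;

    location / {
        # forward every request to the Express app on localhost:3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}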
I have a Node.js web application with Express running on a Digital Ocean droplet. The Node.js application provides back-end APIs. I have two React front-ends, on different domains, that use the APIs. The front-ends can be hosted on the same server, but my developer tells me I should use another server to host them, such as Cloudflare. I have read that nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I am unsure whether this is good practice, as I then may not be able to use Cloudflare. In terms of security, could someone tell me if I need nginx, and what my options are, please? Thanks
Is nginx needed if Express is used
I am assuming you are not passing the ASGI application to daphne, because the configuration you pasted in the question is missing that part. You have to pass it explicitly. Assuming you have a conf package with an asgi.py module inside it containing the ASGI application instance, you have to use:

command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock conf.asgi:application

conf.asgi:application should be at the end.
I get a 502 error when I try to open the website. I used the instructions from the official website (link). I added a new file, lifeline.conf, at /etc/supervisor/conf.d/lifeline.conf:

[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000

# Directory where your site's project files are located
directory=/home/ubuntu/lifeline/lifeline-backend

# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-head$

# Number of processes to startup, roughly the number of CPUs you have
numprocs=4

# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d

# Automatically start and recover processes
autostart=true
autorestart=true

# Choose where you want your log to go
stdout_logfile=/home/ubuntu/asgi.log
redirect_stderr=true

Then I set up the nginx conf:

upstream channels-backend {
    server localhost:8000;
}

server {
    listen 80;
    server_name staging.mysite.com www.staging.mysite.com;
    client_max_body_size 30M;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

I checked the asgi log file and it contains this error:

daphne: error: the following arguments are required: application

I'm guessing there is a mistake in lifeline.conf.
Trouble deploying Django Channels using Daphne and Nginx