`client_max_body_size` default value is 1 MB. RTM.
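A minimal sketch of raising it, using the 2M value mentioned in the question (placing it in the `http` block is one common choice; it is also valid in `server` and `location` contexts):

```nginx
# /etc/nginx/nginx.conf
http {
    # allow request bodies up to 2 MB (default is 1m)
    client_max_body_size 2M;
}
```

Reload nginx afterwards (`sudo nginx -s reload`) for the change to take effect.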
When I'm trying to upload a file on my server I get this error:

    413 Request Entity Too Large

Which of course means my file is too large. So I've done a quick Google search and came across this: open `/etc/nginx/nginx.conf` and edit:

```nginx
# set client body size to 2M #
client_max_body_size 2M;
```

However, I don't have that code in my nginx.conf file? Did this recently change? I can't find anything about it. Thanks
nginx / 413 Request Entity Too Large
There is a little-known fact that you don't have to start a location with `/` or `@`. So this would work:

```nginx
location / {
    auth_request .auth;
    proxy_pass http://protected.resource;
}

location .auth {
    internal;
    proxy_pass http://external.url/auth;
    # you must use a path part here  ^
    # otherwise it would be an invalid request to external.url
}
```
I'm trying to use nginx's `ngx_http_auth_request_module` in such a way:

```nginx
server {
    location / {
        auth_request http://external.url;
        proxy_pass http://protected.resource;
    }
}
```

It doesn't work; the error is:

    2017/02/21 02:45:36 [error] 17917#0: *17 open() "/usr/local/htmlhttp://external.url" failed (2: No such file or directory), ...

Or in this way with a named location:

```nginx
server {
    location / {
        auth_request @auth;
        proxy_pass http://protected.resource;
    }

    location @auth {
        proxy_pass http://external.url;
    }
}
```

In this case the error is almost the same:

    2017/02/22 03:13:25 [error] 25476#0: *34 open() "/usr/local/html@auth" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "@auth", host: "127.0.0.1"

I know there is a way like this:

```nginx
server {
    location / {
        auth_request /_auth_check;
        proxy_pass http://protected.resource;
    }

    location /_auth_check {
        internal;
        proxy_pass http://external.url;
    }
}
```

But in this case http://protected.resource cannot use the `/_auth_check` path. Is there a way to use an external URI as a parameter for the `auth_request` directive without overlapping the http://protected.resource routing? If not, why? It looks a little bit strange to look for the auth_request's URI through static files (/usr/local/html).
How can I use an external URI with nginx's auth_request module
Your SSL connection is terminated at nginx, which communicates with Kestrel over plain http. Kestrel redirects the user to https, which is terminated again at nginx, passed to Kestrel as http, and so on, again and again. That's an endless loop.

Configure nginx to require https (redirect http to https) and don't touch Kestrel. Your site itself will always work without SSL, and nginx will deliver it to users (and back) using SSL.
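A minimal sketch of the nginx side of that setup, reusing the certificate paths and Kestrel port from the question (server name is an illustrative assumption):

```nginx
# redirect all plain-http traffic to https
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# terminate SSL here; talk to Kestrel over plain http
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/testCert.crt;
    ssl_certificate_key /etc/ssl/certs/testCert.key;

    location / {
        proxy_pass http://localhost:5000;
    }
}
```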
I spun up a new VM running Ubuntu 16.04 and ran the command `dotnet new -t web`, which creates a new basic MVC web template. Next I ran the app and the connection was successful. After that I modified nginx.conf to use SSL:

```nginx
server {
    listen 443 http2 ssl default;
    ssl_certificate /etc/ssl/certs/testCert.crt;
    ssl_certificate_key /etc/ssl/certs/testCert.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://localhost:5000;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffers 32 4k;
    }
}
```

Ran the app again and it was running under HTTPS without errors. Then I configured MVC services in Startup.cs to enforce HTTPS:

```csharp
services.AddMvc(config =>
{
    config.Filters.Add(new RequireHttpsAttribute());
});
```

Finally I tried to connect again but get ERR_TOO_MANY_REDIRECTS. However, if I only run on Kestrel and configure some options it will work just fine:

```csharp
services.Configure<KestrelServerOptions>(options =>
{
    options.AddServerHeader = false;
    options.UseHttps("devcert.pfx", "password");
});
```

It seems like it must be nginx; however, it could be a problem with MVC/ASP.NET Core. How can I further diagnose this issue or fix it?
How can I enforce SSL using ASP.NET Core and nginx
The problem was the extra slash in my URL. I changed the URL from `https://somesite.com/mywsdl/?wsdl` to `https://somesite.com/mywsdl?wsdl` and the problem disappeared.
I made a wsdl using sun-jaxws. I created a web service client in NetBeans and successfully called the wsdl web service. Then I configured my nginx server to access the web service over https. When I call the service over https I get the following error:

    com.sun.xml.internal.ws.client.ClientTransportException: The server sent HTTP status code 200: OK

My wsdl is available at the address https://somesite.com/mywsdl/?wsdl. Inside the wsdl I see the service location. I don't know whether the problem is in my nginx configuration or in my jaxws.
Connecting to webservice results in com.sun.xml.internal.ws.client.ClientTransportException: The server sent HTTP status code 200: OK
No, it does not work for `proxy_pass`. From http://nginx.org/r/etag:

> Enables or disables automatic generation of the "ETag" response header field for static resources.

What's more, it's turned on by default.
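If the .ts/.raw files can be served from a local path instead of being proxied, nginx will generate the ETag itself; a minimal sketch, assuming such a path exists (the root path here is an illustrative assumption):

```nginx
# nginx generates ETag automatically for static files it serves itself
location ~* \.(ts|raw)$ {
    root /var/www/media;  # hypothetical local path to the files
    etag on;              # on by default; shown for clarity
}
```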
I'm using Nginx 1.9.2 and the following is my configuration:

```nginx
upstream httpserver0 {
    server 127.0.0.1:35011 max_fails=3 fail_timeout=30s; #H_server0
}

server {
    listen 443 ssl;
    listen 80;
    server_name 11.22.33.44; #my_server_name
    etag on;

    location ~* \.(ts|raw)$ {
        set $server_id "0";
        if ( $uri ~ ^/(.*cfs+)/(.*)$ ) {
            set $server_id $1;
        }
        if ( $server_id = "4cfs" ) {
            proxy_pass http://httpserver0$request_uri;
        }
    }
}
```

I'm using the upstream module and proxy_pass for a reverse proxy, and I enabled the etag function with `etag on` within the server block. However, when I check the headers of the HTTP response, I don't find the etag field at all. Does anyone have ideas about this? Thanks!
In Nginx, "etag" directive doesn't work for proxy_pass?
Please try a configuration like the one below:

```nginx
server {
    listen 80;
    server_name status.example.com;

    location / {
        proxy_pass http://example.com:1234;
    }
}
```

For reference, see NGINX Reverse Proxy and Module ngx_http_proxy_module.
I have a domain that I can browse to via example.com:1234. Now I do not want to always have to type the port at the very end, but rather have nginx redirect me to the static URL when browsing a subdomain, e.g. status.example.com. I have tried writing a redirect, but it didn't work at all.

```nginx
server {
    listen 80;
    server_name status.example.com;
    return 301 $scheme://www.example.com:1234;
}
```

Where's my error? Is it the server block? Am I missing something basic here?
Nginx Subdomain redirect to static URL
You shouldn't be trying to hit the IP address of the container; you should be using the IP address of the host machine. What you are missing is the mapping of the port of the host machine to the port of the container running the nginx server.

Assuming that you want to use port 8888 on the host machine, you need a parameter such as this to map the ports:

    docker run ... -p 8888:8888 ...

Then you should be able to access your server at http://<host-ip>:8888.

EDIT: There is another gotcha if you are running on a Mac. To use Docker on a Mac it's common to use boot2docker, but boot2docker adds in another layer. You need to determine the IP address of the boot2docker VM and use that instead of localhost to access nginx.

    $ boot2docker ip
    The VM's Host only interface IP address is: <boot2docker-ip>

    $ wget http://<boot2docker-ip>:8888
    ...
    Connecting to <boot2docker-ip>:8888... connected.
    HTTP request sent, awaiting response... 200 OK

Reference: https://viget.com/extend/how-to-use-docker-on-os-x-the-missing-guide

EDIT: ... or with docker-machine the equivalent command would be `docker-machine ip <machine-name>`, where `<machine-name>` is likely to be "default".
We are trying to use docker to run nginx, but for some reason I'm unable to access the nginx web server running inside the docker container.

We have booted a Docker container using the following Dockerfile: https://github.com/dwyl/learn-docker/blob/53cca71042482ca70e03033c66d969b475c61ac2/Dockerfile (it's a basic hello world using nginx running on port 8888). To run the container we used:

    docker run -it ubuntu bash

We determined the container's IP address using the docker inspect command:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' a9404c168b21

which is 172.17.0.11. When I try to visit the container's IP address and the nginx port in a browser (http://172.17.0.11:8888/) we get ERR_CONNECTION_TIMED_OUT, or using curl:

    curl 172.17.0.11:8888
    curl: (7) Failed to connect to 172.17.0.11 port 8888: Connection refused

To attempt to solve this we googled extensively, but suspect we might be asking the "wrong" questions...
How to access web page served by nginx web server running in docker container
This smart answer was my solution:

I assume that you're running Linux, and you're using gEdit to edit your files. In /etc/nginx/sites-enabled, it may have left a temp file, e.g. default~ (watch the ~). Delete this file, and it will solve your problem.
I am developing a django-powered application using nginx and gunicorn. Everything was fine until today, when I tried to restart nginx, which failed. So I tested it with the `sudo nginx -t` command and I got this error:

    nginx: [emerg] host not found in "localhost:8004" of the "listen" directive in /etc/nginx/sites-enabled/808:2
    nginx: configuration file /etc/nginx/nginx.conf test failed

This is the 808 config file's contents:

```nginx
server {
    listen localhost:8004;
    error_log /var/www/html/burger808/error.log;

    location / {
        proxy_pass http://127.0.0.1:8005;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    location /static/ {
        autoindex on;
        alias /var/www/html/burger808/burger808/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/html/burger808/burger808/media/;
    }
}
```

I thought it might be some conflict over the 8004 port, so I checked the other config files and there were no conflicts. I wonder what could've caused this error?
nginx: [emerg] host not found in "localhost:8004" of the "listen" directive
Here is how to change the port. Edit /etc/opscode/chef-server.rb:

```ruby
nginx['non_ssl_port'] = 10080
nginx['ssl_port'] = 10443
nginx['url'] = "https://<fqdn>:10443/"
```

and adjust your local ~/.chef/knife.rb to read:

```ruby
chef_server_url 'https://<fqdn>:10443/organizations/<org>'
```

But currently there is a bug in Chef that prevents the embedded nginx from running on a non-standard port: https://github.com/chef/chef-server/issues/50
I tried to install apache on a machine where chef-server was installed. Apache could not start up due to the occupation of port 80 by chef's nginx. If I want to let apache use port 80 as default, is it possible to change chef nginx's default http port to another one?

I found a solution on the Internet to set virtual hosts on both apache and nginx, but they need a different FQDN as server name. My machine uses an IP instead of an FQDN, so I need to change the default HTTP port for chef nginx.

I tried to add /etc/chef-server/chef-server.rb with the following content:

    nginx['non_ssl_port'] = 9898

Then I ran `chef-server-ctl reconfigure`. It didn't work. Can anyone help with this? Thanks.

Updated: My information was wrong regarding changing the chef server settings. The settings should be added into /etc/opscode/chef-server.rb for Chef 12. After `chef-server-ctl reconfigure`, nginx's HTTP port is changed to 9898. Thanks.
How to change chef nginx default http port 80?
OK, the issue was that I was calling the external file redirects.conf; every file ending in .conf in that directory is treated as a site configuration file in its own right. I changed it to sitename.redirects and now it works fine.
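A minimal sketch of the resulting layout, assuming the renamed file stays in the same conf.d directory:

```nginx
server {
    listen 80;
    # sitename.redirects contains only rewrite lines; since it does not end
    # in .conf, nginx's conf.d/*.conf glob will not load it on its own
    include /etc/nginx/conf.d/sitename.redirects;
}
```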
I need to set up about 5-6k redirects on a domain (for a site migration), and I'm new to nginx. I have some test redirects working in the main .conf file for the domain. But I don't want to have 5k+ rewrites in the main .conf file, so I have been told that I can include an external file in the .conf to keep it clean. My main .conf looks like this:

```nginx
server {
    listen ..... etc etc
    rewrite ^oldurl newurl permanent;
    rewrite ^oldurl newurl permanent;
    include /etc/nginx/conf.d/redirects.conf;
    location .... etc etc
}
```

Then in redirects.conf I just have:

    rewrite ^oldurl newurl permanent;

But when I try to restart nginx I get the error:

    "rewrite" directive is not allowed here in /etc/nginx/conf.d/redirects.conf:1

Thanks
How to include redirects on external file?
Try running `lsof` or `netstat -tlnp | grep 80` to determine which application is using port 80. Once you have that figured out, you can find its PID with something like `ps -elf` and kill that process.
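A minimal sketch of that sequence (the PID shown is a hypothetical example):

```sh
# find what is listening on port 80
sudo netstat -tlnp | grep :80
# or: sudo lsof -i :80

# kill the offending process by the PID reported above
sudo kill 1234   # hypothetical PID
```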
Trying to start passenger standalone with `passenger start -p 80`, and it's saying it's already running. But when I do a `passenger stop -p 80` I get:

    According to the PID file '/var/crm/tmp/pids/passenger.80.pid',
    Phusion Passenger Standalone doesn't seem to be running.

But it clearly is not running, because when I try to stop it, it says it's not running, and I can't access it from the web.

    [root@technetium crm]# passenger start -p 80
    *** ERROR ***
    The address 0.0.0.0:80 is already in use by another process, perhaps another
    Phusion Passenger Standalone instance. If you want to run this Phusion
    Passenger Standalone instance on another port, use the -p option, like this:

      passenger start -p 81
Passenger process already running? But it's not
If you would like to have all (sub)domains on a server use the same favicon, you can enter this in the server configuration:

```nginx
location ~ /(favicon.ico|apple-touch-icon.png)$ {
    root /var/www/default;
}
```

And just place the icons in the above folder. Hope that helps, cheers!
I am looking to set the icons for a domain on an nginx server I have configured. There are many different URLs on this domain which will need to display the same favicon / icon no matter what the URL is. I am looking for some advice on implementation.
Global favicon.ico and iOS icons
This is not related to CarrierWave. Nginx is not able to write the temporary uploaded file to the folder /usr/local/Cellar/nginx/1.0.4/client_body_temp/, which means your Nginx process doesn't have rights on it. Make sure the user that's running nginx can read/write files under this specific path. If you have not changed the configuration, Nginx usually starts its workers as user nobody, so you might give it read/write access to this folder.

Run the following command:

    ps aux | grep "nginx: worker process"

and see which user is running nginx.
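A minimal sketch of granting that access, assuming the workers do run as nobody:

```sh
# hand the client-body temp dir to the nginx worker user
sudo chown -R nobody /usr/local/Cellar/nginx/1.0.4/client_body_temp/
sudo chmod -R u+rw   /usr/local/Cellar/nginx/1.0.4/client_body_temp/
```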
I've installed the carrierwave gem with rmagick. I can get it working fine if loaded through WEBrick, but I'm getting a 500 Internal Server Error when trying to use nginx instead. The nginx error.log says:

    2011/08/14 10:06:40 [crit] 760#0: *4247 open() "/usr/local/Cellar/nginx/1.0.4/client_body_temp/0000000033" failed (13: Permission denied), client: 127.0.0.1, server: jewellery.dev, request: "POST /items/28?locale=en HTTP/1.1", host: "jewellery.dev:8080", referrer: "http://jewellery.dev:8080/items/28/edit?locale=en"

Also, I've created a file in the initializers folder containing:

```ruby
CarrierWave.configure do |config|
  config.permissions = 0777
end
```

Am I missing something?
Rails 3 + carrierwave + nginx = permission denied
Nginx is an HTTP server; it has nothing to do with Hadoop.
Is it possible to run Hadoop on Nginx? If so, is there any reference?
Can Hadoop run on Nginx?
TL;DR: if you're on localhost and testing, it doesn't matter what URL you put. Also note that this setting can easily be changed after installation from the admin area.

I was deploying wiki.js in our company, and first I was setting it up on a throwaway domain before switching to the target domain, and I was confused by this as well. I put the target URL in during installation, and at first it seemed like this setting is unused; I was able to use the wiki normally. Later I found out that it is in fact used in a few places; for example, when a user requests a password reset, the reset link will be generated against this URL.

Note that while using a reverse proxy allows you to easily change on which domain name wiki.js is served, if the wiki is public for users, the system WILL have to know this public URL, for reasons like the password reset mentioned above.
After configuring my database and running my Wiki.js instance using nodejs, I was prompted to "install" Wiki.js on localhost:3000. However, there is an input bar asking for the public URL, wiki.example.com. I am trying wiki.js out on my own computer, which has nothing to do with public URLs. In the future, I plan to use nginx to reverse proxy received requests to two different ports on my server, which also does not require exposing a public URL to the service (proxied by nginx already).

Therefore I am curious: Why does wiki.js need a public URL when installing? What do I need to configure when testing Wiki.js on my computer? What do I need to configure in the nginx reverse proxy, and what should I fill in for the public URL input bar?
Why does wiki.js need public URL when installing?
I figured out the answer by following this link: Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk

The nginx configuration settings should be placed in a folder named .platform. The folder structure is .platform/nginx/conf.d/proxy.conf. Inside proxy.conf put:

    client_body_buffer_size 50M;

(the size according to your requirement). Inside the .platform folder make another file named 00_myconf.config with the following contents:

```yaml
container_commands:
  01_reload_nginx:
    command: "service nginx reload"
```

AWS documentation regarding configuring nginx: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html (read the reverse proxy configuration section).
I've deployed an ML model on AWS. It's an image classifier. When I provide the following images to the ML model via a form in Flask, it works in certain cases but doesn't work in other cases.

The link to an image which works: https://drive.google.com/file/d/1hbrEa2gNLdqGPJxp5jVxWcXl1wunp5Mc/view?usp=sharing

The link to an image which gives an error: https://drive.google.com/file/d/1znWTRnTMPft_r_jwpJ0JQuMnnazsUXs-/view?usp=sharing

Both of the above images look alike. The first image, which is around 150kb in size, works when I select the file and upload it for analysis. The image which is around 10kb, however, doesn't when I select and upload it for analysis from a PC. When I try to do the same with my mobile phone browser, both show an error. The error shown in the logs is:

    [warn]: a client request body is buffered to a temporary file
Nginx: Client request body is buffered to a temporary file
You can build the React app and serve its static files with Nginx. You can use docker-compose to manage Nginx and Django. What is more, you can build the React static files during docker build.

Here is my article: Docker-Compose for Django and React with Nginx reverse-proxy and Let's Encrypt certificate.

Below is my Nginx configuration:

```nginx
server {
    listen 80;
    server_name _;
    server_tokens off;
    client_max_body_size 20M;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        try_files $uri @proxy_api;
    }

    location /admin {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://backend:8000;
    }

    location /django_static/ {
        autoindex on;
        alias /app/backend/server/django_static/;
    }
}
```

Nginx dockerfile:

```dockerfile
# The first stage
# Build React static files
FROM node:13.12.0-alpine as build

WORKDIR /app/frontend
COPY ./frontend/package.json ./
COPY ./frontend/package-lock.json ./
RUN npm ci --silent
COPY ./frontend/ ./
RUN npm run build

# The second stage
# Copy React static files and start nginx
FROM nginx:stable-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
```
I have a digitalocean server, and I have already deployed my Django backend server using gunicorn and nginx. How do I deploy the React app on the same server?
Deploy both django and react on cloud using nginx
The problem was the missing `-`:

```yaml
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"

nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    - -nginx.scrape-uri
    - http://127.0.0.1:8080/stub_status
```
I have a docker-compose file, and both nginx and nginx-prometheus-exporter are containers. I put the relevant parts here:

```yaml
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"

nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command: -nginx.scrape-uri -http://127.0.0.1:8080/stub_status
```

I tried http://nginx:8080/stub_status, nginx:8080/stub_status and 127.0.0.1:8080/stub_status for -nginx.scrape-uri, but none of them worked and I got:

    Could not create Nginx Client: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": dial tcp 127.0.0.1:8080: connect: connection refused

Also, localhost:8080/stub_status is available in my VM using curl.
nginx-prometheus-exporter container cannot connect to nginx
I faced a similar issue before. It happens when the client makes a request and then closes it (either because the server took too long to respond or the client was disrupted) but uwsgi is still processing that request.

From the tags I notice that you are using an nginx + uwsgi configuration; there are multiple ways to solve this (a combined sketch follows below):

- Find your most time-consuming request and match the timeouts between nginx and uwsgi (harakiri). Note that this doesn't help when the client itself disrupts the connection.
- In your nginx config, set `uwsgi_ignore_client_abort on` for uwsgi routes.
- Or you can just disable logging of write errors: `ignore-write-errors = true`.
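A minimal sketch combining those knobs; the socket path and the 60-second values are illustrative assumptions, not from the answer:

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/app.sock;   # hypothetical socket path
    uwsgi_read_timeout 60s;          # keep >= the slowest expected request
    uwsgi_ignore_client_abort on;    # don't propagate client disconnects
}
```

and on the uwsgi side:

```ini
[uwsgi]
harakiri = 60               ; kill workers stuck longer than nginx will wait
ignore-write-errors = true  ; stop logging OSError: write error
```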
Here is the error log that I received when I put my application into a long run:

    Oct 22 11:41:18 uwsgi[4613]: OSError: write error
    Oct 22 11:41:48 uwsgi[4613]: Tue Oct 22 11:41:48 2019 - uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 341] during GET /api/events/system-alarms/
    Nov 19 19:11:01 uwsgi[30627]: OSError: write error
    Nov 19 19:11:02 uwsgi[30627]: Tue Nov 19 19:11:02 2019 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /api/statistics/connected-clients/?type=auto&required_fields=0,11

Also, I need to know the reason for the OS write error and the broken pipe in detail.
uwsgi: OSError: write error during GET request
I came up with the following configuration; it is working for all of my test routes/requirements so far. The regex is almost the same as the one posted by @Gilgames. I based mine on the official docs rewrite example: https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target. Apart from that I took a quick course at https://www.regular-expressions.info/ 😁

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ing
  namespace: some-ns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1/$2-$3/$5
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-path: /
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = 'example.com') {
        rewrite ^ https://www.example.com$request_uri permanent;
      }
spec:
  tls:
    - hosts:
        - www.example.com
        - example.com
      secretName: tls-secret-test
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /([a-z]{2})/([a-z]{2})-([a-z]{2})/coolapp(/|$)(.*)
            backend:
              serviceName: coolapp-svc
              servicePort: 80
    - host: example.com
      http:
        paths:
          - path: /([a-z]{2})/([a-z]{2})-([a-z]{2})/coolapp(/|$)(.*)
            backend:
              serviceName: coolapp-svc
              servicePort: 80
```
How can I configure nginx.ingress.kubernetes.io/rewrite-target and spec.rules.http.paths.path to satisfy the following URI patterns?

    /aa/bb-aa/coolapp
    /aa/bb-aa/coolapp/cc

Legend:

- a = any letter between a-z, lowercase; exactly 2 letters, no more, no less.
- b = any letter between a-z, lowercase; exactly 2 letters, no more, no less.
- c = any valid URI character, lowercase; of variable length (think slug).

Example URIs that should match the above pattern:

    /us/en-us/coolapp
    /us/en-us/coolapp/faq
    /us/en-us/coolapp/privacy-policy

Attention: Starting in version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

Note: Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

References:

- https://kubernetes.github.io/ingress-nginx/examples/rewrite/
- https://github.com/kubernetes/ingress-nginx/pull/3174
How to configure nginx.ingress.kubernetes.io/rewrite-target and spec.rules.http.paths.path to satisfy the following URI patterns
Looking through the nginx source, it appears that the only mechanism that would modify the permissions of the temporary file is the request_body_file_group_access property of the request, which is consulted in ngx_http_write_request_body():

```c
if (r->request_body_file_group_access) {
    tf->access = 0660;
}
```

But even that limits you to 0660, and it seems that it is not a user-settable property, only being utilized by the ngx_http_dav module. The permissions are ultimately set in ngx_open_tempfile(), where they default to 0600:

```c
fd = open((const char *) name, O_CREAT|O_EXCL|O_RDWR,
          access ? access : 0600);
```

So it seems that there is currently no configuration-based solution. If you're willing/able to build nginx from source, one possibility is to apply a simple patch to set the permissions to whatever you want in ngx_http_write_request_body():

```diff
+    tf->access = 0644;
+
     if (r->request_body_file_group_access) {
         tf->access = 0660;
     }

     rb->temp_file = tf;
```

I tested this and obtained the following, the first file having been uploaded without the modification, and the second file with it:

    $ ls -al /tmp/upload/
    total 984
    drwxr-xr-x  2 nobody root     12288 Feb 18 13:42 .
    drwxrwxrwt 16 root   root     12288 Feb 18 14:24 ..
    -rw-------  1 nobody nogroup 490667 Feb 18 13:40 0000000001
    -rw-r--r--  1 nobody nogroup 490667 Feb 18 13:42 0063184684
We use the client_body_in_file_only option with nginx, to allow file upload via Ajax. The config looks like this:

```nginx
location ~ ^(\/path1|\/path2)$ {
    limit_except POST { deny all; }
    client_body_temp_path /path/to/app/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 128K;
    client_max_body_size 1000M;

    # this option is a quick hack to make sure files get saved on
    # (i.e. this type of request goes to) a specific server
    proxy_pass http://admin;
    proxy_pass_request_headers on;
    proxy_set_header X-FILE $request_body_file;
    proxy_set_body off;
    proxy_redirect off;
    # might not need?
    proxy_read_timeout 3m;
}
```

This works, but the web server process (Mongrel) that handles the request has to sudo the temp file that comes through in headers['X-FILE'] before it can do anything with it. This is because the temp file comes through with 600 permissions.

I'm not happy with this approach, which requires us to edit the /etc/sudoers file to allow the web server user to do `sudo chmod` without a password. It feels very insecure. Is there a way, with the nginx config, to change the permissions on the temp file that is created, e.g. to 775?

EDIT: I just tried changing the value of the umask option in the nginx init config, then restarting nginx, but it didn't help. It had been 0022; I changed it to 0002. In both cases the file comes through with 600 permissions.

EDIT2: I also tried adding this line under the proxy_redirect line in the nginx config:

    proxy_store_access user:rw group:rw all:r;

But it didn't make any difference; the file still just has user:rw.
Linux - client_body_in_file_only - how to set file permissions for the temp file?
By default, nginx is configured to allow a client maximum body size of 1MB. The files you are uploading (~8MB) are larger than 1MB, which is why the 413 (Request Entity Too Large) error is being returned.

To fix this issue, simply edit nginx.conf and add a client_max_body_size configuration like so:

```nginx
######################
# HTTP server
######################
server {
    ...
    listen 80;
    server_name xxxx.com;
    client_max_body_size 20M;
    ...
}
```

If you have HTTPS configured as well, be sure to add client_max_body_size there too:

```nginx
######################
# HTTPS server
######################
server {
    ...
    listen 443 default_server ssl;
    server_name xxxx.com;
    client_max_body_size 20M;
    ...
}
```

Reload your server and you should be good!

    [server]$ sudo service nginx reload

More info on client_max_body_size here: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

> Syntax: client_max_body_size size;
> Default: client_max_body_size 1m;
> Context: http, server, location
>
> Sets the maximum allowed size of the client request body, specified in the "Content-Length" request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of client request body size.
Whenever I upload a small file, such as an image, the data is saved successfully. However, when I upload an audio file I get this error: 413 Request Entity Too Large. The file sizes are around 8MB. The confusing part is that uploading these files during development worked easily, but now that the website is live, it doesn't work. I read that you can change the limit of the upload size but can't seem to figure it out. Another thing I read is that you should have files uploaded to a server, and you can use Nginx. I think I configured it; I typed the command `scp -r * root@[my ip address]:/usr/share/nginx/html` and the files from my media folder were uploaded there. Now with that, the files are not automatically put there; instead they are sent to the project's media folder. Shouldn't it automatically upload to the Nginx server?
413 Request Entity Too Large uploading files with Django Admin and Nginx Configuration
I had the same problem, so I changed the nginx config file /etc/nginx/sites-available/your-site.

Change:

    fastcgi_pass unix:/run/php/php7.1-fpm.sock;

to:

    fastcgi_pass unix:/run/php/php7.2-fpm.sock;

This worked for me.
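After editing, a check-and-reload along these lines is typically needed (service names assumed to follow Ubuntu's packaging):

```sh
sudo nginx -t                       # check the config parses
sudo systemctl restart php7.2-fpm   # make sure the new FPM pool is up
sudo systemctl reload nginx
```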
So I installed LEMP (nginx, mysql, php, ...) by following the Digital Ocean guide. But Ubuntu 16.04 only comes with PHP 7.0 by default, and I need greater than 7.1 to run Laravel. I am confused about why, every time I replace PHP 7.0 with PHP 7.2-fpm from ondrej:

    sudo add-apt-repository ppa:ondrej/php

the default php-fpm works and loads the info.php page, but when I install PHP 7.2-fpm from ondrej it shows 502 Bad Gateway. Any help is appreciated so I can start Laravel! :D
502 Bad Gateway when installing PHP7.2 on nginx
The http://nginx.org/r/ssl_client_certificate directive is used to specify which certificates you trust for client-based authentication. Note that the whole list is basically sent every time a connection is attempted (use ssl_trusted_certificate as per the docs if that's not desired).

As per the above, note that ssl_verify_depth basically controls how easy it would be to get into your system: if you set it at a high-enough value, and someone is able to obtain a certificate from one of the CAs that you trust, or through one of their intermediaries which they trust to generate their own certificates, then they'd be able to authenticate with your nginx, whether or not that's your desire.

As such, it'd normally be the practice that all certificates used for client-based authentication are generated by a privately sanctioned CA, hence there normally shouldn't be much length to the chain. If you want to equalise the depth number between the two CAs, to get the best protection from ssl_verify_depth, then it's conceivable to create an extra CA to add to the depth, and then add that CA to the trusted list instead of what's now an actual intermediary. (Note that it gets complicated once you involve a few intermediaries; the browser would need to know of their existence, which is usually cached, and can result in a number of ghost issues when non-cached.)

Also, note that you don't actually have to have a single CA in the specified file; it can include multiple unrelated "root" CAs. So, if you want to add multiple independent CAs, you don't actually have to bother creating another CA to certify them; you can just include such independent CAs as-is.
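A minimal sketch of that bundled-CA setup; file paths are illustrative, and the depth of 3 mirrors the value the question already found for the longer ClientB chain:

```nginx
server {
    listen 443 ssl;

    # one PEM file containing both RootA and RootB, concatenated
    ssl_client_certificate /etc/nginx/ssl/client-roots.pem;
    ssl_verify_client on;
    # deep enough for ClientB > IntermediateB1 > IntermediateB2 > RootB
    ssl_verify_depth 3;
}
```

The bundle itself can be produced with a plain concatenation:

```sh
cat rootA.crt rootB.crt > /etc/nginx/ssl/client-roots.pem
```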
I am trying to set up NGINX to perform client authentication against multiple clients. The problem I have is that those clients will have different certificates, basically different root CAs:

    [clientA.crt] ClientA > IntermediateA > RootA
    [clientB.crt] ClientB > IntermediateB1 > IntermediateB2 > RootB

I looked at the NGINX documentation and I noticed the ssl_client_certificate directive. However, that property alone seems not to work by itself; for example, if I configure it to only work for clientA for now:

```nginx
ssl_client_certificate /etc/nginx/ssl/clientA.crt;
ssl_verify_client on;
```

then I receive a 400 error code back. By looking at other questions, I figured out that I also have to use `ssl_verify_depth 3`. Therefore, if I want to concatenate both clientA and clientB into a bundle PEM to allow both clients, will I need to use a high value? What's the purpose of this directive, and what are the implications of setting it to a high number with a bundled PEM?
nginx client authentication with multiple client certificates
try_files checks for the presence of a file on the local file system and cannot respond to the response code from a proxy. Presumably, the proxy responds with a 404 if the remote page does not exist, which can be intercepted by an error_page statement. For example:

```nginx
location / {
    proxy_pass http://extranet;
    proxy_set_header ...
    proxy_set_header ...
    proxy_set_header ...
    proxy_intercept_errors on;
    error_page 404 = @fallback;
}

location @fallback {
    root /var/www/extranet/public;
    try_files $uri =404;
}
```

See this document for more.
I am setting up a Rails app with nginx in front. What I want is to first check if the URL makes sense for Rails, then serve content from the public folder. I can't achieve this:

```nginx
upstream extranet {
    server localhost:3000;
}

server {
    location / {
        try_files @extranet $uri;
        root /var/www/extranet/public;
    }

    location @extranet {
        proxy_pass http://extranet;
        proxy_read_timeout 90;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        client_max_body_size 100m;
    }
}
```

I get:

    *1 rewrite or internal redirection cycle while internally redirecting to "/"

It seems like `try_files $uri @extranet;` works, but in my case it feels safer to check the Rails app first because the public folder might change.
Nginx proxy_pass then try_file
This should work; a similar command works at my end:

    kubectl -n policy-demo delete networkpolicy access-nginx
I've followed a Kubernetes tutorial similar to https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/, which created some basic network policies as follows:

    root@server:~# kubectl get netpol -n policy-demo
    NAME           POD-SELECTOR   AGE
    access-nginx   run=nginx      49m
    default-deny   <none>         50m

I saw that I can delete the entire namespace (pods included) using a command like `kubectl delete ns policy-demo`, but I can't see what command I need to use if I just want to delete a single policy (or even edit it). How would I use kubectl to delete just the "access-nginx" policy above?
How to delete networkpolicies using kubectl?
You can set up an S3 bucket that redirects the naked domain to www. It is explained here: http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html

You can redirect http to https by using CloudFront. You can read more information here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html

You can set up the webserver on your EC2 instances to redirect as well, but that requires that you set up your SSL certificate too. It is easier to let AWS handle that with CloudFront. You are probably using Apache, so it would be something like this:

```apache
NameVirtualHost *:80
<VirtualHost *:80>
   ServerName mysite.example.com
   DocumentRoot /usr/local/apache2/htdocs
   Redirect permanent / https://mysite.example.com/
</VirtualHost>

<VirtualHost *:443>
   ServerName mysite.example.com
   DocumentRoot /app/directory/
   SSLEngine On
   # etc...
</VirtualHost>
```

Then set up your SSL certificate with Let's Encrypt in your deploy script.
I'm using Elastic Beanstalk and I've followed the instructions to deploy my app using the Express web server as follows: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html

This setup uses nginx and Route 53. Everything works well, but now I'm trying to redirect from non-www/non-https URLs to "https://www.domain.com" (always https with www). I've seen different solutions out there that either aren't working or seem hacky. What's the proper way to do this from the AWS console? Thanks a lot!
Redirect non-www to www with aws elastic beanstalk
You can do this if you know which location block or server is handling the request for stats. Just add the directive `access_log off;` to the server or location block in which you want this disabled.

--Edit--

Add this location to your server block:

```nginx
location /stats/ {
    try_files $uri $uri/ =404;
    access_log off;
}
```
I'm using an access and error log in Nginx. I have an extremely large number of requests for stats which take up too much storage space in access.log and are not required. Is it possible to exclude a specific file or folder from logging to access.log? I would like to exclude all requests to /stats/.

```nginx
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name ***.co.uk www.***.co.uk;

    root /var/www/***/html;
    index index.html index.php;

    access_log /var/www/***/log/access.log;
    error_log /var/www/***/log/error.log;
}
```
Nginx - Exclude specific file or folder from logging to access.log
Please separate http and https traffic; your current config is messing things up a bit. The following code rewrites all requests from http://example.com to https://example.com using a permanent redirect:

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
```

The second block will handle the requests coming in on port 443 (the example here will give you an A rating on ssllabs.com):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /path_to/ssl.crt;
    ssl_certificate_key /path_to/ssl.key;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    # ssl_session_tickets off;

    # openssl dhparam -out dhparam.pem 2048
    # ssl_dhparam /etc/nginx/SSL/dhparams.pem;

    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGC$
    ssl_prefer_server_ciphers on;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";

    root /srv/wwwroot/;
    index index.html index.htm index.php;
    client_max_body_size 20M;

    location / {
        # your special config if needed
    }
}
```

And finally, with a third block in our config we rewrite https://www.example.com back to https://example.com:

```nginx
server {
    listen 443;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}
```

Hope this helps.
I have the following nginx configuration:

```nginx
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
```

It redirects http://example.com to https://www.example.com, but does not redirect https://example.com to https://www.example.com. How can I redirect https://example.com to https://www.example.com?
Nginx redirect to www domain not working
You could try to register some middleware that can modify requests based on the headers forwarded by nginx. You probably also want to set the remote IP address to the value of the X-Forwarded-For header. Something like this should work (untested):

```csharp
public class AppHarborMiddleware : OwinMiddleware
{
    public AppHarborMiddleware(OwinMiddleware next)
        : base(next)
    {
    }

    public override Task Invoke(IOwinContext context)
    {
        if (string.Equals(context.Request.Headers["X-Forwarded-Proto"], "https", StringComparison.InvariantCultureIgnoreCase))
        {
            context.Request.Scheme = "https";
        }

        var forwardedForHeader = context.Request.Headers["X-Forwarded-For"];
        if (!string.IsNullOrEmpty(forwardedForHeader))
        {
            context.Request.RemoteIpAddress = forwardedForHeader;
        }

        return Next.Invoke(context);
    }
}
```

Make sure to add it before you configure the authentication middleware:

```csharp
app.Use<AppHarborMiddleware>();

app.UseOAuthBearerTokens(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = false,
});
```
Applications at AppHarbor sit behind an NGINX load balancer. Because of this, all requests that hit the client app will come over HTTP, as the SSL will be handled by this front end.

ASP.NET MVC's OAuth 2 OAuthAuthorizationServerOptions has options to restrict access to token requests to only use HTTPS. The problem is, unlike a Controller or ApiController, I don't know how to allow these forwarded requests through when I specify AllowInsecureHttp = false. Specifically, in the app startup/config:

```csharp
app.UseOAuthBearerTokens(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
});
```

needs to somehow do this check internally and, if it's true, treat the request as SSL:

```csharp
HttpContext.Request.Headers["X-Forwarded-Proto"] == "https"
```

Here's how I do it with MVC Controllers by applying a custom filter attribute: https://gist.github.com/runesoerensen/915869
AppHarbor's Reverse Proxy causing issues with SSL and app.UseOAuthBearerTokens ASP.NET MVC 5
Method 1

Remove your app's virtual host entry and restart Nginx. Phusion Passenger will no longer serve it.

Method 2

In case you want to keep your app's virtual host entry but not actually run the app, set the following option and restart Nginx:

    passenger_min_instances 0;

Phusion Passenger will now shut down your app if it hasn't seen traffic for a while (~10 minutes). It'll be started again if traffic comes in for that app.

With `passenger_min_instances 0`, you can also kill the application processes manually. Look up the PIDs with passenger-status, then run `kill <PID>`.
I use passenger spawned by nginx. There are many other Rails applications on the server that use passenger (each has its own virtual host in nginx). I can restart the Rails/Nginx/Passenger application like this:

    touch tmp/restart.txt

How can I stop it? This doesn't work:

    touch tmp/stop.txt
    touch tmp/shutdown.txt
How to stop rails nginx-passenger application?
Two possible problems:

1. You don't have any brackets in your regex, so it's not going to be a capturing group. And you missed the `~*` modifier to tell Nginx to do a regex match.

```nginx
location ~* ^/backups/(\w+)$ {
    try_files $uri $uri/ /backups/$1/index.php;
}
```

2. The last parameter in a try_files is magic. It doesn't actually try to see if the file exists; instead it rewrites the request URI with the last parameter and reprocesses the request, which is moderately surprising. To fix this you can (and should) fall back to either a 404 or another page.

```nginx
location ~* ^/backups/(\w+)$ {
    try_files $uri $uri/ /backups/$1/index.php /404_static.html;
}

location = /404_static.html {
    root /documents/projects/intahwebz/intahwebz/data/html/;
    internal;
}
```

Btw, if you have further issues you should enable `rewrite_log on;`, which will write the matching to the server's error file at notice level and helps figure out location matching issues.
Using nginx and CodeIgniter, I have a location block in my server config that handles the routing for my project like this:

```nginx
location /beta/ {
    try_files $uri $uri/ /beta/index.php;
}
```

This works fine, but I perform backups on this CodeIgniter project and move them to another folder. The "beta" project gets renamed (with a time-stamp). So I have a backups folder with CodeIgniter projects named as such:

    backups/beta_2013_05_21_0857
    backups/beta_2012_05_23_0750

What I'm trying to do is create another location rule that handles these variable-named projects, but all attempts at using regex so far have failed. If I name the project directly it does work:

```nginx
location /backups/beta_2013_05_21_0857 {
    try_files $uri $uri/ /backups/beta_2013_05_21_0857/index.php;
}
```

But obviously I don't want to create a rule for each and every folder. Does anyone have any idea how to solve this? This is how I was trying to solve the problem:

```nginx
location /backups/^\w+$/ {
    try_files $uri $uri/ /backups/$1/index.php;
}
```
Using regex in nginx location block for variables
If you use the paste.deploy.config.PrefixMiddleware in your WSGI pipeline via `use = egg:PasteDeploy#prefix`, it will automatically translate X-Forwarded-For into REMOTE_ADDR. It is also great for other properties of your reverse proxy; for example, it will translate X-Forwarded-Proto into wsgi.url_scheme to ensure that if the user visits with https then generated URLs are also https.

http://pythonpaste.org/deploy/class-paste.deploy.config.PrefixMiddleware.html
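A minimal sketch of wiring that into a PasteDeploy ini; the section names and the app egg are illustrative assumptions:

```ini
[filter:paste_prefix]
use = egg:PasteDeploy#prefix

[pipeline:main]
pipeline = paste_prefix myapp

[app:myapp]
use = egg:MyProject   ; hypothetical application egg
```

With the filter in front of the app, request.environ['REMOTE_ADDR'] inside Pyramid should reflect the X-Forwarded-For value set by nginx.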
I have a Pyramid application which uses request.environ['REMOTE_ADDR'] in some places. The application is served by a Python Paste server listening on port 6543, and an nginx server listening on port 80 is forwarding requests to the Paste server. The nginx configuration is inspired by the Pyramid cookbook:

```nginx
server {
    listen 80;                            ## listen for ipv4
    listen [::]:80 default ipv6only=on;   ## listen for ipv6

    server_name localhost;
    access_log /var/log/nginx/localhost.access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:6543;
    }
}
```

In the Pyramid application the variable request.environ['REMOTE_ADDR'] is now always equal to 127.0.0.1. I see a few strategies to solve this problem, but I don't know if there is a recommended way to do it. Here is what I'm considering:

1. Add a NewRequest subscriber which replaces request.environ['REMOTE_ADDR'] if necessary:

```python
if 'HTTP_X_REAL_IP' in event.request.environ:
    event.request.environ['REMOTE_ADDR'] = event.request.environ['HTTP_X_REAL_IP']
```

2. Use a WSGI middleware to modify request.environ before hitting the Pyramid layer.

3. Something else.

Which strategy do you use for deploying Pyramid applications? What will happen if I have two nginx proxies (the first serving the LAN and a second one on a machine directly connected to the internet)?
How to get the real IP of a client in a pyramid server behind a nginx proxy
I found the answer. It was quite simple, as I guessed: one has to set the root directory once and use the sub-directories as the locations.

```nginx
server {
    listen 80;
    server_name QuadraPaper;
    access_log /home/gdev/Projects/QuardaPaper/access_log.log;
    root /home/gdev/Projects/QuardaPaper;

    location /site_media/ {
        autoindex on;
        access_log off;
    }

    location /media/ {
        autoindex on;
    }
}
```

I got a clue from "Nginx doesn't serve static".
I am looking for a simple configuration to serve all files and directories inside a particular folder. To be more precise, I am trying to serve everything inside the pinax /static_media/ folder and /media/ folder as-is with the same URL, and preferably auto-index everything. By the way, I have run `python manage.py build_media --all`, so all static content is under /site_media/static.

The current configuration I am using:

```nginx
server {
    listen 80;
    server_name QuadraPaper;
    access_log /home/gdev/Projects/QuardaPaper/access_log.log;

    location ^*/site_media/*$ {
        autoindex on;
        access_log off;
        root /home/gdev/Projects/QuardaPaper/site_media;
    }

    location /media/ {
        autoindex on;
        root /home/gdev/Projects/QuardaPaper/media/;
    }
}
```

All the different configuration instructions from various sites have really confused me, for example:

- How to serve all existing static files directly with NGINX, but proxy the rest to a backend server.
- http://coffeecode.net/archives/200-Using-nginx-to-serve-static-content-with-Evergreen.html
- https://serverfault.com/q/46315/91723
- http://wiki.nginx.org/Pitfalls
- http://pinaxproject.com/docs/0.7/media/#ref-media-devel

Environment information:

- Xubuntu 10.04 running on VirtualBox
- nginx 1.1.4
- pinax 0.72
- django 1.0.4
- fastcgi for running django via nginx
Nginx, simple configuration for serving all the files in a directory and all the directories within
I do not know what you mean by "smart", but anyway Nginx has had caching since the 0.7 branch. There are many parameters to tune, e.g. (a short sketch of such a cache follows below):

- you can have various TTLs for different return codes,
- the ability to return stale content when the application does not respond,
- it's possible to limit the total size of the cache on disk,
- you can define what pieces of information will be used to generate a cache key.

The documentation is here.
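A minimal sketch of an nginx proxy cache exercising those parameters; the paths, zone name, upstream port, and TTLs are illustrative assumptions:

```nginx
# reserve disk space and a shared-memory zone for cache metadata
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8000;             # hypothetical mongrel upstream
        proxy_cache appcache;
        proxy_cache_key $scheme$host$request_uri;     # what the key is built from
        proxy_cache_valid 200 10m;                    # different TTLs per return code
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating; # serve stale on backend trouble
    }
}
```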
I really like nginx. But recently I've found that Varnish gives you the opportunity to implement a smart caching reverse proxy layer (with URL purging). I have a cluster of mongrels which are pretty resource-intensive, so if this caching layer can remove some load from the mongrels, that would be a great thing. I didn't find a way to implement such a caching layer for application pages with nginx (static content is cacheable, of course). Should I use Varnish instead? What would you recommend?
Should I go with Varnish instead of nginx?
NGINX + Gunicorn + Django

Django project:

    djangoapp
    - ...
    - database
    - djangoapp
      - settings.py
      - urls.py
      - ...
    - media
    - static
    - manage.py
    - requirements.txt

Server: install venv and the requirements:

```sh
sudo apt-get update
sudo apt-get install -y git python3-dev python3-venv python3-pip supervisor nginx vim libpq-dev

cd djangoapp
python3 -m venv venv
source venv/bin/activate
(venv) pip3 install -r requirements.txt
```

Server: install NGINX:

```sh
sudo apt-get install nginx
sudo vim /etc/nginx/sites-enabled/default
```

Server: NGINX config:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /static/ {
        alias /home/ubuntu/djangoapp/static/;
    }

    location /media/ {
        alias /home/ubuntu/djangoapp/media/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        add_header P3P 'CP="ALL DSP COR PSAa OUR NOR ONL UNI COM NAV"';
        add_header Access-Control-Allow-Origin *;
    }
}
```

Server: set up supervisor:

```sh
cd /etc/supervisor/conf.d/
sudo vim djangoapp.conf
```

Server: supervisor config:

```ini
[program:djangoapp]
command = /home/ubuntu/djangoapp/venv/bin/gunicorn djangoapp.wsgi -b 127.0.0.1:8000 -w 4 --timeout 90
autostart = true
autorestart = true
directory = /home/ubuntu/djangoapp
stderr_logfile = /var/log/game_muster.err.log
stdout_logfile = /var/log/game_muster.out.log
```

Server: update supervisor with the new process:

```sh
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart djangoapp
```
Everything worked very well before gunicorn and nginx; static files were served to the website. But now it doesn't work anymore.

settings.py:

```python
STATICFILES_DIRS = [
    '/root/vcrm/vcrm1/static/'
]
STATIC_ROOT = os.path.join(BASE_DIR, 'vcrm/static')
STATIC_URL = '/static/'

MEDIA_ROOT = '/root/vcrm/vcrm1/vcrm/media/'
MEDIA_URL = '/media/'
```

/etc/nginx/sites-available/vcrm:

```nginx
server {
    listen 80;
    server_name 195.110.58.168;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static {
        root /root/vcrm/vcrm1/vcrm;
    }

    location = /media {
        root /root/vcrm/vcrm1/vcrm;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```

When I run collectstatic:

    You have requested to collect static files at the destination
    location as specified in your settings:
        /root/vcrm/vcrm1/vcrm/static
    This will overwrite existing files!
    Are you sure you want to do this?

and then:

    Found another file with the destination path 'admin/js/vendor/jquery/jquery.min.js'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
    0 static files copied to '/root/vcrm/vcrm1/vcrm/static', 251 unmodified.
Django doesn't serve static files with NGINX + GUNICORN
Use `--redirect`:

    sudo certbot --nginx --redirect -d your_domain -d www.your_domain

See the Security section of the documentation.
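For a fully unattended run, certbot's generic non-interactive flags can be combined with it; the email address here is a placeholder:

```sh
sudo certbot --nginx --redirect --non-interactive --agree-tos \
     -m admin@your_domain -d your_domain -d www.your_domain
```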
So I'm trying to automate certbot a bit in a script. When I run this:

    sudo certbot --nginx -d your_domain -d www.your_domain

I get the following:

    Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
    -------------------------------------------------------------------------------
    1: No redirect - Make no further changes to the webserver configuration.
    2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
    new sites, or if you're confident your site works on HTTPS. You can undo this
    change by editing your web server's configuration.
    -------------------------------------------------------------------------------
    Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

My question is: can I somehow make it automatically select option number 2 without asking this question? In other words, can I somehow specify option 2 in the `sudo certbot --nginx -d your_domain -d www.your_domain` command without it being interactive?
Certbot, specify redirection without answering question
In order to have Nginx handle fastcgi requests in parallel you'll need several things.

Nginx >= 1.7.1 for threadpools, and this configuration:

    worker_processes N;  # N as an integer, or 'auto'

where N is the number of processes; auto sets the number of processes equal to the number of cores. If you have a lot of IO, you might want to go beyond this number (having as many processes/threads as cores is no guarantee that the CPU will be saturated).

> In terms of NGINX, the thread pool is performing the functions of the delivery service. It consists of a task queue and a number of threads that handle the queue. When a worker process needs to do a potentially long operation, instead of processing the operation by itself it puts a task in the pool's queue, from which it can be taken and processed by any free thread.

Consequently, you want to choose N bigger than the maximum number of parallel requests. Hence you can pick, say, 1000, even if you have 4 cores; for IO, threads will only take some memory, not much CPU.

When you have many IO requests with large latencies, you'll also need `aio threads` in the 'http', 'server', or 'location' context, which is short for:

```nginx
# in the 'main' context
thread_pool default threads=32 max_queue=65536;

# in the 'http', 'server', or 'location' context
aio threads=default;
```

You might find that switching from Linux to FreeBSD is an alternative when dealing with slow IO. See the reference blog for deeper understanding: Thread Pools in NGINX Boost Performance 9x! (www.nginx.com/blog)
Using a minimal fastcgi/nginx configuration on Ubuntu 18.04, it looks like nginx only handles one fastcgi request at a time.

```nginx
# nginx configuration
location ~ ^\.cgi$ {
    # Fastcgi socket
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    # Fastcgi parameters, include the standard ones
    include /etc/nginx/fastcgi_params;
}
```

I demonstrate this by using a cgi script like this:

```bash
#!/bin/bash
echo "Content-Type: text";
echo;
echo;
sleep 5;
echo Hello world
```

Use curl to access the script from two side-by-side command prompts, and you will see that the server handles the requests sequentially. How can I ensure nginx handles fastcgi requests in parallel?
How can I make nginx handle fastcgi requests concurrently?
I just had the same problem and fixed it by adding a trailing / to the location: `location /airflow/ {` instead of `location /airflow {`. The trailing slash tells nginx to strip the preceding /airflow from URI paths passed to the corresponding python app.

My overall config looks as follows:

```nginx
server_name my_server.my_org.net;

location /airflow/ {
    proxy_pass http://localhost:9997;
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

In airflow.cfg I additionally specified:

```ini
base_url = http://my_server.my_org.net/airflow
enable_proxy_fix = False  # seems to be deprecated?
web_server_port = 9997
```
I'm trying to set up Airflow behind nginx, using the instructions given here.

airflow.cfg file:

```ini
base_url = https://myorg.com/airflow
web_server_port = 8081
.
.
.
enable_proxy_fix = True
```

nginx configuration:

```nginx
server {
    listen 443 ssl http2 default_server;
    server_name myorg.com;
    .
    .
    .
    location /airflow {
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto "https";
    }
}
```

The Airflow webserver and scheduler are up and running as systemd services. When I try to access https://myorg.com/airflow/, it gives "Airflow 404 = lots of circles". What could be wrong? I'd really appreciate your help in getting this running.
Airflow + Nginx set up gives Airflow 404 = lots of circles
Nginx doesn't support proxying to FTP servers. At best, you can proxy the socket... and this is a real hassle with regular old FTP due to it opening new connections on random ports every time a file is requested.

What you can probably do instead is create a FUSE mount of that FTP server on some local path, and serve that path with Nginx like normal. To that end, CurlFtpFS is one tool for this. Tutorial: https://linuxconfig.org/mount-remote-ftp-directory-host-locally-into-linux-filesystem

(Note: For security and reliability, it's strongly recommended you migrate away from FTP when possible. Consider SSH/SFTP instead.)
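A rough sketch of that approach (host, credentials and mount point are placeholders; allow_other needs user_allow_other enabled in /etc/fuse.conf so the nginx worker user can read the mount):

sudo apt-get install curlftpfs
sudo mkdir -p /mnt/ftp
# mount the FTP share read-only on a local path
sudo curlftpfs ftp://user:password@ftp-host:5000 /mnt/ftp -o allow_other,ro

# then serve the mounted tree from nginx like any static directory
location /foo/ {
    alias /mnt/ftp/;
}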
I have a local network on which there are some old insecure services. I use an nginx reverse proxy with client certificate authentication as a safe entrypoint to this local network from the Internet. Until now I used it only to proxy HTTP servers using:

location / {
    proxy_pass http://192.168.123.45:80/;
}

and everything works fine.

But now I would like to serve static files that are accessible through FTP on a local server. I tried simply:

location /foo {
    proxy_pass ftp://user:[email protected]:5000/;
}

but that doesn't work, and I could not find anything that would simply proxy an HTTP request to an FTP request.

Is there any way to do this?
Serve static files from FTP server using NGINX
So there is no need to do a rewrite or anything else. Simply pass the header values that you want as query parameters to the localhost application, as below, by appending them to the arguments.

server {
    location /test {
        set $args $args&host=$http_host;
        proxy_pass http://localhost:8080;
    }
}

If you have a custom header parameter like userid, then it would be $http_userid.
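For the 'userid' header from the question below, a sketch would be (note that if $args starts out empty the value begins with a stray '&', which most backends tolerate):

server {
    location /test {
        # $http_userid is nginx's variable for the 'userid' request header
        set $args $args&userid=$http_userid;
        proxy_pass http://localhost:8080;
    }
}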
I have an application running on localhost listening on port 8080. nginx is running as a reverse proxy, listening on port 80. So, a request coming to nginx on port 80 is sent to this application listening on localhost:8080, and the response from this application is sent back to the user.

Now this application is incapable of reading header variables from the request header and can read only query parameters.

So I want nginx to pass header values as query parameters to this application listening on localhost:8080. E.g. let us say there is a custom variable called 'userid' in the request header. How do we pass this userid as &userid=value appended to the url of the application listening on localhost:8080?

My current test file in sites-available and sites-enabled is:

server {
    location /test {
        proxy_pass http://localhost:8080;
    }
}
nginx - Passing request header variables to upstream URL as query parameter
After being in contact with Google Cloud Support, we were able to devise a workaround for this issue, although the root cause still remains unknown.

The workaround is to define the NGINX log format itself as a JSON string. This will allow the Google-Fluentd parser to correctly parse the payload as a JSON object. This is the only solution that has worked for me so far.

For reference, the log format I used is:

log_format json_combined escape=json
    '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"request_method":"$request_method",'
    '"request":"$request",'
    '"status": "$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"http_referrer":"$http_referer",'
    '"http_user_agent":"$http_user_agent"'
    '}';

access_log /var/log/nginx/access.log json_combined;
I have a basic nginx deployment serving static content running on a GKE cluster. I have configured Stackdriver Logging for the cluster as per the instructions here (I enabled logging for an existing cluster), and I also enabled the Stackdriver Kubernetes Monitoring feature explained here. The logging itself seems to be working fine, as I can see the logs from nginx in Stackdriver.

I am trying to create some log-based metrics, like the number of fulfilled 2xx requests, but all I am getting in the log entries in Stackdriver is the textPayload field. From what I understand, enabling Stackdriver Monitoring on the cluster spins up some Fluentd agents (which I can see if I run kubectl get pods -n kube-system), and they should have an nginx log parser enabled by default (as per the documentation here). However, none of the log entries that show up in Stackdriver have the jsonPayload field that should be there for structured logs.

I'm using the default log_format config for nginx, and I've verified that the default nginx parser is able to parse the logs my application is writing (I copied the default Fluentd nginx parser plugin regular expression and a log entry from my application to this tool and it was able to parse the entry).

Edit: For reference, here is my NGINX log format:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent"';

And I have tried the following so far:

1. Upgrade my k8s cluster from version 1.11.5 to 1.11.6 (due to an issue with structured logging in version 1.11.4, which was fixed in 1.11.6)
2. Downgrade from version 1.11.6 to 1.11.3
3. Make a brand new cluster with the GCP console (version 1.10.9), with the Stackdriver Monitoring and Stackdriver Logging options enabled, and deploy my application on that. Still no jsonPayload field, only textPayload.

So far, none of these have solved it.
NGINX Logs have no jsonPayload field in Stackdriver
The difference between the examples you have provided is how the processes are being started.

Running the command nginx starts the nginx binary directly, outside the control of the init system.

The systemctl and service commands are nearly the same thing, and running service nginx start or systemctl start nginx will start the Nginx daemon as a managed service in the background.

You can also use this to do a service nginx restart or systemctl restart nginx to restart the service, or even a service nginx reload / systemctl reload nginx to reload the configuration without completely stopping the Nginx server.

The reason why you can't do both nginx and systemctl start nginx is that the first nginx instance is already listening on port 80, and you can't listen on the same port on a single IP address at the same time.

You can also force the nginx service to start on boot by running systemctl enable nginx, which is why your systemctl status nginx currently returns 'disabled'.

Hope this makes sense.
I have noticed that whenever I start nginx with the plain command "nginx" and I do systemctl status nginx, it shows that the systemctl unit is disabled. Moreover, if I first start nginx with the command systemctl start nginx and then try to start nginx with the command nginx, it checks the availability of the ports and then says nginx: [emerg] still could not bind(). So I thought there must be a difference between them and their purpose. When I start nginx with the command nginx, the only way to stop nginx is by force, using killall nginx or kill -9 (process id), or by clearing the port. So I am pretty sure there is some difference between them.
What's the difference between starting nginx with the commands "nginx", "service nginx start" and "systemctl start nginx"?
The problem is that volumes are mounted after the build operation is completed. That is why this approach won't work for you.

What you will need to do is copy those resources inside the container in a Dockerfile.

Assuming you don't have a Dockerfile defined, you can create your own, making nginx your base image. It would look somewhat like:

FROM nginx:latest
COPY from/host/dir to/container/dir

Something similar, but in a different context, is explained here.

Cheers!
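Applied to the compose setup in the question below, a hypothetical sketch (paths are assumptions; note that baking certificates into the image means rebuilding whenever they are renewed, which is a trade-off against bind-mounting them at run time):

# Dockerfile, placed next to docker-compose.yml
FROM nginx:latest
# bake the nginx config and certificates into the image
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY certs/ /etc/cert/

# docker-compose.yml excerpt: build from the Dockerfile instead of mounting
nginx:
  container_name: manager
  build: .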
I want to run multiple web application images with NGINX. So I wrote a docker-compose.yml which builds the nginx image and runs the nodejs containers. I have an SSL certificate issued by letsencrypt. The certificate files are located in /etc/letsencrypt/live/mydomain.com/.

I want the NGINX container to read the files. So I appended

volumes:
  - /etc/letsencrypt/live/mydomain.com/:/etc/cert:ro

to docker-compose.yml. But nginx.conf cannot read the files. I found that the directory /etc/cert doesn't exist, and it's mounted as bind type. I want to know how to set volumes in the docker-compose.yml file so they are readable inside the containers.

docker-compose.yml:

version: '2.0'
services:
  nginx:
    container_name: manager
    build: ./nginx
    links:
      - app-1:app-1
      ...
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/letsencrypt/live/mydomain.com/:/etc/cert:ro
    depends_on:
      - app-1
      ...
  app-1:
    container_name: audio-1
    image: audio:test
    ports:
      - 80
    ...

nginx.conf:

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    upstream node-app {
        least_conn;
        server app-1:80 weight=10 max_fails=3 fail_timeout=60s;
        ...
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        ssl_certificate /etc/cert/fullchain.pem;
        ssl_certificate_key /etc/cert/privkey.pem;
        location / {
            proxy_pass http://node-app;
            ...
        }
    }
}

error:

nginx: [emerg] BIO_new_file("/etc/cert/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/cert/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

$ docker inspect manager

...
"Mounts": [
    {
        "Type": "bind",
        "Source": "/etc/letsencrypt/live/luvber.kr",
        "Destination": "/etc/cert",
        "Mode": "ro",
        "RW": false,
        "Propagation": "rprivate"
    }
],
...

Thanks
build with volumes in the docker-compose.yml
Most likely your Nginx server is overloaded, therefore requests cannot be processed in a timely fashion, causing the error.

It might be caused by several issues:

1. The Nginx server simply lacks hardware resources (CPU, RAM, network or disk). Make sure it has enough headroom to operate by monitoring the aforementioned resources on the Nginx side, using e.g. the JMeter PerfMon Plugin.

2. The Nginx server configuration is not suitable for high loads. The default configuration might not be suitable for handling 500 concurrent users, so check out the "How to Configure nginx for Optimized Performance" article to see whether your setup matches the recommendations. Basically you need to combine points 1 and 2 to get the most out of your Nginx setup; if the machine is relatively idle and you're getting high response times and/or low throughput, something is wrong either with your web application or with the infrastructure setup.

3. It might be a problem with your application, which cannot generate responses faster than Nginx times out. Check its logs and inspect what's going on using profiling tools - they will allow you to detect where the application spends most of its time.

4. It might be a network infrastructure problem, e.g. you're using WiFi instead of LAN, there is a faulty router, or you reached the limits of a corporate proxy. It might also be a good idea to capture network traffic using a sniffer tool like Wireshark.

But first of all, read the logs: JMeter logs, Nginx logs, OS logs, whatever exists and is relevant. Most probably you will figure out the cause from them.
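As an illustration of point 2, a few nginx directives that are commonly tuned for higher concurrency (the values here are illustrative starting points, not recommendations for your exact workload):

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}

http {
    keepalive_timeout 30;
    proxy_connect_timeout 10;
    # raise only if the backend legitimately needs more than the 60s default
    proxy_read_timeout 120;
}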
I have been working on performing a load test of 500 users per second using JMeter. While running the load test I continuously get the error on the login API. Below are the request I am sending and the timeout response I receive.

Sample Request:

POST https://example.com//9000/v1/api/user/login
POST data:
{
    "email":"[email protected]",
    "password":"abcdef"
}
[no cookies]
Request Headers:
Connection: keep-alive
Content-Type: application/json
Content-Length: 79
Host: botstest.smartbothub.com
User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_151)

Sample Response:

Thread Name: Thread Group 1-310
Sample Start: 2017-12-27 11:30:06 IST
Load time: 61422
Connect Time: 1148
Latency: 61422
Size in bytes: 363
Sent bytes: 286
Headers size in bytes: 171
Body size in bytes: 192
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: 504
Response message: Gateway Time-out
Response headers:
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.10.3 (Ubuntu)
Date: Wed, 27 Dec 2017 06:01:07 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
HTTPSampleResult fields:
ContentType: text/html
DataEncoding: null

The nginx config file for the server is shown here: https://drive.google.com/file/d/1_XWYeqSAWZz6dtnTTCeLOEMzDwvEWwY1/view
504 Gateway Timeout while running the load test in JMeter
I finally found it. Given the various descriptions on the net that use just url, I thought that I was also setting the host with the url. But you actually need to set the host and the url independently. So the solution is to set the host to the service name of the selenium browser:

- WebDriver:
    url: http://localhost/    # url of app
    browser: chrome
    host: chrome              # selenium server host, default 127.0.0.1
    # port: 4444              # selenium server port, default 4444
    # window_size: maximize   # or 640x480
I have an error somewhere in my WebDriver setup for Codeception and just can't figure it out.

When starting with docker-compose run --rm codeception run it finds the acceptance tests, and even reads the $I->wantTo, but then throws an error:

[ConnectionException] Can't connect to Webdriver at http://127.0.0.1:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.

My acceptance.suite.yml is the following, and I already tried replacing the url with chrome, nginx-web, and the IP of the actual server (which does not make sense, but I really don't know what else to put in there):

actor: AcceptanceTester
modules:
    enabled:
        # selenium webdriver
        - WebDriver:
            url: 'http://localhost/'
            browser: chrome
        - \Helper\Acceptance

My docker-compose.yml (I set the volumes in an additional override):

version: '2'
services:
  codeception:
    image: codeception/codeception:2.3.5
    depends_on:
      - nginx-web
      - php-web
      - chrome
  nginx-web:
    image: nginxext:0.5.6
    depends_on:
      - php-web
    expose:
      - 80
  php-web:
    image: phpext:0.7.0
    expose:
      - 9000
  # https://github.com/SeleniumHQ/docker-selenium
  chrome:
    image: selenium/standalone-chrome-debug:3.7.1
    ports:
      - 4444
      - 5900

Any ideas what I am doing wrong?
codeception in docker-compose - can't connect to Webdriver
You need to implement the so-called 'X-Sendfile feature'. Let's say your paid-for files will be served from location /protected/ - you need to add to nginx's config:

location /protected/ {
    internal;
    root /some/path;
}

Then, when you want to serve your user a file named mycoolflix.mp4, your app needs to add the header X-Accel-Redirect: /protected/mycoolflix.mp4, and the file /some/path/protected/mycoolflix.mp4 will be served to the user. More information in the nginx documentation here and here. Serving files from your views is not a good idea - it keeps one of your Django processes busy until the download is complete, preventing it from serving other requests.
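On the Django side, a minimal sketch of such a view (the payment check is a hypothetical stub you would replace with your own logic):

from django.http import HttpResponse, HttpResponseForbidden

def user_has_paid(user, filename):
    # hypothetical check - replace with your payment/permission logic
    return user.is_authenticated

def download(request, filename):
    if not user_has_paid(request.user, filename):
        return HttpResponseForbidden()
    response = HttpResponse()
    # nginx intercepts this header and serves the file from the internal location
    response['X-Accel-Redirect'] = '/protected/%s' % filename
    del response['Content-Type']  # let nginx pick the MIME type
    return response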
I want to build an app that lets users see some videos only if they have permission or have paid for the video. I am using Django, and I want to add nginx and gunicorn to serve media files. I am not sure how, once the user has the URL of the video, I can block him from seeing the video if his payment expired or he doesn't have the permissions. For now I let Django serve the videos: I overrode the serve method, and if the user doesn't have access to the video I return 404.
Can I add permissions to Django media files?
Usually PHP files are processed by a location ~ \.php$ block (or similar). I assume that index.php is not the only PHP file in your application, and to process PHP files within the /root/ directory structure, that location will need to use root /project_folder/root.

You can specify a different root for URIs which begin with /public, and use a named location to execute the out-of-root index.php file.

Something like this:

root /project_folder/root;

location / {
    try_files $uri $uri/ @index;
}

location /public {
    root /project_folder;
}

location @index {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /project_folder/index.php;
    fastcgi_pass ...;
}

location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass ...;
}
Currently I'm running web projects with the following directory structure (simplified):

project_folder/
    public/
    root/
    index.php

What I want to do is set the root in the server block to:

root /project_folder/root/;

But when a requested location does not exist, I want it to forward the request to project_folder/index.php. I tried the following:

try_files $uri ../index.php?$query_string;

But this doesn't seem to work.

The same goes for a request that starts with $document_root/public/*, which needs to be forwarded to $document_root/../public/*.

I did manage this by adding an index.php in the /project_folder/root/ folder which includes /project_folder/index.php. For the public folder I created a symlink. I could probably solve this by adding an alias for the location /public/, however I'm trying to keep my config as generic as possible.

I'm trying to avoid setting the project_folder as the root folder, as it sometimes contains files I prefer not to be accessible from the web. Introducing the 'root' folder is new, but I'm trying to do this as efficiently as possible. Existing web projects do not have the root folder yet.

Is what I'm trying to do possible, and how would I be able to achieve it? Thanks in advance.
NGINX rewrite requests to file outside root
You get this error because your code executes the command require("socket").

This command will search for a file with that name in several directories. If successful, the content will be executed as Lua code. If it is not successful, you'll end up with your error message.

In order to fix this you have to add the path containing the file either to the environment variable LUA_PATH, or to the global table package.path before you require the file. Lua will replace ? with the name you give to require().

For example:

package.path = package.path .. ";" .. thisPathContainsTheLuaFile .. "?.lua"

Please read:
http://www.lua.org/manual/5.3/manual.html#pdf-require
https://www.lua.org/pil/8.1.html
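Since the question runs Lua inside nginx/OpenResty, the search path can also be set in nginx.conf rather than in the environment (the directory below is an assumption based on the question's error log; C modules such as LuaSocket's compiled core additionally need lua_package_cpath):

http {
    lua_package_path "/home/sivag/lua/?.lua;;";
    lua_package_cpath "/usr/local/lib/lua/5.1/?.so;;";
}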
I am trying to use Lua to access Redis values from nginx. When I execute the Lua files on the command line everything is OK - I am able to read and write values to Redis. But when I try to execute the same files from nginx, by accessing a location in which the access_by_lua directive is written, the following error is logged in the error log file:

no field package.preload['socket']
no file '/home/sivag/redis/redis-lua/src/socket.lua'
no file 'src/socket.lua'
no file '/home/sivag/lua/socket.lua'
no file '/opt/openresty/lualib/socket.so'
no file './socket.so'
no file '/usr/local/lib/lua/5.1/socket.so'
no file '/opt/openresty/luajit/lib/lua/5.1/socket.so'
no file '/usr/local/lib/lua/5.1/loadall.so'

What is the reason for this and how can I resolve it?
Module socket not found lua
If these are your only server blocks, then they are also your de facto default server blocks for port 443 and port 80 respectively. See this document for details: http://nginx.org/en/docs/http/server_names.html

If you do not want this, you need to declare a default server block. A minimalist definition might be:

server {
    listen 80 default_server;
    listen 443 ssl default_server;
    ssl_certificate ...;
    ssl_certificate_key ...;
    return 403;
}

The ssl certificate is required to start the Nginx service, but it can be any certificate. Also, the ssl_certificate directives are inherited, so you can place the default statements in the http block instead.

Use return 444; to just close the connection with no response.
I have two vhosts: one on domain.tld port 80, the other on sub.domain.tld port 443 with SSL on. I added a CNAME entry on my DNS server that points the sub subdomain to domain.tld. Everything works as expected, except that going to http://sub.domain.tld does the same as going to http://domain.tld, and https://domain.tld the same as https://sub.domain.tld. How can I prevent this?

My configuration:

server {
    listen *:443;
    listen [::]:443;
    server_name www.sub.domain.tld;
    ssl on;
    ssl_certificate ...;
    ssl_certificate_key ...;
    root /var/www/sub.domain.tld;
    ...
}

server {
    listen *:80;
    listen [::]:80;
    server_name www.domain.tld;
    root /var/www/domain.tld;
    ...
}
Make Nginx drop requests when server_name does not match
It was just a problem with permissions:

chmod 755 /home
chmod 755 /home/user

Got the previous commands from this answer.
I have created a folder for the default server at /var/www/default and everything works as expected. Inside that folder I made a symlink to ~/WebstormProjects/my-project, using the common ln -s. It worked for a while, but the last time I updated using apt-get, nginx stopped following the symbolic link, which gives me a 404 error, not even listing the symlinks as it used to do.

I tried using the disable_symlinks directive, setting it to off, and nothing happened. I also followed the steps in this link, and still nothing. I also added myself to the www-data group - nothing.

But if I edit nginx.conf, changing the user directive to my own user, and restart the server, it does work. But I know that's a very bad practice and some day in the future it will not allow PHP-FPM to work.

So, what can I do to make nginx follow symlinks without changing the owner of my source directories? BTW, I'm using Ubuntu 14.04.3 and nginx 1.4.6 installed via the package manager.
nginx doesn't follow symlinks with www-data user
If you didn't fill in the HOSTNAME option on initial setup of dokku you'll run into your current problem. The VHOST file has yet to be created, causing the current error.

To remedy this we have to create the missing VHOST file and populate it with your domain name. First SSH into your droplet and run the following (depending on your permissions you may require sudo to create and edit the VHOST file):

cd /home/dokku
touch VHOST
chmod 0755 VHOST
# Use your editor of choice (nano, vim etc.)
# to add your hostname to the VHOST file, e.g. mydomain.com

Now for each app we're going to need to trigger a rebuild of the nginx.conf file. To do this run dokku nginx:build-config myapp for each app.

Note: Deleting the app's dir from /home/dokku/myapp and redeploying will also have the same effect, but will require you to re-link other containers, e.g. db plugins.

If everything has gone smoothly, running dokku domains myapp should now output in your terminal:

=====> myapp Domain Names
myapp.mydomain.com

You should now be able to remove and add domains for your app successfully using the dokku domains commands.

See this answer also for reference.
This is my first site where I've tried to use Dokku to deploy a rails app on DigitalOcean.

This is a default Dokku install on a basic Ubuntu VM hosted on DigitalOcean.

When I try to run:

dokku domains:add myapp mydomain.com

I get the following error:

=====> unsupported vhost config found. disabling vhost support
=====> config:set-norestart is deprecated as of v0.3.22
-----> Setting config vars
       NO_VHOST: 1
-----> VHOST support disabled, deleting four-heroes/VHOST
-----> Added mydomain.com to myapp

The last line looks like it may have worked despite the errors. However, when I run:

dokku domains myapp

I get this message:

=====> unsupported vhost config found. disabling vhost support
=====> config:set-norestart is deprecated as of v0.3.22
-----> Setting config vars
       NO_VHOST: 1
=====> myapp Domain Names
cat: /home/dokku/myapp/VHOST: No such file or directory

Aside from the Postgresql plugin this is a default Dokku install. The application works well and I'm able to access it at the ip.ad.dr.ess:port combination, and I'm able to SSH to the domain (ssh [email protected]).

I can't figure out where I messed up here. Any help is appreciated.
Dokku domains:add <app> <domain> returns unsupported vhost config found. disabling vhost support
The problem may occur in case of wrong concatenation order. You tried:

cat www_example_com.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > ssl-bundle.crt

which looks correct, but concatenation usually requires eliminating the extra download from the root CA; therefore the Nginx creator said:

Browsers usually store intermediate certificates which they receive and which are signed by trusted authorities, so actively used browsers may already have the required intermediate certificates and may not complain about a certificate sent without a chained bundle.

The official docs explicitly say:

If the server certificate and the bundle have been concatenated in the wrong order, nginx will fail to start and will display the error message:

SSL_CTX_use_PrivateKey_file(" ... /www.example.com.key") failed (SSL: error:0B080074:x509 certificate routines: X509_check_private_key:key values mismatch)

because nginx has tried to use the private key with the bundle's first certificate instead of the server certificate.

So to solve the problem, please try:

1. Attach www_example_com.crt as the ssl_certificate Nginx config key
2. Download the latest Comodo SHA2 CA certificates from the official web page and try one more time to concatenate the bundle
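To confirm whether the certificate and key actually belong together before touching the bundle, comparing modulus hashes is quicker than eyeballing the full modulus:

openssl x509 -noout -modulus -in www_example_com.crt | openssl md5
openssl rsa -noout -modulus -in domain_com.key | openssl md5
# the two digests must be identical for a matching pair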
My server was hosted on Bluehost (Apache), where the certificate was working fine. Now I'm using Google Cloud for multiple pages in NodeJS on different ports using proxy_pass. I am trying to configure SSL but I have problems. I looked at similar questions, but it still shows the same error. I created the key file following this link.

/var/log/nginx/error.log:

2015/07/08 10:47:20 [emerg] 2950#0: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/domain_com/domain_com.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)

When I put on the console:

openssl rsa -noout -modulus -in domain_com.key

it shows me this:

Modulus=D484DD1......512 characters in total......5A8F3DEF999005F

openssl x509 -noout -modulus -in ssl-bundle.crt:

Modulus=B1E3B0A.......512 characters in total......AFC79424BE139

This is my Nginx setup:

server {
    listen 443;
    server_name www.domain.com;
    ssl_certificate /etc/nginx/ssl/domain_com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/domain_com/domain_com.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/domain_com.access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8086;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:8086 https://www.domain.com;
    }
}
Nginx SSL Certificate failed SSL: error:0B080074:x509 (Google Cloud)
I have found the answer to this horrible question. The answer is at https://github.com/phusion/passenger/wiki/Debugging-application-startup-problems under the heading "Early Termination in Bash". It turns out that the Ubuntu .bashrc does not run if the shell is not interactive. Phusion Passenger does not run in an interactive shell, therefore these environment variables never get loaded for the Phusion Passenger process.

Mike's comment was on track. If you are using rvm, then nginx points to a ruby wrapper script in which you can set the environment variables before ruby starts:

passenger_ruby /home/rails/.rvm/gems/ruby-2.2.1@was_i_towed/wrappers/ruby;

is a line in my nginx.conf file. If I open this wrapper in vi or nano, I can add the export SECRET=... to the top of the file and it works.

Other literature suggests that setting the environment variables in /etc/environment should also work.

This issue should also be rendered moot when upgrading to Phusion Passenger 5, which has a facility for specifying environment variables in nginx.conf.
According to the documentation at https://www.phusionpassenger.com/documentation/Users%20guide%20Nginx.html#env_vars_passenger_apps (section 15.3.5), Phusion Passenger should be reading environment variables from .bashrc. I am trying to run a Rails 4.2 application from a user account named rails, using nginx and Phusion Passenger, and I get a 502 Bad Gateway error when I try to load it in the browser. The process operates under the correct user. When I open a ruby console in the rails app directory I see the environment variables from my bashrc, including secret_key_base. However, when I tail my nginx log, the error I get is that it is not able to find secret_key_base. I have tried adding this elsewhere, including /etc/bash.bashrc and /etc/nginx.conf.
Phusion Passenger 4 & nginx cannot see environment variables in Ubuntu Linux
Use this:

sudo ln -sfn $(which ruby) /usr/bin/ruby

That is essentially the same for you as doing this:

sudo ln -s /usr/local/rvm/rubies/ruby-2.2.0/bin/ruby /usr/bin/ruby
I am following this guide: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-passenger-and-nginx-on-ubuntu-14-04

But I have installed Ruby using RVM, as it's easier to maintain Ruby that way.

I am at the step to create a symlink for Ruby, where the guide says:

sudo rm /usr/bin/ruby
sudo ln -s /usr/local/bin/ruby /usr/bin/ruby

But since I have used RVM, when I run which ruby I get the following path:

root@99atoms-staging:~# which ruby
/usr/local/rvm/rubies/ruby-2.2.0/bin/ruby
Creating a symlink for Ruby while using RVM
You must not have read the ngx_http_ssi_module documentation properly (especially its 'SSI Commands' section): it explains the command format.

You need to set the ssi directive to on in the context you wish SSI commands to be parsed in, then you need to serve files there which contain those commands.

For example:

server {
    listen 8000;
    index index.txt;

    location / {
        ssi on;
    }
}

The $date_local variable's documentation states that it must be configured with the config command, by setting its timefmt parameter.

You just need to serve files which will send commands back, such as index.txt containing:

<!--# config timefmt="%Y%m%d" --><!--# echo var="date_local" -->

The format used by the timefmt parameter is the one of the strftime standard C function (as the documentation states).
According to the docs you can set the date format in nginx with the config timefmt command, but I can't find any documentation/example on where or how to set that.

The default shows a string like "Sunday, 26-Oct-2014 21:05:24 Pacific Daylight Time" and I want to change it to yyyyMMdd.

I'm running nginx on Windows, if that makes a difference.

Thank you
How to set date format for nginx $date_local
Assuming you have an attribute node[:apache][:bool] = true, in the template you would have to do:

<% if node[:apache][:bool] -%>
ServerAlias <%= node[:apache][:alias] %>
<% end -%>

Another option is to check whether the attribute is nil:

<% unless node[:icinga][:core][:server_alias].nil? %>
ServerAlias <%= node[:icinga][:core][:server_alias] %>
<% end %>
With Chef, is there a way to insert a block of text from a template only if a condition is met?

Let's say we have an attribute:

node["webapp"]["run"] = "true"

And we only want an entry in the nginx .conf in sites-enabled/app.conf if the webapp is true, like this:

# The nginx-webapp.conf.erb template file
SOME WORKING NGINX CONFIG STUFF
<% if node["webapp"]["run"]=="true" -%>
    location /webapp {
        try_files $uri @some_other_location;
    }
<% end -%>
SOME OTHER WORKING NGINX CONFIG STUFF

As it stands, the conditional text doesn't error out, it just never appears. I've double-checked that the template can see the node attribute by using this:

<%= node["webapp"]["run"] %>

which DID insert the text "true" into the config file.

I saw in "Chef and erb templates. How to use boolean code blocks" that I could insert what appears to be just text with an evaluated variable from the node.

I have tried changing to:

<% if node[:webapp][:run]=="true" -%>
TEXT
<% end -%>

to no avail.

Any ideas what I'm doing wrong? Thanks!

EDIT: Per Psyreactor's answer, in the template itself I stopped trying to evaluate the string "true" and instead used this:

SOME WORKING NGINX CONFIG STUFF
<% if node["webapp"]["run"] -%>
    location /webapp {
        try_files $uri @some_other_location;
    }
<% end -%>
SOME OTHER WORKING NGINX CONFIG STUFF

This DOES correctly insert the text block in the config file if the node attribute is set to "true"! I suppose I just assumed it was still a string that needed to be evaluated as such.
Chef template - Conditionally inserting a block of text
Ok, I found the answer. The trick is that you have to define error_page explicitly for all those special locations. Here is the configuration which worked for me:

location / {
    root /var/www/nginx-default;
    index index.html index.htm;
    error_page 404 /404.html;
}

location /abc1.html {
    root /var/www/nginx-default;
    error_page 404 /special_error.html;
}

location /abc2.html {
    root /var/www/nginx-default;
    error_page 404 /special_error2.html;
}

I am not good with nginx, but I have noticed it depends on the search pattern you give in the "location" tag. I have tried different things and those failed. For example, the above rules will ONLY work for http://localhost/abc1.html and fail for http://localhost/abc1, so your "location" search pattern should be good if you want to cover the second case. Probably some nginx guru can shed some more light on this. Thanks.
I am running an nginx server. I want to serve a custom error page for particular requests only. For example, for the requests http://localhost/abc1 and http://localhost/abc2, if these pages are not there I want to serve a custom error page. This custom error page should appear only for the two links mentioned above; the rest of the page errors can show the default error page. I have tried different configurations but nothing seems to work. Thoughts?

Regards, Farrukh Arshad.
Multiple 404 error pages in nginx
The solution is to not use legacy unsupported versions of nginx. Starting from version 1.3.15 (a pretty old one), nginx does not log the 400 errors in such cases.

See the changelog for details (http://nginx.org/en/CHANGES):

*) Change: opening and closing a connection without sending any data in it is no longer logged to access_log with error code 400.
Amazon Elastic Load Balancer (ELB) performs periodic health checks:

In addition to the health check you configure for your load balancer, a second health check is performed by the service to protect against potential side-effects caused by instances being terminated without being deregistered. To perform this check, the load balancer opens a TCP connection on the same port that the health check is configured to use, and then closes the connection after the health check is completed.

nginx logs these events with a 400 error, and they happen many times per minute:

[07/Aug/2013:18:32:27 +0000] "-" 0.000 400 0 "-" "-" "-"

How can I configure nginx to not log these events?
Configure nginx to not log ELB secondary healthcheck
proxy_buffers

The number defines how many buffers nginx will create, and the size defines how big each buffer will be. When nginx starts receiving data from the upstream, it starts filling up those buffers, either until the buffers are full or until the upstream sends EOF/EOT. If either of those two conditions is met, nginx sends the contents of the buffers to the client.

If the client isn't reading the buffers quickly enough, nginx will attempt to write their contents to disk and send them once the client is able to receive.

Take the average size of your JSON responses into account. Modern disks and file systems can handle even huge buffer sizes, but you should use a power of two, and by striking a good balance between the number and size you can speed up the buffering process.

proxy_busy_buffers_size

These are buffers that were already passed downstream but not yet completely sent, and therefore can't be reused. This directive limits the maximum total size of such buffers, and thus allows the remaining buffers to be used to read upstream responses.

proxy_buffer_size

The main buffer that is always in use. Even if you disable proxy_buffering, nginx will still fill up this buffer and flush it as soon as it's full or on EOF/EOT.

proxy_max_temp_file_size

This directive controls how much data may be written to disk if the buffers are full (still utilizing the in-memory buffers, because you need them to communicate with the disk). If all buffers and this file are full, nginx stops reading from the upstream and has to wait for the downstream to fetch the data before it can continue with the same procedure.
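For small JSON payloads like the ones described in the question below, a hedged starting point might look like this (values are illustrative; nginx requires proxy_busy_buffers_size to be at least as large as the bigger of proxy_buffer_size and one proxy_buffers buffer):

proxy_buffering on;
proxy_buffer_size 8k;        # response headers plus the start of the body
proxy_buffers 8 16k;         # per-connection buffers for the body
proxy_busy_buffers_size 32k;
proxy_max_temp_file_size 0;  # small payloads: skip disk buffering entirely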
I'm an iOS developer and my back end is all written in Django. I use gunicorn as my HTTP server. I have three workers running on a small EC2 instance.

My iOS app does not require any images or static content. At most, I am sending 1-20 JSON objects at a time per request. Each JSON object has at most about 5-10 fields.

I'm quite new to NGINX. I heard it can do proxy buffering. I would like to add proxy buffering for slow clients, but I don't know the appropriate specific settings to use for the following directives:

proxy_buffers
    Syntax:  proxy_buffers number size
    Default: 8 4k|8k
    Context: http, server, location

proxy_busy_buffers_size
    Syntax:  proxy_busy_buffers_size size
    Default: 8k|16k
    Context: http, server, location

proxy_buffer_size
    Syntax:  proxy_buffer_size size
    Default: 4k|8k
    Context: http, server, location

The only setting which I know how to use (which is pretty sad) is the one below:

proxy_buffering
    Syntax:  proxy_buffering on | off
    Default: on
    Context: http, server, location

Your expertise in this area would be greatly appreciated by this kind lost soul!
What are the appropriate NGINX configs if you are only sending JSON objects?
The solution is to uncomment this line in nginx.conf:

pid /var/run/nginx.pid;

It looks like different installations do it differently, but the right thing is to uncomment it.
When I try to restart nginx with sudo /etc/init.d/nginx restart I get the message from the subject.

I discovered that the reason is most likely that the script doesn't know how to stop the daemon, because the pid file (/var/run/nginx.pid) is not created on start.

I have two installations on two different servers... one was compiled from source and the other came with Phusion Passenger.

I tried this command:

start-stop-daemon --start --quiet --pidfile /var/run/nginx.pid --exec /usr/sbin/nginx -- -c /etc/nginx/nginx.conf

on both machines, and on one the pid file is created while on the other it is not - on that machine the paths are a bit different (but I don't think this is relevant):

start-stop-daemon --start --quiet --pidfile /var/run/nginx.pid --exec /opt/nginx/sbin/nginx -- -c /opt/nginx/conf/nginx.conf

The process starts and the pid is not written...

I'm on Debian... Any suggestions?
Restarting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
First, don't use if. If is evil -> http://wiki.nginx.org/IfIsEvil

You can accomplish this by using the following rewrite rule:

rewrite ^/folder/folder1/(.*)$ /folder/folder1/index.php?word=$1 last;

Place this rewrite rule just above your location / {} block.
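In context, the placement would look roughly like this (a sketch; the root and PHP-FPM socket paths are assumptions for your setup):

server {
    listen 80;
    root /var/www/site;

    # server-level rewrite, evaluated before location matching
    rewrite ^/folder/folder1/(.*)$ /folder/folder1/index.php?word=$1 last;

    location / {
        index index.php;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed socket path
    }
}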
So with apache I have a folder:

www.domain.com/folder/folder1/index.php?word=blahblah

and I want users who access www.domain.com/folder/folder1/blahblah to be redirected to the above URL without the URL changing.

Thus I have the following .htaccess in folder/folder1/, which works perfectly:

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)$ index.php?word=$1

So I want to achieve the same functionality with nginx, and I used two converters.

http://www.anilcetin.com/convert-apache-htaccess-to-nginx/ results in:

if (!-f $request_filename){
    set $rule_0 1$rule_0;
}
if (!-d $request_filename){
    set $rule_0 2$rule_0;
}
if ($rule_0 = "21"){
    rewrite ^/(.+)$ /index.php?word=$1;
}

and http://winginx.com/htaccess results in:

if (!-e $request_filename){
    rewrite ^(.+)$ /index.php?word=$1;
}

Now, I tried both, but neither works. I tried inserting them in location / { }, in location /folder/folder1 { }, and in location ~ \.php$ { }. In all locations I get a 404 error.

The nginx error log reports either "primary script unknown while reading response header from upstream" or "no such file or directory".

Can someone enlighten me please? Thanks in advance!
nginx php file rewrite url
uwsgi_param key value;

Ex.

uwsgi_param GEOIP_COUNTRY $geoip_country_name;
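Anything set with uwsgi_param ends up in the WSGI environ on the application side, so a Flask handler could read it like this (a sketch; it assumes the nginx geoip module is configured so $geoip_country_name is actually populated):

from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def index():
    # values passed via uwsgi_param appear in the WSGI environ
    country = request.environ.get('GEOIP_COUNTRY', 'unknown')
    return 'Country: %s' % country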
I am trying to use the GeoIP module with my Nginx and uWSGI stack. All the tutorials relate to using it with fastcgi, but since I don't use fastcgi that doesn't help.

I need to get nginx to pass GeoIP data into the app. With a proxied CGI app you would use custom HTTP headers, e.g.:

proxy_set_header X-GeoIP-Country $geoip_country_name;
proxy_set_header X-GeoIP-City $geoip_city;

How do I do this with uWSGI?
Pass parameters to Python Flask via UWSGI / NGINX
We are using nginx together with our Django app in a gunicorn server. The performance is quite good so far, but I have not done any direct comparisons with an Apache setup. Memory usage is quite small: nginx takes about 10MB of memory and gunicorn about 150MB (but it also serves more than one app). Of course this may vary from app to app.

I would suggest simply giving it a try; it should be quite easy to set up following some tutorials on the web and/or on the gunicorn website. Also get some comparable test case and use some kind of monitoring software like munin to see changes over time.
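For reference, a minimal pairing of the two looks something like this (project name, port and worker count are placeholders, not a tuned configuration):

gunicorn --workers 3 --bind 127.0.0.1:8000 myproject.wsgi:application

# nginx server block forwarding to gunicorn
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}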
I am running an alpha version of my app on an EC2 Small instance (1.7 GB RAM) with postgres and apache (mod_wsgi run directly, not as a daemon) on it.

Performance is alright, but it could be better. I am also worried about memory usage if too many test users join.

Is it wise to switch from Apache to nginx? Has any Django developer done that and been happier with the results? Any other tips along the way are also welcome.

Thanks
Django Performance / Memory usage [closed]
The new "absolute-url" package in Meteor 0.4.0 fixed the problem.http://docs.meteor.com/#absoluteurl
I have installed both Apache and Meteor behind Nginx through a reverse proxy (on an Ubuntu server). Apache is mapped directly at the base URL (www.mydomain.com/) and Meteor is mapped as a subfolder (www.mydomain.com/live/).

The problem I encounter is that my Meteor test (which works as expected at port 3000) stops working behind Nginx, since every single reference (CSS, Javascript, template) is absolute to the base URL... Obviously, since Apache is mapped at the base URL, these files are not found when testing through Nginx.

What would be the best way to resolve the problem? System administration is not my forte, and Meteor is my first incursion into server-side javascript. So I don't even know if this can be fixed, and if so, whether it's done through server configuration, Meteor configuration or programmatically.

EDIT: The new "absolute-url" package in Meteor 0.4.0 fixed the problem!
How can I correct the Meteor base-url in a NginX reverse-proxy configuration?
Solved. I had a problem in my nginx conf file that was causing node/express to receive the wrong request header. When a relative path is passed into res.redirect, it pulls the Host from the incoming req object and sets it in the response header.

proxy_set_header Host $proxy_host;

should have been

proxy_set_header Host $host;

$proxy_host is the upstream host address (0.0.0.0:port); $host is the incoming request-header Host (example.com).

UPDATE

As Louis Chatriot points out in the comments, newer versions of Nginx have replaced $host with $http_host, which in previous versions returned example.com:port but now returns example.com.
I have a server configured to host multiple node.js+express apps on multiple domains behind an Nginx frontend. Everything works great, except for when calls to redirect are made from an express route:

res.redirect('/admin');

Then the client browser is redirected to http://0.0.0.0:8090.

It seems like it must be an issue with the redirection headers coming out of express, but just in case it's relevant, here's the nginx.conf file for the domain in question:

server {
    listen 0.0.0.0:80;
    server_name *.example.com;
    access_log /var/log/nginx_example_access.log;
    error_log /var/log/nginx_example_error.log debug;

    # proxy to node
    location / {
        proxy_pass http://0.0.0.0:8090/;
        proxy_redirect off;
        proxy_set_header Host $proxy_host;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Why is my Nginx reverse-proxy node.js+express server redirecting to 0.0.0.0?
The nginx config file location you are trying to use applies to Amazon Linux 1 (AL1).

For AL2, the nginx config files should be provided (see the aws docs) using:

.platform/nginx/conf.d/

or, if you want to overwrite the main nginx config file, use:

.platform/nginx/nginx.conf

Thus, you can try the following file .platform/nginx/conf.d/laravel.conf with content of:

location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;
}

If it does not work, you can inspect the nginx logs in /var/log, or the nginx config folder, to check whether the file was correctly placed in the nginx config folder.

Solution I used

I used this and it worked:

.platform/nginx/conf.d/elasticbeanstalk/laravel.conf
I am deploying my Laravel application to AWS Elastic Beanstalk. I have deployed it. Now I am trying to override the /etc/nginx/conf.d/elasticbeanstalk/php.conf file using the .platform folder.

I created the file .platform/etc/nginx/conf.d/elasticbeanstalk/php.conf right inside the project's root folder. Then I put in the configuration content.

Then I deploy my application by executing the "eb deploy" command. But the Nginx config file is not overridden. What is wrong with my config and how can I get it working?

I tried using .ebextensions too, creating a config file with the following content. The file is just not created.

files:
  /etc/nginx/conf.d/elasticbeanstalk/laravel.conf:
    mode: "000755"
    owner: root
    group: root
    content: |
      location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
      }
AWS Elasticbeanstalk overriding Nginx config using .platform is not working
After long experiments we came to this solution:

location ~ ^/test(?:/(.*))?$ {
    # some directives here
    proxy_pass http://nginx_docker_container_url/$1;
    # some directives here
}

We needed to pass everything after /test to the app; with or without a trailing slash, requests are now handled correctly.
I'm running into the following issue where I would like to be able to access a proxied location (a React/NextJS web app in a hosted docker container) from a home website, both with a trailing slash and without a trailing slash.

Currently, when I hit:

http://my-website.com/test    # this works

But when I hit:

http://my-website.com/test/   # this fails with a 404

I'd like to be able to hit both of these urls. What am I missing?

### Default Server ###
server {
    listen 80;
    root /usr/site;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location ~ /test(.*)$ {
        set $upstream_endpoint http://$docker_container_url;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_pass $upstream_endpoint$1/;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Nginx accept trailing slash or no trailing slash in proxy_pass
For creating a combined image, you could follow either of two paths:

1. Creating a merged Dockerfile with the setup steps for both images, and building your own custom image.
2. Creating a Dockerfile pulling from image 1 (the more "complex" one), and adding the commands needed for image 2.

The second approach is preferred, since you start off from a known-good image instead of starting from scratch. Plus, you may need only minimal changes. For this to work, both images should share a common base image, such as alpine for example.

Examining both the Nginx and Java OpenJDK Dockerfiles, you can see that the Nginx Dockerfile is considerably more complex, with many prerequisite packages and setup steps, so it makes a better fit for the base image. My suggestion is: start from the Nginx base image and add Java on top.

If you're happy with the JDK versions available in the Alpine repositories, your combined Dockerfile may be as simple as:

FROM nginx:alpine
RUN apk add --no-cache openjdk17

If you need a specific Java version which isn't available in the Alpine repositories, it's usually a matter of downloading the zipped Alpine Java distribution, unpacking it and setting JAVA_HOME accordingly. For example, see the OpenJDK 13 Alpine Dockerfile.
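Building and sanity-checking the combined image would then be (the image tag is arbitrary):

docker build -t nginx-java .
docker run --rm nginx-java sh -c 'nginx -v && java -version'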
I need to build a Docker container (feeling like a n00b about it) that runs a Java application fronted by an nginx web server. For reasons not subject to discussion I need to put them into one container.

I'd like to use Alpine for that. I found images for both Alpine with nginx installed and Alpine with a JDK installed. I need to combine both.

What's my best course of action? Start with the nginx container and add a JDK, or start with the JDK-containing container and add nginx?

Or is there an option to combine 2 images (and would that be a good idea)?

Insights are appreciated.
Add Java to an NGINX Docker or add NGINX to a Java Docker on Alpine?
The Path attribute in the NGINX metrics collected by Prometheus derives from the Ingress definition yaml.

For example, if your ingress is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name:
  namespace:
spec:
  rules:
  - host:
    http:
      paths:
      - backend:
          serviceName:
          servicePort:
        path: /

then, although NGINX will match any URL to your service, it'll all be logged under the path "/" (as seen here).

If you want metrics for a specific URL, you'll need to explicitly specify it like this (notice the ordering of rules):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name:
  namespace:
spec:
  rules:
  - host:
    http:
      paths:
      - backend:
          serviceName:
          servicePort:
        path: /more/specific/path
      - backend:
          serviceName:
          servicePort:
        path: /
I'm using ingress-nginx as an Ingress controller for one of my services running on a K8S cluster (I'm using the nginx-0.20.0 release image with no specific metrics configuration in the K8S configmap the ingress controller is using).

The nginx-ingress-controller pods are successfully scraped into my Prometheus server, but all ingress metrics (e.g. nginx_ingress_controller_request_duration_seconds_bucket) show up with path="/" regardless of the real path of the handled request.

Worth noting that when I look at the ingress logs, the path is logged correctly.

How can I get the real path noted in the exported metrics?

Thanks!
How to properly log the "Path" in K8S ingress-nginx metrics
You need something like this:

location /confirm_email {
    proxy_method POST;
    proxy_set_body '{ "challenge": "$arg_challenge" }';
    # your proxy_set_header and other parameters here
    proxy_pass /rpc/verify?;
}
My use case is that I have an email containing a "verify your email address" link. When the user clicks this link, the user agent performs a GET request like:

GET http://widgetwerkz.example.com/confirm_email?challenge=LSXGMRUQMEBO

The server will perform this operation as a POST (because it is a side-effecting operation). I do not have access to the server code at all. The destination request should be:

POST http://widgetwerkz.example.com/rpc/verify
{ "challenge": "LSXGMRUQMEBO" }

What Nginx rewrite can I perform to achieve this?

Edit: solution in context

http {
    server {
        # ...
        location /confirm_email {
            set $temp $arg_challenge;
            proxy_method POST;
            proxy_set_body '{ "challenge": "$temp" }';
            proxy_pass http://127.0.0.1/rpc/verify;
            set $args '';
        }
    }
}

This does all these together:

1. Converts the request from GET to POST
2. Rewrites the location from /confirm_email to /rpc/verify
3. Removes the query string from the request (e.g. the resulting url is simply /rpc/verify, without the ?challenge=LSXGMRUQMEBO)
4. Adds a JSON body of: { "challenge": "LSXGMRUQMEBO" }

Thanks to Ivan for putting me on the right track!
How to rewrite an Nginx GET request into POST?
Looks like you fixed the issue of receiving an invalid certificate by adding an additional rule.

The issue with the redirect looks like it's related to this known issue, and it's not fixed as of this writing. However, there is a workaround described on the same link:

nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'foo.com' ) {
    rewrite ^ https://www.foo.com$request_uri permanent;
  }
Use case

I deployed the nginx ingress controller in my Kubernetes cluster using this helm chart: https://github.com/helm/charts/tree/master/stable/nginx-ingress

I created an ingress resource for my frontend-serving webserver, and it is supposed to redirect from the non-www to the www version. I am using SSL as well.

The problem

When I visit the www version of my website everything is fine, and nginx serves the page using my Let's Encrypt SSL certificate (which exists as a secret in the right namespace). However, when I visit the non-www version of the website I get the failing-SSL-certificate page in my browser (NET::ERR_CERT_AUTHORITY_INVALID), and one can see the page is served using the Kubernetes ingress fake certificate. I assume that's also the reason why the redirect to the www version does not work at all.

This is my ingress resource (actual hostnames have been redacted):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
  creationTimestamp: 2018-10-03T19:34:41Z
  generation: 3
  labels:
    app: nodejs
    chart: nodejs-1.0.1
    heritage: Tiller
    release: example-frontend
  name: example-frontend
  namespace: microservices
  resourceVersion: "5700380"
  selfLink: /apis/extensions/v1beta1/namespaces/microservices/ingresses/example-frontend
  uid: 5f6d6500-c743-11e8-8aaf-42010a8401fa
spec:
  rules:
  - host: www.example.io
    http:
      paths:
      - backend:
          serviceName: example-frontend
          servicePort: http
        path: /
  tls:
  - hosts:
    - example.io
    - www.example.io
    secretName: example-frontend-tls

The question

Why doesn't nginx use the provided certificate on the non-www version as well?
Nginx ingress resource - Redirect from non-www to www (SSL doesn't work)
As you correctly mentioned, post_action is not documented and has always been considered an unofficial directive.

Nginx provides a new "mirror" module since version 1.13.4, described here in the documentation. So I advise you to give it a try. In your case, it would look like this:

location /r/ {
    rewrite /r/(.*)$ http://localhost:3000/sample/route1/$1 redirect;
    mirror /stats;
}

location = /stats {
    internal;
    rewrite /sample/route1/(.*) /stats/$1;
    proxy_pass http://127.0.0.1:3000;
}

This will not work!

I've built a test configuration, and unfortunately this will not work. It works for neither rewrite nor return. But it works for proxy_pass.

Why

The explanation follows. An HTTP request sequentially passes through a few "phases" during processing in Nginx. The thing is that mirror gets triggered in the PRECONTENT phase, which occurs later than the REWRITE phase where rewrite/return end request processing. So mirror does not even get triggered, because its processing would only happen later.

In case of serving files from the location or proxying via proxy_pass (or fastcgi_pass, etc.), the processing will finally reach the PRECONTENT phase and mirror will be executed.

Phases are described in the Nginx documentation here.

Workarounds

I do not see any good solution without trade-offs. You could create an extra location (returning a redirect) and proxy your request from /r/, so that mirror gets triggered. Something like this, depending on the rest of your configuration:

location /r/ {
    # you may need to set Host to match `server_name` to make sure the
    # request will be caught by this `server`.
    # proxy_set_header Host $server_name;
    proxy_pass http://<host>:<port>/redirect/;
    mirror /stats;
}

location = /redirect {
    rewrite /redirect(.*)$ http://localhost:3000/sample/route1$1 redirect;
}

Certainly this is suboptimal and has extra boilerplate.
I am building an application where I need to do some analytics on api-data combination usage. Below is my nginx configuration:

location /r/ {
    rewrite /r/(.*)$ http://localhost:3000/sample/route1/$1 redirect;
    post_action /aftersampleroute1/$1;
}

location /aftersampleroute1/ {
    rewrite /aftersampleroute1/(.*) /stats/$1;
    proxy_pass http://127.0.0.1:3000;
}

The location /r/ is used to redirect the browser request http://localhost:80/r/quwjDP4us to the api /sample/route1/quwjDP4us, which uses the id quwjDP4us to do something. Now, in the background, I want to pass the id quwjDP4us to a stats api /stats/quwjDP4us which updates the db record for that id.

When I start nginx and make the request http://localhost:80/r/quwjDP4us, nginx successfully redirects my request to my application but doesn't make the 2nd request in the background to the stats api. What am I missing?

Note - post_action is not included in the nginx docs; is there an alternate module/directive I can use?
configure nginx to make a background request
It turned out that the following directive (which was defined globally) prevented caching from working:

proxy_buffering off;

When I override it under the location config with proxy_buffering on;, caching starts working.

So, to make caching work with POST requests, we have to do the following:

1. Output the Cache-Control: public, max-age=10 header on the server
2. Add the proxy_cache_path config and the location config in nginx (examples are given in the question text)
3. Make sure that proxy_buffering is on for the location on which we want caching enabled
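Pulling the three points together, the relevant location ends up looking like this (the same directives as in the question below, plus the override):

location /my-url/ {
    proxy_buffering on;  # overrides the global 'proxy_buffering off'
    proxy_cache cache;
    proxy_cache_valid 10s;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-Cached $upstream_cache_status;
    proxy_pass http://my-upstream;
}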
My task is to implement a microcaching strategy using nginx, that is, cache responses of some POST endpoints for a few seconds.

In the http section of nginx.conf I have the following:

    proxy_cache_path /tmp/cache keys_zone=cache:10m levels=1:2 inactive=600s max_size=100m;

Then I have a location in server:

    location /my-url/ {
        root dir;
        client_max_body_size 50k;
        proxy_cache cache;
        proxy_cache_valid 10s;
        proxy_cache_methods POST;
        proxy_cache_key "$request_uri|$request_body";
        proxy_ignore_headers Vary;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://my-upstream;
    }

The application located at my-upstream outputs Cache-Control: max-age=10, which, if I understand correctly, should make the responses cacheable.

But when I make repetitive requests using curl in a short time (less than 10 seconds)

    curl -v --data "a=b&c=d" https://my-host/my-url/1573

all of them reach the backend (according to backend logs). Also, X-Cached is always MISS.

Request and response follow:

    > POST /my-url/1573 HTTP/1.1
    > Host: my-host
    > User-Agent: curl/7.47.0
    > Accept: */*
    > Content-Length: 113
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 113 out of 113 bytes
    < HTTP/1.1 200 OK
    < Server: nginx
    < Date: Tue, 08 May 2018 07:16:10 GMT
    < Content-Type: text/html;charset=utf-8
    < Transfer-Encoding: chunked
    < Connection: keep-alive
    < Keep-Alive: timeout=60
    < Vary: Accept-Encoding
    < X-XSS-Protection: 1
    < X-Content-Type-Options: nosniff
    < Strict-Transport-Security: max-age=31536000
    < Cache-Control: max-age=10
    < Content-Language: en-US
    < X-Cached: MISS

So the caching does not work. What am I doing wrong here? Is there any logging facility in nginx that would allow me to see why it chooses not to cache a response?
POST response caching does not work in nginx
If you want an IP on port 80 from a service, you could use the externalIPs field in the service config yaml. You can find how to write the yaml here: Kubernetes External IP.

But if your use case is really just getting the ingress controller up and running, the service does not need to be exposed externally.
I installed ingress-nginx in a cluster. I tried exposing the service with the type: NodePort option, but this only allows for a port range between 30000-32767 (AFAIK)... I need to expose the service at port 80 for http and 443 for tls, so that I can link A records for the domains directly to the service. Does anyone know how this can be done?

I tried with type: LoadBalancer before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress-controller and expose that directly on 80/443.
How to expose kubernetes nginx-ingress service on public node IP at port 80 / 443?
Finally solved by adding CERT_NAME under the nginx server environment:

    nginx:
      image: nginx:alpine
      restart: always
      environment:
        - VIRTUAL_HOST=docker-reverse-proxy.com
        - VIRTUAL_PROTO=https
        - VIRTUAL_PORT=443
        - CERT_NAME=YOUR_CERT_NAME  ## Add this
I am using https://github.com/jwilder/nginx-proxy for the nginx-proxy setting.

The port 80 redirect is working. That means I can get to my site via non-SSL using test.example.com, but with HTTPS I get a Chrome error of "This webpage is not available ERR_CONNECTION_CLOSED".

Then I found that default.conf from nginx-proxy seems not to listen on port 443:

    upstream docker-reverse-proxy.com {
        ## Can be connected with "test" network
        # test_nginx_1
        server 172.19.0.3:443;
    }
    server {
        server_name docker-reverse-proxy.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
            proxy_pass https://docker-reverse-proxy.com;
        }
    }

Below is my configuration:

1)

    docker run -d -p 80:80 -p 443:443 -v /private/etc/ssl/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro --name nginx-proxy --net=test jwilder/nginx-proxy:alpine

2) docker-compose.yml

    version: '3'
    services:
      php-fpm:
        build: .
        restart: always
        volumes:
          - web:/www
        networks:
          - backend
      nginx:
        image: nginx:alpine
        restart: always
        environment:
          - VIRTUAL_HOST=docker-reverse-proxy.com
          - VIRTUAL_PROTO=https
          - VIRTUAL_PORT=443
        ports:
          - 80
          - 443
        volumes:
          - web:/www:ro
          - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
          - /private/etc/ssl/certs:/etc/nginx/certs
        networks:
          - frontend
          - backend
        depends_on:
          - php-fpm
    volumes:
      web:
    networks:
      backend:
      frontend:
        external:
          name: test
How to listen 443 port in jwilder/nginx-proxy
You need to add the FORCE_SCRIPT_NAME setting to your settings.py as such:

    FORCE_SCRIPT_NAME = '/service'

Mind that there's no trailing slash at the end.
I have to serve a Django app from a subdirectory (hostname/service/). So far I'm able to get to the Admin login prompt (hostname/service/admin/login/?next=/admin/), but after successfully logging in I'm redirected to (hostname/admin/login/) and get a 404.

How can I keep the correct subdirectory and get inside the Admin panel?

Here's the nginx.conf server block:

    server {
        listen 80;
        root /usr/share/nginx/html/;
        charset utf-8;
        location /service {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://djangoapp:8000/;
            proxy_set_header X-Real-IP $remote_addr;
        }
        location /static {
            alias /usr/src/app/static;
        }
    }

Edit: These are the urlpatterns from urls.py:

    from django.conf.urls import url, include
    from django.contrib import admin

    # A custom view of my own
    from app.views import AdvanceCustomSearchView

    urlpatterns = [
        url(r'^search/$', AdvanceCustomSearchView(), name='index_search'),
        url(r'^app/', include('app.urls')),
        url(r'^admin/', admin.site.urls),
    ]
Nginx serving Django in subdirectory - admin login is redirecting
We can make use of the proxy_ssl_name directive in nginx. It allows overriding the hostname against which nginx should verify the certificate of the backend server:

    proxy_ssl_name mybackend-server.hostname.com;
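A sketch of how this could fit into the configuration from the question, so the certificate is checked against the real backend hostname rather than the hash-named upstream block (note that proxy_ssl_name requires nginx 1.7.0 or later):

    location / {
        proxy_pass https://8ba0c0da44ee43ea894987ab01cf4fbc;

        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /u01/data/secure_artifacts/ssl/trusted_certs/trusted-cert.pem;

        # Verify the upstream certificate against the backend's hostname,
        # not against the upstream block name used in proxy_pass:
        proxy_ssl_name slc01etc.us.oracle.com;
    }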
I am trying to implement HTTPS at every layer of a proxying path. My proxying path is from the client to the load balancer (nginx), and then from nginx to the upstream server.

I am facing a problem when the request is proxied from nginx to the upstream server. I am getting the following error in the nginx logs:

    2017/03/26 19:08:39 [error] 76753#0: *140 upstream SSL certificate does not match "8ba0c0da44ee43ea894987ab01cf4fbc" while SSL handshaking to upstream, client: 10.191.200.230, server: abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com, request: "GET /a/a.html HTTP/1.1", upstream: "https://10.240.81.28:8001/a/a.html", host: "abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com:10003"

This is my configuration for the upstream server block:

    upstream 8ba0c0da44ee43ea894987ab01cf4fbc {
        server slc01etc.us.oracle.com:8001 weight=1;
        keepalive 100;
    }

    proxy_pass https://8ba0c0da44ee43ea894987ab01cf4fbc;
    proxy_set_header Host $host:10003;
    proxy_set_header WL-Proxy-SSL true;
    proxy_set_header IS_SSL ssl;
    proxy_ssl_trusted_certificate /u01/data/secure_artifacts/ssl/trusted_certs/trusted-cert.pem;
    proxy_ssl_verify on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

When the request goes from nginx to the upstream server, nginx matches the upstream SSL certificate against the name present in the proxy_pass directive. But my upstream SSL certificate is issued for the upstream server hostname (slc01etc.us.oracle.com).

Is there any way I can force nginx to verify the upstream SSL certificate against the server hostnames provided in the upstream server block, instead of the name present in the proxy_pass directive?
How to force Nginx to verify upstream certificates against the hostnames present in upstream server block?
The official documentation describes how the location tree is traversed:

    Regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used. If no match with a regular expression is found then the configuration of the prefix location remembered earlier is used.

Based on that, the configuration will be as follows (the i18n location must come first so that it wins):

    location ~* /i18n-.*\.js$ {
        access_log off;
        expires off;
    }

    location ~* \.(css|js)$ {
        access_log off;
        expires 1y;
        add_header Cache-Control public;
    }

Notes: A capture group in the regex is redundant unless you use it later as a variable. From the docs:

    A named regular expression capture can be used later as a variable:

    server {
        server_name ~^(www\.)?(?<domain>.+)$;
        location / {
            root /sites/$domain;
        }
    }

As for the ?: syntax for skipping capturing groups: it is only needed when other captures are used later; otherwise you can remove the grouping to simplify the location syntax.
I have caching enabled for specific files in nginx, like this:

    location ~* \.(?:css|js)$ {
        access_log off;
        add_header Cache-Control "no-transform,public,max-age=31536000,s-max-age=31536000";
        expires 1y;
    }

What I'd like to do here is exclude all files matching the pattern i18n-*.js, and as a result, cache all .js files except for the ones starting with i18n.

I tried a negative lookahead to exclude the pattern, but it doesn't work as expected because of the non-capturing group:

    location ~* \.(?!i18n-.*\.js)(?:css|js)$ {
        access_log off;
        add_header Cache-Control "no-transform,public,max-age=31536000,s-max-age=31536000";
        expires 1y;
    }

What's the smart solution here? I'm no regex expert, so a brief explanation would be helpful, too.
Regex to find files matching file extension except if filename contains string
You have the wrong permissions on subdir1. Fix them:

    chmod 755 /home/user/Dropbox/subdir1

or even better (recursively, for every directory in the project):

    find /var/www/example.com -type d -print0 | xargs -0 chmod 755

As for the nginx user, you can set it with the user configuration directive:

    user www-data;

You can use any user with the NGINX server; you just need correct permissions for the folders (755) and files (644) of your project. I prefer a distinct nginx user; it is good practice, but not necessary.

You can create a system nginx user in Ubuntu/Debian like this:

    sudo adduser --system --no-create-home --disabled-login --disabled-password --group nginx
I am running a local testing server on my laptop running Ubuntu 16.10. I was running Apache2, but I've decided to switch over to NginX. Following guides like this one, I think I've got NginX up and running, along with PHP 7.0 fpm.

However, when I load one of my sites, I get a 403 Forbidden error. The NginX error log says:

    [error] 14107#14107: *1 directory index of "/var/www/example.com/" is forbidden, client: 127.0.0.1, server: example.com, request: "GET / HTTP/1.1", host: "example.com"

I understand that all the parent directories in the path should have the right permissions, but I'm unclear on exactly what the correct permissions are. If I understand correctly, under Apache2 the directories were accessible to the user or group www-data, but I'm not sure if that's still true under NginX.

What chmod or chown command should I be using to ensure that the relevant directory has permissions that will make it accessible to NginX?

Note that I am completely replacing Apache2 with NginX, so there is no need to preserve any settings for Apache's sake.

Also, for reference, here is the current directory path and permissions of my site. Note that the directory in /var/www is a symlink to a directory in my Dropbox folder, if that has any impact.

    $ namei -om /var/www/example.com
    f: /var/www/example.com
     drwxr-xr-x root     root     /
     drwxr-xr-x root     root     var
     drwxr-xr-x www-data www-data www
     lrwxrwxrwx www-data www-data example.com -> /home/user/Dropbox/subdir1/subdir2/Site/
     drwxr-xr-x root     root     /
     drwxr-xr-x root     root     home
     drwxr-xr-x user     user     user
     drwx--x--x user     user     subdir1
     drwxrwxr-x user     user     subdir2
     drwxr-xr-x user     user     Web
     drwxr-xr-x user     user     Site
What are the correct permissions for my site that is now served by NginX?
This problem was solved by adding GitHub to the known hosts, according to Benyi's comment:

    ssh-keyscan -t rsa github.com >> /var/www/.ssh/known_hosts

You should set up the ssh key first. After that, you can do whatever git tasks you want.

1) Ssh keys are not user specific, so you can create an rsa key pair anywhere. The public key should be copied to GitHub. The private key should be placed on your host.

2) In a Linux environment, the default .ssh folder path is under the user's home directory. If you do not specify the user's home folder, it should be in /home/www-data/.ssh. If you cannot access this folder, you should specify your ssh key as written in my example.

3) In a Linux environment, deploy.php is run by the user executing the nginx process. Commonly, the apache2 and nginx processes are executed by the www-data user.

4) You should specify your ssh key path, so this key file is sent for authorization when you talk to the GitHub server.
I'm setting up a hook between GitHub and my server, which can auto-pull new commits when the script is triggered by GitHub requests.

Everything is set up: ssh keys, git origin. I can pull a new commit from my private repo hosted on GitHub by running git pull origin master. It works fine in the shell.

But when I write that command into a deploy.php file, it can be triggered by GitHub, but with this error message:

    Host key verification failed.
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights and the repository exists.

After that, I ran the command whoami through the php file; it returns the user www-data.

Actually, I generated a key for the www-data user and put the pair in /var/www/.ssh, also copied id_rsa.pub and pasted it to GitHub, and still have an authentication failure.

Some details: I'm using nginx; all files are set to belong to www-data:www-data; I have added www-data's public key to the repo's deploy keys.

The deploy.php command:

    shell_exec("cd /var/www/html/tinfo/; git pull origin master 2>&1;");

My questions are:

1. How do I create a key for www-data?
2. Is www-data's .ssh directory /var/www/.ssh?
3. If I'm not wrong, why does GitHub refuse my connection? I guess it's related to the user www-data, who executes the deploy.php file and runs commands through PHP. When talking to the GitHub server, does www-data not send its private key to the server?

Thank you so much.
Github authentication failed with user www-data
You'll do best to include the server_name as well, and to end your statements with semicolons:

    server {
        server_name some.server.name;
        listen 8888;
        # or just _ (underscore) to listen to any name
        root /test/index;
        index index.html;
    }
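Note that a server block alone cannot be the whole nginx.conf: nginx requires a top-level events block, and server must live inside http. A minimal complete file would look roughly like this (paths assumed from the question):

    events {}

    http {
        # include mime.types;  # optional, for correct Content-Type headers

        server {
            listen 8888;
            root /test/index;
            index index.html;
        }
    }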
I am trying to rewrite a very simple nginx.conf file. The only purpose of this project is to have nginx serve a static index.html on localhost.

Since all of the documentation and tutorials online have minimum 50-line configurations, I'm wondering if my 7-line configuration will work and accomplish what I need:

    }
    server {
        root /test/index
        listen 8888;
    }
    }

Usually I just use the default nginx.conf and make modifications for whatever project I'm working on, so I'm not sure if I can strip out as much as I did here and have it still function?
How to specify a directory in nginx.conf to serve index.html from
According to your config for project2:

    location /project2 {
        root /var/www/project2;
        index index.html;
    }

nginx will be looking for files under the path /var/www/project2/project2/ for your requests to project2. So if your project2 is under /var/www/project2, the correct config should be:

    location /project2 {
        root /var/www;
        index index.html;
    }

Another alternative is to use alias instead of root; in your case that is alias /var/www/project2. Check here.
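For completeness, a sketch of the alias variant. Unlike root, the matched location prefix is dropped before the path is appended, so keep the trailing slashes consistent:

    location /project2/ {
        alias /var/www/project2/;
        index index.html;
    }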
I have some static html files under /var/www/project1. The nginx config for this project is:

    server_name www.project1.com project1.com;
    root /var/www/project1;

    location / {
        index index.html;
    }

My goal is to use nginx so that when a user enters this url:

    www.project1.com/project2

nginx uses another root. I have tried:

    location /project2 {
        root /var/www/project2;
        index index.html;
    }

But this is not working. Any idea on how to achieve this?
Nginx - another root for a specific location
You need to set up the domain which is sending the CSRF cookie. Try setting CSRF_COOKIE_DOMAIN to ".domain.co.uk" and CSRF_COOKIE_SECURE to True in your settings.

Relevant documentation: https://docs.djangoproject.com/en/4.1/ref/csrf/#how-it-works
Background

I'm trying to configure my Django app to work with SSL provided by Cloudflare. I have about the same setup as this answer and have followed the same solution.

Issue

This has been killing me for weeks (please help!) as I am not a networking/security guy and just need a solution that will avoid me gouging my eyes out but keep the site secure.

I am currently getting a CSRF issue where https://www.domain.co.uk does not match https://domain.co.uk.

Config

settings.py:

    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

    MIDDLEWARE_CLASSES = (
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.middleware.common.CommonMiddleware',
        'django.middleware.csrf.CsrfViewMiddleware',
        'django.contrib.auth.middleware.AuthenticationMiddleware',
        'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
        'django.contrib.messages.middleware.MessageMiddleware',
        'django.middleware.clickjacking.XFrameOptionsMiddleware',
    )

    USE_X_FORWARDED_HOST = True

nginx:

    server {
        listen 80 default_server;
        server_name domain.co.uk www.domain.co.uk;
        access_log off;

        location /static/ {
            alias /static/;
        }

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Real-IP $remote_addr;
            add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            proxy_set_header X-Scheme $scheme;
            proxy_set_header X-Forwarded-Protocol $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

Cloudflare DNS:

    A domain.co.uk points to Automatic
    CNAME www is an alias of domain.co.uk Automatic

Bonus

In addition, I also have the .com for the domain and would like to know how best to set this up so that it is also SSL.
CSRF django nginx with ssl from cloudflare
This problem seems similar to the "Nginx as reverse Proxy, remove X-Frame-Options header" thread on the Nginx mailing list. The solution there was proxy_hide_header:

    By default, nginx does not pass the header fields "Date", "Server", "X-Pad", and "X-Accel-..." from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed. If, on the contrary, the passing of fields needs to be permitted, the proxy_pass_header directive can be used.
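Applied to the configuration from the question, a sketch could look like this: the upstream CSP header is dropped first, then site B's own policy is added:

    server {
        server_name proxy-domain.com;

        location / {
            proxy_pass http://www.target-site.com/;
            proxy_set_header Accept-Encoding "";

            # Drop site A's CSP header before adding our own, otherwise the
            # client sees both policies and enforces the stricter one:
            proxy_hide_header Content-Security-Policy;
            add_header Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' *.cloudflare.com";
        }
    }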
Is there a way to override the Content-Security-Policy set by domain/site A while I am using nginx proxy_pass on site B?

Site A defines a Content-Security-Policy on their domain. Site B acts as a reverse proxy for site A. How can I override the Content-Security-Policy while serving content from site B? How can I achieve this with nginx proxy_pass?

My current nginx server block looks like this:

    server {
        server_name proxy-domain.com.;
        location / {
            proxy_pass http://www.target-site.com/;
            proxy_set_header Accept-Encoding "";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

I have tried adding add_header Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval', e.g.:

    server {
        server_name proxy-domain.com.;
        location / {
            proxy_pass http://www.target-site.com/;
            proxy_set_header Accept-Encoding "";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        add_header Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval'
    }

But if I check the headers of site B, it shows the modified Content-Security-Policy of site B, yet the content from other sources does not get loaded; only the headers are set. Why is that?

Update: when I check the headers I get two Content-Security-Policy headers: first the one set by site A, and then the Content-Security-Policy header set by me, i.e. site B. E.g.:

    Content-Security-Policy: default-src 'self' 'unsafe-inline' 'unsafe-eval' apis.google.com www.google.com;
    Content-Security-Policy: default-src 'self' 'unsafe-inline' 'unsafe-eval' *.cloudflare.com;
How to Override Content-Security-Policy of Site A while using nginx proxy_pass on Site B for serving content?
Your first match group is the file name without the extension, while you're passing it to the last fallback URL where an extension is expected.

Also, there's no point in escaping forward slashes; they have no special meaning here.

    server {
        listen 80;
        server_name localhost;

        root /var/www/localhost/www;

        location ~* ^/sys/assets/(.+)\.css$ {
            try_files $uri /sys/assets/stylesheets/$1.css;
        }
    }
Been trying this for several hours now, but I am having a hard time figuring it out.

    location ~* ^\/sys\/assets\/(.*).css$ {
        try_files $uri $uri/ /sys/assets/stylesheets/$1;
    }

I am basically trying to make css files called from /sys/assets/file.css fall back to /sys/assets/stylesheets/file.css.
Nginx location regex not matching
The problem seems to be

    proxy_pass http://your-logstash-host;

If you look at the logs in your LogStash web UI, you'll see "WARN -- : attack prevented by Rack::Protection::JsonCsrf".

There's some built-in security, provided by rack-protection, to prevent cross-origin resource sharing attacks. The problem is that the proxy_pass from Nginx looks like a CORS attack to Ruby's rack-protection.

EDIT:

As previously stated, the module Rack::Protection::JsonCsrf is the one throwing this warning. I have opened the code, and we can clearly see what's going on:

    def has_vector?(request, headers)
      return false if request.xhr?
      return false unless headers['Content-Type'].to_s.split(';', 2).first =~ /^\s*application\/json\s*$/
      origin(request.env).nil? and referrer(request.env) != request.host
    end

So here's the nginx config required to pass the requests so that Sinatra will accept them:

    server {
        listen 80;
        server_name logstash.frontend.domain.org;

        location / {
            # Proxying all requests from logstash.frontend to logstash.backend
            proxy_pass http://logstash.backend.domain.org:9292;
            proxy_set_header X-Real-IP $remote_addr;

            # Set Referer and Host to prevent CSRF panic by Sinatra
            proxy_set_header Referer my-host-04;
            proxy_set_header Host my-host-04.domain.org;

            # Alternatively to setting the Referer and Host, you could set X-Requested-With
            #proxy_set_header X-Requested-With XMLHttpRequest;
        }
    }
I am trying to proxy requests from nginx to Kibana (logstash). I can access the Kibana dashboard on port 9292, so I can confirm that a service is listening on port 9292. I can successfully proxy from nginx to other services, but the proxy directive for Kibana (port 9292) does not work; I can proxy to 9200 for elasticsearch. Any ideas on how to troubleshoot this further would be appreciated.

Update: I have tried changing the server setup in upstream to point to 0.0.0.0 as well as the server address, but neither option works. The request gets routed to the default server.

Another update: I have noticed that removing the proxy parameters from the nginx default file allows me to forward the request to the Kibana listening port. However, Kibana then complains about a missing "dashboards/default.json", which I am guessing is due to some missing or misconfigured setup in nginx.

default (/etc/nginx/sites-available):

    upstream logstash {
        server 127.0.0.1:9292;  ##kibana
        keepalive 100;
    }

    server {
        listen 84;
        listen [::]:84 ipv6only=on;

        root /var/www/;
        index index.html index.htm;
        server_name logstash;

        ##logging per server
        access_log /var/log/nginx/logstash/access.log;
        error_log /var/log/nginx/logstash/error.log;

        location / {
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://logstash;
        }
    }
unable to proxy from nginx to kibana
You can think of each pool as a separate PHP instance. For example, I use pools to run each site as a different user and to give each one appropriate resource limits, for separate websites running on the same server.

I don't understand, though, why you have 3 pools for the same site. Do you use an upstream in nginx?

As for max_children: it is the number of child processes FPM is allowed to spawn to handle concurrent connections. If you have a lot of concurrent connections, then you should increase that number. If the limit is reached, FPM won't spawn another child, and the waiting request has to wait for a child to become free.

EDIT: Try playing with this config option, it might be useful. Here's a snippet from the config file; by default it's commented out:

    ; The number of requests each child process should execute before respawning.
    ; This can be useful to work around memory leaks in 3rd party libraries. For
    ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
    ; Default Value: 0
    ;pm.max_requests = 500

Here's also another one:

    ; The timeout for serving a single request after which the worker process will
    ; be killed. This option should be used when the 'max_execution_time' ini option
    ; does not stop script execution for some reason. A value of '0' means 'off'.
    ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
    ; Default Value: 0
    ;request_terminate_timeout = 0
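If the three pools are meant to share the load for one site, the usual pattern is an nginx upstream listing one socket per pool. A sketch, where the socket paths are assumptions; adjust them to match your pool definitions:

    upstream php_pools {
        # One entry per PHP-FPM pool; paths are hypothetical examples.
        server unix:/var/run/php-fpm-www1.sock;
        server unix:/var/run/php-fpm-www2.sock;
        server unix:/var/run/php-fpm-www3.sock;
    }

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass php_pools;
        }
    }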
I have a Drupal site with NGINX and PHP-FPM with 3 pools.

What I want to know is: what are FPM pools? Or just point me to good documentation. I've searched this topic, but all I found is how to configure "X" to obtain better performance.

Also, what is pm.max_children? I recently noticed in the log that when pool www1 reaches this value, the pool www1 stops working, also locking a page on my site until I reload PHP-FPM. Why am I reaching pm.max_children after some time? Is there a way to detect and react to this event, e.g. by reloading PHP-FPM? Is there a way to avoid reaching pm.max_children?

Thanks all for your knowledge.

PS: I'm using perusio's configuration for Drupal and Nginx.
What are PHP-FPM pools and what is pm.max_children?
You could give siege a try as well. The article you've linked looks good to me.

Generating 60,000 rq/s and answering them at the same time will be a problem, because you will most definitely run out of resources. It would be best to have some other computers (maybe on the same network) generate the requests and let your server only handle answering them.

Here's an example siege configuration for your desired 60,000 rq/s that will hit your server for one minute:

    # ~/.siegerc
    logfile         = $(HOME)/siege.log
    verbose         = true
    csv             = true
    logging         = true
    protocol        = HTTP/1.1
    chunked         = true
    cache           = false
    accept-encoding = gzip
    benchmark       = true
    concurrent      = 60000
    connection      = close
    delay           = 1
    internet        = false
    show-logfile    = true
    time            = 1M
    zero-data-ok    = false

If you don't have the infrastructure to generate the load, rent it. A very good service is Blitz.IO (I'm not affiliated with them). They have an easy and intuitive interface and (most important) they can generate nearly any traffic for you.
I want to see how far my nginx + node.js setup can go and what changes I can make to squeeze out extra performance. I've stumbled on a great article detailing some tuning that can be done to the OS to withstand more requests (which I'm not sure I completely understand).

Say I want to see how it handles 60,000 requests per second for a duration of time.

I've tried apachebench and beeswithmachineguns. apachebench seems to be limited locally to about 3500 requests or something; raising the concurrency only serves to decrease the average req/s somehow. I was able to see (claimed) ~5000 requests per second to a test page with beeswithmachineguns, but that's still nowhere close to what I want. It seems to be a bit on the buggy side, however.

Is there a reliable way to simulate a huge amount of requests like this?
How to simulate a huge amount of simultaneous requests to a web-server?
I have some ideas for debugging.

Do a var_dump(file_get_contents('php://input')); instead of an echo. According to the reference:

    This function may return Boolean FALSE, but may also return a non-Boolean value which evaluates to FALSE. Please read the section on Booleans for more information. Use the === operator for testing the return value of this function.

If you get bool(false) as output, there's something wrong which makes you unable to read php://input, most likely a PHP issue. If you get string(0) "", there's just nothing in php://input (anymore?), which makes it more likely that this is an nginx issue.

Also, according to the php:// wrapper reference, you cannot use php://input with enctype="multipart/form-data". Are you sure you don't use that one? You could also try an HTML form if that's more familiar.

You can also check the error logs, /var/log/nginx/error.log by default. Also, check the HTTP response code. Is it 200? If not, is it a helpful code?
I'm trying to upload a file using HTTP PUT.

After reading a bit, it seems the $_FILES array is only populated with POST and multipart/form-data, while with PUT I'd need to manually read php://input to get the data. Both methods don't work.

I tried the following options and would appreciate any tips you might have:

    curl --upload avatar.jpg http://api.test.com/user/dsadasdsa
    curl -X PUT -F "[email protected]" http://api.test.com/user/dsadasdsa

My PHP file is trying to print this but returns an empty string:

    echo file_get_contents("php://input");

I started to think this might be an Nginx issue missing PUT/DELETE support, so I installed nginx-extras and added the following to my nginx config, but this doesn't help either:

    root /var/www/;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    dav_access group:rw all:r;
Curl PUT request with file upload to PHP
You can set a cookie by emitting a Set-Cookie header yourself, before the blank line that ends the HTTP headers:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char** argv)
    {
        int count = 0;
        printf("Content-type: text/html\r\n"
               "Set-Cookie: name=value\r\n"
               "\r\n"
               "<title>CGI Hello!</title>"
               "<h1>CGI Hello!</h1>"
               "Request number %d running on host %s\n",
               ++count, getenv("SERVER_NAME"));
        return 0;
    }
I'm creating a website in C++ using FastCGI on nginx. My problem now is tracking a user (aka a session). I can read HTTP_COOKIE out, but I have no clue how to create a new cookie with a name and a value and send it to the client.

Looking it up on Google, I only found relevant material for PHP, Python and other scripting languages that run with CGI/FastCGI.
How to create a cookie with FastCGI (nginx) in C++
Here's a way to rewrite via Lua:

    location / {
        rewrite_by_lua '
            if string.find(ngx.var.host, "_") then
                local newHost, n = ngx.re.gsub(ngx.var.host, "_", "-")
                ngx.redirect(ngx.var.scheme .. "://" .. newHost .. ngx.var.uri)
            end
        ';

        proxy_pass http://my_backend;
        proxy_set_header Host $host;
    }
I've got URLs coming in that look like this:

    https://some_sub_domain.whatever.com

That need to be redirected to:

    https://some-sub-domain.whatever.com

I don't know what the subdomains will be (they're usernames). While I need to replace underscores for the subdomain, I need to leave other underscores intact:

    https://some_sub_domain.whatever.com/hey_there_underscore

should redirect to:

    https://some-sub-domain.whatever.com/hey_there_underscore
Replace underscores in unknown subdomains with dashes in an nginx redirect
You need to set up send_timeout, because it specifies the response timeout to the client:

    send_timeout 300;

I think this is the case because send_timeout applies to client-read operations, which is exactly what you are trying to do.
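A sketch of a PHP location with all the relevant timeouts raised to five minutes. The fastcgi_pass address is an assumption, and fastcgi_read_timeout is usually the one that actually fires on slow backends:

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # adjust to your PHP-FPM socket/port
        include fastcgi_params;

        # Allow slow XML imports up to 5 minutes at every stage:
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        send_timeout 300;
    }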
Possible Duplicate: How do I prevent a Gateway Timeout with Nginx

I'm using an existing SOAP API for importing data via XML. Sometimes, when the XML is too large, I get a 504 gateway timeout after 60 seconds.

I've tried to set fastcgi_read_timeout to 300 in the nginx.conf, but it doesn't work. I've changed max_execution_time to 3600.

Does somebody have an idea how I can change the timeout?
Nginx 504 gateway timeout after 60 seconds [duplicate]
Apache Mod SSL:

    $ openssl genrsa -des3 -out <private key file name>.key 2048

Apache-SSL:

    $ openssl genrsa -des3 -out www.yourdomain-example.com.key 2048

These two are obviously the exact same command, with a different way of writing the example name. They just generate the key pair; you'd need an additional req command to generate a CSR too.

genrsa generates a key pair, and req generates a CSR. However, req can perform both operations at once when using -newkey.

See the OpenSSL req example documentation:

    Create a private key and then generate a certificate request from it:

    openssl genrsa -out key.pem 1024
    openssl req -new -key key.pem -out req.pem

    The same but just using req:

    openssl req -newkey rsa:1024 -keyout key.pem -out req.pem
I want to generate the CSR file for requesting an SSL (wildcard) certificate. This certificate and private key will be used on multiple machines with both Apache and Nginx.

RapidSSL states the following commands for the different setups:

Nginx:

    $ openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

Apache Mod SSL:

    $ openssl genrsa -des3 -out <private key file name>.key 2048

Apache-SSL:

    $ openssl genrsa -des3 -out www.yourdomain-example.com.key 2048

Is there a way to generate a CSR that works with both Apache and Nginx?
How to generate CSR for SSL that works with Nginx & Apache?
Do not use F5 to reload the page; a refresh makes the browser revalidate even fresh resources. Navigate instead: click in the URL bar and press Enter, or follow a link. That's how I got only one request.
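If you also want reloads to skip the conditional request in browsers that support it, one option is the immutable extension to Cache-Control. A sketch; note this adds a second Cache-Control header next to the one that expires generates, and browser support varies:

    location ~* \.(js|css)$ {
        expires max;
        # Supporting browsers will not revalidate this resource, even on reload:
        add_header Cache-Control "public, immutable";
    }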
Right now I'm using this:

    location ~* \.(js|css)$ { # |png|jpg|jpeg|gif|ico
        expires max;
        #log_not_found off; # what's this for?
    }

And this is what I see in Firebug:

(screenshot of Firebug's network panel omitted; the repeat request returns "304 Not Modified")

Did it work? If I didn't get it wrong, my browser is asking for the file again, and nginx is answering 'not modified', so my browser uses the cache. But I thought the browser shouldn't even ask for the file; it already knows it will never expire.

Any thoughts?
How to set nginx cache headers to never expire?
Yes, it's valid. There's no difference between a UNIX socket and a TCP/IP socket in terms of HTTP keepalive.
Nginx 1.1.4+ can keep upstream connections alive with the HTTP/1.1 keepalive directive, see the official documentation (it's not the same as keepalive for clients' connections). So the Unicorn configuration can look like this:

    upstream unicorn {
        server unix:/tmp/unicorn.todo.sock fail_timeout=0;
        keepalive 4;
    }

    server {
        try_files $uri/index.html $uri @unicorn;
        keepalive_timeout 70;

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://unicorn;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }

These settings are required for the HTTP keepalive connection: proxy_http_version 1.1 and proxy_set_header Connection "".

So the question is: is this configuration valid, or is a socket connection permanent by itself?
Keepalived upstream connection to Unicorn via socket
This:

    RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,QSA,L]

will be converted to:

    rewrite ^/(.+)/$ http://$http_host/$1 permanent;

and this:

    RewriteRule !^(media/(.+)|favicon.ico|robots.txt|sitemap.xml|sitemap-main.xml)$ index.php

will be converted to:

    rewrite /!^(media/(.+)|favicon.ico|robots.txt|sitemap.xml|sitemap-main.xml)$ /index.php;

You can also use:

    if ($rule_0 = ""){
        rewrite ^/(.+)/$ http://$http_host/$1 permanent;
    }

    if ($rule_0 = ""){
        rewrite /!^(media/(.+)|favicon.ico|robots.txt|sitemap.xml|sitemap-main.xml)$ /index.php;
    }

Documentation: http://wiki.nginx.org/HttpRewriteModule

From: http://www.anilcetin.com/convert-apache-htaccess-to-nginx/
I've managed to convert most of them, but I'm struggling a bit with these two:

    RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,QSA,L]
    RewriteRule !^(media/(.+)|favicon.ico|robots.txt|sitemap.xml|sitemap-main.xml)$ index.php

Would appreciate a hand if anyone is an nginx rewrite ninja :)
Need help converting Apache2 Rewrite rules to nginx
I have gathered up answers to all parts of this question.

Base nginx does not support chunked requests (as Alexander confirmed!). Nginx can support chunked requests by using the NginxHttpChunkinModule (as my question mentions). Better: this module graduated from beta status to production quality more than 18 months ago. Best: I spoke with some members of the CloudFoundry engineering team at a recent meetup; they confirm that it's planned to add this module to their version of nginx. Problem solved. (Well, it's fully solved in the long term, but we don't have an exact date for when to expect this.)

Therefore a short-term solution would be nice as well. I found one.

Answering my question directed to Alexander: it's not possible to send Content-Length with chunked messages. That's really the point of chunked messages: you start sending them before you have the full content, so you cannot possibly know the length yet. So his idea to avoid chunked requests is correct. But to be more practical I would say, "Use HTTP/1.0 rather than HTTP/1.1." This has the effect of not sending chunked messages. We were able to patch our client temporarily to test this idea out. It worked. But we do not plan to roll out a public patch; it seems counterproductive to make everyone use a ten-year-old protocol (and a ten-year-old unsupported client library!) to solve the problem for this one situation.

Instead, I'll use the hacked client when needed, I'll email out if others find a need for it, and we'll wait for CloudFoundry to update to HttpChunkin and HTTP/1.1.
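For readers who do control their nginx configuration (unlike the CloudFoundry case above), the chunkin module's documented setup is roughly the following sketch; nginx must be built with the module for these directives to exist:

    server {
        # Provided by the third-party chunkin module, not stock nginx:
        chunkin on;

        # Stock nginx rejects chunked requests with 411; hand those
        # over to the module so the request body can be assembled:
        error_page 411 = @my_411_error;
        location @my_411_error {
            chunkin_resume;
        }
    }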
It seems that nginx does not support chunked requests well, but I'm trying to get a more definitive (and current) answer. I have a client making a SOAP request to a server from a Java client which sets the header Transfer-Encoding: chunked. All works well when I connect directly to my application on Tomcat. But when I put nginx between them, things break.

To add a few details: I'm working with CloudFoundry. I'm using the Micro Cloud Foundry to confirm that things work as expected in the absence of nginx. But my requirement is to use cloudfoundry.com, so I don't have the ability to bypass nginx there.

This question and answer says that this is perhaps my only workaround: http://wiki.nginx.org/NginxHttpChunkinModule. But that workaround isn't available, since I cannot modify the configuration on cloudfoundry.com.

This question looks similar too, but it actually covers the reverse of this requirement: it covers chunked responses rather than chunked requests.

So how about changes on the client to work around this? Is it possible to send both Transfer-Encoding: chunked and Content-Length: 123 as headers? This area is new to me, but it seems from projects like Apache HttpComponents that one would set either the length or chunking, but not both. The point of chunking is that you don't need to know the length when the request starts. Could I tell my client to use HTTP/1.0 and play nice with nginx without chunking? Are there other workaround ideas that I'm forgetting?
How to make a chunked request via nginx
Based on what I have read here: http://wiki.nginx.org/X-accel, you need to turn off X-Accel-Buffering. Here is some example code:

    public ActionResult Stream(string id)
    {
        Response.ContentType = "text/event-stream";
        Response.Buffer = false;
        Response.BufferOutput = false;
        Response.Headers["X-Accel-Buffering"] = "no";
        return View();
    }

Hopefully the code above fixes your problem.
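The same thing can also be done on the nginx side instead of per response. A sketch of a FastCGI location with buffering disabled, assuming a backend on port 9000 and an nginx version new enough to have fastcgi_buffering (1.5.6+):

    location / {
        fastcgi_pass 127.0.0.1:9000;   # assumed FastCGI backend address
        include fastcgi_params;

        # Stream responses straight through instead of buffering them,
        # which is what "X-Accel-Buffering: no" requests per response:
        fastcgi_buffering off;
        gzip off;
    }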
I am trying to set up an event stream using MVC.NET, Nginx and FastCGI. The streaming works fine for me using xsp4, but I have not been able to get it to work through Nginx and FastCGI. My goal is to open an EventSource stream and stream data down to my website.

I have tried adding the ngx_http_upstream_keepalive module, http://wiki.nginx.org/HttpUpstreamKeepaliveModule, which is funny because the module description says "Note - this will not work with HTTP upstreams". But wait, isn't that the name of the module? Anyway, maybe I'm confused here. I have also tried adding proxy_buffering off to my nginx.conf, which also hasn't helped.

I understand this should be fairly easy to do, but I am at a loss. Is there some property I can add to my nginx.conf to make this work? Or is there something to add to the Response in .NET?

Please help me, StackOverflow!
Trying to stream using eventsource through nginx/fastcgi
To remove the version from the nginx Server: header, you can use the server_tokens off directive.

For the other headers, try using the Headers More nginx module:

    more_set_headers 'Server: anon';  # replace the default 'nginx + Passenger'
    more_set_headers 'X-Powered-By';  # clear header entirely
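Putting both pieces together, a sketch of the http-level configuration. server_tokens is built in, while more_set_headers/more_clear_headers require nginx built with the headers-more module:

    http {
        # Strip the version number ("nginx/1.0.0") from the Server header:
        server_tokens off;

        # headers-more module: rewrite or remove headers entirely.
        more_set_headers 'Server: anon';
        more_clear_headers 'X-Powered-By' 'X-Runtime';
    }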
I am trying to hide these headers on the production server, but without success:

    X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
    X-Runtime: 0.021429
    Server: nginx/1.0.0 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)

Using:

- Rails 3.0.9
- Passenger 3.0.7
- Nginx 1.0.0

Any ideas?
Hide Headers in Passenger/Nginx Server