Response | Instruction | Prompt
---|---|---|
I ran into this today. It seems the issue is due to nginx (like most servers) not letting you POST to a static file. The solution is to capture 405 errors in your @503 location block, serving the maintenance page. In addition, you will have to enable recursive_error_pages, since you are first, intentionally, throwing a 503 error, and then the user is throwing a 405 by posting to your static file:
recursive_error_pages on;
if (-f $document_root/system/maintenance.html) {
return 503;
}
error_page 404 /404.html;
error_page 500 502 504 /500.html;
error_page 503 @503;
location @503 {
error_page 405 = /system/maintenance.html;
# Serve static assets if found.
if (-f $request_filename) {
break;
}
rewrite ^(.*)$ /system/maintenance.html break;
}
Source: https://www.onehub.com/blog/2009/03/06/rails-maintenance-pages-done-right/ | I have a simple configuration file that is used to serve a custom 503 error page during maintenance. The relevant part is this:
server {
listen 80 default;
root /usr/share/nginx/html;
server_name example.com;
location / {
if (-f $document_root/503.json) {
return 503;
}
}
# error 503 redirect to 503.json
error_page 503 @maintenance;
location @maintenance {
rewrite ^(.*)$ /503.json break;
}
}
The problem is that Nginx figures out that any request resolves to a static file, and any POST, PUT, and DELETE requests get a 405 (Method Not Allowed) response. So the question is: how do I tell Nginx to serve my page for any HTTP method? | Return 503 for POST request in Nginx |
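Applying the accepted fix to the question's own configuration, a minimal untested sketch: catch the 405 raised by the POST inside the @maintenance block and turn recursive error pages on so the second error can still be handled.
recursive_error_pages on;
server {
    listen 80 default;
    root /usr/share/nginx/html;
    server_name example.com;
    location / {
        if (-f $document_root/503.json) {
            return 503;
        }
    }
    error_page 503 @maintenance;
    location @maintenance {
        # A POST/PUT/DELETE against the static 503.json raises a 405; map it back.
        error_page 405 = /503.json;
        rewrite ^(.*)$ /503.json break;
    }
}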
Yes. See http://nginx.org/r/map:
map $http_referer $proxied {
default example.com;
"~*(?<=url=)(?[\w-.]*)(?=/)" $p;
} | Is there an alternative to using if to extract a value from a variable in Nginx config files? I.e.
if ($http_referer ~* (?<=url=)([\w-.]*)(?=/) ){
set $proxied $1;
rewrite (?<=/)(.+\.(css|jpg|png|gif|js)) http://$proxied/$1 redirect;
}
Thanks | Nginx extract a value from a variable or any string |
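For context: the map assigns the named capture $p to $proxied whenever the referer matches, and example.com otherwise. A hedged sketch combining it with the question's original rewrite (the map block must sit at http level, outside any server block):
map $http_referer $proxied {
    default example.com;
    "~*(?<=url=)(?<p>[\w-.]*)(?=/)" $p;
}
server {
    location / {
        rewrite (?<=/)(.+\.(css|jpg|png|gif|js)) http://$proxied/$1 redirect;
    }
}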
If you are serving static files or using any of nginx's reverse proxy features, you can use nginx. But if not, since your servers are behind a load balancer, nginx isn't necessary at all. The rule of thumb is one node.js/express.js process per core. Have a look at cluster to help you manage this. Make sure your load balancer knows about all the node.js processes you are running (and is not just load balancing between one IP/port pair on each server). Update: Node.js now has cluster built in out of the box. Also, if you are deploying on Ubuntu you can use upstart instead of forever if you like. | I am new to expressjs and I want to deploy an expressjs app to production. Based on my googling, here's the setup on Rackspace I am thinking of: 1 load balancer + 2 servers + run the app with forever. My questions are: What engine shall I use to run the app? nginx? How many apps can I run per server? Thank you. | Expressjs to production |
Yes; it's safe to use CherryPy on its own. | I've been working on a python web app using cherrypy and read it'd be more "robust" to use it as a backend, so I gave it a try. Put shortly, running some benchmarks on a page doing some database operations and serving static & dynamic content has shown that plain cherrypy was twice as fast as nginx and memcached, and about half faster than lighttpd. I heard the latter had memory leak issues, so I refrained from using it. And yes, both nginx and lighttpd were configured to serve the static content. I didn't want to try out apache since I'll be deploying it on a relatively "small" VPS. So, considering that I won't be deploying it on a distributed system for a while, is it safe to use cherrypy on its own? And when I do deploy it on such a system, which frontend performs the best? | Cherrypy : Do I really need to put it behind a frontend? |
The final answer, much aided by @MSalters, was more complicated than I could imagine. The reason is that NGINX works differently with variables than with statically entered hostnames; it does not even use the same DNS mechanism. The main issue is that path handling and prefix stripping do not work the same with variables. You have to strip path prefixes yourself. In my original example:
location /foo/ {
set $FOO 127.0.0.1;
rewrite /foo/(.*) /$1 break;
proxy_pass http://$FOO/$1$is_args$args;
}
In my example I use an IP address, so no resolver is required. However, if you use a host name, a resolver is required, so add your DNS IP there. Shrugs. For full disclosure, we are using NGINX inside Kubernetes, so it gets even more complicated. The special points of interest are: Add a resolver directive with the IP of the cluster's DNS service (in my case 10.43.0.10); this is the ClusterIP of the kube-dns service in the kube-system namespace. Use an FQDN even if your NGINX is in the same namespace, since the DNS can apparently only resolve FQDNs.
location /foo/ {
set $MYSERVICE myservice.mynamespace.svc.cluster.local;
rewrite /foo/(.*) /$1 break;
proxy_pass http://$MYSERVICE/$1$is_args$args;
resolver 10.43.0.10 valid=10s;
}
NOTE: Due to a BUG (which is unfortunately not acknowledged by NGINX maintainers) in NGINX, using $1 in URLs will break if the path contains a space. So /foo%20bar/ will be passed upstream as /foo bar/ and just break. | Why does a variable not work in proxy_pass? This works perfectly:
location /foo/ {
proxy_pass http://127.0.0.1/;
}
This doesn't work at all:
location /foo/ {
set $FOO http://127.0.0.1/;
proxy_pass $FOO;
add_header x-debug $FOO;
}
I see the x-header: http://127.0.0.1/ but the result is 404, so I don't know where it's proxying to, but it's not identical to the first example. Source, where it is explained that using a variable in proxy_pass will prevent NGINX startup errors when the upstream is not available. UPDATE: The issue is the upstream path rewriting. I expect it to rewrite /foo/blah to the upstream at /blah, removing the /foo prefix. It works fine with static host/uri entries but not with a variable. | Why does a variable not work in NGINX `proxy_pass`? |
I was able to resolve this by deleting my App Runner app (this is currently the only way to change the configuration; see this issue), then creating a new one and specifying the health check to ping port 80. | I'm creating my first app on AWS App Runner. I have a simple nginx Docker image that works locally by serving html on localhost:8080. When I try to deploy it, the result is "Create Failed". Upon digging into the CloudWatch logs, I see that the health check failed. The health check is configured to ping the root of the service "/" at port 8080. | AWS App Runner "Create Failed" on health check |
I've found that it works fine in practice as well, although at least one user has had an issue with it. This line:
proxy_set_header Upgrade $http_upgrade;
is actually doing what you want, because $http_upgrade comes from the header sent by the client. So if the client doesn't request an upgrade, it doesn't get passed along. For some reason there doesn't seem to be an equivalent $http_connection variable, but you can create one with map, which is the official solution:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
...
location /chat/ {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
} | I have an Nginx server block that proxies requests to a node.js server. This server serves both HTTP content and WS (websocket) content. Is it okay to add upgrade headers on requests that should NOT upgrade to websocket connections? I.e., using Nginx to proxy to a Node.js server that serves HTTP and WS, would it be good practice to use separate server blocks? This is my Nginx server block currently:
server {
listen 443 ssl;
listen [::]:443;
server_name api.mysite.com;
ssl_certificate ...;
ssl_certificate_key ...;
ssl_dhparam ...;
ssl_protocols ...;
ssl_prefer_server_ciphers on;
ssl_ciphers ...;
location / {
proxy_pass http://localhost:5000;
proxy_buffering off;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Access-Control-Allow-Origin *;
}
}
It looks like I'm always adding Upgrade and Connection headers to requests being proxied to the Node.js server, even if I don't want to upgrade:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
The first line looks like it's setting the "Upgrade" header to $http_upgrade, passed from the request. I assume that if this header is NOT passed in the request then "Upgrade" will be set to null (or equivalent), which will have no effect. Is that correct? | What happens if I include "Upgrade" and "Connection" headers on HTTP requests that are not intended to be upgraded to websocket connections? |
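Applying the answer's map to the question's own server block would look roughly like this (a sketch; the map must live at http level, outside the server block):
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    # ...ssl settings as in the question...
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # Now "Connection: upgrade" is only sent when the client asked for it.
        proxy_set_header Connection $connection_upgrade;
    }
}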
Thanks alex for helping me out to solve this problem. Here is the solution.
Django app dir - /home/ubuntu/django
Wordpress dir - /var/www/html/blog
NGINX conf file:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name example.com;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ /blog/.*\.php$ {
root /var/www/html;
index index.php index.html index.htm;
set $php_root /var/www/html/blog;
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location /blog {
root /var/www/html;
index index.php index.html index.htm;
set $php_root /var/www/html/blog;
try_files $uri $uri/ /blog/index.php;
}
location /static/ {
alias /home/ubuntu/django/your_app_name/static/static_root/;
}
location /media/ {
alias /home/ubuntu/django/your_app_name/media/ ;
}
}
Note - please replace your home and siteurl values with http://example.com/blog in the WordPress options table.
Now your Django app runs on example.com and your blog runs on example.com/blog. | I have tried many ways but do not know how to run Django on example.com and WordPress on example.com/blog. The following is the running project directory structure for Django and WordPress.
Django app dir - /home/ubuntu/django
Django app running successfully on - example.com:8000
Wordpress dir - /var/www/html/blog
Wordpress running successfully on - example.com
Nginx configuration:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /var/www/html/blog;
index index.php index.html index.htm;
server_name example.com;
location / {
# try_files $uri $uri/ =404;
try_files $uri $uri/ /index.php?q=$uri&$args;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
Note - the Django app is run by gunicorn. I know a subdomain may be a solution, but I do not want that. How do I write an nginx configuration for both WordPress and Django to run the Django app on example.com and WordPress on example.com/blog? | How to run django and wordpress on NGINX server using same domain? |
The only time a client will downgrade to ajax polling (assuming your server supports it, which it does) is when the browser client doesn't support webSockets (e.g. a very old client) or perhaps if some proxy in the client's path doesn't support webSockets. webSockets are supported in IE10+ and all recent releases of the other browsers. So, practically speaking, it's really just IE8 or IE9 or a badly behaved proxy where you might not see client webSocket support. There are no other conditions (other than lack of support) that will "knock" a connection down to polling. You can temporarily test your application with polling by passing in only the xhr-polling transport option when connecting from the client, to tell the client that that is the only transport option allowed. Keep in mind that all webSocket connections start with an HTTP request that is then "upgraded" to the webSocket protocol if both sides agree, so if you're looking at the network trace from your browser, you should see each webSocket connection start with an HTTP request; that is normal. And, in the latest version of socket.io, it may actually exchange a few data packets with the polling transport before it successfully tries and switches to an actual webSocket. | I am pretty new to socket.io and have written my first app in node/express/socket.io. Right now everything works great on my nginx server. I want to release my app to the public, but I am gripped with the fear that it just won't work for a lot of people. I have had a few friends test my app and everything went smoothly (it is a pretty simple app). Here is my concern: Right now every connection seems to be using websockets, which is what I want. But will my app sometimes downgrade to "polling" due to something weird on the client's end? If so, how does socket.io decide when to use polling and when to use websockets (is it based on browser/version or connection or what)? I am pretty sure it uses websockets when possible, but is there a list somewhere of conditions that will knock it down to "polling"? Also, is there a way I can test my application using "polling" to see if it works? | When does socket.io use polling instead of websockets? |
this:
location /images/ {
root /www/myproject/files_storage;
}
results in the path /www/myproject/files_storage/images. It would be obvious if you set up an error_log. So use the "alias" directive instead of "root": http://nginx.org/en/docs/http/ngx_http_core_module.html#alias | I am new to the nginx server. I tried to set up a new url "/images/" for serving images. I edited the bi.site file in the sites-enabled folder.
server {
listen *:80;
access_log /var/log/myproject/access_log;
location / {
proxy_pass http://127.0.0.1:5000/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /images/ {
root /www/myproject/files_storage;
}
}
And in the /www/myproject/files_storage location I put a temp.txt file. When I open http://www.main_url/images/temp.txt it shows 404 not found. What am I doing wrong? Did I miss something important? | Root directory shows 404 in nginx |
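For reference, a minimal sketch of the alias form the answer recommends, assuming the files sit directly in /www/myproject/files_storage:
location /images/ {
    # alias drops the location prefix, so /images/temp.txt
    # maps to /www/myproject/files_storage/temp.txt
    alias /www/myproject/files_storage/;
}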
Okay, so I finally figured out why this is happening... It is not HHVM that is slow. I am using Vagrant and setting up a shared directory between my host and guest OS. VirtualBox shared folders are extremely SLOW!!! When I placed all my Wordpress files in a different private directory and pointed Nginx to it, my requests/second dramatically increased. | I might be doing something wrong, but I am doing a bit of testing between a php-fpm wordpress setup and an HHVM wordpress setup. I've heard & seen many mind-blowing results from HHVM, but I'm just shocked at the results I'm getting. Using the following apache benchmark command I'm getting a much higher performance rate from php-fpm than HHVM:
ab -n1000 http://127.0.0.1:8080/
For php-fpm I am getting 109.98 requests/second. Unfortunately for me I'm getting only ~12.33 requests/second with HHVM. These tests are done on a standard fresh Wordpress install. I must be doing something wrong in my configuration. I just need a fresh pair of eyes to see if I'm not doing something right.
Setup
Vagrant instance from my local Macbook.
Ubuntu Server 14.04.1 LTS
1GB RAM
1 CPU
Nginx
MySQL
HHVM Config
pid = /var/run/hhvm/pid
hhvm.server.file_socket=/var/run/hhvm/hhvm.sock
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
hhvm.mysql.typed_results = false
hhvm.eval.jit_warmup_requests = 0
hhvm.eval.jit = true
Nginx Config
location ~ \.(hh|php)$ {
fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Any help is appreciated! Thank you. | extremely slow HHVM, Wordpress, Nginx |
It looks like there is an upstart script that keeps Nginx up and running. After running this command I was able to stop Nginx:
sudo initctl stop nginx | I am trying to recompile nginx in order to add the PageSpeed module. I've never done anything like this before, so I'm a little scared! I am at the step after running "make" where I want to stop nginx. The problem is it seems like it restarts itself, because my site never goes down, and if I keep running the command it keeps stopping it with a new PID each time:
[ec2-user@ nginx-1.6.0]$ sudo service nginx stop
Stopping nginx: /sbin/service: line 66: 9107 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}
[ec2-user@ nginx-1.6.0]$ sudo service nginx stop
Stopping nginx: /sbin/service: line 66: 9131 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}
[ec2-user@ nginx-1.6.0]$ sudo service nginx stop
Stopping nginx: /sbin/service: line 66: 9151 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}
[ec2-user@ nginx-1.6.0]$ sudo service nginx stop
Stopping nginx: /sbin/service: line 66: 9171 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}
So now I am scared to do a make install while my Nginx is still running, because I know I am supposed to stop it before doing a make install. What should I do? | How to stop nginx on my Amazon EC2 instance |
Your uwsgi config should include a pythonpath=/path/where/lives/settings.py/ directive, so the Python interpreter will know where to find your apps. Find more information about uwsgi config options here:
http://projects.unbit.it/uwsgi/wiki/Doc
http://projects.unbit.it/uwsgi/wiki/Example | I have a Django project with one app called subscribe. In the root urls.py I use include from subscribe's urls.py. I put subscribe into INSTALLED_APPS, and in subscribe's urls.py I use subscribe.views. to call my views. When the server runs locally as python manage.py runserver all works fine. But when the server runs on nginx+uwsgi with virtualenv, I get ImportError: No module named subscribe.
When I change subscribe to project.subscribe in INSTALLED_APPS and in subscribe's urls.py change subscribe.views. to project.subscribe.views., all works fine. uwsgi config:
socket = 127.0.0.1:9003
workers = 2
master = true
virtualenv = /home/user/python
chdir = /home/user
env = DJANGO_SETTINGS_MODULE=project.settings
module = django.core.handlers.wsgi:WSGIHandler()
daemonize = /home/user/uwsgi.log
Why do I have to use absolute imports, and how can I change back to relative imports on nginx+uwsgi with virtualenv? | Django uwsgi import error |
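A sketch of the answer applied to the question's config; the exact directories are assumptions (one entry for the project root so project.settings resolves, one for the directory that directly contains the subscribe package):
[uwsgi]
socket = 127.0.0.1:9003
workers = 2
master = true
virtualenv = /home/user/python
chdir = /home/user
# hypothetical paths; adjust to where your packages actually live
pythonpath = /home/user
pythonpath = /home/user/project
env = DJANGO_SETTINGS_MODULE=project.settings
module = django.core.handlers.wsgi:WSGIHandler()
daemonize = /home/user/uwsgi.log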
You could try using the split_clients module:
http {
# Split clients (approximately) equally based on
# client ip address
split_clients $remote_addr $cdn_host {
33% cdn1;
33% cdn2;
- cdn3;
}
server {
server_name example.com;
# Use the variable defined by the split_clients block to determine
# the rewritten hostname for requests beginning with /images/
location /images/ {
rewrite ^ http://$cdn_host.example.com$request_uri? permanent;
}
}
} | I have a couple of server addresses, like cdn1.website.com, cdn2.website.com, cdn3.website.com. Each of them holds similar files. A request comes to my server and I want to redirect or rewrite it to a random cdn server.
Is it possible? | Redirect request to CDN using nginx |
Crudely, you would use a low-level cache like Redis or Elasticache to cache raw data (e.g. the result of the SQL query), whereas you would use a higher-level cache like Nginx or Varnish to cache the whole HTML page on which the data is being displayed. So which one is appropriate depends somewhat on your use case. If you have one simple page (or page fragment) which contains the slow data, and that content is displayed the same to all users, then a high-level cache might be appropriate. If the content is subject to lots of little tweaks and reformats which would make a whole-page cache very fragmented, then a lower-level cache would be appropriate. In reality, these technologies are not tied tightly to that high/low separation: you can store whole pages in Redis and individual data fragments in Varnish, so it's not as simple as that. But in general, decide what you want to cache before deciding how to cache it. Even once you've decided what to cache, choosing the right technology will depend on lots of considerations. Elasticache on AWS has the advantage of being fully managed and so will save you maintenance, but will probably be the most expensive to run (at least on a small/medium scale). Nginx caching with a filesystem backend would probably be quickest and cheapest to implement, but won't scale well (and will be awkward to refactor as your scale increases). Varnish and Redis are probably best implemented as separate EC2 instances, so sit somewhere in the middle. | I am currently running an AWS EC2 Ubuntu server that fetches data from a Postgres RDS database instance. One of the SQL queries used in a view function for a particular page has a lot of joins in it and runs quite slowly. I've tried to trim down the query and removed some joins that might be a bit unnecessary, but it still takes a little longer than desired to load (at least 6 seconds). I'm currently looking at potential caching strategies to help speed up the serving of the page. I have considered using a materialized view; however, the data fetched by the original view function updates every 30 seconds on average, and I'm worried that implementing a trigger or regular cron job to refresh the MatView this often will take its toll on the database and might not be the best strategy for data that is updated and changed regularly (unless someone can suggest another way of updating the rows in the MatView that doesn't involve running a query that looks very similar to the original one). I've tested Redis on an Elasticache instance so far and have been impressed with how it works; however, I've also been recommended to look at Nginx and Varnish caching strategies as well. I'm somewhat confused about which caching strategy is best suited for this situation. Would Redis/Memcached on an Elasticache instance be a bit too heavyweight vs an implementation of Nginx/Varnish on an EC2 instance? Is it considered a bad idea to try and cache data that will change often in an Nginx cache? | Best caching strategy data that is updated frequently (Redis/Memcached vs Nginx/Varnish vs Materialized view) |
Kubernetes Ingress is incapable of this. You could create a new service that targets server1, server2 and backup1 and use that in the Ingress, but the backends will be used in a round-robin fashion. You can create a Deployment and a Service of (stateless) nginx reverse proxies with the config you wish and use that in the Ingress. | How do I set up a Kubernetes Ingress and controller to essentially do what the following nginx.conf file does:
upstream backend {
server server1.example.com weight=5;
server server2.example.com:8080;
server backup1.example.com:8080 backup;
}
I want one http endpoint to map to multiple Kubernetes services with a preference for a primary one, but also have a backup one. (For my particular project, I need to have multiple services instead of one service with multiple pods.) Here's my attempted ingress.yaml file. I'm quite certain that the way I'm listing the multiple backends is incorrect. How would I do it? And how do I set the "backup" flag?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fanout-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "nginx"
# kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: server1
servicePort:
- path: /
backend:
serviceName: server2
servicePort: 8080
- path: /
backend:
serviceName: backup1
servicePort: 8080
I'm running Kubernetes on GKE. | How do I map multiple services to one Kubernetes Ingress path? |
When deploying Django, one of the recommended deployment methods is using WSGI (see Deploying Django). This method of deploying Django is also well supported by AWS Elastic Beanstalk, and they even have a guide, Deploying a Django Application to Elastic Beanstalk. At a high level, you want to do the following: Create a virtual environment (using virtualenv) to keep track of your python dependencies as you develop. Configure your project for Elastic Beanstalk; this includes freezing your virtualenv to a requirements.txt file and configuring EB extensions for Django's WSGI. Use the EB CLI to initialize your project and create an environment. Behind the scenes, Elastic Beanstalk is going to spin up the instances, Elastic Load Balancers, etc., as well as configure the instances to accept traffic with Apache, then use Apache's mod_wsgi to handle traffic for Django. | I am working on creating a Django web app using resources on AWS. I am new to deployment, and in my production setup (Elastic Beanstalk, i.e. ELB based) I would like to move away from the Django development web server and instead use Nginx + Gunicorn. I have been reading about them and also about ELB. Is Nginx + Gunicorn needed if I deploy my Django app on ELB? ELB does come with a reverse proxy, auto scaling, load balancing, etc. Appreciate the inputs. | AWS elastic beanstalk + Nginx + Gunicorn |
The location = syntax matches one URI and not all of the URIs under it. Also, you should use the ^~ modifier to prevent the regular expression location blocks from interfering. See this document for the rules regarding the evaluation order for location blocks. If you have any PHP files under /t/sms/plivo/ you will need to add a nested location block to handle those. For example:
location ^~ /t/sms/plivo/ {
auth_basic off;
allow all; # Allow all to see content
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
}
That location ~ \.php$ block is in addition to the block already in your configuration with the same name. And you probably do not need the allow all statement, unless you have some deny rules that I cannot see. | I have set up my Nginx server to require authentication for everything, but I want to exclude all the files under /var/www/html/t/sms/plivo from password authentication. I have tried using different paths, but it always asks for a password when I try to access a file under /var/www/html/t/sms/plivo from my browser. Below is my /etc/nginx/sites-available/default file:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
auth_basic "Private Property";
auth_basic_user_file /etc/nginx/.htpasswd;
#no password for the plivo folder so we can receive messages!
location = /t/sms/plivo/ {
auth_basic off;
allow all; # Allow all to see content
}
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
} | Exclude one directory from Nginx password authentication |
This guide is only for PHP 7, Mac El Capitan, PHPStorm 2016.3.
Install brew: http://brew.sh/
Install php7: brew install php70
Install nginx. Guide - http://learnaholic.me/2012/10/10/installing-nginx-in-mac-os-x-mountain-lion/ Config - https://gist.github.com/kmaxat/c07795ab88677efb843686d075fafa9e
brew install php70-xdebug
Create an info.php file in the public folder of Laravel.
In PHPStorm: Settings -> Languages & Frameworks -> PHP. Click on '...' next to CLI interpreter. If the above steps are done properly, you should be able to see the interpreter with Xdebug detected.
Setup Server: Run -> Edit Configurations -> ... (next to server).
Setup Edit Configuration: Run -> Edit Configurations -> + -> PHP Web Application. Choose the created server, set a name.
In the toolbar select the created server, then click on "Start Listening for PHP Debug Connections".
Set a breakpoint at public/index.php.
In the toolbar click "Debug 'ServerName'". | Guide on how to set up XDebug with PHPStorm. Versions: PHP 7.0, PHPStorm 2016.3.2, XDebug 2.5, OS X El Capitan 10.11.6 | How to use Xdebug with Laravel on Nginx with PHPStorm on Mac? |
A WebSocket application keeps a long-running connection open between the client and the server, facilitating the development of real-time applications. The HTTP Upgrade mechanism used to upgrade the connection from HTTP to WebSocket uses the Upgrade and Connection headers. There are some challenges that a reverse proxy server faces in supporting WebSocket. One is that WebSocket is a hop-by-hop protocol, so when a proxy server intercepts an Upgrade request from a client it needs to send its own Upgrade request to the backend server, including the appropriate headers. Also, since WebSocket connections are long-lived, as opposed to the typical short-lived connections used by HTTP, the reverse proxy needs to allow these connections to remain open, rather than closing them because they seem to be idle. NGINX supports WebSocket by allowing a tunnel to be set up between a client and a backend server. For NGINX to send the Upgrade request from the client to the backend server, the Upgrade and Connection headers must be set explicitly, as in this example:
location /wsapp/ {
proxy_pass http://wsbackend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
Once this is done, NGINX deals with this as a WebSocket connection. For more details please visit:
https://www.nginx.com/blog/websocket-nginx/
https://blog.martinfjordvald.com/2013/02/websockets-in-nginx/
Hope this helps! | So I've been reading up on this whole server setup in which Nginx is used in front of nodejs as a reverse proxy so that it serves the static content while allowing node to do the dynamic stuff. My question is, why would someone want to use the nginx front to reverse proxy to the websocket? If nginx serves the static content (HTML, CSS, JS, media, etc.) then can't the JS file that is served simply connect to the server directly using the ip address and the port that the websocket is listening on in the nodejs server? Why go through nginx to connect to the websocket on the server? Or am I not understanding this situation clearly? Thank you! | Why use nginx as websocket proxy? |
I answer myself, to share the solution found here: https://stackoverflow.com/a/33260827/1786393
The point was not the mentioned nginx configuration, but the PEM file:
openssl req -newkey rsa:2048 -sha256 -nodes -keyout YOURPRIVATE.key -x509 -days 365 -out YOURPUBLIC.pem -subj "/C=US/ST=New York/L=Brooklyn/O=Example Brooklyn Company/CN=YOURDOMAIN.EXAMPLE"
YOURDOMAIN.EXAMPLE in the -subj string of openssl must be the real hostname of the server that receives the webhooks. | I'm working on a Ruby language server to manage multiple Telegram Bots via setWebhook. BTW, I'll deliver the server as open source at BOTServer. PROBLEM: I have trouble receiving webhook updates from the Telegram Bot API Server. I have set a webhook token (Telegram replies "success") but I do not receive any update on the successfully configured webhook. I think the problem could be around self-signed certificate mysteries. See this old reddit question and answers. I have a similar problem and I fear the point is some "misunderstanding" between the Telegram Bot API Server that sends HTTPS webhook updates and the bot server receiving webhooks (I use nginx as the proxy/HTTPS SSL certificate handler). It seems that someone solved the issue by configuring nginx with a certificate "chain"; I'm pretty ignorant in certificate tricks, so I ask: QUESTION: Can someone post info on how to configure nginx (any ssl web server!) with detailed settings / step-by-step for dummies, showing how to go from the .key and .pem files described here: https://core.telegram.org/bots/self-signed to set up the certificate "chain" in the nginx config, so it is "accepted" by the Telegram Bot API Server? BTW, my nginx config now:
upstream backend {
server 127.0.0.1:3000;
}
#
# HTTPS server
#
server {
listen 8443 ssl;
server_name myhost.com;
ssl on;
ssl_certificate /mypath/ssl/PUBLIC.pem;
ssl_certificate_key /mypath/ssl/PRIVATE.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
location @backend {
proxy_pass http://backend;
}
location / {
try_files $uri @backend;
}
}
where the PRIVATE.key + PUBLIC.pem files are the ones generated following the guidelines in Using self-signed certificates:
openssl req -newkey rsa:2048 -sha256 -nodes -keyout PRIVATE.key -x509 -days 365 -out PUBLIC.pem -subj "/C=US/ST=New York/L=Brooklyn/O=Example Brooklyn Company/CN=YOURDOMAIN.EXAMPLE"
thanks, giorgio | Telegram Bot API Webhooks Self-signed Certificate issue |
Adding the
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
to the 3rd server directive fixed this issue. | I'm having issues with this config:
#=========================#
# domain settings #
#=========================#
# Catch http://domain, and http://www.domain
server {
listen 80;
server_name www.domain domain;
# Redirect to https://domain
return 301 https://domain$request_uri;
}
# Catch https://www.domain
server {
listen 443;
server_name www.domain;
# Redirect to https://domain
return 301 https://domain$request_uri;
}
# Catch https://domain
server {
listen 443;
server_name domain;
root /usr/share/nginx/domain;
index index.html index.htm;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
location / {
try_files $uri $uri/ =404;
}
}
Something is wrong with the 3rd server directive. I get an SSL connection error. But when I comment out that section everything works fine. But I want www to redirect to non-www over https as well. Can anyone spot the problem? | Nginx Redirect HTTP to HTTPS and WWW to Non-WWW |
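A side note, not part of the original answer: since nginx 1.15.0 the ssl on; directive is deprecated in favor of the ssl parameter on the listen directive, so on a current nginx the same fix would be written as:
server {
    listen 443 ssl;
    server_name domain;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    # ...rest of the block unchanged...
}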
After many hours of debugging we finally found the actual cause of the issue. The error message was produced by a client requesting the nginx without a domain, e.g. https://11.22.33.44/robots.txt. Nginx then forwarded the request to an IIS server which did not have any default website bound to https for IP-only requests. The conclusion for the original question is that "peer" actually DOES refer to the upstream (the IIS), as the IIS is the one cutting the connection. The solution we chose for this problem, to avoid this error in nginx and thereby avoid letting clients put all upstream servers into "down" mode, is to configure nginx to deny these requests by itself. Another possible solution was to ensure that the IIS behaved nicely for these requests. | In order to debug an nginx error case, I need to fully understand an error log message first. Our nginx writes the particular error log message from time to time.
Log message: "peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream".
What is meant by "peer"? I would like to know: Does "peer" refer to the upstream, meaning that the upstream closed the connection during the ssl handshake, or does it refer to the client, meaning that the client closed the connection while the load balancer and the webserver were internally in a handshake?
Setup:
nginx loadbalancer
2 webservers (upstreams) running IIS8
Ssl provider: Comodo | nginx error message - what does "peer" refer to? |
The problem was a non-existent path in the session.save_path setting, not in the open_basedir list in php.ini. | When I try to open index.php in the browser I see the error: No input file specified.
In error.log:
2013/11/04 22:40:07 [error] 3435#0: *4 FastCGI sent in stderr: "Unable to open primary script: /var/www/index.php (Operation not permitted)" while reading response header from upstream, client: 10.0.2.2, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost"
Configuration of the server:
CentOS 6.4
PHP 5.4.17 (installed from sources)
Nginx 1.0.15
PHP-FPM works from user nginx.
$ ps aux | grep fpm
root 3460 0.0 0.7 29524 3428 ? Ss 22:48 0:00 php-fpm: master process (/usr/etc/php-fpm.conf)
nginx 3462 0.0 0.5 29524 2732 ? S 22:48 0:00 php-fpm: pool www
nginx 3463 0.0 0.5 29524 2732 ? S 22:48 0:00 php-fpm: pool www
nginx 3464 0.0 0.7 29524 3592 ? S 22:48 0:00 php-fpm: pool www
nginx 3465 0.0 0.5 29524 2732 ? S 22:48 0:00 php-fpm: pool www
nginx 3466 0.0 0.5 29524 2732 ? S 22:48 0:00 php-fpm: pool www
vagrant 3468 0.0 0.1 5532 720 pts/0 D+ 22:48 0:00 grep fpm
$ ls -la /var/www
drwxr-xr-x 2 nginx nginx 4096 Ноя 4 22:34 .
drwxr-xr-x. 19 root root 4096 Ноя 4 22:31 ..
-rw-r--r-- 1 nginx nginx 17 Ноя 4 22:34 index.php
Switching on catch_workers_output doesn't help. | PHP-FPM: Operation not permitted |
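For anyone hitting the same error, the shape of the fix in php.ini; the path shown is a hypothetical example and must exist and be writable by the user PHP-FPM runs as (nginx here):
; php.ini
session.save_path = "/var/lib/php/session"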
The root of the problem is not with your setup, but with the first web forward: it works by redirecting the requested URL (http://www.yoursite.com) to the new URL (http://yoursite.com:8000). So this is already in place when the request reaches your setup, and you can't change it back to port 80, as your provider blocks it. You could use a frameset as a forwarder ("Web 0.5") or live with it. | I'm trying to set up a simple static website, and I have an issue with nginx that's complicated by a number of things, most notably the fact that my ISP blocks all inbound port 80 traffic. First, I got a web forward set up so that www.mysite.com will redirect to mysite.com:8000, and then I set up my router to forward port 8000 to my server running nginx. This gets around my ISP's block on port 80. I'm now attempting to have nginx on the server proxy the request on port 8000 to a virtual host on port 80, so that the site will show up as mysite.com after it loads rather than mysite.com:8000. I've been trying to do this with nginx's proxy_pass directive, but no matter what I do the site always shows up as mysite.com:8000. Here's what I have so far:
server {
listen [::]:8000;
server_name mysite.com;
location / {
proxy_pass http://127.0.0.1:80;
proxy_redirect default;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
}
}
server {
listen 127.0.0.1:80;
server_name mysite.com;
root /var/www/homepage;
index index.html;
.
. (non-relevant stuff)
.
Link to the actual site: http://www.bjacobel.com
I've also tried to do this by forwarding port 8000 at the router to port 80, and having nginx listen on port 80, but the url with :8000 in it still shows up. Thanks for your help! | Nginx hide forwarded port number [closed] |
Your problem relates to the use of break instead of last. From the documentation: http://wiki.nginx.org/HttpRewriteModule#rewrite
last - completes processing of current rewrite directives and restarts the process (including rewriting) with a search for a match on the URI from all available locations.
break - completes processing of current rewrite directives and non-rewrite processing continues within the current location block only.
Since you do not define a handler for the /redirector within the /forum location block, your if(..) { rewrite } does not do what you want. Make that break a last, so that the rewrite can trigger the appropriate location block. | I'm using IPB forums. I managed to use friendly urls with nginx server conf modifications. However, I need to redirect my old forum's URLs to a redirector php file to get the current url of a topic (or forum, member, etc.). For example: if the url is like /forum/index.php?board=23, I will do a redirection to redirector.php. This is my current configuration to be able to use friendly URLs on IPB:
location /forum {
try_files $uri $uri/ /forum/index.php;
rewrite ^ /forum/index.php? last;
}
When I insert an if statement inside this location block like the following, I cannot retrieve the query parameter "board":
location /forum {
if ($arg_board != "") {
rewrite ^ /redirector.php?q=$arg_board break;
}
try_files $uri $uri/ /forum/index.php;
rewrite ^ /forum/index.php? last;
}
What is missing here? | Redirection if query parameter exists on nginx |
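Putting the answer's one-word fix into the question's config: break becomes last, so the rewritten URI re-enters location matching and can reach whichever location handles /redirector.php.
location /forum {
    if ($arg_board != "") {
        rewrite ^ /redirector.php?q=$arg_board last;
    }
    try_files $uri $uri/ /forum/index.php;
    rewrite ^ /forum/index.php? last;
}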
This should do the trick:
set $no_cache "";
if ($request_uri ~* \.gif$) {
set $no_cache "1";
}
proxy_no_cache $no_cache;
proxy_cache_bypass $no_cache; | I have nginx set up, acting as a reverse proxy to apache.
However, I need to disable caching for gifs.
How can I do this in nginx? Thanks | disable nginx caching for certain file types |
Since you haven't got an answer, I'll give a correct, but entirely half-baked and code-free, solution. Check the Mojolicious::Guides::Cookbook for nginx and Plack deployment. Mix this with Plack::Builder for deploying multiple applications on the same server. I'd go with Starman as the server engine, probably, but that is up to you and your specific needs. That's basically it. Sorry I don't have code for you, but that should do exactly what you want once you get through each step; the docs are good and can be supplemented with blog posts from various Perl devs. | I have some Mojolicious-based apps which happily run under Apache2 with mod_cgi and mod_fastcgi. The urls are for example:
http://example.org/oneapp/path/info?foo=bar
http://example.org/oneapp?foo=bar
http://example.org/secondapp/path/info?foo=bar
http://example.org/thirdapp/path/info?baz=heh
#etc...
I had relative success configuring the apps as subdomains using proxy_pass, but I would like to keep the old urls (just switch from apache2 to nginx). I would like to keep the same urls but run the apps using nginx. What should my configuration look like, and how should I run the apps?
Thanks in advance! | Example for several (fastcgi/uwsgi/scgi/proxy_pass) Mojolicious apps in the same nginx virtual host? |
Remove the repository owned by www-data and follow the solution on this webpage for setting up a post-receive hook in the repository owned by git. | On my server, I have two users, www-data (which is used by nginx) and git. The git user owns a repository that contains my website's code, and the www-data user owns a clone of that repository (which serves as the webroot for nginx). I want to set up a workflow such that pushing to git's repository causes www-data's repository to update, thus updating my website. What is the correct way to set up the hooks for these repositories (that also takes into consideration privileges and permissions of these two users)? | Git-based website deployment workflow |
The problem after all was the use of * instead of "*" in bash. The result was that all the filenames in the current directory were glob-expanded into the FORWARDED_ALLOW_IPS parameter instead of the character "*". | I've tried everything:
@Starlette:
routes = [
Mount("/static/", StaticFiles(directory=parent+fs+"decoration"+fs+"static"), name="static"),
Route(....),
Route(....),
]
@Uvicorn:
--forwarded-allow-ips=domain.com
--proxy-headers
@url_for:
_external=True
_scheme="https"
@nginx:
proxy_set_header Subdomain $subdomain;
proxy_set_header Host $http_host;
proxy_pass http://localhost:7000/;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect http://$http_host/ https://$http_host/;
include proxy_params;
server {
if ($host = sub.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
listen [::]:80 ;
server_name sub.domain.com;
return 404; # managed by Certbot
}
If I open a .css or .js link directly, nginx renders it over https. When I allow Firefox to ignore the unsafe content, the whole page is rendered correctly on the production server. Let's Encrypt works perfectly with the whole domain; no issues with the certificate. | Starlette's url_for doesn't create links with https scheme behind Nginx (via uvicorn) |
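In other words, quote the asterisk so the shell does not glob-expand it into the filenames of the current directory; the module and app names below are placeholders:
uvicorn main:app --proxy-headers --forwarded-allow-ips="*"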
Annotations can only be set on a whole kubernetes resource, as they are part of the resource metadata. The ingress spec doesn't include that functionality at a lower level. If you are looking for more complex setups, traefik have built a custom resource definition for their ingress controller that allows more configuration per service. The downside is the definition is not compatible with other ingress controllers. | We are migrating from a traditional nginx deployment to a kubernetes nginx-ingress controller. I'm trying to apply settings at a location level, but can't see how to do so with annotations. For example, we had:
server {
listen 80;
server_name example.com;
location /allow-big-uploads {
client_max_body_size 100M;
...
}
}
And we translate to something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web-ingress
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 100m <-- this now applies globally
spec:
rules:
- host: example.com
http:
paths:
- path: /allow-big-uploads
backend:
serviceName: example-svc
servicePort: 5009
Adding that annotation under the path section doesn't seem to work. Am I missing something? | Apply nginx-ingress annotations at path level |
Check your app/assets/stylesheets folder: it should have an application.css file. You will also have to precompile assets in the production environment before starting the server there. You can precompile assets using:
RAILS_ENV=production rails assets:precompile
If it still does not work, you can try setting the config.assets.compile option to true in production.rb so it will do live compilation, although it should be false in the production environment as it impacts performance:
config.assets.compile = true | After moving my Ruby on Rails app to the production server (AWS EC2 Amazon Linux 2018.03), pages don't render because of the error "The asset 'application.css' is not present in the asset pipeline" (precompiled files are present in public/assets):
production.log
However, when I refresh my application (sometimes more than once), this file is found in the cache and the page renders correctly. It seems like the server doesn't wait for file precompilation or something like that. It happens not only on first page entry, but on every change of view. I followed the tips from the post "application.css not in asset pipeline", but it didn't help. My stack:
ruby 2.6.3
rails 5.2.3
Unicorn 5.5.1
nginx 1.14.1
I will be really grateful for any hints. | RoR App: "The asset 'application.css' is not present in the asset pipeline" after moving to production server |
I don't know which ciphers work with TLSv1 and TLSv1.1. But I notice from testing sites with SSLTest that the GCM ciphers are listed against TLSv1.2 only. You may need to use a more inclusive list of ciphers. For example:
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; | My nginx config file looks like:
server {
listen 80;
listen [::]:80;
server_name hostserver.ru www.hostserver.ru;
return 301 https://hostserver.ru$request_uri;
server_tokens off;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name hostserver.ru www.hostserver.ru;
ssl_certificate /etc/letsencrypt/live/hostserver.ru/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/hostserver.ru/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-R$
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000" always;
ssl_stapling on;
ssl_stapling_verify on;
root /var/www/html;
index index.html index.htm;
server_tokens off;
... some location stuff...
Unfortunately, TLS 1.2 is not supported by Android 4.0-4.3, so I changed the config:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
But after using SSLTest it still reports that TLS 1.0 and TLS 1.1 are not supported. Did I miss something to change in the config files?
Thanks in advance.
UPDATE: I've checked the certificates with the command:
openssl s_client -tls1 (and so on) -connect example.org:443 < /dev/null
and the certificate is enabled for each protocol. | How to enable back TLSv1 and TLSv1.1 on nginx? |
I haven't used docker, but you should be able to get the information you want from x-forwarded-for: Express.js: how to get remote client address. From the above link:
var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
Oh, and interesting to note: a second answer in the above thread might actually be superior for your needs. Instead of changing your code everywhere that tries to get the user's ip address, you can just change how you initialize express:
app.enable('trust proxy'); | I host a node.js server with express.js in a docker container.
The address of my container is 172.17.0.62
And I use nginx to redirect the traffic to 172.17.0.62. I can access my server. But when I use console.log(req.ip + ' ' + req.protocol + ' ' + req.originalUrl); to log the traffic, req.ip is always 172.17.42.1. I want to get the ip of the viewer of my webpage. I use this in my nginx configuration:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
and
cat /proc/sys/net/ipv4/ip_forward # output 1 | Node.js in container always get the docker0 ip |
Phew. Solved. As the error message says, this indeed was just a permission issue. Checking through "/var/lib/nginx/tmp/client_body/" and making sure the permissions are correct at each directory level solved the issue. More details can be found here: http://derekneely.com/2009/06/nginx-failed-13-permission-denied-while-reading-upstream/ and here: Permission Denied error with Django while uploading a file | I got a 500 from the Django admin when I tried to upload a photo. When I inspected the error.log I found:
2014/03/13 23:00:55 [crit] 16478#0: *24 open() "/var/lib/nginx/tmp/client_body/0000000012" failed (13: Permission denied), client: xxxxxxx.xxx, server: xxxxxxx.xxx, request: "POST xxxxxxx.xxx/item/86/ HTTP/1.1", host: "xxxxxxx.xxx", referrer: "http://xxxxxxx.xxx/item/86/"
What could be wrong here? | nginx 500 error, permission denied for tmp folder |
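A quick way to inspect the permissions at every directory level of that path (namei ships with util-linux on most Linux distros):
namei -l /var/lib/nginx/tmp/client_body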
You already figured it out, but let me explain a bit why it's working. The first site, site1, should have worked just fine, because the default http port is 80, and that's what site1 was listening on, so http://site1.com would have worked just fine. The second config file, for site2, was listening on port 7777, so doing a plain http://site2.com would not have worked; actually it would probably have picked your default website and served that instead, because nginx isn't trying to match the server_name with the one in the config when the port doesn't match. You should create all your websites on port 80 and nginx will do the matching by itself and know which site to serve, unless it's an https website; then you'd use port 443 instead, that's the default ssl port. | I have multiple local sites and I want to configure nginx to have a different host for each website. In /var/www I have 2 sites: site1 and site2. Then in /etc/nginx/sites-available/ I created 2 different server configurations, one for each. I have the files site1 and site2, whose content is like:
server {
listen 80;
root /var/www/site1;
index index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ /index.html;
}
}andserver {
listen 7777;
root /var/www/site2;
index index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ /index.html;
}
}
I access them with http://localhost:80 for site1 and http://localhost:7777 for site2. That works perfectly. I could also add the hostnames in /etc/hosts like this:
127.0.0.1 localhost site1 site2
and I could access them with http://site1:80 and http://site2:7777. But I always have to include the port number. I want to access them via http://site1 and http://site2. Is there a solution for that? | change localhost hostname in nginx |
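Concretely, the answer applied to the question's two files: both server blocks listen on port 80 and are told apart by server_name, matching the names added to /etc/hosts (a sketch):
server {
    listen 80;
    server_name site1;
    root /var/www/site1;
    index index.html index.htm;
}
server {
    listen 80;
    server_name site2;
    root /var/www/site2;
    index index.html index.htm;
}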
The "solution" found is to usehaproxyto split the tcp stream between nginx and NodeJS.It is not optimal because it adds yet-another-program in our stack, but it does the job.It seems to me that nginx websocket support is still far from being production-ready. | As explained onnginx's websiteI've used these settings for my nginx to proxy websockets to a NodeJS server:location /socket.io/ {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
Everything works fine and socket.emit() / socket.on() send messages to each other, until I send a rather big text message (26 kB of html):
the big message is not received by NodeJS (so I guess that the issue is on nginx's side)
there are no errors in the nginx log
once this big message has been sent by the client, NodeJS stops receiving socket.io's heartbeats from this client.
What am I doing wrong? Is there an nginx setting that I am not aware of? | nginx as a proxy for NodeJS+socket.io: everything is OK except for big messages |
I should have used the SERVICE name as the server name in nginx instead of the ALIAS name. Running nslookup in the nginx container shows:
/ # nslookup api
nslookup: can't resolve '(null)': Name does not resolve
Name: api
Address 1: 172.20.0.7 project_api_1.project_default
Address 2: 172.20.0.5 project_api_3.project_default
Address 3: 172.20.0.6 project_api_2.project_default
/ # nslookup servers.api
nslookup: can't resolve '(null)': Name does not resolve
Name: servers.api
Address 1: 172.20.0.7 project_api_1.project_default
Working nginx.conf:
worker_processes 2;
events { worker_connections 1024; }
http {
sendfile on;
server {
listen 8080;
location / {
resolver_timeout 30s;
resolver 127.0.0.11 ipv6=off valid=10s;
set $backend http://api:80;
proxy_pass $backend;
proxy_redirect off;
}
}
} | I'm trying to load balance an API server using nginx and docker's native DNS. I was hoping nginx would round-robin API calls to all available servers. But even when I specify docker's DNS server as the resolver, nginx forwards the request to only one server. Relevant section from docker-compose.yml:
restart: always
build: ./src/nginx/.
ports:
- "8080:8080"
links:
- api:servers.api
nginx.conf:
worker_processes 2;
events { worker_connections 1024; }
http {
sendfile on;
server {
listen 8080;
location / {
resolver_timeout 30s;
resolver 127.0.0.11 ipv6=off valid=10s;
set $backend http://servers.api:80;
proxy_pass $backend;
proxy_redirect off;
}
}
}
The NGINX round-robin load balancer works if I manually specify each server, which I don't want to do since it can't scale automatically:
worker_processes 2;
events { worker_connections 1024; }
http{
sendfile on;
upstream api_servers{
server project_api_1:80;
server project_api_2:80;
server project_api_3:80;
}
server{
listen 8080;
location / {
proxy_pass http://api_servers;
proxy_redirect off;
}
}
}
How do I configure nginx in such a way that it can detect newly added containers and include them in the round-robin? | Docker load balance using NGINX proxy |
If there is a clean separation of your client-side code and your server-side code (e.g. so anything the client needs to run is either pre-built into static files or served using your rest api), then it's far better to serve the client-side files either directly from NGINX or from a CDN. Performance and scaling are better, and there is less work for you to do in code on the server to manage caching, etc., plus you can later scale the api independently. | Just a quick question. What would be more beneficial: serving my angular application via node with a reverse proxy from nginx, or just serving it directly from nginx? I would think it would be faster to serve it directly from nginx. | Serve angular in node vs nginx |
Well, first of all, there is no such thing as transparently proxying a backend from a root domain to a domain with an added base url. If you want to proxy http://xyz/abc to http://def then there is no way to have a 100% guarantee that everything works; you need application-specific changes. If your backend API doesn't return urls referencing the current url, then you don't need to worry about the proxy_pass. But if you have html, then you need to fix everything that comes your way. See a simple config I created for a deluge backend: How to proxy calls to specific URL to deluge using NGINX? As you can see, all the sub_filter directives were added to fix urls in CSS, JavaScript and HTML. And I had to run it, find issues, and then implement fixes. Below is the config for your reference:
sub_filter_once off;
sub_filter_types text/css;
sub_filter '"base": "/"' '"base": "/deluge/"';
sub_filter '' '\n';
sub_filter 'src="/' 'src="./';
sub_filter 'href="/' 'href="./';
sub_filter 'url("/' 'url("./';
sub_filter 'url(\'/' 'url(\'./';
set $deluge_host 192.168.33.100;
set $deluge_port 32770;
proxy_pass http://$deluge_host:$deluge_port/$1;
proxy_cookie_domain $deluge_host $host;
proxy_cookie_path / /deluge/;
proxy_redirect http://$deluge_host:$deluge_port/ /deluge/;
}
You can customize the above based on your app, but below is roughly what you would need:
location /app1/ {
sub_filter_once off;
sub_filter '' '\n';
sub_filter 'src="/' 'src="./';
sub_filter 'href="/' 'href="./';
} | I'm trying to use nginx to reverse proxy multiple web applications on the same host/port, using a different path to distinguish between applications.My nginx config looks like the following:
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
upstream app1 {
server 192.168.0.1:8080;
}
upstream app2 {
server 192.168.0.2:8080;
}
server {
server_name my-application-server;
listen 80;
location /app1/ {
proxy_pass http://app1/;
}
location /app2/ {
proxy_pass http://app2/;
}
}
This correctly proxies any requests to individual pages of my app - e.g. http://my-application-server/app1/context/login - but any hyperlinks in my app are broken because they're missing the app1 part of the path - e.g. they direct me to http://my-application-server/context/login-success rather than http://my-application-server/app1/context/login-success.I've tried adding various values for proxy_redirect and rewrite, but nothing I do can convince these links to be rendered correctly.My app is a Java webapp running in Tomcat, if that makes any difference. I've seen other solutions where I can change the context path of my webapp, but I need nginx to transparently proxy the requests without Tomcat having to be configured to know about the nginx path. | Nginx reverse proxy with different context path
Ended up matching the context of the Tomcat server to the desired Nginx location. Following the above example, the context would be set to '/app'.
Edit:
I set the application context property in the application.yml:
server:
servlet:
contextPath: /app | I have some trouble getting Thymeleaf to form relative URLs correctly when it is used on a server behind a reverse proxy (nginx in my case).Let's say I have the following location in nginx:
location /app {
proxy_pass http://10.0.0.0:8080;
}
I have the following Thymeleaf 3 page (index.html); the markup boils down to a standard Thymeleaf stylesheet link plus a body, roughly:
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <link rel="stylesheet" th:href="@{/css/some.css}"/>
</head>
<body>
Hello world!
</body>
</html>
The server used is an embedded Tomcat server (Spring Boot), running at context /.When I send a request to http://example.com/app, I get the index.html page as the response. The CSS cannot be retrieved, however, because when the URL is constructed in the Thymeleaf template, it uses the context path of the Tomcat server, which is /. The constructed URL in index.html looks as follows:
http://example.com/css/some.css
This obviously results in a 404 Not Found.
The URL needs to be formed as follows:
http://example.com/app/css/some.css
What do I need to configure to let Thymeleaf form the URL as http://example.com/app/css/some.css? I would rather not hardcode the base URL anywhere for certain profiles or anything like that. I think I need to add something to the nginx configuration, but I'm not sure what exactly. | Thymeleaf template (in Spring boot application) behind reverse proxy not forming url's correctly
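One commonly used alternative to changing the app's context path (an assumption on my part, not from the answer above: it relies on Spring's forwarded-header support being enabled) is to keep the app at / and have nginx announce the prefix instead:
location /app/ {
    proxy_pass http://10.0.0.0:8080/;
    proxy_set_header X-Forwarded-Prefix /app;
}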
using the “=” modifier it is possible to define an exact match of URI
and location. If an exact match is found, the search terminates.So you can use this configuration:
location = /index.html {
add_header "Cache-Control" "no-cache" ;
}
location / {
rewrite ^(.*) /index.html break;
}
You can find more information in the difference between break and last, and in Directives with the "=" prefix. | I have the following config (for an Angular app):
location / {
try_files $uri /index.html;
add_header "Cache-Control" "no-cache" ;
}
Now I would like to add that header only for index.html, but not for any other files. How do I do that? | Nginx - how to add header for index.html when using try_files
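Putting the answer and the question together, a combined sketch for this case that keeps the original try_files instead of the rewrite:
location = /index.html {
    add_header "Cache-Control" "no-cache";
}
location / {
    try_files $uri /index.html;
}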
You have two independent issues:
1. Your requests all redirect to example.com, regardless of which specific domain is originally accessed.
This happens because the $server_name variable that you are using is effectively a static variable in a given server context, and has a very distant relationship to $http_host. The correct way is to use $host instead (which is basically $http_host with some edge-case cleanups).
2. You're receiving connection issues when trying to contact https://example.com, but not https://www.example.com.
There is not enough information in your question to pinpoint the exact origin of this problem. It could be a DNS issue (A/AAAA records of example.com set at an IP address where appropriate bindings to the https port aren't made). It could be an issue with a mismatched certificate: does your certificate cover both example.com and www.example.com? If not, then you can't have both. If you have separate certificates, you may also need to acquire separate IP addresses, or risk preventing a significant number of users from accessing your site due to lack of SNI.
As a note, it is generally sloppy practice not to have a unified notation for the way your site is accessed. Especially if SEO is of any concern to you, the best practice is to decide whether you want to go with or without www, and stick to it. | I have a webpage where the HTTP redirects are a bit broken.The current behavior is this: www.example.com, example.com, http://www.example.com, http://example.com and https://www.example.com all get redirected to https://www.example.com, and https://example.com gets an error saying it refused to connect.I want the behavior to be like this:
example.com, http://example.com, https://example.com redirect to https://example.com
www.example.com, http://www.example.com, https://www.example.com redirect to https://www.example.com
Here is my Nginx config file:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com www.example.com;
return 301 https://$server_name$request_uri;
}
server {
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location ~ /.well-known {
allow all;
}
location / {
try_files $uri $uri/ =404;
}
}
The reason is that I want these links to work:
https://www.ssllabs.com/ssltest/analyze.html?d=example.com
https://www.ssllabs.com/ssltest/analyze.html?d=www.example.com
https://hstspreload.org/?domain=example.com
https://hstspreload.org/?domain=www.example.com | How to fix http redirects with Nginx?
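A sketch of a redirect block that produces the asked-for behavior, using $host so each name redirects to its own https variant (this assumes the certificate setup covers both names, per the answer's caveats):
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}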
I found an article that has a little trick to deal with your problem.
TL;DR: You can remap variables with complex names by using the map module as follows:
map $is_args $http_x_origin {
default $http_x-origin;
}
The trick is that map does not fully parse its arguments. The syntax is: map A X { default Y; }, with:
A: any variable, preferably one that does not trigger much internal processing (since nginx configs are declarative, using a variable evaluates it). I use $is_args because it's cheap to calculate.
X: the name of the new variable you'll be creating, i.e. the map target.
Y: the name of the variable you want to access. At this point, it can contain dashes, because map does its own parsing.
I guess it might work with $args_ too. | I'm using the nginx variable $arg_ to get URL args.But I find that if the URL is like 'http://foobar.com/search?field-keywords=foobar', $arg_field_keywords or $arg_field-keywords don't work.Can I get field-keywords with $arg_?Thanks in advance. | nginx: how to get url args which contain dash?
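Applied to the question's parameter, the same trick would look like this (untested sketch; $field_keywords is a made-up target name):
map $is_args $field_keywords {
    default $arg_field-keywords;
}
# $field_keywords now holds the value of ?field-keywords=...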
Finally figured it out. It was rather simple, but on the server side of things.
Im able to send a local file to my rtmp server with ffmpeg and server is handling it as expected.ffmpeg -re -i /Users/r3dsm0k3/10.mp4 -vprofile baseline -ar 44100 -ac 1 -c copy -f flv rtmp://192.168.1.4:1935/hls/exampleTried gstreamer with fakesink and it doesn't give any errors.Update 2Tried with v4l2src as well, without luck. | gstreamer streaming to nginx rtmp server |
Since all processes run on the native host (you can run ps aux on the host, outside the container, and see them), there should be very little overhead. The network bridging and iptables entries that forward packets to the virtual host will add some CPU overhead, but I can't imagine that being too onerous. | I have a physical server running Nginx and MySQL and serving my PHP website. The server has a multi-core processor and 16 GB of RAM, and can handle a certain amount of web traffic.Now, instead of this single server, if I run multiple Docker containers with individual instances of Nginx (app server) and MySQL (DB server) and load balance between the application and database containers, will it be able to handle the same amount of traffic as the single server did, or will it be less (performance-wise)?How will the performance be if I use a virtual server like EC2 or a Digital Ocean droplet with the same hardware configuration instead of a physical server? | Efficiently using multiple docker containers in a single host
There is no direct way (it is on the todo list of the echo nginx module), but this solution seems fine: https://serverfault.com/questions/404626/how-to-output-variable-in-nginx-log-for-debugging | I'm configuring nginx and debugging the config file. How can I print something from the config file directly to the log file?For example:
location ..... {
to_log "some string";
} | In nginx config, how to show something to log file directly in the config file? |
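One way to approximate this without the echo module (a sketch; debug_fmt and $my_note are made-up names) is to log a custom variable through log_format:
log_format debug_fmt '$remote_addr "$request" note="$my_note"';
server {
    set $my_note "some string";
    access_log /var/log/nginx/debug.log debug_fmt;
}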
This works for me:
upstream unicorn {
server unix:/tmp/unicorn.example.sock fail_timeout=0;
}
server {
listen 80;
listen localhost;
server_name www.example.com;
keepalive_timeout 5;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# this is required for HTTPS:
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
}
and in my ./config/unicorn.rb file:
# Listen on a Unix data socket
listen "/tmp/unicorn.example.sock", :backlog => 64 | I have set up my app to run on nginx and unicorn as described in Railscasts episode #293.When I try to redirect, such as:
class PostsController < ApplicationController
def show
redirect_to posts_path, :notice => "Test redirect"
end
end
I get redirected to http://unicorn/posts instead of http://mydomain.com/posts.Here's my nginx.conf for the app:
upstream unicorn {
server unix:/tmp/unicorn.scvrush.sock fail_timeout=0;
}
server {
listen 80 default deferred;
# server_name example.com;
root /var/apps/current/public;
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
keepalive_timeout 5;
} | Rails redirect fails on nginx & unicorn setup |
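The directive doing the work in the accepted config above is proxy_set_header Host $http_host; - without it, nginx passes the upstream name (unicorn) as the Host header, which Rails then uses to build redirect URLs. A minimal sketch of the fix applied to the question's @unicorn block:
location @unicorn {
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
}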
I did find a solution: my problem was with a php.ini setting; how lame of me to overlook it.
I fixed it by adding these to the Docker entry file:
sed -i -e "s/upload_max_filesize = 2M/upload_max_filesize = 64M/g" $PHP_INI_DIR/php.ini
sed -i -e "s/post_max_size = 8M/post_max_size = 64M/g" $PHP_INI_DIR/php.ini
sed -i -e "s/memory_limit = 128M/memory_limit = 256M/g" $PHP_INI_DIR/php.ini | I am using Laravel + Twill inside Docker containers with PHP 7.4.3 FPM + nginx. Everything works fine apart from when I am trying to upload high-resolution images. If I upload an image of 3000x3000px there is no problem; as soon as I try the same with a higher resolution (4500x4500px) I get the following error:
message: "stream_copy_to_stream(): read of 8192 bytes failed with errno=21 Is a directory"
exception: "ErrorException"
file: "/var/www/backend/vendor/league/flysystem/src/Adapter/Local.php"
line: 159
trace: [{function: "handleError", class: "Illuminate\Foundation\Bootstrap\HandleExceptions", type: "->"},…]
0: {function: "handleError", class: "Illuminate\Foundation\Bootstrap\HandleExceptions", type: "->"}
function: "handleError"
class: "Illuminate\Foundation\Bootstrap\HandleExceptions"
type: "->"
1: {file: "/var/www/backend/vendor/league/flysystem/src/Adapter/Local.php", line: 159,…}
file: "/var/www/backend/vendor/league/flysystem/src/Adapter/Local.php"
line: 159
function: "stream_copy_to_stream"
Is it a php-fpm configuration issue? Is it a problem with PHP? Has anyone had similar problems? | High resolution image upload fails stream_copy_to_stream(): read of 8192 bytes
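The sed lines above amount to setting these php.ini values (shown here directly, in case you prefer editing the file over patching it in the Docker entrypoint):
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M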
What you need to do is set the basePath for Kibana to /kibana. See the URL below:
https://www.elastic.co/guide/en/kibana/current/settings.html
You are looking to configure server.basePath to /kibana. That will sort out the reverse-proxying issues, and you can keep the RabbitMQ one directly on root /.
You can also set the SERVER_BASEPATH environment variable in your Kibana pod, and it will automatically pick the base path up from that variable. | On my AKS cluster I have an Nginx ingress controller that I use to reverse proxy my Kibana service running on AKS. I want, however, to add another HTTP service through the ingress: the RabbitMQ management console.I'm unable to get both to work with the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-aegis
namespace: dev
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- dev.endpoint.net
secretName: dev-secret
rules:
- host: dev.endpoint.net
http:
paths:
- path: /
backend:
serviceName: kibana-kibana
servicePort: 5601
- path: /rabbit
backend:
serviceName: rabbitmq
servicePort: 15672
Kibana works fine at root, but RabbitMQ fails to load with a 503 for any path except /. If RabbitMQ's path is / then it works fine, but then Kibana won't run.I assume this is because internally they both sit at the root (e.g. localhost:15672), so it redirects to / on dev.endpoint.net.How do I have multiple services like Kibana and RabbitMQ running from one endpoint? | Kubernetes nginx ingress rabbitmq management and kibana
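For reference, the equivalent kibana.yml settings would be the following (server.rewriteBasePath applies to newer Kibana versions; verify against your version's docs):
server.basePath: "/kibana"
server.rewriteBasePath: true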
These are three forms of regular expression location blocks. See this document for details.The ~* operator makes the test case-insensitive.The . character has a special meaning in a regular expression: it matches any single character (much like ? does in shell globs).The \. sequence (an escaped dot) matches a literal dot character. This means the third example is probably not what you want (assuming you are attempting to match URIs ending with .png).See this document for more on regular expressions. | Is there a difference between the 3 following directives?
location ~* \.(png)$ {
expires max;
log_not_found off;
}
location ~ \.(png)$ {
expires max;
log_not_found off;
}
location ~ .(png)$ {
expires max;
log_not_found off;
}
Thank you in advance for having taken the time thus far. | Nginx "location ~ ." vs "location ~* \."
Okay, looks like I found the answer... Two things about the backend servers, at least for the above scenario when using IP addresses:
1. a port must be specified
2. the port cannot be :80 (according to @karliwsn the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note).
Backend server block(s) should be configured as follows:
server {
# for your reverse_proxy, *do not* listen to port 80
listen 8080;
listen [::]:8080;
server_name 01.02.03.04;
# your other statements below
...
}
and your reverse proxy server block should be configured like below:
upstream backend {
server 01.02.03.04:8080;
}
server {
location / {
proxy_pass http://backend;
}
}
It looks as if, when a backend server is listening on :80, the reverse proxy server doesn't render its content. I guess that makes sense, since the server is in fact using the default port 80 for the general public.Thanks @karliwson for nudging me to reconsider the port. | I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my IP address directly into "proxy_pass", the proxy works:
server {
location / {
proxy_pass http://01.02.03.04;
}
}
When I visit my proxy computer, I can see the content from the proxied IP,
but when I use an upstream directive, it doesn't work:
upstream backend {
server 01.02.03.04;
}
server {
location / {
proxy_pass http://backend;
}
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream IP address.Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors; it just doesn't proxy. | Nginx Reverse Proxy upstream not working
Here's what's going on with your SPF record.Go to this link and change the DNS server to Google Public DNS (8.8.8.8): https://www.unlocktheinbox.com/dnstools/spf/luckeo.fr/
The result of your SPF will be:
v=spf a mx ip4:176.58.101.240 ~all
Now change it to DNS Advantage (156.154.70.1). The result of your SPF will be:
v=spf1 a mx ip4:176.58.101.240 ~all
Notice the difference: v=spf vs v=spf1. So your DNS hasn't propagated yet, and depending on how the receiving email server looks up your DNS records, you're running into issues. Wait 24 hours and if you're still having issues, reply back. | I have configured Postfix with SPF and DKIM but all emails are marked as spam.Here is my domain.db (I use bind9):
...
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; p=ABCD" )
I verify with:
host -t TXT mail._domainkey.domain.com
I receive (OK):
mail._domainkey.domain.com descriptive text "v=DKIM1\; k=rsa\; " "p=ABCD"
I've also checked what could be the problem on email-tester.com, and I get 10/10; DKIM also seems correctly installed.But when I check the content of an email, I see:
...
dkim:pass
dkim:pass
SPF:pass
...
X-Spam-Report:
* -0.0 NO_RELAYS Informational: message was not relayed via SMTP
* -0.0 NO_RECEIVED Informational: message has no Received headers
* 0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
X-Spam-Status: No, score=0.0 required=5.0 tests=NO_RECEIVED,NO_RELAYS,
T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0
Any idea?----- UPDATE -------After adding this in master.cf:
-o receive_override_options=no_header_body_checks,no_unknown_recipient_checks,no_milters
Here is the new email content:
...
dkim:pass (now there is only one: OK)
spf:pass
...
X-Spam-Report:
* -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED autolearn=ham
autolearn_force=no version=3.4.0
which seems better, but the email is still marked as spam, grrr | DKIM : Signature header exists but is not valid
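For completeness, a well-formed SPF TXT record in bind zone-file syntax looks like this (using the IP from the answer; note the required "1" in v=spf1):
@ IN TXT "v=spf1 a mx ip4:176.58.101.240 ~all"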
There are a lot of ways to approach this. For me, this is the simplest one, without having to install extra software or subscribe to dynamic DNS sites.I don't know if it's a temporary problem, but ipinfo.io didn't work for me, so I use another service in the solution. Change it if desired.First, on your local PC, write the IP you currently have in the remote /etc/nginx/sites-available/default (the one you called publlic_ip) to /tmp/oldIP. Just the IP, like:
20.20.20.20
This only needs to be done once.
Then, save the following script wherever you want, provide it execution permissions and add it to cron:#!/bin/bash
VPS_IP= #fill this
VPS_USER= #fill this
MyOldIP=$(cat /tmp/oldIP)
MyIP=$(curl http://bot.whatismyipaddress.com)
if [ $MyOldIP != $MyIP ] ; then
ssh $VPS_USER@$VPS_IP "sudo sed -i 's/$MyOldIP/$MyIP/' /etc/nginx/sites-available/default" \
&& ssh $VPS_USER@$VPS_IP sudo service nginx restart
fi | I have created a droplet in DigitalOcean; there is a vps_ip I can use.In my home, the way I connect to the internet is: router + modem + ADSL.I built a WordPress site on the local PC in my home.The network status when connected to the web is as below.WAN:
MAC:ommitted for privacy
IP :public_ip PPPoE
subnet mask:255.255.255.255
gateway:153.0.68.1
DNS:114.114.114.114 223.5.5.5
LAN
MAC:ommitted for privacy
IP :192.168.1.1
subnet mask:255.255.255.0
DHCP:active
ifconfig
inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
My goal: let the public access my WordPress site on the home PC via the vps_ip DigitalOcean gave me.Thanks to CrypticDesigns: https://www.digitalocean.com/community/questions/how-to-map-my-local-ip-192-168-1-100-with-my-vps_ip?I have solved the problem with the help of CrypticDesigns.In my local network: on my router, port-forward port 80 and private IP 192.168.1.100 to the outside of the network.In the public droplet system:
sudo apt-get install nginx
sudo nano /etc/nginx/sites-available/default
server {
listen *:80;
server_name vps_ip;
rewrite .* http://publlic_ip$request_uri permanent;
}
sudo service nginx restart
Anyone who goes to the vps_ip can browse my WordPress site now.The catch is that my IP address on the WAN changes about every 30 minutes.What happens 30 minutes later? The public IP changes, and the configuration file /etc/nginx/sites-available/default stops working.I want to automate the fix. My idea for getting it done:1. On my home PC, the command curl ipinfo.io/ip can get my public IP. Write it into crontab to run every 30 minutes.2. Send it to the VPS, change the value of the public IP in /etc/nginx/sites-available/default, and restart nginx.How do I express these two steps as shell commands to make the process automatic? | How to map my private ip which change dynamically onto my vps_ip?
My answer is brief: there is no standard for this. | I'm currently using the WebDAV protocol for sharing files and writing my own WebDAV client.I want to implement fetching thumbnail previews for a specified file located on a web server (nginx/Apache/other).
The web server would have to generate the thumbnail/preview image and return it with a PROPFIND request or in some other way.Is there any PROPFIND property or response header, described in an RFC and supported by web servers such as nginx/Apache, for this? | WebDAV Specification thumbnail/preview file image
As they say in the documentation:
Can I use uWSGI's HTTP capabilities in production?
If you need a load balancer/proxy it can be a very good idea. It will
automatically find new uWSGI instances and can load balance in various
ways. If you want to use it as a real webserver you should take into
account that serving static files in uWSGI instances is possible, but
not as good as using a dedicated full-featured web server. If you host
static assets in the cloud or on a CDN, using uWSGI’s HTTP
capabilities you can definitely avoid configuring a full webserver.
So yes, uWSGI is slower than a traditional web server.Besides performance, in a really basic application you're right: uWSGI can do everything the webserver offers. However, should your application grow/change over time, you may find that there are many things the traditional webserver offers which uWSGI does not.I would recommend setting up deploy scripts in your language of choice (such as Fabric for Python). I would say my webserver is one of the simplest components to deploy and set up in our application stack, and the least "needy" - it is rarely on my radar unless I'm configuring a new server. | It seems that uWSGI is capable of doing almost anything I am using nginx for: serving static content, executing PHP scripts, hosting Python web apps, ...
So (in order to simplify my environment) can I replace nginx + uwsgi with uwsgi without loss of performance/functionality? | Replacing nginx with uwsgi |
Naturally, it just took me posting the question to stumble onto the answer. To provide info for anyone else searching on this problem, I'll post some details here.The relevant lines from the nginx.conf:
user www-data; # in order to have nginx not run as root
passenger_default_user www-data; # likewise for passenger
root /opt/foo/app/current/public;
The key at this point is to make sure that the application files are owned by www-data, in particular config/environment.rb, because apparently Passenger looks at its owner to determine who to run as. This might mean that the passenger_default_user entry is irrelevant? But it's good to have it there as documentation of intent anyway.Finally, make sure that the parent directories of your app are all reachable by www-data -- in my case the system default setup had left a directory 0700, which I'd missed. | I've been researching this one and found references to similar problems here and there, but none of them has led to a solution yet. I've installed Passenger (2.2.11) and nginx (0.7.64), and when I start things up and hit a Rails URL, I get an error page informing me of a load error:
no such file to load -- /path/to/app/config/environment
From what I've found online this appears to be some sort of user/permissions error, but I've tried all the logical fixes: I've made sure that /config/environment.rb is not owned by root, but by a webapp user. I've tried setting passenger_default_user, I've tried setting passenger_user_switching off. I've even tried setting the nginx user, though that shouldn't matter much. I've gotten some differing results, but nothing's actually worked. I'm hoping someone may have the magical combination of settings and permissions for this. I may try backing down to an earlier version of Passenger, because I've never had this issue before; it's been a little while since I set up Passenger though.Thanks for any suggestions.EDITED: See below for the answer I stumbled on. | Passenger problem: "no such file to load" -- /config/environment
As mentioned by Richard Smith in the comment, 559 in the nginx log stands for:the number of bytes in the HTML response that Nginx sent to the browserSource: http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format or, as specified in the docs:$body_bytes_sent: number of bytes sent to a client, not counting the response header; this variable is compatible with the "%B" parameter of the mod_log_config Apache module | I'm getting a "502 559" error in my nginx error logs. I know that the 502 means "bad gateway". What does the 559 mean? | What does the nginx 502 559 error code mean?
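For orientation, $body_bytes_sent is the field right after the status code in nginx's default combined log format, which is why the two appear together as "502 559":
log_format combined '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';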
I've updated the CORS configuration in AWS to the following and it worked; expressed in the standard S3 CORS XML form, the rule is:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://localhost:3000</AllowedOrigin>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
| I'm new to AWS and used Elastic Beanstalk to deploy my REST API (api.example.com) in Node, and an S3 bucket with CloudFront for my static website (example.com) in React.When calling the API endpoints from the website, the browser gives a CORS error. How can I prevent that?I'm using the following code in the Node project for CORS:
app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*')
res.header('Access-Control-Allow-Credentials', true)
res.header('Access-Control-Allow-Methods', 'POST, GET, OPTIONS')
res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept')
next();
});
Edit (based on arudzinska's comment): I also configured CORS on the bucket with (again in the standard S3 CORS XML form):
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
and the server is on nginx.Thanks in advance for any help.P.S. - I have also seen a few posts giving the reason as a bug in the project code, but all the endpoints work correctly in POSTMAN. | CORS on AWS Elastic beanstalk
You need to tell NGINX to forward the Host header upstream to Kong. You can do that with proxy_set_header like so:
location /api {
proxy_pass http://kong;
proxy_set_header Host $host;
} | I wanna use Kong as my API Gateway, running in a Docker container. Each request must go first through a NgInx server and if the requested uri matches example.com/api it must result in the api, registered inside Kong.To achieve this I've added my API to Kong with the following command:curl -i -X POST --url ipnumber:8001/apis -d 'name=my-api' -d `enter code here`'upstream_url=http://httpbin.org' -d 'hosts=example.com' -d 'uris=/api/my-api'By executing the following command I get the correct answer, so I suppose Kong is working correctly.curl -i -X GET --url ipnumber:8000/api/my-api --header 'Host: example.com'My NgInx configuration looks like this:upstream kong {
server 127.0.0.1:8000;
}
location /api {
proxy_pass http://kong;
}
In my hosts file I've configured the IP of the NgInx server with the domain example.com.The problem is: when I browse to example.com/api/my-api or even example.com/my-api, the result is a 404 error page from NgInx.When I browse to ipnumber:8000/api/my-api it results in a message from Kong saying there's no API matching the given values, which is correct because the hostname isn't example.com.I've been looking at this problem for a long time but have not been able to fix it. I also looked at https://getkong.org/docs/0.10.x/configuration/#custom-nginx-configuration-embedding-kong, but I'm not sure if I have to do it that way because I already have my own nginx configuration.Thanks in advance for your feedback. | NgInx as reverse proxy with Kong
The default nginx config is in /etc/nginx/nginx.conf. By default that file includes the following lines (at least that is the case on RHEL-based and Arch-based distros):
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
Thanks to the root in the server section, nginx will keep serving the files under that directory until you comment out these lines. This happens just after conf.d is loaded (as noted in the snippet above).No matter what you change inside conf.d, that last part of the file will still be loaded, since it is that file (/etc/nginx/nginx.conf) that loads the configs in conf.d.And yes, you definitely should comment out that default server if you plan to use nginx. | All configurations are being included and the conf test passes too, but Nginx is still serving the default HTML from /usr/share/nginx/html instead of the root location from my conf file in the conf.d directory.conf file from the conf.d directory:
upstream django {
server unix:///tmp/server.sock;
}
server {
listen 80;
server_name server.test.com;
access_log /srv/source/logs/access-nginx.log;
error_log /srv/source/logs/error-nginx.log;
location / {
uwsgi_pass django;
include /srv/source/conf/uwsgi/params;
}
location /static/ {
root /srv/source/;
index index.html index.htm;
}
location /media/ {
root /srv/source/media/;
index index.html index.htm;
}
# alias favicon.* to static
location ~ ^/favicon.(\w*)$ {
alias /srv/source/static/favicon.$1;
}
} | Nginx including conf from conf.d but still loading default settings |
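In practice, the fix the answer describes is to comment out the default server block in /etc/nginx/nginx.conf itself, e.g.:
#server {
#    listen 80 default_server;
#    listen [::]:80 default_server;
#    root /usr/share/nginx/html;
#    ...
#}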
I was able to get this running by using a reverse-proxy setting in nginx.I modified my URL to be as follows:
http://10.x.x.120/logs
And then made the following changes to the nginx.conf file:
location ^~ /logs {
proxy_pass http://10.x.x.120:8500;
}
Now whenever my application makes an HTTP POST request to http://10.x.x.120/logs, it is forwarded to http://10.x.x.120:8500.Voila! Logstash gets the data because it is listening on port 8500. | I have configured Logstash 1.5.2 to consume http input on a Linux machine.This is my Logstash input configuration:
input {
http {
host => "10.x.x.120"
port => "8500"
}
}
I can send data to Logstash by using curl -XPOST from the Linux machine.But when I make a $http.post(url, postData); request from my AngularJS application I get the following error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource
I have hosted my application on the same Linux machine using nginx within a Docker container.
I have tried to configure nginx to allow CORS by adding the following lines to nginx.conf:
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';But still the error persists.Also when I hithttp://10.x.x.120:8500from my browser address bar I get 'ok'.Any help is highly appreciated.
Thanks. | CORS error while making post request to logstash |
An app server such as Gunicorn or uWSGI (used to host the Flask application) is used together with nginx. nginx is a reverse proxy server which acts as a middleman. This helps with load balancing: handling multiple requests efficiently by distributing the workload across resources. On top of this, supervisor is just used to monitor and control the server processes (Gunicorn or uWSGI in our example).
From my understanding, the web server that comes with Flask (the Werkzeug development server) is not production-ready and should be used for development purposes only. | I usually run my Flask applications with uWSGI and nginx in front of it.But I was thinking that the same could be achieved with just supervisor and nginx, so I googled around and found a lot of posts on how to set up, and the benefits of, the uWSGI-supervisor-nginx stack. I've decided to turn to SO, risking getting axed online for such a question.So what are the benefits of running a Flask application behind uWSGI, supervisor and nginx?
Why does apparently no one run Flask applications with only supervisor? | Why use uWSGI and supervisor with a Flask app, and not just supervisor? |
I don't have knowledge of the programmers' state of mind when they made this decision, but yes: a library is not going to be used in just a well-defined scenario or two; it's going to be used however someone coded main() to call it. If you really want to disable an option, then compiling it out seems to me to be the best and safest route. | In case you missed it - an OpenSSL vulnerability in the implementation of the TLS Heartbeat Extension has been making the rounds. For more information see http://heartbleed.com/.One of the possible mitigation steps is to recompile OpenSSL with the -DOPENSSL_NO_HEARTBEATS option to disable the vulnerable extension.Why does a system administrator have to recompile the library to disable an extension? Why isn't there a configuration option? It would have made short-term remediation much easier.My best guess is that this is a high-performance library, and a library by its nature does not have a configuration file the way services do. Searching through the Apache mod_ssl and Nginx HttpSslModule documentation, I didn't see anything that would allow me to disable the Heartbeat functionality via configuration. Shouldn't this be an option?-EDIT-To clarify, everyone affected needs to revoke and replace affected SSL certificates. The primary problem here is that the vulnerability allowed anyone to pull 64 KB of application memory from a vulnerable server. This could have easily been addressed with a configuration option. Having to revoke and replace SSL certificates is a secondary consequence of this vulnerability, among other concerns with regard to what type of data (usernames, passwords, session info...) could have been leaked from application memory.-EDIT2-To clarify - by configuration I don't mean the configuration when compiling OpenSSL. I mean configuration in the web server. For instance, with Apache mod_ssl I can configure a range of options that affect SSL, such as the cipher suites available. | Compile Flags vs Configuration Options - TLS Heartbeat
There's a similar question on Server Fault. Here's their answer:
server {
listen 80;
listen 443 default ssl;
# other directives
}
The ssl parameter is included as of 0.7.14, which means we can't use it, but it's a good solution if you're on a newer version of nginx. | When configuring nginx for a site that has SSL, the examples I find online basically duplicate the location settings. Most examples only have the default root location, so it's not that big of a deal, but when you have a few locations and rewrite rules in place, duplicating this configuration gets messy to maintain.I've considered proxying the SSL requests to localhost to get around this, but that's kind of ugly. I've also considered using file includes, but the location configs for this site should be in 1 file since they are related.Any suggestions?Edit: We're using nginx version 0.6.32. | How can I reuse server configurations in nginx?
You can use http://nginx.org/r/proxy_pass to silently redirect the user to a different page, without changing the URL that's shown to the user in the Location field of the browser.To check whether the user is logged in, you can install an error handler via http://nginx.org/r/error_page to redirect the user only if your normal page returns an error (e.g., if the normal proxy_pass results in a 403 Forbidden response, then redirect the user's request to the alternative proxy_pass upstream as per the error handler). | I have my web platform built on Ruby on Rails at https://example.com.My landing and about pages are hosted in a WordPress on another host at https://examplecms.com.What I would like to achieve is that users visiting https://example.com get https://examplecms.com masked, except when they are logged in, since my platform's dashboard is routed at the root path /.What I am trying to avoid is the user seeing https://examplecms.com in the URL.I've tried a couple of tricks so far:In my home/index controller action I redirected to https://examplecms.com if the user is not signed in. But this fails, as it still shows the CMS URL in the user's browser.Using an iframe in the render of the home/index view pointing to the CMS site. It works, because it still presents my site URL correctly, but it seems kind of fishy; also deep linking and navigating do not seem to work correctly.I have been thinking of doing it at the proxy-server level, using .htaccess or even DNS strategies, but I can't come up with a way for these strategies to detect whether the user is signed in or not.Any ideas?ThanksUpdate:
Stack: Ubuntu, Ruby on Rails, Nginx + Passenger, Amazon EC2 + Cloudflare DNS | How to mask my landing page when user not signed in?
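A sketch of the error_page technique the answer outlines (the upstream addresses are placeholders; it assumes the Rails app returns an error such as 403 for anonymous visitors to /):
location = / {
    proxy_pass http://127.0.0.1:3000;        # rails
    proxy_intercept_errors on;
    error_page 403 = @cms;
}
location @cms {
    proxy_pass https://examplecms.com;
    proxy_set_header Host examplecms.com;
}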
I was able to get your example to work by simply omitting the =404:
location / {
try_files $uri $uri/ /?q=$uri.md;
}
location ~ \.(gif|jpg|png)$ {
try_files $uri /?i=$uri;
}
Quoting the manual:
Checks the existence of files in the specified order and uses the first found file for request processing; [...] If none of the files were found, an internal redirect to the uri specified in the last parameter is made.
You want that internal redirect, which only happens if none of the files are found, but =404 is always found. | I'm writing a simple CMS in PHP. Pages (markdown files) and images are accessed like this (respectively):
example.org/?q=about.md
example.org/?i=photo.jpgOptionally, I would like to use clean URLs with Nginx, to make the same requests look like this:example.org/about
example.org/photo.jpgI rather usetry_filesthanifandrewritebut after experimenting for hours, I can't get it to work.location / {
try_files $uri $uri/ /?q=$uri.md =404;
}
location ~ \.(gif|jpg|png)$ {
try_files $uri /?i=$uri =404;
}I don't understand why the above code doesn't work (urls with argument work fine but the pretty ones give 404 errors).Is there something wrong with passing the folder name as an argument using$uri?Do I need to escape some weird characters (apart from my landlord)?To be thorough, I'm running Nginx 1.6.2, using the standard nginx.conf.
Here's the rest of my server block:server_name example.org;
root /srv/example/;
index index.html index.htm index.php;
(...)
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
and fastcgi.conf is also standard. | Nginx clean urls, how to rewrite a folder as an argument with try_files
Both responses are valid according to HTTP 1.1, so you need to fix your client code so that it can handle both. It is a bad idea to try to fix the server so that it behaves in a way that does not trigger a bug in the client.
The next version of nginx may behave differently; your users may even have proxies that change the transfer, maybe only when they roam and use a different provider.If you want to do some fingerprinting on the headers, the ETag header may help you, as the ETag should stay constant when the content of the response has not changed, regardless of the transfer.The server typically sends in chunks when it serves a dynamic page, because then it does not need to create a buffer for the whole page and wait until all of the page is generated.The server often sends the response in one go if it already has the whole buffer, for example because it is in cache or the content is a file and is not too big. Sending in one go is more efficient; on the other hand, an extra copy of the data to buffer the output needs more memory and is less efficient. So the server may even decide this according to the available memory. | I'm building an API on Rails version 4.1.7/Nginx that responds to requests from an iOS app. We're seeing some weird caching on the client, and we think it has something to do with a small difference in the response that Rails is sending back. My questions...1) I want to understand why, for the exact same request (with only the Authorization header value changed), Rails sends back transfer-encoding: chunked sometimes and a Content-Length at other times? I thought that maybe it had something to do with the response size, but in the example responses whose headers I've pasted below, the data returned in the body is EXACTLY the same.2) Is there a way to force it to use Content-Length? We think that will fix our caching issues in our iOS app.
Response #1:
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:31 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: dd36f139-1986-4da6-9645-4438d41e74b0
X-Runtime: 0.123865
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked
Connection: keep-alive
Response #2:
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:36 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 0cfd7705-157b-41b5-aa36-739bc6f8302e
X-Runtime: 0.092672
X-XSS-Protection: 1; mode=block
Content-Length: 2234
Connection: keep-alive | When does Rails respond with 'transfer-encoding' vs. 'content-length'? |
So it turns out that my health check was set up to hit example.com rather than the IP address: my bad.For the record, I discovered this by adding the $host variable to my log format (see end of line):
log_format debug_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" host:"$host"';
access_log /var/log/nginx/access.log debug_format;
Cheers anyway. | At the moment my AWS health check is hitting my server pretty relentlessly:
54.228.16.40 - - [14/Jan/2014:10:17:22 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
54.248.220.40 - - [14/Jan/2014:10:17:24 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
54.232.40.110 - - [14/Jan/2014:10:17:25 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
54.241.32.78 - - [14/Jan/2014:10:17:26 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
54.245.168.46 - - [14/Jan/2014:10:17:28 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
54.251.31.174 - - [14/Jan/2014:10:17:28 +0000] "GET / HTTP/1.1" 301 178 "-" "Amazon Route 53 Health Check Service"
...And I'd like to configure NginX to not log any requests with a user agent of"Amazon Route 53 Health Check Service".My current attempt looks as follows:# default server for forwarding all requests over to main www domain
server {
listen 80 default_server;
server_name _;
return 301 $scheme://www.example.com$request_uri;
}
# server configured to catch aws health check requests
server {
listen 80;
server_name 12.345.67.89;
location / {
if ( $http_user_agent ~* 'Amazon Route 53 Health Check Service' ) {
access_log off;
return 200 'Service OK';
}
}
}
# actual application server
server {
listen 80;
server_name www.example.com;
location / {
...
}
}This looks good to me, and in fact when I CURL the same address that the health check is set up to hit:curl --user-agent "Amazon Route 53 Health Check Service" http://12.345.67.89:80/I get what I'd expect:Service OKAnd my request doesn't end up in the logs.However, my logs continue to be swamped by these requests when they come from the actual AWS health check.Any ideas on where I'm doing wrong?Thanks | How can I stop nginx logging Amazon Route 53 Health Check requests? |
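As an aside, newer nginx versions (>= 1.7.0) can drop these entries without a second server block, via a map and the if= parameter of access_log (untested sketch; $loggable is a made-up variable name):
map $http_user_agent $loggable {
    "~*Amazon Route 53 Health Check" 0;
    default 1;
}
access_log /var/log/nginx/access.log combined if=$loggable;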
How about other PHP files you call directly? For example an info.php with just a phpinfo(); inside?I ask this because your server conf seems to be using try_files just right, but I'm not sure you're serving PHP scripts right.Is your FastCGI pool listening on that socket? Are you sure it isn't listening on port 9000, for example? In any case, I prefer to define an upstream in the http section and use it later in the server section:
http {
upstream phpbackend {
server unix:/var/run/php5-fpm.sock;
}
...
server {
...
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass phpbackend;
fastcgi_param SCRIPT_FILENAME /var/www/domain.biz$fastcgi_script_name;
}
}
}
Are you sure your php.ini has cgi.fix_pathinfo set to false? | I am trying to convert a trivial htaccess file to Nginx and can't make it work. It returns a 404 error.
Here is the htaccess content:
RewriteCond %{REQUEST_FILENAME} !-f
Here is htaccess content:RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [L]
Here is my current nginx config:
server {
listen 80;
server_name domain.biz;
root /var/www/domain.biz;
charset utf-8;
autoindex off;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME /var/www/domain.biz$fastcgi_script_name;
}
} | Rewrite all requests for non existing files to index.php with try_files with Nginx |
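Note that the question's try_files line is already the idiomatic nginx equivalent of the three-line htaccess rule; the rest of the answer's diagnosis is about the PHP handler, not the rewrite:
location / {
    try_files $uri $uri/ /index.php?$args;
}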
Update your Lighthouse to version 2.4 onwards.On prior versions the webp extension was not handled correctly: https://github.com/GoogleChrome/lighthouse/issues/3364If that's not working, you might need to file an issue on GitHub. | I am trying to improve my performance score on Google Lighthouse. It was recommending the use of next-gen image formats, including WebP, so I implemented serving WebP in place of images where the request Accept header includes webp, by using Nginx config something like this...
map $http_accept $webp_suffix {
default "";
"~*webp" ".webp";
}
server {
root /www/;
listen 80 default_server;
index index.html;
location ~* ^\/images\/ {
expires max;
add_header Vary Accept;
try_files $uri$webp_suffix $uri =404;
}
location / {
try_files $uri $uri/index.html =404;
}
error_page 404 /404.html;
}
Now the page loads much faster, and the WebP method is working well, with fallback to the original image where no WebP exists or the browser does not support it. However, the Lighthouse report is showing an error, so I can't be sure I have implemented everything right. What does this error mean? | Google Lighthouse error loading webp images
You already provided all the info needed for the answer: "...App Engine determines this code from the client's IP address". So they actually look at the IP from which the connection was made.Since your proxy sits between the client and App Engine, App Engine sees connections coming from the proxy IP. There is no way around it. | I'm using Nginx as a proxy to filter requests for my AppEngine Java application. GAE's location services (X-AppEngine-country header) work great without the proxy, but now GAE is using the proxy server's IP as the client IP, and the X-AppEngine-country header is quite useless - it returns "ZZ" as the country code.I know that the header is determined by the client IP, as mentioned here:"X-AppEngine-Country -
Country from which the request originated, as an ISO 3166-1 alpha-2 country code. App Engine determines this code from the client's IP address."The problem is that I don't know from what data this header is derived. I used Nginx modules to set the client IP in X-Forwarded-For, Remote_Addr and Http_Client_IP headers, but apparently the X-AppEngine-country header is derived from somewhere else.How can I provide GAE the client IP so it can retrieve the correct country code from the original IP? | Using Google App Engine's locations services with proxy |
The problem was with session existence; see the edits in the question for more details. | I have a client side that sends a request that needs a long processing time; the client sends the request via ajax. Once the request is accepted on the server, the client redirects to another page; this is accomplished with fastcgi_finish_request (I am running php-fpm).LongWork.php:client.js:
$.ajax({
url: "...",
data: {},
success: function() {
top.location.href="next_page.php"
}
});
The ajax gets sent and the success callback causes redirection to next_page.php as expected.But then the page halts and I do not get any service until the sleep is finished. It looks like my connection is waiting for the same php-fpm process to finish.I am running nginx with php-fpm; any idea why this happens?EDIT:After more investigation I found that the cause of this behavior is that I have an active session (from the Facebook SDK). When I destroy the session in LongWork.php:Can you please reflect on this solution?Should I do something different from session_destroy()?EDIT:following Lachlan Pease's comment, I have switched session_destroy with session_write_close | fastcgi_finish_request creates hung connection when open session exists
Be aware of using of spawning multiple processes for NGINX. Every process handles its own cache. Without an additional layer, it is not possible to access to a cache from different nginx process.This answer was posted as aneditto the questionFlask-Caching use UWSGI cache with NGINXby the OPewrounder CC BY-SA 4.0. | The UWSGI is connected to the flask app per UNIX-Socket:NGINX (LISTEN TO PORT 80) <-> UWSGI (LISTER PER UNIX-SOCKER) <-> FLASK-APPI have initalized a uwsgi cache to handle global data.
I want to handle the cache with python package flask-caching.I am trying to init the Cache-instance with the correct cache address. There seems to be something wrong.
I think, that the parameters for app.run() are not relevant for uwsgi.If I am setting a cache entry, it return always None:app.route("/")
def test():
cache.set("test", "OK", timeout=0)
a = cache.get("test")
return a
main.py:
from flask import Flask
from flask_caching import Cache
app = Flask(__name__)
# Check Configuring Flask-Caching section for more details
cache = Cache(app, config={'CACHE_TYPE': 'uwsgi', 'CACHE_UWSGI_NAME':'mycache@localhost'})
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
uwsgi.ini:
[uwsgi]
module = main
callable = app
cache2 = name=mycache,items=100
nginx.conf:
server {
listen 80;
location / {
try_files $uri @app;
}
location @app {
include uwsgi_params;
uwsgi_pass unix:///tmp/uwsgi.sock;
}
location /static {
alias /app/testapp/static;
}
}
I am working with the Docker build from https://github.com/tiangolo/uwsgi-nginx-flask-docker. The app is working, except for the cache. | Flask-Caching use UWSGI cache with NGINX
So, eventually what I found, after trying all the answers and more, was that while site.conf #1 was working for logged-in users for PDF files with URLs starting with https://, it was not working for previous uploads that still had http:// in the URL. I had to update the wp_posts table to https://example.com/wp-content/uploads/ and it was finally protecting (only) the PDF files from direct access.Of course this is a rough workaround, and keep in mind that this method will also protect PDF files that are otherwise publicly available.Thanks for all the help. | I am running a website and I would like to protect all the PDF files inside the WordPress uploads folder from external access and hotlinking.I am already using user authentication to protect the posts attached to these files, but the user authentication doesn't protect the direct link to the PDF file or the indexing of these files by search engines.I would prefer not to change the default uploads directory, since there are over 1000 PDFs with random filenames attached to various posts with different dates.The site is hosted on a Debian VPS with Nginx, php5-fpm, and MariaDB.So far, I have tested the following:
site.conf #1:
location /wp-content/uploads/ {
location ~* \.(pdf)$ {
valid_referers blocked example.com *.example.com;
if ($invalid_referer) {
return 301 https://example.com/services/login-error.html;
}
}
}
site.conf #2:
location /wp-content/uploads/ {
location ~* \.(pdf)$ {
valid_referers blocked example.com *.example.com;
if ($invalid_referer) {
deny all;
return 403;
}
}
}
Unfortunately, none of the above configurations work as expected. They block external access, but they also redirect the authenticated user to either 403 or 301 errors.Any help or suggestion would be appreciated.Thanks. | PDF files protection from external access. Accessible only to authenticated users. WordPress uploads directory
Thanks to Alexander Ushakov for providing the answers.The file with the readable permission had been cached by php-fm. Restarting php-fm meant that the cache was cleared and the web server then served the new file with the restricted access. | Perhaps i'm missing something extremely basic, but how is it that my web server is able execute and serve content from php files that have permission 000?Here's the file in question:http://178.62.125.162/test.phpLocation is:/usr/share/nginx/html/wordpress/test.phpHere's the ls:---------- 1 deploy deploy 21 May 22 09:40 test.phpnginx.conf has line:user www-data;So it's not running as root or anything.ps aux | grep [n]ginx
root 30223 0.0 0.1 85876 1364 ? Ss May21 0:00 nginx: master process /usr/sbin/nginx
www-data 30224 0.0 0.1 86172 1796 ? S May21 0:03 nginx: worker process
www-data 30225 0.0 0.1 86172 1796 ? S May21 0:03 nginx: worker process
www-data 30226 0.0 0.2 86516 2732 ? S May21 0:00 nginx: worker process
www-data 30227 0.0 0.1 86172 1796 ? S May21 0:03 nginx: worker processLooks normal to me, AFAIK the master process running as root is expected.And php-fm:ps aux | grep php
root 30311 0.0 1.8 309068 18580 ? Ss May21 0:02 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
www-data 30314 0.0 3.5 393324 36176 ? S May21 0:01 php-fpm: pool www
www-data 30315 0.0 3.1 388956 32112 ? S May21 0:01 php-fpm: pool www
www-data 30391 0.0 2.9 389828 29528 ? S May21 0:00 php-fpm: pool wwwI can't even open the file myself, logged in as deploy:cat test.php
cat: test.php: Permission denied
php test.php
Could not open input file: test.phpGoogled everywhere, but most things I find are related to the opposite- people getting Forbidden errors.Perhaps it's because it's in /usr/share? Thanks!Extra info:Ubuntu x64 LTSPHP-FMUpdate:Restarting the php-fm service after changing the permission fixes it. But this makes no sense to me:chmod 000 test.php - web echos "test"
service php5-fpm restart - Access Denied
chmod 644 test.php - web echoes "test". No need for a restart this time?
chmod 000 test.php - web echoes "test". | My nginx + php-fpm webserver is able to serve web pages that have permission 000. Why?
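The cache in question is most likely PHP's opcache: once a script's compiled bytecode is in memory, later requests are served from there and the file's permissions are never consulted again. A hedged sketch of the relevant php.ini settings, with illustrative values:

; opcache keeps compiled scripts in shared memory
opcache.enable=1
; re-stat files on disk, at most every 2 seconds; a permissions
; change leaves the timestamp unchanged, so the cached copy keeps
; being served until the php-fpm service is restarted
opcache.validate_timestamps=1
opcache.revalidate_freq=2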
Because V8 applications are JavaScript applications. Even if the JavaScript is ultimately compiled to machine code, the runtime characteristics are different. For example, if you call a function on an object and that object does not define the function, the runtime must locate the function by traversing the prototype hierarchy, and this hierarchy can change at any time during the lifetime of a program. There are clever optimizations that can be done, but the overhead exists nevertheless. There is also the memory model: JavaScript is garbage collected, and GC takes CPU cycles. | According to language benchmarks, JavaScript V8 is faster than other programming languages at the regex-dna program. So, why aren't node.js applications (e.g. an HTTP server) faster than C applications (e.g. Nginx, Lighttpd)? | V8 engine compiles JavaScript to machine code. So, why isn't node.js faster than C?
Example for .htm, .html files:

location ~ \.html?$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.htm;
include fastcgi.conf;
}

Example for .js files:

location ~ \.js$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}

Just change the extension and port settings if needed. | For web servers using PHP as an Apache module:

AddType application/x-httpd-php .html .htm

For web servers running PHP as CGI:

AddHandler application/x-httpd-php .html .htm

I have an Nginx server and I want to run .js and .htm files as PHP, so I will have full PHP code inside them. Does anyone know how to configure Nginx to do this? | How to route .html and .js files as PHP on Nginx FastCGI server?
Install certbot:

sudo yum update
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum-config-manager --enable epel
sudo yum install certbot python3-certbot-nginx
certbot --version

Generate the certificate

Use the following command to generate the certificate and let certbot automatically modify the nginx configuration to enable HTTPS:

sudo certbot --nginx

or, if you need only the certificate, use the following command:

sudo certbot certonly --nginx

The certificate will be created in the folder /etc/letsencrypt/live/YOUR_SITE_NAME/, for example:

Certificate: /etc/letsencrypt/live/www.my-site.com/cert.pem
Private key: /etc/letsencrypt/live/www.my-site.com/privkey.pem

Enable automatic renewal

Use the following command to test that automatic renewal of the certificate works:

sudo certbot renew --dry-run

Errors I have encountered

If during certificate creation an error like the following appears:

"Could not choose appropriate plugin: The requested nginx plugin does not appear to be installed"

then run the command

sudo yum install certbot python-certbot-nginx

and retry creating the certificate.

Notes

For Apache, you can use python2-certbot-apache instead of python2-certbot-nginx; make sure you're using the option --apache instead of --nginx when creating the certificate. | Install certbot/letsencrypt on Amazon Linux 2 and enable HTTPS on nginx (similar process available for apache) | How to enable HTTPS with certbot/letsencrypt on Amazon Linux 2 with nginx
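Note that certbot renew --dry-run only simulates renewal; actual renewals still need a scheduler if the package did not install one. A hedged sketch using cron (the path and time are illustrative):

# /etc/cron.d/certbot-renew
0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"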
I followed the steps below to install PHP 7.1 on Amazon Linux AMI 2018.03, which already had Nginx as the web server.

#Remove Old PHP
yum remove php*
#Update Reposistory
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
rpm -Uvh https://mirror.webtatic.com/yum/el6/latest.rpm
#Update Amazon AMI
yum upgrade -y
#Install PHP
#List of PHP packages https://webtatic.com/packages/php71/
yum install php71w php71w-cli php71w-fpm
yum install php71w-mysql php71w-xml php71w-curl
yum install php71w-opcache php71w-pdo php71w-gd
yum install php71w-pecl-apcu php71w-mbstring php71w-imap
yum install php71w-pecl-redis php71w-mcrypt
#change listen mode to CGI
sed -i 's/127.0.0.1:9000/\/tmp\/php5-fpm.sock/g' /etc/php-fpm.d/www.conf
/etc/init.d/php-fpm restart
touch /tmp/php5-fpm.sock
chmod 777 /tmp/php5-fpm.sock
service nginx restart

The reason I am still using the /tmp/php5-fpm.sock file is so that I do not need to change the PHP 7 sock file in every website's nginx conf; this assumes the server no longer has PHP 5, since it was removed in the first step. | How do I install PHP 7.1 on an Amazon EC2 t2.micro instance running Amazon Linux AMI 2018.03, with nginx as the web server? Reference to PHP7 | How to install PHP 7.1 on EC2 running on Amazon Linux AMI 2018.03 having nginx as web server?
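For reference, a hedged sketch of the nginx location block that pairs with the socket path the sed command writes into www.conf (the other fastcgi parameters mirror stock examples, not the asker's actual site conf):

location ~ \.php$ {
    fastcgi_pass unix:/tmp/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}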
After much searching, I found that the hosts settings were not correct. I checked:

nano /etc/hosts

The domain pointed to the wrong IP in the hosts file. I changed the wrong IP and it's working fine now. This is a new error related to curl: (7) Failed to connect. | I have checked that cURL is not working properly. When I run the command curl -I https://www.example.com/sitemap.xml I get:

curl: (7) Failed to connect
It fails to connect on all ports. This error occurs only on one domain; all other domains work fine: curl: (7) Failed to connect to port 80, and 443. Thanks... | curl: (7) Failed to connect to port 80, and 443 - on one domain
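To diagnose this kind of failure without editing anything, a hedged sketch (the domain and IP are placeholders):

# show what the resolver (including /etc/hosts) returns for the name
getent hosts www.example.com
# bypass name resolution and test against a known-good address
curl --resolve www.example.com:443:203.0.113.10 -I https://www.example.com/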
You can try these; I used the same method to install auth_module on my mac:

brew tap homebrew/nginx
brew install nginx-full --with-rtmp-module --with-debug | How can we set up the NGINX web server and its RTMP module on a mac system? I have tried to set up the server using the links below:

https://github.com/arut/nginx-rtmp-module/wiki/Getting-started-with-nginx-rtmp
https://github.com/arut/nginx-rtmp-module/wiki/Installing-via-Build

But I could not run it, as it gives the error below:

nginx-rtmp-module-master XXXXX$ ./configure --add-module=/path/to/nginx-rtmp-module --with-debug ...
-bash: ./configure: No such file or directory | How can we set up the NGINX web server and its RTMP module on a mac system?
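The questioner's error comes from running ./configure inside the module directory; the configure script ships with the nginx sources. A hedged sketch of a source build (the version number and paths are assumptions):

curl -O http://nginx.org/download/nginx-1.12.0.tar.gz
tar xzf nginx-1.12.0.tar.gz
cd nginx-1.12.0
./configure --add-module=/path/to/nginx-rtmp-module --with-debug
make && sudo make install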
Writing this code at the bottom of urls.py somehow worked:

admin.site.site_header = 'My admin' | I followed this question's answers to change my django admin panel title header. I tried this:

There is an easy way to set the admin site header - assign it to the current
admin instance in urls.py like this: admin.site.site_header = 'My admin'

But it only works when I'm running the page via python manage.py runserver. My question is: how can I change the admin title header when I'm running the site via gunicorn and nginx? | Change header 'Django administration' text on nginx
Something like this:

upstream apache {
server localhost:8000;
}
server {
listen 80;
error_page 502 503 /www/static/503.html;
location /static/ {
root /www/static/;
}
location / {
proxy_pass http://apache/;
}
}

You can append standard error codes together to display a single page for several types of errors. For example:

error_page 502 503 /www/static/503.html;

For more reference you can consult the error_page manual. In the error_page manual it says:

Furthermore, it is possible to change the code of the answer to another, for example:

error_page 404 =200 /.empty.gif;

Another option

To make it return a different error code, you can make use of the return keyword. For example:

# check for a condition
if (condition) {
return 503;
}

Also see: nginx: Create HTTP 503 Maintenance Custom Page | I am using nginx as a frontend to an apache server. The config file looks like:

upstream apache {
server localhost:8000;
}
server {
listen 80;
error_page 503 /www/static/503.html;
# need some magic here #
location /static/ {
root /www/static/;
}
location / {
proxy_pass http://apache/;
}
}

For now, when apache is down, I receive a plain 502 page generated by nginx. How do I make it serve my custom error page and return status code 503, which is more relevant in this situation? | Show a custom 503 page if upstream is down
You should remove the passphrase from your private key:

openssl rsa -in original.key -out unencrypted.key | I can't get SSL to work on my domain; I just get a 102 connection refused error. Here is the config:

server {
listen 443 default_server ssl;
ssl_certificate /etc/nginx/ssl/www.foreningsdriv.se.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
#if the URL with .php tacked on is a valid PHP file, rewrite the URL to .php
if (-f $document_root$uri.php) {
rewrite ^(.*)$ /$uri.php;
}
root /var/www/foreningsdriv.se;
index index.php index.html index.htm;
server_name www.foreningsdriv.se;
location / {
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
#fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

Can anyone see anything wrong? I have tried reissuing the certificate but it doesn't help. | Error 102 nginx SSL
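Two hedged follow-ups (the file names match the question's config; ssl_password_file needs nginx 1.7.3 or later):

# confirms whether the key is encrypted: prompts for a passphrase if so
openssl rsa -in /etc/nginx/ssl/server.key -check -noout

# alternative to stripping the passphrase: hand it to nginx instead,
# inside the server block
# ssl_password_file /etc/nginx/ssl/key.pass;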
You should consider using a middleware like Rack::Attack instead. As it sits lower in the app stack, it will filter out malicious requests earlier and faster than Rails would.

Rack middleware for blocking & throttling abusive requests

Rack::Attack is a rack middleware to protect your web app from bad
clients. It allows whitelisting, blacklisting, throttling, and
tracking based on arbitrary properties of the request.

If you take a look at the gem's readme, there are nice examples of how to handle cases such as yours.
However, keep in mind that if the attackers are at least a little smart, they will notice your countermeasures and try to outsmart them. DDoS protection is usually a cat and mouse game. | I have a rails3 + nginx stack. Several days ago there was a ddos attack with lots of GET requests similar to:

GET /?aaa2=bbbbbbb&ccc=1234212
GET /?aaa1=bbbbbbb&ccc=4324233

First of all, I added this rule to the application controller:

before_filter :ddos_check
def ddos_check
params.each do |param|
if (!param[1].nil? && (param[1].is_a?String) && !param[1].scan(/bbb/sim).blank?)
redirect_to 'http://google.com/'
return
end
end
end

It protects controllers from heavy DB calls. Are there any gems or nginx modules that can filter ddos requests with specific rules? | Ruby on rails with nginx ddos protection
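On the nginx side, which the question also asks about, the stock limit_req module can throttle per IP before requests ever reach Rails. A hedged sketch with illustrative numbers:

# in the http block: 10 MB of shared state, at most 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # allow short bursts of 20, reject the overflow (503 by default)
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;   # assumed Rails upstream
    }
}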
Run this command in a terminal (note: capital V):

nginx -V

Do you find /var/logs there? Your nginx might be compiled with that default file location.

[EDIT] I guess that some of your server blocks don't have the "error_log" directive, so nginx tries the default one for them. Note that by default the error_log is always on. To fix this issue, you can add this line in the main block (the top level) so that all child blocks inherit the setting:

error_log /var/log/nginx/error.log; | I noticed that when I test my nginx config using nginx -t, it gives me a warning:

nginx: [alert] could not open error log file: open() "/var/logs/nginx/error.log" failed (2: No such file or directory)

Which makes sense, since the log path for nginx is actually set up to be /var/log/nginx/, not /var/logs/nginx. I scanned the entire nginx config directory and there is nothing there referencing /var/logs. I'm at a loss as to where this log location could be written. | Nginx trying to log to /var/logs instead of /var/log?
No, you don't need nginx anymore. | I am planning to move all my static content to a CDN so that only dynamic content is left on my server. I now have Nginx set up as a reverse proxy to Apache. Static requests that came in were delivered directly by Nginx without having to go to Apache. In this case Nginx handled a large portion of the requests and I can clearly see the necessity of Nginx. Now that I have moved all the static content to another domain, is there still a need to have nginx in front of Apache? Because now all requests are by default dynamic requests and all go to Apache. Are there any other benefits of having Nginx and Apache running for only dynamic content? My dynamic content is PHP/MySQL.

Edit: To be clear: I now have Nginx as a reverse proxy. It delivers static and dynamic content. But I am moving my static files to a CDN. Do I then still need Nginx on my domain? | Will Nginx as a reverse proxy for Apache help with dynamic content only?
A Dockerfile is used when you want to create a custom image. FROM php:7.4-cli specifies the base image you want to customize. COPY . /usr/src/app copies the host's current directory into the container's /usr/src/app. CMD [ "php", "/mail/contact_me.php"] specifies what command to run within the container. In your case, I don't think a custom image is required. As you need a webserver with PHP, you can use the php:7.4.3-apache image, which comes with PHP 7 and the Apache webserver pre-installed. All you need to do is copy your app to your container, or use a volume. A volume is great because it actually mounts your host directory into your container, allowing you to edit your app from the host and see changes in real time. You can use a docker-compose.yml file for that:

version: "2"
services:
webserver:
image: php:7.4.3-apache
ports:
- "8181:80"
volumes:
- ./app:/var/www/html

Assuming your application is located in an app folder on your host machine, this folder will get mounted at /var/www/html in your container. Here, 8181:80 redirects port 8181 on your host machine to port 80 of your container, which is the HTTP port. Use this command to start your container:

docker-compose up -d

You should see your landing page at http://localhost:8181 | I have a landing page and one PHP file to send emails (feedback form). I want to test this form using Docker. I've written this Dockerfile:

FROM php:7.4-cli
COPY . /usr/src/app
CMD [ "php", "/mail/contact_me.php"]But it doesn't work for me.I have the directorymailwith the PHP file in the root of the project, but I'm still unsure if the Dockerfile is correct:Is it enough to inheritFROM php:7.4-clior do I have to add nginx server to run the image?What does the lineCOPY . /usr/src/appexactly do? Is this correct? | How to run Docker container with website and php? |
In nginx, the return directive is from the rewrite module, and deny is from the access module. According to the nginx documentation and source code, the rewrite module is processed in the NGX_HTTP_REWRITE_PHASE phase (for return in a location context), while the access module is processed in the NGX_HTTP_ACCESS_PHASE phase. The rewrite phase happens before the access phase, thus return stops request processing and returns the 301 in the rewrite phase, before deny is ever evaluated. | Nginx is behaving unexpectedly for me. Here are two simplified location blocks. This works as expected and returns a 403 error:

location / {
deny all;
root /var/www/test;
}I expected a 403 error. However, this returns 301 and redirects:location / {
deny all;
return 301 https://$server_name$request_uri;
}

How can I deny access and prevent any URL redirection when a return directive is present? | deny all not preventing return redirection
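Building on the phase explanation above, one hedged workaround (a sketch, not from the thread) is to issue the refusal from the rewrite phase itself, so it runs before the redirect:

location / {
    # both directives belong to the rewrite module and execute in order,
    # so the 403 wins and the redirect line is never reached
    return 403;
    return 301 https://$server_name$request_uri;
}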
Bonus answer:

Also, you could just capture the UUID value during the location matching, avoiding the additional regex in the rewrite, like this:

location ~* "^/([0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12})$" { # match and capture a UUID v4
include uwsgi_params;
set $uuid $1;
if ($cookie_admin) {
# if cookie exists, rewrite / to /modif/ and pass to uwsgi
rewrite / /modif/$uuid break;
uwsgi_pass frontend;
}
content_by_lua '
ngx.say("Ping! You got here because you have no cookies!")
ngx.say("UIID: " .. ngx.var.uuid)
';
} | I'd like to redirect a URL to a Django platform (via uwsgi) if and only if a cookie exists. Failing that, I need to defer execution to the content_by_lua plugin. Below is my attempt at such logic:

location ~* "^/[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$" { # match a UUID v4
include uwsgi_params;
if ($cookie_admin) {
# if cookie exists, rewrite / to /modif/ and pass to uwsgi
rewrite ^/([0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12})$ /modif/$1 break;
uwsgi_pass frontend;
}
content_by_lua '
ngx.say("Ping! You got here because you have no cookies!")
';
}

Nginx has deemed it necessary and proper to insult my intelligence with the following log message:

nginx: [emerg] directive "rewrite" is not terminated by ";" in /opt/openresty/nginx/conf/nginx.conf:34

Perhaps I am as dense as nginx seems to think, but what have I missed?

Bonus question: Is my general approach safe and sane? Is there a better way of achieving my goals? | Why is nginx claiming there's no terminating semicolon in my `rewrite` statement?
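On the actual parse error: nginx treats an unquoted { as the start of a configuration block, so a regex containing repetition counts such as {8} must be quoted, which is exactly what the bonus answer's location pattern does. The failing rewrite should parse once quoted (sketch):

rewrite "^/([0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12})$" /modif/$1 break;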
Basically, you specify the URI as part of the proxy_pass directive; the following location directive should do it:

location ~ ^/customer1/myapp(/?)(.*) {
proxy_pass http://127.0.0.1:8001/$2;
}

See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass for the detailed explanation of how nginx passes the URI. | I have a single django-admin app named myapp that I would like to deploy multiple instances of on different physical boxes, one per customer. However, I'd like them all to be accessed from a similar domain, mydomain.com/customer1/myapp. I've fiddled with specific proxy settings and tried multiple things suggested on SO, but none quite fit my use case... and since I know very little about both nginx and django I am at a loss! My current nginx.conf is:

server {
listen 80;
server_name myserver.com;
location ^~ /static {
alias /path/to/static/files/;
}
# location / {
# proxy_pass http://127.0.0.1:8001;
# }
location ^~ /customer1/myapp/static {
alias /path/to/static/files/;
}
location /customer1/myapp {
rewrite ^/customer1/myapp/(/?)(.*) /$2 break;
proxy_pass http://127.0.0.1:8001;
}
}I can get to the login screen as expected viamyserver.com/customer1/myapp/admin. However when I attempt to log in, nginx rewrites my url tomyserver.com/adminwhich isn't a valid url. How do I keep nginx from actually rewriting the url and only change the url that is passed on to127.0.0.1:8001?FWIW, I am using gunicorn to serve withgunicorn -b 127.0.0.1:8001 -n myapp. If I uncomment the/location and remove the last two location blocks, the app works great.I am far from set on this approach if there are alternatives. The goal is to avoid modifying django code for each deployment and instead just add minimal code to the nginx.conf for new deployments. | multiple django apps with nginx proxy_pass and rewrite |
There is, indeed, a built-in Apache server inside macOS. To stop it, enter the following command in Terminal:

sudo apachectl stop | I'm using macOS and I'm just wondering why port 80 is already in use, as I need to install my own nginx server (as a docker container). Going to http://localhost shows me "It works!". But I don't understand where this comes from, as I didn't install anything myself. I thought it could be an Apache server shipped with macOS. So I did:

$ sudo lsof -i:80

And I got this result, which I do not understand:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 9283 root 4u IPv6 0x2e000a8d22b1a699 0t0 TCP *:http (LISTEN)
httpd 9292 _www 4u IPv6 0x2e000a8d22b1a699 0t0 TCP *:http (LISTEN)
httpd 9307 _www 4u IPv6 0x2e000a8d22b1a699 0t0 TCP *:http (LISTEN) | Unexpected used port 80 on macOS with "It works" result |
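apachectl stop only lasts until the next boot, since launchd manages the built-in httpd. A hedged follow-up (the -w flag makes it persistent; exact behaviour varies across macOS versions):

sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist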
location /cdn-directory/ {
location ~* \.(js|css|swf|eot|ttf|otf|woff|woff2)$ {
add_header 'Cache-Control' 'public';
add_header 'X-Frame-Options' 'ALLOW-FROM *';
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
expires +1y;
}
}

http://enable-cors.org/server_nginx.html | I have created a folder that will be used for serving static files (CSS, images, fonts, JS etc.). I will eventually CNAME the folder into a subdomain for usage on a CDN to work with my Magento 2 setup. I want to allow ALL domains ALL access via CORS (Cross Origin Resource Sharing), and I want to cache the data too. This is what I have. (I am not asking for security suggestions or tips on JSONP issues; I want global access to the file directory, please.)

location /cdn-directory/ {
location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2|zip|gz|gzip|bz2|csv|xml)$ {
add_header Cache-Control "public";
add_header X-Frame-Options "ALLOW-FROM *";
expires +1y;
}
}

According to the documentation, X-Frame-Options supports ALLOW-FROM uri, but I cannot see examples of using * (all domains) or of adding multiple domains to this ALLOW-FROM. I need to allow all domains access to my static files folder. | How to add CORS (cross origin policy) to all domains in NGINX?
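One gap worth noting in configs like the answer's: browsers send a preflight OPTIONS request for non-simple CORS requests, and it is cheap to answer it directly. A hedged sketch in the style of enable-cors.org (the Max-Age value is illustrative):

location /cdn-directory/ {
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Length' 0;
        return 204;
    }
}

Also note that X-Frame-Options has no wildcard form: ALLOW-FROM accepts a single URI, and permitting every framer is done by omitting the header entirely.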
If you mean the stylesheets or javascript, for example, you can update the version of the stylesheet; see below for an example. You can change a stylesheet tag such as

<link rel="stylesheet" href="style.css">

to

<link rel="stylesheet" href="style.css?v=1.0">

Notice the ?v=1.0 parameter at the end of the source; this works for Javascript also. If you need images and other things to update, you can find lots about cache busting here: Refresh image with a new one at the same url. You can also try adding
a cache-control meta tag, such as <meta http-equiv="Cache-Control" content="no-cache">, to the head of the HTML page. | We are experiencing an issue where a previous version of our home page is being displayed. Even though there have been changes since then, the web page always shows the old version. This issue stems from us using a WordPress plugin that added a Last-Modified: Tue, 19 Apr 2016 15:18:40 GMT header to the response. The only way we have found to fix this issue is a force refresh in the browser. Is there a way to invalidate that cache remotely for all clients?

[Screenshot: the request/response headers] | How to Remotely force a client to purge a cached website?
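Since the stale copies stem from a Last-Modified header the server itself sent, the durable fix is server-side. A hedged nginx sketch (the location and header values are illustrative):

location = / {
    # stop honouring the stale If-Modified-Since that clients send back
    if_modified_since off;
    # force clients to revalidate instead of reusing their cached copy
    add_header Cache-Control "no-cache, must-revalidate";
}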
Thankfully I figured this out not long after posting the question, but I think the following information may be valuable to others looking to solve similar problems: the relevant logs are not in access.log, but rather in error.log. Reading it showed me that the .htpasswd file was not in the expected location. I then moved it to the correct location and was able to authenticate OK. | From looking at tutorials such as this, it seems relatively easy to set up .htpasswd authentication. Here's my HTTPS block, which is how I'm accessing my site:

server {
listen 443;
server_name potato;
root /var/www/html;
ssl on;
ssl_certificate /srv/ssl/cert.pem;
ssl_certificate_key /srv/ssl/key.pem;
location / {
auth_basic "Restricted Content";
auth_basic_user_file /usr/local/nginx/.htpasswd;
}
}I've gathered fromherethe following snippet to create the .htpasswd file:USERNAME=admin
PASSWORD=password
sudo printf "$USERNAME:$(openssl passwd -crypt $PASSWORD)\n" >> .htpasswdThis initially failed with a permission denied error, which I resolved by first creating an empty .htpasswd then granting myself permission viasudo chown max:max .htpasswd.When I visit the website, I do see the Auth prompt, but I get a 403 error even I type in the correct password.I have been fiddling with this for a while and am continuing to dig through google searches. But I'd appreciate any tips toward a likely source. It'd also be great if someone could show me a dependable way to diagnose the cause of the Auth failure.In my access.log file I have entries like this:73.170.238.232 - admin [05/Sep/2016:12:03:34 -0700] "GET /musicker/dist/ HTTP/1.1" 403 571 "-" "Mozilla/5.0 (X11; CrOS x86_64 8350.68.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"but I don't see much useful information in there. You can see I'm trying to access the website at/musicker/dist/, and in Nginx mylocation /block is catching this and adding auth_basic. | always 403 Forbidden with Nginx .htpasswd |
Since that part of the question wasn't answered: don't gzip images. JPEG and PNG files are already compressed, and re-compressing them with gzip may have little effect; it may in fact result in larger file sizes. By default, nginx doesn't compress image files with its per-request gzip module. If you want to reduce the size of your images, you may want to look into the webp file format or the pagespeed module, which can handle optimising images for you. | If you configure and install nginx with the flag --with-http_gzip_static_module and then you turn on static gzipping with gzip_static on; (HttpGzipStaticModule): with static gzip, when nginx receives a file request it tries to read and return the same file with a ".gz" extension. My question is: this seems to be a better choice than gzipping the file when the user makes the request, because the file is already gzipped, right? You win speed: you can serve the files faster. Right now I have gzipped font files, and I send the user one bundle with all the js (concatenated, minified and gzipped) and another bundle with all the css. Should I also pre-gzip the images? | (nginx) Gzip per request vs static gzip
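To make the answer's default concrete: nginx's per-request gzip compresses only text/html unless other MIME types are opted in, so images are skipped unless explicitly listed. A hedged sketch:

gzip on;
# opt in compressible text formats; jpg/png are deliberately absent
gzip_types text/css application/javascript image/svg+xml;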
Once you have created a Django application, just follow these steps:

STEP 1. Create a file, say uwsgi.ini, in your Django project directory, i.e. beside manage.py:

[uwsgi]
# set the http port
http = :<port>
# change to django project directory
chdir = <path to your Django project>
# add /var/www to the pythonpath, in this way we can use the project.app format
pythonpath = /var/www
# set the project settings name
env = DJANGO_SETTINGS_MODULE=<your project name>.settings
# load django
module = django.core.handlers.wsgi:WSGIHandler()

STEP 2. Under /etc/nginx/sites-available, add a .conf file:

server {
listen 84;
server_name example.com;
access_log /var/log/nginx/sample_project.access.log;
error_log /var/log/nginx/sample_project.error.log;
# https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
location /static/ { # STATIC_URL
alias /home/www/myhostname.com/static/; # STATIC_ROOT
expires 30d;
}
}

STEP 3. In nginx.conf, pass the request to your Django application. Under the server { } block:

location /yourapp {
include uwsgi_params;
uwsgi_pass <host>:<port>;
}

STEP 4. Run the uwsgi.ini:

> uwsgi --ini uwsgi.ini

Now any request to your nginx will be passed to your Django app via uwsgi. Enjoy :) | I want to deploy a Django application on an nginx server. I'm using uWSGI. I looked at many tutorials but none worked.
The Django application runs perfectly as a standalone app. What is the simplest way to have the same app running on nginx?? I'm stuck here and want a solution.. :-( My www folder is in /usr/share/nginx/www; my sites-enabled, conf.d and the rest are all in /etc/nginx/. I did install uWSGI but couldn't find any folder named uwsgi which has an apps-installed folder/file in it. | Django app deployment on nGINX
NGINX listens on port 80 and forwards to Gunicorn. Gunicorn operates on the 127.0.0.1 IP rather than 0.0.0.0, so it isn't listening publicly, and therefore the only way to access the site externally is through port 80. | I am busy setting up a development environment for the Django framework using Gunicorn (as the Django service) and NGINX (as a reverse proxy). When I look at several tutorials, like this one and this one, I see that they use port 8000 and port 8001 (http://127.0.0.1:8000 and http://127.0.0.1:8001). Is there a special reason not to use port 80, like any other webserver? Port 8000 is often used for radio streaming and malware, so why? BTW: I am running it using virtualenv on an Ubuntu 12.04 system. | Why does Gunicorn use port 8000/8001 instead of 80?
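A hedged sketch of the split the answer describes (the module name and port are assumptions):

# gunicorn bound to loopback only, unreachable from outside
gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application

# nginx, listening publicly on 80, relays to it
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8000;
    }
}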
Nginx / Unicorn FTW! Nginx in front to serve static files, and Unicorn to handle the Sinatra app. Benefits: performance, good load balancing with unix sockets, and deploys/upgrades without any downtime (you can upgrade Ruby/Nginx/Sinatra/app without downtime). How-to: http://sirupsen.com/setting-up-unicorn-with-nginx/ | I'm looking for a robust way to deploy a Rack application (in this case a Sinatra app). Requests will take a little time (0.25-0.5 sec waiting on proxied HTTP requests) and there may be a decent amount of traffic. Should I go with a traditional mongrel cluster setup? Use HAProxy as a load balancer? nginx? rackup? What solutions have you used and what are the advantages? | Robust way to deploy a Rack application (Sinatra)
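The nginx half of that pairing, as a hedged sketch (the socket path is an assumption; Unicorn would be configured to listen on the same path):

upstream sinatra_app {
    # fail_timeout=0 retries the socket immediately during restarts
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    root /var/www/app/public;   # nginx serves static files directly
    location / {
        proxy_set_header Host $host;
        # anything that is not a static file goes to Unicorn
        try_files $uri @app;
    }
    location @app {
        proxy_pass http://sinatra_app;
    }
}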