Well, I figured it out. If anyone ever runs into something like this: it was basically a lack of knowledge about shell scripts that was holding me back. After commenting out each line of the script file, I found the problem was the line

    source ../bin/activate

and everything after it. The problem was that it had two spaces in front of it, and now I know it needs to be left-aligned all the way. Now it works.

This is how I figured it out:

    tail -f /var/log/syslog
    Jun 26 10:54:59 saturn7 init: app_name main process (3521) terminated with status 127

I found out that status 127 basically means "command not found", so I knew the problem was actually in the script file. But I am not sure why bash ./script.sh would work and not tell me anything is wrong? I need to read up on shell scripts.
First of all, I have many Django instances set up and running like this. In each project I have a script.sh shell script that starts gunicorn etc.:

    #!/bin/bash
    set -e
    LOGFILE=/var/log/gunicorn/app_name.log
    LOGDIR=$(dirname $LOGFILE)
    NUM_WORKERS=3
    # user/group to run as
    USER=root
    GROUP=root
    PORT=8060
    IP=127.0.0.1
    cd /var/www/webapps/app_name
    source ../bin/activate
    test -d $LOGDIR || mkdir -p $LOGDIR
    exec /var/www/webapps/bin/gunicorn_django -b $IP:$PORT -w $NUM_WORKERS \
        --user=$USER --group=$GROUP --log-level=debug --log-file=$LOGFILE 2>>$LOGFILE

When running this script from the command line with bash script.sh, the site works perfectly, so Nginx is set up right. As soon as I use upstart with service app_name start, the app starts and then just stops. It does not even write to the log file. This is the app_name.conf file in /etc/init/app_name.conf:

    description "Test Django instance"
    start on runlevel [2345]
    stop on runlevel [06]
    respawn
    respawn limit 10 5
    exec /var/www/webapps/app_name/script.sh

So what is the problem here? Running from the command line works, but doing it through upstart does not, and I don't know where to look to see what's wrong.
Gunicorn and Django with Upstart and Nginx
From the Mozilla docs:

    The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.

In fact, I think that you have misunderstood the Host header. My understanding is that it will be the IP of the nginx server.
I have the following Nginx configuration for my Django application:

    upstream api {
        server localhost:8000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /staticfiles {
            alias /app/static/;
        }
    }

I based this config on a tutorial here. After some research, it looks like setting the Host header allows the Django API to determine the original client's IP address (instead of the IP address of the proxy). What's the point of the X-Forwarded-For header? I see a field called $http_x_forwarded_for in the nginx logs but I'm not sure it's related.
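For illustration, here is a minimal sketch (the log format name is assumed; the api upstream is the one from the question's configuration) of how the header is typically set on the proxied request and then surfaced in the access log via $http_x_forwarded_for:

    http {
        # $http_x_forwarded_for exposes the incoming header value to the log
        log_format xff '$remote_addr forwarded="$http_x_forwarded_for" "$request"';

        server {
            access_log /var/log/nginx/access.log xff;

            location / {
                proxy_pass http://api;
                # appends $remote_addr to any X-Forwarded-For already present
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
    }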
What's the purpose of setting the "X-Forwarded-For" header in nginx?
You should use a ConfigMap to customize the NGINX configuration:

    ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.

To configure custom logs, you need to use the log-format-upstream key. E.g., create the following ConfigMap:

    apiVersion: v1
    data:
      log-format-upstream: '$remote_addr - $request_id - [$proxy_add_x_forwarded_for]
        - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer"
        "$http_user_agent" $request_length $request_time [$proxy_upstream_name]
        $upstream_addr $upstream_response_length $upstream_response_time $upstream_status'
    kind: ConfigMap
    metadata:
      name: nginx-ingress-config

and make sure that you are using --configmap=$(POD_NAMESPACE)/nginx-ingress-config as command args for your nginx-ingress-controller (example from the official repo here).
I'm using the nginx ingress controller on GKE. By default these are what my access logs look like:

    "10.123.0.20 - [10.123.0.20] - - [22/Apr/2019:18:47:59 +0000] "GET /sdflksdf/sdfsdf HTTP/2.0" 404 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/538.12 (KHTML, like Gecko) Chrome/73.0.3683.100 Safari/537.36" 26 0.002 [default-blah-80] 10.44.0.26:80 0 0.001 404 skjf0s93jf0ws93jfsijf3s3fjs3i

I want to add the X-Forwarded-For header to my access logs. I'd like that field to be added at the end of the current log lines if possible, or at the start of the log line would be OK too, I guess. I'm looking at their docs and it's not clear to me how to add X-Forwarded-For to the access log: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/
How do I add the x-forwarded-for field to my access logs for the nginx ingress controller?
You can put ngx_http_auth_basic_module settings into any of the following contexts: http, server, location, limit_except.

Your version

    location ~ ^/

would work only if you don't have other declared locations in your server section. Example:

    server {
        ...
        # some server settings

        location / { # full equivalent of "~ ^/"
            auth_basic on;
            auth_basic_user_file /path/to/some/file;
        }

        location /other_location {
            # here http_auth is not inherited
        }
    }

Just put your http_auth settings into the server section, and all locations described for this server will inherit these settings. Example:

    server {
        ...
        # some server settings
        auth_basic on;
        auth_basic_user_file /path/to/some/file;

        location / {
            # HERE the http_auth settings are
            # inherited from the previous configuration level.
        }
    }
I need an expression to match all requests, no matter what. Is this good enough?

    location ~ ^/

I'm worried about other locations taking precedence, bypassing my auth.
How to match all locations in nginx, for auth?
If your Django app is proxied by nginx you can use X-Accel-Redirect. You need to pass a special header in your response; nginx will intercept this and start serving the file. You can also pass Content-Disposition in the same response to force a download. That solution is good if you want to control which users access these files.

You can also use a configuration like this:

    # files which need to be forced downloads
    location /static/high_res/ {
        root /project_root;
        # don't ever send $request_filename in your response, it will expose your
        # dir structure; use a quick regex hack to find just the filename
        if ($request_filename ~* ^.*?/([^/]*?)$) {
            set $filename $1;
        }
        # match images
        if ($filename ~* ^.*?\.((jpg)|(png)|(gif))$) {
            add_header Content-Disposition "attachment; filename=$filename";
        }
    }

    location /static {
        root /project_root;
    }

This will force a download for all images in the high_res folder (MEDIA_ROOT/high_res). For the other static files it will behave as normal. Please note that this is a modified quick hack that works for me. It may have security implications, so use it with caution.
I'm writing an image bank with Django, and I want to add a button to get a hi-res version of an image (the low-res is shown in the details page). If I put just a link, the browser will open the image instead of downloading it. Adding an HTTP header like:

    Content-Disposition: attachment; filename="beach008.jpg"

works, but since it's a static file, I don't want to handle the request with Django. Currently, I'm using NGINX to serve static files, and dynamic pages are redirected via FastCGI to the Django process. I'm thinking about using the NGINX add_header directive, but could it set the filename="xx" part? Or maybe there's some way to handle the request in Django, but make NGINX serve the content?
custom HTTP headers for static files with Django
Finally found a solution. I had to tell pm2 my env mode when starting it, as pm2 start server --env production. And it works perfectly in my browser.
I've just deployed my first Next.js app in production through Nginx and pm2. Everything seems okay, but the app frequently reloads after some interval in the browser. I'm seeing that webpack-hmr is also running on my production server (which I think isn't necessary in production).

I am using a custom server.js and I run my app in production using next build, then the NODE_ENV=production node server.js command, restarting my server using pm2. I've added below a screenshot of my dev-tool's network tab which is showing the HMR running on production. If HMR is the possible cause of the browser reload, then what should I do to disable it in production? And if the frequent reload isn't happening because of HMR, then what could be the cause of it? Have you experienced the same issue in production? If so, please share your knowledge and experience. Thanks.

Edit: I am also using next-pwa, and a warning keeps showing in my console for it:

    GenerateSW has been called multiple times, perhaps due to running webpack in --watch mode. The precache manifest generated after the first call may be inaccurate! Please see https://github.com/GoogleChrome/workbox/issues/1790 for more information.
Next.js App reloads frequently in production
So, I reached out to another developer at work, and when they were reviewing the setup they pointed out that I have a typo in my Dockerfile:

    COPY prod_nginx.conf /etc/nginx/nginx.confg

needs to be:

    COPY prod_nginx.conf /etc/nginx/nginx.conf

Silly little typos! Once I had this fixed, Nginx and the router worked!
My Issue

I've read through the official documentation for putting the VueJS router in history mode behind Nginx, as well as the following:

Stackoverflow - vue-router, nginx and direct link
Stackoverflow - How to config nginx for Vue-router on Docker

After reviewing all these and making the changes multiple times, I'm still unable to provide a direct link to my routes and have them load properly (i.e., I get a 404 for anything but /).

My Environment

I'm creating an nginx docker image (nginx:alpine) and having it serve the static VueJS files.

My Configuration

Dockerfile:

    # build stage
    FROM node:lts-alpine as build-stage
    WORKDIR /app
    COPY . .
    RUN npm install && npm run build

    # production stage
    FROM nginx:stable-alpine as production-stage
    COPY --from=build-stage /app/dist /usr/share/nginx/html
    COPY prod_nginx.conf /etc/nginx/nginx.confg
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

Nginx Config (Nginx version: 1.16.1):

    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer"'
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;
        sendfile on;
        keepalive_timeout 65;

        server {
            listen 80;
            server_name _ default_server;
            index index.html;

            location / {
                root /usr/share/nginx/html;
                index index.html;
                try_files $uri $uri/ /index.html;
            }
        }
    }
VueJS Router History Mode behind Nginx
Use a named location and an internal rewrite. For example:

    location / {
        try_files $uri $uri/ @rewrite;
    }

    location @rewrite {
        rewrite ^/(.*)$ /index.php?url=$1 last;
    }

See this document for more.
My Nginx conf file:

    location / {
        try_files $uri $uri/ /index.php?url=$uri;
    }

    ## PHP conf in case it's relevant
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include /etc/nginx/fastcgi.conf;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

Trying the following URL: http://example.org/login

expected behavior: http://example.org/index.php?url=login
actual behavior: http://example.org/index.php?url=/login
Nginx conf: how to remove the leading slash from $uri
After a ton of searching around, I finally found this solution. It seems the issue was that I needed to add a root to the app within "location /blog" and nest the "location ~ \.php$" within /blog. Here's my Nginx config that's working now for a Wordpress blog in a Rails app using Unicorn, in case anyone else needs it:

    upstream unicorn {
        server unix:/tmp/unicorn.domain.sock fail_timeout=0;
    }

    server {
        server_name www.domain.com;
        return 301 $scheme://domain.com$request_uri;
    }

    server {
        listen 80 default deferred;
        server_name domain.com;
        root /home/dcs/htdocs/domain/current/public;
        access_log /home/dcs/htdocs/domain/log/access.log;
        error_log /home/dcs/htdocs/domain/log/error.log;

        location /blog {
            root /home/dcs/htdocs/domain;
            index index.php;

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME home/dcs/htdocs/domain/$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

        location ^~ /assets/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
        }

        try_files $uri/index.html $uri @unicorn;

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://unicorn;
        }

        error_page 500 502 503 504 /500.html;
        keepalive_timeout 10;
    }
I just installed a Wordpress blog under a /blog directory within a Rails app, running on Unicorn and Nginx, and my stylesheets and scripts aren't being loaded properly in the browser when I go to my domain.com/blog pages. Chrome's console is giving me the following errors:

    Resource interpreted as Stylesheet but transferred with MIME type text/html
    Resource interpreted as Script but transferred with MIME type text/html

I have been trying to figure this out and tried lots of solutions here on SO, but still can't get through. It seems like there needs to be something changed in my Nginx config, particularly for the blog/php location. Here's my config:

    upstream unicorn {
        server unix:/tmp/unicorn.domain.sock fail_timeout=0;
    }

    server {
        server_name www.domain.com;
        return 301 $scheme://domain.com$request_uri;
    }

    server {
        listen 80 default deferred;
        server_name domain.com;
        root /home/dcs/htdocs/domain/current/public;
        access_log /home/dcs/htdocs/domain/log/access.log;
        error_log /home/dcs/htdocs/domain/log/error.log;

        location /blog {
            try_files $uri $uri/ /blog/index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME home/dcs/htdocs/domain/$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }

        location ^~ /assets/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
        }

        try_files $uri/index.html $uri @unicorn;

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://unicorn;
        }

        error_page 500 502 503 504 /500.html;
        keepalive_timeout 10;
    }
Nginx - Wordpress blog on Rails loads styles and scripts with mime type text/html
I got the same error on CentOS 6.3 where I upgraded MySQL to 5.6.14 but kept the old my.cnf file. After the upgrade, MySQL did not start anymore, giving me the same error as you described. The problem was that I had this setting in my.cnf:

    table_cache=2048

According to this link, table_cache was renamed to table_open_cache:

    "Seems like in 5.5 the system variable table_cache was renamed table_open_cache. In 5.6 mysqld fails if it finds an unknown variable; this means that upgrades from versions earlier than 5.5 can have problems if table_cache is specified in my.cnf."

After I changed the above line to

    table_open_cache=2048

MySQL started perfectly. So, in case you have MySQL 5.5+ (and maybe an older my.cnf), I suggest you do the following:

- remove my.cnf from the /etc folder and try to start MySQL
- if MySQL starts, then the problem is in my.cnf; comment/uncomment all the settings one by one in order to see which is causing the problem

Hope this helps.
I have CentOS 6.4 with NGINX. When I try to start/stop/restart the mysql server (/etc/init.d/mysqld restart) I get this error:

    MySQL server PID file could not be found! [FAILED]
    Starting MySQL..The server quit without updating PID file ([FAILED]/mysql/mysqld.pid).

What can I do to solve this problem? Thanks!
MySQL server PID file not found
From this blog, two quotes:

    Turns out proxy_cache_valid instructs Nginx that the resource could be cached for 1y IF the resource doesn't become inactive first. When you request a resource that has a longer expiration but has become inactive due to lack of requests, it causes a cache miss.

    Conclusion: proxy_cache_path should have a higher inactive time than the expiration time of the requests (proxy_cache_valid).

From the official Nginx guide:

    inactive specifies how long an item can remain in the cache without being accessed. In this example, a file that has not been requested for 60 minutes is automatically deleted from the cache by the cache manager process, regardless of whether or not it has expired. The default value is 10 minutes (10m). Inactive content differs from expired content. NGINX does not automatically delete content that has expired as defined by a cache control header (Cache-Control: max-age=120 for example). Expired (stale) content is deleted only when it has not been accessed for the time specified by inactive. When expired content is accessed, NGINX refreshes it from the origin server and resets the inactive timer.

So, the answers to your questions:

Does proxy_cache_valid override inactive? 5m later does the cached file exist or not?

No. They work as a pair. proxy_cache_valid makes the cache expire in 5 minutes. If the cache (no matter whether expired or not) has not been accessed within 10 minutes, Nginx removes it. If an expired cache entry has been accessed within 10 minutes, NGINX refreshes it from the origin server and resets the inactive timer.

Also, this answer can help to understand proxy_cache_valid and inactive better.
Nginx cache config:

    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=60m use_temp_path=off;

    server {
        # ...
        location / {
            proxy_cache my_cache;
            proxy_cache_valid 5m;
            proxy_pass http://my_upstream;
        }
    }

inactive:

    inactive specifies how long an item can remain in the cache without being accessed. In this example, a file that has not been requested for 60 minutes is automatically deleted from the cache by the cache manager process, regardless of whether or not it has expired.

proxy_cache_valid:

    Sets caching time for different response codes. If only caching time is specified then only 200, 301, and 302 responses are cached.

Does proxy_cache_valid override inactive? 5m later does the cached file exist or not?
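To make the interplay concrete, here is a minimal sketch (cache path and zone name assumed; my_upstream is from the config being discussed) where inactive outlives proxy_cache_valid, following the blog's recommendation:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_cache demo_cache;
            # entries are considered fresh for 5 minutes ...
            proxy_cache_valid 200 301 302 5m;
            # ... but stay on disk until unused for 60 minutes (inactive above),
            # so a hit between minute 5 and 60 revalidates instead of missing cold
            proxy_pass http://my_upstream;
        }
    }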
Nginx cache inactive vs proxy_cache_valid
The problem is that the index directive needs the file index.php to exist in order to internally redirect the URI / to /index.php. You can avoid the index directive by adding a location / to internally redirect everything to /index.php. For example:

    location / {
        rewrite ^ /index.php last;
    }

    location ~* \.php$ {
        root /var/www/html/public;
        client_max_body_size 0;
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
    }
I'm trying to migrate my legacy monolith to k8s. Now I have nginx and php-fpm (with code) images, and I want nginx to just serve HTTP traffic and pass it to fpm, but nginx insists on having the files. I don't have a try_files directive, but it tries to find root and index files anyway. So is it at all possible to not mount the source code into nginx? I really don't see why it should be there, but I couldn't find any working example.

nginx.conf:

    server {
        listen 80;
        index index.php;
        # This dir exists only in the php-fpm container
        root /var/www/html/public;

        location ~* \.php$ {
            client_max_body_size 0;
            include fastcgi_params;
            fastcgi_pass php-fpm:9000;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT $realpath_root;
        }
    }

    2018/08/17 16:44:40 [error] 9#9: *46 "/var/www/html/public/index.php" is not found (2: No such file or directory), client: 192.xxx.xxx.xxx, server: , request: "GET / HTTP/1.1", host: "localhost"
    192.xxx.xxx.xxx - - [17/Aug/2018:16:44:40 +0000] "GET / HTTP/1.1" 404 571 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36" "195.xxx.xxx.xxx"
Is it possible to pass requests to php-fpm without nginx having the volume mounted
start.sh:

    #!/bin/bash
    /usr/sbin/service php7.0-fpm start
    /usr/sbin/service nginx start
    tail -f /dev/null

Dockerfile:

    COPY ["start.sh", "/root/start.sh"]
    WORKDIR /root
    CMD ["./start.sh"]

With this, you can put more complex logic in start.sh.
This question already has answers here: Why can't I use Docker CMD multiple times to run multiple services? (5 answers). Closed 5 years ago.

I have a dockerfile that sets up NGINX and PHP and adds a Wordpress repository. At boot time, I want to start PHP and NGINX. However, I am failing to do so. I tried adding the two commands in the CMD array, and I also tried putting them in a shell file and starting the shell file. Nothing worked. Below is my Dockerfile:

    FROM ubuntu:16.04
    WORKDIR /opt/

    #Install nginx
    RUN apt-get update
    RUN apt-get install -y nginx=1.10.* php7.0 php7.0-fpm php7.0-mysql

    #Add the customized NGINX configuration
    RUN rm -f /etc/nginx/nginx.conf
    RUN rm -f /etc/nginx/sites-enabled/*
    COPY nginx/nginx.conf /etc/nginx/
    COPY nginx/site.conf /etc/nginx/sites-enabled

    #Copy the certificates
    RUN mkdir -p /etc/pki/nginx
    COPY nginx/certs/* /etc/pki/nginx/
    RUN rm -f /etc/pki/nginx/placeholder

    #Copy the build to its destination on the server
    RUN mkdir -p /mnt/wordpress-blog/
    COPY . /mnt/wordpress-blog/

    #COPY wp-config.php
    COPY nginx/wp-config.php /mnt/wordpress-blog/

    #The command to run the container
    CMD ["/bin/bash", "-c", "service php7.0-fpm start", "service nginx start"]

I tried to put the commands from the CMD in a shell file and run the shell file in the CMD command. It still didn't work. What am I missing?
DOCKERFILE: Running multiple CMD. (Starting NGINX and PHP) [duplicate]
The key to this is in the docs at https://www.keycloak.org/docs/latest/server_installation/index.html#identifying-client-ip-addresses

The proxy-address-forwarding option must be set, as well as the various X-... headers. If you're using the Docker image from https://hub.docker.com/r/jboss/keycloak/ then set the env arg -e PROXY_ADDRESS_FORWARDING=true.

    server {
        server_name api.domain.com;

        location /auth {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8080;
            proxy_read_timeout 90;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8081;
            proxy_read_timeout 90;
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
        if ($host = api.domain.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        server_name api.domain.com;
        listen 80;
        return 404; # managed by Certbot
    }

If you're using another proxy, the important parts of this are the headers that are being set:

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

Apache, Istio and others have their own means of setting these.
How do you correctly configure NGINX as a proxy in front of Keycloak? Asking and answering this as documentation because I've had to do it repeatedly now and forget the details after a while. This is specifically dealing with the case where Keycloak is behind a reverse proxy, e.g. nginx, and NGINX is terminating SSL and pushing to Keycloak. This is not the same issue as "keycloak Invalid parameter: redirect_uri", although it produces the same error message.
keycloak Invalid parameter: redirect_uri behind a reverse proxy
The server_name _; is irrelevant (and is not required in modern versions of nginx). If a server with a matching listen and server_name cannot be found, nginx will use the default server. In the absence of a default_server suffix to the listen directive, nginx will use the first server block with a matching listen. If your configurations are spread across multiple files, their evaluation order will be ambiguous, so you need to mark the default server explicitly.

Try this for the jelastic server block:

    server {
        listen 443 ssl default_server;
        ssl_certificate /var/lib/jelastic/SSL/jelastic.chain;
        ssl_certificate_key /var/lib/jelastic/SSL/jelastic.key;
        ...
    }

See this document for more.
I have this NGINX configuration as follows:

    # jelastic is a wildcard certificate for *.shared-hosting.xyz
    server {
        listen 443;
        server_name _;
        ssl on;
        ssl_certificate /var/lib/jelastic/SSL/jelastic.chain;
        ssl_certificate_key /var/lib/jelastic/SSL/jelastic.key;
    }

    # fullchain2 is a certificate for a custom domain
    server {
        listen 443 ssl;
        server_name my-custom-domain-demo.xyz www.my-custom-domain-demo.com;
        ssl_certificate /var/lib/nginx/ssl/my-custom-domain-demo.xyz/fullchain2.pem;
        ssl_certificate_key /var/lib/nginx/ssl/my-custom-domain-demo.xyz/privkey2.pem;
    }

    # additional configuration for other custom domains follows

The NGINX server receives requests with hosts having a pattern like *.shared-hosting.xyz, e.g. website1.shared-hosting.xyz, website2.shared-hosting.xyz, and also with variable hosts having different domains like my-custom-domain-demo.xyz or another-custom-domain-demo.xyz etc.

Now the problem is that the lower server configuration overrides the upper configuration. With it in place, the upper one does not work anymore: accessing *.shared-hosting.xyz returns a certificate error, and the browser says the certificate is for my-custom-domain-demo.xyz only. What can be done such that the lower NGINX config triggers for *.shared-hosting.xyz domains and every other additional server configuration will not trigger when the host matches the pattern *.shared-hosting.xyz?
How to configure NGINX SSL (SNI)
If you already have nginx set up, use Unicorn. If not, use Passenger Standalone, which comes with its own builtin nginx. Perhaps this also shapes your approach to the docs. There's not much point to separately documenting what is essentially two very well documented products, bundled together.You'll hear good things about both. If you're in a rush, pick one and go. Otherwise, try both and decide based on your own experience of them.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. Closed 11 years ago.

I'm trying to decide between Unicorn and Phusion Passenger Standalone (formerly Phusion Passenger Lite). I want to host multiple apps on my server. I have nginx running and listening on port 80. I need a webapp server that I can proxy requests to based on the request's server name and/or the lack of an existing static directory/file. I am not interested in compiling Passenger as part of nginx (the standard install) because my model allows for more flexibility (like running different versions of Ruby with different apps).

I have read a lot about Unicorn and it fits my model well, but I see Passenger Standalone can essentially do the same thing. Even though there are tons of docs out there for standard Passenger installs, there don't seem to be many for Passenger Standalone. Even the official docs are bare.

Can someone please compare and contrast these two Ruby webapp servers and give me the pros and cons of each? Keep in mind they will only be used for "fast clients." Thank you.
Unicorn vs Passenger Standalone behind nginx [closed]
Perhaps a bit late, but in order to fix this, move your client_max_body_size 10M; outside of the location section, like this:

    client_max_body_size 10M;

    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }

See https://github.com/heroku/heroku-buildpack-php/blob/beta/conf/nginx/heroku.conf.php#L35 for reference on how heroku includes this file.
I'm getting error 413 when uploading a 4MB file. I have already created a .user.ini file in the public/ folder to allow up to 10 MB files. So I used client_max_body_size like this in my nginx.conf, but I still get 413:

    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
        client_max_body_size 10M;
    }

That configuration is because I'm using Laravel 5. This is my Procfile:

    web: vendor/bin/heroku-php-nginx -C nginx.conf public/

Did I do something wrong?
Heroku Nginx HTTP 413 entity too large
If you want to use regex, use ~ (for case-sensitive matching) or ~* (for case-insensitive matching). Your location block should look like this:

    location ~ "phpmyadmin \.(gif|jpg|png)$" {
        root /usr/share/phpmyadmin;
    }

You can read more here: http://nginx.org/en/docs/http/ngx_http_core_module.html#location
I'm new to the NGINX server and I was wondering how to set up something like this:

    location phpmyadmin \.(gif|jpg|png)$ {
        root /usr/share/phpmyadmin;
    }

The example above fails at server restart. Thanks for all answers.

UPDATE:

    location ~* .(gif|jpg|jpeg|png|ico|wmv|3gp|avi|mpg|mpeg|mp4|flv|mp3|mid|js|css|html|htm|wml)$ {
        root /home/safeftp/www/public_html;
    }

    location ~ "phpmyadmin \.(gif|jpg|png)$" {
        root /usr/share/phpmyadmin;
    }
How to set location with extension in NGINX server
Currently, because of the way nginx is built, this is not possible. https://github.com/symfony/symfony/issues/2432
I have read that I should dump Symfony2 routes into my web server to bypass the Symfony2 router, for performance. I found an example for Apache. How would you go about doing this for nginx?
How to dump Symfony2 routes to nginx?
I realized that the deployment setup matches http://coding.smashingmagazine.com/2011/06/28/setup-a-ubuntu-vps-for-hosting-ruby-on-rails-applications-2/

When I followed this tutorial (about a year ago), I installed slightly newer versions of nginx and passenger. From what I remember, these newer versions prompted me to use nginx as a service when I ran any type of init.d command (Ubuntu 10.04).

Anyway, I would switch out the code

    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"

for

    run "#{sudo} service nginx #{command}"

and see if that works.
I am using Capistrano to deploy my Rails application. Whenever I deploy, changes are not reflected in the browser, and I still need to restart nginx to update the site (running sudo /etc/init.d/nginx restart). I'm not really sure why, but isn't the application supposed to be updated after restarting it (using touch /app/tmp/restart.txt)? Here's my deploy.rb:

    require "rvm/capistrano"
    set :rvm_ruby_string, 'ruby-1.9.3-p194@app_name'
    set :rvm_type, :user

    require "bundler/capistrano"

    set :application, "app_name"
    set :user, "me"
    set :deploy_to, "/home/#{user}/#{application}"
    set :deploy_via, :copy
    set :use_sudo, false

    set :scm, :git
    set :repository, "~/Sites/#{application}/.git"
    set :branch, "master"

    role :web, '1.2.3.4'
    role :app, '1.2.3.4'
    role :db, '1.2.3.4', :primary => true
    role :db, '1.2.3.4'

    namespace :deploy do
      task :start do ; end
      task :stop do ; end
      task :restart, :roles => :app, :except => { :no_release => true } do
        run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
      end
    end
Rails - Nginx needs to be restarted after deploying with Capistrano?
You want proxy_hide_header instead of proxy_ignore_headers.
Just wondering if there is any way to overwrite/drop the Cache-Control: private response header from a proxied remote server. The setup architecture looks like this (yes, it's a reverse-proxy setup):

    [my server] --> [remote server]

The setting for my server's sites-available/default:

    server {
        listen 80; ## listen for ipv4
        listen [::]:80 default ipv6only=on; ## listen for ipv6
        server_name localhost;

        location / {
            if ($arg_AWSACCESSKEY) {
                proxy_pass http://localhost:8088;
            }
            try_files $uri $uri/ /index.php /index.html /index.htm;
        }

        # other settings go here
    }

The setting for my server's sites-available/remote:

    server {
        listen 8088; ## listen for ipv4; this line is default and implied

        # Make site accessible from http://localhost/
        # server_name localhost;

        location / {
            proxy_pass http://remoteserver;
            proxy_set_header Host remoteserverhostname.com;
            proxy_ignore_headers Cache-Control Expires;
            proxy_pass_header Set-Cookie;
        }
    }

But Firebug still reports that the header contains Cache-Control: private. Did I miss something? Thanks.
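For illustration, a minimal sketch of the relevant change in the proxying location (upstream and host names as in the question below); proxy_hide_header strips the header from the response passed to the client, whereas proxy_ignore_headers only stops nginx itself from honoring it:

    location / {
        proxy_pass http://remoteserver;
        proxy_set_header Host remoteserverhostname.com;
        # drop the upstream's caching headers from the response
        proxy_hide_header Cache-Control;
        proxy_hide_header Expires;
        proxy_pass_header Set-Cookie;
    }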
Overwrite Cache-Control: Private in Nginx
1) I imagine that one gzip compression is enough, and nginx is faster, although I haven't benchmarked it yet. GzipMiddleware utilizes a few built-ins, which might be well optimized, too.

    # From http://www.xhaus.com/alan/python/httpcomp.html#gzip
    # Used with permission.
    def compress_string(s):
        import cStringIO, gzip
        zbuf = cStringIO.StringIO()
        zfile = gzip.GzipFile(mode='wb', compresslevel=6, fileobj=zbuf)
        zfile.write(s)
        zfile.close()
        return zbuf.getvalue()

2) Small gzipped files just can't take advantage of compression (in fact, small files might get bigger when processed), so one can save time by just skipping this step.

3) You could design a test suite including sample data. Then decide, on that data, what works best for your application.
Firstly, I'm using Django. Django provides gzip middleware which works just fine. Nginx also provides a gzip module. Would it make more sense to just use Nginx's gzip module because it is implemented purely in C, or are there other performance considerations I'm missing?

Secondly, Django doesn't gzip anything under 200 bytes. Is this because gzipping is too expensive to have any value when compressing output smaller than that?

Thirdly, the API I'm building will be almost purely dynamic with very little caching. Is gzipping expensive enough to make it impractical to use in this situation (vs a situation where I could cache the gzipped output on the webserver)?
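If you do offload compression to nginx, a minimal sketch (values assumed) that mirrors Django's behavior, including a minimum size so tiny responses are skipped:

    http {
        gzip on;
        # skip responses smaller than 200 bytes, mirroring Django's cutoff
        gzip_min_length 200;
        gzip_comp_level 6;
        # compress dynamic API responses, not just the default text/html
        gzip_types application/json text/plain;
        # required so gzip also applies to proxied (dynamic) responses
        gzip_proxied any;
    }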
gzip - questions about performance
So, I found a solution. I can specify the redirect as follows:

    rewrite ^ $scheme://$http_host/foobar.html redirect;

This will preserve the port.
I have an Nginx server listening on 80, run inside a Docker container. Inside the Nginx config I need to perform a redirect to a static page in specific situations:

    rewrite ^ /foobar.html redirect;

The user can run the container exposing any port using the docker command line (for reference, she can expose the container on port 8000 and internally Nginx will still use 80). Now, when Nginx redirects the URL, the port is replaced with the one used internally by Nginx instead of the one used by Docker. I tried to set a bunch of headers but they didn't help:

    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://app_server;
    rewrite ^ /foobar.html redirect;

It still redirects to 80. How can I tell Nginx to preserve the port used by the user?
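A minimal sketch of this in context (the /old location is hypothetical): the container still listens on 80 internally, but $http_host reproduces the Host header exactly as the client sent it, external Docker port included, so the redirect keeps it:

    server {
        listen 80;

        location /old {
            # $http_host = host[:port] as sent by the client, so the
            # externally mapped Docker port survives the redirect
            rewrite ^ $scheme://$http_host/foobar.html redirect;
        }
    }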
Nginx: preserve port during redirect
Previous answers probably cover most of the issues, especially if there were redirection problems with your domain name. In order to be fully portable and use all the possibilities of docker, my recommendation would be to use the official Nginx docker image, make it the only container accessible from the outside (with the opening of ports) and use --link to manage connectivity between your Nginx container and your other containers.

I have done that in a similar situation and it works pretty well. Below is a tentative translation of what I have done to your situation.

You start your ShareLaTeX container without specifying any external port:

    docker run -d \
        -v ~/sharelatex_data:/var/lib/sharelatex \
        --name=sharelatex \
        sharelatex/sharelatex

You prepare an nginx conf file for your ShareLaTeX server that you place in $HOME/nginx/conf and that will look like:

    upstream sharelatex {
        # this refers to the name you pass as link to the nginx container
        server sharelatex;
    }

    server {
        listen 80;
        server_name tools.sebastienreycoyrehourcq.fr;

        location ^~ / {
            proxy_pass http://sharelatex/;
        }
    }

You then start your nginx docker container with the appropriate volume links and container links:

    docker run -d --link sharelatex:sharelatex --name NginxMain \
        -v $HOME/nginx/conf:/etc/nginx/sites-available \
        -p 80:80 kekev76/nginx

ps: this has been done with our own kekev76/nginx image that is public on github and docker, but you can adapt the principle to the official nginx image.
I'm a linux noob at administering docker containers using apache or nginx on a VPS. I use a classic OVH VPS (4 GB RAM, 25 GB SSD) with a pre-installed image of ubuntu 15.04 + docker.

Installing a docker container is really easy, and in my case I installed the sharelatex image without problem:

    docker run -d \
        -v ~/sharelatex_data:/var/lib/sharelatex \
        -p 5000:80 \
        --name=sharelatex \
        sharelatex/sharelatex

The site is accessible on the IP of the VPS at http://51.255.47.40:5000, which shows that the site works without any problem. I already have a subdomain (tools.sebastienreycoyrehourcq.fr) configured to point to the server IP of the VPS (51.255.47.40 routed to External in the webfaction panel), but it's not working and I don't understand why.

I installed an apache server on 51.255.47.40, but I suppose the best option is probably to install a docker image of nginx or apache? Can you advise me on this point? And after that, how can I redirect port 5000 of the docker image to a classic port 80 of apache or nginx linked to my subdomain?
nginx/apache redirection for output port on docker container on vps
You can decode it by trimming off the headers and using gzinflate:

    $url = "http://www.dealstan.com";
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url); // Define target site
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Return page in string
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.3 Safari/533.2');
    curl_setopt($ch, CURLOPT_ENCODING, "gzip");
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follow redirects
    $return = curl_exec($ch);
    $info = curl_getinfo($ch);
    curl_close($ch);
    // curl already undid one gzip layer; skip the 10-byte gzip header
    // of the remaining layer and inflate the rest
    $return = gzinflate(substr($return, 10));
    print_r($return);
I am trying to decode the webpage www.dealstan.com using cURL with the code below:

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url); // Define target site
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Return page in string
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.3 Safari/533.2');
    curl_setopt($ch, CURLOPT_ENCODING, "gzip");
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follow redirects
    $return = curl_exec($ch);
    $info = curl_getinfo($ch);
    curl_close($ch);
    $html = str_get_html("$return");
    echo $html;

but it is showing some junk characters "��}{w�6����9�X�n���.........." for about 100 lines. I tried to inspect the response in hurl.it and found one interesting point; it looks like the html is encoded twice (just a guess, based on the response). Find the response below:

    GET http://www.dealstan.com/
    200 OK, 18.87 kB, 490 ms

    Cache-Control: max-age=0, no-cache
    Cf-Ray: 18be7f54f8d80f1b-IAD
    Connection: keep-alive
    Content-Encoding: gzip, gzip ==============> suspecting this, anyone know about it?
    Content-Type: text/html; charset=UTF-8
    Date: Wed, 19 Nov 2014 18:33:39 GMT
    Server: cloudflare-nginx
    Set-Cookie: __cfduid=d1cff1e3134c5f32d2bddc10207bae0681416422019; expires=Thu, 19-Nov-15 18:33:39 GMT; path=/; domain=.dealstan.com; HttpOnly
    Transfer-Encoding: chunked
    Vary: Accept-Encoding
    X-Page-Speed: 1.8.31.2-3973
    X-Pingback: http://www.dealstan.com/xmlrpc.php
    X-Powered-By: HHVM/3.2.0

Body:

    H4sIAAAAAAAAA5V8Q5AoWrBk27Ztu2bdu2bdu2bdu2bds2583f/pjFVOQqozZnUxkVJ7PwoyAA/qeAb3y83LbYHs/3Hv79wKm/2N5cZyJVtCWu1xyteyzLNqYuWbdtHeELCyIZRRp/1Fe7es3+wL3Vfb

Does anyone know how to decode a response with the header "Content-Encoding: gzip, gzip"? That site loads properly in Firefox, Chrome etc., but I am not able to decode it using cURL. Please help me to decode this.
How to decode "Content-Encoding: gzip, gzip" using curl?
For location, choose what seems best to you. Here are some considerations to help out:

- Locations under /var are for files which change in size, or generally are "variable."
- /srv generally indicates files related to some service running on the machine.
- /home should usually be reserved for interactive users. You can set a system user's home directory to anything, though.

For security, you should segment as much as possible. The app should not run as the same user as the web server, so that it can't be abused to read sensitive files relating to the server itself (.htaccess or whatever). The app's binary files (or for Django, the python source) should be owned by root, without write access for the application user.

Here's my 2 cents on how to set it up:

- Django app: /usr/lib/appname/ or /usr/lib/python/site-packages/appname/ if installed. Owned by root, chmod 644.
- App's files (e.g. sqlite db file, Unix socket for FastCGI, uploaded file storage, etc.): /var/lib/appname/. Owned by app-user, chmod 600.
- app-user's shell is /bin/nologin, home is /var/lib/appname/. User has no configured password.
I see a lot of different advice online as to where to serve your web application from, what user to run it as, etc. For instance, I've seen it served from /var/www/site, /srv/www/site, or /home/$USER/site. I've seen the user be www-data, $USER (i.e. my user account), or a custom user specifically created for that purpose (e.g. user uwsgi).

In terms of security, what is the best scheme I could choose? For reference, I'm trying to deploy a Django site with Nginx and uwsgi. Right now, uwsgi is running as root in emperor mode, with uid/gid set to www-data, so vassals spawn with the same permissions as Nginx workers. I'm serving from /home, but thinking of moving.
Best practices for linux user permissions to run web application as?
It turned out that the "sjsxp" library which JAX-WS RI v2.1.3 uses makes Tomcat behave this way. I tried a different version of JAX-WS RI (v2.1.7) which doesn't use the "sjsxp" library anymore, and it solved the issue.

A very similar issue posted on the Metro mailing list: http://metro.1045641.n5.nabble.com/JAX-WS-RI-2-1-5-returning-malformed-response-tp1063518.html
I'm investigating a problem where Tomcat (7.0.90, 7.0.92) very occasionally returns a response with no HTTP headers. According to the packets captured by Wireshark, after Tomcat receives a request it returns only a response body: neither a status line nor HTTP response headers.

This makes a downstream Nginx instance produce the error "upstream sent no valid HTTP/1.0 header while reading response header from upstream", return a 502 error to the client, and close the corresponding HTTP connection between Nginx and Tomcat.

What can be the cause of this behavior? Is there any possibility which makes Tomcat behave this way? Or can there be something which strips the HTTP headers under some condition? Or did Wireshark fail to capture the frames which contain the HTTP headers? Any advice on how to narrow down where the problem is would be greatly appreciated.

This is a screenshot of Wireshark's "Follow HTTP Stream" showing the problematic response.

EDIT: This is a screenshot of the "TCP Stream" of the relevant part (response only). It seems that the chunks in the second response from the last look fine.

EDIT2: I forwarded this question to the Tomcat users mailing list and got some suggestions for further investigation from the developers: http://tomcat.10.x6.nabble.com/Tomcat-occasionally-returns-a-response-without-HTTP-headers-td5080623.html But I haven't found any proper solution yet. I'm still looking for insights to tackle this problem.
Tomcat occasionally returns a response without HTTP headers
Looking at the implementation of ssl_session_cache by ngx_http_ssl_session_cache in ngx_http_ssl_module.c, it creates one shared memory zone named "SSL", i.e. one SSL session cache. Any subsequent call to ssl_session_cache retrieves the previously configured shared memory zone named "SSL" instead of creating a new one (cf. ngx_shared_memory_add in ngx_cycle.c).

This can easily be verified by configuring different sizes for the same name like so:

    ...
    ssl_session_cache shared:SSL:4m;
    server {
        ...
        ssl_session_cache shared:SSL:50m;
    }

This results in an error message such as:

    [emerg] the size 52428800 of shared memory zone "SSL" conflicts with already declared size 4194304 in /etc/nginx/nginx.conf:37

Details (KajMagnus added): The shared memory zone gets added here:

    sscf->shm_zone = ngx_shared_memory_add(cf, &name, n, &ngx_http_ssl_module);

and as you can see, different names result in different caches being created. So, one can have many different shared memory caches, each one with its own unique name. However, each server can use only one shared SSL memory zone; there's just one shm_zone per SSL server config, on the ngx_http_ssl_srv_conf_t *sscf structure.

tl;dr: Whether an SSL session cache is declared at http or server level does not matter. The same cache is used as long as the same name is assigned to the cache. To prevent an error message, caches with the same name must be given the same size throughout.
To me, the Nginx docs about how ssl_session_cache works are a bit unclear. I'm wondering if this:

    ssl_session_cache shared:SSL:10m;

declared either in the http block, or in each server (i.e. virtual host) block, results in 1) one single global cache named SSL, 10 MB large, or 2) one 10 MB cache per server, with the combined size of all caches = num servers x 10 MB.

The docs:

    shared — a cache shared between all worker processes. The cache size is specified in bytes; one megabyte can store about 4000 sessions. Each shared cache should have an arbitrary name. A cache with the same name can be used in several virtual servers.

If there'll be just one single cache, then I'd like to multiply its size by the number of servers. So, if I have 5 servers (i.e. 5 virtual hosts), then I'd place ssl_session_cache in the http block and:

    ssl_session_cache shared:SSL:50m; # 10 * 5 = 50

So, the question: does ssl_session_cache shared:SSL:10m; create one 10 MB cache per server, or one 10 MB cache for all servers? If it's per server, then is there no way to configure one single global cache for all servers instead? (If not possible, then why not, in case anyone knows?) It seems to me as if a single cache would result in more efficient memory usage, because one server with many clients could then use the memory that would otherwise have been dedicated to some other server that might have zero clients for the moment.
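For illustration, a sketch (names, sizes, and certificate paths are assumed placeholders) contrasting one cache shared by all vhosts with a separate per-vhost cache distinguished by name:

    http {
        # one zone named "SSL": both servers below share these 50 MB
        ssl_session_cache shared:SSL:50m;

        server {
            listen 443 ssl;
            server_name a.example.com;
            ssl_certificate     /etc/ssl/a.pem;  # placeholder paths
            ssl_certificate_key /etc/ssl/a.key;
        }

        server {
            listen 443 ssl;
            server_name b.example.com;
            ssl_certificate     /etc/ssl/b.pem;
            ssl_certificate_key /etc/ssl/b.key;
            # a different zone name would give this server its own, separate cache:
            # ssl_session_cache shared:SSL_B:10m;
        }
    }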
Can all Nginx vhosts share the same ssl_session_cache?
This is a known issue in RC1. The current workaround is to add the following to your nginx configuration:

    proxy_set_header Connection keep-alive;

A fix is scheduled for RC2.
I have an ASP.NET 5 MVC6 application behind an Nginx server that acts as a reverse proxy. Its configuration is:

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://localhost:5000;
            client_max_body_size 50M;
            proxy_set_header Host $host;
        }
    }

It was working very well on Linux until ASP.NET 5 RC1. Since then, and on Windows before that, requests to MVC 6 controllers would fail: I see the response, but the browser continues to load as if the response was not complete (static files are served correctly). A direct request to http://localhost:5000/api/xxx responds and closes immediately. I tried to add proxy_buffering off but it had no effect. I suspect that it is related to chunked mode, but I found nothing online about this.
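Applied to a reverse-proxy block like the one in the question below, that looks something like this (a sketch):

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://localhost:5000;
            client_max_body_size 50M;
            proxy_set_header Host $host;
            # work around the Kestrel RC1 chunked-response issue
            proxy_set_header Connection keep-alive;
        }
    }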
ASP.NET 5 behind nginx
To control an application's lifecycle, Unix provides a mechanism called Unix signals. USR1 is custom and usually handles log rotation; other signals like HUP are standard and perform a reload. http://nginx.org/en/docs/control.html

    TERM, INT   fast shutdown
    QUIT        graceful shutdown
    HUP         changing configuration, keeping up with a changed time zone (only for FreeBSD and Linux), starting new worker processes with a new configuration, graceful shutdown of old worker processes
    USR1        re-opening log files
    USR2        upgrading an executable file
    WINCH       graceful shutdown of worker processes

Before sending the signal to the PID, rename the file. After you rename it, log entries will still be going into the same file, because the inode hasn't changed:

    cd /var/log/nginx
    mv access.log access.log.old
    mv error.log error.log.old
    kill -USR1 `cat /var/run/nginx.pid`
I have over 10.0G of logs under /var/log and /var/log/nginx. How can I safely clean them?

    7.8G /var/log/nginx/custom
    2.0G /var/log/nginx
    2.0G /var/log
Clean /var/log/nginx logs file
The issue was that Rails thinks any 192.168.x.x address is a private address, so it strips them from the X-Forwarded-For header.

    # IP addresses that are "trusted proxies" that can be stripped from
    # the comma-delimited list in the X-Forwarded-For header. See also:
    # http://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces
    TRUSTED_PROXIES = %r{
      ^127\.0\.0\.1$                | # localhost
      ^(10                          | # private IP 10.x.x.x
        172\.(1[6-9]|2[0-9]|3[0-1]) | # private IP in the range 172.16.0.0 .. 172.31.255.255
        192\.168                      # private IP 192.168.x.x
       )\.
    }x

See the relevant Rails source here and here.

One solution is to add this to your config/application.rb:

    config.action_dispatch.trusted_proxies = /^127\.0\.0\.1$/ # localhost

That way, IPs on your local network will not be replaced by '127.0.0.1'.
We are running a Rails application on Unicorn + Nginx. The server has two NICs that we use: eth0 handles requests from the public internet, and eth2 handles requests from our private network.

When a request comes through eth0, the nginx logs show the public IP, and the Rails logs also show this IP. However, when a request comes through eth2, the nginx logs show the private IP correctly (e.g. 192.168.5.134), but the Rails logs show 127.0.0.1. So it seems like public requests on eth0 get their X-Forwarded-For header set correctly, but this isn't happening for requests on eth2.

Our nginx config is pretty basic:

    upstream example.com {
        server unix://var/www/example.com/shared/sockets/unicorn.socket fail_timeout=0;
    }
    ...
    server {
        listen 443 ssl;
        ...
        location @example.com {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real_IP $remote_Addr;
            proxy_set_header X-Forwarded_For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;

            if ($host ~* "^(.+)\.example.com$") {
                set $subdomain $1;
            }

            proxy_pass http://example.com;
        }

Any ideas?
Rails shows IP as 127.0.0.1 when accessed from private NIC, but Nginx shows the correct IP. Public IP gets forwarded fine
This could help a lot:https://github.com/chaselee/tornado-linodeCheck out the link in the Readme to see how to deploy in production on Ubuntu 10.04.Basically I keep the nginx conf in my repo, which gets pulled into the server, and the conf file is symlinked into the actual nginx directory where it needs to go.
I understand that there's an nginx configuration file at http://www.friendfeed.com. But I don't really know how to set up Tornado for production use on Ubuntu 10.04 with Nginx.

Here's my situation and assumptions:

1) Assuming my Tornado project is set up as such:

    project/
        src/
        static/
        templates/
        project.py

and I have installed Tornado by downloading the repository from Github and then sudo python setup.py install

2) I've installed Nginx and started it based on the instructions here: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid

My questions are: Where does my nginx configuration file go? Within the src/ folder? After configuring Nginx, how do I start my Tornado project?
Setting up Tornado with Nginx on Ubuntu 10.04 for production use
Try to remove this block:

    location = /index.php {
        root /var/www/html;
    }
I recently installed NGINX and PHP-FPM on a CentOS 6 server. I'm able to view other php pages on my site, but for some reason my index.php file gets downloaded rather than processed like a normal php page. Here is the nginx config:

    # The default server
    #
    server {
        listen 80 default_server;
        server_name example.com;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root /var/www/html/;
            index index.php index.html index.htm;
        }

        error_page 404 /404.html;
        location = /index.php {
            root /var/www/html;
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root /var/www/html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny all;
        }
    }
NGINX and PHP-FPM is downloading index.php instead of processing it
This is a good question because there is an important distinction that gets elided in most coverage of container architecture: the distinction between multithreaded or event-driven service applications and multiprocess service applications. Multithreaded and event-driven service applications are able, with a single process, to handle multiple service requests concurrently. Multiprocess service applications are not.

Kubernetes workload management machinery is completely agnostic as to the real request concurrency level a given service is facing; agnostic in the sense that different concurrency rates by themselves do not have any impact on automated workload sizing or scaling. The underlying assumption, however, is that a given unit of deployment (a pod) is able to handle multiple requests concurrently.

PHP in nearly all deployment models is multiprocess. It requires multiple processes to be able to handle concurrent requests in a single deployment unit. Whether those processes are coordinated by FPM or by some other machinery is an implementation detail.

So it's fine to run nginx + FPM + PHP in a single container, even though it's not a single process. The number of processes itself doesn't matter; there is actually no rule in Docker about this. The ability to support concurrency does matter. One wants to deploy in a container/pod the minimal system to support concurrent requests, and in the case of PHP, usually putting it all in a single container is simplest.
We're hosting a lot of different applications on our Kubernetes cluster already, mostly Java based. For PHP-FPM + Nginx our approach is currently that we build a container which includes PHP-FPM, Nginx and the PHP application source code. But this actually breaks the one-process-per-container docker rule, so we were thinking about how to improve it. We tried to replace it by using a pod with multiple containers: an nginx and a PHP container.

The big question is where to put the source code. My initial idea was to use a data-only container, which we mount into the nginx and PHP-FPM containers. The problem is that there seems to be no way to do this in Kubernetes yet. The only approach that I see is creating a sidecar container which contains the source code and copies it to an emptyDir volume that is shared between the containers in the pod.

My question: Is there a good approach for PHP-FPM + Nginx and a data container on Kubernetes, or what is best practice to host PHP on Kubernetes (maybe still using one container for everything)?
PHP-FPM + Nginx on Kubernetes
After so many hours trying a lot of combinations, the way I got it working was:location ^~ /status { alias /mnt/data/site/www-cachet/public; try_files $uri $uri/ @status; location = /status/ { rewrite /status/$ /status/index.php; } location ~ ^/status/(.+\.php)$ { fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /mnt/data/site/www-cachet/public/$1; include fastcgi_params; } } location @status { rewrite /status/(.*)$ /status/index.php?/$1 last; }The most important thing wasfastcgi_param, I had to set it to an absolute path instead of$document_root$fastcgi_script_nameor something like it. I'm not sure if it's a good pratice, but addingaliasto the block just doesn't work, and neither nginx or FastCGI show us the path of the file they're trying to read.Nevertheless I couldn't get CachetHQ to work well. Problem is that all paths in source code are absolute, so they won't point to the subdirectory which our files are hosted. The solution was do something that I was reluctant since beginning: host it in a subdomain.
I'm trying to serve CachetHQ with nginx + php-fpm at a specific location. The docs give this as an example that serves at status.example.com (which works): server { listen 80; server_name status.example.com; root /var/www/Cachet/public; index index.php; location / { try_files $uri /index.php$is_args$args; } location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_index index.php; fastcgi_keep_conn on; } } However, instead of serving at status.example.com, I would like to serve at example.com/status. I was expecting that this would work, but from error.log I see it's trying /etc/nginx/htmlindex.php, when it should be /mnt/data/site/www-cachet/public/index.php: location /status/ { index index.php; root /mnt/data/site/www-cachet/public; try_files $uri index.php$is_args$args; location ~ ^/status/.+\.php$ { root /mnt/data/site/www-cachet/public; include fastcgi_params; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_index index.php; fastcgi_keep_conn on; } }
Nginx to host app in different location [closed]
That's an nginx timeout error. Look at the following article for some clues as to which parameter you need to adjust to avoid the timeout, if you really want to allow more than 10 minutes to complete the task.How do I prevent a gateway timeout with nginx
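For a plain proxied upstream, the usual suspects look like the sketch below; the exact directives depend on how Passenger is wired into nginx, and the 900s ceiling is just an assumed value for a 15-minute task:
# inside the relevant server or location block
proxy_connect_timeout 75s;
proxy_send_timeout    900s;
proxy_read_timeout    900s;  # allow the upstream up to 15 minutes to respond
send_timeout          900s;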
I'm running a Rails 3.0.7 project with Phusion Passenger on nginx. I was doing an ajax call which took about 15 mins to process; it threw an error in Firebug which said "504 Gateway Time-out" 10 mins after calling the ajax. Could someone give me some idea of how I could find the problem? Thanks, ben. Environment: OS: mac osx 10.6.7; ruby: 1.9.2p180 installed with rvm; gem: 1.6.2; passenger: 3.0.7; rails: 3.0.7; mysql: 5.5.10 installed with brew; nginx: 1.0.0 standalone installed with passenger
nginx 504 Gateway Time-out
Just faced the same issue (in a remi installation of nginx + php-fpm on a RHEL6 server); you can solve it by adding the following line in /etc/nginx/fastcgi_params: fastcgi_param SCRIPT_FILENAME $request_filename; I found this line missing in RHEL, while it is present in a perfectly working Debian nginx.
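In context, a location that picks this up might look like the following sketch (the socket path is an assumption; adjust it to your php-fpm setup):
location ~ \.php$ {
    # fastcgi_params now sets SCRIPT_FILENAME to $request_filename
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
}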
I use the following configuration for nginx: http://gist.github.com/340956 However, this configuration causes a No input file specified error with PHP. The only way I have been able to solve it is by altering this line: fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name; Note the "/" between $document_root and $fastcgi_script_name. I was informed that this is the wrong configuration but no one has been able to tell me exactly why my configuration requires this extra slash. How can I get rid of that extra slash?
nginx and trailing slashes on $document_root?
We have a very similar problem: we have many services and a Service Fabric cluster that runs on-premises. When it's time to use a load balancer, we install IIS on the same machine where the Service Fabric cluster runs. As IIS is a good load balancer, we use it as a reverse proxy only for the API gateway. Kestrel hosting is used for the other services that communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF; we used that URI to configure IIS. If you do not have the possibility to use IIS, then look at Using nginx as HTTP load balancer.
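If you do go the nginx route, the basic shape is a plain upstream pool in front of the gateway instances. A rough sketch, where the node addresses and port are placeholders for your cluster:
upstream sf_gateway {
    server 10.0.0.4:8080;  # API gateway on node 1 (assumed address)
    server 10.0.0.5:8080;  # API gateway on node 2 (assumed address)
}
server {
    listen 80;
    location / {
        proxy_pass http://sf_gateway;
    }
}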
As developers we wrote microservices on Azure Service Fabric and we can run them in Azure in some sort of PaaS concept for many customers. But some of our customers do not want to run in the cloud, as databases are on-premises and not going to be available from the outside, not even through a DMZ. That's ok; we promised to support it, as Azure Service Fabric can be installed as a cluster on-premises. We have an API-gateway microservice running inside the cluster on every virtual machine, which uses the name resolver, and requests are routed and distributed accordingly. But the API that the API gateway microservice provides is the entrance for another piece of client software our customers use; that software runs outside of the cluster and has to send requests to the API. I suggested using a load balancer like HAProxy or Nginx on a separate machine (or machines) that the client software sends its requests to, with the reverse proxy forwarding them to an available machine inside the cluster. It seems that is not what our customers want; another machine as load balancer is not an option. They suggest: make the client software smarter to figure out which host to go to. In other words, we should write our own fail-over/load balancer inside the client software. What other options do we have? Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address - is this even possible? Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/ Suggest an API gateway outside the cluster, like KONG https://getkong.org/ Something else? PS: The client applications do not send many requests per second, maybe a few per minute.
Load balancer for Azure Service Fabric Cluster on-premises
Invalidating the cache by changing asset URLs is normal practice. But for that to work you need your HTML files not to be cached forever, so that the browser will have some info when these names change. So use separate locations for HTML and assets. The matcher can be different, depending on how you store them, for example: location / { try_files $uri $uri/ =404; gzip_static on; } location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; }
I'm serving a single-page JavaScript application via nginx, and when I deploy a new version I want to force browsers to invalidate their JS cache and request/use the newest version available. So, for example, when I replace a file in the server's folder named my-app-8e8faf9.js with a file named my-app-eaea342.js, I don't want browsers to pull my-app-8e8faf9.js from their cache anymore. But when there is no new version available, I still want them to read the assets from cache. How do I achieve this with the nginx config? This is my existing config: server { listen 80; server_name my.server.com; root /u/apps/my_client_production/current; index index.html; # ~2 seconds is often enough for most folks to parse HTML/CSS and # retrieve needed images/icons/frames, connections are cheap in # nginx so increasing this is generally safe... keepalive_timeout 10; client_max_body_size 100M; access_log /u/apps/my_client_production/shared/log/nginx.access.log; error_log /u/apps/my_client_production/shared/log/nginx.error.log info; location / { try_files $uri $uri/ =404; gzip_static on; expires max; add_header Cache-Control public; } # Error pages error_page 500 502 503 504 /500.html; }
Expire assets cache in browsers when replacing fingerprinted files served via nginx
This setup worked for me: Include the nginx port in config.asset_host. Set config.assets.debug = false, config.assets.digest = true, config.assets.compile = true. Before starting the Rails server, run rm -rf public/assets; rake tmp:clear tmp:cache:clear assets:clean assets:precompile. Launch the Rails server. On every asset change, run rake assets:precompile again. Guard can take care of that.
I'm working on a Rails app with a high number of assets, which sadly cannot be reduced. In production this is not a problem, but in development, ~20 asset requests per visited page cannot be quickly served by an application server (like WEBrick or Thin). So I started using nginx in development for serving anything in public/assets. Note that nginx is purely a development facility - we don't intend to use it in production. For it to work I just had to do two things: set config.assets.debug to false, and run rake assets:precompile. Sadly there are two problems (the latter being the most important one) with my setup: Every asset change requires manually running rake assets:precompile again. For the app server to pick up the newly-compiled assets, I have to restart it. What is a correct nginx / Asset Pipeline setup which does not require a Rails server restart after precompilation? Automatic compilation would also be welcome.
Compile assets automatically and serve them with nginx (development)
Although you already mentioned that you switched to Varnish to accomplish what you asked for, the correct answer would have been to use the headers-more-nginx-module, which basically allows you to do the same as the Varnish function does (and much more).
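A minimal sketch of what that could look like, reusing the backend address from the question; note that this strips every Set-Cookie header from the upstream response, so removing only cookies A, B and C selectively would need extra matching logic on top:
location / {
    proxy_pass http://x.x.x.x:8080;
    # requires nginx built with the headers-more-nginx-module
    more_clear_headers 'Set-Cookie';
}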
I have nginx set up as a reverse proxy server and I want to remove certain cookies set on the backend server (Apache). My website uses a lot of cookies which I cannot control (Expression Engine CMS, don't ask me why). I want to delete some of those cookies (let's say cookies A, B and C) and keep some others (cookies D and E). After that I will set up nginx to respond with cached content only if the request has no cookies. Do you have any idea how to do this? Thanks. So far I have in my config: proxy_cache_path /opt/nginx/cache levels=1:2 keys_zone=mycache:20m max_size=1G; proxy_temp_path /opt/nginx/tmp_cache/; proxy_ignore_headers Expires Cache-Control Set-Cookie; proxy_cache_use_stale error timeout invalid_header http_502; proxy_cache_bypass $cookie_nocache; proxy_no_cache $cookie_nocache;...location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache mycache; proxy_cache_valid 200 302 6h; proxy_cache_valid 404 1m; proxy_pass http://x.x.x.x:8080; }
How to remove certain cookies from nginx response [closed]
The solution was to add index index.php: index index.php; try_files $uri $uri/ $uri/index.php /index.php; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; }
Below is my nginx.conf. In the case of non-existing files, /index.php is served fine. But when my URL is /foo/bar => /foo/bar/index.php is served as PHP source code via download. Any ideas? try_files $uri $uri/ $uri/index.php /index.php; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; }
Try_files does not hit PHP (NginX configuration)
This happens because when you hardcode the value passed to proxy_pass (http://nginx.org/r/proxy_pass) without using any variables, the default resolver from /etc/resolv.conf is used at the time the configuration is parsed and loaded; any subsequent changes in the IP address won't be picked up. If, instead, you use variables, then you must also use the resolver directive (http://nginx.org/r/resolver) to specify a resolver. Note that you can still use a DNS name when specifying a resolver, but keep in mind that such a name will likely only be resolved once, at configuration load or reload time. Of course, as per Dayo's answer, it's best to use a local DNS resolver for security, but if, for example, you know that all your domains will be delegated to a certain authoritative nameserver, including ns2.he.net., then you might as well simply specify such a server as the resolver. Speaking of security, however, it doesn't seem like a very good idea to trust the user's input for specifying the upstream server. This is one of those things that greatly increases the attack surface: it lets anyone use your server as a free proxy_pass to anywhere on the internet (potentially exhausting the resources available for valid use), and it lets a malicious actor try to exploit a potential vulnerability in your nginx via a malicious upstream server controlled by the attacker (take a look at CVE-2013-2070, for example).
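Put together for the /proxydownload location in question, a sketch would be (8.8.8.8 is just an example resolver; prefer a local, trusted one for the security reasons above):
location /proxydownload {
    resolver 8.8.8.8 ipv6=off;
    proxy_pass $arg_url;
}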
I'm using Nginx and trying to use proxy_pass to redirect to a URL that comes as a query string. I also want to avoid passing any other parameters to that URL. This is the URL I'm sending to the proxy: http://10.10.10.10/proxydownload?url=http://www.test.com/d/guid/download&session=123 This is what I have in nginx.conf: location /proxydownload { proxy_pass $arg_url; } However, this is generating a 502 error, and I don't know why. According to the logs, $arg_url contains "http://www.test.com/d/guid/download", and that's the URL I want to hit. I tried to hardcode the URL in proxy_pass and it worked: location /proxydownload { proxy_pass http://www.test.com/d/guid/download; } Is there something incorrect in the way I use $arg_url?
Nginx proxy_pass redirect to URL from query string
If you use the stock controllers you will be able to switch on hostname and go to different backend services. It sounds like you don't want to enumerate all the subdomain -> service mappings, in which case you probably need to write your own controller that writes out an nginx config that uses $http_host in the appropriate proxy_pass or redirect lines. Give it a shot (https://github.com/kubernetes/contrib/tree/master/ingress/controllers) and file bugs in that same repo if you need help.
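The hand-written template would boil down to something like this sketch; the cluster DNS name, namespace, and subdomain capture are assumptions:
server {
    listen 80;
    # capture the subdomain from the Host header
    server_name ~^(?<subdomain>.+)\.bar\.com$;
    location / {
        # assumed cluster DNS; a literal cluster IP such as 10.96.0.10 also works
        resolver kube-dns.kube-system.svc.cluster.local valid=10s;
        proxy_pass http://$subdomain.default.svc.cluster.local;
    }
}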
I need each of my users to access a service at a custom URL, e.g. abccompany.mycloudapp.com, each service being a Kubernetes service. I'm looking at ingress controllers, but I need a way to use a wildcard host field and somehow read the value into the path: and service: fields; here's a sample ingress of what I have in mind: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test spec: rules: - host: *.bar.com http: paths: - path: /{{ value of * in *.bar.com }} backend: serviceName: {{value of * in *.bar.com }}Svc servicePort: 80
Kubernetes Ingress controllers for wildcard url mapping
Here is what I've ended up doing. I've not rolled it out to our prod servers yet, but all testing thus far looks good. Nginx does not support CGI natively, so you need another means to do it. thttpd fit the bill nicely. There is a good write-up on the nginx wiki showing how to use it. I configured thttpd with the following: dir=/var/www/htdocs user=thttpd logfile=/var/log/thttpd.log pidfile=/var/run/thttpd.pid port=8000 cgipat=**.cgi And added this to my nginx config: error_page 502 @thttpd; location @thttpd { include proxy.include; proxy_pass http://127.0.0.1:8000; } Finally, I created a basic CGI script that calls PHP on the command line and passes in my already-written PHP script. This was an ideal solution for me because the script was already set up to log to our alerts table and fire off an email. This is also real-time, as the script will execute as soon as nginx returns a 502 code (subsequent 502s will not hammer me with emails, per the logic of the script). I was able to run some simulation tests by forcing nginx to return a 502 (see more here). I'm going to continue tweaking this, but I'm pretty happy with the relative ease of deploying it and that I could re-use existing code.
I wrote a quick PHP page to handle 502 requests. Nginx will re-direct to this page when a 502 is encountered and an email is fired off.The problem is, most of the time that the 502 is encountered is because PHP has died, so writing to the DB and sending an email using PHP is no longer possible. Tweaks to PHP-FPM settings have done a lot to help (restarting PHP, etc), but I'd still like a fall-back.There are numerous ways to send an email outside of PHP, but I am curious what others out there are doing with good success? I'd like to keep it simple for configuration (i.e. not have yet another complex dependency to worry about on the servers) and reliability reasons.Googling and searching SO didn't turn up much, probably because "dies" and "fail" bring back a lot of false positives for my scenario.
Best way to send email when PHP process dies
After testing I found out it was Turbolinks causing the issue. It was doing an XHR request in the background, downloading the file first and then allowing the browser to actually download the file. After adding 'data-no-turbolink'='true' to my link, the files download instantly.
I have a download link that goes to a method in a controller which uses send_file so that I may rename the file (it is an MP3 with a UUID as a filename). After clicking on the link I see the request in the NGINX logs and Rails logs; however, it takes up to 90 seconds before the download begins. I have tried various settings with proxy_buffers and client_*_buffers with no effect. I have an HTML5 audio player that uses the real URL for the file and it streams the file right away with no delay. My NGINX config: upstream app { server unix:/home/archives/app/tmp/unicorn.sock fail_timeout=0; } server { listen 80 default deferred; server_name archives.example.com; root /home/archives/app/public/; client_max_body_size 200M; client_body_buffer_size 100M; proxy_buffers 2 100M; proxy_buffer_size 100M; proxy_busy_buffers_size 100M; try_files /maintenance.html $uri/index.html $uri.html $uri @production; location @production { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Sendfile-Type X-Accel-Redirect; proxy_set_header X-Accel-Mapping /home/archives/app/public/uploads/audio/=/uploads/audio/; proxy_redirect off; proxy_pass http://app; } location ~ "^/assets/*" { gzip_static on; expires max; add_header Cache-Control public; } location ~ (?:/\..*|~)$ { access_log off; log_not_found off; deny all; } error_page 500 502 503 504 /500.html; location = /500.html { root /home/archives/app/public; } } Rails controller: def download send_file @audio.path, type: @audio_content_type, filename: "#{@audio.title} - #{@audio.speaker.name}" end
NGINX download slow to start with send_file
I have actually been through so many solutions on StackOverflow today, and sadly none of them worked; some even gave horrid recommendations. What's scary is how many I came across that were marked as answers. I just did a brand new Ubuntu 16.04 LEMP server; everything was cleanly installed this morning: Nginx, MySQL, PHP 7.0 and phpMyAdmin. This problem of redirecting to h**p://my.server.ip/ after logging into phpMyAdmin, instead of h**p://my.server.ip/phpmyadmin, actually has nothing to do with cgi.fix_pathinfo being set to 0, as recommended by all those guides you read. Read up a little more on why it should be set to 0 in your php.ini file and don't just go and disable it as above. So in other words, leave (as recommended to you) cgi.fix_pathinfo = 0 in your config file for PHP. THE FIX, from this web site (the only one with the correct answer), is to add the following to your /etc/nginx/sites-available/default configuration file. Then restart Nginx ... it works immediately, with no more redirecting back to root after login. # Phpmyadmin Configurations location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } location /phpMyAdmin { rewrite ^/* /phpmyadmin last; }
I'm setting up phpMyAdmin with nginx. I can visit phpMyAdmin athttp://localhost/phpmyadmin. However, when I logged in, the URL is redirected tohttp://localhost/sql.phpinstead ofhttp://localhost/phpmyadmin/sql.php.I have phpMyAdmin symlinked in my /var/www/html/ folder.sudo ln -s /usr/share/phpmyadmin /var/www/html/phpmyadminserver { listen 80 default_server; listen [::]:80 default_server; # SSL configuration # # listen 443 ssl default_server; # listen [::]:443 ssl default_server; # # Note: You should disable gzip for SSL traffic. # See: https://bugs.debian.org/773332 # # Read up on ssl_ciphers to ensure a secure configuration. # See: https://bugs.debian.org/765782 # # Self signed certs generated by the ssl-cert package # Don't use them in a production server! # # include snippets/snakeoil.conf; root /var/www/html; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html index.php; server_name _; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { include /etc/nginx/snippets/fastcgi-php.conf; # With php7.0-fpm: fastcgi_pass unix:/run/php/php7.0-fpm.sock; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }
Nginx with phpmyadmin wrong redirect after login
When you load your typical large Python web application on top of the most popular WSGI servers, the performance difference isn't actually that much and is usually nothing to get excited about. Hello world benchmarks like the one you quote are very misleading, as they test a very narrow use case and the configurations used are usually never comparable. You should consider watching my PyCon talk, which talks about bottlenecks in web servers and web applications. http://pyvideo.org/video/703/web-server-bottlenecks-and-performance-tuning Given that the WSGI server is not usually the problem, you should just choose the one you find easiest to manage and that has the sorts of features you think you will require. Then use benchmarking and monitoring of that choice to work out how to set it up so as to perform best for your specific web application. Even then, any increase in performance or gain in user satisfaction is not usually going to come from such tuning.
What are the advantages and disadvantages of using nginx + Apache + mod_wsgi vs nginx + uWSGI (in a virtualenv) in production? The advantage I see in the first variant is that mod_wsgi has been in development since 2007, has more stable releases, and is easily administered. The advantage of the second variant is higher performance (see the Benchmark of Python WSGI Servers) and the ability to run the uWSGI server in a virtualenv, which is more secure. A disadvantage of the second variant is that there is still no major version, and you need to create controlling scripts for starting the uWSGI servers for each virtual host (or use supervisor). What do you think about it?
Compare nginx+Apache+mod_wsgi vs nginx+uWSGI?
There are two types of time zone settings. One is at the system level, which you can set using /etc/localtime. See the Dockerfile steps below: ENV TZ=America/Los_Angeles RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone PS: Taken from https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes You can also refer to another article, Using docker-compose to set containers timezones. Next is the PHP/app-level setting. For that you can create an ini file, which can be done by adding the line below to the Dockerfile: RUN printf '[PHP]\ndate.timezone = "US/Central"\n' > /usr/local/etc/php/conf.d/tzone.ini
I need to set the default timezone in a Dockerfile. I have two containers (nginx and php7-fpm).When I enter the PHP container's bash and runphp --info | grep timezoneI get:Default timezone => UTCdate.timezone => no value => no valueMy dockerfiles are the following:nginx/Dockerfile:FROM debian:jessie RUN apt-get update && apt-get install -y nginx ADD nginx.conf /etc/nginx/ ADD site.conf /etc/nginx/sites-available/ RUN ln -s /etc/nginx/sites-available/site.conf /etc/nginx/sites-enabled/site RUN rm /etc/nginx/sites-enabled/default RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf RUN usermod -u 1000 www-data CMD ["nginx"] EXPOSE 80 EXPOSE 443php-fpm/Dockerfile:FROM php:7.0-fpm RUN apt-get update && apt-get install -y git unzip RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer RUN composer --version RUN rm /etc/localtime RUN ln -s /usr/share/zoneinfo/Europe/Madrid /etc/localtime RUN "date" RUN docker-php-ext-install pdo pdo_mysql RUN pecl install xdebug RUN docker-php-ext-enable xdebug RUN echo "error_reporting = E_ALL" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "display_startup_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "display_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "xdebug.remote_connect_back=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "xdebug.idekey=\"PHPSTORM\"" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini RUN echo "xdebug.remote_port=9001" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini WORKDIR /var/www/siteI've tried to use the answers of similar questions with no results.Note: I'm usingdocker-compose buildanddocker-compose up -dto run the complete stack, which is exactlythis one.
Configure timezone in dockerized Nginx + PHP-FPM
Just use nginx's configuration.While OS X Lion's Network Link Conditioner works as expected it's stillannoyingto use when I'm really just trying to test a subset of a web app's behavior--i.e., the slow video buffering handling system.As such, I've found it much more convenient to set rate limiting in mynginx.conffile, e.g.,:location ~ /files/(.*\.(mp4|m4v|mov))$ { ... limit_rate 50k; # <-- Limit download rate per connection to 50kbps ... }EDIT: See thenginx HttpCoreModule docs.
We have a customized Flash/HTML5 video player we use for users on our site. I'm currently fleshing out the experience for users who have 'suboptimal' bandwidth--basically we'd like the client-side code to be able to detect poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment. Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes? More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming.) EDIT: It may be relevant that the dev environment is Mac OS X.
Simulate poor bandwidth in a testing environment (Mac OS X)?
syntax: merge_slashes [on|off] default: merge_slashes on context: http, server You must use: merge_slashes off;
I have a web service which takes several filter parameters, something like: http://mydomain.com/filter1/value1/filter2/value2/filter3/value3 The tricky thing is that sometimes some of the filter values are absent, so URLs like this could be passed to the service: http://mydomain.com/filter1//filter2//filter3/value3 Now I need to configure my nginx (or fastcgi) to keep the double slashes. Currently it's replacing double slashes with single ones. I'm new to nginx & fastcgi configuration and I don't know how to do that. I captured the request_uri from my PHP script when I requested the second URL, and I got http://mydomain.com/filter1/filter2/filter3/value3 Please help me. Thanks in advance.
nginx: How to keep double slashes in urls
I didn't want to change the current document root (/var/www/html) since my 'ci' folder is located at /var/www/html/ci. So instead, I created a new location block in /etc/nginx/conf.d/default.conf: server{ ... location /ci { try_files $uri $uri/ /ci/index.php?/$request_uri; } ... } Thanks to Mert Öksüz for suggesting to use try_files $uri $uri/ /ci/index.php?/$request_uri;. This one also works: location /ci { try_files $uri $uri/ /ci/index.php?$query_string; }
/etc/nginx/conf.d/default.conf: server{ listen 80; listen [::]:80; server_name 192.168.56.101 192.168.101.100 localhost; root /var/www/html; index index.php index.html index.htm; location / { try_files $uri $uri/ =404; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/html; } location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } My CodeIgniter folder is 'ci', located at /var/www/html/ci. What configuration do I need to make URL rewriting work?...
NGINX server configuration for Codeigniter
location / { rewrite ^/(.*)$ /index.php?q=$1; } location = /index.php { # Do your normal PHP passing stuff here } Is that what you were looking for? As an answer to your second question, you can parse the protocol in PHP; Nginx doesn't need to do that. To parse the URL, you can use the parse_url function.
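If it helps, the stub could be filled in roughly like this; a sketch only, and the socket path is an assumption:
location = /index.php {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}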
Let's say I have a web server (nginx) at server.com where I have only one PHP file, index.php (there is no directory structure). I want to be able to access anything after server.com. It will be a URL structure. For example server.com/google.com, server.com/yahoo.com.au etc... An example would be http://whois.domaintools.com/google.com (They don't have a directory that's called /google.com, right?) Q1: How can I access whatever is after 'server.com' from index.php? Q2: Can I get the protocol from such a URL? For example server.com/http://www.google.com or server.com/https://www.google.com PS: I'm not certain if the term virtual directory is used here correctly. I just want to do what I saw somewhere else.
How to make nginx virtual directories accessible in php?
You can use options to specify the path to the 'nginx' binary and conf directory (it seems that certbot expects an nginx.conf file in nginx's installation directory if you do not specify it manually): certbot certonly --nginx --nginx-ctl /usr/local/openresty/nginx/sbin/nginx --nginx-server-root /usr/local/openresty/nginx/conf
I'm trying to install certbot on my digital ocean droplet. I'm using Ubuntu 20.04 and following instructions from https://certbot.eff.org/lets-encrypt/ubuntufocal-nginx. The error occurs when I run sudo certbot --nginx. The error I get is: The nginx plugin is not working; there may be problems with your existing configuration. The error was: NoInstallationError("Could not find a usable 'nginx' binary. Ensure nginx exists, the binary is executable, and your PATH is set correctly.") This is my first time using digital ocean and such, so please explain the solution. Thank you.
Could not find a usable 'nginx' binary. Ensure nginx exists, the binary is executable
I had a similar issue when moving to Amazon Linux 2. Simply creating a file at .platform/nginx/conf.d/ called proxy.conf with the content below was enough for me. client_max_body_size 50M; If you go digging around the main nginx config you'll see how this file is included in the middle of it, so there's no need to wrap the directive in an http block. This is similar to adam tropp's answer, but it follows the example given by AWS.
I followed the advice here to configure the nginx reverse proxy to allow files larger than the default 1 MB. So, my code in /.platform/nginx/conf.d/prod.conf looks like this: http { client_max_body_size 30M; } However, this seems to have no effect, and nginx still registers an error when I try to upload a file larger than 1 MB. I also tried doing this without the http block and braces, as detailed in the accepted answer to this question, like this: client_max_body_size 30M; This also had no effect. I thought it might be necessary to restart nginx after applying the configuration, so I added a file in the .ebextensions directory called 01nginx.config, which looks like this: commands: 01_reload_nginx: command: "sudo service nginx reload" This also had no effect. I have seen this question and the above-referenced question, as well as this one. However, they all seem either outdated or non-applicable to an Amazon Linux 2 instance, since none of them mention the .platform directory from the above-referenced Elastic Beanstalk documentation. In any case, none of their answers have worked for me thus far. So, what am I missing?
How to extend nginx config in elastic beanstalk (Amazon Linux 2)
This is what fixed the issue, thanks to @Paulo Almeida. In the nginx file I changed what I previously had to... location /protectedMedia/ { internal; root /home/{site-name}/; } My URL is... url(r'^media/', views.protectedMedia, name="protect_media"), And the view is... def protectedMedia(request): if request.user.is_staff: response = HttpResponse(status=200) response['Content-Type'] = '' response['X-Accel-Redirect'] = '/protectedMedia/' + request.path return response else: return HttpResponse(status=400) This works perfectly! Now only admin users can access the media files stored in my media folder.
I have been fumbling around with trying to protect Django's media files with no luck so far! I am simply trying to make it where ONLY admin users can access the media folder. Here is my Nginx file: server { listen 80; server_name xxxxxxxxxx; location = /favicon.ico {access_log off; log_not_found off;} location /static/ { alias /home/{site-name}/static_cdn/; } location /media/ { internal; root /home/{site-name}/; } location / { this is setup and working. Didn't include Code though } } My URL file: urlpatterns = [ url(r'^media/', views.protectedMedia, name="protect_media"), ] And my view: def protectedMedia(request): if request.user.is_staff: response = HttpResponse() response['Content-Type'] = '' response['X-Accel-Redirect'] = request.path return response else: return HttpResponse(status=400) This is producing a 404 Not Found Nginx error. Does anything look blatantly wrong here? Thanks! BTW, I have tried adding /media/ to the end of the root URL in the Nginx settings.
Django and Nginx X-accel-redirect
That is because you also have to specify the gem location (specifically, where bundler is installed) in your nginx start script as well. bin/start #!/bin/bash TMPDIR=/home/shadyfront/webapps/truejersey/tmp GEM_HOME=/home/shadyfront/.rvm/gems/ruby-1.8.7-p330@true /home/shadyfront/webapps/truejersey/nginx/sbin/nginx -p /home/shadyfront/webapps/truejersey/nginx/
If I run bundle install, everything passes. I reboot nginx, and when I visit the site I see the Passenger error with this: git://github.com/spree/spree.git (at master) is not checked out. Please run `bundle install` (Bundler::GitError) My Gemfile: source 'http://rubygems.org' gem 'rails', '3.0.3' gem 'spree', :git => 'git://github.com/spree/spree.git' gem 'haml' gem 'ruby-debug' gem 'sqlite3', :require => 'sqlite3' gem 'ckeditor', '3.4.2.pre' gem "aged_revolt", :require => "aged_revolt", :path => "aged_revolt" gem "spree_easy_contact", '1.0.2', :path => "#{File.expand_path(__FILE__)}/../vendor/gems/spree_easy_contact-1.0.2" gem "honeypot-captcha" When I run bundle show spree: /home/shadyfront/.rvm/gems/ruby-1.8.7-p330@revolting_gems/bundler/gems/spree-44e4771f3a2a Any idea how/why this is occurring and how I can get past this? This is my nginx.conf: env GEM_HOME=/home/shadyfront/.rvm/gems/ruby-1.8.7-p330@revolting_gems; worker_processes 1; events { worker_connections 1024; } http { access_log /home/shadyfront/logs/user/access_revolting_age.log combined; error_log /home/shadyfront/logs/user/error_revolting_age.log crit; include mime.types; passenger_root /home/shadyfront/webapps/revolting_age/gems/gems/passenger-2.2.15; passenger_ruby /home/shadyfront/webapps/revolting_age/bin/ruby; sendfile on; passenger_max_instances_per_app 1; rails_spawn_method conservative; passenger_max_pool_size 2; server { listen 56943; passenger_enabled on; root /home/shadyfront/webapps/revolting_age/releases/20110215175319/public; server_name localhost; } }
Installing Gems with Bundler == Big problem
A 502 Bad Gateway error is not caused by static HTML like you just displayed. The server was probably having an internal error or an error communicating with other servers - maybe there was a (temporary) overload, or another server/service was not reachable. Does it still happen when you clear your cache or use another browser/computer? Can you tell us more about your webserver, and its links to other servers/services?
My boss was messing around with this page and suddenly it stopped working and started giving us a 502 Bad Gateway error. Is there anything you can see that explains why this happened? [The posted HTML survives only as stripped text: the page's navigation menu - About A Deo, Our Wines, Tenuta A Deo (Red / White / Oil), Popova Kula, Kokino, Lucca Olive Oil, The Farm, Villa Lucca, Casa Casciani, Tourist information, How to Purchase, Gallery.]
error Bad Gateway NGINX 502 PHP-FPM fastcgi
Try to remove this line: try_files $uri $uri/ =404; With this directive nginx tries to serve a static file (or directory), and returns 404 if there is no such file.
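In other words, the proxy location reduces to a trimmed version of the asker's own block, roughly:
location / {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # no try_files here: every unmatched path goes straight to Express
}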
I'm trying to configure an Express server with NGINX as a reverse proxy. NGINX to serve static files, and Express for the dynamic content.Problem : The normal root link works (website.com) , but when I navigate to (website.com/api), I get a 404 from NGINXThis is my server.js :var express = require("express"); var app = express(); var server = app.listen(process.env.PORT || 5000); console.log("Server Running"); app.get("/",function(req,res){res.send("HOME PAGE")}); app.get("/api", function(req, res) { res.send('API PAGE'); });This is my NGINX Config file:server { listen 80 default_server; listen [::]:80 default_server; server_name website.com www.website.com; location ~ ^/(assets/|images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) { root /home/foobar/public; #this is where my static files reside access_log off; expires 24h; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; proxy_pass http://localhost:5000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; try_files $uri $uri/ =404; } }
How to setup routes with Express and NGINX?
Each query parameter is exposed as a variable prefixed with $arg_ in the configuration file. For example, device would become $arg_device. Using this you can make the comparison check within your location block, for example: location / { if ($arg_device = desktop) { return 301 $uri; } }
Using Nginx, I am trying to permanently redirect URLs with a device GET parameter (http://www.example.org/page?device=desktop) to the same URL without this parameter (http://www.example.org/page). I did this, but it doesn't work: location { rewrite ^(.*)\?device=desktop $1 permanent; }
Nginx redirect URL with specific query parameter
Use add_header Content-MD5 $upstream_http_content_md5; Since X-Accel-Redirect causes an internal redirect, nginx will not send the returned headers, but it will keep them in $upstream_http_... variables. So you can use them.
I am serving restricted downloads in Rails using X-Accel-Redirect with nginx. To validate the downloads in my client app, I am trying to send the checksum in the non-standard HTTP header Content-MD5 with the X-Accel-Redirect request, but this is not working. Below is the Rails snippet used to do the redirection: headers['X-Accel-Redirect'] = '/download_public/uploads/stories/' + params[:story_id] +'/' + params[:story_id] + '.zip' headers['X-Accel-Expires'] = 'max' checksum = Digest::MD5.file(Rails.root.dirname.to_s+'/public/uploads/stories/' + params[:story_id] +'/' + params[:story_id] + '.zip').hexdigest headers['Content-MD5'] = checksum request.session_options[:skip] = true render :nothing => true, :content_type => MIME::Types.type_for('.zip').first.content_type This is the nginx section: location /download_public { internal; proxy_pass_header Content-MD5; add_header Cache-Control "public, max-age=315360000"; add_header Content-Disposition "inline"; alias /var/www/sss/public; } This is not working, apparently: I am not able to get the Content-MD5 header in my responses. Is there any way to pass my Content-MD5 header from Rails? I know there are ways to do this entirely in nginx, like compiling nginx with Perl or Lua and easily calculating the MD5 on the fly, but I don't want to do that. Any help is much appreciated.
Adding custom HTTP headers to nginx X-Accel-Redirect
You can build your own image, and in the Dockerfile you can apt install what you need, but there is also an official image with Apache + php-fpm here: https://hub.docker.com/_/php so you don't have to - it's ready to go. But I believe your setup could also work by exposing your php-fpm port and configuring your Apache FastCgiExternalServer to point at this port instead of a unix socket.
We can deploy apache and php in separate docker containers and then link them. But is there any way to install apache locally (using apt-get install apache2) and php-fpm in a docker container and then link them? Thanks
How to deploy php-fpm on docker container and apache/nginx on localhost (Ubuntu)
As happens in these cases, I was actually editing the wrong configuration file, one that didn't get loaded by Nginx. Adding the following to the right file did the trick: fastcgi_read_timeout 600; fastcgi_send_timeout 600; fastcgi_connect_timeout 600;
I am having issues with a long-running PHP script:<?php sleep(70); # extend 60s phpinfo();Which gets terminated every time after 60 seconds with a response504 Gateway Time-outfrom Nginx.When I inspect the Nginx errors I can see that the request times out:... [error] 1312#1312: *2023 upstream timed out (110: Connection timed out) while reading response header from upstream, ... , upstream: "fastcgi://unix:/run/php/php7.0-fpm.sock", ...I went through the related questions and tried increasing the timeouts creating a/etc/nginx/conf.d/timeout.conffile with the following content:proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; fastcgi_read_timeout 600; fastcgi_send_timeout 600; fastcgi_connect_timeout 600;I also read through the Nginx documentation for bothfastcgiandcoremodules, searching for any configurations with defaults set to 60 seconds.I ruled out theclient_*timeouts because they returnHTTP 408instead ofHTTP 504responses.This is my Nginx server config portion of FastCGI:location ~ \.php$ { fastcgi_pass unix:/run/php/php7.0-fpm.sock; include fastcgi_params; }From what I read so far this doesn't seem to be an issue with PHP rather Nginx is to blame for the timeout. Nonetheless, I tried modifying the limits in PHP as well:My values from thephpinfo():default_socket_timeout=600 max_execution_time=300 max_input_time=-1 memory_limit=512MThe php-fpm pool config also has the following enabled:catch_workers_output = yes request_terminate_timeout = 600There is nothing in the php-fpm logs.I am also using Amazon's Load Balancer to route to the server, but the timeout configuration is also increased fromthe default 60 seconds.I don't know where else to look, during all the changes I restarted both php-fpm and nginx.Thank you
Nginx + Php-fpm fastcgi upstream timed out
This is a subjective thing and use-case dependent. So the question you should ask yourself is: what is the max size beyond which you don't want to allow an upload? Then use that. The next mistake people make is to just set client_max_body_size 150M; in the nginx config's server block. This is actually wrong, because you don't want to allow people to upload 150M of data to every URL. You will have a specific URL for which you want the upload to be allowed. So you should have a location like the one below: location /upload/largefileupload { client_max_body_size 150M; } And for the rest of the URLs you can keep it as low as 2 MB. This way you will be less susceptible to a generic DDoS attack (a large-body upload attack). See the URL below: https://www.tomaz.me/2013/09/15/avoiding-ddos-attacks-caused-by-large-http-request-bodies-by-enforcing-a-hard-limit-in-your-web-server.html
What is the maximum recommended value of client_max_body_size on Nginx for uploads of large files? The web app that I'm working on right now will expect uploads of max 100 MB. Should I set client_max_body_size to something like 150 MB to upload in a single request, or use the slice strategy and send chunks of 1 MB to the server, keeping client_max_body_size low?
Maximum recommended client_max_body_size value on Nginx
The trailing slash does this magic; take it out of proxy_pass and it should help: server { listen 80; server_name example.com; location /work/ { proxy_pass http://10.255.8.77:8065; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header HOST $host/work; proxy_read_timeout 90; } } Let's see through the docs: A request URI is passed to the server as follows: If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive: location /name/ { proxy_pass http://127.0.0.1/remote/; } If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI: location /some/path/ { proxy_pass http://127.0.0.1; }
I want to use an nginx location to proxy my applications. nginx (IP address): 10.255.1.10; PHP: 10.255.1.20. IP access: 10.255.1.20/ "access ok (200)" 10.255.1.20/api "access ok (200)" 10.255.1.20/project "access ok (200)" But when I access through the nginx proxy I get 404: example.com/work "access ok (200)" example.com/work/api "access not found (404)" example.com/work/project "access not found (404)" Nginx config file: server { listen 80; server_name example.com; location /work/ { proxy_pass http://10.255.8.77:8065/; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header HOST $host/work; proxy_read_timeout 90; } } I want this: "curl http://example.com/work 200" "curl http://example.com/work/api 200" "curl http://example.com/work/project 200" Thanks, everybody.
Nginx proxy and remove proxy_pass prefix
This has been answered before: https://serverfault.com/questions/431274/nginx-services-fails-for-cross-domain-requests-if-the-service-returns-error. add_header doesn't work with HTTP error responses, but the optional headers_more module can be used to work around this limitation.
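With that module compiled in, a minimal sketch looks like this; unlike plain add_header, more_set_headers also applies to error responses (the upstream name is an assumption):
location / {
    more_set_headers 'Access-Control-Allow-Origin: *';
    proxy_pass http://backend;
}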
I want to add CORS to my server. I have configured my nginx according to this: https://michielkalkman.com/snippets/nginx-cors-open-configuration.html It seems to work fine when the server returns 200. However, if the server returns something else, like 400 when the request is wrong, or 500 on an internal error, the browser shows the No 'Access-Control-Allow-Origin' header error instead of reaching the error handler like it should. What configuration am I missing to make it work?
How to make nginx CORS configuration work when server returns error?
The answer is found in this thread: React-router and nginx. What I had to do was modify the default configuration file in /etc/nginx/sites-available/default to: location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri /index.html; }
I'm using"react-router-dom": "^4.2.2".If I test onlocalhost:3000/secondit works perfectly.When I upload this on ubuntu server with nginx and I trywww.website.com, it works . When I try to usewww.website.com/secondit gives me404 not found. I'm usingcreate-react-app.app.jsclass TestRoutes extends React.Component{ constructor(props){ super(props); } render(){ return( ); } } ReactDOM.render(, document.getElementById("root"));/etc/nginx/sites-available/defaultHere's the configuration file from the serverserver { listen 443 ssl; root /var/www/reactcamera/build; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html; server_name website.com www.website.com; ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem; # managed by Certbot location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; }
React Router routes not working on nginx create-react-app
You can configure nginx to start automatically on system boot using the command below: # chkconfig nginx on Once you run the above command, nginx will always be started whenever the system boots. You can check whether the service is configured to start automatically on system boot using: # chkconfig nginx --list You can disable the auto start using: # chkconfig nginx off
A few weeks ago I configured an EC2 server on AWS; the database is on RDS and I use nginx as the web server. When I reboot the server from the AWS console, nginx won't start automatically; I have to start it with the service nginx start command. Is there any way to configure nginx so it is started when I reboot my EC2 instance?
nginx service won't start after reboot AWS Linux server
Thanks Tarun for the detailed explanation. I discussed it within the team and we ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below: curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"
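For reference, the extra vhost is just a plain-HTTP listener next to the proxy-protocol one. A rough sketch, where the ModSecurity directives and rules path are assumptions based on the v3 nginx connector:
server {
    listen 80;   # no proxy_protocol here, so a local curl works
    server_name localhost;
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;  # assumed rules path
    location / {
        return 200;
    }
}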
Environment: I have set up Proxy Protocol support on an AWS classic load balancer as shown here, which redirects traffic to backend nginx (configured with ModSecurity) instances. Everything works great and I can hit my websites from the open internet. Now, since my nginx configuration is done in AWS User Data, I want to do some checks before the instance starts serving traffic, which is achievable through AWS Lifecycle hooks. Problem: Before enabling proxy protocol I used to check whether my nginx instance is healthy, and ModSecurity is working, by checking for a 403 response from this command: $ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'" After enabling Proxy Protocol, I can't do this anymore, as the command fails with the error below, which is expected as per this link. # curl -k https://localhost -v * About to connect() to localhost port 443 (#0) * Trying ::1... * Connected to localhost (::1) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * NSS error -5938 (PR_END_OF_FILE_ERROR) * Encountered end of file * Closing connection 0 curl: (35) Encountered end of file # cat /var/logs/nginx/error.log 2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ򫂱�u��%d�z��mRN�[e��<�,� �+̩� �0��/̨��98k�̪32g�5=�/< " while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443 What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
Verify if nginx is working correctly with Proxy Protocol locally
Nginx can act as an L3/4 balancer with the stream module: https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ Because SSL is still TCP, Nginx can proxy SSL traffic without terminating it. The stream module can also terminate SSL traffic, but that is optional. Example 1: TCP tunnel for IMAP over SSL without SSL termination: stream { upstream stream_backend { server backend1.example.com:993; server backend2.example.com:993; } server { listen 993; proxy_pass stream_backend; } } In this case, SSL termination is handled by backend1/2. Example 2: TCP tunnel for IMAP with SSL termination: stream { upstream stream_backend { server backend1.example.com:443; server backend2.example.com:443; } server { listen 993 ssl; proxy_pass stream_backend; ssl_certificate /etc/ssl/certs/server.crt; ssl_certificate_key /etc/ssl/certs/server.key; } } In this case the traffic between nginx and backend1/2 is unencrypted (port 443 is used for IMAP). Example 3: Receive unencrypted and encrypt it: stream { upstream stream_backend { server backend1.example.com:993; server backend2.example.com:993; } server { listen 443; proxy_pass stream_backend; proxy_ssl on; proxy_ssl_certificate /etc/ssl/certs/backend.crt; proxy_ssl_certificate_key /etc/ssl/certs/backend.key; } } So, clients connect to our nginx without SSL and this traffic is proxied to backend1/2 using SSL encryption.
For certain reasons, I need to set up Nginx TCP load balancing, but with SSL termination, and I am not sure whether Nginx can do this. Since TCP is layer 4 and SSL is layer 5, SSL pass-through definitely works. But what about SSL termination?
Can Nginx do TCP load balance with SSL termination?
The following snippet does this in a general way, without having to hard code any hostnames (useful if your server config handles requests for multiple domains). Add this inside any server definition that you need to: if ($http_host ~ "\.$" ){ rewrite ^(.*) $scheme://$host$1 permanent; } This takes advantage of the fact (pointed out by Igor Sysoev) that $host has the trailing dot removed, while $http_host doesn't; so we can match the dot in $http_host and automatically use $host for the redirect.
How can I redirect "http://domain.com." to "http://domain.com" with Nginx?What's the recommended way of doing this? Regex or is there any other options?
Nginx - Redirect Domain Trailing Dot
A general answer is that you can set a variable in if and then use the variable, like this: set $variable ""; if ($http_x_amz_cf_id) { set $variable "somevalue"; } proxy_set_header someheader $variable;
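Applied to the CloudFront case from the question, that pattern becomes the sketch below; the header name being set is an assumption, and note that nginx exposes request headers as lowercased $http_* variables:
set $via_cloudfront "";
if ($http_x_amz_cf_id) {
    set $via_cloudfront "1";
}
proxy_set_header X-Via-CloudFront $via_cloudfront;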
I'm using AWS CloudFront to terminate my SSL before hitting my backend, and need to distinguish this traffic from non-CloudFront traffic to set a proxy_set_header in Nginx. I believe the best way to do this would be to check for the X-Amz-Cf-Id header (added by CloudFront), and set the proxy_set_header when it's present. However, I'm aware it's not possible to set proxy_set_header in an Nginx if statement, which leads to my question. How can I set a proxy_set_header value only when that header is present?
Nginx set proxy_set_header if header is present
Short answer, try this configuration:location = / { if ( $arg_abc ) { rewrite ^ /otherpath/ permanent; } }
I want to redirect anything that comes directly to my server with a particular query string to another location on the same domain. If a user comes to http://www.mydomain.com/?abc=js9sd70s I want to redirect them to http://www.mydomain.com/otherpath/?abc=js9sd70s The query string ?abc=js9sd70s should stay the same on the new URL. Please suggest the nginx config. I have tried most of the alternatives for the code below, e.g. keeping a '\' before the ? and = signs. location ~ /?abc=.* { rewrite ^/(.*)$ http://www.mydomain.com/otherpath/$1 permanent; } Please suggest how to change this location.
nginx location regexp for query string
The problem was that I was running it in a VM on Windows, which is what happens with Docker there. In one of the beginner tutorials it was mentioned that the port is forwarded to this VM's port, not to the Windows port. (Just read the note below the hello world! browser image.) So you have to find the IP address of your VM OS and paste it into the browser along with the port number.
1. I was using this guide to get an nginx webserver image to run, and used the commands docker run -p 8888:80 nginx docker run -p 80:80 nginx I guess two or more containers got up and running, but when I open localhost:8888 it shows the site cannot be reached. I have also used this to try and expose something in my browser. It showed the same problem too. 2. One more question: when I run more containers from the same image file, the terminal shows nothing as console output and doesn't even terminate, i.e. return to the dollar sign, so I am stuck and forced to open another terminal. Is there some trick or concept I'm missing here? Please note I have installed Docker on Windows and used the Docker Quickstart Terminal for the above.
Docker image NGINX not exposing : site cannot be reached
But when I run it, this is my output (notice the different IP of the container). Since this is a Windows machine, I assume that you're using Docker for Windows. 10.0.75.2 is the IP of the boot2docker virtual machine. If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker. The IP you just saw is the IP of that lightweight virtual machine. And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost.) Use a Linux distribution! Also you can enable Expose container ports on localhost in Docker for Windows Settings.
I'm following Digital Ocean's tutorial on how to start an nginx docker container (currently on Step 4). This is their output: $ docker run --name docker-nginx -p 80:80 -d nginx d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx But when I run it, this is my output (notice the different IP of the container): C:\>docker run --name docker-nginx -p 80:80 -d nginx d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c C:\>docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost.) Edit: I'm using Docker for Windows (recently released) which apparently runs natively using Hyper-V. My output for docker-machine ls is this: C:\>docker-machine ls NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS C:\>
My docker container isn't starting on localhost (0.0.0.0) on Docker for Windows (Native using Hyper-V)
In this case, a static resource refers to one that is not generated by code on the fly, meaning that its contents won't change from request to request. Images, JavaScript, CSS, etc., are all candidates for this. Basically, you set a large cache time for these resources, and your Nginx servers can keep a copy on disk (or in Redis or something similar) so that they are ready to return to the client without hitting your application servers. It's important to remember to use versioned file names when setting large cache times: header-image-20140608.png, for example, means you can ship a later version without worrying about the old one still being in the cache.
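As an illustration of the idea, here is a minimal sketch, assuming the usual asset extensions and a 30-day lifetime (tune both to your release cadence):

# serve static assets with a long cache lifetime so repeat
# requests never have to hit the application servers
location ~* \.(css|js|png|jpg|gif|ico|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
}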
I am primarily a front-end developer/designer; recently, however, I've been exploring end-to-end solutions. Yesterday I finished a TODO application using the MEAN stack and would like to start exploring deployment options to my VPS. That being said, I've been advised to use nginx as a reverse proxy for serving up static resources. Unfortunately, I'm getting stuck on simple questions. What are examples of static resources? What factors define a static resource? What are examples of non-static resources? Lastly, are there any weird edge cases I should be aware of? Sorry about the noobness of this question.
What's the difference between static and non-static resources?
Your files must be in UTF-8 as well and the HTTP header you send is more important than the meta tag.To deliver all your content with UTF-8 encoding (HTTP header) via nginx do the following:# /etc/nginx/nginx.conf http { charset utf-8; }But the important part is that your files actually have to be encoded in UTF-8 for anything to work. A good editor (e.g. Notepad2, Notepad++, NetBeans IDE, Adobe Dreamweaver, …) allows you to change the encoding of your file.
I have a self-hosted server running nginx and PHP5-fpm on a debian (raspbian wheezy) machine. My problem is that UTF-8 special characters (åäö) aren't working. I've set <meta charset="utf-8"> in the head of the website. All files are encoded with UTF-8 without BOM. As advised by Fleshgrinder's answer I've added charset utf-8; to nginx.conf without results. How can I fix this?
UTF-8 not working nginx
I've had similar problems with Nginx on Ubuntu 12.04, with nginx compiled from source and an init script taken from a similar source as yours. The service started fine with the init file, but stop and restart didn't. In the end the cause was a different path to the pid file in nginx.conf and in the init script. Make sure they both point to the same location.
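A minimal sketch of the fix, assuming a hypothetical path of /var/run/nginx.pid; whatever path you pick, nginx.conf and the init script have to agree on it:

# /etc/nginx/nginx.conf
# must match the PIDFILE (or equivalent) variable in /etc/init.d/nginx
pid /var/run/nginx.pid;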
My nginx was compiled from source, with only the flag --conf-path=/etc/nginx/nginx.conf. Everything works, and I was trying to use this service init.d script to make nginx a system service. Here's the script; I made only 2 minor changes: 1. DAEMON=/usr/local/nginx/sbin/nginx 2. NGINX_CONF_FILE="/etc/nginx/nginx.conf" Then sudo service nginx start works: yozloy@SNDA-172-17-10-158:/usr/local$ sudo service nginx start * Starting Nginx Server... ...done. But service nginx stop and service nginx reload give me an error: yozloy@SNDA-172-17-10-158:/usr/local$ sudo service nginx stop * Stopping Nginx Server... ...fail! and the error doesn't appear in the log/error.log file.
Added nginx as an Ubuntu service: stop and reload don't work
The ngx_lua module is for running Lua code directly in the nginx webserver. It is possible to run entire Lua applications this way, but that is not the specific target of the module; in fact, some of its directives specifically should not be used with long-running or complex routines. You will need to recompile nginx with this module, as you cannot just download an nginx module and use it as-is. To run Lua applications similar to the way you run PHP, you can configure nginx to pass ".lua" requests to a Lua handler (similar to PHP). You can set up a webserver such as the Lua webserver Xavante, or thttpd, or even Apache, and "proxy_pass" to it, similarly to how many do with Apache for PHP. You can also set Lua up to run as CGI (similar to PHP with FastCGI, although Lua does not have an equivalent of FPM) and call this as needed. You do not need ngx_lua for either of those two options. Basically, PHP, Lua and such fall under the broad category of "CGI" scripts, and any "how to" on running these can be applied to Lua. BTW, openresty is just regular nginx with some third-party modules bundled in, including ngx_lua, and the people behind openresty are the same people behind ngx_lua. You can manually add as many of the same bundled modules to nginx yourself as you wish.
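For a taste of what ngx_lua gives you once nginx is rebuilt with it (or you use openresty), here is a minimal sketch; content_by_lua_block comes from more recent versions of that module, not from stock nginx:

location /lua {
    default_type text/plain;
    # runs this inline Lua chunk for every request to /lua
    content_by_lua_block {
        ngx.say("Hello from Lua inside nginx")
    }
}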
As a learning exercise I've dedicated some time to picking up Lua by creating some basic apps. I've gotten it installed and running great on Natty/Ubuntu; however, I'm a bit lost as to how to get it to play nice with nginx. I've read a bit here: http://wiki.nginx.org/HttpLuaModule#Installation and cloned this repo: https://github.com/chaoslawful/lua-nginx-module into my /etc/nginx folder... However, I'm still rather lost and unsure how to get it working even on a basic level. Is it possible to just include something in my nginx.conf file to handle /lua requests, or do I need to recompile/reinstall nginx altogether? (I'd rather avoid this.) I've already been using PHP under nginx via FPM for quite a while, but I'm really not sure where to start getting Lua working in a similar fashion.
Running Lua under nginx (writing a website with Lua)
Create api.example.com in DNS, pointing to your API. Create a second Origin in CloudFront, pointing to api.example.com. Leave "Origin Path" blank, because it does not do what you might assume. Create a new Cache Behavior in CloudFront, with the Path Pattern of /api*. Point this to the newly-created origin. CloudFront will send all requests for /api* to api.example.com and everything else to the default Cache Behavior Origin, which would be the bucket.
I have an S3 bucket which hosts a website and is delivered with CloudFront, and right now I have attached the distribution to my apex root domain, e.g. www.xyz.com. Previously we were using Nginx to serve a static frontend from a webserver root on the same domain (www.xyz.com) and had also set up a reverse proxy, www.xyz.com/api/**, which routed traffic to an upstream backend server on the same machine. Now I would like to move the website to S3 but still run the backend API on the same machine, and to do so I will have to change my DNS records and point them to the CloudFront distribution. But then the existing, previously deployed and running services which use www.xyz.com/api for backend services will break. So I want to forward all requests on this path pattern to http:///api so that the existing applications don't break. Is there a way to achieve this, i.e. forward requests from a subpath of a CloudFront distribution delivering a static frontend from S3 to an external application server? ---UPDATE--- ---Nginx conf to redirect requests--- location /api/ { proxy_pass http://localhost:4040/api/; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_http_version 1.1; } This is within a server directive which currently exposes the root domain and frontend to the world, but now I want to migrate the frontend to S3 and thus only keep this location block /api for compatibility purposes until I update the configuration on all clients. If so, please suggest how this can be done, or what information you need from my side that could help in getting this done. Thanks,
CloudFront how to setup reverse proxy on an existing distribution serving website from S3
In nginx you still need to use http as the protocol in your URL, not ws: proxy_pass http://service_name:3600; The ws and wss protocols are required on the browser side; on the server side you add the lines below to handle WebSockets over HTTP: proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade";
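Putting it together, a minimal sketch of the corrected location block, reusing the service_name and port from the question:

location / {
    # http here even though the client speaks ws://; the Upgrade
    # headers below switch the proxied connection to WebSocket
    proxy_pass http://service_name:3600;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}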
First time with nginx. I have a nodejs WebSocket server listening at ws://service_name:3600. I'm using docker-compose: version: "2" services: # stuff service_name: image: imagename ports: - 3600:3600 links: # stuff - proxy proxy: image: image-from-nginx-with-custom-config ports: - 80:80 - 443:443 - 8443:8443 My config: // stuff server { listen 8443; server_name localhost; ssl on; ssl_certificate /etc/nginx/certs/crt.pem; ssl_certificate_key /etc/nginx/certs/key.pem; keepalive_timeout 60; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; location / { proxy_pass ws://service_name:3600; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } I get nginx: [emerg] invalid URL prefix in /etc/nginx/conf.d/default.conf at startup. So nginx doesn't recognize ws; what do I do?
nginx ws invalid URL prefix
So this did seem to be the solution (see my note above). Using my example from above, you want this to look like: stream { server { listen 11016 udp; proxy_pass juniper_close_stream_backend; proxy_responses 0; } } This tells nginx not to expect a response, which it shouldn't need from UDP. I don't know why their examples don't show this when discussing DNS, which can be entirely UDP driven.
I have a main syslog server that is receiving syslog from several sources, and I want to send those logs to a Graylog cluster. To help the cluster keep up (on some slow VMs), I need to be able to load balance the messages to Graylog, as sometimes they come in massive chunks from the endpoints (some send 5k logs in bursts every 10 seconds).I'm trying to use nginx as a load balancer for the syslog messages, but I can't seem to get it to work, and it seems to be because nginx is looking for responses from the Graylog servers. With UDP, it's not going to get a response. At least this is what I think is happening.The error I'm getting is this:2016/12/01 11:27:59 [error] 2816#2816: *210325 no live upstreams while connecting to upstream, udp client: 10.0.1.1, server: 0.0.0.0:11016, upstream: "juniper_close_stream_backend", bytes from/to client:932/0, bytes from/to upstream:0/0As an example of this rule in my nginx.conf, it looks like:stream { server { listen 11016 udp; proxy_pass juniper_close_stream_backend; } upstream juniper_close_stream_backend { server 10.0.1.2:11016; server 10.0.1.3:11016; server 10.0.1.4:11016; } }In this instance, my syslog box is 10.0.1.1, and my downstream Graylog boxes are 10.0.1.[2-4]. I see this error message for all of them.Any clue on what is happening? When I run tcpdump on the Graylog boxes, I'm seeing the traffic coming from the load balancer, which means it's working. But I think nginx is expecting a response and is giving me an error.
UDP forwarding with nginx
It's not possible to extend the expiration of an existing certificate once issued. The only way is to issue a new certificate. Most certificate authorities offer a "renewal" concept, which provides some advantages compared to a new purchase. For example, you can renew in advance of the certificate's expiration, and they will issue the new certificate starting from the expiration of the previous one, not from the day the new one is issued. The re-issue, or re-key, is a different thing. It generally means re-keying an existing certificate order with a different private key and/or CSR. It generally doesn't change the expiration of the certificate, hence it's not a renewal. Both renewals and rekeys result in a new certificate (again, it's not possible to change an existing certificate once issued), but a rekey only alters the certificate information, not the expiration. A renewal can be issued with the same original CSR and key, or with a completely new one; it's up to you. As in all cases a new certificate is issued, you will have to replace the existing one. Replacing a certificate is generally a no-downtime task: you simply upload the new one, change the server settings and reload them (or restart the server). Most webservers, including Nginx, support hot reloads, therefore you don't need to restart the server and wait for it to reboot. If planned correctly, the renewal will be a no-downtime task.
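For illustration, swapping in the renewed certificate is just a matter of updating the paths (hypothetical here) and reloading nginx:

server {
    listen 443 ssl;
    # point these at the newly issued files,
    # then run: nginx -s reload (no restart needed)
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}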
I have an SSL certificate that I am using to secure port 443 (HTTPS) on my nginx server running on Ubuntu for about 10 months now. When I bought the cert, I got it for one year, so I have about 2 more months with this certificate. My question is: when I renew this cert, will I just need to pay for renewal, or will I have to re-issue the cert with a new CSR, with potential downtime while installing? I need to plan for any downtime from now. Thanks in advance for your answers.
Does renewing SSL certificate require re-issuing the cert?
We do something like this, only we tell Apache to log the Django sessionid cookie. LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %{sessionid}C" withsession CustomLog logs/example.com-access_log withsession It's sort of a two-step process to map the sessionid to the user, but it's easy to implement. You could do something similar by setting a cookie with the explicit ID in it and then using the custom log to capture it.
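Since the question mentions nginx as well: the nginx equivalent is a custom log_format reading the cookie through the built-in $cookie_NAME variable. A sketch, assuming the session cookie is still named sessionid as in the Apache example:

# goes in the http block
log_format withsession '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       'session=$cookie_sessionid';
access_log /var/log/nginx/access.log withsession;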
Is there a way to inject an application-level username or id (in this case, the Django username or id) into the Apache or nginx log? Note that I'm not asking about the HTTP auth username.
Injecting app level username/userid into nginx/Apache log
The last component of the try_files statement should be a URI. Assuming that your index.html file is located under the /var/www/reactApp subfolder, you should use: location /reactApp { root /var/www; index index.html; try_files $uri $uri/ /reactApp/index.html; } See this document for more.
I am trying to deploy a React application in a subfolder on my Nginx server. The location of this React app is structured like: www.example.com/reactApp. I tried to set up my current nginx.conf like so: server { ..other configs.. location /reactApp { root /var/www; index reactApp/index.html; try_files $uri $uri/ /index.html; } ..other configs.. } This has not worked. What do I need to change to fix my subfolder routing?
How do I configure my Nginx server to work with a React app in a subfolder?
Sure you can, but in an indirect way: error_page 500 /500.html; location = /500.html { root /usr/var/nginx/errors; allow all; internal; } See http://wiki.nginx.org/HttpCoreModule#error_page
Is there a way I can set an absolute path for nginx error_pages? Not absolute as in http://, but absolute as in /usr/var/nginx/errors/500.html.
Absolute path for error_page in nginx?
Change /home/appuser/test_app to /home/appuser/test_app/public.
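Applied to the server block from the question below, the fix looks like this (Passenger expects root to point at the Rails app's public directory):

server {
    listen 80;
    server_name localhost;
    # Passenger serves the app whose public/ dir this is
    root /home/appuser/test_app/public;
    passenger_enabled on;
}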
I have installed the Nginx server and configured all the needed stuff, but currently I'm getting a 403 Forbidden error. The log says: 2010/12/28 17:38:59 [error] 28664#0: *27 directory index of "/home/appuser/test_app" is forbidden, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.1", host: "xxx.xxx.xxx.xxx" My config: worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.2; passenger_ruby /usr/bin/ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name localhost; root /home/appuser/test_app; passenger_enabled on; } } Any solutions?
nginx + passenger + rails - 403 forbidden error
Supply a command to the container including the --auth option: mongodb: image: mongo:latest expose: - "27017" volumes: - "/home/open/mymongo:/data/db" command: mongod --auth The latest mongo containers come with "root" auth initialisation via environment variables too, modelled on the postgres setup.
I'm using docker-compose to run my project, created with node, mongodb and nginx. I have built the project using docker build, and then I use docker up -d nginx to start it. But I haven't found the config option to run the mongodb image with '--auth', so how do I add '--auth' when Compose starts mongodb? Here is my docker-compose.yml: version: "2" services: mongodb: image: mongo:latest expose: - "27017" volumes: - "/home/open/mymongo:/data/db" nginx: build: /home/open/mynginx/ ports: - "8080:8080" - "80:80" links: - node_server:node node_server: build: /home/laoqiren/workspace/isomorphic-redux-CNode/ links: - mongodb:mongo expose: - "3000"
how to add --auth for mongodb image when using docker-compose?
I suggest you install the remi repository. I assume you use Fedora 23: sudo dnf install http://rpms.remirepo.net/fedora/remi-release-23.rpm After installing the remi repository, you have to edit the /etc/yum.repos.d/remi.repo file and enable it. Finally you can install various versions of PHP, for example: sudo dnf install php70-php php56-php You can use them as php70 and php56, followed by the options or PHP file you want to run.
How can I set up PHP5 and PHP7 on one Fedora system? As I see it, PHP in Fedora is not one directory; it's spread across the OS. On Windows systems, PHP is one folder, so I can just rename it when I need a specific version of PHP. What about Fedora? Maybe there are some useful links, but I haven't found them. Also, it will be php5+apache(httpd) and php7+nginx, but I don't think that matters for now.
Php7 and php5 on fedora at the same time
On Ubuntu or Debian it's as simple as using the reload argument: service nginx reload The official way is to send SIGHUP: kill -HUP $(ps -ef | grep nginx | grep master | awk '{print $2}') The above command will get the process ID of the nginx master process and send a SIGHUP signal to it. See the Controlling Nginx documentation. You can also use the Nginx binary: nginx -s reload
I want Nginx to pick up configuration file updates without reloading or restarting Nginx. It seems there is an API or something for this (http://nginx.com/products/on-the-fly-reconfiguration/).
How to apply nginx configuration updates without having to reload or restart nginx
The problem was the swagger-ui-express middleware, which redirects the user to host/api-docs and doesn't use the path prefix, so I solved this problem with a trick: I mount the middleware at this path: const swaggerUi = require('swagger-ui-express'); const swaggerDocument = require('./swagger.json'); app.use('/app-prefix/api-docs',swaggerUi.serve, swaggerUi.setup(swaggerDocument)); and in nginx I defined two locations: location /app-prefix/api-docs { proxy_pass http://172.18.0.89:3000/app-prefix/api-docs; } location /app-prefix/ { proxy_pass http://172.18.0.89:3000/; } So when the user sends a request to nginx, nginx routes it to the application's second path, /app-prefix/api-docs, after which the swagger middleware redirects to host/app-prefix/api-docs, the correct path. Now the application routes and swagger work fine.
I use the swagger-ui-express package (https://github.com/scottie1984/swagger-ui-express) (Node.js) and it works fine with this config: const swaggerUi = require('swagger-ui-express'); const swaggerDocument = require('./swagger.json'); app.use('/api-docs',swaggerUi.serve, swaggerUi.setup(swaggerDocument)); When I go directly to /api-docs everything is fine, but when I come through nginx, for example host/myApp/api-docs, it redirects me to host/api-docs, and obviously after the redirect I get a 404.
Swagger UI not working as expected while service behind Nginx reverse-proxy
On the angular.json > build > options configuration, add this line with the target subdirectory: "baseHref" : "/v2/", like this: "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "baseHref" : "/v2/",
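On the nginx side, a matching sketch for serving the build under the same prefix; /v2/ follows the answer above, and the filesystem path is a hypothetical deploy location:

location /v2/ {
    # hypothetical directory the built dist/ output was copied to
    alias /var/www/mydomain.com/html/myapp/;
    # fall back to index.html so the Angular router handles deep links
    try_files $uri $uri/ /v2/index.html;
}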
New to Angular. The app works fine if deployed in the nginx /var/www/mydomain.com/html folder. But I want to deploy it in the /var/www/mydomain.com/html/myapp folder. I pointed the nginx available-sites config to this folder and index.html works fine. But relative paths in the Angular app (e.g., images/mypic.png) are being fetched from the /var/www/mydomain.com/html/images folder (hence the 404 error code) instead of the /var/www/mydomain.com/html/myapp/images folder. How do I set a URL prefix /myapp globally in Angular so all relative paths have this prefix? I have seen some answers here, but they require changes in the component code. Isn't there a way to make this setting at deployment time, so the same dist can be deployed to any path?
Deploying Angular app in a different folder than root folder
The ngx_http_stub_status_module module provides access to basic status information: location /basic_status { stub_status; } This configuration creates a simple web page with basic status data, which may look as follows: Active connections: 291 server accepts handled requests 16630948 16630948 31070465 Reading: 6 Writing: 179 Waiting: 106 Source: https://nginx.org/en/docs/http/ngx_http_stub_status_module.html
My nginx.conf looks like this: user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on; } The command ulimit -n gives the number of worker connections available; I want the number currently being used by nginx.
How to get current worker connections being used by nginx?
The command line probably does not use the same php.ini file as the web server. Use phpinfo(); to find out which configuration file is loaded in both cases, and then declare your extension in the ini file used by your web server.
I met a weird problem when installing phpredis with cd phpredis && ./configure && make && make install. After that, I added extension=redis.so to php.ini. I get an OK by running php -r "if (new Redis() == true){ echo \"\r\n OK \r\n\"; }" BUT when requesting http://127.0.0.1, nginx throws an error: "Fatal error: Class 'Redis' not found in index.php" at the line $client = new Redis(); I guess this may be some problem related to the environment... Thanks for any advice!
phpredis errors Class Redis not found in Linux
You have various approaches: 1) copytruncate in the logrotate script; this will work reliably and without the help of uWSGI. 2) uWSGI log rotation: --log-maxsize will automatically rotate logs when a specific size is reached. 3) Classic log rotation + log reloading: just add --log-master and trigger log reloading with http://uwsgi-docs.readthedocs.org/en/latest/MasterFIFO.html There are other approaches too (like triggering the log reopen when touching a file), but the previous ones are the most common.
My goal is rotating the logs generated by uWSGI, but when the original log file is deleted (after compression) it is not re-created. So I thought the app needs a graceful restart of the master process after the file is deleted. I use this RESTART script: /home/tester/uwsgi-18 --reload /var/run/uwsgi/my_app_tester/my_app_tester.pid The app restarts, but the logging does not. To get logging working again I need to kill -2 the process and run the START script again, so another process number is generated and the logging works again. Obviously I do not want such a hard stop just for rotating logs... My app is built with Catalyst, the server runs Nginx, and here is the uWSGI START script: /home/tester/uwsgi-18 --master --daemonize /var/log/uwsgi/my_app_tester/log --socket /tmp/uwsgi/my_app_tester/my_app_tester.socket --processes 1 --psgi /home/tester/my_app/my_app.psgi --pidfile /var/run/uwsgi/my_app_tester/my_app_tester.pid --procname-master TESTER -L Is there another way to restart the app without losing connections and logging? Thank you in advance: Migue
uWSGI logging not working if log file is removed
How about enforcing user == owner at the view level, preventing access to the files, storing them as FileFields, and only retrieving the file if that condition is met? E.g. you could use the @login_required decorator on the view to allow access only if logged in. This could be refined using request.user to check against the owner of the file. The User Auth section of the Django documentation is likely to be helpful here. The other option, as you mention, is via S3 itself: generating URLs within Django which have a querystring allowing an authenticated user access to download a particular S3 object with a time limit. Details on that can be found in the S3 documentation. A similar question has been asked before here on SO.
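A third option worth naming is nginx's X-Accel-Redirect: the Django view authenticates the user and checks ownership, then returns a response carrying an X-Accel-Redirect header pointing into an internal-only location, and nginx streams the file itself. A sketch with hypothetical paths:

# clients cannot request /protected/ directly; only app responses
# carrying X-Accel-Redirect: /protected/<name>.pdf are served from here
location /protected/ {
    internal;
    alias /srv/private-pdfs/;
}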
I am building a system that allows users to generate documents and then download them. The documents are PDFs (not that it matters for the sake of this question), and when they are generated I store them on the local file system that the web server is running on, with UUID file names: c7d43358-7532-4812-b828-b10b26694f0f.pdf. But I know "security through obscurity" is not the right solution... I want to restrict access to the files on a per-account basis if possible. One thing I think I could do is upload them to S3 and provide a signed URL, but I want to avoid that for now if possible. I am using Nginx/Django/Gunicorn/EC2/S3. What are some other solutions?
Restricting access to static files in Django/Nginx
Here are two approaches for fixing this: 1) exec { "add-apt-repository ppa:nginx/stable && apt-get update": alias => "nginx_repository", require => Package["python-software-properties"], creates => "/etc/apt/sources.list.d/nginx-stable-natty.list", } That will tell the exec to only run if that file doesn't exist. If there's some other way to check that the exec has run successfully, you could use an onlyif => or unless => to specify a check command. 2) exec { "add-apt-repository ppa:nginx/stable && apt-get update": alias => "nginx_repository", require => Package["python-software-properties"], refreshonly => true, subscribe => Package["python-software-properties"], } That will tell the exec to only run if it's notified, and will tell the package to notify the exec that it should run. (You could instead specify notify => Exec["nginx_repository"] in the python-software-properties package stanza; the effect of a notify on one end of a relationship is the same as a subscribe on the other end.) The downside of the second approach is that if anything goes wrong, puppet will never figure it out, and if the package is installed some other way than via that puppet rule (such as pulled in as a dependency elsewhere) it will never run the exec (and the nginx package install will keep failing). In other words, the first approach, where the exec has some way of checking whether it has already run, is vastly preferable.
I'm new to Puppet and have a question about working with dependencies. I'm using Puppet to install Nginx 1.0.5 on Ubuntu 11.04. It requires adding a new apt repository, since Natty normally comes with Nginx 0.8. At the command line, the install goes like this: # apt-get install python-software-properties # add-apt-repository ppa:nginx/stable # apt-get update # apt-get install nginx So I wrote this Puppet script: class nginx::install { package { "nginx": ensure => present, require => Exec["nginx_repository"], } exec { "add-apt-repository ppa:nginx/stable && apt-get update": alias => "nginx_repository", require => Package["python-software-properties"], } package { "python-software-properties": ensure => installed, } } The script works, but the exec{} directive runs every time, instead of only when nginx is actually being installed. Ideally, I'd like the "apt" commands to be run only before actual nginx installation, not when nginx installation is simply being checked. I have a rudimentary understanding of the notify/subscribe model, but I wasn't sure how to have the nginx directive send a "notify" signal only when actually installing nginx.
How to work with Puppet dependencies when installing Nginx 1.0.5 on Ubuntu 11.04
Here is what I got working, using Upstart (Ubuntu 10.04) to start the passenger daemon. My environment uses rvm with ruby 1.9.2 and apache, and my rails app is deployed via capistrano. # Upstart: /etc/init/service_name.conf description "start passenger stand-alone" author "Me <[email protected]>" # Stanzas # # Stanzas control when and how a process is started and stopped # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn # When to start the service start on started mysql # When to stop the service stop on runlevel [016] # Automatically restart process if crashed respawn # Essentially lets upstart know the process will detach itself to the background expect fork # Run before process pre-start script end script # Start the process script cd /var/path/to/app/staging/current sh HOME=/home/deploy /usr/local/rvm/gems/ruby-1.9.2-p136@appname/gems/passenger-3.0.7/bin/passenger start --user 'deploy' -p '5000' -a '127.0.0.1' -e 'production' end script and the apache config: ServerName myapp.com PassengerEnabled off Order deny,allow Allow from all ProxyPass / http://127.0.0.1:5000/ ProxyPassReverse / http://127.0.0.1:5000/ Upstart doesn't set ENV['HOME'], which passenger relies on, so we have to pass that when executing the passenger command. Other than that it's pretty straightforward. A note for debugging: https://serverfault.com/questions/114052/logging-a-daemons-output-with-upstart (append something like >> /tmp/upstart.log 2>&1 to the second line in the script block). Hope this helps.
I have a few apps running Rails 3 on Ruby 1.9.2, deployed on an Ubuntu 10.04 LTS machine using nginx + passenger. Now I need to add a new app that runs on Ruby 1.8.7 (REE) and Rails 2. I accomplished that with RVM, Passenger Standalone and a reverse proxy. The problem is that every time I have to restart the server (to install security updates, for example), I have to start Passenger Standalone manually. Is there a way to start it automatically? I was told to use Monit or God, but I wasn't able to write a proper recipe that works with Passenger Standalone. I also had a few problems with God and RVM, so if you have a solution that doesn't use God, or if you know how to configure God/RVM properly, even better.
How can I keep a Passenger Standalone up even after a restart?
You were on the right track, but it's simpler than you were making it. To run a Flask app named run with entry point app via gunicorn under supervisor with the path you gave: /etc/supervisor/conf.d/run.conf [program:run] command = /var/www/sitename/env/bin/gunicorn run:app -b localhost:8000 directory = /var/www/sitename user = siteuser You can provide the environment argument to set things like production mode, but this is all you need: the virtualenv's copy of gunicorn (running Python 3 if it's a Python 3 venv) runs your Flask app in that same virtual environment.
I'm trying to deploy a Flask application to an EC2 instance using (1) nginx, (2) gunicorn, (3) git, and (4) supervisor. I've set up nginx, git and gunicorn, but I'm having trouble writing the supervisor script. I'm unable to get supervisor to launch gunicorn within the context of the virtualenv. When I run gunicorn run:app outside of the virtualenv it returns ImportError: No module named flask. When I run the same command within the virtualenv it works just fine. When I run the same command outside of the virtualenv but specify the gunicorn in the virtualenv (i.e. /var/www/sitename/env/bin/gunicorn run:app) it works just fine again. That's a problem I couldn't figure out, but I figured that if I could just have supervisor run gunicorn inside the virtualenv it wouldn't matter, but I'm not able to do that either. I've tried adding two programs in the supervisor script, one to launch the virtual environment and the other for gunicorn; combining the two commands using quotes, which one similar SO answer suggested; using && to combine activating the virtualenv and launching gunicorn; declaring an environment=PATH= variable; and a number of other options. I just can't get it to work. I have no idea what I'm doing wrong or what else to try; does anyone know what I can do at this point? I'm running Python 3. I read that supervisor is limited to v2, but someone else mentioned in an answer that it's just a task manager and it shouldn't matter.
Activating Gunicorn through virtualenv with Supervisor for Flask Application
ASP.NET Core (ASP.NET 5) doesn't require Kestrel! You're right, Kestrel is just a simple HTTP server with a small set of features. You can run ASP.NET Core without Kestrel on Linux or Mac, but you must have either an HTTP server or a FastCGI server. Nginx is generally used as a reverse proxy for static content, and you can also enable gzip compression on your dynamic content; Kestrel doesn't have these features. You can also write your own HTTP server with the specific HTTP features you need (HTTP/2, for example).
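For context, the usual pairing looks like this minimal sketch: nginx terminates port 80 and forwards to Kestrel on a hypothetical localhost:5000:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        # pass WebSocket upgrades and original host/client info through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}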
I am trying to understand the entire web/framework/application stack when installing ASP.NET 5 on Linux. All the instructions I have read, including this one, haven't really answered my question: why can't the Nginx server work without Kestrel, like here: http://www.mono-project.com/docs/web/fastcgi/nginx/ ? Or am I way off? I'm trying to understand the reason for this structure: .NET Core (or mono) --> Kestrel --> Nginx. Isn't Kestrel just another web server like Nginx, but with a lot fewer features?
Why does ASP.NET 5 on Linux require kestrel?