It is easy to achieve in Nginx. There are two steps involved:

1. Port 443 will be used only when yourdomain.com/shop is accessed. All other requests are redirected to port 80 (HTTP).
2. Port 80 will check for yourdomain.com/shop. If found, the request is redirected to port 443 (HTTPS).

Here is a quick overview of how it could be done:

server {
    listen 443;
    server_name yourdomain.com;
    # directives for SSL certificates
    # root, index, error_log, access_log directives

    location /shop {
        # directives to handle what's inside /shop, for example
        # try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        # directives to handle PHP files
    }

    # leave everything else to port 80
    location / {
        rewrite ^ http://$host$request_uri permanent;
    }
}

server {
    listen 80;
    server_name yourdomain.com;
    # root, index, error_log, access_log directives

    # redirect yourdomain.com/shop to port 443
    # Please put this before the location / block, as
    # nginx stops after seeing the first match
    location /shop {
        rewrite ^ https://$host$request_uri permanent;
    }

    location / {
        # directives to handle what's inside /, for example
        # try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        # directives to handle PHP files
    }
}
I have an nginx server running with SSL enabled. Currently I have HTTPS enabled for all directories. How do I enable SSL only for the www.example.com/shop/* directory and disable it for the others? Here is my conf file:

# Redirect everything to the main site.
server {
    server_name *.example.com;
    listen 80;
    ssl on;
    ssl_certificate /opt/nginx/conf/server.crt;
    ssl_certificate_key /opt/nginx/conf/server.key;
    keepalive_timeout 70;
    access_log /home/example/nginx_logs/access.log;
    error_log /home/example/nginx_logs/error.log;
    root /home/example/public_html/example.com;

    location ~ \.php$ {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        root /home/example/public_html/example.com/;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /opt/nginx/conf/fastcgi_params;
        #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME /home/example/public_html/example.com$fastcgi_script_name;
        index index.php index.html index.htm;
    }

    if ($http_host != "example.com") {
        rewrite ^ http://example.com$request_uri permanent;
    }

    include global/restrictions.conf;
    # Additional rules go here.
    # Only include one of the files below.
    include global/wordpress.conf;
    # include global/wordpress-ms-subdir.conf;
    # include global/wordpress-ms-subdomain.conf;
}

Thanks, D
ssl for subdirectory within nginx server configuration
Nginx has a mechanism for this. When you return the special status code 444 (it's non-standard), Nginx silently drops the connection. This works only when you return the code from the Nginx config, i.e. something like:

location = /drop {
    return 444;
}

You cannot return this status code from your application. The workaround is to return an X-Accel-Redirect: /drop header from the app to tell Nginx to use the /drop location for the request.
There are many security reasons why one would want to drop an HTTP connection with no response (e.g. OWASP's SSL best practices). When these can be detected at the server level, it's no big deal. However, what if you can only detect this condition at the application level? Does Rails, or more generally Rack, have any standard way of telling the server to drop the connection without a response? If not, are there some standard headers to pass that will accomplish that in common web servers (I'm thinking Nginx or Apache)? Even if there is no standard header, is there a reasonable way to configure that behavior? Is this a fool's errand?
Is there any way to make a Rails / Rack application tell the web server to drop the connection
Port 443 opened in AWS EC2. After two days of never-ending debugging, I understood the problem: I had not opened port 443 in the EC2 security group. Things to keep in mind for whoever is struggling with a similar issue: ensure that your OS firewall allows connections on 443, and also ensure that your instance's security group allows connections on 443.
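If you manage security groups from the command line, a rule like the following opens port 443; a minimal sketch, assuming a security group ID of sg-0123456789abcdef0 (substitute your own):

# allow inbound HTTPS from anywhere on this security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0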
Below is my nginx configuration. I modified the 'default' file (which is placed at 'sites-available'). I am able to access the website over 'http'. But when I try 'https', there is a connection time-out and the page cannot be reached. Nginx is strangely not making any entries in the logs (neither access.log nor error.log). I am seeking help since I am completely new to this.

# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
    listen 80;
    listen [::]:80;
    root /var/www/main.x.com/html;
    index index.html
    server_name main.x.com;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

server {
    listen 443 ssl;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/letsencrypt/live/main.x.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/main.x.com/privkey.pem;
    server_name main.x.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        root /var/www/main.x.com/html;
        index index.html;
    }
}
website down after installing ssl certificate through certbot in nginx
This happens when you are trying to overwrite the default nginx config file, which does not accept some top-level properties like http or user. If you need those extra configurations, try copying your file to /etc/nginx/nginx.conf instead. So instead of this:

COPY default.conf /etc/nginx/conf.d/default.conf

Do this:

COPY nginx.conf /etc/nginx/nginx.conf

Note: by copying to /etc/nginx/conf.d/, you are overwriting the default config.
I'm trying to set up a server with nginx and docker-compose, but I get these errors every time I try 'docker-compose up':

webserver | 2019/06/10 13:04:16 [emerg] 1#1: "http" directive is not allowed here in /etc/nginx/conf.d/default.conf:1
webserver | nginx: [emerg] "http" directive is not allowed here in /etc/nginx/conf.d/default.conf:1

I've tried wrapping everything with html {}, removing server {}, another port instead of 80...

nginx Dockerfile:

FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf

default.conf:

server {
    listen 80;
    server_name localhost;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://app:8080/;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
nginx: [emerg] "http" directive is not allowed here in /etc/nginx/conf.d/default.conf:1
I also faced a similar issue on nginx 1.10.2. Instead of ./nginx -s reload, I used ./nginx to start nginx, and that solved the issue. The -s reload signal only works when a running master process exists to receive it; in your output there are only orphaned worker processes, so the kill against the stale PID in nginx.pid fails.
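A sketch of the full recovery sequence, assuming the /usr/local/nginx prefix implied by the question (adjust paths to your install):

# the old workers have no master; stop them first
sudo pkill -f "nginx: worker process"
# remove the stale pid file that -s reload was reading
sudo rm /usr/local/nginx/logs/nginx.pid
# start a fresh master process; -s reload will work again afterwards
./nginx
./nginx -s reload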
When I execute this command:

./nginx -s reload

it throws an error:

nginx: [alert] kill(57200, 1) failed (3: No such process)

When I open the nginx.pid file:

vim /usr/local/nginx/logs/nginx.pid

the process id is 57200. But when I check the nginx processes, there is no master process; the output is:

[root@localhost logs]# ps -aux|grep nginx
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root 12191 0.0 0.0 28172 8956 ? S Aug29 3:54 nginx: worker process
root 12192 0.0 0.0 28172 8960 ? S Aug29 3:53 nginx: worker process
root 12193 0.0 0.0 28436 9272 ? S Aug29 3:46 nginx: worker process
root 12194 0.0 0.0 28172 8948 ? S Aug29 3:55 nginx: worker process
root 12195 0.0 0.0 28436 9156 ? S Aug29 3:56 nginx: worker process
root 12196 0.0 0.0 28172 8944 ? S Aug29 3:49 nginx: worker process
root 12197 0.0 0.0 28172 8988 ? S Aug29 3:58 nginx: worker process
root 12198 0.0 0.0 27908 8740 ? S Aug29 3:42 nginx: worker process
root 12199 0.0 0.0 27908 8744 ? S Aug29 3:39 nginx: worker process
root 53760 0.0 0.0 103252 832 pts/1 S+ 22:14 0:00 grep nginx
root 80835 0.0 0.0 27908 8740 ? S Aug31 2:30 nginx: worker process

What's wrong? How do I solve this problem? The nginx version is 1.10.2.
nginx: [alert] kill(57200, 1) failed (3: No such process)
Try adding an = to your location; that will make it an exact match:

server {
    server_name _;
    listen 80 default_server;

    location = /credentials.js {
        deny all;
        return 404;
    }

    location / {
        add_header Content-Type text/plain;
        return 200 "hello world\n\n";
    }
}

From the nginx location docs: if an exact match is found, the search terminates. For example, if a "/" request happens frequently, defining "location = /" will speed up the processing of these requests, as search terminates right after the first comparison. Such a location cannot obviously contain nested locations.
Currently my config file (/etc/nginx/sites-available/default) says:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location /credentials.js {
        deny all;
        return 404;
    }
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

but I can still access credentials.js via example.com/credentials.js from the web. Any suggestions?
Nginx Restrict Access to File
I don't think you can use an underscore in the name of a custom header, as support for such headers is disabled by default. More information can be found here. You could test this by removing the underscore from the header name.
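If you need to keep the underscore, nginx can be told to accept such headers instead; a minimal sketch using the underscores_in_headers directive (valid in the http or server block):

server {
    listen 80;
    # allow headers such as api_token to reach the backend
    underscores_in_headers on;
    ...
}

nginx passes request headers to FastCGI as HTTP_* parameters, so with the directive on, an api_token header shows up in PHP as $_SERVER['HTTP_API_TOKEN'].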
I am sending a header along with a GET request to a PHP script, but either Postman does not send the header or the PHP script does not receive it. I am using Nginx for the server (Apache2 gave almost the same result, with api_token absent). I am not able to find what is wrong. The server-side PHP code is a loop over the request headers, along these lines:

<?php
foreach (getallheaders() as $key => $val) {
    echo $key . ': ' . $val . '<br>';
}
?>

After checking the Postman console, it appears that the header is actually sent but, for some reason, is not received inside the PHP script.
The header sent by Postman is not received in the PHP script
I think you need these two lines:

proxy_set_header Host "XXXXXX.execute-api.REGION.amazonaws.com";
proxy_ssl_server_name on;

Here is the explanation of why. The Host header is required, as described here: it must be the Amazon API Gateway endpoint, and this value must be one of the region-dependent endpoints listed under Regions and Endpoints. In the US East (N. Virginia) region, it is apigateway.us-east-1.amazonaws.com. The proxy_ssl_server_name on directive enables passing of the server name through the TLS Server Name Indication extension (SNI) when establishing a connection with the proxied HTTPS server.
Problem: I've set up a Lambda function behind API Gateway which works beautifully. I have a hosted site, and I want only a certain location to hit the API. Example:

https://www.example.com/ (serves up HTML from the hosted server)
https://www.example.com/foobar (returns a JSON payload that is generated by Lambda and returned by AWS)

Here is my server block:

location = /foobar {
    proxy_pass https://someawsuri;
    proxy_redirect default;
}

location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
}

I've tried reviewing the docs and read multiple SO posts, but I haven't been able to do what I want. Everything that I've tried has produced the error 502 Bad Gateway.

Question: How do I configure nginx to make a request to API Gateway? Thanks.
Setting up proxy_pass on nginx to make API calls to API Gateway
You could try it like this:

server {
    server_name app.somename.com;
    location / {
        proxy_pass http://192.168.0.16:80;
        proxy_set_header Host app.somename.com;
    }
}
I have a dynamic IP which I manage using ddclient, and I use No-IP to maintain the hostnames that point to my IP. I have www.somename.com, sub.somename.com and app.somename.com. Obviously, these all point to my IP. The first two are a couple of WordPress pages on a server (server1) running NGINX, with separate configs in sites-available for each site. The latter is a separate application server (server2) running GitLab. My router does not allow me to switch on subdomain, so all port 80 traffic is routed to server1. I'm hoping there is a config I can apply in nginx that will send all traffic for app.somename.com to a local IP address on my network (192.168.0.nnn), but keep the address in the browser as app.somename.com. Right now, I have:

/etc/nginx/sites-available$ ls
somename.com  domain  sub.somename.com  app.somename.com

The relevant ones are linked in sites-enabled. For the app server, I have:

server {
    server_name app.somename.com;
    location / {
        proxy_pass http://192.168.0.16:80;
    }
}

The problem is that in the browser address bar, this results in:

http://192.168.1.16/some/pages

where I want:

http://app.somename.com/some/pages

How do I resolve this?
nginx redirect subdomain to separate server ip
What version of nginx do you have on Ubuntu? If you have 14.04 with the default repo, it is 1.4.6, and expires accepts variables only since 1.7.9. You can add the official nginx repo and install 1.10 from it. Semi-automatic installation:

apt-get install -y lsb-release
LIST="/etc/apt/sources.list.d/nginx.list";
OS=`lsb_release -si | tr '[:upper:]' '[:lower:]'`;
RELEASE=`lsb_release -sc`;
if [ ! -f $LIST ]; then
    echo -e "deb http://nginx.org/packages/$OS/ $RELEASE nginx\ndeb-src http://nginx.org/packages/$OS/ $RELEASE nginx" > $LIST;
else
    echo "File $LIST exists! Check it.";
fi
wget -q -O- http://nginx.org/keys/nginx_signing.key | apt-key add -
apt-get update
apt-get remove -y nginx-full nginx-common
apt-get install nginx
My nginx.conf works locally (OS X) but throws an error in production (Ubuntu). The full file: https://github.com/thomasdane/partywave/blob/master/nginx.conf. The relevant part is:

# Expires map
map $sent_http_content_type $expires {
    default off;
    text/html 1w;
    text/css 1w;
    application/javascript 1w;
    ~image/ 1w;
}

server {
    listen 80;
    expires $expires;
    server_name http://www.partywave.co/;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

On my Mac, that works great and outputs:

curl -I localhost/images/hero.jpg
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Tue, 25 Oct 2016 09:27:51 GMT
Content-Type: image/jpeg
Content-Length: 270624
Connection: keep-alive
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: max-age=604800
Last-Modified: Fri, 09 Sep 2016 09:57:09 GMT
ETag: W/"42120-1570e612108"
Expires: Tue, 01 Nov 2016 09:27:51 GMT

However, when I run the exact same nginx.conf on production (Ubuntu 14.04), I get the following error:

nginx: [emerg] "expires" directive invalid value in /etc/nginx/nginx.conf:46

If I delete the $expires code, it works fine on production (without the expires, of course). I've been googling for a while and cannot figure out why. Would love any help.
Expires in nginx.conf throws error on ubuntu but not on OSX
The structure of MessageID should be <unique-id@domain>, including the angle brackets. If your MessageID doesn't have this exact structure, PHPMailer will ignore your MessageID and generate its own. You can change your code to:

$mail->MessageID = "<" . md5('HELLO'.(idate("U")-1000000000).uniqid()).'-'.$type.'-'.$id.'@domain.com>';

and it should work.
I send emails using PHPMailer; everything works well, but I would like to set a unique MessageID for each email. PHPMailer version: "v5.2.16" (loaded with Composer from https://github.com/PHPMailer/PHPMailer.git). I found the documentation here: http://phpmailer.github.io/PHPMailer/classes/PHPMailer.html#property_MessageID, so here is what I tried:

$mail = new PHPMailer;
...
$mail->MessageID = md5('HELLO'.(idate("U")-1000000000).uniqid()).'-'.$type.'-'.$id.'@domain.com';

Result: the message always gets the default MessageID generated by PHPMailer, and not mine... :( Then I tried something simpler:

$mail->MessageID = "[email protected]";

Result: KO (the same). The documentation indicates that we can set a MessageID and that it should be a string. I don't understand at all why it doesn't work... Any idea?
PHPMailer : replace the default messageID
nginx config:

server {
    listen 80;
    server_name example.com;
    root /home/myuser/psk/dist;
    index index.html;
    location / {
        try_files $uri /index.html;
    }
}

Make sure to add a base URL to your index.html. In the case of the Polymer Starter Kit and nginx, the base element will help direct access to URLs with parameters, such as http://example.com/users/Chuck, which you can find under "Users" in PSK 1.2.x. Otherwise, without a base URL, nginx will go into a loop. nginx error.log:

rewrite or internal redirection cycle while internally redirecting to "/index.html"

The browser then goes into an infinite cycle, loading the index page instead of static files:

Uncaught SyntaxError: Unexpected token <
The Polymer Starter Kit (PSK) contains instructions on using pretty URLs when hosting on Firebase (HERE). I am attempting to do something similar on an Nginx server, but cannot figure out the location block for page reloads. Using the sample data that comes with PSK, how would you configure "/users/sam", for example?
Polymer Starter Kit - Pretty URLs on Nginx Server
In principle, it suffices to use the syntax:

cat >file <<EOL
$my_var
EOL

That is, use the vars as they are, without escaping $. So instead of:

baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
                                         ^           ^

say:

baseurl=http://nginx.org/packages/centos/$releasever/$basearch/

From man bash:

Here Documents
This type of redirection instructs the shell to read input from the current source until a line containing only delimiter (with no trailing blanks) is seen. All of the lines read up to that point are then used as the standard input for a command.

The format of here-documents is:

<<[-]word
    here-document
delimiter

No parameter expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word. If any characters in word are quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `.

See an example:

$ cat a.sh
r="hello"
cat - <<EOL
hello
$r
EOL

echo "double quotes"
cat - <<"EOL"
hello
$r
EOL

echo "single quotes"
cat - <<'EOL'
hello
$r
EOL

And let's run it:

$ bash a.sh
hello
hello           <-- it expands when unquoted
double quotes
hello
$r              <-- it does not expand with "EOL"
single quotes
hello
$r              <-- it does not expand with 'EOL'
I am working on a bash script that needs to create a file at this location: /etc/yum.repos.d/nginx.repo, with the following contents:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

So, I have tried to do it like this:

cat >/etc/yum.repos.d/nginx.repo <<EOL
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOL

When I check the contents of the file, I see that the dollar signs weren't escaped, so the variables were evaluated to null/empty strings and the contents do not look correct. When I try to install nginx, I get this error:

http://nginx.org/packages/centos///repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found

Any ideas?
Escaping dollar signs when writing to a file in a CentOS Linux bash script
If your certificate is for example.com only, and not for www.example.com, then any access to www.example.com will trigger a certificate warning, no matter whether you just want to redirect it or not. Redirection is done at the HTTP level, and before the browser talks HTTP it first does the SSL handshake (which triggers the problem), because HTTPS is just HTTP inside SSL. And before you ask: tricks with DNS (like CNAME) will not help either, because the browser compares the certificate against the name in the URL, not against possible DNS alias names. There is simply no way around getting a proper certificate.
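One way to obtain a certificate covering both names (a sketch, assuming you can use Let's Encrypt's certbot with the nginx plugin; both names become subject alternative names in a single certificate):

sudo certbot --nginx -d example.com -d www.example.com

With a certificate covering both names, the www server block can keep its 301 redirect to https://example.com without tripping the browser warning.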
I have a valid certificate for example.com. If users go to my site at http://example.com, they get redirected to https://example.com and all is good. If they go to https://example.com, all is good. If they even go to http://www.example.com, they get redirected to https://example.com and all is good. However, if they go to https://www.example.com, Chrome triggers its SSL warning before I can redirect and tells the user not to trust this site. I don't have this problem in Safari or Firefox. Here's my nginx configuration. What am I doing wrong?

# Configuration for redirecting non-ssl to ssl;
server {
    listen *:80;
    listen [::]:80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

# Configuration for redirecting www to non-www;
server {
    server_name www.example.com;
    ssl_certificate ssl/ssl_cert.crt;
    ssl_certificate_key ssl/ssl_key.key;
    listen *:80;
    listen *:443 ssl spdy;
    listen [::]:80 ipv6only=on;
    listen [::]:443 ssl spdy ipv6only=on;
    return 301 https://example.com$request_uri;
}

server {
    listen *:443 ssl spdy;
    listen [::]:443 ssl spdy;
    ssl_certificate ssl/ssl_cert.crt;
    ssl_certificate_key ssl/ssl_key.key;
    server_name example.com;
}

EDIT: I see that this is a problematic configuration because the second block will look at the certs. What's the proper way to set this up with a cert that reads from "example.com" rather than "www.example.com"?
How do I redirect www traffic without triggering browsers' SSL check?
First, create /etc/nginx/conf.d/redirect.conf:

server {
    listen 80;
    server_name old-gitlab.mydomain.com;
    rewrite ^/(.*)$ http://new-gitlab.mydomain.com/$1 permanent;
}

(If the /etc/nginx/conf.d/ path does not exist, go ahead and create it.) Now edit the configuration file at /etc/gitlab/gitlab.rb to add the following line:

nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/redirect.conf;"

Finally, run gitlab-ctl reconfigure to rewrite the nginx configuration and restart nginx.
I migrated my GitLab instance to a new domain. I'd like to redirect all HTTP requests from the old URL to the new one. Both domains currently point to the same server (using A DNS records). I use the GitLab Omnibus package, with the bundled nginx install. How do I do this?
Gitlab Omnibus: how to redirect all requests to another domain
When you build the image, you probably want to specify the image name with the -t option:

docker build -t my/nginx .

To run a container, use the run command:

docker run --rm -ti my/nginx

You probably should add the following command to your Dockerfile:

CMD ["nginx"]

Or, with php5-fpm:

CMD service php5-fpm start && nginx

UPDATE: You should run nginx with daemon off. Add the following to your Dockerfile after installing nginx:

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

UPDATE 2: The -ti option in run allows you to check the log messages, if any. Usually you should run a container in the background using -d instead of -ti. You can attach to a running container using the attach command. You may also check the docker reference to see how to stop and remove a container, and other commands.
I have a docker image with a Dockerfile that builds successfully with the docker build . command. The Dockerfile content is:

FROM ubuntu
RUN apt-get update && apt-get install -y nginx php5 php5-fpm
ADD . /code

How can I run my docker container to see that Nginx works?

UPDATE: When I try the following Dockerfile:

FROM ubuntu
RUN apt-get update && apt-get install -y nginx php5 php5-fpm
RUN sudo echo "daemon off;" >> /etc/nginx/nginx.conf
CMD service php5-fpm start && nginx

it builds successfully with docker build -t my/nginx ., but when I enter the docker run --rm -ti my/nginx command, my terminal does not respond.
How can I run my docker container with Nginx installed?
I just figured out the problem by myself. HHVM with its default settings, without an IP specified to listen on, was listening only on IPv6 addresses. Because of that, I could connect with localhost but not with 127.0.0.1. Specifying the IP with hhvm.server.ip = 127.0.0.1 solved the problem.
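For reference, a minimal sketch of the relevant HHVM settings (the file is typically /etc/hhvm/server.ini on Debian packages; the path may differ on your install):

; bind the FastCGI server to IPv4 loopback so nginx can reach 127.0.0.1:9000
hhvm.server.ip = 127.0.0.1
hhvm.server.port = 9000
hhvm.server.type = fastcgi

Restart the hhvm service after the change.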
I'm trying to install HHVM on Debian 7 from the prebuilt package. I have another server with the same configuration where it works, but on this server HHVM FastCGI refuses the connection. Here is /var/log/nginx/error.log:

2014/11/25 23:24:10 [error] 422#0: *39 connect() failed (111: Connection refused) while connecting to upstream, client: 213.128.95.22, server: , request: "GET /api/v2/checkaccess HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "77.75.35.140"

I'm sure that the HHVM daemon is working and listening on port 9000:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hhvm 12192 www-data 16u IPv6 792971 0t0 TCP *:9000 (LISTEN)

But I also cannot connect with telnet:

root@server:/home/itusozlukcom# telnet 127.0.0.1 9000
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

I'm sure there's no problem with the code, because the same code works on another server, and on this server the code works via the HHVM CLI. The HHVM error.log is empty. What could the problem be?
HHVM + NGinx Connection refused
os.execute() will block for the duration of the command you are running, and since you generate some output, using io.popen won't help you much either, as you'd need to read from the pipe (otherwise the process will still block at some point). A better way may be to run the process in the background:

os.execute("ls >a.txt 2>&1 &")

The order of the redirects > and 2> matters, and the & at the end runs the command in the background, unblocking os.execute.
In my nginx+Lua app I am executing an OS command line, something like:

os.execute("ls 2>&1 | tee a.txt")

I want to know: does it block the main app? I want to use the command in an "execute-and-forget" fashion. If it blocks, how do I fix that and execute a simple command line in the background?
Does os.execute block the thread in Lua?
I'm not sure if this is the right way to do it, but this seems to have done the trick:

scaleToSeconds(derivative(stats.*.*.*.nginx.handles),1)

Does anyone see any problems with this?
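One caveat worth noting: derivative() goes negative whenever the counter resets (e.g. on an nginx restart). Graphite also ships nonNegativeDerivative(), which ignores those resets, so a sketch of the same graph that survives restarts would be:

scaleToSeconds(nonNegativeDerivative(stats.*.*.*.nginx.handles),1)

perSecond() is a newer Graphite function that combines both steps, if your version has it.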
Is there any way to get Graphite to graph req/s? When you retrieve the nginx request count from nginx_status, you are sending an absolute value to Graphite, so I'm wondering whether there is any way to get the rate per second. My understanding is that derivative(series) would give you requests per minute, but I could really use requests per second. Cheers.
Graphite nginx requests per second
Solved it. It was all due to a wrong setting in my Rails production.rb, which made the default behavior fail, so the hack of putting the assets into /public manually isn't necessary anyway. I had:

config.action_dispatch.x_sendfile_header = "X-Sendfile"

which for nginx should instead be:

config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'

Thanks for all the help :-)
I'm on Rails 3.2, and my production setup uses nginx and unicorn. I have a problem with some assets that a Ruby gem called sidekiq uses: those assets are not served properly when I request them. My nginx config looks like this:

upstream unicorn {
    server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    # server_name example.com;
    root /home/deployer/apps/myapp/current/public;

    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /system/maintenance.html last;
        break;
    }

    location ~ ^/assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;
    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;

    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /system/maintenance.html last;
        break;
    }
}

From my browser I can see that it's requesting e.g. http://www.myapp.com/admin/sidekiq/stylesheets/application.css. If I ssh into the server and run:

ls /home/deployer/apps/myapp/current/public/admin/sidekiq/stylesheets
application.css  bootstrap.css

you can see that the file is actually there. So why isn't it being served?
Why can't nginx find my assets?
According to the documentation for Passenger, you create a new vhost for each app you want to deploy, point the site root at your app's public directory, and add the passenger_enabled directive, exactly the same as deploying with Apache:

http {
    ...
    server {
        listen 80;
        server_name www.mycook.com;
        root /webapps/mycook/public;
        passenger_enabled on;
    }
    ...
}

More here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_a_ror_app

Regarding question 2: restarting depends on what you are trying to do. I'm going to assume you're using a distro that uses init.d. These are three cases where you do a different kind of 'restart':

1. You have an issue with some config in Nginx, or it's behaving strangely, so you restart the Nginx service like this:

/etc/init.d/nginx restart

2. You have a Rails or Sinatra app deployed on Nginx with the Passenger module and you want it to pick up changes you just pushed to the server. Passenger watches the tmp/restart.txt file in your application, so simply running touch tmp/restart.txt while cd'd into the app's folder will tell Passenger to reload the application.

3. The last case for restarting/reloading is a reload of Nginx, which you use when you add or change your vhosts:

/etc/init.d/nginx reload

This allows you to reload your vhosts and other config without dropping connections.

Have a look at the Passenger documentation; it is very thorough: nginx-passenger docs
I searched Google for deploying multiple Rails websites using Phusion Passenger 3.0.17 with nginx, but I didn't get relevant results. Anyhow, I completed the Passenger nginx setup by running the passenger-install-nginx-module command.

Question 1) I am looking for a proper beginner tutorial on running multiple Rails websites using Phusion Passenger 3.0.17 with nginx.

Question 2) I am looking for commands to start, stop, and restart the whole Passenger nginx server (i.e. all websites) and also individual Rails websites.

Note: I am not looking for a Passenger Standalone solution. I am using REE 1.8.7 and Rails 2.3.14.
running multiple rails websites using phusion passenger 3.0.17 with nginx
Just set up multiple domain aliases for that server entry:

server {
    listen 80;
    server_name www.a_domain.com www.b_domain.com www.c_domain.com;
    root /webapps/mycook/public;
    passenger_enabled on;
}

That'll serve requests for each of those domains, and they will all hit the same app pool.
I have one Rails application that needs to be deployed with the Passenger module for nginx. This application needs to be served for a hundred domain names, and I don't have enough memory to launch a hundred Rails instances. I'm not sure of the proper way to launch Rails in only a few instances; it's the same application under different domain names.

server {
    listen 80;
    server_name www.a_domain.com;
    root /webapps/mycook/public;
    passenger_enabled on;
}

server {
    listen 80;
    server_name www.b_domain.com;
    root /webapps/mycook/public;
    passenger_enabled on;
}

server {
    listen 80;
    server_name www.c_domain.com;
    root /webapps/mycook/public;
    passenger_enabled on;
}

As you can see, the above code would launch three Rails instances. It would be nice to launch only one instance to serve all three domains. Does anyone have suggestions?
One rails application for multiple domain names
Yes, nginx can serve HTTP/3 on multiple virtual hosts, but the reuseport option may be specified for only one virtual host per listen IP:PORT pair. So, you should either use different IPs for your virtual hosts, or keep reuseport on a single server block and use a plain listen 443 quic; on the others.
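A minimal sketch of that layout (assuming, as in the question, that one certificate covers both names; the Alt-Svc header is what advertises HTTP/3 to browsers):

server {
    listen 443 quic reuseport;   # reuseport appears exactly once per IP:port
    listen 443 ssl http2;
    server_name a.example.com;
    add_header Alt-Svc 'h3=":443"; ma=86400';
    ...
}

server {
    listen 443 quic;             # no reuseport here
    listen 443 ssl http2;
    server_name b.example.com;
    add_header Alt-Svc 'h3=":443"; ma=86400';
    ...
}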
NGINX 1.25 introduced support for HTTP/3 (over QUIC). To enable it, one can add listen 443 quic reuseport; to the server block, alongside the likely existing listen 443 ssl http2;. However, if I add the quic listen to more than one server block (which all have a different server_name set), then NGINX rejects the config with the following error:

[emerg] 2611#2611: duplicate listen options for 0.0.0.0:443 in /etc/nginx/sites-enabled/site.conf

It is possible to listen on different ports for different domains, but that doesn't seem user-friendly: Firefox will display the port number in the URL, even if it loaded the page over HTTP/2 first and then got the HTTP/3 port from an Alt-Svc header. It's also tedious to manually allocate ports and to configure the firewall for this. All my server blocks are using the same certificate. All domains that I have a server block for are subject alternative names in the single certificate. RFC 9114 says that HTTP/3 clients must support Server Name Indication, but even without it, because all my domains use the same certificate, it should in theory be possible to establish a connection and then decide what content to serve based on the Host header. This is not what happens, though: when I send a request over QUIC, NGINX serves from the server block that the listen 443 quic is in; it seems to ignore the server name. Is it possible with NGINX 1.25 to serve multiple domains over HTTP/3, all on port 443?
Enabling QUIC / http/3 on multiple domains with NGINX 1.25
The depth is actually the maximum number of intermediate certificate issuers, i.e. the number of CA certificates which are at most allowed to be followed while verifying the client certificate. A depth of 0 means that only self-signed client certificates are accepted; the default depth of 1 means the client certificate can be self-signed or has to be signed by a CA which is directly known to the server (i.e. the CA's certificate is under SSLCACertificatePath), etc. A depth of 2 means that certificates signed by (a single level of) an intermediate CA are also accepted, i.e. by an intermediate CA whose CA certificate is signed by a CA directly known to the server. Our Perl test for this directive has some very useful comments and will help you understand this explanation in an NGINX context a little better: https://github.com/nginx/nginx-tests/blob/7a9e95fdd30729540ee9650be7f991c330367d5b/ssl_verify_depth.t#L145
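For illustration, a sketch of the directives that typically appear together (the CA bundle path is an assumption):

server {
    listen 443 ssl;
    ssl_verify_client on;
    # bundle with the root CA (and any intermediates you trust)
    ssl_client_certificate /etc/nginx/ca-bundle.pem;
    # allow client certs issued via one intermediate CA:
    #   client cert -> intermediate CA -> root CA
    ssl_verify_depth 2;
    ...
}

Lowering the value makes nginx reject certificates whose chains are longer than the configured depth, which is why increasing it makes nginx more likely to accept a cert.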
I am wondering what ssl_verify_depth means in nginx.conf. The docs are not very detailed; there is just this sentence: "Sets the verification depth in the client certificates chain." What does increasing or decreasing it do? I've noticed that increasing it makes nginx more likely to accept the cert, but why is that?
What does ssl_verify_depth mean in nginx.conf?
The error message is right: you can't use a wildcard origin together with credentials. From https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin: for requests without credentials, the literal value "*" can be specified as a wildcard; the value tells browsers to allow requesting code from any origin to access the resource. Attempting to use the wildcard with credentials will result in an error. Instead, just pass back the actual origin, the one that arrived in the Origin HTTP header; then it will always match:

add_header Access-Control-Allow-Origin $http_origin always;
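Since the request's credentials mode is "include", the response also has to opt in to credentials explicitly. A sketch of the header pair, plus a preflight branch, that usually goes with this (adapt the methods and headers lists to your API):

location / {
    add_header Access-Control-Allow-Origin $http_origin always;
    add_header Access-Control-Allow-Credentials true always;
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin $http_origin always;
        add_header Access-Control-Allow-Credentials true always;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS' always;
        add_header Access-Control-Allow-Headers 'Authorization, Content-Type' always;
        return 204;
    }
    ...
}

Note that echoing $http_origin back unconditionally effectively trusts every origin with credentials; a map that whitelists known origins is safer in production.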
I'm working on building a web application that communicates with a Laravel API through an Nginx server. I tried following the directions on the Nginx website for wide-open CORS, but the browser doesn't accept the wildcard response when sending credentials:

Access to fetch at 'https://api.***.com/' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'.

The API server requires a Bearer access token to authenticate, and each endpoint is at its own path on the server. What is the correct way to configure Nginx in this scenario?
Nginx using CORS with credentials
You should load ngx_http_headers_more_filter_module.so by adding the snippet below to your nginx.conf file:

load_module modules/ngx_http_headers_more_filter_module.so;

After that, the more_clear_headers directive will work. Cheers!
I am not able to remove the 'Server' header from the response headers of Nginx 1.18 on Ubuntu 20.04. I have done the following steps:

1. sudo apt-get update
2. Installed nginx-extras using the command 'sudo apt-get install nginx-extras'
3. Added the snippet 'more_clear_headers Server;' to the http section of the nginx.conf file.

After restarting the Nginx service, it shows the error 'unknown directive more_clear_headers'. But I was able to remove the 'Server' header from the response headers of Nginx 1.4.6 on Ubuntu 14.04 by doing the same steps. Can anyone please help me remove the 'Server' header from the response headers of Nginx 1.18 on Ubuntu 20.04? Thanks in advance.
Removing 'server' header from response header of Nginx version 1.18 in Ubuntu 20.04
If the source file path is /app/nginx.conf, then the Dockerfile should contain:

COPY /app/nginx.conf /etc/nginx/conf.d/default.conf

If you're running the docker build command from the /app directory on your host, then your above Dockerfile should work.

Update: If you're expecting the /app/nginx.conf file from the node build stage to be present in the nginx:alpine image, then you need to use multi-stage docker builds. Change your Dockerfile to:

FROM node as build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . ./
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/nginx.conf /etc/nginx/conf.d/default.conf

This copies the /app/nginx.conf file from the node image into the nginx:alpine image.
I am unable to copy a config file from my project dir to /etc/nginx/conf.d/default.conf. Source file location: /app/nginx.conf

COPY nginx.conf /etc/nginx/conf.d/default.conf

Destination: /etc/nginx/conf.d/default.conf

Steps in the Dockerfile (I tried a multi-stage build):

FROM node:8.9.0 as buid
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . ./
RUN npm run build
FROM nginx:alpine
RUN mkdir -p /var/www/html/client
COPY --from=buid /app/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=buid /app/build/ /var/www/html/client

I tried commenting out the first COPY command, and it was able to copy the build, which was good. When it is able to find the build in the app dir, why is it not able to find the nginx.conf file, which is also in the same dir? I did an ls -la and saw the nginx.conf file. TIA
Unable to COPY config file to nginx /etc/nginx/conf.d/default.conf
Even if you fix the sub_filter configuration snippet by including sub_filter_once on; as suggested in the other answer, it will not work, because the base tag only works with relative paths (see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base). The app-root solution is technically correct, but boils down to the same thing, and thus also fails for href tags with a leading slash. I can see two possibilities:

1) You can fix all your href tags, removing the leading slashes. If you do that, both solutions should work.

2) You can try to change your location rule to:

- path: /(odin)?/?(.*)

and the rewrite annotation to:

nginx.ingress.kubernetes.io/rewrite-target: /$2

(not forgetting to include the nginx.ingress.kubernetes.io/use-regex: "true" annotation as well), and it will work with a leading slash too. It looks fragile, though, as the rule will now match (almost) anything, and you lose the ability to fan out with NGINX using different path prefixes. A modification of 2) that makes it a more viable alternative would be to match the /v0 prefix, if it's consistent among your href tags, instead of basically anything.
How do I redirect all the hrefs within my response to hit my new path? For example, my ingress file is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-odin
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /odin/?(.*)
        backend:
          serviceName: flask-app-tutorial
          servicePort: 8080

When I visit the page at https://mysite/odin, it works and returns the response, an HTML page with "Home" and "Login" links. However, the links have a leading slash, like the "Home" link pointing at /v0/index. If I click on it, it won't work, since there is no page at http://mysite/v0/index. When I click on the link, I want it to go to http://mysite/odin/v0/index. Is this possible, either by modifying the links in the response to include odin, or, when I do click, by having it resolve relative to the source URL (i.e. http://mysite/odin)?

Nginx version: 1.15.10; ingress-nginx: 0.24.0.

So far, I have tried the following:

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header Accept-Encoding ""; #disable compression
  sub_filter '<head>' '<head> <base href="/odin/">';
nginx.ingress.kubernetes.io/add-base-url: ":true"
nginx.ingress.kubernetes.io/app-root: /odin
nginx.ingress.kubernetes.io/use-regex: "true"

I have also tried this, i.e. changing spec.rules.host.paths.path from /odin/?(.*) to /(odin/.*). There may be a typo in that advice; I think it should be http instead of host.
Make links in response relative to new path
One possibility is to use a Google Cloud load balancer: https://cloud.google.com/load-balancing/docs/

1) Create a backend service that listens on port 8080.
2) Create a frontend that listens on port 80.
3) Forward the frontend traffic to this backend service.
4) Bonus: you can create an SSL certificate auto-managed by GCP: https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs
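If you'd rather keep everything on the instance itself instead of provisioning a load balancer, a common sketch is an iptables redirect inside Ubuntu (plain Linux configuration, nothing Google-specific):

# redirect incoming port 80 to the Node.js app on 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

Remember that the GCP firewall must still allow tcp:80 to the instance (the default-allow-http rule, or a firewall rule targeting the instance's network tag).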
I'm trying to configure port forwarding (port 80 to port 8080) for a Node.js application hosted on Google Cloud Compute Engine (Ubuntu and Nginx). My ultimate goal is to have a URL like "api.domain.com" showing exactly the same thing as "api.domain.com:8080" (:8080 is what works right now). But because it's a virtual server on Google's platform, I'm not sure what kind of configuration I can do. I tried these solutions without success (probably because it's a Google Cloud environment):

Forwarding port 80 to 8080 using NGINX
Best practices when running Node.js with port 80 (Ubuntu / Linode)

So, two questions:

1. Where do I need to configure the port forwarding? Directly in my Ubuntu instance, with Nginx or Linux config files? With a gcloud command? In a secret place in the UI of console.cloud.google.com?

2. What settings or configuration do I need to save?
How to configure Port Forwarding with Google Cloud Compute Engine for a Node.JS application
tl;dr: The upstream directive must be embedded inside an http block.

nginx configuration files usually have events and http blocks at the top-most level, and then server, upstream, and other directives nested inside http. Something like this:

events {
    worker_connections 768;
}

http {
    upstream foo {
        server localhost:8000;
    }

    server {
        listen 80;
        ...
    }
}

Sometimes, instead of nesting the server block explicitly, the configuration is spread across multiple files and the include directive is used to "merge" them all together:

http {
    include /etc/nginx/sites-enabled/*;
}

Your config doesn't show us an enclosing http block, so you are most likely running nginx -t against a partial config. You should either a) add those enclosing blocks to your config, or b) rename this file and issue an include for it within your main nginx.conf to pull everything together.
I'm trying to start up my node service on my nginx web server, but I keep getting this error when I run nginx -t:

nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:3
nginx: configuration file /etc/nginx/nginx.conf test failed

My current nginx.conf is like this:

upstream backend {
    server 127.0.0.1:5555;
}

map $sent_http_content_type $charset {
    ~^text/ utf-8;
}

server {
    listen 80;
    listen [::]:80;
    server_name mywebsite.com;
    server_tokens off;
    client_max_body_size 100M; # Change this to the max file size you want to allow
    charset $charset;
    charset_types *;

    # Uncomment if you are running behind CloudFlare.
    # This requires NGINX compiled from source with:
    #   --with-http_realip_module
    #include /path/to/real-ip-from-cf;

    location / {
        add_header Access-Control-Allow-Origin *;
        root /path/to/your/uploads/folder;
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

I tried to look up some solutions, but nothing seems to work for my situation. Edit: Yes, I did edit the paths and placeholders properly.
Nginx upstream failure configuration file
I had almost completely given up on this! However, at the last minute I came up with the answer. The servers are on Amazon AWS behind a load balancer, and the load balancer had its idle-timeout attribute set to 60 seconds, which is exactly where the 504 was appearing. Raising this setting fixed the problem!!
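If you prefer the CLI to the console, the idle timeout can be raised like this; a sketch for a classic ELB named my-load-balancer (substitute your own name and a timeout that exceeds your longest job):

aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"

Keep fastcgi_read_timeout at or above the same value so nginx does not time out first.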
I am using php-fpm with nginx. I have scripts which take an uploaded Excel sheet and process it. This is a long-running job, but after 60 seconds of execution time I get a 504 Gateway Timeout error. The PHP script keeps running to completion, so nothing is stopping the script itself; I just need to stop this error. I have been playing with the fastcgi_read_timeout parameter, but it doesn't seem to fix the problem. I know the parameter is being taken into consideration, because if I change it to 0 and restart nginx, the 504 Gateway Timeout shows up straight away.

location ~ \.php$ {
    try_files $uri =404;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm/www.sock;
    fastcgi_read_timeout 300;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PHP_VALUE "upload_max_filesize = 190M \n post_max_size=190M \n max_execution_time = 300";
}

Any help would be appreciated, as I have hit a roadblock trying to resolve this issue.
Nginx / PHP-FPM 504 Gateway Timeout
Yes, it is certainly possible. There are absolutely no downsides° (°unless you care about tracking, user login, or having any sort of preferences, although alternatives exist for those as well). On the other hand, there are plenty of upsides: you ensure that if one user shares a URL with another, the URL will work as expected, because it doesn't depend on any cookies.

Note that with the help of nginx you can actually remove cookies even from backend applications that strictly require them. E.g., I did it for my OpenGrok installation at http://BXR.SU/, where I use nginx to strip the cookies, both ways, and effectively use the URL path on the client-facing side as the preference identifier in place of saving such information in cookies, subsequently converting such a $uri into $args (in place of cookies) when passing the requests back to OpenGrok. (If OpenGrok had not supported $args as a fallback, it would also have been possible to still use cookies within the backend, yet clear them up before serving the content back to the client.)

See http://serverfault.com/questions/462799/leverage-proxy-caching-with-nginx-by-removing-set-cookie-header/467774#467774 for some more discussion of my implementation. For example, the following may be used to ensure your backend can neither set nor get any cookies:

proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
# important! Remember the special inheritance rules for proxy_set_header:
# http://nginx.org/ru/docs/http/ngx_http_proxy_module.html#proxy_set_header
proxy_set_header Cookie "";

Note that even with the above code, cookies could still be set and read by the front end with the help of JavaScript.
I see, especially here in Germany, more and more web sites asking for permission to set cookies. My current project doesn't require cookies at the application level, so I am wondering whether I shouldn't drop cookies entirely. My questions: Is it possible to set up a static web site with nginx entirely without the use of cookies? And if so, is there a downside to cookieless sites?
Is it possible to set up nginx without cookies?
You can do this with Nginx (or HAProxy) running in EC2 and acting as a reverse proxy in front of the buckets, yes, but if you are not already familiar with how to configure it, it may be simpler to just use CloudFront... a second time. The solution here is to create a separate distribution for each web site subdomain.

Assuming the bucket is named example.com and it is in the us-west-2 region, verify that you have already enabled web site hosting on the bucket, and then find the web site endpoint for the bucket, which in this case would be example.com.s3-website-us-west-2.amazonaws.com.

For subdomain1.example.com, the content would live under, let's say, subdomain1/ in the example.com bucket. So you create a new CloudFront distribution for this subdomain, configuring the distribution with subdomain1.example.com as the alternate domain name. For the origin server, use the bucket website endpoint hostname mentioned above (no path, just the hostname). Then configure the Origin Path to be /subdomain1. Note there is a leading slash but not a trailing slash.

Now, when this distribution sees a request for (e.g.) /images/cat.jpg, it will send it to S3... but before it does that, it will prepend the Origin Path onto the request and ask the bucket for /subdomain1/images/cat.jpg.

Point the DNS for subdomain1 at the new CloudFront distribution, and you should have what you need: a subdomain whose content lives under a path in a bucket. Repeat for each subdomain. This step is easily automated with one of the SDKs or the CLI.

You may eventually need to request an increase in the number of CloudFront distributions your account is allowed to create, since the default is 200, but this is a simple process as long as you are able to describe your use case. Of course, it occurs to me that your reason for wanting to do this might be related to the 100-bucket limit per account, but that limit is no longer fixed either; AWS will now allow you to request that it be increased as well.
I have an S3 + CloudFront solution on Amazon. I would like to host different websites in different folders inside the bucket and access them in one of these two ways:

- a subdomain -> mywebsite1.mydomain.com, or
- a path -> www.mydomain.com/mywebsite1

I read that a proxy based on nginx could solve my problem. Is that true? Is it possible to get nginx on S3?
How can my s3 bucket host multiple websites in different folders using nginx?
First, you need to stop having Nginx listen on port 22. Nginx doesn't handle SSH forwarding; your firewall does. If you're using iptables, then these rules will forward all requests through your Nginx host to your Gitlab host:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j DNAT --to-destination [GITLAB-IP]:22
sudo iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 22 -j SNAT --to-source [NGINX-IP]

You may need to change eth0 in those commands to fit your server setup. Then you need to enable packet forwarding by editing the /etc/sysctl.conf file and uncommenting this line:

net.ipv4.ip_forward=1

Then reload the configuration you just changed with this command:

sudo sysctl -p

Finally, those iptables rules are not persistent by default and will be erased when you reboot the server. The easiest way to make them persistent is to use the iptables-persistent package. You install that package like this:

sudo apt-get install iptables-persistent

And after it's installed you can save/restore the iptables rules anytime with these commands:

sudo invoke-rc.d iptables-persistent save
sudo invoke-rc.d iptables-persistent reload

If you're on Ubuntu 16.04 or later, those commands are:

sudo netfilter-persistent save
sudo netfilter-persistent reload

You'll want to run the save command after you get the rules working and you've tested them. Then, when your server reboots, the rules you saved will be loaded automatically.
My Nginx server is acting as a proxy for a GitLab server. The problem is, when I try git clone git@gitlab.example.com:username/project.git, I'm unable to clone the project (it is not tunneling from the Nginx server to the GitLab server). When I update my local system's /etc/hosts file with the IP address of the GitLab server, it clones fine without a password (I've added my SSH public key to my profile on GitLab). So I came to the conclusion that I have to update my Nginx configuration with rules that can tunnel the SSH communication from any client system to the GitLab server through the Nginx server. I tried the code in this link, with the following changes:

upstream gitlab {
    server 192.168.61.102:22;
}

server {
    listen 22;
    server_name gitlab.example.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://gitlab;
    }
}

but it is not working. It would be great if someone could help me tweak the rules to make it work. Note: in the above code, 192.168.61.102 is the IP address of my GitLab server; my Nginx server is at 192.168.61.101.
Git clone through Nginx proxy for Gitlab server is not working
So it seems the problem is related to the heroku.conf generated from $root/vendor/heroku/heroku-buildpack-php/conf/nginx/heroku.conf.php. heroku local runs nginx with:

nginx: master process nginx -g daemon off; include $root/vendor/heroku/heroku-buildpack-php/conf/nginx/heroku.conf;

so both /usr/local/etc/nginx/nginx.conf and heroku.conf are loaded, hence the duplicate directive. I've modified /usr/local/etc/nginx/nginx.conf to contain only worker_processes and an events section (since without the events section nginx won't start) and left the rest to heroku.conf.
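For reference, the stripped-down /usr/local/etc/nginx/nginx.conf described above would look like this (a minimal sketch; everything else comes from heroku.conf):

worker_processes 1;

events {
    worker_connections 1024;
}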
I'm running into the following issue when running heroku local:

[emerg] 595#0: "http" directive is duplicate in /usr/local/etc/nginx/nginx.conf:17

I've got "heroku/heroku-buildpack-php": "*" in my composer.json, and a fresh install of nginx (using brew install nginx). Could someone explain to me what could be happening?
Heroku Local: "http" directive is duplicate
Use the command like this:

php app/console server:run 127.0.0.1:8080

to run the server on port 8080, or change the port to your own preference.
I'm getting this error when I try to run my Symfony2 project. I think the error comes up because on port 8000 I have an Ajenti server running with nginx.

Server running on http://127.0.0.1:8000
Quit the server with CONTROL-C.
RUN '/usr/bin/php5' '-S' '127.0.0.1:8000' '/srv/myproject/vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Resources/config/router_dev.php'
RES 1 Command did not run successfully

What should I do? I checked file permissions:

sudo chmod 777 -R /srv/myproject
Symfony server:run error
So far, I've managed to get a workaround by adding a rewrite rule to Nginx, under plain HTTP:

rewrite ^/(.*)$ https://www.example.com/$1? permanent;

which redirects all plain HTTP requests to the HTTPS server.

Update: Apparently, since I want the app not to care about how the front web server is serving it, it's up to that same web server to "clean up the mess" of redirections the app by itself cannot handle unless specifically configured (not desired). So I will stick to this answer (workaround, rather...).

UPDATE: Following papirtiger's answer, I saw that I would end up missing the flashes, which would have to be added as a parameter to the overriding redirect_to. But I've found a way to make my life a lot easier, simply by overriding a different function that is called from within redirect_to:

def _compute_redirect_to_location(options) #:nodoc:
  case options
  # The scheme name consist of a letter followed by any combination of
  # letters, digits, and the plus ("+"), period ("."), or hyphen ("-")
  # characters; and is terminated by a colon (":").
  # See http://tools.ietf.org/html/rfc3986#section-3.1
  # The protocol relative scheme starts with a double slash "//".
  when /\A([a-z][a-z\d\-+\.]*:|\/\/).*/i
    options
  ## WHEN STRING: THIS IS REMOVED TO AVOID ADDING PROTOCOL AND HOST ##
  # when String
  #   request.protocol + request.host_with_port + options
  when :back
    request.headers["Referer"] or raise RedirectBackError
  when Proc
    _compute_redirect_to_location options.call
  else
    url_for(options)
  end.delete("\0\r\n")
end

Thus, without having to change anything else in my code, I have a working relative redirect.
To start with, this sounds more like a bug than anything else. My Rails application is served by Unicorn; then, using Nginx as a reverse proxy, I serve the application to the outside world over SSL. So far so good, no problem. I'm using relative paths (RESTful path helpers), so there should be no problem producing this (for https://www.example.com):

new_entry_path => https://www.example.com/entries/new

This works fine in most cases. The problem, however, appears when in a controller I try to redirect to a "show" action (using resources), let's say after a successful update (suppose an Entry with id 100):

redirect_to @entry, flash: {success: "Entry has been updated"}

or

redirect_to entry_path(@entry), flash: {success: "Entry has been updated"}

They both produce a redirect to:

http://www.example.com/entries/100   # missing 's' in https...

instead of:

/entries/100   # implying https://www.example.com/entries/100

As far as I've noticed, this only happens with the show action, and only in controller redirects. I'm bypassing this by doing something horrible and disgusting:

redirect_to entry_url(@entry).sub(/^http\:/,"https:"), flash: {success: "Entry has been updated"}

Has anyone ever confronted something similar? Any ideas will be gratefully accepted...
Rails application behind proxy, with SSL, renders paths as "http://"
response = HttpResponse(content_type=mimetype, status=206)
response['Content-Disposition'] = "attachment; filename=%s" % (fileModel.FileName)
response['Accept-Ranges'] = 'bytes'
response['X-Accel-Redirect'] = settings.MEDIA_URL + '/' + fileModel.FileData.MD5
response['X-Accel-Buffering'] = 'no'
return response

This worked out for me. Now authentication with django + streaming with nginx is accomplished.
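For the redirect to work, nginx also needs an internal location that maps the X-Accel-Redirect path onto the files on disk. A minimal sketch, assuming MEDIA_URL is /media/ and the files live under a hypothetical /srv/media/ (adjust both to your setup):

location /media/ {
    internal;            # only reachable via X-Accel-Redirect, never directly
    alias /srv/media/;   # hypothetical directory holding the protected files
}

Because nginx serves the file itself from here, byte-range requests get the usual 206 handling while Django only performs the authentication.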
I am building a music player application with Django + nginx, for which I need a backend which supports byte range requests. Django is authenticating the media file correctly, but the django dev server does not support range requests (206 partial response). Nginx directly serves byte range requests after using this configuration; I verified that the response header has content range. However, I am unable to forward the request from django to nginx to serve the content. I tried using X-Accel-Redirect in a django view, but still the response header doesn't have content range the way it would have if the file had been served directly by nginx.

Django dev server - Authentication done but no byte range support (response 200)
Nginx - No authentication, byte range request support (response 206)
Django view + X-Accel-Redirect + nginx - Authentication done but no byte range support (response 200)

So I am trying to find a way to authenticate using Django and provide support for byte range requests with nginx or another static file server.
Stream music with byte range requests with Django + nginx
I had the same issue with the redirect (using the same nginx conf code as shown here). Then, I put my redirect config as the last server{} block (at the end of my domain.com config file), and the ELB was able to find the instances again. I have not looked more into it, but it seems that vhost processing is done in order, so when ELB hits the IP address of my servers, it lands on the first defined server{} block, which in my case was the redirect. Nginx throws a 301 return code, and ELB freaks out since it's not 200. Just a guess though. So, my config (real vhost on top, redirect is last):

less /etc/nginx/sites-enabled/domain.com

server {
    listen 80;
    server_name domain.com;
    .....
}

server {
    listen 80;
    server_name www.domain.com;
    return 301 http://domain.com$request_uri;
}
I have an app running on example.com and now I want to redirect all the traffic to www.example.com, since we are collaborating with Akamai's CDN for our website. My domain is parked in Route53; I added a CNAME pointing to the Elastic Load Balancer's CNAME for *.example.com, and I am running the nginx web server with the following configuration. If I use this configuration, ELB throws the instance out; domain.com works fine and www.domain.com works fine, but the redirect is not happening.

server {
    listen 80;
    server_name example.com;
    # rewrite ^ http://www.example.com$request_uri? permanent;
    root /home/ubuntu/apps/st/public;
    passenger_enabled on;
}

but I am not able to achieve the redirect. I also tried to add a PTR whose value is the CNAME of the load balancer, and tried to change the nginx configuration; as you can see, I used rewrite ^ http://www.example.com$request_uri? permanent; but even that is not working. As soon as I add that line, the app instance goes down and ELB throws it out of the stack. What is the solution? I have been trying to achieve this for 3-4 days; it's become a nightmare now, kindly help me out. I have also put posts and threads in the aws forums; nothing is helping as such.
non www to www using AWS Elastic Load balancer and Nginx
Found it! As of PHP 5.2.4, the default is now to cause a 500 error, because the alternative is an empty page. Other discussions suggest that this behavior cannot be changed for the "PHP Fatal" error type, which doesn't flow through the normal error handler routines and cannot be caught or stopped.
My server is set up with Nginx + PHP + FastCGI. Whenever PHP throws a Fatal error, it gets logged inside nginx/error.log, but the server reports HTTP Error 500 back to the browser instead of displaying the PHP Fatal error, as is desired and typical in other setups. I've been searching for how to resolve this and keep coming up short. Anyone have anything helpful about this? Much appreciated!
When PHP Fatal error happens, Nginx reports HTTP Error 500 to browser
I have the same setup with Varnish on port 80 and nginx on port 8080, and OmniAuth (no Devise) was doing exactly the same thing. I tried setting X-Forwarded-Port etc. in Varnish and fastcgi_param SERVER_PORT 80; in nginx, both without success. The other piece in my setup is Passenger (which you didn't mention), but if you are indeed using Passenger then you can use:

passenger_set_cgi_param SERVER_PORT 80;

(The docs say you can set this in an http block, but that didn't work for me and I had to add it to the server block.)

http://modrails.com/documentation/Users%20guide%20Nginx.html#passenger_set_cgi_param
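To make the placement concrete, here is a minimal sketch of the server block, assuming Passenger is compiled into nginx and a hypothetical app root of /var/www/app/public:

server {
    listen 8080;                      # nginx sits behind Varnish on port 80
    server_name example.com;
    root /var/www/app/public;         # hypothetical application path
    passenger_enabled on;
    # Tell the Rails app it is effectively served on port 80:
    passenger_set_cgi_param SERVER_PORT 80;
}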
I have a Rails app that is running on port 8080 that I need to trick into thinking it's running on port 80. I am running Varnish on port 80 and forwarding requests to nginx on port 8080, but when the user tries to log in with OmniAuth and the Devise gem generates a url to redirect back to the server, it thinks it's on port 8080, which the user will then see. Is there any way to trick the Rails app to hard code the port as 80 (I would think it's a bad practice), or have nginx forward the request as if it's running on port 80? Since I am not running an nginx proxy to the Rails app, I can't think of a way to trick the port. Has anyone run into this issue before? If so, what sort of configuration is needed to fix it? Thanks in advance!

EDIT: Both nginx and Varnish are running on the same server.
Tricking a Rails App to think it's on a different port
The problem here is probably that you have something like access_log on; in one of your config files. Just change on to the path/to/your/logfile :)
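For example, a sketch of the fix (the log path is just a common default; use whatever location you prefer):

# broken: nginx tries to open a file literally named "on",
# which produces the "/usr/local/nginx/on" error from the question
# access_log on;

# fixed: access_log takes a path, not on/off
access_log /var/log/nginx/access.log;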
We are running into the following error when we try to start Nginx (on Ubuntu): "Starting nginx /usr/local/nginx/on No such file or directory". It started showing up all of a sudden. Did anyone run into this scenario? Any pointers? Thanks in advance
nginx fails to start [closed]
server {
    ...
    if ($cookie_PHPSESSID = "XXXXXXXXXXXX") {
        return 403;
    }
}
I want to block an exact cookie value, like PHPSESSID, in Nginx. Is this possible? My site is under DDoS, but I can't block by IP due to shared addresses. The attackers use the same cookie value, so I am trying to block by cookie value. Thanks
Can I block request by Cookie value in Nginx?
Could you please try the steps below and let me know if it's working or not?

Apply these changes in your settings.py file:

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')

Remove this line from your settings.py:

STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'), ]

Execute this command in production:

python manage.py collectstatic

Update the nginx file like below:

# prevent css, js files sent as text/plain objects
include /etc/nginx/mime.types;

server {
    listen 80;
    server_name MY_SERVER_IP;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        autoindex on;
        autoindex_exact_size off;
        root /home/MYUSERNAME/myproject;
    }

    location /media/ {
        autoindex on;
        autoindex_exact_size off;
        root /home/MYUSERNAME/myproject;
    }
}

Explanations: STATIC_ROOT is the folder where static files will be stored after running python manage.py collectstatic. STATICFILES_DIRS is the list of folders where django will search for additional static files, aside from the static folder of each installed app. In this case our concern was the admin-related CSS files; that is why we use STATIC_ROOT instead of STATICFILES_DIRS.
The user interface is working well, and all CSS styling and static files are served correctly, but the admin interface is missing CSS styling. I looked at similar posts, but in those posts people had the issue with both the user and the admin interface. My issue is only with the admin interface. Please see my static file settings below from settings.py:

STATIC_URL = '/static/'
# Location of static files
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'), ]
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

And this is my nginx configuration:

server {
    listen 80;
    server_name MY_SERVER_IP;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/MYUSERNAME/myproject;
    }

    location /media/ {
        root /home/MYUSERNAME/myproject;
    }
}

I already executed python manage.py collectstatic on the server and got this message:

0 static files copied to '/home/MYUSERNAME/myproject/staticfiles', 255 unmodified.

I restarted nginx after that and also tried emptying my browser cache, but the issue persisted. More info as requested by @Omar Siddiqui: using Django 3.2. My mysite/urls.py contains:

from django.contrib import admin
from django.urls import path, include
# Imports to configure media files for summernote editor
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('qa.urls')),
    path('summernote/', include('django_summernote.urls')),
    path('chatbot/', include('chatbot.urls')),
]

# Enable media files for summernote editor
if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
Django admin interface missing css styling in production
Use the map directive:

map $cache $control {
    1 "public, no-transform";
}
map $cache $expires {
    1       1d;
    default off; # or some other default value
}
map $uri $cache {
    ~*\.(js|css|png|jpe?g|gif|ico)$ 1;
}
server {
    ...
    expires $expires;
    add_header Cache-Control $control;
    ...
}

(You can also put the expires and add_header directives into the location context, or even leave them in the http context.) nginx won't add the header at all (or modify an existing header) if the value calculated via the map expression for the $control variable is an empty string. This is not the only possible solution; you can also rely on the Content-Type response header from your upstream (see this answer for an example). You should be aware of this documentation excerpt: "There could be several add_header directives. These directives are inherited from the previous configuration level if and only if there are no add_header directives defined on the current level."
I'd like to add a cache control header with nginx for some extensions such as .jpg, but so far I couldn't get the solutions I found on the net to work. I will tell you what I have tried. I tried variations of the following in different places in the .conf file of my site, and when I did, the site became blank and I found a lot of 404 errors in the console. The site is developed in React.

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1d;
    add_header Cache-Control "public, no-transform";
}

My conf file looks like the following. The thing is I have to do reverse proxying, as the sites are actually hosted in Docker containers.

server {
    server_name mysite.net;
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://web:3005/;
    }

    location /api/ {
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass http://api:5005/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        fastcgi_read_timeout 1200;
        proxy_read_timeout 1200;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.net-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.net-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = mysite.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mysite.net;
    return 404; # managed by Certbot
}
how to add cache control header with proxy pass in nginx for some file extensions
"The goal here is to access the MySQL databases with clients such as MySQL Workbench from outside the local network."

All modern MySQL GUI clients support SSH tunneling. This is the most secure approach to connect and requires zero configuration on the server side: if you can connect via SSH, then you can connect to MySQL on that host. In MySQL Workbench, while creating a connection, select "Standard TCP/IP over SSH" as the connection method, then fill out the SSH connection details and the MySQL connection details. The key point is setting the MySQL host to 127.0.0.1, as you typically want to connect to the MySQL instance running on the machine you SSH into. That's all there is to it.
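As a side note on why the posted config fails: proxy_pass inside a regular server block speaks HTTP, while MySQL uses its own binary TCP protocol. If nginx really had to sit in front of MySQL, the stream module would be the tool for raw TCP; a hedged sketch, assuming your nginx build includes stream support and using an illustrative external port:

stream {
    server {
        listen 3307;                  # hypothetical externally reachable port
        proxy_pass 127.0.0.1:3306;    # forward raw TCP to the local MySQL
    }
}

Even then, SSH tunneling remains the safer option, since a stream proxy exposes MySQL's own authentication directly to the internet.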
Opening the default port 3306 to the outside world is something I would like to avoid if possible. We have Nginx running for reverse proxy purposes for other applications. The goal here is to access the MySQL databases with clients such as MySQL Workbench from outside the local network, in a secure way. The MySQL server runs on a Debian (Linux) Virtual Machine. I configured a server block as described below. Connecting to mysql.domain.com, port 80, with a non-root user in MySQL Workbench results in a failure.

Server block:

server {
    server_name mysql.domain.com;
    location / {
        proxy_pass http://localhost:3306/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

Error message:

Failed to Connect to MySQL at mysql.domain.com:80 with user non-root. Lost connection to MySQL at 'waiting for initial communication packet', system error: 10060
How do I allow secure remote connections to a local MySQL database using Nginx?
Following best practice, your API will live under BASE/api/. That will allow you to host the backend and the frontend on the same server:

server {
    server_name domain.name;

    location /api/ { # Backend
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        ...
    }

    location / { # Frontend
        root /app-path/;
        index index.html;
        try_files $uri $uri/ /index.html;
        ...
    }
}
I've created a restful api with nodejs and I'm planning to use sapper/svelte for the front-end. In the end, these will be separate apps and I want to run them on the same server with the same domain. Is this approach reasonable? If it is, what should my nginx configuration file look like? If not, what should my approach be? This is my conf for the api:

server {
    server_name domain.name;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    .
    .
    .
}
I want to deploy separate back-end and front-end apps on the same server with nginx
The problem was a bad SSL certificate file. It was necessary to use the docker container's certificate; the proxy option is no longer required. After setting up the ddev container, you need to copy the docker certificate to some location:

docker cp ddev-router:/etc/nginx/certs ~/tmp

After that, just update the path to the correct certificate files. My gulpfile task now looks like this:

browserSync.init({
    https: {
        key: "/Users/username/tmp/master.key",
        cert: "/Users/username/tmp/master.crt"
    },
    open: false
});

Thanks @rfay for the solution!
I'm running a DDEV nginx server on a Bedrock wordpress site and trying to load the snippet for Browsersync.

gulpfile.js browserSync task:

browserSync.init({
    proxy: {
        target: "https://web.ddev.site"
    },
    https: {
        key: "/Users/user/Library/Application Support/mkcert/rootCA-key.pem",
        cert: "/Users/user/Library/Application Support/mkcert/rootCA.pem"
    },
    open: false
});

The browser doesn't load the snippet and prints the following error:

(index):505 GET https://web.ddev.site:3000/browser-sync/browser-sync-client.js?v=2.26.7 net::ERR_SSL_KEY_USAGE_INCOMPATIBLE

How can I get these two things to work together? Before DDEV I was using MAMP, but DDEV has much better performance and I want to switch to this app. Thanks for help.
Can't connect Browsersync with DDEV nginx server because of SSL error
Yes, it's technically possible to install 2 nginx instances on the same server, but I would do it another way.

1 - You could just create multiple EC2 instances. The downside of this approach is that it may get harder to maintain, depending on how many instances you want.

2 - You could use Docker or any of its alternatives to create containers and solve this problem. You can create as many containers as you need and keep the nginx instances totally separate. Although docker is simple to learn and you can start using it in no time, the downside of this approach is that you need to put in a little effort to learn it, and your main EC2 instance needs to have enough resources to share between the containers.

I hope it helps!
I have an EC2 instance with AWS and I have installed nginx and created multiple server blocks to serve multiple applications. However, if nginx goes down, all the applications go down as well. Is there any way to set up a separate nginx instance for each application, so that if one nginx instance goes down, it won't affect the other instances?
Nginx multiple instances
"Additionally, as part of our commercial subscription, starting from version 1.9.13 the signature on error pages and the "Server" response header field value can be set explicitly using the string with variables. An empty string disables the emission of the "Server" field."

Source: http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens

It requires a commercial subscription. Otherwise, install the ngx_headers_more module, add the following to your nginx conf, and restart nginx. This will remove the "Server" header:

more_clear_headers "Server";
more_clear_headers "server";

Installation: https://github.com/openresty/headers-more-nginx-module#installation
Nginx version: 1.15.8. According to the nginx docs (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens): "starting from version 1.9.13 the signature on error pages and the "Server" response header field value can be set explicitly using the string with variables. An empty string disables the emission of the "Server" field."

But when I put in server_tokens ''; it complains:

nginx: [emerg] invalid value ""

Also tried server_tokens ""; and server_tokens; and none of them work. Note that I want to remove the "Server" header completely, not just the version, which can be done straightforwardly with server_tokens off;. Does anyone have it working this way? Comments & suggestions are welcome. Thanks,
nginx: Remove "Server" response header - not honour what said in doco
If you want pure environment variables in your nginx config, you will need to implement some code in Lua: https://blog.doismellburning.co.uk/environment-variables-in-nginx-config/

If you don't have a high load on this NGinx, I recommend implementing the solution above. In my specific case, to reduce CPU load, I prefer to use separate files with variables, and a script in rc.local (or a dockerfile) that changes these files when the machine launches.

conf.d/exemple.conf

include backends/exemple.host;

location ~ ^/exemple {
    proxy_pass $exemple;
}

backends/exemple.host

set $exemple {BACKEND};

rc.local

sed -i "s@set \$exemple.*@set \$exemple $HOSTNAME\;@" /etc/nginx/backends/exemple.host

For this last solution to work, I needed to change the NGinx start order on the O.S.
I am trying to add a proxy_pass in the nginx.conf like:

location /example/ {
    proxy_pass http://example.com;
}

But instead of hard coding http://example.com in the conf file, I want to have this value in an environment variable. How can I use environment variables in nginx.conf? Or is there a better way with nginx to have external configuration?
Use Environment Variable or Parameter in nginx.conf
As suggested by @Douwe de Haan, you don't have to use the public part. Just call {{ asset("js/frontend/jquery.min.js") }}
I am developing a Laravel project, using Laravel version 5.7, with Homestead as the local development environment. When I run any route in the project, it doesn't load the public assets; instead, it gives a 404 error (see attached image below). I coded the asset path correctly, {{ asset("public/js/frontend/jquery.min.js") }}, and all these assets exist in the project folder, but when I try to go to any of those assets, it gives Laravel's 404 error (see below image). It would be great if anyone could give me a solution for this. Thanks in advance.
Laravel project's all assets are giving 404 error
It has nothing to do with your aurelia application. You are missing the EXPOSE statement (which is mandatory) in your Dockerfile. You can change it like this:

FROM nginx:1.15.8-alpine
EXPOSE 80
COPY dist /usr/share/nginx/html

If you try to run it without EXPOSE, you will get an error:

ERROR: ValidationError - The Dockerfile must list ports to expose on the Docker container. Specify at least one port, and then try again.

You should test your application before pushing it to ElasticBeanstalk. Install the eb cli (assuming that you have pip; if not, you need to install it as well):

pip install awsebcli --upgrade --user

then initialize a local repository for deployment:

eb init -p docker

and you can test it:

eb local run --port
I've deployed an Aurelia application to AWS Elastic Beanstalk via AWS ECR and have run into some difficulty. The docker container, when run locally, works perfectly (see below for the Dockerfile).

FROM nginx:1.15.8-alpine
COPY dist /usr/share/nginx/html

The deployment works quite well; however, when I navigate to the AWS provided endpoint http://docker-tester.***.elasticbeanstalk.com/ I get 502 Bad Gateway nginx/1.12.1. I can't figure out what the issue might be. The docker container in question is a simple Hello World example created via the au new command; it's nothing fancy at all. Below is my Dockerrun.aws.json file:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.dkr.ecr.eu-central-1.amazonaws.com/tester:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}

My Elastic Beanstalk configuration is rather small, with an EC2 instance type of t2.micro. I'm using the free tier as an opportunity to learn. I greatly appreciate any help, or links to some reading that may point in the right direction.
Aurelia, Docker, Nginx, AWS Elastic Beanstalk Showing 502 Bad Gateway
I've lost the better half of a day on this issue, likely digging through the same threads as you. This thread held the answer: https://twittercommunity.com/t/twitter-card-error-fetching-the-page-failed-because-other-errors/112895/6

Enabling AES128 as an SSL cipher will allow the Twitterbot to connect. This can be done in the nginx configuration via the Forge UI. Happy tweeting!
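For illustration, the change might look something like this in the site's server block. This is a sketch only; the exact cipher list should follow your own security policy, and the suites below are assumptions rather than recommendations:

server {
    listen 443 ssl;
    server_name example.com;
    # Include at least one AES128 suite so Twitterbot can complete the TLS handshake:
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5';
    ...
}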
Our website is running on Laravel Forge with a Let's Encrypt SSL certificate, and HTTPS is OK in the browser. We added FB and Twitter meta tags to get branded FB and Twitter cards when sharing on these media. The following error is raised when trying to display the Twitter card in tweets (tested with https://cards-dev.twitter.com/validator): 'ERROR: Fetching the page failed because of other errors.' After research, this is linked to this dedicated well-known issue: https://twittercommunity.com/t/error-fetching-the-page-failed-because-ssl-handshake-error/30204/9. It establishes that on an Apache server serving different sites, a ServerName directive that matches the SSL certificate's CN is needed to avoid Apache sending the local hostname or the IP of the connection. How do we solve this Twitter issue on a Forge NGINX server? Does someone know if this is the same issue as on an Apache server, and what configuration changes are needed?
Twitter: "Fetching the page failed because other errors", on Forge NGINX server with SSL
upstream max_conns is the number of connections from the nginx server to an upstream proxy server. max_conns is more about making sure backend servers do not get overloaded. Say you have an upstream of 5 servers that nginx can send to; maybe one is underpowered, so you limit the total number of connections to it to keep from overloading it.

limit_conn is the number of connections to the nginx server from a client, and exists to limit abuse from requests to the nginx server. For example, you can say for a location that an IP can only have 10 open connections before maxing out.
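Putting the two side by side, a minimal sketch (server addresses, zone names, and limits are all illustrative; limit_conn_zone and upstream both belong in the http context):

upstream backend {
    zone upstream_backend 32m;
    server 10.0.0.2:8080 max_conns=100;   # cap nginx -> backend connections
}

limit_conn_zone $binary_remote_addr zone=perip:10m;

server {
    location / {
        limit_conn perip 10;              # cap client -> nginx connections per IP
        proxy_pass http://backend;
    }
}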
Environment: Nginx 1.14.0 (see dockerfile for more details). To limit the number of concurrent connections for a specific location in a server, one can use two methods: limit_conn (third example, for all IPs) and upstream max_conns. Is there a difference in the way the two methods work? Can someone explain, or refer me to an explanation?

Example of limiting using upstream max_conns:

http {
    upstream foo {
        zone upstream_foo 32m;
        server some-ip:8080 max_conns=100;
    }

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://foo/some_path;
            return 429;
        }
    }
}

Limiting using limit_conn:

http {
    limit_conn_zone $server_name zone=perserver:32m;

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://some-ip:8080/some_path;
            limit_conn perserver 100;
            limit_conn_status 429;
        }
    }
}
Nginx: limit_conn vs upstream max_conns (in location context)
I ran into this exact same issue while testing a deployment on Google Kubernetes Engine. I found out that if you assign a static IP address to your load balancer, that is the additional IP address that traffic will be forwarded from. Static IP addresses are always outside the listed ranges for Google's load balancers, since they can be reserved for purposes other than load balancing. In my case, I whitelisted the ranges that Google listed along with my static IP, and everything is working fine; traffic doesn't get forwarded from any other IP addresses. Whitelisting the entire range of Google's IP addresses might open a security hole where someone would be able to spoof their IP on your site: if someone uses a Google Compute Engine instance that is assigned one of the Google IPs you whitelisted, they will be able to spoof their IP by changing the forwarded-for headers.
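In nginx terms, that whitelist would look roughly like this (the last address is a hypothetical placeholder for the static IP you reserved for your load balancer):

set_real_ip_from 130.211.0.0/22;   # GCP load balancer range
set_real_ip_from 35.191.0.0/16;    # GCP load balancer range
set_real_ip_from 203.0.113.10;     # hypothetical static IP assigned to the LB
real_ip_header X-Forwarded-For;
real_ip_recursive on;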
How to determine the IP ranges used by the GCP load balancers? I am operating several VM instances on Google Cloud Platform (GCP). They are behind an HTTP(S) load balancer. In order to restrict access based on the origin IP address, I configured the Nginx on each VM instance as follows:

server {
    listen 80;
    listen [::]:80;

    server_name www.example.com;

    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    set_real_ip_from 130.211.0.0/22; # GCP load balancers
    set_real_ip_from 35.191.0.0/16;  # GCP load balancers

    ...
}

I found the IP ranges 130.211.0.0/22 and 35.191.0.0/16 in the Firewall rules section of the "HTTP(S) Load Balancing Concepts" document page. But in actual operation, I noticed that accesses could also come from another IP range, 35.190.0.0/17. So I consulted a section of the Google Compute Engine FAQ, and learned that I can get the list of all public IP ranges of GCP. This list is very long and seems to include IP ranges that are not used by the load balancers. I have two questions:

How can I determine the IP ranges used by the GCP load balancers?
How can I update the Nginx configuration when the IP ranges change?
How to determine the IP ranges used by the GCP load balancers
Check the path with which nginx. Then you can remove the binary from that path.
Output of nginx -v: nginx version: nginx/1.14.0. After running brew uninstall nginx or brew remove nginx, it gives the error:

Error: No such keg: /usr/local/Cellar/nginx

I have tried:

rm -f /usr/local/sbin/nginx
rm -f -R /usr/local/etc/nginx
rm -r /usr/local/opt/nginx

But nginx -v is still giving the output nginx version: nginx/1.14.0. How can I remove the nginx installation?
Unable to uninstall nginx on Mac OS X
The $uri/ term in the try_files statement causes nginx to append a trailing / to the requested URI, if that URI resolves to a local directory. See this document for more. The trailing / is appended by issuing a 3xx response, and of course nginx gets the port wrong, as it knows nothing about port 8000. If you do not want nginx to issue any 3xx responses, simply remove the $uri/ term from your try_files statement. For example:

location /app {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /app/index.html?$args;
}
I have the following config:

server {
    listen 80;
    server_name localhost;

    location /app {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /app/index.html?$args;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

When navigating to http://localhost:8000/app/ all works as expected, but when removing the trailing slash (http://localhost:8000/app) nginx returns a 301 status response and I am redirected to http://localhost/app. How can I make nginx work with both http://localhost:8000/app/ and http://localhost:8000/app (with and without the trailing slash)?
Nginx missing trailing slash returns 301
I had the same problem and solved it by editing my htaccess, which now looks like this:

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress

# WP Maximum Execution Time Exceeded
php_value max_execution_time 300
php_value upload_max_filesize 50M
php_value post_max_size 80M
php_value max_input_time 300
php_value max_execution_time 3000

Install the plugin "WP Maximum Execution Time Exceeded" (it will keep your files from breaking when uploading), then add only this part at the end of the htaccess:

# WP Maximum Execution Time Exceeded
php_value max_execution_time 300
php_value upload_max_filesize 50M
php_value post_max_size 80M
php_value max_input_time 300
php_value max_execution_time 3000

Of course you can also change the numbers! That helped me! Hope it will help you also! Atanas
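Worth noting: .htaccess and php_value are Apache-specific, and the question's stack is nginx + PHP-FPM, which ignores .htaccess files entirely. A hedged sketch of the nginx-side counterparts of the same limits (values copied from the snippet above; placement may vary with your setup):

server {
    ...
    client_max_body_size 80M;          # nginx counterpart of post_max_size
    location ~ \.php$ {
        fastcgi_read_timeout 300;      # give slow PHP uploads time to finish
        ...
    }
}

The upload_max_filesize, post_max_size, max_execution_time, and max_input_time values themselves would still go into php.ini or the FPM pool config, since nginx cannot set PHP ini values the way Apache's php_value does.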
System OS:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"

I have installed a LEMP stack:

nginx/1.10.0 (Ubuntu)
MySQL 5.7.18-0ubuntu0.16.04.1
PHP 7.0.15-0ubuntu0.16.04.4

The system hangs and displays a 'connection reset' error message in the browser when I try to upload a theme or a plugin. I have managed to install some plugins from the Wordpress repository, but I cannot install a 15MB plugin I am uploading as a zip from my remote machine via the browser. I have increased the memory limit to 512MB by editing /etc/php/7.0/fpm/php.ini, and the php.ini script is now reporting this is taking effect:

memory_limit 512M 512M

I have also increased the max memory limit in wp-config.php by inserting this as the first line in the file:

define('WP_MEMORY_LIMIT', '512M');

I have also created the following settings in the php config file at /etc/php/7.0/fpm/php.ini:

max_execution_time = 240
max_input_time = 240
upload_max_filesize = 100M

Yet plugins and themes are still not uploading. I have tried both Firefox and Chrome. In Chrome, you get a % complete while the zip is being uploaded. The upload gets to 44% and then crashes, and I get the 'Connection Reset' error in the browser. I have changed ownership of the plugin directory and wp-content directory to www-data:www-data. I don't know what else to try. Any ideas?
Why is the connection reset when uploading Wordpress plugin or theme on Ubuntu
You can add this to your reverse proxy configuration:

location /ws { # For websocket support
    proxy_pass http://zeppelin:8080/ws;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection upgrade;
    proxy_read_timeout 86400;
}

Reference: Zeppelin 0.7 auth docs
In our current architecture we have two apache front servers; in front of them, we have an nginx load balancer, and in front of that an nginx reverse proxy. My problem is that I'm trying to run Apache Zeppelin through the reverse proxy, and I'm having some problems with the websockets. I get an error like this:

400 HTTP method GET is not supported by this URL

And here is a screenshot of what Chrome's Network tab shows. I'll add my reverse proxy config for Zeppelin:

error_log /var/log/nginx/nginx_error.log warn;

server {
    listen 80;
    server_name localhost;

    location /zeppelin/ {
        proxy_pass http://zeppelin:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
    }

    # fallback
    location / {
        return 301 http://ci.blablalablab.com/app/;
    }
}

Zeppelin is running inside a docker container, I have exposed the 8080 port, and its host name is zeppelin. If you have any questions on the architecture or so, don't hesitate to ask. Thank you very much guys!
Running Apache Zeppelin with nginx as reverse proxy
1) Check your installed packages with php -m. If bz2 is installed, move to step 3 directly; if not, install it as in step 2.

2) For PHP 7: apt-get install php7.0-bz2. For PHP 5: apt-get install php-bz2

3) Then make sure that you've enabled the extension via: phpenmod bz2

4) Then you can restart your server: service nginx restart
I have an Ubuntu 16.04 server with PHP7 + nginx running. I already have a project in PHP Laravel 5.1 running in my local environment (Windows with Xampp) and everything runs great there. I have a PHP script that uses the bzdecompress function from Bzip2, but on the server it just crashes and shows this message:

Call to undefined function App\Http\Controllers\bzdecompress()

I don't see instructions on how to install this library (if needed), how to load it, or at least how to check that it is loaded. Thank you very much!
Call to undefined function bzdecompress PHP
After doing a lot of experiments I finally fixed the issue. The solution is as below:

cd /etc/php.d/

Create a file named solr.ini and add this line:

extension=solr.so

Then remove the above extension from the php.ini file and restart php-fpm. That's all; it worked for me.
After a successful install I'm getting the below error:

NOTICE: PHP message: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/solr.so' - /usr/lib64/php/modules/solr.so: undefined symbol: php_json_decode_ex in Unknown on line 0

Can anyone help me out with this issue? My server details are:

php:
PHP 5.4.16 (cli) (built: Aug 11 2016 21:24:59)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

nginx:
nginx version: nginx/1.10.1

When I execute php -v, I get the below message:

PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/solr.so' - /usr/lib64/php/modules/solr.so: undefined symbol: php_json_decode_ex in Unknown on line 0
PHP 5.4.16 (cli) (built: Aug 11 2016 21:24:59)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies
PHP Warning: Unable to load dynamic library '/usr/lib64/php/modules/solr.so' undefined symbol: php_json_decode_ex in Unknown on line 0
Actually, nginx provides stale-while-revalidate behavior via proxy_cache_use_stale, and nginx has supported the Cache-Control extensions since 1.11.10:

location / {
    ...
    proxy_cache_use_stale updating error timeout http_500 http_502 http_503 http_504;
    proxy_cache_background_update on;
}

The directive approach does not depend on the Cache-Control extension, so if your application does not use stale-while-revalidate in the Cache-Control header, nginx will be enough.
We are currently moving our servers to a new host with PLESK 12.5, which doesn't support the Varnish cache for our PHP applications. We use Varnish mostly for the 'stale-while-revalidate' capability, so that we can send whole pages or parts (using ESI) without any waiting time for any customer while the cache is refreshing. Is there any alternative to Varnish for a similar kind of cache? Either another "program" that could run on PLESK, or any PHP/server cache? PLESK comes with NGINX, but it does not seem to provide 'stale-while-revalidate' capabilities; I also know Squid isn't supported on PLESK.
Stale-while-revalidate cache replacement from Varnish
The real client IP is available in the $proxy_add_x_forwarded_for variable, i.e. the X-Forwarded-For header. It will have comma-separated entries; the very first value is the real client IP. To log the real client IP in Tomcat's access logs, modify the pattern value in the AccessLog Valve as:

%{X-Forwarded-For}i %l %u %t "%r" %s %b
I have Nginx in front of a Spring Boot 1.3.3 application with the Tomcat access log enabled, but the logging always writes the proxy IP address (127.0.0.1) instead of the real client IP. Is the X-Real-IP header used to get the real client IP? Is this header used by Tomcat to write the IP address in the access log? I have this configuration:

application.properties

server.use-forward-headers=true
server.tomcat.internal-proxies=127\\.0\\.0\\.1
server.tomcat.accesslog.enabled=true

Nginx configuration:

location / {
    proxy_pass http://127.0.0.1:8091;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header Host $host;
}
How to log the real client IP on embedded Tomcat access log on Spring Boot application with Nginx as reverse proxy?
You have to set the following options:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;

This is a sample that works:

server {
    listen 80;
    listen 443;
    server_name api.mysite.dev;

    location /api/ {
        proxy_pass http://127.0.0.1:8001/;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host "api.mysite.dev";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I have an Nginx config similar to:

server {
    listen 80;
    listen 443;
    server_name api.mysite.dev;

    location / {
        proxy_set_header Host "api.mysite.dev";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass $scheme://127.0.0.1:8001;
    }
}

server {
    listen 80;
    listen 443;
    server_name mysite.dev www.mysite.dev;

    # Forward all /api/ requests to api.mysite.dev
    # sadly need to keep this around for backwards compatibility
    location /api/ {
        proxy_set_header Host "api.mysite.dev";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass $scheme://127.0.0.1:8001/;
    }

    # The proxy_pass here ends up putting the port (8002) in the response URL.
    location / {
        proxy_set_header Host "www.mysite.dev";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass $scheme://127.0.0.1:8002;
    }
}

So, as said in the comment, when I request www.mysite.dev, my browser is forwarded to www.mysite.dev:8002. Any ideas what I'm doing wrong here? Thanks in advance!
nginx proxy_pass is setting port in response
There's nothing stopping you from serving requests from Go directly. On the other hand, there are some features that nginx provides out of the box that may be useful, for example:

- handle many virtual servers (e.g. have Go respond on app.example.com and a different app on www.example.com)
- http basic auth in some paths, say www.example.com/secure
- access log
- etc.

All of this can be done in Go, but it would require programming, while in nginx it's just a matter of editing a .conf file and reloading the configuration; nginx doesn't even need a restart for these changes to take place (see the sketch below). From a "process" point of view, nginx could be managed by an ops employee, with root permissions, running on a well-known port, while developers deploy their apps on higher ones.
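As a concrete illustration of that zero-code configuration, here is a hedged sketch (domains, ports, and paths are made up for the example):

server {
    listen 80;
    server_name app.example.com;
    access_log /var/log/nginx/app_access.log;   # access log for free

    location / {
        proxy_pass http://127.0.0.1:8080;       # the Go app
    }

    location /secure {
        auth_basic           "Restricted";      # http basic auth for one path
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:8080;
    }
}

A second server block with a different server_name would route another hostname to a different app, with no changes to either Go program.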
(This question was closed as a duplicate of "What are the benefits of using Nginx in front of a webserver for Go?") Sorry, I cannot find this answer from a Google search, and nobody seems to explain clearly the difference between a pure Go webserver and an nginx reverse proxy. Everybody seems to use nginx in front for web applications. My question is: while Go has all the http serving functions, what is the benefit of using nginx over a pure Go web server? In most cases, we set up the Go webserver for all routes and have the nginx configuration in front, something like:

limit_req_zone $binary_remote_addr zone=limit:10m rate=2r/s;

server {
    listen 80;
    log_format lf '[$time_local] $remote_addr ;

    access_log /var/log/nginx/access.log lf;
    error_log /var/log/nginx/error.log;

    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    server_name 1.2.3.4 mywebsite.com;
}

while we have this Go:

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

Is the traffic to nginx and the Go web server different? If not, why do we have two layers of web servers? Please help me understand this. Thanks,
Go web server with nginx server in web application [duplicate]
The socket has to be readable and writable by both client and server. Under the assumption that the server is running as www-data and the client is running as forge with group forge, the following steps should fix the issue.

Change the group ownership of the socket to the group of user forge:

chgrp forge /var/run/fcgiwrap.socket

Change the group permissions to allow write for group forge:

chmod g+w /var/run/fcgiwrap.socket

The socket will now be readable and writable by both server and client.
I'm trying to install Gitweb on my Nginx server. Everything seems to be configured correctly, but I seem to be getting the following error in gitweb.log:

2015/06/08 08:42:05 [crit] 29135#0: *5 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 83.36.85.6, server: git.mydomain.co.uk, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "git.mydomain.co.uk"

I've checked the owner/permissions and all seems to be fine:

srwxr-xr-x 1 www-data www-data 0 Jun 8 08:44 /var/run/fcgiwrap.socket

The output of ps aux | grep nginx is:

root  30283 0.0 0.0 90552 1296 ?     Ss 08:59 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
forge 30284 0.0 0.0 90884 1924 ?     S  08:59 0:00 nginx: worker process
forge 30285 0.0 0.1 90884 2408 ?     S  08:59 0:00 nginx: worker process
root  30528 0.0 0.0 11980  928 pts/0 R+ 09:03 0:00 grep --color=auto nginx

Any ideas what the problem could be?
Nginx connect() to unix:/var/run/fcgiwrap.socket failed
The underlying php:fpm image has the following line in its Dockerfile:

WORKDIR /var/www/html

You then delete this directory, which breaks the default CMD command, as it uses WORKDIR as its base. I don't know much about PHP and this Docker image, but it's worth reading the documentation and looking at any examples you can find on how to use it; deleting a directory feels like you're working against the image.
I want to play around with docker, so I created my own 2 containers, nginx and php. Both containers build successfully and are published on docker hub. After that, I created a fig.yml in my project's folder. If I run fig up -d in my terminal, I get the following error:

Recreating playground_php_1...
Cannot start container e087111c...: [8] System error: no such file or directory

Any ideas how I can fix this problem? Here is my fig.yml:

web:
  image: mc388/docker-nginx:latest
  ports:
    - "80:80"
    - "443:443"
  links:
    - php
  volumes:
    - ./logs/nginx:/var/log/nginx
  volumes_from:
    - php

php:
  image: mc388/docker-php:latest
  volumes:
    - .:/var/www/

And here are the links to the configs of both docker containers:

https://github.com/mc388/docker-nginx
https://github.com/mc388/docker-php
Docker: Nginx & PHP Container: no such file or directory
You must quote a location which contains { or ; characters:

location ~ "^/abc/\w{1,3}$" {
    ...
}

Otherwise nginx parses it as

location ~ ^/abc/\w { 1,

and fails with a syntax error.
How do you match a repetition count in an nginx location regex? The {x,x} syntax never seems to work! For example:

location ~ ^/abc/\w{1,3}$ {
    ...
}

never works!
nginx location regex, match multiple times
Unfortunately not. PHP-FPM simply logs each line of PHP output as a separate event. There's nothing you can do in or with PHP-FPM to change this.

PHP code

You'll need to "fix" this in your application (PHP code). There are 3 ways you can influence the way PHP reports errors, and you'll probably want to use all 3:

Register a custom error handler with set_error_handler(). This handler is called on all errors except E_ERROR, E_PARSE, E_CORE_ERROR, E_CORE_WARNING, E_COMPILE_ERROR, E_COMPILE_WARNING, and most of E_STRICT raised in the file where set_error_handler() is called.

Register a custom exception handler with set_exception_handler(). This handler is called when an uncaught exception occurs.

Register a custom shutdown function with register_shutdown_function(). This function is called after script execution finishes or exit() is called. This one is useful for detecting errors that are not handled by the error handler.

Log library

I can advise you to have a look at Monolog. It's a PSR-3 compliant logging library which also facilitates what I described above. In addition, it has an impressive list of "handlers" which can write the logs to all sorts of services. Chances are the service you're using now is among them!

Alternative

Another option is to create a proxy script that reads the PHP-FPM log files and buffers lines until a "full event" is gathered, then writes that as a single entry to the service you're using. I would advise you not to go this way. Writing such a script can be tricky and is very error-prone. Logging from your application itself is much more stable and reliable.
I have a problem with PHP-FPM registering a single event as multiple events. Take for example the stack trace below:

[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "NOTICE: PHP message: PHP Fatal error: Uncaught exception 'Zend_View_Exception' with message 'script 'new-layout.mobile.phtml' not found...."
[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "Stack trace:"
[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "#0 /usr/share/nginx/html/site.com/142-webapp/library/Zend/View/Abstract.php(884): Zend_View_Abstract->_script('new-layout.mobi...')"
[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "#1 /usr/share/nginx/html/site.com/142-webapp/library/Zend/Layout.php(796): Zend_View_Abstract->render('new-layout.mobi...')"
[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "#2 /usr/share/nginx/html/site.com/142-webapp/library/Zend/Layout/Controller/Plugin/Layout.php(143): Zend_Layout->render()"
[30-Jul-2014 05:38:50] WARNING: [pool www] child 11606 said into stderr: "#3 /usr/share/nginx/html/site.com/142-webapp/library/Zend/Controller/Plugin/Broker...."

As you can see, each line of the stack trace is effectively a separate event with its own timestamp. This is problematic when forwarding logs to another service for analysis, because each stack trace will be broken up when it should be considered one event. At the moment I am using Kibana 3, and it is a nightmare viewing and managing stack traces, since each line is a separate event and the individual events are not always in chronological order. How do I make php-fpm register each stack trace as one event?
PHP-FPM breaks up stack trace log into separate events
It seems you have a custom context processor trying to resolve the path:

File "./project/context_processors.py", line 88, in app_delegate
    app_name = resolve(request.path).app_name

Quoting the django resolve() docs: "If the URL does not resolve, the function raises a Resolver404 exception (a subclass of Http404)." I suggest you manage the exception in the custom processor code, to look like this one:

try:
    resolve_result = resolve(request.path)
    app_name = resolve_result.app_name
    ... your code ....
except Resolver404:
    pass
I am using Django 1.6, uwsgi and nginx. The application works fine, but I am getting a 500 error and the email below for every invalid URL that I try to access, instead of a 404 error. I get this for http://my_project_url.com/whatever or even for http://my_project_url.com/favicon.ico. I have looked over the URLs, but there is no regex matching this pattern. Here is the traceback from the email:

Traceback (most recent call last):
  File "/project/virtenv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response
    response = callback(request, **param_dict)
  File "/project/virtenv/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "/project/virtenv/local/lib/python2.7/site-packages/django/views/defaults.py", line 30, in page_not_found
    body = template.render(RequestContext(request, {'request_path': request.path}))
  File "/project/virtenv/local/lib/python2.7/site-packages/debug_toolbar/panels/templates/panel.py", line 55, in _request_context__init__
    context = processor(request)
  File "./project/context_processors.py", line 88, in app_delegate
    app_name = resolve(request.path).app_name
  File "/project/virtenv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 453, in resolve
    return get_resolver(urlconf).resolve(path)
  File "/project/virtenv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 333, in resolve
    raise Resolver404({'tried': tried, 'path': new_path})
Resolver404: {u'path': u'favicon.ico', u'tried': [[<RegexURLResolver <module 'custom_

If I try to access a URL from the app where I do raise Http404, it's fine: I get the regular nginx error page.
Django Internal Server Error instead of 404
The fpm address path was missing:

nano /etc/nginx/conf.d/php5-fpm.conf

edit:

upstream php5-fpm-sock {
    server unix:/var/run/php5-fpm.sock;
}
I am trying to set up a virtual host for a fresh ubuntu/php5.5/nginx installation, as such:

etc/nginx/sites_available/mydomain.com:

server {
    listen 80 default_server;

    root /home/www/mydomain.com/public/;
    index index.php index.html

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server_name mydomain.com;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_pass php5-fpm-sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 240;
        include /etc/nginx/fastcgi_params;
    }
}

etc/hosts:

127.0.0.1 mydomain.com

symlink in the 'sites-enabled' folder:

sudo ln -s /etc/nginx/sites-available/mydomain.com /etc/nginx/sites-enabled/mydomain.com

service nginx restart fails, and /var/log/nginx/error.log gives:

no port in upstream "php5-fpm-sock" in /etc/nginx/sites-enabled/mydomain.com:12

What can be wrong?
nginx virtual host: php5-fpm-sock error
Plesk really shouldn't have its core edited. When you need domain-level config changes, there's a file you need to edit outside that core config. Under Apache that file was called vhost.conf, under the directory for your domain; it would then be appended to the base config. It looks like nginx uses a similar process. Based on this post, here are the steps to add a custom include in the nginx virtual host config:

mkdir /usr/local/psa/admin/conf/templates/custom/domain

cp /usr/local/psa/admin/conf/templates/default/domain/nginxDomainVirtualHost.php /usr/local/psa/admin/conf/templates/custom/domain/

Add this in /usr/local/psa/admin/conf/templates/custom/domain/nginxDomainVirtualHost.php:

<?php if (file_exists($VAR->domain->physicalHosting->vhostDir . '/conf/nginx.conf')): ?>
include <?php echo $VAR->domain->physicalHosting->vhostDir; ?>/conf/nginx.conf;
<?php endif; ?>

/usr/local/psa/admin/bin/httpdmng --reconfigure-all # to apply the new configuration for all domains

As a result, if a domain has conf/nginx.conf, it will be included into the virtual host config.
I use Plesk and I have three domains and subdomains with different nginx configs. At the moment I change the nginx config in /etc/nginx/plesk.conf.d/vhost/ manually after every update, because my changes are being overwritten by httpdmng. Now to my question: can I create a separate template in the /usr/local/psa/admin/conf/templates/ folder for each domain?

Example:

mydomain1 uses the template from /usr/local/psa/admin/conf/templates/mydomain1/nginxDomainVirtualHost.php
mydomain2 uses the template from /usr/local/psa/admin/conf/templates/mydomain2/nginxDomainVirtualHost.php
Plesk nginx config for every domain and subdomain
This was answered earlier in another thread: https://stackoverflow.com/a/12904282/2324004

Unfortunately, the Socket.io wiki lacks some information, but the clue is to set up the resource:

Client

var socket = io.connect('http://localhost:8081', {resource: 'test'});

Server

var io = require('socket.io').listen(8081, {resource: '/test'});

The above is working (I've tested it just now). This should work with the configuration above:

location /test/ {
    proxy_pass http://localhost:8081;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
I want to bind nodejs to a url, like this: http://myproject.com/nodejs/. Currently, I have node on port 8080, and I have this nginx configuration:

upstream app {
    server 127.0.0.1:8080;
}

server {
    listen 80; ## listen for ipv4; this line is default and implied

    root /home/myproject/www;
    index index.html index.htm;

    server_name myproject.com;

    location /nodejs/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app/;
        proxy_redirect off;
    }
}

When I open the url I get: Welcome to socket.io. The connection is ok. But when I try to connect from my website, I get this:

GET http://myproject.com/socket.io/1/?t=1385767934694 404 (Not Found) socket.io.js:1659

This is my website's line for making the connection:

var socket = io.connect('http://myproject.com/nodejs/');

How can I do that?
Nginx set up nodejs in port 80
I can't speak explicitly to node.js architecture decisions, but I can address your CloudFront and ELB questions. CloudFront is a great CDN for static assets, but there are a few gotchas. As the saying goes, "There are only two hard problems in Computer Science: cache invalidation and naming things." If you want to replace your static assets with identically-named-but-updated assets, you'll have to start mucking with cache-control headers, which determine how long any given CF node caches an asset. There's little advantage to caching in CF if you set it too low. It's such a pain that I strongly recommend versioning your assets and setting up your application in such a way as to deploy assets and updated asset references simultaneously. At this time, there is no way to manually expire an object's cache in CF via an API call or some other method.

As for ELB scaling: yes, use them. Netflix does, and if they can, you can too. The key to ELB use is understanding their role in your application's traffic: you want them to be highly available, which means maintaining a fleet of webservers in multiple availability zones and making sure the ELB is properly configured to hit them. In fact, Netflix recently open-sourced their cloud management application called "Asgard". I suggest checking it out. The linked blog post has a great explanation of ELB use, and Asgard looks like a decent way to manage cloud deployments with zero downtime.
I did post this on serverfault, but it's not getting any views or responses. I've read a bunch of posts on here about whether or not you need a webserver when using Node.js, and the answer always seems to be yes, to serve up static files. My question is this, though: if the site I'm working on is mostly dynamic, couldn't I just use Node.js as the server for the dynamic parts, and then put all the static files (css, js, images, etc) on CloudFront to serve them? This way I don't have to worry about caching (through varnish or redis or what have you), or running an http server like nginx and proxying through to get to Node.js (which I've read causes problems with socket.io). Along a similar line, will Amazon's ELB suffice as a load balancer, or, if the site got big enough to need load balancing, would I need to do something else? Thanks in advance!
AWS and Node.js, do I need nginx or apache?
You should move the coffee-rails gem from the :assets group to the main group.
I'm having an unexplainable issue with my Rails app. I'm using a lot of JavaScript in all parts of the app. In development everything works just fine, but in production it seems that the code in my JavaScript views is not executed. This is particularly odd because all other JavaScript on the page works great. Custom tabs with JavaScript work. Even my custom-made calendar works as expected. The only things that do not work are remote links which trigger views ending with .js.coffee. My webkit inspector returns this when clicking a remote link: http://cl.ly/Ihyf. However, when looking at the response tab of that request, it shows me nothing: http://cl.ly/Ihxs. Furthermore, the console does not show any errors. It's as if my new.js.coffee is just empty. The view that is being called contains some simple JavaScript to show a modal:

$('#modal_container').html("<%=j render 'form' %>")
$('#modal_container').modal()

In development mode, all views load properly. I also ran rake assets:precompile multiple times, but that didn't seem to help.
Rails JavaScript views not working in production
As far as I see, you have mapped location / to go to localhost:8000. When you have two different upstreams, you'll need two different location mappings, one for each upstream. So, assuming that the Django app is the primary site on your domain, you'll keep the default location as it is now:

location / {
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_connect_timeout 10;
    proxy_read_timeout 10;
    proxy_pass http://localhost:8000/;
}

but then add another location for the other app:

location /tilestache {
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_connect_timeout 10;
    proxy_read_timeout 10;
    proxy_pass http://localhost:8080/;
}

The only difference here is the port. This way, domain.com/tilestache will be processed by localhost:8080, while all other addresses will default to the Django app at localhost:8000. Place the location /tilestache block before location /. For clarity you can define your upstreams like this:

upstream django_backend {
    server localhost:8000;
}
upstream tilestache_backend {
    server localhost:8080;
}

and then in the location sections, use:

location / {
    .....
    proxy_pass http://django_backend;
}
location /tilestache {
    .....
    proxy_pass http://tilestache_backend;
}
I'm trying to host a site that consists of a Django app and map tiles served by TileStache. I can get them running and serving content separately by using either gunicorn_django -b 0.0.0.0:8000 for the Django app, or gunicorn "TileStache:WSGITileServer('tilestache.cfg')" for TileStache. I've tried daemonizing the Django app and running them at the same time with the TileStache process on a different port (8080), but TileStache doesn't work. I assume the issue lies in my nginx conf:

server {
    listen 80;
    server_name localhost;

    access_log /opt/django/logs/nginx/vc_access.log;
    error_log /opt/django/logs/nginx/vc_error.log;

    # no security problem here, since / is always passed to upstream
    root /opt/django/;

    # serve directly - analogous for static/staticfiles
    location /media/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location /static/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8000/;
    }

    # what to serve if upstream is not available or crashes
    error_page 500 502 503 504 /media/50x.html;
}

Can I just add another server block in the conf for proxy_pass http://localhost:8080/? Additionally, I'm very new to this stack (I've relied greatly on Adrián Deccico's tutorial to get the Django part up and running), so any "whoa, that's an obvious mistake" remarks or suggestions would be greatly appreciated.
Nginx conf for two gunicorn applications (django and tilestache)
You could use JavaScript. Try this: http://detectmobilebrowsers.com/
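For illustration, a minimal client-side sketch of the cookie check, one-time detection, and theme switch. The cookie name mobile follows the question, and the user-agent regex here is a deliberately simplified stand-in for the far more thorough one that detectmobilebrowsers.com generates:

// run as early as possible in the cached page
(function () {
  // 1. check for an existing preference cookie
  if (!/(^|; )mobile=/.test(document.cookie)) {
    // 2. first visit: detect the device and remember the result
    var isMobile = /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);
    document.cookie = 'mobile=' + isMobile + '; path=/';
  }
  // 3. serve the appropriate experience, e.g. toggle a class or redirect
  if (/(^|; )mobile=true/.test(document.cookie)) {
    document.documentElement.className += ' mobile';
  }
}());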
We are launching a mobile version of our website very soon. Our full website and mobile website are different only in theme, i.e. URLs are the same; the only difference is on the front-end. We need to be able to do the following when a user visits our site:

1. Check a cookie (mobile == true OR false) to determine if a full vs. mobile preference has already been defined (by the user manually or by detection on our end).
2. If no mobile cookie is set, detect the user's device on the first page view and set the mobile cookie to true or false.
3. Serve the appropriate experience, full or mobile, based on the results of #1 and/or #2.

Initially I was using PHP to detect devices, which works fine. However, our site utilizes extreme full-HTML caching on the home page and some other pages (.html files are written to a folder in our web root, and if Nginx finds them they are served instead of the request going through PHP; the cache is cleared every 15 minutes), so I cannot rely on PHP to detect a mobile device from our main point of user entry (as far as I know at this point...). Not being able to rely on PHP, I then put the mobile cookie check and device detection into the Nginx configuration file (Apache locally for me while developing, translated by our server guy for Nginx). However, our server management folks got back to us saying the performance hit from the new Nginx configuration file would be large (and "performance hit" is a 4-letter word in our office). Basically I'm being told full HTML caching of the home page has to stay in place and that I can't change the Nginx configuration file at all. Is there another method for cookie/device detection I could utilize given the restrictions above?
How can I detect mobile devices (and/or mobile cookie) without scripting (PHP) or server configuration (Nginx)?
You could still use the rewrite rules from Apache, with slight modifications (I took this from Nginx Primer):

Apache:

RewriteCond %{HTTP_HOST} ^example.org$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

Nginx:

if ($host != 'example.org') {
    rewrite ^/(.*)$ http://www.example.org/$1 permanent;
}

Another issue is .htaccess files, but this would only be an issue if you're sharing the server with others. I would also research any Apache modules that you're relying on and ensure that the Nginx equivalents include the same functionality. Definitely test your webapp with Nginx in a test environment first to identify any issues. In the end, if your goal is better performance, then the migration from Apache to Nginx should be worth it.
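As an aside, the same host redirect can be written without if, which nginx's own "If Is Evil" guidance prefers; a minimal sketch using two server blocks (example.org stands in for your domain):

server {
    listen 80;
    server_name example.org;            # bare domain only
    return 301 http://www.example.org$request_uri;
}

server {
    listen 80;
    server_name www.example.org;        # the actual site lives here
    ...
}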
I have been running a website which serves JavaScript widgets for about 2 years. Now my question is whether I should use nginx purely or continue using Apache with nginx. I have about 200 sign-ups a day, and that means that sometimes the request rate for widgets goes up by 2000 a day. So, now the problem is that switching to nginx means I would not be able to use the rewrite rules that I am using in Apache. That is one problem I know of, but are there any other issues that I can expect to see in an nginx environment that I don't in Apache? Would you suggest I switch purely to nginx or stay with Apache and nginx as a reverse proxy?
Nginx vs Apache or using Apache with nginx [closed]
You haven't mentioned any reasons why you would want to put these pages in the Nginx server. I would recommend keeping them with the rest of your site, that is, on the Django server. Moving part of your site to the Nginx server is a good idea for solving a scalability problem, but it complicates your deploy. I certainly hope you aren't seeing a significant fraction of your site's traffic going to your error pages!
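Even if the application renders its own 404/500 pages, an nginx-level fallback is still useful for the one case Django cannot cover: the backend being down entirely. A minimal sketch, with hypothetical paths:

server {
    ...
    # static fallback, used only when the upstream cannot respond at all
    error_page 502 503 504 /50x.html;
    location = /50x.html {
        root /opt/django/static_errors;
        internal;
    }
}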
I know there is 404 error handling in Django, but is it better to just put that config in nginx? This SO thread has the solution for doing it: http://stackoverflow.com/questions/1024199/nginx-customizing-404-page. Is that how everyone handles it when using nginx? I have created my own 404.html and 500.html in the site's theme and want to display them.
django on nginx & apache: where to handle 404 & 500 errors?
"I didn't include Apache since its code base is an order of magnitude larger than the two mentioned."

Actually, Apache code is quite readable. It has a large code base because it does lots of things, but it is well structured and quite easy to understand. You can also check the APR library (Apache Portable Runtime), which has a plethora of small things to learn from. IMO, if you want to learn programming, you should start with lower-profile projects, and not an HTTPd, but something simpler. Both nginx and lighttpd (just like Apache) are production-quality software, meaning a very steep learning curve. And the learning unfortunately often means digging through archives to see why things are the way they are; that comes with age in any mature project. If you are simply into C and learning design, you might want to check FreeBSD or its derivatives. In my experience it is a better place to start: there are lots of tools and libraries of all calibers there, and their TODO lists are never empty, which serves well as a guide to where to begin.
Primary goal is to learn from a popular web server codebase (implemented in C) with priority given to structure/design instead of neat tricks throughout the code.I didn't include Apache since its code base is an order of magnitude larger than the two mentioned.
Which has a better code base to learn from: nginx or lighttpd?
Although / (U+FF0F, a full-width solidus rather than an ASCII slash) is an unusual and undesirable character, your script will break for any non-ASCII character.

response['X-Accel-Redirect'] = url

url is Unicode (and it isn't a URL, it's a filepath). Response headers are bytes. You'll need to encode it:

response['X-Accel-Redirect'] = url.encode('utf-8')

That's assuming you're running on a server with UTF-8 as the filesystem encoding. (Now, how to encode the filename in the Content-Disposition header... that's an altogether trickier question!)
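Put together, a sketch of the download view from the question with only the header line changed; everything else (model, URL layout, headers) is exactly as the asker had it:

def song_download(request, song_id):
    song = get_object_or_404(Song, pk=song_id)
    url = u'/static_music/%s/%s' % (song.path, song.filename)
    response = HttpResponse()
    # header values must be bytes, not unicode: encode the path explicitly
    response['X-Accel-Redirect'] = url.encode('utf-8')
    response['Content-Type'] = 'audio/mpeg'
    response['Content-Disposition'] = "attachment; filename=test.mp3"
    return response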
I have a list of strangely encoded files:

02 - Charlie, Woody and You/Study #22.mp3

which I suppose isn't so bad, but there are a few particular characters which Django or nginx seem to be snagging on.

>>> test = u'02 - Charlie, Woody and You/Study #22.mp3'
>>> test
u'02 - Charlie, Woody and You\uff0fStudy #22.mp3'

I am using nginx as a reverse proxy to connect to Django's built-in webserver (still in development stages) and PostgreSQL for my database. My database and tables are all en_US.UTF-8, and I am using pgadmin3 to view my tables outside of Django. My issue goes a little beyond my title. Firstly, how should I be saving possibly wacky filenames in my database? My current method is

'path': smart_unicode(path.lstrip(MUSIC_PATH)),
'filename': smart_unicode(file)

and when I pprint out the values they do show u'whateverthecrap'. I am not sure if that is how I should be doing it, but assuming it is, now I have issues trying to spit out the download. My download view looks something like this:

def song_download(request, song_id):
    song = get_object_or_404(Song, pk=song_id)
    url = u'/static_music/%s/%s' % (song.path, song.filename)
    print url
    response = HttpResponse()
    response['X-Accel-Redirect'] = url
    response['Content-Type'] = 'audio/mpeg'
    response['Content-Disposition'] = "attachment; filename=test.mp3"
    return response

and most files will download, but when I get to 02 - Charlie, Woody and You/Study #22.mp3 I receive this from Django:

'ascii' codec can't encode character u'\uff0f' in position 118: ordinal not in range(128), HTTP response headers must be in US-ASCII format.

How can I use an ASCII-acceptable string if my filename is out of bounds? 02 - Charlie, Woody and You\uff0fStudy #22.mp3 doesn't seem to work...

EDIT 1: I am using Ubuntu for my OS.
Django: Unicode Filenames with ASCII headers?
alias basically lets you serve a file under another name (e.g. serve a file named foo.json at the location /.well-known/assetlinks.json). Even if this is not required in your case, I would favor this config, as it is easily understandable:

location = /.well-known/assetlinks.json {
    alias /var/www/static/google-association-service/assetlinks.json;
}

Note that the = is required so as not to match locations like /.well-known/assetlinks.json1111.
I'm trying to serve my App Link for Google association services. The following works:

location /.well-known/assetlinks.json {
    root /var/www/static/google-association-service/;
    types { }
    default_type "content-type: application/json";
}

provided I have the correct file placed at /var/www/static/google-association-service/.well-known/assetlinks.json. The URI is what it is, Google gets to decide that, but I'd rather not have it resolve to a hidden file at the bottom of a directory, where the next guy wonders why the directory is even there because he forgot to run ls with '-a'. Is there a way to map this URI to something like /var/www/static/google-association-service/assetlinks.json, omitting the hidden subdirectory? (I've tried understanding the difference between root and alias, but I'm not seeing how alias would help me here.)
How to avoid needing a dotted file path in nginx configuration
The /etc/nginx/mime.types file already contains a mapping for URIs ending with .img, which is set to application/octet-stream. When you edit the file, you must also remove this existing mapping. Alternatively, you can override the content type for a single URI. For example:

root /path/to/root;
...
location = /images/logo.img {
    types {}
    default_type image/svg+xml;
}
I'm using nginx/1.10-3 on Debian. I have a file named logo.img which is in fact an SVG. I've modified /etc/nginx/mime.types to include .img as an extension for the SVG file type:

image/svg+xml    svg svgz img;

But the Content-Type header of the served file is still application/octet-stream. For some bizarre reason, I've been asked to serve the .img file as an SVG for a logo on a site. I got it to work on Apache 2 using MIME magic, but as far as I know, that doesn't exist on NGINX.
NGINX - Can't set content-type header
This is the only solution that worked. It's necessary to overwrite the default nginx config file after AWS creates it, so two more pieces are needed (a sketch follows below):

1. Write the nginx file.
2. Create a script that overwrites the default nginx file.
3. Run the script after AWS has created the default file.
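A sketch of what that can look like in .ebextensions on the older Amazon Linux platforms; the hook path and file names are illustrative, and the nginx content is simply the proxy.conf block from the question below, now written by a script that runs after Beanstalk has generated its own config:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_nginx_https_redirect.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # post-deploy hook: runs after the default nginx config exists,
      # so this redirect survives every deployment
      cat > /etc/nginx/conf.d/https_redirect.conf <<'EOF'
      server {
        if ($http_x_forwarded_proto = "http") {
          return 301 https://$host$request_uri;
        }
      }
      EOF
      service nginx reload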
I want to redirect HTTP requests to HTTPS on Elastic Beanstalk with nginx as the proxy system. I've found a lot of advice on Google, but none of it helped; it doesn't redirect. This is my current test.config file in the .ebextensions directory:

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      server{
        if ($http_x_forwarded_proto = "http") {
          return 301 https://$host$request_uri;
        }
      }

I've also tried countless other settings; none of them worked. These are my load balancer settings. I hope you can help me. :)
Redirect Elastic Beanstalk HTTP requests to HTTPS with nginx
You can use a regular expression instead of cut. For example, to extract the version number from nginx-1.15.0, use:

echo 'nginx-1.15.0' | grep -o '[0-9.]*$'

Output:

1.15.0
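Wired into the original script, a minimal sketch of the whole check; the variable names and the ${error}/${ok} prefixes follow the question and are assumed to be defined elsewhere:

#!/usr/bin/env bash
# extract the local version, whatever its length
nginxlocal=$(nginx -v 2>&1 | grep -o '[0-9.]*$')

# quote both sides so the comparison is safe even if a value is empty
if [ "$nginxlocal" != "$NGINX_VERSION" ]; then
    echo "${error} The installed Nginx version $nginxlocal DIFFERS from version ${NGINX_VERSION} defined in the config!"
else
    echo "${ok} The Nginx version $nginxlocal matches version ${NGINX_VERSION} defined in the config!"
fi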
I'm trying to check whether the installed nginx version matches the version defined in a config file. My code:

#check version
command="nginx -v"
nginxv=$( ${command} 2>&1 )
nginxvcutted="echo ${nginxv:21}"
nginxonpc=$( ${nginxvcutted} 2>&1 )

if [ $nginxonpc != ${NGINX_VERSION} ]; then
    echo "${error} The installed Nginx Version $nginxonpc is DIFFERENT with the Nginx Version ${NGINX_VERSION} defined in the config!"
else
    echo "${ok} The Nginx Version $nginxonpc is equal with the Nginx Version ${NGINX_VERSION} defined in the config!"
fi

This code 'can' work, but I have a problem: if the version number changes, the cut offset (nginxv:21 in this example) doesn't fit anymore. Example: nginx-1.13.12 vs nginx-1.15.0 (13 vs 14 chars). Is there any way to get this working without that trouble?

Solution: I adapted the solution from @Mohammad Saleh Dehghanpour and it's working like a charm:

command="nginx -v"
nginxv=$( ${command} 2>&1 )
nginxlocal=$(echo $nginxv | grep -o '[0-9.]*$')
echo $nginxlocal
1.15.0
Bash: Nginx version check with cut
You cannot match the query string (anything from the ? onwards) in location and rewrite expressions, as it is not part of the normalized URI. See this document for details. The entire URI is available in the $request_uri variable. Using $request_uri may be problematic if the parameters are not sent in a consistent order. To process many URIs, use a map directive, for example:

map $request_uri $redirect {
    default 0;
    /somepath/somearticle.html?p1=v1&p2=v2  /some-other-path-a;
    /somepath/somearticle.html              /some-other-path-b;
}
server {
    ...
    if ($redirect) {
        return 301 $redirect;
    }
    ...
}

You can also use regular expressions in the map, for example if the URIs also contain optional unmatched parameters. See this document for more.
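With 2000+ pairs, the map block itself would get unwieldy. map accepts an include directive, so the pairs can live in a separate file, and exact-string lookups stay hash-based rather than sequential (unlike a long chain of location blocks). A sketch with a hypothetical file name:

# in the http context
map $request_uri $redirect {
    default 0;
    include /etc/nginx/redirects.map;    # one "old-uri new-uri;" pair per line
}

# /etc/nginx/redirects.map
/somepath/somearticle.html?p1=v1&p2=v2  /some-other-path-a;
/somepath/somearticle.html              /some-other-path-b;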
I have to migrate a lot of URLs with params, which look like this:

/somepath/somearticle.html?p1=v1&p2=v2 --> /some-other-path-a

and also the same URL without params:

/somepath/somearticle.html --> /some-other-path-b

The tricky part is that the two destination URLs are totally different pages in the new system, whereas in the old system the params just indicated which tab to open by default. I tried different rewrite rules, but came to the conclusion that parameters are not considered by nginx rewrites. I found a way using location directives, but having 2000+ location directives just feels wrong. Does anybody know an elegant way to get this done? It may be worth noting that besides those 2000+ redirects, I have another 200,000(!) redirects. They already work, because they're rather simple. So what I want to emphasize is that performance is key!
nginx: rewrite a LOT (2000+) of urls with parameters
Solved it using

rewrite ^/api(/.*)$ $1 break;

but I can't use just /api; it must be /api/ (with a trailing /). For me that's fine; it would be interesting, though, if anyone knows how to support /api too.
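For the record, one way to cover the bare /api as well is an exact-match location that redirects to the slash form; a minimal sketch on top of the config in the question below:

# send /api (no trailing slash) to /api/ so the main location handles it
location = /api {
    return 301 /api/;
}

location /api/ {
    rewrite ^/api(/.*)$ $1 break;
    proxy_pass http://api_servers;
}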
I have this nginx.conf configuration:

http {
    ...
    upstream app_servers {
        server admin;
    }
    upstream status_servers {
        server status:5000;
    }

    # Configuration for the server
    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /api {
            proxy_pass http://api_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

/ is served by one server, and /api by another API server. The problem is with the second (the API server). The calls are reaching /api/** while I want them to reach the root of the API server (basically, remove the /api prefix when calling the API server). So calling /api should reach / on the API server, and calling /api/foo should reach /foo on the API server. I guess I'm looking for some kind of rewrite rule for that(?). I have tried, inside the /api location:

rewrite ^/api(.*) /$1 last;

but it didn't seem to work. Any kind of help would be appreciated!
nginx - rewrite location to root of server
Yes, you can use the auth_request module in nginx.
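A minimal sketch of the idea; nginx must be built with --with-http_auth_request_module, and the /oauth2/validate endpoint and upstream names are hypothetical stand-ins for your provider's token-validation URL:

server {
    location /api/ {
        auth_request /_oauth2_check;          # subrequest runs before proxying
        proxy_pass http://backend;
    }

    # internal-only target: a 2xx response allows the request, 401/403 denies it
    location = /_oauth2_check {
        internal;
        proxy_pass http://oauth2-provider/oauth2/validate;
        proxy_pass_request_body off;          # the token travels in a header
        proxy_set_header Content-Length "";
        proxy_set_header Authorization $http_authorization;
    }
}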
I have my own OAuth2 provider where you can ask for a token and validate it. I want to protect my REST API (resource server) with OAuth2, so on every single request the access token must be validated against the OAuth2 server. I have been doing this validation in the REST API code itself, by intercepting every request and making another request to the OAuth2 server. I wonder if there is any way to do it in the Nginx server instead of in the REST API. That way, it would be easier to set up for another REST API, instead of copy/pasting the code (or sharing a library). Maybe I should create my own nginx module? Or run a script on every request? If so, how can I do it? Any advice will be appreciated.
Nginx proxy for OAuth2 validation
For the access check you can use the if directive and the SSL module variables $ssl_client_s_dn and $ssl_client_serial. Example:

location /not/for/jhon/ {
    if ($ssl_client_s_dn ~ Jhon) {
        return 403;
    }
}

A good way to maintain a list of allowed certificates is the map directive. Example:

map $ssl_client_s_dn $ssl_access {
    default 0;
    01      1;
    02      1;
}
server {
    ...
    location /authorize/by/cert {
        if ($ssl_access = 0) {
            return 403;
        }
    }
}

or the rules can be more complicated:

map $ssl_client_s_dn $ssl_access {
    default "";
    01      -a-b-c-;
    02      -b-e-;
}
server {
    ...
    location /authorize/by/cert/a {
        if ($ssl_access !~ -a-) {
            return 403;
        }
    }
    location /authorize/by/cert/b {
        if ($ssl_access !~ -b-) {
            return 403;
        }
    }
}

But read this article first: https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/. Use if only with return or rewrite ... last; directives.
I have SSL enabled in nginx with the client certificate enabled in my browser. With this I'm able to hit my site via HTTPS through port 443.What I'm looking for now is to use this information about the client to allow access to different parts of the API (URLs) but deny access to other parts. I can do this using IP address and the ngx_http_access_module with something like the following:location /allowed/loc { allow 192.168.1.0/24; } location /protected/loc { allow 192.168.1.10; deny all; }What I'm looking for is some way to allow/deny based on the certificate of the client instead of on the IP address, ie something akin to this:location /authorize/by/cert { allow client-cert-serial; deny all; }Is there a way to do this in nginx, or do I need to use something else? Or am I thinking about this all wrong?Thanks for the help.
nginx authorization based on client certificates
Should you always modify the application when you want to assign a domain/path? No, you shouldn't have to modify the application at all. When you use proxy_pass in this manner, you need to rewrite the URL with a regex. Try something like this:

location ~ ^/app1/(.*)$ {
    proxy_pass http://localhost:8080/$1$is_args$args;
}

See also: https://serverfault.com/q/562756/52951
Problem: I have a web app which works fine on a root domain like mydomain.com without any modification. But if I want to serve it as mydomain.com/app1, I need to modify the source code in the backend and the static links (CSS, images, etc.) in the HTML:

Node.js: from app.get('/') to app.get('/app1')
HTML: from src="main.css" to src="app1/main.css"

Question: Should I always modify the application when I want to assign a domain/path using nginx?

Sample app: https://github.com/jrichardsz/nodejs-static-pages/blob/master/server.js

app.get('/', function(req, res) {
  // ejs render automatically looks in the views folder
  res.render('index');
});

Nginx for the root domain. This is my nginx configuration, which works for mydomain.com:

server {
  listen 80;
  server_name mydomain.com;
  location / {
    proxy_pass http://localhost:8080/;
  }
}

Attempt for mydomain.com/app1:

server {
  listen 80;
  server_name mydomain.com;
  location /app1/ {
    proxy_pass http://localhost:8080/app1/;
  }
}

And this is the fix in the Node.js app:

app.get('/app1', function(req, res) {
  // ejs render automatically looks in the views folder
  res.render('index');
});

I tried:
https://github.com/expressjs/express-namespace
http://expressjs.com/en/4x/api.html

But in both cases, I need to change my Node.js app. Thanks in advance.
How to use a custom location or path instead of root for several apps using nginx?
So, one way I got around this was to put this in my uwsgi.ini file:

touch-reload = /home/vagrant/PythonVision/app.py

Then I touch the file app.py and BANG, sorted.
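For completeness, a sketch of the full ini with that line in place; and, for development only, uWSGI's Python plugin also ships a polling reloader, py-autoreload, which watches every imported module instead of a single file:

[uwsgi]
socket = :9090
plugin = python
wsgi-file = /home/vagrant/PythonVision/app.py
processes = 3
callable = app

; reload workers whenever this file's mtime changes
touch-reload = /home/vagrant/PythonVision/app.py
; or, in development only: poll all imported modules every 2 seconds
; py-autoreload = 2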
Every time I update my Python file, I have to reboot the server to see changes. I have tried restarting Nginx and uWSGI with no luck. Flask is running in debug mode. How can I see changes without rebooting the entire server?

app.py:

from flask import Flask
import time
import cv2

app = Flask(__name__)

@app.route("/")
def main():
    return "Hello cob at " + time.time().__str__() + "\n"

if __name__ == "__main__":
    app.run(debug=True)

uwsgi.ini:

[uwsgi]
socket = :9090
plugin = python
wsgi-file = /home/vagrant/PythonVision/app.py
process = 3
callable = app

nginx.conf:

server {
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9090;
    }
}

I am testing this with these steps:

1. Change the return message from "Hello cob" to "hello bob" and save the file.
2. Refresh the page in a browser (clearing the browser cache). No change.
3. Run sudo service uwsgi restart and sudo service nginx restart.
4. Refresh the page in a browser (clearing the browser cache). No change.
Nginx, uWSGI, Flask app doesn't show changes until the server is restarted
Are you using antivirus software (e.g. Avast), and is it inspecting your HTTPS traffic? It does this by acting like a MITM, so you connect to it and it connects to the real website. If it only supports HTTP/1 (which, as far as I know, such proxies do), then that would explain this, though oddly not for Medium, unless you have an exception for it. It should be easy enough to check by looking at the HTTPS cert when visiting the site, to see if it was "issued" by your local Avast server. If it's not that, then I suggest you look at your ciphers, as HTTP/2 is picky about which ones it uses. Anything weird showing on https://www.ssllabs.com/servertest for your site? Which cipher is it using for Chrome?
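If it turns out to be the ciphers, a minimal sketch of an HTTP/2-friendly TLS block; the certificate paths are hypothetical, and the suite list leads with ECDHE/AES-GCM ciphers because RFC 7540 blacklists many older ones:

server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
}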
I have a site that runs on Nginx 1.10.0 on an Ubuntu 16.04 server (OpenSSL 1.0.2h). I want to serve this site over HTTP/2, so I configured Nginx accordingly:

listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;

And it works fine in FF 47 and Chrome 51 on my office Ubuntu 15.10 desktop, and in the same browsers on my home Ubuntu 15.10 desktop. However, on my home Windows 10 desktop and laptop, HTTP/2 works only in FF. Chrome 51, IE 11 and Edge are using HTTP/1.1 on this site. So, I'm baffled. This service says that my site supports HTTP/2 and ALPN (which is required for HTTP/2 to work in Chrome since version 51). Chrome versions and capabilities are exactly the same. HTTPS works, and the Security panel in Chrome Dev Tools shows that everything is secured. This demo in Chrome, IE and Edge displays the message "This browser is not HTTP/2 enabled.", and "Your browser supports HTTP/2!" in FF. But HTTP/2 on medium.com works just fine in all of these browsers. So, my question is: what's going on, and how do I fix it?
Why does HTTP/2 on a specific site work in FF, but not in Chrome, IE and Edge on the same Windows 10 computer?
As @Javier-Segura mentioned, with native Docker on Linux you should be able to hit the container via its IP and port, so in your case http://172.17.0.2:80; the 8080 port would be on the host IP. With Docker for Mac Beta it does not appear to work the same way for the container. It changes a bit with every release, but right now it appears you cannot reach a container by IP via conventional means:

"Unfortunately, due to limitations in OSX, we're unable to route traffic to containers, and from containers back to the host."

Your best bet is to use a different, non-conflicting port as mentioned. You can use different Compose config files for different environments, so, as in the example above, use 8081 for development and 8080 for production, if that is the desire. You would start Compose in production via something like docker-compose -f docker-compose.yml -f production.yml up -d, where production.yml has the overrides for that environment.
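A sketch of that multi-file layout, with hypothetical contents; note that for multi-value options such as ports, Compose concatenates the base and override values rather than replacing them, which is why the port mapping is kept out of the base file:

# docker-compose.yml (base: image only, no host port)
web:
  image: nginx:latest

# development.yml
web:
  ports:
    - "8081:80"

# production.yml
web:
  ports:
    - "8080:80"

# development: docker-compose -f docker-compose.yml -f development.yml up
# production:  docker-compose -f docker-compose.yml -f production.yml up -d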
I installed the Docker beta (https://beta.docker.com/) for OS X. Next, I created a folder with this docker-compose.yml file:

web:
  image: nginx:latest
  ports:
    - "8080:80"

After that, I ran docker-compose up. The container starts successfully. But the problem is accessing my container: I don't know which IP to use. I tried to find the IP with docker ps and docker inspect ...:

"Networks": {
    "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "6342cefc977f260f0ac65cab01c223985c6a3e5d68184e98f0c2ba546cc602f9",
        "EndpointID": "8bc7334eff91d159f595b7a7966a2b0659b0fe512c36ee9271b9d5a1ad39c251",
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:11:00:02"
    }
}

So I tried to use http://172.17.0.2:8080/, but I get an ERR_CONNECTION_TIMED_OUT error. But if I use http://localhost:8080/, I can access my container! (My localhost is already used by my native config on my Mac, so if I want to use localhost I must stop my native Apache.) Why doesn't it work with the IP?
Docker Beta on Mac: Cannot use IP to access nginx container
So I needed to add this to the nginx config:

gzip_http_version 1.0;

The default, 1.1, was the problem: ApacheBench makes HTTP/1.0 requests, so with the default setting nginx never compresses responses to ab.
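For reference, the question's gzip block with that single line added; everything else is unchanged:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 4;
# ab speaks HTTP/1.0, and the default gzip_http_version of 1.1 excludes it
gzip_http_version 1.0;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;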
This is my nginx gzip config:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

I've verified that it works: all the gzip-testing websites confirm my site is serving gzip. My page is a simple HTML file whose content type is Content-Type: text/html; charset=UTF-8. My page content without gzip is 300 KB; with gzip it should be 20 KB. I tried all the option orderings, like:

ab -r -n 200 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://example.com
ab -r -n 200 -c 10 -k -H "Accept-Encoding: gzip" http://example.com
ab -H "Accept-Encoding: gzip" -n 200 -c 10 -k http://example.com
ab -H "Accept-Encoding: gzip, deflate" -n 200 -c 10 -k http://example.com

etc. No matter what, in the summary I always get:

Document Length: 183675 bytes

which means it's not getting the gzip version, which should be much smaller. Any idea how to get it to work? I am trying to stress-test my website, but I am always limited by my Network Out speed, which is only 250 Mbps, while CPU and RAM are at 10% max when I reach that limit.
Can't get ab test to work with gzip
Although not explicitly stated in the documentation, the nginx types directive appears to behave similarly to other directives with regard to inheritance: the directive is inherited from the previous level if and only if there are no types directives defined on the current level. The types directive may appear at the http, server or location block level. To extend the MIME types (rather than redefining them), you can add a types block with your additions in any file containing an http, server or location context. But if you add a types block at the server or location level, you should also add another include mime.types statement at the same level, so as not to lose the system defaults. In your sites-enabled files:

# (1)
server {
    # (2)
    location ... {
        # (3)
    }
}

If your sites-enabled file includes the server { ... } block definition, you might add a types block in position (1), which would augment the MIME types already loaded by the main nginx.conf file. However, if you added a types block in position (2) or (3), you would also need to add an include statement to pull in the system types again:

server {
    include mime.types;
    types {
        ...
    }
    ...
}

The types directive is documented here.
I'm looking to extend the MIME types in my Nginx configuration. I've learned that I could, in principle, either edit the mime.types file, or, after including mime.types in the http block of the config, follow include mime.types with a types { ... } block to append more types, a la this answer. Since I'm setting up Nginx with Chef, I have a templated configuration in the sites-enabled folder that's included into the Nginx config proper. I would prefer not to have to template the nginx config or the mime.types file, so I'm hoping it's possible to do it in the sites-enabled config file. In a similar spirit to the linked question above, could I include this in my sites-enabled file to get the same effect?

http {
    types {
        # here are additional types
    }
}

My working theory is that, if blocks work as described in the link above, adding such a block would not overwrite the http block in the Nginx config, but would extend it, as if I had added the types directly to the http block in nginx.conf. Is this a valid strategy? Or am I overlooking something easier?
extending mime types in a Chef deployed Nginx server
I've found the solution. The /var/lib/php/session/ folder was not writable by nginx/php-fpm. I changed the permissions to 777 and now it works.
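A note of caution: 777 works, but it lets every local user write to the session store. A tighter sketch, assuming php-fpm runs as the nginx user (on CentOS 7 it is often apache or nginx; check the user/group lines in /etc/php-fpm.d/www.conf first):

# see which user php-fpm actually runs as
grep -E '^(user|group)' /etc/php-fpm.d/www.conf

# give that user (assumed here to be nginx) the session directory instead
chown -R root:nginx /var/lib/php/session
chmod 770 /var/lib/php/session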
I am using Nginx with CentOS 7, which is working fine. After that I installed phpMyAdmin, which was successfully installed as well; however, when I access it in the browser it shows a blank white page with no HTML source code. What am I doing wrong?
phpMyAdmin on NGINX with CentOS 7 shows blank white page?
Reading the file auto/options gives me the impression that this module is enabled by default. See the definition here:

HTTP_MAP=YES

and the definition of a ./configure option here:

--without-http_map_module
I'm having problems adding the module ngx_http_map_module to my nginx ./configure. I tried the --with-ngx_http_map_module param, but it doesn't work. I'm kinda new to this, so I could be doing something wrong. After I run that configuration I get this error:

./configure: error: invalid option "--with-http_map_module"

What am I doing wrong?
How to add ngx_http_map_module into nginx
1. Put your rewrite rules in an include file.
2. Create an Nginx configuration just for testing that pulls in the rewrite include file.
3. Using the -c and possibly the -g flags, run nginx as an unprivileged user. Since you can run it on an alternate port, this won't conflict with a web server running on port 80.
4. Have your automated testing run tests against this "test server" (a sketch follows below).
5. Shut down the nginx test server.
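A minimal sketch of such a throwaway test setup; every path and URL here is hypothetical:

# /tmp/nginx-test/test.conf
error_log /tmp/nginx-test/error.log;
pid       /tmp/nginx-test/nginx.pid;
events {}
http {
    access_log /tmp/nginx-test/access.log;
    server {
        listen 127.0.0.1:8081;               # unprivileged port, no root needed
        include /path/to/rewrite-rules.conf; # the rules under test
    }
}

# start the test server, assert on a redirect, then stop it
# (add -p /tmp/nginx-test if the compiled-in temp/log paths aren't writable)
nginx -c /tmp/nginx-test/test.conf
curl -sI http://127.0.0.1:8081/old-url | grep -i '^Location:'
nginx -c /tmp/nginx-test/test.conf -s stop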
I want to get as close as I can to "unit testing" Nginx rewrite rules. How close can one get to doing this? Is there a way to run the request rewriting module without running the entire web server? Can I embed the Nginx server in a process running Ruby or Python or, if I have to, Java? What tricks can you think of?
Is it possible to run (and check) Nginx rewrite rules without running the whole web server?
Removing the X-Forwarded-Proto header from nginx.conf or adding X-Forwarded-Port (recommended) resolved the issue.
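A minimal sketch of the recommended variant, assuming nginx terminates SSL and the app listens on 8080 as described below:

server {
    listen 443 ssl;
    ...
    location / {
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # pass the original port too, so the app stops guessing ":80"
        proxy_set_header X-Forwarded-Port  $server_port;
        proxy_pass http://127.0.0.1:8080;
    }
}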
Nginx terminates SSL for our app; communication between Nginx and the app servers happens on port 8080; our app had no grails.serverURL set in config. We had a regular redirect:

redirect uri: '/login'

But when accessed via HTTPS, the redirect returned:

Location: https://domain.com:80/login

As a result, browsers attempt to open SSL connections over port 80 and fail.
Redirect location includes ":80" when X-Forwarded-Proto is https