After searching for a bit I came across a blog post about performing this in a very simple manner. Unfortunately I found the provided YAML did not quite work correctly, as the oauth2_proxy was never being hit because nginx intercepted all requests (I am not sure if mine was failing because I wanted the oauth-proxy URL to be `example.com/oauth2` rather than `oauth2.example.com`). To fix this I added the oauth2-proxy path back to the Ingress for the proxy:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /
      - backend:
          serviceName: oauth2-proxy
          servicePort: 4180
        path: /oauth2
```

and made sure that the service was also still exposed:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: http-proxy
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
```

Then to protect services behind the OAuth proxy I just need to place the following in the Ingress annotations:

```
nginx.ingress.kubernetes.io/auth-url: "https://example.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://example.com/oauth2/start?rd=/redirect/$http_host$request_uri"
```
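A quick way to confirm the auth endpoint is actually reachable through the Ingress is to hit it from the shell. This is a minimal sketch, assuming the `example.com` host used above (an anonymous request to oauth2_proxy's auth endpoint should come back 401):

```bash
# Unauthenticated: oauth2_proxy should answer 401 through the Ingress
curl -sk -o /dev/null -w '%{http_code}\n' https://example.com/oauth2/auth

# The sign-in redirect should carry the rd= parameter back to the original host
curl -sk -o /dev/null -w '%{redirect_url}\n' \
  "https://example.com/oauth2/start?rd=/redirect/kube.example.com/"
```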
I've been setting up a Kubernetes cluster and want to protect the dashboard (running at `kube.example.com`) behind the bitly/oauth2_proxy (running at `example.com/oauth2` on image `a5huynh/oauth2_proxy:latest`), as I want to re-use the OAuth proxy for other services I will be running. Authentication is working perfectly, but after a user logs in, i.e. the callback returns, they are sent to `example.com`, where instead they should be sent to the original host `kube.example.com` that initiated the flow. How can I do this? (I am using the nginx-ingress-controller.)

Annotations on the OAuth2 Proxy:

```
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```

Annotations on the Dashboard:

```
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://example.com/oauth2/start"
nginx.ingress.kubernetes.io/auth-url: "https://example.com/oauth2/auth"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
```

I expect to be redirected to the original host `kube.example.com` after the OAuth flow is complete, but am being sent back to the OAuth2 host `example.com`.
How to get oauth2_proxy running in kubernetes under one domain to redirect back to original domain that required authentication?
This is how I solved it, configuring the Jenkins image's context path without the need to use the ingress rewrite annotations.

Deployment:

```yaml
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
      volumes:
      - name: jenkins-storage
        persistentVolumeClaim:
          claimName: jenkins
      containers:
      - image: jenkins/jenkins:lts
        name: jenkins
        ports:
        - containerPort: 8080
          name: "http-server"
        - containerPort: 50000
          name: "jnlp"
        resources: {}
        env:
        - name: JENKINS_OPTS
          value: --prefix=/jenkins
        volumeMounts:
        - mountPath: "/var/jenkins_home"
          name: jenkins-storage
status: {}
```

Ingress:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prfl-apps-devops-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - http:
      paths:
      - path: /jenkins
        backend:
          serviceName: jenkins
          servicePort: 8080
```
I have deployed Jenkins on Kubernetes and am trying to configure the nginx ingress for it. Assume I want it to be available at `https://myip/jenkins`.

This is my initial ingress configuration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - http:
      paths:
      - path: /jenkins
        backend:
          serviceName: jenkins
          servicePort: 8080
```

With this, when I access `https://myip/jenkins` I am redirected to `http://myip/login?from=%2F`. When accessing `https://myip/jenkins/login?from=%2F` it stays on that page, but none of the static resources are found, since they are looked for at `https://myip/static...`
nginx ingress Jenkins path rewrite configuration not working
I encountered a similar issue when using nginx with certbot. I am hosting on Ubuntu 16.04 LTS, whose certbot is quite outdated (0.10.2). As described here, this version of certbot suffers from an issue when issuing a certificate: the standard commands don't work, and specific commands must be used instead.

Certbot comes with an auto-updater that renews certificates automatically. This updater fails to apply the workaround and also fails to start the nginx service after it runs. What I did was disable this service. There is a file at `/etc/systemd/system/timers.target.wants/certbot.timer`; edit this file and comment out the lines that enable the timer:

```
[Unit]
Description=Run certbot twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
Persistent=true

#[Install]
#WantedBy=timers.target
```

Now you will have to renew the certificates manually.
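With the timer disabled, renewal becomes a manual chore. As the answer notes, on 0.10.2 the standard commands fail and the specific workaround commands from the linked post are needed; on a current certbot, the manual cycle would look roughly like this sketch:

```bash
# Renew whatever is due, then reload nginx to pick up the new certificates
sudo certbot renew
sudo systemctl reload nginx

# Check expiry dates so you know when the next manual renewal is due
sudo certbot certificates
```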
I have an issue I am not sure how to troubleshoot. My setup:

- Amazon EC2 (t2.medium) running Ubuntu Linux 16.04 (fully up to date)
- NGINX 1.10.3
- 8 websites running Node JS (Express) that are bound to ports 3000-3007 through pm2, with NGINX as the reverse proxy (`proxy_pass` in virtual host files)
- PHP 7.1 (to power a WordPress site)
- The Node sites use the WordPress REST API (from the WordPress site) to serve content

The issue: every few days it seems like NGINX stops working. I can tell because I am unable to access the WordPress site until I run `sudo service nginx restart`. It does not seem to be a PHP issue, since if I restart PHP the WordPress site DOES NOT come back online until the NGINX restart. The server logs in `/var/log/nginx` don't seem to give any insight, and I am unsure how to troubleshoot the issue.

Any ideas on where to start? Any monitoring I can set up (apart from just a basic "site down" check) that might provide insight? Maybe there is some setting I can toggle in NGINX that handles overuse (if that is the issue)?
NGINX randomly stops working, requires manual restart
Kestrel is a very simple web server and doesn't offer the features of something like IIS, Apache, or Nginx. If you want to do things like SSL termination, load balancing, rate limiting, etc., adding an extra layer in front of it can come in handy.

Another benefit is that you can host multiple applications on port 80: Nginx will handle the requests on 80 and route them to the correct application running on the server.

See this for more info: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel
The official MS documentation says that if I want to host an ASP.NET Core app on Linux I should put either an Apache or nginx reverse proxy in front of it. However, I can't find any reasons why I should do that. Why should I do that? Why can't it just run on Kestrel? Why is the reverse proxy needed?
Why should I use a proxy server with Kestrel?
Try changing the nginx configuration to:

```nginx
server {
    listen 80;
    allow all;
    location / {
        proxy_pass http://portainer:9000/;
        resolver 127.0.0.11;
    }
}
```

`portainer` is the container name defined in your `docker-compose.yml` file, and `127.0.0.11` is the embedded Docker DNS server.

Also, as an alternative, you can use `jwilder/nginx-proxy` instead of your reverse-proxy.
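To confirm that the embedded Docker DNS actually resolves the service name from inside the proxy container, you can exec into it. A small sketch, assuming the running container's name contains `reverse-proxy` (in a swarm stack the generated name differs, so adjust the filter):

```bash
# Resolve the service name via Docker's embedded DNS (127.0.0.11)
docker exec -it $(docker ps -qf name=reverse-proxy) nslookup portainer

# Then check that portainer answers on its internal port
docker exec -it $(docker ps -qf name=reverse-proxy) wget -qO- http://portainer:9000/ | head
```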
I have a problem with a reverse proxy to my Docker services. I have a local machine with IP 10.0.0.163 with a Docker stack running on it with nginx and portainer (for this question only they matter).

docker-compose.yml:

```yaml
...
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/mnt/StorageDrive/Portainer:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      ...
      - proxy

  reverse-proxy:
    image: reverseproxy:latest
    ports:
      - "80:80"
    networks:
      - proxy

networks:
  ...
  proxy:
```

nginx.conf:

```nginx
worker_processes 1; ## Default: 1

events {
    worker_connections 1024;
}

http {
    sendfile on;

    server {
        listen 80;
        allow all;
        location / {
            proxy_pass http://10.0.0.163:9000;
        }
    }
}
```

Dockerfile for the reverseproxy image:

```dockerfile
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
```

When trying to access 10.0.0.163 I get error 502, and the logs from reverseproxy show this:

```
2017/10/09 07:43:02 [error] 5#5: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.255.0.2, server: , request: "GET / HTTP/1.1", upstream: "http://10.0.0.163:9000/", host: "10.0.0.163"
10.255.0.2 - - [09/Oct/2017:07:43:02 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"
```

When typing 10.0.0.163:9000/ into the browser, everything works fine. What is the problem? And how can I make it work with this URL mapping: `10.0.0.163/portainer/... -> 10.0.0.163:9000/...`
Nginx reverse proxy for Docker containers
You have to have two `server` directives to accomplish this task (fill in the appropriate `server_name` in each block):

```nginx
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}

upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}

server {
    listen 80;
    server_name ...;

    location / {
        proxy_pass http://test1;
    }
}

server {
    listen 80;
    server_name ...;

    location / {
        proxy_pass http://test2;
    }
}
```
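Once each `server` block has its own `server_name`, you can exercise the routing without DNS by forcing the Host header. A quick sketch, using hypothetical names `one.example.com` and `two.example.com` (substitute your own):

```bash
# Both requests hit port 80; nginx picks the server block by Host header
curl -si -H 'Host: one.example.com' http://<nginx-ip>/ | head -1
curl -si -H 'Host: two.example.com' http://<nginx-ip>/ | head -1
```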
I have my nginx configuration file under /etc/nginx/sites-available/ with two upstreams, say:

```nginx
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}

upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}

server {
    location / {
        proxy_pass http://test1;
    }
    location / {
        proxy_pass http://test2;
    }
}
```

Sending a curl request to `:80` works, but I want to use `:80` for `test1` and `:80` for `test2`. Is it possible to define this in nginx?
Setting up nginx with multiple IPs
I found a more or less suitable solution. It's a bit hackish, but it works.

The key was to set the index document of my S3 bucket to a non-existing filename. This causes requests to / on the S3 bucket endpoint to result in 403.

Since the nginx proxy maps all incoming requests to / on the S3 bucket endpoint, the result is always 403, which the nginx proxy can intercept. From there, the `error_page` directive tells it to respond by requesting a specific document (in this case `error.json`) from the S3 bucket endpoint and use 503 as the response status code.

```nginx
location ~* ^/. {
    proxy_intercept_errors on;
    error_page 403 =503 /error.json;
}
```

This solution involves two requests being sent to the S3 bucket endpoint (`/`, `/error.json`), but at least caching seems to be enabled for both requests using the configuration in the more complete snippet above.
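To sanity-check the behavior end to end, you can watch the status code and the cache header the proxy returns. A small sketch, assuming the proxy listens on localhost:80 as in the question's snippet:

```bash
# Any path should now come back as 503 with the error.json body
curl -si http://localhost/some/path | head -5

# A second request should be served from the proxy cache (X-Cached: HIT)
curl -si http://localhost/some/path | grep -i '^x-cached'
```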
I'm having a hard time configuring nginx to act as a proxy for a public S3 endpoint. My use case necessitates altering the status code of the S3 response, while preserving the response payload.

The possible status codes returned by S3 include 200 and 403. For my use case, I need to map those status codes to 503.

I have tried the following, which does not work:

```nginx
location ~* ^/.* {
    [...]
    proxy_intercept_errors on;
    error_page 200 =503 $upstream_http_location
}
```

Nginx outputs the following error:

```
nginx: [emerg] value "200" must be between 300 and 599 in /etc/nginx/nginx.conf:xx
```

Here's a more complete snippet:

```nginx
server {
    listen 80;

    location ~* ^/.* {
        proxy_http_version 1.1;
        proxy_method GET;
        proxy_pass http://my-s3-bucket-endpoint;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header Connection "";
        proxy_set_header Host my-s3-bucket-endpoint;
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers "Set-Cookie";
        proxy_cache S3_CACHE;
        proxy_cache_valid 200 403 503 1h;
        proxy_cache_bypass $http_cache_purge;
        add_header X-Cached $upstream_cache_status;
        proxy_intercept_errors on;
        error_page 200 =503 $upstream_http_location;
    }
}
```

Is it possible to achieve what I need with nginx?
How to change the status code of a proxied server response in nginx?
As specified in nginx's documentation for `proxy_pass`:

> A server name, its port and the passed URI can also be specified using variables:
>
> `proxy_pass http://$host$uri;`
>
> […] In this case, the server name is searched among the described server groups, and, if not found, is determined using a resolver.
When the nginx `proxy_pass` target is a dynamic value, built by substituting the hostname part of the URL, nginx fails to proxy the request with the error `no resolver defined to resolve service`, where service = $1. Instead of trying to resolve service.abcd.local, it seems to be trying to resolve just `service`. Is there a solution to this?

```nginx
location ~ ^/(.*)/(.*)$ {
    proxy_pass http://$1.abcd.local/$1/$2;
}
```
nginx proxy_pass dynamic hostname part
If you want to run uWSGI as a particular user, there are only 2 options:

- run the uWSGI server directly as this user
- run uWSGI as root and add the `uid` and `gid` options.
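A quick sketch of both options from the shell; the second relies on uWSGI's `--uid`/`--gid` command-line flags, which mirror the `uid`/`gid` ini keys already in the question's uwsgi.ini:

```bash
# Option 1: start uWSGI directly as the target user
sudo -u djangouser uwsgi --ini uwsgi.ini

# Option 2: start as root and let uWSGI drop privileges itself
sudo uwsgi --ini uwsgi.ini --uid djangouser --gid djangouser
```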
I have Django set up with NGINX + uWSGI. I'm able to get it running fine under my currently logged-in user (with help from a question I asked a few days back), but now I want to run `uwsgi --ini uwsgi.ini` as a limited-access user.

Here is what I've done so far:

1. Created a user `djangouser` without login access and without a home directory.
2. Added user `nginx` into group `djangouser`.
3. Placed my Django files into the `/mnt/django` directory and changed file permissions of `django` to `drwxrwx--- djangouser djangouser` (recursive).
4. Changed the conf files to match the file locations.

uwsgi.ini file:

```ini
[uwsgi]
chdir=/mnt/django/project/awssite
module=awssite.wsgi
home=/mnt/django/project
master=true
processes=2
uid=djangouser
gid=djangouser
socket=/mnt/django/djangosocket/awssite.socket
chmod-socket
vacuum=true
```

When I try to run `uwsgi --ini uwsgi.ini`, this is the error I get:

```
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.12 (64bit) on [Thu Feb 18 00:18:25 2016] ***
compiled with version: 4.8.3 20140911 (Red Hat 4.8.3-9) on 01 February 2016 04:17:11
os: Linux-4.1.13-19.31.amzn1.x86_64 #1 SMP Wed Jan 20 00:25:47 UTC 2016
nodename: ip-10-200-1-89
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/ec2-user
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /mnt/django/project/awssite
chdir(): Permission denied [core/uwsgi.c line 2586]
chdir(): Permission denied [core/uwsgi.c line 1608]
```

Note: when I added my logged-in user to the `djangouser` group, `uwsgi --ini uwsgi.ini` ran fine and I was able to load the Django pages.

I'm not sure where else to add permissions to allow this to work. Adding `sudo chown-socket=djangouser:djangouser` in uwsgi.ini didn't work either.

I appreciate the help :)
How do I run uWSGI as a limited-access user?
**Simplify**

Create a "Hello world" index.html and copy it into your project's root directory*.

**Divide and conquer**

My suggestion to you is to strip your nginx.conf down to a very simple form, like the one below.

```nginx
server {
    listen 80 default;
    server_name yourdomainname.com;
    root /home/your_app_name/public;

    try_files $uri/index.html $uri;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
```

**Organization**

*I would recommend you not have your index.html in an nginx directory. Use as your root directory something that is specific to your project, like in the example above. Place your "Hello World" index page there.

**Restart**

Now reload NGINX and see if it is loading your simple "Hello World" index.html. If so, start to add complexity, one component at a time.

**File permissions**

The #1 gotcha on Unix-based OSes is file permissions. It's important to look at your NGINX error logs to see if you're getting pinged by user/group blocks on files and dirs. If NGINX does not have permission to read index.html, game over.

DigitalOcean calls their tools "one-click installs" and this is misleading. I have several DO VPSs set up, so I know that their installs are not by any means complete installs, as you'd expect. Going back to installing components one at a time and confirming each is working is the best method.
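For the reload and permission checks, something like this sketch works (substitute your actual root path):

```bash
# Validate the config, then reload
sudo nginx -t && sudo service nginx reload

# Walk the path to index.html and show permissions at every level;
# every directory needs +x for the nginx user, the file needs +r
namei -l /home/your_app_name/public/index.html

# Watch the error log while you retry the request
sudo tail -f /var/log/nginx/error.log
```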
I recently bought a DigitalOcean account and am attempting to set up my web site. However, whenever I enter the IP address of my site, I get this page:

> Welcome to nginx!
>
> If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
>
> For online documentation and support please refer to nginx.org. Commercial support is available at nginx.com.
>
> Thank you for using nginx.

I have searched for answers, but have not found anything that works for me. I am running a LEMP stack on Ubuntu 14.04 and used the one-click installation. I am planning to put my pages/files into the "usr/share/nginx/html" folder, which I have declared as the root.

Here is the "etc/nginx/sites-available/default.conf" file to hopefully accommodate this:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost unaviamedia.ca;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    .........
}
```

However, I am still only getting the Nginx welcome page when I access my site by IP, and it is getting annoying. How can I show the home page?

Edit: updated the code to match my latest attempt. Also, for those who are wondering, I have restarted nginx several times.

Let me know if I need to add anything else. Thanks!
Nginx page displaying instead of home page (Digital Ocean - LEMP)
You should tell Nginx to pass the host to Gunicorn like this:

```nginx
proxy_set_header Host $host;
```

Additionally, I would pass these values (example) also, so you have access to the IP of the request:

```nginx
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```

Please also check your Nginx logs if you haven't done so. I hope this helps.

EDIT: Try also to set the server name like:

```nginx
server_name your_domain.com www.your_domain.com;
```

Last but not least, try to set your environment like this (the solution in this case):

```python
os.environ['DJANGO_SETTINGS_MODULE'] = "app.settings_prod"
```
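When debugging 400s like this, it can help to take nginx out of the loop and hit Gunicorn directly with a controlled Host header. A small sketch (the domain is a placeholder):

```bash
# Bypass nginx: if this also returns 400, the problem is Django's
# settings module / ALLOWED_HOSTS, not the proxy config
curl -si -H 'Host: your_domain.com' http://127.0.0.1:8000/ | head -1

# Through nginx, for comparison
curl -si http://your_domain.com/ | head -1
```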
I'm trying to run my server with Django, nginx and gunicorn. On the development server, everything went fine. But on the production server, gunicorn always returns a Bad Request (400).

I'm aware that I need to set my `ALLOWED_HOSTS` variable, and I did. I tried the correct domain, an asterisk, and even setting DEBUG to True. But still, it's always Bad Request (400).

Here is my nginx config:

```nginx
server {
    listen 80;

    location /static {
        alias /home/username/sites/sub.domain.example.com/static;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:8000;
    }
}
```

My `wsgi-prod.py` file:

```python
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings_prod")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```

The `settings_prod.py` file (shortened):

```python
DEBUG = False
ALLOWED_HOSTS = ["*"]
```

I start gunicorn the following way (with virtualenv):

```bash
gunicorn --bind 127.0.0.1:8000 app.wsgi_prod:application
```

When I start the server with `manage.py runserver --settings=app.settings_prod`, the site is accessible. Gunicorn's error log shows nothing, and the access log only shows the 400. The static content does work.
Django + Gunicorn + Nginx: Bad Request (400) in Debug=True
Okay, resolved. The issue was the

```nginx
location @handler {
    rewrite / /index.php
}
```

Removed it and all is well again.
I cannot figure out why this error is happening:

```
rewrite or internal redirection cycle while internally redirecting to "/index.html"
```

I found a similar post and tried various recommendations based on what I read, but to no avail. Here is my nginx config. Any help would be appreciated!

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html/public;
    index index.php;

    # Make site accessible from http://localhost/
    server_name ourdomain.com;

    location @handler {
        rewrite / /index.php
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ .php$ {
        fastcgi_split_path_info ^(.+.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        # add the following line in for added security.
        try_files $uri =403;
    }
}
```
nginx - rewrite or internal redirection cycle while internally redirecting to "/index.html" [closed]
This worked:

```nginx
location /blah {
    rewrite ^/blah/(.*) /$1 break;
    proxy_pass http://$server_addr:8080;
}
```
I'm trying to use proxy_pass in nginx to forward requests to another port on localhost like this:

```nginx
location /foo {
    rewrite ^/foo/(.*) /$1 break;
    proxy_pass http://127.0.0.1:8080/;
}

location /bar {
    rewrite ^/bar/(.*) /$1 break;
    proxy_pass http://localhost:8080/;
}

location /blah {
    rewrite ^/blah/(.*) /$1 break;
    proxy_pass http://192.168.77.56:8080/;
}
```

Only the last one works. The first two give me a page-unavailable error. I know the endpoint is working, as I can go directly to localhost:8080 and see the output I expected.

Any idea what I'm doing wrong?

[Edit]: Further enlightenment... It seems the rewrite line has something to do with it. Using it like I have here seems to work on non-localhost IPs, i.e. it removes /blah from the path and keeps the rest as it sends the request to its final destination. If I remove the rewrite line I can proxy to localhost (of course losing my intended other stuff on the URL).
nginx proxy_pass to localhost
Apparently there is some way to do this. Nginx has a gunzip module that decompresses responses:

> The ngx_http_gunzip_module module is a filter that decompresses responses with "Content-Encoding: gzip" for clients that do not support the "gzip" encoding method. The module will be useful when it is desirable to store data compressed, to save space and reduce I/O costs.
>
> This module is not built by default; it should be enabled with the --with-http_gunzip_module configuration parameter.

Source: http://nginx.org/en/docs/http/ngx_http_gunzip_module.html

Then you can use it like:

```nginx
gunzip on;
```

Hope that works for you. Also see this SO question: Is there sort of unzip modules in nginx?
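Since the module is optional, it's worth checking whether your nginx binary was built with it before relying on `gunzip on;`:

```bash
# Prints the configure arguments; look for --with-http_gunzip_module
nginx -V 2>&1 | tr ' ' '\n' | grep gunzip
```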
Does nginx support this? Would you please show me some configuration for it?

```
[Client]               [Nginx Reverse Proxy]                 [BackEnd]
   |     [Raw Post]         |    [gzip encoded request]          |
   |----------------------->|----------------------------------->|
   |                        |                                    |
   |     [Raw Response]     |    [gzip encoded response]         |
   |<-----------------------|<-----------------------------------|
   |                        |                                    |
```
Does nginx support compressing the request to the upstream?
There is a problem in your Procfile.

The nginx command can't use sudo inside foreman, because it will always ask for a password and then it will fail. That's why nginx is not starting and the logs are empty.

If you really need to use sudo inside a Procfile, you could use something like this:

```
sudo_app: echo "sudo_password" | sudo -S app_command
nginx: echo "sudo_password" | sudo -S service nginx start
```

which I really don't recommend. The other option is to call `sudo foreman start`.

For more information check out this issue on GitHub; it is precisely what you want to solve.

Keep me posted if it works for you.
I am currently running Foreman on staging (Ubuntu) and once I get it working will switch to using upstart.

My Procfile.staging looks like this:

```
nginx: sudo service nginx start
unicorn: bundle exec unicorn -c ./config/unicorn.rb
redis: bundle exec redis-server
sidekiq: bundle exec sidekiq -v -C ./config/sidekiq.yml
```

I can successfully start nginx using:

```bash
$ sudo service nginx start
```

However, when I run `$ foreman start`, whilst the other three processes start successfully, nginx does not:

```
11:15:46 nginx.1   | started with pid 15966
11:15:46 unicorn.1 | started with pid 15968
11:15:46 redis.1   | started with pid 15971
11:15:46 sidekiq.1 | started with pid 15974
11:15:46 nginx.1   | Starting nginx: nginx.
11:15:46 nginx.1   | exited with code 0
11:15:46 system    | sending SIGTERM to all processes
SIGTERM received
11:15:46 unicorn.1 | terminated by SIGTERM
11:15:46 redis.1   | terminated by SIGTERM
11:15:46 sidekiq.1 | terminated by SIGTERM
```

So why isn't nginx starting when started by Foreman?
Foreman Cannot Start Nginx, But I Can Start it Manually. Why?
As of GitLab 5.3 you can configure it to run at a sub-URI out of the box using the official installation document.

Uncomment line 8 of config/puma.rb:

```ruby
ENV['RAILS_RELATIVE_URL_ROOT'] = "/"
```

Similarly for line 23 in config/gitlab.yml:

```yaml
relative_url_root: /
```

I didn't have to modify my nginx config at all for it to work.
The nginx configuration for GitLab is:

```nginx
# GITLAB
# Maintainer: @randx
# App Version: 3.0

upstream gitlab {
    server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen YOUR_SERVER_IP:80;      # e.g., listen 192.168.1.1:80;
    server_name YOUR_SERVER_FQDN;  # e.g., server_name source.example.com;
    root /home/gitlab/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;.
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file, which is not found in the root folder is requested,
    # then the proxy pass the request to the upsteam (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
```

What should I change to serve GitLab at a sub-URI, www.mysuperserver.com/gitlab? I've tried many different things, but nothing worked. Thanks.
How to configure nginx to serve gitlabhq on a SubURI
Trivial check:

```
$ curl -si 'http://localhost/rdr/http://www.google.com' | head -8
HTTP/1.1 301 Moved Permanently
Server: nginx/1.2.0
Date: Sun, 05 Aug 2012 09:33:14 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: http:/www.google.com
```

As you can see, there is only one slash after the scheme in `Location`.

After adding the following directive to `server`:

```nginx
merge_slashes off;
```

we'll get the correct reply:

```
$ curl -si 'http://localhost/rdr/http://www.google.com' | head -8
HTTP/1.1 301 Moved Permanently
Server: nginx/1.2.0
Date: Sun, 05 Aug 2012 09:36:56 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: http://www.google.com
```

It becomes clear from the comments that you may want to pass the hostname without the scheme to your redirecting service. To solve this problem you need to define two locations to process both cases separately:

```nginx
server {
    listen 80;
    server_name localhost;

    merge_slashes off;

    location /rdr {
        location /rdr/http:// {
            rewrite ^/rdr/(.*)$ $1 permanent;
        }
        rewrite ^/rdr/(.*)$ http://$1 permanent;
    }
}
```

Here I've defined `/rdr/http://` as a sub-location of `/rdr` just to keep the redirector service in one block -- it's perfectly valid to create both locations at `server` level.
What I'm trying to do is route all requests to `/rdr/extern_url` to redirect to `extern_url` through my web server instead of doing it through PHP.

```nginx
location /rdr {
    rewrite ^/rdr/(.*)$ $1 permanent;
}
```

What's wrong here is that if I access `http://localhost/rdr/http://google.com` my browser is telling me:

```
Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.
```

How do I redirect properly?
Nginx redirect to an external URL
It is easy - the only thing you have to do is to switch your mind: neither capybara nor cucumber is tied to the local environment. You can test an application that lives on the internet and they will not care about it - you can even test google.com if you want.

For your particular problem you'll have to set:

```ruby
Capybara.run_server = false
Capybara.server_port = 8000 # or whatever port your instance of nginx is configured to serve
Capybara.app_host = 'http://www.google.com' # if your instance is running on a remote machine; else just drop it and capybara will use localhost
```

You can easily control restarting of your application using cucumber hooks; you can configure it to restart before each test or before the test suite (see the cucumber wiki). Within a hook you'll have to issue a `FileUtils.touch 'tmp/restart.txt'` command. The same with the database - you can manually set up a hook to truncate it whenever needed (see the `database_cleaner` gem).
Is it possible to run capybara with nginx and passenger instead of webrick? Capybara is installed with cucumber in a Rails app.
Running capybara with nginx
Setting `keepalive_requests 0;` convinced nginx to send `Connection: close`.
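You can verify the header on the wire with curl; a quick sketch against the HTTPS listener from the question:

```bash
# -I sends a HEAD request; look for "Connection: close" in the reply
curl -skI https://localhost/find | grep -i '^connection'
```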
nginx seems to be replacing the `Connection: close` header that upstream is sending, and replacing it with a `Connection: keep-alive` header. Is there any way I can override it?

```nginx
http {
    upstream main {
        server 127.0.0.1:8000;
    }

    server {
        listen 443;
        ssl on;
        ssl_certificate server.crt;
        ssl_certificate_key server.key;

        location / {
            proxy_pass http://main;
        }

        location /find {
            proxy_pass http://main;
            proxy_buffering off;
        }
    }
}
```
nginx and proxy_pass - send Connection: close headers
Have you tried `proxy_buffering off` in nginx? I'm not sure it will close the connection, but at least the response will be transmitted as-is to the client. :-)
I've got Apache as a back-end server, which runs PHP scripts, and nginx as a reverse-proxy server which deals with static content. A PHP script gives me the ID of some process and then performs this process (which is pretty long). I need to pass only the ID of that process to the browser.

```php
// ...
ob_start();
echo json_encode($arResult); // only this data should be passed to the browser
$contentLength = ob_get_length();
header('Connection: close');
header('Content-Length: ' . $contentLength);
ob_end_flush();
ob_flush();
flush();
// then a long process is performed
```

(I check the status of the process with another ajax script.)

This works fine under Apache alone. But I have problems when Apache is behind nginx. In this case I get the response only when the process is completely finished.

nginx settings:

```nginx
server {
    #...
    proxy_set_header Connection close;
    proxy_pass_header Content-Length;
    #...
}
```

But I still get Connection: keep-alive in Firebug. How can I get nginx to immediately give the response from Apache?

Hope the question is clear. Thanks.
Nginx as a reverse-proxy while long-polling
You can add a command to remove the default nginx index page just before copying:

```dockerfile
COPY ./conf/default.conf /etc/nginx/conf.d/default.conf
RUN rm -rf /usr/share/nginx/html/*    # <--- add this
COPY --from=builder /app/dist/ /usr/share/nginx/html
```

and change your nginx config to:

```nginx
try_files $uri $uri/ /index.html =404;
```
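After rebuilding, it's worth confirming what actually landed in the image's web root before debugging nginx itself. A sketch, assuming the image is tagged `mywallet-fe` (a hypothetical tag):

```bash
docker build -t mywallet-fe .

# List the files nginx will serve; index.html should be at the top level,
# not inside a MyWalletFe/ subfolder
docker run --rm mywallet-fe ls -R /usr/share/nginx/html
```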
I developed an Angular 7 application and now I'm going to deploy it on a production nginx server. I'm pretty new to frontend deployment on an nginx server, so probably I'm missing something easy to find. I decided to use Docker to manage the deployment.

The application name is MyWalletFe.

nginx server configuration file (path: ./conf/default.conf):

```nginx
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
```

Dockerfile:

```dockerfile
# build
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# deploy
FROM nginx
EXPOSE 80
COPY ./conf/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist/ /usr/share/nginx/html
```

Here I copy my nginx configuration file to the default location, then copy the output of `npm run build` from the previous stage into /usr/share/nginx/html.

Output: I always get the default nginx webpage. The reason this happens is that the folder /app/dist contains a subfolder with the name of the app, MyWalletFe, which contains all the files of the Angular application (the image is taken after running `npm run build` locally to show the folder structure), while on the production server the folder /usr/share/nginx/html still holds the default index.html, and that page is served:

```
root@3b7da83d2bca:/usr/share/nginx/html# ls -l
total 12
-rw-r--r-- 1 root root  494 Mar  3 14:32 50x.html
drwxr-xr-x 2 root root 4096 Apr 13 20:32 MyWalletFE
-rw-r--r-- 1 root root  612 Mar  3 14:32 index.html
```

I think that for this to work, the content of the MyWalletFe folder should be copied to the parent folder (/usr/share/nginx/html); in this case default.conf or the Dockerfile contains some errors. Is there something configured in a wrong way? In the Resources section I've added the article which I followed.

Resources: Angular in Docker with Nginx
Deploy angular application on nginx server with Docker - Welcome to nginx
Just have multiple `error_log` and `access_log` entries inside your block:

```nginx
error_log syslog:server=localhost:5447,facility=local7,tag=nginx_client,severity=error;
access_log syslog:server=localhost:5447,facility=local7,tag=nginx_client,severity=info;

error_log stderr;
access_log /dev/stdout;
```

Should do the trick.
By default, my nginx server writes logs to stdout and stderr. I want to forward logs to my syslog server, and I'm doing so successfully, from nginx.conf:

```nginx
server {
    ...
    error_log syslog:server=localhost:5447,facility=local7,tag=nginx_client,severity=error;
    access_log syslog:server=localhost:5447,facility=local7,tag=nginx_client,severity=info;
    ...
}
```

How can I configure my server to *also* write the logs to stdout and stderr?
Send nginx logs to both syslog and stdout/stderr
This is actually a bug in Safari. As of WebKit build r230963 this is fixed, but there hasn't been an update to Safari yet. In case you want to keep compatible behavior, you need to remove empty file fields from the form data sent in your axios request. Something like:

```javascript
$('#myForm').find("input[type='file']").each(function() {
    if ($(this).get(0).files.length === 0) { $(this).remove(); }
});
var fData = new FormData($('#myForm')[0]);
```

This solution is jQuery-dependent, but you can adapt this logic to any library.
I'm trying to send a form with `Content-Type: multipart/form-data`. All works fine in Chrome, FF and Edge, but not in Safari: it gets a 400 from nginx.

Used Laravel + Nuxt.js + Axios.

After enabling `error_log` debug in the nginx conf I see:

```
[info] 11687#11687: *1 client prematurely closed stream: only 767 out of 907 bytes of request body received
```
Nginx returns 400 error in Safari
This is in no way a secure solution... but this is what I currently have in my setup and it is working. Maybe you can modify it to your needs. Feel free, everyone, to tell me how wrong it is and maybe we can get a better solution for everyone. (Note: the method check needs the regex operator `~`, not `=`, for the alternation to work.)

```nginx
location / {
    dav_methods PUT DELETE MKCOL COPY MOVE;

    # Preflighted requests
    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Origin" *;
        add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD, DELETE";
        add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
        return 200;
    }

    # CORS WHITELIST EVERYTHING
    # This is allowing everything because I am running
    # locally so there should be no security issues.
    if ($request_method ~ (GET|POST|OPTIONS|HEAD|DELETE)) {
        add_header "Access-Control-Allow-Origin" *;
        add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
    }

    try_files $uri $uri/ /index.php$is_args$args;
}
```
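To check that the headers actually come back, simulate a browser preflight with curl; a small sketch, assuming the server answers on localhost:

```bash
# Simulate a CORS preflight; the response should include the
# Access-Control-Allow-* headers from the OPTIONS branch
curl -si -X OPTIONS http://localhost/ \
  -H 'Origin: http://example.org' \
  -H 'Access-Control-Request-Method: POST' | grep -i '^access-control'
```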
I got stuck: I don't know how to enable CORS in nginx. Honestly, I've found many solutions for enabling CORS in nginx, one of them being https://enable-cors.org/server_nginx.html, but I added that code inside my /etc/nginx/nginx.conf and restarted the nginx server, then tried again in Postman, and the following error was raised by nginx:

```
405 Not Allowed
nginx/1.12.1
```

Please let me know how to fix it. Thanks.

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /var/www/test/app/;

    # Load configuration files for the default server block.
    include /etc/nginx/default/*.conf;

    add_header 'Access-Control-Allow-Origin' *;
    add_header 'Access-Control-Allow-Methods' 'GET, POST';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    location / {
    }
```
How to enable CORS in nginx
You can actually chain `map` directives, which makes this cleaner. For example:

```nginx
map $http_server_a $server_a_check {
    default "http://server-b.company.com";
    ""      "http://server-a.company.com";
}

map $http_account $account_check {
    default              $server_a_check;
    "~([\d]{8})(0[0-4])" "http://server-a.company.com";
}

server {
    ....
    location / {
        proxy_pass $account_check;
    }
}
```

So `proxy_pass` gets its value from `$account_check`, which does a regex check against the `Account` header. If none of the variants pass their checks, the `default` gets its value from the result of `$server_a_check`, which looks for the `Server-A` header (with no value check here, as the question didn't state what the accepted values would be).
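A quick way to exercise both branches from the shell, assuming the map-based config above is mounted at the question's `/rest/accounts/` location (header values are hypothetical, since the accepted `Server-A` value isn't specified):

```bash
# Account ending in 00-04 -> should be proxied to server-a
curl -si http://localhost/rest/accounts/ -H 'Account: 1234567803' | head -1

# Server-A header set -> also server-a, regardless of account id
curl -si http://localhost/rest/accounts/ -H 'Server-A: true' | head -1

# Neither -> default, server-b
curl -si http://localhost/rest/accounts/ | head -1
```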
I'm looking for a way to reroute all requests that have an account ID set in the HTTP header `Account-ID` with the last two digits (of a ten-digit number) from 00 to 05 (to touch only 5% of the total traffic). In addition, if a request has the HTTP header `Server-A` set, that request should be forwarded to that server regardless of the account ID. Otherwise, and by default, all traffic should be handled by server-b. My current location rule looks like this:

```nginx
location /rest/accounts/ {
    if ($http_account_id ~ '([\d]{8})(0[0-4])') {
        proxy_pass http://server-a.company.com;
    }
    if ($http_server_a = true) {
        proxy_pass http://server-a.company.com;
    }
    proxy_pass http://server-b.company.com;
}
```

As I read in the official documents here, `if` is considered evil. Is there a better solution for my approach that isn't considered evil?
proxy_pass based on headers without using if condition
Presumably you have found the answer, but for anyone who lands here looking for it, I found this:

> "core" is the low-level concept for uWSGI concurrency context in a process (can be a thread or a greenlet or a fiber or a goroutine and so on...) while switches count is incremented whenever an app "yields" its status (this has various meanings based on the lower concurrency model used)

Source: http://lists.unbit.it/pipermail/uwsgi/2015-March/007949.html
I'm running nginx with a uWSGI application, with Django using an SQLite database. When running uWSGI, each GET or POST request produces output such as:

```
[pid: 29018|app: 0|req: 39/76] 136.61.69.96 () {52 vars in 1208 bytes} [Wed Jul 19 17:25:12 2017] POST /binaryQuestionApp/?participant=A3AJJHOAV7WIUQ&assignmentId=37QW5D2ZRHPMKY652LHT9QV23ZD8SU => generated 6 bytes in 722 msecs (HTTP/1.1 200) 3 headers in 195 bytes (1 switches on core 0)
```

What does the last bit mean? Sometimes it says 2 switches, sometimes 1.
What does this UWSGI output mean? > "(X switches on core 0)"
As mentioned in the question, trailing slashes in URIs are important. I fixed this in the location; however, I didn't add it to the URI I pass using `proxy_pass`.

As for the nginx proxy, I got it to work using the following config:

```nginx
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location /api/ {
        proxy_pass http://api.default.svc.cluster.local/;
    }
}
```

Concerning the ingress solution, I was not able to get it to work by adding the missing trailing slash to the path. The service is specified by its name, and therefore no trailing slash can be added (i.e. it would result in an error).
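The difference is easy to see from the command line; a short sketch (`<proxy-ip>` is a placeholder for the nginx service address):

```bash
# With both trailing slashes in place, /api/foo is rewritten to /foo upstream
curl -sk -o /dev/null -w '%{http_code}\n' https://<proxy-ip>/api/foo

# Without them you'd see the 301 from before; -I shows the Location header
curl -skI https://<proxy-ip>/api | grep -iE '^(HTTP|location)'
```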
Consider the following nginx config file:

```nginx
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location / {
        proxy_pass http://api.default.svc.cluster.local;
    }
}
```

All incoming TCP requests on 443 should redirect to my server running on api.default.svc.cluster.local:80 (which is a Node REST API, btw). This works fine: I can `curl https:///` nginx and get a correct response, as expected.

Now I'd like to change the location from `/` to `/api`, so I can fire a `curl https:///api` in order to get the same response as before.

**1. Attempt**

So I change the location line in the config to:

```nginx
location /api {
```

Unfortunately this won't work; instead I get the error `Cannot GET /api`, which is a Node error, so obviously it gets routed to the API but something's still smelly.

**2. Attempt**

It seems the trailing slash in a URI is required, so I added it to the location:

```nginx
location /api/ {
```

Now something changed. I don't get the same error as before; instead I get a "301 Moved Permanently". How can I fix my nginx config file?

**Additional information regarding the environment**

I'm using a Kubernetes deployment that deploys the nginx reverse proxy incl. the config introduced. I then expose nginx using a Kubernetes service. Also, I tried using Kubernetes ingress to deal with this situation, using the same routes; however, the ingress service would respond with a `default backend - 404` message.
nginx responding "301 moved permanently"
I had it as:

```nginx
root /home/forge/distributor-application/laravel;
```

I updated it to:

```nginx
root /home/forge/distributor-application/laravel/public;
```

Final look:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name default;
    root /home/forge/distributor-application/laravel/public;
    ...
}
```

My site is loading again now.
I kept getting:

```
403 Forbidden nginx/1.11.9
```

I already ran:

```bash
sudo composer update
```

I think I set up proper permissions:

```
-rwxrwxrwx  1 root root    149 Feb 24 03:45 .gitignore
-rwxrwxrwx  1 root root     12 Feb 24 03:45 .gitattributes
-rwxrwxrwx  1 root root    146 Feb 24 03:45 CONTRIBUTING.md
drwxrwxrwx 15 root root   4096 Feb 24 03:45 app
-rwxrwxrwx  1 root root    567 Feb 24 03:45 phpunit.xml
drwxrwxrwx  2 root root   4096 Feb 24 03:45 bootstrap
-rwxrwxrwx  1 root root   2452 Feb 24 03:45 artisan
drwxrwxrwx 19 root root   4096 Feb 24 03:45 public
-rwxrwxrwx  1 root root    519 Feb 24 03:45 server.php
-rwxrwxrwx  1 root root      0 Feb 24 03:45 satisfiable
-rwxrwxrwx  1 root root   1599 Feb 24 03:45 readme.md
drwxr-xr-x  5 root root   4096 Feb 24 03:55 ..
-rwxrwxrwx  1 root root    992 Feb 24 14:53 composer.json
-rw-r--r--  1 root root 116004 Feb 24 14:53 composer.lock
drwxrwxrwx  6 root root   4096 Feb 24 14:53 .
drwxrwxrwx 27 root root   4096 Feb 24 14:53 vendor
```

What else should I look into?

`ps aux | grep nginx`:

```
root  12792 0.0 0.1 148960 1504 ?     Ss  Feb24 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
forge 12793 0.0 0.4 148960 4840 ?     S   Feb24 0:00 nginx: worker process
root  26625 0.0 0.1  12952 1032 pts/0 S+  15:10 0:00 grep --color=auto nginx
```
403 Forbidden nginx/1.11.9 - Laravel 4
You can't achieve your goal with a simple rewrite; Laravel always knows the real URI.

The key point is that you need to handle all requests with just one route. Laravel uses the `$_SERVER['REQUEST_URI']` variable for routing, and it is passed to Laravel from fastcgi. The `REQUEST_URI` variable is set in the `fastcgi_params` file from nginx's `$request_uri` variable:

```nginx
fastcgi_param REQUEST_URI $request_uri;
```

So you need to pass `REQUEST_URI` as `/` to Laravel to have it handle a request like `/bla/bla` as if it were `/`. Just add one line to your config:

```nginx
location ~ \.php$ {
    # now you have smth like this
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;

    # add the following line right after fastcgi_params to rewrite the value of the variable
    fastcgi_param REQUEST_URI /;
}
```

If you have `/api/` as well, you need some edits to that line:

```nginx
set $request_url $request_uri;
if ($request_uri !~ ^/api/(.*)$ ) {
    set $request_url /;
}
fastcgi_param REQUEST_URI $request_url;
```

Nginx warns that `if` is evil; this is just a first idea.

To sum up:

- `/` goes to the Laravel `/` route.
- `/api/*` goes to the Laravel api routes.
- All other requests go to the Laravel `/` route.
I have the following issue: I need to configure Nginx so that on any URL the user accesses, it keeps the URI (e.g. domain.com/some/url/) but passes only `/` to Laravel and lets Angular handle the routing.

```php
Route::get('/', function(){
    return view('index');
});
```

And when accessing `/api/{anything}`, Laravel will kick in.

For now I return index.html from the public folder until I find a solution. Here is my config:

```nginx
location / {
    index index.html;
    try_files $uri $uri/ /index.html;
}

location /api {
    index index.php;
    try_files $uri $uri/ /index.php?$query_string;
}
```

I know I can make a route like:

```php
Route::get('{anything?}', function(){
    return view('index');
});
```

But that is too broad.

Update:

```nginx
location / {
    rewrite ^/(.*)$ / break;
    index index.php;
    try_files $uri $uri/ /index.php;
}

location /api {
    index index.php;
    try_files $uri $uri/ /index.php?$query_string;
}
```
Laravel + AngularJS Nginx routing
The `user` directive defines which user Nginx will run the web server process as. You may start Nginx as root, but it will launch worker processes owned by the specified user.

If the www-data user does not exist, you can create it. Or you can specify any other user. But it is better for a web server to have a dedicated user, for the sake of security.

On my server I created a "www" user with group "www" and used it in my Nginx configuration.
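To see this in practice, compare the configured user with the owner of the running workers; a quick sketch:

```bash
# What the config asks for
grep '^user' /etc/nginx/nginx.conf

# Master usually runs as root, workers as the configured user
ps -o user=,pid=,cmd= -C nginx

# Create a dedicated no-login user if it doesn't exist yet
sudo useradd --system --no-create-home --shell /usr/sbin/nologin www
```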
nginx is installed on an AWS EC2 instance running Ubuntu 14.04. The first line of /etc/nginx/nginx.conf says `user www-data;`. Is there such a system user on my EC2 instance? If not, what value is given to this directive? Thanks
meaning of nginx user directive
You can't: Puma is an application server.

On the TCP/IP stack, each application is assigned a port so that a received packet can be delivered to the application expecting it. Imagine multiple applications living on the same port: there would be no way for an application to know whether an incoming packet was really intended for it or for another application on the same port.

That's why we use proxies and reverse proxies. Nginx, being a reverse proxy, resolves the requested URL to an application and proxies the request to it. It's a single application that receives all incoming packets on a given port and then proxies them to an application on another port or socket.

To have multiple web servers on the same port, you would have to put a reverse proxy such as Nginx or HAProxy in front of them.
Nginx's importance in production was normally based on its ability to serve slow clients; in the setup of a RESTful API it seems to be an unnecessary layer in the production stack, especially as Puma (unlike the widely used Unicorn) can handle nginx's work:

> Puma can allow multiple slow clients to connect without requiring a worker to be blocked on the request transaction. Because of this, Puma handles slow clients gracefully. Heroku recommends Puma for use in scenarios where you expect slow clients. (ref)

How can I enable Puma to serve multiple Ruby applications on the same port without using nginx as a reverse proxy?
Puma without nginx - multiple ruby applications on the same IP:PORT
The problem here is with the `links` directive in your docker-compose.yml file. You have:

```yaml
links:
  - database
```

That's basically saying that the `name:alias` link is `database:database`, according to the docker-compose.yml reference.

Also, if you read the linking container docs, you can see that the environment variables exported to the source container are of the format `ALIAS_XXX`, for example `ALIAS_PORT_3306_TCP_PORT`. So in essence, in your database.yml what you want to do is something like this:

```yaml
default: &default
  adapter: mysql2
  database: dockertest
  host: <%= ENV['DATABASE_PORT_3306_TCP_ADDR'] %>
  port: <%= ENV['DATABASE_PORT_3306_TCP_PORT'] %>
  username: dockertest
  password: dockertest

development:
  <<: *default

production:
  <<: *default
```

If you want to use the `MYSQL` alias, your links would have to look something like this in your docker-compose.yml file:

```yaml
links:
  - database:mysql
```

The error:

```
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
```

is basically coming from your Rails app not being able to see what's in your database.yml and defaulting to a local /var/run/mysqld/mysqld.sock connection.

Hope it helps.
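You can confirm which link variables actually exist inside the app container before touching database.yml; a sketch using docker-compose:

```bash
# Dump the environment the app container sees; the DATABASE_* variables
# come from the `links: - database` entry
docker-compose run --rm app env | grep '^DATABASE_'
```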
I'm learning Docker and I have a problem trying to connect a Rails app in the passenger-full container and a mysql container. Both are linked in a compose file:

```yaml
app:
  build: ./rails
  ports:
    - "80:80"
  links:
    - database
  volumes:
    - ./rails:/home/app/webapp

database:
  image: mysql
  environment:
    - MYSQL_DATABASE="dockertest"
    - MYSQL_USER="dockertest"
    - MYSQL_PASSWORD="dockertest"
    - MYSQL_ROOT_PASSWORD="root"
```

So I added the apt-get install at the top of my Dockerfile like this:

```dockerfile
FROM phusion/passenger-full

RUN apt-get update && apt-get install libmysqlclient-dev mysql-client -y

# Set correct environment variables.
ENV HOME /root

# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]

RUN rm -f /etc/service/nginx/down
RUN rm /etc/nginx/sites-enabled/default
ADD webapp.conf /etc/nginx/sites-enabled/webapp.conf

RUN mkdir /home/app/webapp
WORKDIR /home/app/webapp
ADD . /home/app/webapp
RUN cd /home/app/webapp && bundle install
RUN touch /home/app/webapp/tmp/restart.txt

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```

Also, this is my database.yml in the Rails app:

```yaml
default: &default
  adapter: mysql2
  database: dockertest
  host: <%= ENV['MYSQL_PORT_3306_TCP_ADDR'] %>
  port: <%= ENV['MYSQL_PORT_3306_TCP_PORT'] %>
  username: dockertest
  password: dockertest

development:
  <<: *default

production:
  <<: *default
```

The problem is that I can't stop receiving the error:

```
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
```

The webapp.conf file is:

```nginx
# /etc/nginx/sites-enabled/webapp.conf:
server {
    listen 80;
    server_name localhost;
    root /home/app/webapp/public;

    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}
```

Is this the right way to do it? As you can see, I'm pretty new to Docker.
Docker MySQL can't connect to socket
You should rewrite the `Location` headers that your backend sends to Nginx, as described in http://wiki.nginx.org/HttpProxyModule#proxy_redirect, so:

```nginx
proxy_redirect http://localhost:3000/_oauth/google http://sub.example.com/_oauth/google;
```

The other option, which would work for popup-style login as well, is to set the ROOT_URL environment variable for Meteor at startup as follows:

```bash
ROOT_URL="http://sub.example.com" PORT=3000 node main.js
```
I have an Ubuntu 14.04 server and a Meteor application that runs at localhost:3000 on this server. The public FQDN of my server is sub.example.com. The Meteor application uses Google OAuth 2.0, and I have the following configured in the Google API Console:

```
REDIRECT URIS
http://sub.example.com/_oauth/google
http://sub.example.com/_oauth/google?close

JAVASCRIPT ORIGINS
http://sub.example.com
```

My Nginx config file looks like this:

```nginx
server {
    listen 80 default_server;
    server_name sub.example.com www.sub.example.com;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3000;
    }
}
```

The proxy works and I can access my Meteor application when I go to sub.example.com. But when I try to use Google OAuth 2.0 in this application, a popup opens as it should and I get:

```
Error: redirect_uri_mismatch
The redirect URI in the request: http://localhost:3000/_oauth/google?close
did not match a registered redirect URI.
```

I have played with the headers in the nginx config file with no luck. I'm obviously missing something.
Nginx proxy with Google OAuth 2.0
Try this:

```nginx
location ~* ^/test/(.*)$ {
    alias /srv/myproject/xyz/main/;
    try_files $1.html =404;
}
```
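A quick check that the extensionless URLs now resolve, assuming a firstfile.html in the main folder as in the question:

```bash
# Should return 200 and the contents of firstfile.html
curl -si http://www.myurl.com/test/firstfile | head -1

# Unknown names should fall through to 404
curl -si http://www.myurl.com/test/missing | head -1
```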
root directory = /srv/myproject/xyz/main/

In the "main" folder I have a few *.html files, and I want all of them to be reachable at a URL like /test/ (which is quite different from the directory structure). This is my very basic nginx configuration:

```nginx
server {
    listen 80;
    error_log /var/log/testc.error.log;

    location /test/ {
        root /srv/myproject/xyz/main/;
        #alias /srv/myproject/xyz/main/;
        default_type "text/html";
        try_files $uri.html;
    }
}
```

If I use a simple alias:

```nginx
location /test/ {
    alias /srv/myproject/xyz/main/;
}
```

then it works perfectly, I mean I can access those html files via http://www.myurl.com/test/firstfile.html and so on, but I don't want that .html extension.

I tried to follow these threads but with no success:

- http://forum.nginx.org/read.php?11,201491,201494
- How to remove both .php and .html extensions from url using NGINX?
- how to serve html files in nginx without showing the extension in this alias setup
Serving static HTML files in Nginx without extension in url
I made it work! First things first: I had an error in my config. The line

```nginx
if (!-d /home/sites/dev/ilundev.no/public/$1) {
```

was wrong, and should be

```nginx
if (!-d /home/sites/dev/$1) {
```

And I had to set up a wildcard entry for my domain at my domain provider. The entry looked like `*.ilundev.no`, I used the "A" option - and it worked!

Updated and optimized config (this will work as long as the DNS at your domain provider properly sets `*.dev` in a subdomain for your domain, with the "A" option and the IP of your server):

```nginx
server {
    listen 80;
    server_name dev.ilun.no www.dev.ilun.no;
    root /home/sites/dev;
}

server {
    listen 80;
    server_name ~^(.*)\.dev.ilun\.no$;

    if (!-d /home/sites/dev/$1) {
        rewrite . http://dev.ilun.no/ redirect;
    }

    root /home/sites/dev/$1;
}
```

However, now I'm stuck trying to make the server run PHP code in such a subdomain.
I have this folder: /home/sites/dev/. Nginx serves the content of this folder if I visit domain.com.

But let's say I create a folder inside this folder, for example "wp-test"; I want nginx to serve this folder if I visit wp-test.domain.com.

It seems like "ianc" made it work in his blog post, but I can't get it to work. Here's my nginx config so far:

```nginx
server {
    listen 80;
    server_name www.ilundev.no;
    root /home/sites/dev;
}

server {
    listen 80;
    server_name ~^(.*)\.ilundev\.no$;

    if (!-d /home/sites/dev/ilundev.no/public/$1) {
        rewrite . http://www.ilundev.no/ redirect;
    }

    root /home/sites/dev/$1;
}

server {
    listen 80;
    server_name ilundev.no;
    rewrite ^/(.*) http://www.ilundev.no/$1 permanent;
}
```
Nginx: Automatic sub-domain creation if a folder exists
Here the matter is nginx configuration, not Node.js code.

nginx writes temp files to disk before sending them to the client; it's often a good idea to disable this cache if the site is going to serve big static files, with something like:

```nginx
location / {
    proxy_max_temp_file_size 0;
}
```

(no limit)
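To confirm the full file now comes through, compare the downloaded byte count with the file's real size; a small sketch (the download URL is a placeholder for whatever route maps to the question's handler):

```bash
# Expect ~1.6GB here rather than stopping at 1.08GB
curl -s -o /dev/null -w 'downloaded: %{size_download} bytes\n' \
  http://yoursite/download/latest
```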
I have this Node.js app proxied by Nginx (in production). A route looks like this:

```javascript
exports.download = function(req, res){
    var id = req.params.id;
    if (id && id == 'latest') {
        res.download(config.items.release_directory+'/<1.6GB-file>.zip', function(err){
            if (err) {
                console.log(err);
            } else {
                // do something
            }
        });
    } else {
        res.redirect(301, '/');
    }
};
```

So, clicking the right route/URL, the browser starts to download the big file, but then it always stops at 1.08GB (the file is about 1.6GB), truncating it. I really cannot understand why. Any ideas?

EDIT: config.items.release_directory is a static Express directory declared as:

```javascript
app.use('/releases', express.static(path.join(__dirname, '..', 'releases')));
```

EDIT2: In development, with grunt serving the app directly without Nginx, it works fine.

SOLVED: read the comments below; the problem is the proxy_max_temp_file_size variable in Nginx.
Nginx node.js express download big files stop at 1.08GB
Assuming the Play app is running on the same machine as Nginx and is listening on port 9000:

```nginx
upstream play_app {
    server 127.0.0.1:9000;
}

server {
    listen 80;

    location / {
        proxy_pass http://play_app;
    }
}
```

This will route all requests from port 80 via nginx to the Play app on the same machine on port 9000.

If you wish for Nginx to serve your local assets, add a second location before the catch-all rule:

```nginx
server {
    listen 80;

    location /assets {
        root /var/www;
    }

    location / {
        proxy_pass http://play_app;
    }
}
```
I want to use Nginx to serve the /assets folder for my Play! application. I would like to:

- Proxy most requests to Play!
- Point /assets to a local folder

I am using the following configuration, but it's not working:

```nginx
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    upstream play_app {
        server 0.0.0.0:9000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://play_app;
        }
    }
}
```

Also, if I get this working, will I be able to write to the Nginx /assets folder from Play via `Play.getFile("/assets/images")`?
nginx configuration for PlayFramework static files [closed]
Try this:

resolver 8.8.8.8;

location ~* ^/tun/(.+)$ {
    proxy_pass http://$1;
    proxy_set_header X-Real-IP $remote_addr;
}
I currently have this rule in my nginx config:

location /tun {
    proxy_pass http://url.domain.com/mp3.mp3;
    proxy_set_header X-Real-IP $remote_addr;
}

which I use for tunneling in a private project. However, I want to make it dynamic. I am looking for something like this:

location /tun/$URL$ {
    proxy_pass $URL$;
    proxy_set_header X-Real-IP $remote_addr;
}

so users can type in their own URLs like that. I understand there are security flaws in this, but I really want to see this happening! Thanks in advance!
Getting part of nginx url as "variable"?
Try using $uri or $request_uri instead of $0.
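For instance, a sketch of the converted rule with $uri substituted in (untested; since $uri already begins with a slash, it is appended to /share directly):

rewrite ^/(arin|barry|john|ross|danny).*$ /share$uri last;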
I'm trying to convert my Apache rewrite rules for my new nginx webserver, but I'm having problems translating this particular line:

RewriteRule ^(arin|barry|john|ross|danny).*$ /share/$0 [NC]

On my old Apache server, this rule caused http://example.com/danny/awesomeVideo.avi to view http://example.com/share/danny/awesomeVideo.avi instead, without the link changing. To be honest, as my Apache setup was a long time ago, I'm not even sure whether the link not changing in the address bar was due to this rule or not. Most online converters will propose this rule for nginx:

rewrite ^/(arin|barry|john|ross|danny).*$ /share/$0 last;

Unfortunately, $0 seems to be faulty, as this is what I get when restarting nginx:

Restarting nginx: nginx: [emerg] unknown "0" variable
nginx: configuration file /etc/nginx/nginx.conf test failed

Does anyone know how to express Apache's $0 in nginx?
What's $0 in nginx? (mod_rewrite)
How about:

upstream nodejs {
    server 127.0.0.1:3001;
}

upstream rails {
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location /nodejs {
        proxy_pass http://nodejs;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /rails {
        proxy_pass http://rails;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

or, more concisely:

server {
    listen 80;

    location /nodejs {
        proxy_pass http://127.0.0.1:3001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /rails {
        proxy_pass http://127.0.0.1:3002;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Most of the proxy directives are optional (you probably just need proxy_pass and proxy_redirect), but useful.
I'd like to serve several applications from the same server, reverse-proxied through nginx. I'd like these applications to be available through a single domain name with sub-URIs, e.g.:

www.mydomain.com/nodejs => caught by nginx listening on port 80 and proxied to a node.js app running on port 3001
www.mydomain.com/rails => caught by nginx listening on port 80 and proxied to a rails app running on port 3002

My first stab is to start with two upstreams:

# /etc/nginx/sites-available/mydomain.com
upstream nodejs {
    server 127.0.0.1:3001;
}

upstream rails {
    server 127.0.0.1:3002;
}

server {
    listen 80 default deferred;

    # What do I put here so that
    # mydomain.com/nodejs is proxied to the nodejs upstream and
    # mydomain.com/rails is proxied to the rails upstream ???
}

Does anyone know this, or can you point me in the right direction?
NGINX => serve several applications on a single host name with sub-uris
Run two instances of php-fpm and describe them in one upstream section:

upstream fast_cgi {
    server localhost:9000;
    server localhost:9001 backup;
}

Change nginx.conf to use fastcgi_pass fast_cgi;. After that, if you restart one instance, nginx will process requests through the second php-fpm instance.
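A minimal sketch of the matching location block; the fastcgi_param path is a generic placeholder, not taken from the question:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass fast_cgi;  # the upstream defined above, with the backup server
}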
When restarting the php-fpm service on my Linux system, the PHP CGI process takes a while to shut down completely. Until it does, trying to start a new PHP CGI instance fails because port 9000 is still held by the terminating process. Accessing the site during this time results in a 502 Gateway Error, which I'd like to avoid. How can I restart php-fpm smoothly without getting this error?
How can I avoid getting a 502 Gateway Error while restarting php-fpm?
The problem ended up being that using an INI config file results in uWSGI running in single-interpreter mode. The exact same config in XML allows everything to work correctly. The uWSGI developer said this would NOT be the case in future versions.
uWSGI config:

[uwsgi]
socket = /tmp/uwsgi.sock
chmod-socket = 666
processes = 1
master = true
vhost = true
no-site = true

Nginx config:

server {
    listen 80;
    server_name www.site1.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;
        uwsgi_param UWSGI_PYHOME /var/virtualenvs/site1;
        uwsgi_param UWSGI_CHDIR /var/www/site1;
        uwsgi_param UWSGI_SCRIPT wsgi;
    }
}

server {
    listen 80;
    server_name www.site2.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;
        uwsgi_param UWSGI_PYHOME /var/virtualenvs/site2;
        uwsgi_param UWSGI_CHDIR /var/www/site2;
        uwsgi_param UWSGI_SCRIPT wsgi;
    }
}

Whatever site I hit first is the one it is stuck displaying, so if I go to site2 first I can't ever see site1. Any thoughts on why the uWSGI vhost setting seems not to be working?
uWSGI vhost problem
Launchd on OSX. Upstart/init on the unices. uwsgi also has its own process manager, so you can just run that as well. Tuning: check the mailing list for advice on your particular requirements. Uwsgi is amazing; it is a complete deploy solution. Nginx above 0.8.40 will build the uwsgi bindings by default. Build nginx, build uwsgi, and you are golden.
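As a starting point, a minimal uWSGI INI sketch for a Django project; the socket, chdir, and module values are placeholders to adapt, not taken from the question:

[uwsgi]
socket = 127.0.0.1:3031
chdir = /path/to/project
module = myproject.wsgi
master = true
processes = 4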
I am leaning towards uwsgi + nginx for my Django app. Can anyone share the best method for starting up my uwsgi processes? Does anyone have experience tuning uwsgi?
uwsgi + django via Nginx - uwsgi settings/spawn?
I am not familiar with BaseHTTPRequestHandler, but I will try to help with the curl response, as I had a pretty similar issue a few days ago. Did you try to run curl with the HTTP flag set to 0.9?

curl 'http://your-domain.com' --http0.9

Maybe your server does respond with HTTP/0.9. Since curl 7.66.0, HTTP/0.9 is disabled by default, so that could be the reason for the "Received HTTP/0.9 when not allowed" response.
Am using a BaseHTTPRequestHandler HTTP server and copy/pasted the code from the interwebs. Here's the part where the response/header is set:

class S(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

But when calling the server with curl, the response is:

curl: (1) Received HTTP/0.9 when not allowed

When calling through a browser: ERR_INVALID_HTTP_RESPONSE. protocol_version is http/1.0. The web server is called through an nginx reverse proxy, which just does:

location / {
    proxy_http_version 1.1;
    proxy_pass http://${NODE_NAME}:9000/;
}

Are more headers needed for this? How do we set correct HTTP headers in BaseHTTPRequestHandler or nginx?
missing http headers in web server
I have some great news! We're using the same cert on our cloud dev environments (however, they are in pfx form). Locally I run Linux as mentioned, and I had to convert the pfx to an RSA key file and a CRT file. I entered our dev domain on this site: https://whatsmychaincert.com/ and it downloaded a *.chain.crt file. Together with my old crt file, I ran this command:

cat example.com.crt example.com.chain.crt > example.com.chained.crt

In Nginx I then referenced the .chained.crt file. Now Chrome accepts my local, secure webpage.
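For completeness, a sketch of the relevant nginx lines after the change, assuming the chained file is dropped into the same certs directory used in the question:

server {
    listen 443 ssl;
    server_name test-local.ad.ourdomain.com;
    ssl_certificate /home/myname/.certs/example.com.chained.crt;  # leaf + chain concatenated
    ssl_certificate_key /home/myname/.certs/ourkey.rsa;
}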
A couple of weeks ago we implemented the SameSite cookie policy for our cookies. If I want to develop locally, I need a certificate to get the cookies. We're running a Node express server that is reverse-proxied through an nginx configuration where we add the cert:

# Server configuration
server {
    listen 443;
    server_name test-local.ad.ourdomain.com;

    ssl_certificate /home/myname/.certs/ourcert.crt;
    ssl_certificate_key /home/myname/.certs/ourkey.rsa;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:9090;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:9090 https://test-local.ad.ourdomain.com;
    }
}

Now to the weird part. We updated to Chrome 80 today, and all of a sudden I got an HSTS issue. I was unable to access the site even if I wanted to (no opt-in possibility). I cleared that inside chrome://internals/#hsts, and that worked. However, I still get NET::ERR_CERT_AUTHORITY_INVALID, but I now have the opt-in alternative. Accessing it from Chrome incognito mode works like a charm, no issues there. Same with Firefox, no issues there either. It says the certificate is valid, green and pretty. Checked here as well: https://www.sslshopper.com/certificate-decoder.html and it's 100% green. I'm running Ubuntu 19.10 using Regolith. My colleagues are using the same cert, also Chrome 80, but they're running Mac; no issues there in Chrome. Any idea? I tried to clear browser settings, no change.
NET::ERR_CERT_AUTHORITY_INVALID in Chrome not incognito and Firefox locally with valid certs on nginx
If you access the service using http://localhost:8081/nexus, it works. Your current configuration uses proxy_pass to change the URI /nexus to /nexus/. Generally, it is advisable to have a trailing / on both the location and proxy_pass URIs, or on neither of them. For example:

location /nexus {
    proxy_pass http://localhost:8081/nexus;
    ...
}

In fact, you do not need to modify the URI at all, so you can remove it from the proxy_pass directive altogether. The following should be equivalent, but more efficient:

location /nexus {
    proxy_pass http://localhost:8081;
    ...
}

By default, the Host header is set to the value of the proxy_pass directive (i.e. localhost:8081), which is known to work correctly. You may find that your statement proxy_set_header Host $host:$server_port; is unnecessary. See this document for details.
I am trying to get Nexus3 to run behind Nginx. Nginx is used as a reverse proxy and for SSL termination. When accessing the /nexus path through Nginx, I get multiple errors such as "Operation failed as server could not be reached" and "unable to detect which node you are connected to". Accessing the Nexus UI without going through Nginx works perfectly, which led me to think the error is on the Nginx side.

Nginx config file:

location /nexus {
    proxy_pass http://localhost:8081/nexus/;
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    resolver 8.8.8.8 8.8.4.4 ipv6=off;
}
Nexus3 + Nginx Reverse proxy
Angular apps are perfect candidates for serving with a simple static HTML server. You don't need a server-side engine to dynamically compose application pages because Angular does that on the client side. If the app uses the Angular router, you must configure the server to return the application's host page (index.html) when asked for a file that it does not have. So in your nginx server conf, just add something like this:

try_files $uri $uri/ /index.html;

Reference: https://github.com/auth0-samples/auth0-angular-samples/tree/master/02-User-Profile

Thank you JB Nizet, it worked finally.
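In context, a minimal sketch of a server block for the built app; the root path is an assumption borrowed from the question's Apache copy step, not something stated in the answer:

server {
    listen 80;
    root /var/www/html;
    index index.html;

    location / {
        # fall back to index.html so the Angular router can handle /callback etc.
        try_files $uri $uri/ /index.html;
    }
}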
Hi, I am trying to build an Angular 4 app. Steps followed below. Build:

ng build

In my Amazon EC2 instance I am running Apache. Steps followed:

#!/bin/bash
yum install httpd -y
yum update -y
cp dist/* /var/www/html/
service httpd start
chkconfig httpd on

Everything works, but my app is using auth0 for authentication, and I see they do a callback to http://ip/callback. My application says: 404 Not Found. I tried to build like ng build --base-href ., but it didn't work! Please help me with how to build this one. Please note that when I use ng serve in my local environment everything works awesome, but when I am trying to deploy to production it's giving this error. I am pretty sure something is wrong with how I am building the app. I tried an nginx docker container; it gives the same error. My docker file looks like this:

FROM nginx
COPY dist /usr/share/nginx/html

docker build -t ng-auth0-web-dev .
docker run -d -p 8080:80 ng-auth0-web-dev

Anything wrong in the above docker file?

https://github.com/auth0-samples/auth0-angular-samples/tree/master/01-Login - sample app code
https://manage.auth0.com/#/logs - No error in logs, which means auth0 is working fine, but I am getting this error with Angular.

Exact error: (screenshot)

Update: I tried building like this also - ng build --base-href http://34.213.164.54/ and ng build --base-href http://34.213.164.54:80/ - but same error. So the problem is narrowed down to how I am building the Angular app.

public handleAuthentication(): void {
    this.auth0.parseHash((err, authResult) => {
        if (authResult && authResult.accessToken && authResult.idToken) {
            window.location.hash = '';
            this.setSession(authResult);
            localStorage.setItem('email', profile.email);
            this.verticofactoryService.registerUser(u);
            this.router.navigate(['/home']);
        } else if (err) {
            this.router.navigate(['/home']);
            console.log(err);
            alert(`Error: ${err.error}. Check the console for further details.`);
        }
    });
}
How to build angular 4 apps in nginx or apache httpd?
So I finally configured nginx properly. I added root and removed the hardcoded path for static, and also added log files that clearly show that static files and CSS are being loaded from nginx! I also changed the listening port to 80 (surprise).

server {
    listen 80;
    server_name myapp.com;
    root /home/pi/Public/myapp;

    access_log /home/pi/Public/myapp/logs/nginx-access.log;
    error_log /home/pi/Public/myapp/logs/nginx-error.log;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /static/ {
    }

    location /uploads/ {
    }
}
I have a flask app running with:

gunicorn -w 1 -b 0.0.0.0:8000 flaskapp:app

with the nginx config below. However, how can I tell if nginx is actually serving the static files or not? I tried changing alias /home/pi/Public/flaskapp/static/; to .../static-testing/; and just put a placeholder style.css there, but the page seems to load like before.

server {
    listen 5000;
    server_name _;

    location / {
        proxy_pass http://127.0.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /static {
        alias /home/pi/Public/flaskapp/static/;
    }
}

Am I missing something obvious? Does one have to specify something in the routes of flask?
Verify that Nginx is serving static files instead of Flask
I found an nginx module called ngx_http_dyups_module that matches my question.
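From memory of that module's README (worth verifying against the current docs), the rough shape is: expose a management interface with the dyups_interface directive, then add and remove upstreams over HTTP at runtime; the addresses below are hypothetical:

server {
    listen 8081;
    location / {
        dyups_interface;
    }
}

# add an upstream named "backend" with one server
curl -d "server 127.0.0.1:8080;" http://127.0.0.1:8081/upstream/backend

# remove it again
curl -X DELETE http://127.0.0.1:8081/upstream/backend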
I mean add an upstream, but not a server in an upstream. That means I don't have an upstream block like:

upstream backend {
    # ...
}

I want to create an upstream block dynamically. That is something like:

content_by_lua_block {
    upstream_block.add('backend');
    upstream_block.add_server('backend', '127.0.0.1', 8080);
    upstream_block.add_server('backend', '127.0.0.1', 8081);
    upstream_block.add_server('backend', '127.0.0.1', 8082);
    upstream_block.del_server('backend', '127.0.0.1', 8080);
}

proxy_pass http://backend
How to dynamically add an upstream in Nginx?
Change $servername = "localhost"; to $servername = "mysql";. Your MySQL service isn't on the localhost of your webserver container; you should use the name of the service instead.
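Tying it back to the compose file, a sketch of the connection line with the service name as host; the database name and credentials are carried over from the question's compose environment, so adjust to taste:

$conn = new PDO("mysql:host=mysql;dbname=wiput", "web", "Web#1234");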
I have a problem connecting to the MySQL container.

docker-compose.yml:

version: '2'
services:
    mysql:
        image: mysql:latest
        environment:
            MYSQL_ROOT_PASSWORD: JoeyW#1999
            MYSQL_DATABASE: wiput
            MYSQL_USER: web
            MYSQL_PASSWORD: Web#1234
        volumes:
            - ./mysql:/var/lib/mysql
        networks:
            - code-network
    php:
        image: wiput1999/php:latest
        volumes:
            - ./code:/code
        networks:
            - code-network
    nginx:
        image: nginx:latest
        ports:
            - "80:80"
            - "443:443"
        volumes:
            - ./code:/code
            - ./site.conf:/etc/nginx/conf.d/default.conf
            - /etc/letsencrypt:/etc/letsencrypt
        networks:
            - code-network
networks:
    code-network:
        driver: bridge

PHP test script:

<?php
$servername = "localhost";
try {
    $conn = new PDO("mysql:host=$servername;dbname=wiput", "web", "Web#1234");
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    echo "Connected successfully";
} catch(PDOException $e) {
    echo "Connection failed: " . $e->getMessage();
}
?>

This script responds with: Connection failed: SQLSTATE[HY000] [2002] No such file or directory. What's wrong with my code? Because I think it should be fine. If anyone has a better solution, thank you for your help.
Docker Compose with PHP, MySQL, nginx connection issue
"elements". Your googling was accurate.
I'm reading the nginx source code, and I found elts in many data structure declarations, such as:

struct ngx_array_s {
    void *elts;
    ngx_uint_t nelts;
    /* some members are omitted */
}

From the code, I know elts is the address of the array that is used to store elements. But I wonder what elts stands for. After googling a bit, I feel like maybe it stands for "element start" (reference). Is that right, or what are the exact words it stands for?
what does "elts" stands for in nginx source code
Ok, it turned out it was an issue with socket.io's namespaces in the node.js code. More info here: http://socket.io/docs/rooms-and-namespaces

Here's a working example of the server:

var app = require( 'express' )();
var http = require( 'http' ).Server( app );
var io = require( 'socket.io' )( http );

var nsp = io.of('/chat'); // this is what needs to happen

// and then we're listening to communication in the proper namespace
nsp.on( 'connection', function( socket ){
    console.log( 'user connected' );

    socket.on('disconnect', function(){
        console.log('user disconnected');
    });

    socket.on('message', function(msg){
        nsp.emit( 'message', msg ); // this will broadcast the message to everybody connected within the same namespace
        console.log('message: ' + msg);
    });
});

http.listen( 8090, function(){
    console.log( "listening on :8090" );
});
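Presumably the matching client then connects to the same namespace; a sketch with the stock socket.io 1.x client, assuming the page is served from the same origin and the default /socket.io path reaches the node server:

var socket = io('/chat'); // namespace must match io.of('/chat') on the server
socket.on('message', function(msg){
    console.log('message: ' + msg);
});
socket.emit('message', 'hello');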
I'm trying to run a socket.io chat app with nginx as a proxy. It works fine when I connect to the server via http + port, but it doesn't work with https. I see user connected/disconnected events pass through, but no emit reaches client or server. Here's my server .conf (nginx/1.4.6, Ubuntu):

upstream websocket {
    server 127.0.0.1:8090;
}

server {
    listen 80;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate /home/andrew/example.com/nginx/certs/example.com.cer;
    ssl_certificate_key /home/andrew/example.com/nginx/certs/example.com.private.key;

    root /home/andrew/example.com/public;
    index index.html index.htm;
    server_name example.com;

    location /chat/ {
        rewrite ^/chat/?(.*)$ /$1 break;
        proxy_pass http://websocket;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}

Server (Node v0.12.1, socket.io v1.3.5):

var express = require('express');
var app = express();
var server = require('http').createServer( app );
var io = require("socket.io").listen( server );
server.listen(8090);
console.log('listening on port 8090');

io.sockets.on( 'connection', function( socket ){
    console.log( 'user connected' );
    var msg = "user connected!!!";
    socket.emit( 'message', msg );

    socket.on('disconnect', function(){
        console.log('user disconnected');
    });

    socket.on('message', function( msg ){
        socket.emit( 'message', msg );
        console.log('message: ' + msg);
    });
});

Client: [stripped] Send
Is Nginx + Node.js + Socket.io + SSL possible?
Actually, what I really want is not possible, so it's required to have two separate Connector tags and two upstreams in Nginx, like so.

Tomcat's server.xml:

<Connector port="8080" protocol="HTTP/1.1" proxyPort="80" />
<Connector port="8443" protocol="HTTP/1.1" proxyPort="443" scheme="https" secure="true" />

Matching Nginx configuration:

server {
    listen 80;
    listen 443 ssl spdy;

    location /saiku-ui {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://saiku-server-$scheme; # This is the upstream name, note the variable $scheme in it
        proxy_redirect off;
    }
}

upstream saiku-server-http {
    server ip.of.tomcat.server:8080;
}

upstream saiku-server-https {
    server ip.of.tomcat.server:8443;
}

Please note that Tomcat receives plain HTTP traffic on both the 8080 and 8443 ports (no SSL there, it's terminated by Nginx), but for connections on the 8443 port the links it generates must start with https:// instead of http:// (via the attributes scheme="https" secure="true"), and it will insert into links the ports specified in the proxyPort attribute. Nginx terminates SSL and proxies all secure connections to the 8443 port of Tomcat via the saiku-server-https upstream, where https is the value of the $scheme Nginx request variable (see the location block).
Description: We're installing an application running Tomcat 6 behind Nginx for different clients. Some of those installations are HTTP only, some HTTPS only, some both. One of those installations has HTTP and HTTPS working on non-standard ports (8070 and 8071) due to a lack of public IPs. The application at hand is displayed as an iframe in another app.

Current behaviour: Tomcat redirects all HTTPS requests to HTTP (so nothing is displayed in the iframe due to browser restrictions for mixed content).

Current configuration. Iframe code: [stripped]. Tomcat's server.xml: [stripped]. Nginx vhost:

server {
    listen 80;
    listen 443 ssl spdy;

    location /saiku-ui {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://saiku-server; # This is the upstream name
        proxy_redirect off;
    }
}

upstream saiku-server {
    server ip.of.tomcat.server:8080;
}

Desired behaviour:

Tomcat should listen on one single port for both HTTP and HTTPS requests. If there are two Connector tags it will be much harder to configure Nginx.
Tomcat should not redirect between schemas.
Nginx may listen on arbitrary ports (e.g. listen 8071 ssl spdy;).
Links generated by Tomcat should be either relative or include the schema, host, and port as provided by Nginx.

Additional info: I've tried to add schema and proxyPort attributes to the Connector; after that Tomcat will always redirect from HTTP to HTTPS (at least it's better). I can't google such a configuration and am not experienced with Tomcat. Please help.
Tomcat behind Nginx: how to proxy both HTTP and HTTPS, possibly on non-standard ports?
Okay! So the problem was, I think, related to this bug. It seems that even though apparmor wasn't configured to prevent access to sockets inside the containers, it was actually doing something to prevent reading from them (though not creation...), so turning off apparmor for the container (following these instructions) worked to fix it. The two relevant lines were:

sudo apparmor_parser -R /etc/apparmor.d/usr.bin.lxc-start
sudo ln -s /etc/apparmor.d/usr.bin.lxc-start /etc/apparmor.d/disabled/

and adding

lxc.aa_profile = unconfined

to the container's config file. NB: these errors were not recorded in any apparmor logs.
I have a flask app running under uWSGI behind nginx.

*1 readv() failed (13: Permission denied) while reading upstream, client: 10.0.3.1, server: , request: "GET /some/path/constants.js HTTP/1.1", upstream: "uwsgi://unix:/var/uwsgi.sock:", host: "dev.myhost.com"

The permissions on the socket are okay (666, and set to the same user as nginx); in fact, even when I run nginx as root I still get this error. The flask app/uwsgi is sending the request properly, but it's just not being read by Nginx. This is on Ubuntu Utopic Unicorn. Any idea where the permission might be getting denied if the nginx process has full access to the socket? As a complicating factor, this server is running in a container that has Ubuntu 14.04 installed in it. And this setup used to work... but I recently upgraded the host to 14.10... I can fully understand that this could be the cause of the problem, but before I downgrade the host or upgrade the container I want to understand why. When I run strace on a worker that's generating this error, I see the call it's making is something like this:

readv(14, 0x7fffb3d16a80, 1) = -1 EACCES (Permission denied)

14 seems to be the file descriptor created by this system call:

socket(PF_LOCAL, SOCK_STREAM, 0) = 14

So it can't read from a local socket that it has just created?
nginx permission denied while reading upstream - even when run as root
I guess the "red" colour in LS means that unicorn_app is a broken symbolic link. Please tryls -l /etc/init.d/unicorn_app, see if the file it points to exist or not.In addition, you may runfile /etc/init.d/unicorn_appto see if it is a broken symbolic link or not.
I am trying to configure unicorn + nginx on my Ubuntu 10.04 Linode to deploy my Rails app, and when I execute the command:

sudo update-rc.d unicorn_app defaults

I get the following error:

update-rc.d: /etc/init.d/unicorn_app: file does not exist

However, I can see the unicorn_app file in the init.d folder. The unicorn_app filename is shown in red with a grey background in the Ubuntu terminal. How can I fix this problem?
"update-rc.d: /etc/init.d/unicorn_app: file does not exist" in ubuntu 10.04
Create the file /etc/systemd/system/nginx.service with the content:

[Unit]
Description=Nginx
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload

[Install]
WantedBy=multi-user.target

After that you can control it with:

sudo systemctl stop|start|restart nginx.service

or

sudo service nginx stop|start|restart

To enable nginx to start on boot you can run sudo systemctl enable nginx.service.
I am running Fedora 16 32-bit, and I installed Passenger with nginx (option 1 during installation; everything was handled for me). Installation went OK, but nginx is not registered as a service. The only way I can run it is directly through /opt/nginx/sbin/nginx. There is no possibility to run it via /etc/init.d/nginx. Is there any way to register it as a service?
Passenger with NginX not registered as service in Fedora
In each server block, you can define which version of PHP nginx should use:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}

or:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php8.0-fpm.sock;
}
I'm running Ubuntu 18.04 with nginx/1.14.0. I have been running PHP 7.2, but some of my web applications require a newer PHP for security reasons. Since it is nginx, I use PHP-FPM. I used apt to upgrade to the latest version of PHP.

# /usr/bin/php -v
PHP 8.0.2 (cli) (built: Feb 14 2021 14:21:15) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.2, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.2, Copyright (c), by Zend Technologies

So that looks right. But the application still complains about PHP-FPM 7.2, and phpinfo confirms:

PHP Version 7.2.34-13+ubuntu18.04.1+deb.sury.org+1

So it sounds like I should change the PHP conf file. Here's what I get when I try to find it:

# locate php.conf | more
/etc/nginx/snippets/fastcgi-php.conf

OK. So I seek php.ini:

# locate php.ini | more
/etc/php/7.2/cli/php.ini
/etc/php/7.2/fpm/php.ini
/etc/php/7.2/fpm/php.ini.orig
/etc/php/7.2/fpm/php.ini.ucf-dist
/etc/php/8.0/apache2/php.ini
/etc/php/8.0/cli/php.ini
/etc/php/8.0/fpm/php.ini
/usr/lib/php/7.2/php.ini-development
/usr/lib/php/7.2/php.ini-production
/usr/lib/php/7.2/php.ini-production.cli
/usr/lib/php/8.0/php.ini-development
/usr/lib/php/8.0/php.ini-production
/usr/lib/php/8.0/php.ini-production.cli

I am not seeing a conf file that would make the choice for nginx or PHP where I would tell it to use PHP-FPM 8.0. How do I get nginx/PHP to use the new version of PHP that is on my server instead of the old one?
after upgrade to php8.0, nginx still uses php7.2 for PHP-FPM
I had the same issue, not from upgrading to Catalina but because of installing a program which upgraded my version of OpenSSL, which broke other apps that depended on OpenSSL. In my case Ruby (2.3.8 with RVM) and MySQL (MariaDB in fact). In the case of Ruby, it was incompatible with the new version of OpenSSL, so I had to install it with RVM's packaged OpenSSL:

rvm pkg install openssl
rvm reinstall 2.3.8 --with-openssl-dir=$HOME/.rvm/usr

In the case of MySQL I just upgraded it, so it got installed against the new OpenSSL on my system:

brew upgrade mariadb

That solved my issues. I think in your case you could upgrade (or uninstall and reinstall) MySQL and Nginx, so they will correctly use the new version of OpenSSL. (P.D. In the case of MySQL it kept my databases without problems.)
I have updated my development environment to the latest version of macOS Catalina. Then the nginx and mysql servers stopped working. When I try to run either of these I get the same error:

dyld: Library not loaded: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib
  Referenced from: /usr/local/bin/nginx
  Reason: image not found

I've been reading a lot of posts and they mostly say the same: OpenSSL is a dependency library with the new macOS. The fix looks pretty easy: remove the OpenSSL installation and re-install the latest version, which is openssl@1.1. I have already done it, but I'm still getting the same error. I think it's because, according to the error message, both nginx and mysql are expecting version 1.0.0 and I'm installing the latest 1.1. I have been trying to install version 1.0 with Homebrew, but I'm not able to find it. Is it possible to get this old version? Or should I upgrade my nginx and mysql software versions?
dyld: Library not loaded: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib when running nginx and mysql after macOS upgrade to Catalina
Update: So the solution was pretty simple. For IP addresses to work with the Subject Alternative Names, we must provide the IP inside the ext file that is used for creating the certificate:

subjectAltName = @alt_names
extendedKeyUsage = serverAuth

[alt_names]
DNS.1 = localhost
IP.1 = 192.168.98.18

Now it's working properly.

Edit: You can see the complete steps that I've followed here: https://medium.com/@pavanskipo/how-to-secure-a-private-ip-address-with-https-nginx-ubuntu-ef8374dbfa4e
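For reference, a sketch of signing a CSR with such an ext file; the filenames are placeholders, and this assumes a self-signed CA as in the linked guide:

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out server.crt -days 365 -sha256 -extfile server.ext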
Problem: I want to run nginx with https, like https://192.168.100.110.

What I've tried: I followed the quick guide at https://www.humankode.com/ssl/create-a-selfsigned-certificate-for-nginx-in-5-minutes. I am able to open https://localhost properly in Chrome, but I want the self-signed certificate to work with https://192.168.100.110. Please let me know if more clarification is needed.
Secure https with nginx for Private IP address
The last element of the try_files statement is the default action, which is either a URI or a response code. The /index.html term starts a search for a new location to process the request, and ends up back at the start, so you have a redirection loop. You can fix the problem by making /index.html a file term instead. For example:

try_files /$geoip_country_code/index.html /index.html =404;

The =404 is never actioned, because /index.html always exists. Personally, I would use a generic solution:

try_files /$geoip_country_code$uri $uri /index.html;

See this document for more.
I am using the $geoip_country_code module of nginx to redirect users based on IP. My config code is as given below:

server {
    listen 80;
    gzip on;
    server_name example.com;
    root html/example;

    location / {
        index index.html;
        try_files /$geoip_country_code/index.html /index.html;
    }

    location ~* \.(gif|jpg|jpeg|png|js|css)$ {
    }
}

The idea is to redirect the user based on country where I have localised content; otherwise the user gets the default index, which is generic for everyone else. It works perfectly when opened in a browser from my country, as I have localised for it. But when opened from a different country it shows an internal server error. It's not going to the default index page. Can someone point out what I am doing wrong?
Nginx try_files not working for default index.html
No, the auto-index feature does not support filtering. But you can change the permissions of the files so they are not visible/served, though that only works if you don't want them accessible at all. You could try to manually modify the response body with the sub module, using a regexp that matches the files to hide.
I finally figured out how to show a directory listing of a folder using nginx. The problem is that it shows every file and directory in that folder. Is it possible to filter the results? Like show only files with a specific extension or something like that? Thanks
Show only some files in directory listing with NGINX
The reason a firewall rule may have no immediate effect on blocking traffic is stateful inspection of packets. It may be inefficient for the firewall to analyse every single packet that arrives on the line, so, for performance reasons, the rules the user creates often apply only to the initial packets that establish the connection (TCP's SYN, SYN+ACK, ACK); subsequently, said connection is automatically whitelisted (to be more precise, it is the state that the original rule has created that is whitelisted) until terminated (FIN). What likely happens here is that, due to pipelining and keep-alive connections, which nginx excels at, a single connection may be used to issue and process multiple independent HTTP requests. So, in order to fix the issue, you could either disable pipelining and keep-alives in nginx (not a good idea, as it'll affect performance), or drop the existing whitelisted connections, e.g. with something like tcpdrop(8) on *BSD; surely there must be a Linux equivalent tool, too. However, if you're simply having an issue with a single client performing too many requests and overloading your backend, then the appropriate course of action may be to rate-limit the clients based on the IP address, with the help of the standard limit_req directive of nginx. (Note, however, that some of your customers may be behind a carrier-grade NAT, so be careful with how much limiting you apply to ensure false positives won't be an issue.)
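On Linux, the conntrack tool can presumably play the role of tcpdrop by deleting the established flow entries for the offending source, so the DROP rule applies to the next packets; a sketch with a hypothetical address:

conntrack -D --src 203.0.113.45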
I have the following iptables rules with an ipset as the rule source to block attacking IPs, but when I add an attacking IP to the ipset, in my nginx access log I still see continuous access from that IP. After a while, maybe 3~5 minutes, the IP gets blocked.

iptables:

~$ sudo iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 317K packets, 230M bytes)
num   pkts  bytes  target  prot opt in  out  source     destination
1     106K  6004K  DROP    all  --  *   *    0.0.0.0/0  0.0.0.0/0    match-set Blacklist src

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts  bytes  target  prot opt in  out  source     destination
1     0     0      DROP    all  --  *   *    0.0.0.0/0  0.0.0.0/0    match-set Blacklist src

Chain OUTPUT (policy ACCEPT 350K packets, 58M bytes)
num   pkts  bytes  target  prot opt in  out  source     destination

ipset:

sudo ipset -L
Name: Blacklist
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536 timeout 60
Size in memory: 13280
References: 2
Members:
xxx.xxx.xxx.xxx (attacker ip) timeout 0

I don't know why the rule has no immediate effect, which makes me crazy; it's as if the attacker is laughing at me. I added the ipset rule to iptables with the -I option, which should keep the rule in the first position. So maybe the Chain INPUT (policy ACCEPT) does the trick? Please help me out, thanks so much. BTW, I use Nginx + Django/uWSGI to deploy my application, and I use a shell script to analyze the nginx log and put evil IPs into the Blacklist ipset.
IPTables do not block IP with ipset immediately
Wow, I did pretty much the same (checking docker stats and then using Grafana with cAdvisor and InfluxDB to plot the increase) with my application (not nginx). And I agree with your conclusion that the page cache is contributing to that increase in memory. After some digging into cgroups metrics for that container, I solved my own question: https://stackoverflow.com/a/41687155/6919159. If you set a limit on the container's memory usage as described in my answer, you should see the container reclaiming memory. Hope it helps, though it's been 2 months!
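For a compose v2 file like the one implied by the question, the limit would presumably look something like this; the 512m value is an arbitrary example to tune:

version: "2"
services:
    nginx:
        image: nginx
        mem_limit: 512m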
I'm trying to set up proxy content caching with Nginx inside of Docker, but am experiencing memory issues with my container. The actual Nginx implementation works fine (pages are being cached and served as expected), but as soon as pages start being cached, my container memory (measured with "docker stats") climbs extremely quickly. I would expect about a 1MB increase for every 8,000 pages cached as per the Nginx docs (https://www.nginx.com/blog/nginx-caching-guide/), but the growth is far greater: probably around 40MB every 8,000 pages. Additionally, when running "top" inside my container, the nginx process memory looks normal (a couple MB) while my container memory is skyrocketing. It almost seems like the cached pages themselves, which are stored in a specific directory, are taking up memory? This shouldn't be the case, as only the cache keys should be in memory. I think I've tested to around 25,000 pages being cached; container memory never falls off. Additionally, if I'm just proxying requests with caching turned off, there is no container memory spike. I'm running an extremely basic nginx configuration, pretty much what is detailed in the Nginx docs link:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}

Docker images tested: official nginx image, alpine:3.4 with nginx installed, centos:7 with nginx installed.
Docker versions tested: Docker for Mac 1.12.1, Docker 1.11.2 (on Kubernetes).

(Grafana dashboard showing memory growth)
Nginx content caching causing Docker memory spike
Try an SSL checker to check whether the SSL setup is the problem or not. It will verify your server certificate and tell you where the problem is.
Here is my Nginx conf file:

upstream app {
    server unix:/home/deploy/example_app/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    listen 443 ssl;
    # ssl on;
    server_name localhost example.com www.example.com;
    root /home/deploy/example_app/current/public;

    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    try_files $uri/index.html $uri @app;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_pass http://app;
    }

    location /.well-known {
        allow all;
    }

    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

The paths to the certificates are correct, but when I access https://example.com it stays loading forever. Is there any problem with my SSL setup?
Enable SSL on Ruby on Rails app with Nginx and Puma
This is because Nginx caches the DNS response for upstream servers. In your workflow you're only restarting the app container, so Nginx doesn't reload and keeps using its cached IP address for the api container. When you run a new api container, as you've seen, it can have a different IP address, so the cache in Nginx is no longer valid. The ping works because it doesn't cache Docker's DNS response. Assuming this is just for dev and downtime isn't an issue, docker-compose restart nginx after you rebuild the app container will restart Nginx and clear the DNS cache.
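A commonly cited alternative, if restarting is a nuisance, is to force runtime resolution by using a variable in proxy_pass together with Docker's embedded DNS server; a sketch, where the port comes from the question and the valid= interval is an arbitrary choice:

resolver 127.0.0.11 valid=10s;

location ~ ^/api/?(.*) {
    set $api_upstream http://api:3000;
    proxy_pass $api_upstream;
}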
I'm using docker-compose with "Docker for Mac" and I have two containers: one NGINX, one container serving a node app on port 3000. docker-compose.yml looks like this:

version: "2"
services:
    nginx:
        build: ./nginx
        ports:
            - "80:80"
        links:
            - api
    api:
        build: ./api
        volumes:
            - "./api:/opt/app"

In the NGINX config I say:

upstream api {
    server api:3000;
}

server {
    # ....

    location ~ ^/api/?(.*) {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        proxy_pass http://api;
        proxy_redirect off;
    }
}

Now, when I change something in the node code and rebuild the container:

$ docker-compose stop api && docker-compose up -d --build --no-deps api

the container is rebuilt and started. The problem is that sometimes the internal IP of the container changes and NGINX won't know about that. Funny enough, when I go into the NGINX container and ping api I get the new IP address:

$ ping api
PING api (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: icmp_seq=0 ttl=64 time=0.236 ms

but the NGINX logs still say:

2016/10/20 14:20:53 [error] 9#9: *9 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: localhost, request: "GET /api/test HTTP/1.1", upstream: "http://172.19.0.7:3000/api/test", host: "localhost"

where the upstream's 172.19.0.7 is still the old IP address. PS: this doesn't happen every time I rebuild the container.
docker compose: rebuild of one linked container breaks nginx's upstream
The filename is constructed from the value of the root directive and the URI. So in this case:

location /another {
    root html/test;
    index bar.html;
}

the URI /another/bar.html will be located at html/test/another/bar.html. If you want the value of the location directive to be deleted from the URI first, use the alias directive:

location /another {
    alias html/test;
    index bar.html;
}

Then the URI /another/bar.html will be located at html/test/bar.html. See this document for details.
I want to achieve serving different index files for different URIs. Below is my config:

server {
    listen 8888;
    server_name localhost;

    location / {
        root html/test;
        index foo.html;
    }

    location /another {
        root html/test;
        index bar.html;
    }
}

I want a request for localhost:8888/another to respond with bar.html, which is present in my test directory, but I'm failing :( How can I fix the config above? Thanks for your time.
nginx serve multi index files from location
The following nginx location block has to be added in order to serve the subroutes from the index file being served from redis. A detailed explanation and the full nginx config can be found here.

# This block handles subrequests. If any subroute is requested, rewrite the URL to root and try to render the subroute page by passing the subroute to the index file (which is served by redis).
location ~* / {
    rewrite ^ / last;
}
I am using the nginx-lua module with redis to serve the static files of an ember app. The index file content is stored in redis as a value, which is properly served by nginx when the (root) domain/IP is hit. If the login page is opened from a link, it opens properly. But when opened directly by hitting the URL bar or refreshing the page, nginx gives 404 Not Found. The index file is in redis and the rest of the files are served from compiled JS present on a CDN. Following is the nginx configuration:

server {
    listen 80;
    server_name 52.74.57.154;
    root /;
    default_type text/html;

    location = / {
        try_files $uri $uri/ /index.html?/$request_uri;
        set_unescape_uri $key $arg_index_key;
        set $fullkey 'ember-deploy-cli:index:${key}';

        content_by_lua '
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(1000) -- 1 sec
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.say("failed to connect: ", err)
                return
            end
            if ngx.var.key == "" then
                --ngx.say("No Argument passed")
                local res, err = red:get("ember-deploy-cli:index:current-content")
                ngx.say(res)
                return
            end
            local res, err = red:get(ngx.var.fullkey)
            if res == ngx.null then
                ngx.say("Key doesnt exist ")
                return
            end
            ngx.say(res)
        ';
    }
404 page not found when a url is hit but properly served when opened from the link on index page
Remove the server_name line; it's not needed in nginx unless you want to serve different content depending on the host name you receive. If you remove that line, nginx will answer any request that arrives at your server on the proper port (80 in this case), whether it comes with myapp.cloudapp.net or mydomain.pk in the Host header. This assumes that there is no other configuration in /etc/nginx/sites-enabled that would catch the requests.
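Alternatively, one might keep server_name and list both hostnames explicitly; a sketch using the names from the question:

server {
    listen 80;
    server_name myapp.cloudapp.net mydomain.pk www.mydomain.pk;
    ...
}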
I have a Django web application hosted on a VM with the Debian-based Ubuntu as the OS, and an nginx reverse proxy + gunicorn as the webserver. The DNS name of this web application is myapp.cloudapp.net. I also have a ccTLD, mydomain.pk, that I need configured as a custom domain name for this web application. My original registrar only supported nameservers, so I made an account on dns.he.net (a free DNS hosting provider) to host my nameservers and set up the CNAME for my machine. My problem is that once I set up the CNAME to point to my web app's DNS, entering mydomain.pk in the browser merely shows me a generic Welcome to nginx! page, whereas entering myapp.cloudapp.net (or myapp.cloudapp.net:80) in the browser correctly opens up the web application. Why isn't setting up the CNAME working? I've talked to the support staff at dns.he.net; I've been told my CNAME is set up correctly and that there might be some problem with my nginx configuration. For instance, here's my etc/nginx/sites-available/myproject file:

server {
    listen 80;
    server_name myapp.cloudapp.net;
    charset utf-8;
    underscores_in_headers on;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myuser/projectpk/project;
    }

    location /static/admin {
        root /home/myuser/.virtualenvs/projectpk/local/lib/python2.7/site-packages/django/contrib/admin/static/;
    }

    location / {
        proxy_pass_request_headers on;
        proxy_buffering on;
        include proxy_params;
        proxy_pass http://unix:/home/myuser/projectpk/project/project.sock;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/myuser/projectpk/project/templates/;
    }
}
Setting up nginx to support custom domain name
You're after the $hostname common variable. Common variables are listed in the variable index. The nginx access log documentation only shows variables that are specific to the access log: "The log format can contain common variables, and variables that exist only at the time of a log write."
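Given the goal is a response header rather than a log field, something like the following should do; add_header and $hostname are both standard, and the header name is your choice:

add_header X-Served-By $hostname always;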
I'm trying to set up response headers on my separate webservers that output the physical name of the machine that nginx is running on, so that I can tell which servers are serving the responses to our web clients. Is there a variable that exists to do this already? Or do I just have to hardcode it per server :(
Nginx variable for physical server name
Nginx doesn't have .htaccess files; the code needs to go into the nginx config file. Also, try adding the "last" flag to the rewrite:

# nginx configuration
autoindex off;

location / {
    rewrite .* /index.php last;
}
I need to change from Apache to Nginx, but the .htaccess doesn't work on the Nginx server. I have the following in my Apache .htaccess:

RewriteEngine On

# always run through the indexfile
RewriteRule .$ index.php

# don't let people peek into directories without an index file
Options -Indexes

When I put a .htaccess on the Nginx server, the file gets deleted by the server. In what kind of file do I have to put the .htaccess data (name and suffix please), how should it be rewritten for Nginx, and where do I put the file? I want all subdirectory files to go through my index.php file so I can include specific files from my index.php file... I have tried something like:

# nginx configuration
autoindex off;

location / {
    rewrite .$ /index.php;
}

but it doesn't work. The directory looks something like:

── root
   ├── folder1
   │   └── folder2
   │       └── file.php
   └── index.php

so if I ask for root/folder1/folder2/file.php, I want the server to load root/index.php. As I still have the server URL request, I can include root/folder1/folder2/file.php from my index.php. It all works on my Apache install, but not on the Nginx server. Please help me.
apache .htaccess to nginx rewrite rule
You may want to set:

proxy_ignore_client_abort off;

"Determines should the connection with a proxied server be closed if a client closes a connection without waiting for a response." (from the documentation)

Another suggestion is to use limit_req to limit the request rate.
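A sketch of the rate-limiting approach keyed by client IP; the zone name, size, rate, and burst are illustrative values, not from the question:

# in the http block
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# in the server/location that proxies to the backend
location / {
    limit_req zone=perip burst=20 nodelay;
    proxy_pass http://backend;
}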
We use nginx with an application server as a backend. We need to limit the number of simultaneous connections per IP to the backend. We used the limit_conn nginx directive for this purpose, but it doesn't work well in all cases. If a user generates a lot of connections from one IP and quickly closes them, then nginx passes these requests to the backend, but because the client connection is already closed, these connections are not counted by limit_conn. Is it possible to limit the number of simultaneous connections per IP to the backend server with nginx?
Nginx: Limit number of simultaneous connections per IP to backend
Ok, so after a bit of research I've discovered that the best way to do this currently is indeed with an AWS US East EC2 instance running some sort of proxy. I've gone with Linux/nginx. I've also learned there is a Heroku add-on, currently in the alpha stage of development, that will handle exactly this requirement. If you'd like to test it, get in touch with Heroku support.
I need my app to be able to access a third-party API which limits access based on a single, static IP address. Due to the dynamic nature of the Heroku dynos and routing mesh, this is not possible directly; I'll need something with a fixed IP address to act as a proxy. A US East EC2 Linux/nginx instance would seem the sensible choice, but this seems like a lot of work/maintenance for something pretty trivial. Does anyone know of any services out there that do this?
What is a good strategy for accessing an API which is limited to a static IP Address from Heroku?
Yay, I fixed it! The nginx configuration was correct before I changed chunked; the python/flask code, however, should have been:

@app.route('/')
def index():
    rv = cache.get('request:/')
    if rv == None:
        rv = render_template('index.html')
        cachable = make_response(rv).data
        cache.set('request:/', cachable, timeout=5 * 60)
    return rv

That is, I should only cache the data, and that can only be done, afaik, if I do make_response first.
I'm trying to cache Python/flask responses with memcached, and then serve the cache using nginx. I'm using flask code that looks something like this:

from flask import Flask, render_template
from werkzeug.contrib.cache import MemcachedCache

app = Flask(__name__)
cache = MemcachedCache(['127.0.0.1:11211'])

@app.route('/')
def index():
    index = cache.get('request:/')
    if index == None:
        index = render_template('index.html')
        cache.set('request:/', index, timeout=5 * 60)
    return index

if __name__ == "__main__":
    app.run()

and an nginx site configuration that looks something like this:

server {
    listen 80;

    location / {
        set $memcached_key "request:$request_uri";
        memcached_pass 127.0.0.1:11211;
        error_page 404 405 502 = @cache_miss;
    }

    location @cache_miss {
        uwsgi_pass unix:///tmp/uwsgi.sock;
        include uwsgi_params;
        error_page 404 /404.html;
    }
}

However, when it pulls from the cache, the HTML code is prefixed with a V, contains \u000a characters (line feeds) and garbled local characters, and is suffixed with "p1 .", as such:

V\u000a\u000a \u000a \u000a\u000a [...] \u000a\u000a\u000a p1 .

despite Content-Type being "text/html; charset=utf-8". Supposedly the V [...] p1 . thing might have something to do with chunked transfer encoding, a flag that is not present in the response header. What should I do?
nginx with flask and memcached returns some garbled characters
If you take a look at this thread on the Google Groups, you will see that the preferred approach is to set the context path. The recommendation is to use a bootstrap job to set the context per application in the following way:

Play.ctxPath="/project1";
Router.detectChanges(Play.ctxPath);

So your code would be:

Play.ctxPath="/cms";
Router.detectChanges(Play.ctxPath);

etc.
I have developed 2 applications with Play Framework, accessing different information, so it does not make sense to merge them into a single app. Now I need to deploy both apps on the same hostname, each one in a separate sub-folder (URI), for example:

example.com/payment/
example.com/cms/

And I am having problems with routes. I configured an nginx webserver to work as a reverse proxy. It delivers the first page as expected. But once I click anything, instead of going to /cms/Application/index it links back to /Application/index (without /cms/). IMHO I believe I need to change my routes file, hardcoding /cms/ on all paths, but it seems a bad approach because if I need to deploy the app on another URI I will need to change routes again. What is the best way to deploy two apps on the same hostname?

----- nginx.conf -----
...
location /cms {
    proxy_pass http://localhost:9001/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location /payment {
    proxy_pass http://localhost:9002/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
...
----- nginx.conf -----
Deploying two different Play! applications on the same hostname
A workaround was found at http://groups.google.com/group/phusion-passenger/browse_thread/thread/f91cd54bd379ad26/0a510133a080daac. Add to config.ru:

ENV['RAILS_ENV'] = ENV['RACK_ENV'] if !ENV['RAILS_ENV'] && ENV['RACK_ENV']
I'm having trouble getting a Rails app to run in the production environment via Phusion Passenger on Nginx/Ubuntu. According to the docs, the environment is controlled by the rails_env option in nginx.conf... but it runs in development mode on our box regardless of whether we specify 'rails_env production;' or leave it out (the default is said to be production). Other notes:

The Linux environment variable RAILS_ENV is also set to production.
We can run in production mode using 'script/server -e production', so it doesn't seem to be a case of Ruby code overriding the environment.

Any ideas? Full nginx.conf:

worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /var/lib/gems/1.8/gems/passenger-2.2.7;
    passenger_ruby /usr/bin/ruby1.8;

    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    server {
        listen 80;
        server_name bar.foo.com;
        root /home/foo/dev/bar/public;
        passenger_enabled on;
        rails_env production;
    }
}
Can't force Rails into production environment via Passenger/Nginx
This error can indicate multiple problems. The fact that it works for you locally strengthens the probability that the issue lies on the nginx side. You can try to solve it by increasing the timeout thresholds (as suggested here) and the buffer sizes. Add this to your server's nginx.conf:

proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
When I tried to upload a big CSV file of about 600MB to my project, which is hosted on DigitalOcean, it tries to upload but shows a 502 Bad Gateway Error (Nginx). The application is a data conversion application. This works fine while working locally.

sudo tail -30 /var/log/nginx/error.log shows:

[error] 132235#132235: *239 upstream prematurely closed connection while reading response header from upstream, client: client's ip, server: ip, request: "POST /submit/ HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/submit/", host: "ip", referrer: "http://ip/"

sudo nano /etc/nginx/sites-available/myproject shows:

server {
    listen 80;
    server_name ip;
    client_max_body_size 999M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /root/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}

nginx.conf:

user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

I also have the javascript loader running while the conversion process takes place. How can I fix this?
How to fix 502 Bad Gateway Error in production(Nginx)?
The issue in my case was indeed the cert, as the log says; the documentation was not clear! I had to create a generic secret for the certs with the CA, because my certificate is self-signed:

kubectl create secret generic proxy-ca-secret --from-file=tls.crt=client.crt --from-file=tls.key=client.key --from-file=ca.crt=ca.crt

The mistake I made was having the certificate and chain in cert.pem and importing it as a tls secret:

kubectl create secret tls proxy-ca-secret --key "client.key" --cert "client.pem"
The k8s ingress controller doesn't pass a certificate to the upstream https service. With nginx I could achieve this with something like:

location /upstream {
    proxy_pass https://backend.example.com;
    proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}

Am I missing something here? My current config looks something like this. I don't want to pass through the SSL coming from the client; it will terminate here.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "proxy-ca-secret"
    nginx.ingress.kubernetes.io/proxy-ssl-name: "backend.example.com"
spec:
  rules:
  - http:
      paths:
      - path: /(api/auth/.*)
        backend:
          serviceName: auth
          servicePort: 8080

The log shows

SSL_do_handshake() failed (SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:SSL alert number 42) while SSL handshaking to upstream

I verified the base64 cert with openssl and the cert looks fine. Thanks in advance!
k8s reverse proxy secure upstream with self signed cert nginx
NGINX will not alter the 500 from the app as long as it doesn't hit a problem contacting / fetching data from Apache. E.g. it's perfectly possible that your app generates a 500, but a problem in NGINX's communication with Apache results in a different 50x, and that 50x is the one the client will see.

If Apache is completely down, you should be getting a 502 (Bad Gateway), because, in your setup, Apache is the gateway for NGINX. The same will happen if NGINX does not "like" Apache's response in some way, e.g. when Apache sends a response whose headers exceed NGINX's proxy_buffer_size.

Yes, you should be getting a 504 (Gateway Timeout) when Apache/the app times out relative to NGINX's timeouts.

As for when 502 is thrown: see the Apache-down case above. And the following: NGINX simply passes through whichever response code comes from the upstream (the gateway = Apache), so by default it takes no view on whether a given response is invalid in terms of response codes.

You can have NGINX take error response codes coming from Apache into consideration and act differently by use of proxy_intercept_errors, which, combined with error_page, allows you to "rewrite" response codes / error messages from Apache, e.g. to masquerade app failures as Service Unavailable:

error_page 500 =503 /503.html;
proxy_intercept_errors on;
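A minimal sketch of how those two directives sit in a full configuration; the upstream name apache_backend and the /503.html location are assumptions for illustration:

server {
    listen 80;

    location / {
        proxy_pass http://apache_backend;

        # replace any 500 coming back from Apache/the app
        # with a 503 page served by nginx itself
        proxy_intercept_errors on;
        error_page 500 =503 /503.html;
    }

    # serve the custom error page locally, never proxied
    location = /503.html {
        root /usr/share/nginx/html;
        internal;
    }
}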
I've gone through the HTTP response codes and understand what these response codes (rcodes) stand for. But I am not sure what rcode will be sent to the client/consumer (say, a browser) in the scenarios below. I am using NGINX as a reverse proxy and Apache as the HTTP server running the web application (say, the app) behind NGINX.

A couple of scenarios:

1. A runtime error occurs in the app, which throws rcode 500 (the default runtime error code). My understanding is nginx will continue to throw 500 and not convert it to 502?
2. The app is down or not available. My understanding is nginx will throw 503, not 502, in this case?
3. The app takes more time to process than nginx's default connection timeout. My understanding is nginx will throw 504 in this case?

If all the above points are correct, I'm not sure when 502 will be thrown by nginx. When will NGINX consider the response received from the upstream server to be an invalid response?
HTTP response codes 500 vs 502 vs 503?
Redirect from subdomain to subfolder on main site

Do you require a redirect from a subdomain to a subfolder on the main site? This would be best accomplished by a separate server context, with the appropriate server_name specification. Else, you could also do this with an if statement testing against $host. As already pointed out elsewhere, the rewrite directive operates on $uri, which does not contain the hostname.

server_name-based matching (recommended):

Hardcoded redirect with a limited number of hostnames (recommended):

server {
    server_name plugin.example.com;
    return 301 $scheme://example.com/plugin$request_uri;
}
server {
    server_name about.example.com;
    return 301 $scheme://example.com/about$request_uri;
}

Regex-based redirect from any subdomain to the main domain:

server {
    server_name ~^(?:www\.)?(?<subdomain>.*)\.example\.com$;
    return 301 $scheme://example.com/$subdomain$request_uri;
}

Regex-based redirect from a limited number of subdomains to the main domain:

server {
    server_name ~^(?:www\.)?(?<subdomain>plugin|about)\.example\.com$;
    return 301 $scheme://example.com/$subdomain$request_uri;
}

if-based:

If-statement-based redirect with hardcoded hostnames:

server {
    server_name .example.com;
    …
    if ($host = plugin.example.com) {
        return 301 $scheme://example.com/plugin$request_uri;
    }
    if ($host = about.example.com) {
        return 301 $scheme://example.com/about$request_uri;
    }
    …
}

If-statement-based redirect with regex-based matching:

server {
    server_name .example.com;
    …
    if ($host ~ ^(?:www\.)?(?<subdomain>plugin|about)\.example\.com$) {
        return 301 $scheme://example.com/$subdomain$request_uri;
    }
    …
}

Please refer to http://nginx.org/r/server_name for more discussion of which option may be best for you.
Trying to do a simple redirect:

rewrite https://url.example.com(.*) https://example.com/plugins/url permanent;

Any time url.example.com is hit, I want it to redirect to that specific path.

EDIT: I'll try to explain this better, as I'm trying to redirect to a specific domain from another.

server {
    server_name example.com plugin.example.com;
    root /home/www/example.com/public;
}

I see location used for redirects, such as:

location / {
    try_files $uri $uri/ /index.php?$query_string;
}

But I'm not sure how to use it in my case, which is to change plugin.example.com to example.com/plugin. For example:

http://plugin.example.com
https://plugin.example.com
https://plugin.example.com/blah
https://plugin.example.com/blah/more

All of these should redirect to:

https://example.com/plugin
Nginx redirect rule has no effect
I just encountered the same problem. I solved it by using a configuration-snippet:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-cors-auth-ingress
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # fix cors issues of ingress when using external auth service
      if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 204;
      }
      more_set_headers "Access-Control-Allow-Credentials: true";
      more_set_headers "Access-Control-Allow-Methods: GET, POST, PUT, PATCH, DELETE, OPTIONS";
      more_set_headers "Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization";
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
      more_set_headers "Access-Control-Max-Age: 600";
    nginx.ingress.kubernetes.io/auth-url: "http://auth-service.default.svc.cluster.local:80"
I can create an ingress with basic auth. I followed the template from kubernetes/ingress-nginx:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80

It works fine, but I need to allow the 'OPTIONS' method without basic auth for pre-flight requests. Any pointers on how to do this would be very helpful.
How can I put basic auth on specific HTTP methods in nginx ingress?
This is a websocket-based app, so you need additional nginx config:

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    #proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:8080;  # note: proxy_pass requires a scheme; the original snippet was missing http://
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
}
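In the question below the hub is proxied under /proxy/ rather than /, so presumably the upgrade headers belong in that location block. A sketch merging the answer's headers into the question's existing config (the Azure hostname is the placeholder from the question):

location /proxy/ {
    proxy_pass http://xxxxxxxxxx.westeurope.cloudapp.azure.com/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # WebSocket upgrade for SignalR; HTTP/1.1 is required for Upgrade to work
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_cache_bypass $http_upgrade;
}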
I am struggling with SignalR 3 and an nginx reverse proxy configuration. My nginx config looks like this:

server {
    listen 80;
    server_name my.customdomain.com;

    location / {
        root /pages/my.customdomain.com;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    ## send request back to kestrel ##
    location /proxy/ {
        proxy_pass http://xxxxxxxxxx.westeurope.cloudapp.azure.com/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

What am I missing here? When I browse my page, I receive OK for

GET /proxy/notifications/negotiate
GET /proxy/notifications?id=uFQtMDg1dXib6LGvUssQhQb

but a 404 for POST

POST /proxy/notifications?id=uFQtMDg1dXib6LGvUssQhQ

Please help!

P.S. My Hub is very simple:

[AllowAnonymous]
public class NotificationHub : Hub
{
}
How to configure nginx to support signalr3 under cloudflare?
Assuming you're happy with trusting the IP in the header of the second request, then yes, you can do it with use-server:

backend bk_foo
    [...]
    server srv_0a_00_01_05 10.0.1.5:80 weight 100
    server srv_0a_00_02_05 10.0.2.5:80 weight 100
    use-server %[req.hdr(x-backend-ip),lower,map_str(/etc/haproxy/hdr2srv.map,srv_any)] if { req.hdr(x-backend-ip),lower,map_str(/etc/haproxy/hdr2srv.map) -m found }

Contents of /etc/haproxy/hdr2srv.map:

#ip       srv_name
# hex of IP used for names in this example
10.0.1.5  srv_0a_00_01_05
10.0.2.5  srv_0a_00_02_05

If you need to down one of the servers, you should dynamically update the map to remove it, so that requests with the header set get redirected again. If you have multiple backends, you can do something similar with use_backend.
We have a setup similar to this diagram, where a request arrives at HAProxy and gets round-robin balanced to any server; the backend server checks its cache, and if the resource is not on that server it issues a redirect with a header set to the correct server IP.

The second time the request arrives at HAProxy, it detects that the header naming the backend server is present, but how can I take that IP and direct the request straight to that server?

For example, the second time a request arrives at HAProxy it has the header X-BACKEND-IP=10.0.0.5. So instead of HAProxy load-balancing that request, I want it to read the header, take that IP, and go directly to that backend.

Is that possible? If not, would it be possible with nginx?
HAProxy dynamic server addresses
I have a setup with one VM using IIS as the LB, plus several VMs running IIS + Kestrel. It's working fine for my usage, but I'm curious to see whether other people have different suggestions. Beyond that, it depends on what you are doing: if you use encryption, the machine key needs to be shared between VMs; you might also need to share sessions between VMs (https://www.exceptionnotfound.net/finding-and-using-asp-net-session-in-core-1-0/), store things in a database, and so on.
I am quite confused as I haven't seen any blogs or instructions on how to host ASP.NET Core/.NET Core applications with HA and multi-host deployments. All examples are either:1) One NGINX reverse-proxy, one Kestrel 2) One IIS reverse-proxy, one KestrelAnd both components on same host. In real-life production environments, you have LB maybe service discovery, multiple frontends, multiple backends, etc. But for this case there are no instructions whatsoever. So my questions would be for multi-host environments:Do I deploy one IIS/NGINX as LB/Reverse-proxy, and redirect requests to Kestrels running on many separate VM:s, i.e. various different IP:s?Or do I run an NGINX/F5 for load-balancing on one host, then route http traffic to various VM:s that run IIS+Kestrel, or just Kestrel? Is IIS required in this setup as NGINX acts as LB?If I run IIS or NGINX as reverse-proxy, can they keep alive Kestrels on different VM:s, or does each Kestrel require exactly one IIS/NGINX to keep it alive? I.e. the Kestrel process must be on the same same host as the reverse-proxy?All answers are very welcome, and thanks a lot in advance! :)
Multi-host deployment of ASP.NET Core applications
I too had this issue. Everything on a page should be requested over https if you are using https and don't want warnings/errors. You don't need to implement an API to proxy if you are using nginx; whatever you implement will be a performance hit, as you correctly surmise. Just use proxy_pass in nginx. In our configuration we have:

location /thirdparty/ {
    proxy_pass http://thirdpartyserver/;
}

Notice the trailing slash in proxy_pass. I keep all third-party APIs that are http under https://myserver/thirdparty/requesturl. The trailing slash removes /thirdparty/ when making the request, so it becomes:

http://thirdpartyserver/request

Official reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
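The trailing-slash behavior is the part that usually trips people up, so here are the two variants side by side; thirdpartyserver is the placeholder host from the answer, and the two blocks are alternatives, not meant to coexist:

# With a trailing slash, the matched /thirdparty/ prefix is replaced:
#   https://myserver/thirdparty/foo?x=1  ->  http://thirdpartyserver/foo?x=1
location /thirdparty/ {
    proxy_pass http://thirdpartyserver/;
}

# Without a trailing slash, the full original URI is passed through:
#   https://myserver/thirdparty/foo?x=1  ->  http://thirdpartyserver/thirdparty/foo?x=1
location /thirdparty/ {
    proxy_pass http://thirdpartyserver;
}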
My application is running under HTTPS with a valid certificate from one of the known authorities. Unfortunately I am using a third-party API which doesn't support HTTPS. The result is the known message: Mixed content: mydomain.com requested an insecure XMLHttpRequest endpoint.

Is it possible to add an exception to the web server to allow calling this API insecurely? I am using Nginx, BTW. If not, what other possibilities are there to solve this problem?

I have a solution, but I don't like it because it will be a performance drawback: implement an API which acts as a proxy, receiving requests from the application through HTTPS and making the requests to the third-party API through HTTP.
Calling an insecure endpoint from a website running under HTTPS - nginx
If you can reach port 3000 from outside the machine, this means you programmed your Node.js application so that the HTTP server listens on all interfaces. This is not bad per se, and by default you should program your applications this way, because you can't anticipate future changes in the final deployment topology. Leave the responsibility of hiding the port from the outside world to the firewall (iptables comes to mind here), as suggested by Oxi. This way you don't need to change your code in the future to adapt it to a different deployment topology.

I, for example, had a similar case. I use HAProxy as a load balancer and for SSL termination, but in my case the HAProxy instance runs on a different host for performance reasons. If during development I had restricted my application to listen only for local connections, then I would have had to update my code just to adapt to the new topology.

I hope this helps you.
I am running a nodejs application with the app server listening on port 3000. I use nginx as a reverse proxy, which also handles SSL. The configuration is listed below (and it seems to me, after reading several tutorials and forum posts, that it is pretty standard). Everything works as expected except that I am still able to access the app under http://example.com:3000. Does that mean I need to add another server listening on port 3000 for redirects to https? This could either mean that the tutorials I've read so far are somewhat incomplete or that I am overlooking something fundamental. Can anyone help me figure out which it is?

# app server upstream
upstream app {
    server 127.0.0.1:3000;
}

# default http server. Redirect to https server
server {
    listen 80;
    server_name www.example.com example.com;
    return 301 https://example.com$request_uri;
}

# https server
server {
    listen 443;
    server_name www.example.com example.com;

    ssl on;
    ssl_certificate ssl.crt;
    ssl_certificate_key ssl.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nodejs app accessible on port 3000 behind nginx reverse proxy
A normal print goes to stdout, and nginx logs only stderr. You should use Flask's app.logger module instead. Have a look at the Flask documentation on error handling.
Following this tutorial, I've just set up nginx with uWSGI to serve my website, which I built in Flask, and things work fine for now.

I sometimes want to debug something, for which I normally use basic print statements in the code. Unfortunately I have no idea where the results of these prints go. I've tailed the following log files, but I don't see the prints in there:

/var/log/uwsgi/emperor.log
/var/log/uwsgi/myapp_uwsgi.log
/var/log/nginx/access.log
/var/log/nginx/error.log

Does anybody know where I can see the result of the prints?
Where do my Python prints go with Flask deployed under nginx with uWSGI?
You have to keep in mind that setting up DNS and configuring nginx are entirely different tasks.

The way I like to set up my DNS is to do a CNAME from www and other subdomains back to the original domain, if they are logically the same and hosted on the same server. (Technically, however, it's incorrect, because it means those subdomains inherit the main domain's MX records, as well as its TXT/SPF records, indicating that they could be used for mail. But no one really cares, so, as long as you don't make the TLD itself a CNAME, things should be fine.)

You don't see a 404 from random.example.com because the first server becomes your default server. To avoid this, you might want an extra server context with a listen directive carrying the default_server parameter, where you can unconditionally return 404; for all requests.

The reason your final server context might not be working may be a missing listen directive. It usually helps to have an identical listen directive across all of your servers.
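A minimal sketch of that catch-all server context, assuming plain HTTP on port 80:

server {
    listen 80 default_server;
    server_name _;    # catch-all; "_" is just a conventional dummy name
    return 404;
}

With this in place, the blog and static server blocks only answer for the hostnames actually listed in their server_name directives, and random.example.com gets a 404.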
I bought a domain name using Namecheap; for simplicity's sake let's call it example.com. I am running nginx on a Debian-based VPS.

I want to set up the following configuration:

(www.)example.com points to /var/www/blog
(www.)static.example.com points to /var/www/static

However, I can't wrap my head around configuring the subdomain using nginx, or is that something I need to do using Namecheap's control panel?

This is my configuration on Namecheap:

@    111.111.111.111    Record type: A             TTL: 1800
www  example.com        Record type: Cname/Alias   TTL: 1800

No subdomains are configured. Should I configure subdomains here!?

And here is my nginx configuration:

server {
    root /var/www/blog;
    index index.html index.htm;
    server_name localhost example.com www.example.com;
    location / {
        index index.html index.htm;
    }
}
server {
    root /var/www/static;
    index index.html index.htm;
    server_name static.example.com www.static.example.com;
    location / {
        index index.html index.htm;
    }
}

However, this leads to the following:

www.example.com points to the correct destination.
random.example.com points to www.example.com (I don't want this to happen; it should return a 404).
static.example.com gives me an error 400. If I look into my logs, it can't find the file /var/www/blog/static/index.html, while actually I want it to point to /var/www/static/index.html.
Configuring a subdomain on nginx using namecheap
There are different approaches here:

1. Using a firewall, allow access to B's http(s) port only from A's IP address.

2. Set a directory restriction in httpd.conf for app B's directory, like:

<Directory /path/to/appB>
    AllowOverride None
    Order allow,deny
    Allow from ip_A
</Directory>

3. In app A, create a link (http://ip_A/accesstoB/somepath/script.php) that will be proxied to B using an .htaccess rule like:

RewriteRule ^accesstoB/(.*)$ http://ip_B/$1 [P]

In this example: a customer accessing the http://ip_A/accesstoB/somepath/script.php link will be proxied to http://ip_B/somepath/script.php.
Our client has a set of (5-6) intranet/internet applications, either custom-developed or third-party, located on various web servers, which we cannot modify or control.

We have developed a web portal application (A), and the client wants all its other applications (B) to be accessed only via A, meaning that if a user enters the URL for B directly, he gets an error page telling him that access is allowed only via A. So a user has to log in to application A and then click a link to application B to access it. This requirement was asked for security reasons and to make A act as an access gateway to the other applications (B).

Is this possible, and how can we implement it? Should we use another web server on top acting as a proxy to all the other applications (B), or is there a better solution? And if we use another web server as a proxy, should we implement the referrer logic with a user-id/token approach combined with appropriate session cookies, so that application B's URL cannot be hacked and is unique per user and session?

Sorry if I stated my questions unclearly or in the wrong way, but I'm unfamiliar with network/system administration and web servers. I can provide more details where needed.
How can I restrict access to an application that I do not control only via another referrer application?
split_clients hashes the client address into one of three buckets, so each visitor is consistently redirected (via a Location header, as the question asks) to the same s1/s2/s3 server:

http {
    split_clients "${remote_addr}" $server_id {
        33.3%  1;
        33.3%  2;
        33.4%  3;
    }

    server {
        location ~* \.(gif|jpg|jpeg)$ {
            return 301 "${scheme}://s${server_id}.site.com${request_uri}";
        }
    }
}
I want to load-balance my website with nginx. The load balancing in the nginx wiki uses a proxy, i.e. the actual file is downloaded from the frontend server (http://wiki.nginx.org/LoadBalanceExample).

This is how I need the balancing to work instead:

1. The user requests a file: http:// site.com/image1.jpg
2. nginx redirects the user (with a Location header) to one of the servers:

http:// s1.site.com/image1.jpg
http:// s2.site.com/image1.jpg
http:// s3.site.com/image1.jpg

Is this possible with nginx?
Nginx Load Balancing
The solution I found was this: the uwsgi.ini file that I made to create the uWSGI workers didn't specify a socket. So I made another .ini file that defined a socket, and I put that same socket into the nginx conf under uwsgi_pass. Here is a link to Django's page on configuring uWSGI: https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/uwsgi/
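For illustration, here is the nginx side of that pairing. The socket path is an assumption; the only requirement is that it matches the socket line in the uwsgi .ini file:

server {
    listen 80;
    server_name beta.example.com;

    location / {
        include uwsgi_params;
        # must be the same socket the uwsgi .ini declares,
        # e.g. "socket = /tmp/django.sock" (or "socket = 127.0.0.1:9000" for TCP)
        uwsgi_pass unix:/tmp/django.sock;
    }
}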
I am using uWSGI and Nginx to serve my Django website (version 1.4). My file structure is django_mysite/django_mysite/, in which there is a wsgi.py file. I keep getting 502 Bad Gateway errors. I have other servers running off nginx and they are working fine.

My nginx config:

server {
    listen 80;
    server_name beta.example.com;
    keepalive_timeout 70;
    root /path/to/django_mysite/django_mysite;

    location root {
        root html;
        uwsgi_pass localhost:9000;
        uwsgi_param UWSGI_SCRIPT django_wsgi;
        include uwsgi_params;
    }

    location / {
        uwsgi_pass localhost:9000;
        include uwsgi_params;
        uwsgi_param SCRIPT_NAME /django;
        uwsgi_param UWSGI_SCRIPT django_wsgi;
        uwsgi_modifier1 30;
    }
}

My wsgi.py file:

import sys
import os

sys.path.append('/path/to/django_mysite/')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mysite.settings")

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

The error in the log is:

*3 recv() failed (104: Connection reset by peer) while reading response header from upstream

Thanks
nginx django 502 bad gateway
nginx has some kind of WebSocket support in the unstable 1.1 branch only; see the Socket.IO wiki. AFAIK there are currently only a few stable Node.js-based HTTP proxies that support WebSockets properly.

Check out node-http-proxy (we use this): https://github.com/nodejitsu/node-http-proxy

and bouncy: https://github.com/substack/bouncy

Or you can use a pure TCP proxy such as HAProxy.

Update! nginx (1.3.13 and later) supports WebSockets out of the box: http://nginx.org/en/docs/http/websocket.html
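For nginx 1.3.13 or later, a minimal sketch of the proxy configuration from the linked nginx docs, adapted to the upstream and hostnames used in the question below:

upstream node {
    server 127.0.0.1:8090;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://node;
        # required for the WebSocket handshake to reach the backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
    }
}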
I'm using Express.js to create a server to which I can connect using web sockets. Even though it eventually seems to work (that is, it connects and passes an event to the client), I initially get an error in Chrome's console:

Unexpected response code: 502

On the backend, socket.io only logs: warn - websocket connection invalid.

However, nginx logs this:

2012/02/12 23:30:03 [error] 25061#0: *81 upstream prematurely closed connection while reading response header from upstream, client: 71.122.117.15, server: www.example.com, request: "GET /socket.io/1/websocket/1378920683898138448 HTTP/1.1", upstream: "http://127.0.0.1:8090/socket.io/1/websocket/1378920683898138448", host: "www.example.com"

Note: I have nginx dev running (nginx version: nginx/1.1.14), so it should support HTTP/1.1. Also note that if I just use the node.js server without nginx, it works without any warnings.

Finally, here is my nginx config file:

server {
    listen 0.0.0.0:80;
    server_name www.example.com;
    access_log /var/log/nginx/example.com.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://node;
        proxy_redirect off;
    }
}

upstream node {
    server 127.0.0.1:8090;
}

Any help would be greatly appreciated. I tried the fix suggested in this question, but that didn't work either.
"websocket connection invalid" when using nginx on node.js server
Have you checked the max_execution_time value in php.ini? That's the only other configurable value I can think of that might be causing a timeout.
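If php.ini turns out to be the limit, one way to raise it for just this vhost, assuming a PHP-FPM/FastCGI backend that honors per-request PHP_VALUE overrides, is to pair it with nginx's own timeout; the socket path here is illustrative:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # adjust to your fastcgi backend

    # nginx side: wait longer before giving up on the upstream
    fastcgi_read_timeout 3600s;

    # PHP side: lift the script execution limit for this location only
    fastcgi_param PHP_VALUE "max_execution_time=3600";
}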
I need the timeout to be high so I can use a debugger on my source code. The request is getting passed to fastcgi from nginx correctly, but it always times out after 60 seconds. I've changed as many timeout parameters as I could find, and restarted nginx and fastcgi after every change, but nothing worked.

I see most users point questions like this to "How do I prevent a Gateway Timeout with FastCGI on Nginx". But that solution did not work for me.

The parameters I've increased are:

fastcgi_read_timeout (the above thread says this fixed the issue for that user)
client_header_timeout
client_body_timeout
send_timeout
nginx/fastcgi 504 gateway error, increasing fastcgi_read_timeout isn't helping
If this is not production, you can test what is being sent by nginx by launching the simplest possible listening server on the desired local address and port (instead of the real one):

$ nc -l 127.0.0.1 3000
POST /some/uri HTTP/1.0
Host: 127.0.0.1
Connection: close
Content-Length: 14

some payload

A response can be simulated by manually entering HTTP/1.1 200 OK, followed by two new lines, while nc is running.
I've got the following nginx conf:

http {
    log_format upstream_logging '[proxied request] '
        '$server_name$request_uri -> $upstream_addr';

    access_log /dev/stdout upstream_logging;

    server {
        listen 80;
        server_name localhost;

        location ~ /test/(.*)/foo {
            proxy_pass http://127.0.0.1:3000/$1;
        }
    }
}

When I hit http://localhost/test/bar/foo, my actual output is:

[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000

While my expected output is:

[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000/bar

Is there a variable or a way to produce the actual proxied URI in the log?
Nginx: log the actual forwarded proxy_pass request URI to upstream
Just change it to the following (replacing the last three location blocks):

location = / {
    include proxy_params;
    proxy_pass http://0.0.0.0:8000;
}

location / {
    try_files $uri $uri/ /index.html;
}

The location = / only matches the exact domain root, since = is an exact-match modifier that nginx checks before prefix locations; everything else will be matched by location / and served from the React build.
I have a web app that uses Django for the backend (and some frontend) and ReactJS strictly for the frontend. I am setting up my Nginx configuration and I am trying to get Nginx to proxy_pass only on the "/" location and then, for the rest of the locations, serve the index.html file from React.

Here is my current Nginx configuration under sites-available. The only URL that does not have the prefix "/home/" or "/app/" is the homepage. I want Django to serve the homepage and then ReactJS to serve the rest.

server {
    listen 80;
    root /home/route/to/my/ReactJS/build;
    server_name www.mydomain.com;

    location = /favicon.ico { log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://0.0.0.0:8000;
    }

    location /home/ {
        try_files $uri $uri/ /index.html;
    }

    location /app/ {
        try_files $uri $uri/ /index.html;
    }
}

Let me know if you need any more details or if I can clear things up. Thanks!
How to have Nginx to proxy pass only on "/" location and serve index.html on the rest
I've been having the same issue and first tried using Apache's ProxyPass to redirect /blog to port 2368, but found other issues doing this. Before trying my suggestions you should undo any changes made using http-proxy.

What seems to have worked for me is placing the code you have in index.js directly into your app.js file, instead of what you already have in there. You will need to add the ghost errors variable and rename parentApp to the name of your app. I'll call this yourAppName so it's clear, but mine is just app. So inside app.js you can put:

var yourAppName = express();
var ghost = require('ghost');
var ghosterrors = require('ghost/core/server/errors');

ghost().then(function(ghostServer) {
  yourAppName.use(ghostServer.config.paths.subdir, ghostServer.rootApp);
  ghostServer.start(yourAppName);
}).catch(function(err) {
  // use the same variable the require() above was bound to
  // (the original snippet referenced an undefined "errors" here)
  ghosterrors.logErrorAndExit(err, err.context, err.help);
});

You probably already have the ghost and express variables declared in app.js, so you won't need to add those lines. The blog should now be available at the URL specified in config.js.
I am trying to run Ghost in a subdirectory of my main Node.js project, which is currently hosted on Azure Websites. Something like:

http://randomurlforpost.azurewebsites.net/blog

I followed the instructions here: https://github.com/TryGhost/Ghost/wiki/Using-Ghost-as-an-NPM-module

With the new option of using Ghost as an npm module, do I still need Nginx or Apache? As of now I have my main site running on localhost:3000 and the Ghost instance running on localhost:2368. I have tried all kinds of modifications to the part of the code stated in the instructions, but I have not succeeded.

//app.js, is there a specific place to put this?
var ghost = require('ghost');
ghost().then(function (ghostServer) {
    ghostServer.start();
});

//config.js
development: {
    url: 'http://localhost:3000/blog',
    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghostdev.db')
        },
        debug: false
    },
    server: {
        host: '127.0.0.1',
        port: '2368'
    },
    paths: {
        contentPath: path.join(__dirname, '/content/'),
    }
},

//index.js
ghost().then(function (ghostServer) {
    parentApp.use(ghostServer.config.paths.subdir, ghostServer.rootApp);
    // Let ghost handle starting our server instance.
    ghostServer.start(parentApp);
}).catch(function (err) {
    errors.logErrorAndExit(err, err.context, err.help);
});

EDIT: I was able to route traffic with http-proxy; however, it routes to localhost:2368/blog (which doesn't exist). Any ideas on how to prevent this?

var httpProxy = require('http-proxy');
var blogProxy = httpProxy.createProxyServer();
var ghost = require('ghost');
var path = require('path');

// Route /blog* to Ghost
router.get("/blog*", function(req, res, next){
    blogProxy.ws(req, res, { target: 'http://localhost:2368' });
});
Run Ghost in a subdirectory of my main Node.js application
It was related to the PHP version. I had been using the latest version of nginx with a slightly older version of PHP. The issue was fixed by updating PHP to the latest version.
I'm getting this error in /var/log/messages on my FreeBSD box. I'm using nginx and spawn-fcgi with the memcache and apc modules enabled.

upstream prematurely closed connection while reading response header from upstream, client HTTP/1.1", upstream: "fastcgi://unix:/tmp/fcgi.sock:", host:
upstream prematurely closed connection while reading response header from upstream, client
You can provide a more explicit rewrite. Try the following:rewrite ^/foo/ $scheme://www.domain.com:80/bar$request_uri permanent;I have assumed that you meant to use^/foo/instead of^/foo$, since^/foo$is a very specific case. Just revise as needed.
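As an alternative (my own suggestion, not part of the original answer): since the wrong port only appears because nginx embeds its listen port in the redirects it generates, you could try port_in_redirect and keep all the relative rewrites unchanged:

server {
    listen 12345;

    # nginx-issued redirects will omit ":12345" and rely on the Host header,
    # so the data-center proxy's port 80 is what the browser sees
    port_in_redirect off;

    rewrite ^/foo$ /bar/foo/ permanent;
    # ... the other 50+ redirects stay as-is ...
}

Worth testing against your proxy setup before relying on it, since it changes every redirect nginx generates for that server.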
I have an nginx server processing PHP requests, but it's configured to listen on a non-standard port (port 12345 or something). I can't change the listen port because corporate IT says, "No."

There is a proxy in the data center that forwards requests from www.domain.com:80 to the nginx box on port 12345.

I have some static 301 redirects that I need to put in place, but I'm getting unexpected behavior. A sample redirect in the site.conf "server { }" block:

rewrite ^/foo$ /bar/foo/ permanent;

When I attempt to go to www.domain.com/foo, the redirect happens, but it tries to forward the browser to www.domain.com:12345/bar/foo/.

My question is: how can I get nginx to redirect the user to the correct port (www.domain.com/bar/foo/)? Maybe a better question is: what is the correct way to do what I'm asking? There are 50+ redirects that need to go in, and I'd rather not create a "location" section for each of them.
301 Redirect on nginx machine running non-standard port behind a proxy
Found your problem. I am still trying to find something cleaner, but here is the quick & dirty fix. Add this to your config/initializers/omniauth.rb:

class Rack::OpenID
  def realm_url(req)
    'https://localhost:3000'
  end
end

And now for the explanation: when the rack-openid gem builds the request to send to the Google OpenID server, it fails in one spot by using the Rails application's access URL instead of the nginx one (which uses SSL), resulting in this being sent to the OpenID server:

openid.realm: http://localhost:3001
openid.return_to: https://localhost:3001/auth/open_id/callback

The realm uses the http URL (the Rails URL) while return_to points to the right https URL (nginx); when the OpenID server sees this, it stops and returns an error.

PS: I will edit the answer if I manage to find a cleaner way.
Rails 3.0.12, newest OmniAuth. I can connect to Google and get the user's email address just fine. But when I run that same Rails app behind nginx in SSL mode, it fails with the Google page: "The page you requested is invalid."

Is it my nginx config? My OmniAuth setup? I know the X-Forwarded-Proto: https header is the special sauce here; is there anything else I need to do to make OpenID happy behind an SSL web server?

Here's the full example code: you can clone this repo, bundle install, and run rails s to see it work just fine, then run rake server to see it fail. https://github.com/jjulian/open_id_ssl

nginx.conf:

worker_processes 2;
pid tmp/nginx.pid;
error_log log/error.log;
daemon off;

events {
}

http {
    client_body_temp_path tmp/body;
    proxy_temp_path tmp/proxy;
    fastcgi_temp_path tmp/fastcgi;
    uwsgi_temp_path tmp/uwsgi;
    scgi_temp_path tmp/scgi;

    server {
        listen 3000 ssl;
        ssl_certificate development.crt;
        ssl_certificate_key development.key;
        ssl_verify_depth 6;
        access_log log/access.log;
        proxy_buffering off;

        location / {
            proxy_pass http://127.0.0.1:3300;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}

omniauth.rb initializer:

require 'openid/store/filesystem'

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :open_id, :identifier => 'https://www.google.com/accounts/o8/id'
end

routes.rb:

OpenIdSsl::Application.routes.draw do
  match '/auth/open_id/callback' => 'accounts#update'
  match '/auth/failure' => 'accounts#failure'
  root :to => 'accounts#show'
end

UPDATE: This example used Rails 3.1.12 and OmniAuth 1.0.3. Upgrading to Rails 3.1.4 and OmniAuth 1.1.0 fixes the issue.
Omniauth and open_id with Google broken when running behind nginx in SSL mode
You won't merge their domain into your server. In fact, when they register their domains, they will point them at your server. On your server configuration, you'll have to dynamically create rules that implicitly rewrite the request to the page they created on your server.

So users will see http://purchaseddomain.com/some-uri, but you serve the page http://domain.com/custom-name/some-uri.

E.g., it's as if you added this to an .htaccess (even if you don't use Apache, it's just to explain what the "system" must do):

RewriteCond %{HTTP_HOST} purchaseddomain\.com$ [NC]
RewriteRule (.*) /custom-name/$1
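Since the question's server runs nginx rather than Apache, a rough nginx equivalent of that .htaccess rule might look like the sketch below. purchaseddomain.com and /custom-name are placeholders, and in practice the map would be regenerated from the user database whenever someone registers a domain:

# maps an incoming Host header to the customer's path prefix
map $host $customer_prefix {
    hostnames;
    default              "";
    purchaseddomain.com  /custom-name;
}

server {
    listen 80 default_server;
    server_name _;

    # unknown domains get nothing
    if ($customer_prefix = "") {
        return 404;
    }

    # internally prepend the customer's prefix; query args are kept
    rewrite ^ $customer_prefix$uri last;
}

A request for http://purchaseddomain.com/records would then be handled internally as /custom-name/records, which is the masking behavior the question describes.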
I'm building a system where users can enter their purchased domain into their profile, so that accessing their domain serves their page on my domain, e.g. http://domain.com/custom-name becomes reachable as http://purchaseddomain.com.

So when someone accesses their purchased domain, it should show their profile, including their navigation links; links on their page should be rewritten to their purchased domain. For example, viewing their records would be: http://domain.com/custom-name/records as http://purchaseddomain.com/records.

Tumblr enables this feature, but I have no idea how it all works. This is exactly the kind of feature I'd like to have. I've searched on SO, but it didn't help.

Now this is the problem: I'm not sure how I can validate, confirm, and merge their purchased domain into my server without problems using PHP; I'm using CodeIgniter for this. Is there a solid, stable plugin/library, or a detailed tutorial, that enables custom domains masking an internal domain?

My server is running Ubuntu 11.10 on nginx 1.0.6.

EDIT: I just looked into the nginx VirtualHostExample. This looks good overall, but how will I be able to dynamically add/remove those domain entries while the domain has an A record pointing to my server?
How to enable user custom domains in PHP
I like having regular users on a system:

multiple admins show up in sudo logs -- there's nothing quite like asking a specific person why they made a specific change.

not all tasks require admin privileges, but admin-level mistakes can be more costly to repair.

it is easier to manage ~/.ssh/authorized_keys if each file contains only keys from a specific user -- if you get four or five different users in the file, it's harder to manage. Small point :) but it is so easy to write cat ~/.ssh/id_rsa.pub | ssh user@remotehost "cat - > ~/.ssh/authorized_keys" -- if one must use >> instead, it's precarious. :)

But you're right, you can do all your work as root and not bother with regular user accounts.
I'm currently trying to set up an nginx + uWSGI server for my Django homepage. Some tutorials advise me to create specific UNIX users for certain daemons, like an nginx user for the nginx daemon and so on. As I'm new to Linux administration, I thought I'd just create a second user for running all the processes (nginx, uWSGI etc.), but it turned out that I need some --system users for that.

The main question is: what users would you set up for an nginx + uWSGI server, and how would you work with them? Say I have a server with freshly installed Debian Squeeze. Should I install all the packages, the virtual environment, and set up all the directories as root, and then create system users to run the scripts?
Linux user scheme for a Django production server