There are two solutions for this.

1- Add the SSL certs to the load balancer: you need to request a certificate that covers all the supported DNS names (app.mydomain.com and design.customerwebsite.com), and you need to manage the customerwebsite.com domain with Route 53. I think that is not possible in your case.

2- Do not use SSL on the load balancer: with this option SSL is not terminated on the load balancer; the encrypted traffic is passed through to nginx, which handles it, and the load balancer config should simply pass the traffic along. You need to generate a new SSL cert that includes both domains:

sudo certbot --nginx -n --redirect -d app.mydomain.com -d *.mydomain.com -d design.customerwebsite.com -d *.customerwebsite.com

Nginx config:

server {
    server_name www.customerwebsite.com;
    return 301 $scheme://customerwebsite.com$request_uri;
}

server {
    listen 80 default_server;
    server_name design.customerwebsite.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem;
    server_name design.customerwebsite.com;
    root /opt/bitnami/apps/myapp/dist;

    location / {
        resolver 127.0.0.11 ipv6=off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://localhost:3000;
    }
}
I'm working on a Web App.My app runs on the subdomainapp.mydomain.comI need to WhiteLabel my app. I'm asking my Customers to point to their own website via CNAME to my app.design.customerwebsite.compoints toapp.mydomain.comHere is what I have tried to solve this.I created a new file in/etc/nginx/sites-availablenamedcustomerwebsite.comAdded a symlink to the file.I installed SSL usingcertbotwith the below command.sudo certbot --nginx -n --redirect -d design.customerwebsite.comHere is the code for my NGINX conf file ofcustomerwebsite.comserver { server_name www.customerwebsite.com; return 301 $scheme://customerwebsite.com$request_uri; } server { # proxy_hide_header X-Frame-Options; listen 80; listen 443; server_name design.customerwebsite.com; ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem; root /opt/bitnami/apps/myapp/dist; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_hide_header X-Frame-Options; proxy_pass http://localhost:3000; } proxy_set_header X-Forwarded-Proto $scheme; if ( $http_x_forwarded_proto != 'https' ) { return 301 https://$host$request_uri; } }I'm successfully able to run my web app onhttps://design.customerwebsite.comBut the SSL certificate shows that it is pointed toapp.mydomain.comand shows insecure.Myapp.mydomain.comhas SSL certificate from Amazon ACM which is attached via Load Balancer.What should be the approach to solve this?
How to handle SSL certificates for implementing WhiteLabel option in a web app running on NGINX server
Yes, you can use multiple domains (each with its own SSL certificate) in nginx. You will need a separate server block for each domain in the nginx configuration file. Here's an older reference that can get you started: "How To Set Up Multiple SSL Certificates on One IP with Nginx on Ubuntu 12.04". If you are looking to have subdomain sites on the same IP, the same method should work as long as you match the right subdomain pattern for each server block in the nginx configuration file.
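As a rough sketch of that approach (the host names and certificate paths below are placeholders, not taken from the linked guide), each server block simply pairs its own server_name with its own certificate, and nginx selects the right one per request via SNI:

server {
    listen 443 ssl;
    server_name site-one.example.com;
    ssl_certificate     /etc/ssl/site-one.example.com.crt;   # cert for site one
    ssl_certificate_key /etc/ssl/site-one.example.com.key;
}
server {
    listen 443 ssl;
    server_name site-two.example.com;
    ssl_certificate     /etc/ssl/site-two.example.com.crt;   # cert for site two
    ssl_certificate_key /etc/ssl/site-two.example.com.key;
}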
Closed.This question does not meetStack Overflow guidelines. It is not currently accepting answers.This question does not appear to be abouta specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic onanother Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.Closed4 years ago.Improve this questionI'm trying using nginx for having multiple ssl certificates without having to create a server for each one of them.So I have a reverse proxy built and it will have multiple different sites with the same domain running on it. Is it possible to have the server have multiple ssl certificates and keys so that when it proxies uses the right key and cert or do I need to create its own server for every single site that gets generated?Thanks in advance.
Set up nginx to use multiple ssl certificates without having multiple servers [closed]
OK, after many days and hours of breaking my head I came to this solution. On GoDaddy's DNS I added these records:

A: test24.company.io ->
A: sub1.test24.company.io ->
A: sub2.test24.company.io ->

Note: all 3 records point to the same elastic IP; there is no need to set an individual elastic IP for each subdomain.

The nginx conf file is configured as follows: https://gist.github.com/ahmed-abdelazim/b3536c1780afc4215ef57633bbe77a88

This post is a very useful guide on setting up nginx proxies for different ports on your server. All the rest in the nginx config file is generated by certbot, which also managed the redirect from http to https.

Note to mods: I cannot seem to format the code of the GitHub Gist in this answer for some reason.
I'm trying to configuredefault.confin/etc/nginx/conf.dto show a simple landing page located at/home/ubuntu/project-source/company/entry/index.html.The domains are set up correctly as far as I know to point to the serverA: test24.company.io -> default.conf:server { listen 80; server_name localhost; index index.html; location / { root /home/ubuntu/project-source/company/entry; } } server { server_name test24.company.io www.test24.company.io; listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/test24.company.io/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/test24.company.io/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = test24.company.io) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; server_name test24.company.io www.test24.company.io; return 404; # managed by Certbot }Additional question: the project will run 2 processes on 2 subdomains lets saysub1andsub2and they will run onlocalhost:3001andlocalhost:3002respectively, how do I configuredefault.confto point/proxy to these processes as well?
How to configure nginx's default.conf
Either change your authentication logic so Nginx handles it, or implement request and connection limits within Nginx to control how many connections are accepted and passed to the upstream server.
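A hedged sketch of what such limits can look like for the websocket location (zone names, rates, and limits here are arbitrary examples, not tuned values; the two *_zone lines belong in the http{} block):

limit_req_zone  $binary_remote_addr zone=ws_req:10m  rate=5r/s;
limit_conn_zone $binary_remote_addr zone=ws_conn:10m;

server {
    location / {
        limit_req  zone=ws_req burst=10 nodelay;   # throttle new requests per IP
        limit_conn ws_conn 20;                     # cap open connections per IP
        proxy_pass http://sock;                    # upstream from the question
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}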
I just performed a basic DDOS from my computer:websocket-bench -a 2500 -c 200 wss://s.example.comWhich to my total dismay crashed my server! The WS works by connecting to my nginx proxy:location / { proxy_pass http://sock; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Real-IP $remote_addr; proxy_read_timeout 60; } upstream sock { server 127.0.0.1:1203 fail_timeout=1s; }and locally on the server on port1203isratchet. The setup for ratchet is that I allow any connection and the firstonMessageperforms authentication and if invalid the connection is closed.I also have tried authentication by passing headers on the first connection and if invalid the socket closes but this has not helped at all and nginx still reaches 100% resources and then crashes.What should I be analysing to prevent these crashes?When changing the upstream to another closed port (i.e disabling it) the server still crashes.
How to prevent web socket DDOS attacks?
I have now found two solutions for the above. I have opted for solution two as it requires no code changes, but I have successfully tested both solutions.

Solution One

Apologies, I don't have access to the working test code on this machine, but it goes something like the below:
- Create a base controller and override the ControllerBase.RedirectToAction method.
- Add a base URL setting to your web.config (or a DB setting, etc.).
- Create a custom redirect result object and append the base URL to the URL. Return the custom result object from the overridden method.

protected override RedirectToRouteResult RedirectToAction(string actionName, string controllerName, RouteValueDictionary routeValues)

Solution Two

Using IIS, run the application within a virtual directory (or child application) to match the location of the proxy. MVC will then automatically control all the routing correctly without having to override any base methods.

NB: you will need to be careful with any relative paths/links, as with any proxy. I am currently using this method in production without any problems. See below example.
I am currently working on a project that requires one of our current ASP.NET MVC5 web applications to sit behind a NGINX reverse proxy that the client will control.I am brand new to NGINX so am lacking in knowledge.The reverse proxy will be placed at a sub path. (example below)http://localhost:9999/foo/bar/This will then proxy to the root of the MVC5 application (port 9998) I have set up NGINX locally to test that the site will work as expected. We use absolute paths to our resources (hosted in an internal CDN) so all these load as expected.My Issue- The reverse proxy is working correctly and displaying the root page of the application. The problems start to arise when hitting any controller methods/page links that have been created using this.RedirectToAction() or @html.ActionLink() etc.The MVC application does not realise it is running behind a reverse proxy and chops that subpath out of its derived URL.So a redirect to a home controller looks likehttp://localhost:9999/homeInstead of :http://localhost:9999/foo/bar/homeDoes anyone have any ideas the counteract this? I can see .NET core has a workaround but cant see anything for MVC5. I can use this.Redirect() and specify the absolute path but the application is large and used in other scenarios without the reverse proxy.Can this be resolved through my NGINX configuration? I have included my config below :#user nobody; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 9999; server_name localhost; location /foo/bar/ { rewrite ^/foo/bar/(.*)$ /$1 break; proxy_pass http://127.0.0.1:9998/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } }
ASP.NET MVC behind NGINX reverse proxy
I finally found the cause of this problem. Nginx works correctly; however, I routed requests to AWS Elastic Beanstalk, which uses nginx internally to route requests into the Docker container. So the misconfiguration was on the side of AWS Elastic Beanstalk, and fixing the Elastic Beanstalk configuration solved the problem.
When I setclient_max_body_size 30m;without ssl everything works (files up to 30MB are accepted). However when I switch to ssl it completely ignores this directive.My configuration looks like (/etx/nginx/conf.d/my-sites.com.conf):server { listen 443 ssl; server_name my-sites.com; ssl_certificate /etc/nginx/ssl/my-sites.com/uni_my-sites.com.crt; ssl_certificate_key /etc/nginx/ssl/my-sites.com/my-sites.com.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; client_max_body_size 30m; location / { proxy_pass http://my-backend.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } }I found several suggestions herenginx - client_max_body_size has no effectbut nothing worked. I've tries to use diferent nginx versions, I set client_max_body_size for all blocks: http, server, location but nothing works.I've also searched if it is an nginx bug with no results.Is there any solution I can overcome the problem or am I forced to use non-ssl connection? Any suggestions are welcomed.My configuration is:AWS EC2 nano instanceNginx in docker (latest stable - 1.10.1)Only one virtual host on single IP addressDifference from this questionnginx - client_max_body_size has no effect: this question is related to sslEdit:I've created an issue in nginx wikihttps://trac.nginx.org/nginx/ticket/1076#ticket
nginx - client_max_body_size has no effect with ssl configured
I found this link on building nginx on the Win32 platform with Visual C: http://nginx.org/en/docs/howto_build_on_win32.html
I'm trying to test Streaming media files in my local Nginx configuration. I need to add two nginx modules : flv and mp4--with-http_flv_module for Flash Video (FLV) files --with-http_mp4_module for H.264/AAC fileshowever, I'm using Kevin Worthington install :http://kevinworthington.com/nginx-for-windows/, so I'm not eable to add thoses modules in order to check if Nginx sends partial-content 206 header when requesting Streaming media.Thanks in advance.
How to add nginx modules on the Windows platform?
Use local domains like http://test.loc/ and http://dev.loc instead of relying on subfolders. Although application.context should work, I saw many posts complaining that it doesn't... What's more, using local domains is more similar to the final production environment, so it's just easier to debug URL-dependent things, e.g. cookies.
GoalSetup multiple Play 2.1 applications with nginx using different subdirectory for each application.App1 running on127.0.0.1:4000should be accessible under127.0.0.1/devApp2 running on127.0.0.1:5000should be accessible under127.0.0.1/testConfigurationnginx.confworker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; upstream app1 { server 127.0.0.1:4000; } upstream app2 { server 127.0.0.1:5000; } server { listen 80; server_name localhost; location /dev { rewrite /(.*) /$1 break; proxy_pass http://app1; } location /test { rewrite /(.*) /$1 break; proxy_pass http://app2; } } }App1 - application.confapplication.context=/devApp2 - application.confapplication.context=/testProblemWith this configuration I can access both applications, but only html code is loaded. All static files (css, js, images) aren't loaded.I think this is caching problem. I've tried with different nginx parameters, without luck. If I request the site for the first time the browser responds (forcssandjsfiles, e.g.127.0.0.1/dev/assets/stylesheets/main.css) with status200but without content -Content-Length: 0. For the next time it responds with304, still without content.I'm not sure if this isnginxorPlay 2.1configuration problem.I will appreciate any help.
Reverse proxy for a subdirectory with nginx and Play 2.1 apps
Use gunicorn_django [OPTIONS] myproject if you use myproject.settings.
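For the nginx side mentioned in the question, the static-file split in front of gunicorn is usually just a proxy block like the following sketch (the port and static path are assumptions, not taken from the question):

upstream django_app {
    server 127.0.0.1:8000;              # wherever gunicorn binds
}
server {
    listen 80;
    location /static/ {
        alias /srv/myproject/static/;   # hypothetical collected static root
    }
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://django_app;
    }
}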
I've been reading about deploying Django with gunicorn and I wanted to give it a try.I have found at least 3 ways of running a server with gunicorn and django:gunicorn [OPTIONS] [APP_MODULE] # tested locally and worked finepython managy.py run_gunicorn # also works fine locallygunicorn_django [OPTIONS] [SETTINGS_PATH] # I have an error due to apps/ locationI have Apache with nginx (serving static files) in production at the moment, works fine but is a litle slow and want to try Gunicorn. The first 2 options worked fine locally with nginx serving static files.I want to know a couple if things:What is the difference between any option above ?What is the proper instruction to run inPRODUCTIONenvironments ?Thank you guys.
Django with Gunicorn different ways to deploy
You shouldn't need to uninstall the apt-get version first, but it's a good idea so that you don't inadvertently overwrite your custom recompile with an 'apt-get update' or similar system update in the future.

There are a few reasons your recompile may not have worked. Does the installer have the correct permissions to overwrite the existing file? Is ./configure placing the compiled binary in the same place as apt-get does (pass --sbin-path=/where-you-want-it-installed to ./configure if that is not /sbin/nginx)? Was nginx running when you recompiled? The installer may not be able to overwrite an open file. (You have restarted nginx, right?) Maybe something else, but that's where I'd start looking.
I originally installed nginx via apt-get install. It works just fine. Now, I want to install some 3rd party modules and I have to recompile nginx. So I tried to recompile. It went through the motions and then I realized that my original version was still the one that was being used.Do I need to uninstall my original copy of nginx first in order for the other to install properly?my flags for the install:--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug --with-http_stub_status_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-mail --with-mail_ssl_module --with-ipv6 --add-module=/usr/src/gnosek-nginx-upstream-fair-5f6a3b7 --add-module=/usr/src/mod_strip
Recompiling nginx after using apt-get install nginx
Enter the php-fpm container:

docker-compose exec php-fpm /bin/sh

Then change the access rights of the storage folder:

chmod -R 777 /home/html/storage

Because it's a local development environment, strict permissions don't matter here.
I'm using docker compose to boot up a development workspace, consisting of php, nginx and mysql. Everything boots, static html get's served, but when trying to start a laravel app, i get the following error:The stream or file "/home/html/storage/logs/laravel-2019-06-10.log" could not be opened: failed to open stream: Permission deniedI searched around and it looked like a permissions issue? Do note, that the docker with just the database and the build in php server does seem to work.My docker-compose.ymlversion: "3" services: db: image: mysql command: --default-authentication-plugin=mysql_native_password restart: always environment: MYSQL_ROOT_PASSWORD: "root" ports: - 3306:3306 php-fpm: image: php:7.3-fpm-alpine links: - db volumes: - "./:/home/html/" nginx: image: nginx:1-alpine ports: - "8080:80" links: - php-fpm volumes: - "./site.conf:/etc/nginx/conf.d/default.conf" - "./:/home/html/"My nginx config:server { index index.php index.html; listen 80 default_server; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /home/html/public; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass php-fpm:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } }Kind regards :)
Permission Denied Nginx Docker
This might be a problem with classic namespaced /tmp; please see this: http://fedoraproject.org/wiki/Features/ServicesPrivateTmp

But you mentioned that you have set your sock location to your app directory. Have you done that in your nginx configuration for that virtual host as well? You will definitely need to restart your web server for the changes to take effect.

Please refer to this question and its answer, which might be useful in this case: "Got 'No such file or directory' error while configuring nginx and uwsgi". Refer to the section of the nginx.conf file where uwsgi_pass is mentioned. If you have not made changes like that, please do so and restart the web server. I think that should solve the problem.
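For reference, the relevant part of the nginx virtual host would then look something like this sketch (the socket file name is an assumption; it must match whatever socket path uwsgi is told to create):

location / {
    include uwsgi_params;
    uwsgi_pass unix:/var/www/myapp/uwsgi.sock;   # moved out of /tmp into the app directory
}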
I am configuring nginx with uwsgi on EC2, I have check logs in file/var/log/nginx/error.log.I am getting this error:200 connect() to unix:/tmp/uwsgi.sock failed (2: No such file or directory) while connecting to upstreamMy uwsgi.sock location is/var/www/myapp/How can I change the file location fromuwsgi://unix:/tmp/uwsgi.socktouwsgi://unix:/var/www/myapp/in configuartion?
200 connect() to unix:/tmp/uwsgi.sock failed
It's neither component. This isn't anything from AWS... it's the browser. It's an internal redirect the browser is generating, related to HSTS: HTTP Strict Transport Security.

If you aren't doing it now, then presumably, in the past, you've generated a Strict-Transport-Security: header in responses from this domain, and the browser has remembered this fact, preventing you from accessing the site insecurely, as it is intended to do.
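For context, a header like that is typically emitted by a single line in an nginx (or application) config; this is illustrative, not taken from the poster's setup:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;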
I have a domain from GoDaddy, with AWS Route53 for managing DNS records. Route53 sends request to a load-balancer.For webserver I have a load-balancer that routes requests to a single (for now) EC2 instance and the nginx in EC2 instance get the request and sends a response to the client.The problem is that when I usehttp://to perform a request, AWS redirects requests to thehttps://version of the domain with307 Internal Redirectresponse. The response object hasNon-Authoritative-Reason: HSTSheader as well.What's the problem and which component is redirected requests?
Amazon AWS 307 response and permanent redirect to HTTPS
You need the ~ operator to enable regex matching, and since you only need to match website/events or website/events/ as full strings, you will need the anchors ^ and $ around the pattern:

location ~ ^/events/?$

The ^/events/?$ pattern matches:
^ - start of input
/events - a literal substring
/? - one or zero "/" symbols
$ - end of input.
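Combined with the rewrite from the question, the whole block would then look roughly like this (the target URL is the question's own placeholder):

location ~ ^/events/?$ {
    rewrite ^ https://totallydifferenturl.com;
}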
Currently I have this location block:location = /events { rewrite ^ https://totallydifferenturl.com; }This successfully redirects frommywebsite/events, but I want this block to also handlemywebsite/events/.Tryinglocation = /events/?didn't work out.
NGINX location rewrite URL with or without trailing slash
Final correct solution:

location ~* ^/$ {
    if ($http_cookie ~* "wordpress_logged_in") {
        return 301 http://example.com/newsfeed/;
    }
}
I'm building a closed social network and currently when a user is not logged in they will always be redirected to the homepage of my domain.What I would like to do is do the following:Use NGINX to check if a user is logged in (through checking for a cookie) and then when they go to the homepage (mydomain.com) redirect to to mydomain.com/newsfeed.This check should only be applied when a user brows to the homepage and should not work at ANY other url (or else they would always be redirected).I'm very new to NGINX and looked at various tutorials for using cookies for redirect but failed to get an answer (most notably to limiting the redirect to only the homepage).Thanks in advance!
NGINX: Redirect a user based on cookie for a specific URL only
As @Valery Viktorovsky's answer says, you can't use a * for server_name. You can designate a server block as the "default" to receive all requests which don't match any others. See this post for an explanation. It would look like this:

server {
    listen 80 default_server;
    server_name wontmatch.com;  # but it doesn't matter
}

See the docs for server_name for more.
Is there a way to proxy all traffic to a certain serverunlessthe domain is something different?Basically a*for theserver_nameproperty?server { listen 80; server_name foo.com } server { listen 80; server_name * }Or, is there a way to set a "default" server, and then it would use it if none of the other specific server configs match?
nginx and asterisk server_name?
You are probably under the impression that try_files on the server level must work for every request. Not at all. Quite the contrary: it works only for requests that match no location blocks.
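To make that concrete, here is a minimal sketch combining the two configurations from the question (the fastcgi details are copied from it): per the point above, the server-level try_files only covers requests that fall through to no location, so the .php location still needs its own guard.

server {
    try_files $uri =404;              # only used when no location below matches
    location ~ \.php$ {
        try_files $uri =404;          # repeat the guard here for PHP requests
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}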
I'm configuring a pretty standard webserver using nginx. The server is working as expected, however, there is a small configuration detail I would like to understand.My current configuration is:index index.html index.htm index.php; location / { try_files $uri $uri/ /index.php?q=$uri; } location ~ \.php$ { try_files $uri =404; fastcgi_index index.php; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; }With this configuration, if I access to:http://myweb.com/wp-content/uploads/2012/10/cropped-bitmap11.png/lol.phpI get a 404 as expected.However, with this configuration:try_files $uri =404; location ~ \.php$ { fastcgi_index index.php; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; }I get a blank page with "Access denied".Why is the result different?Thank you
Nginx: try_files outside location
Try updating the nginx config. It will make direct URLs work:

location / {
    try_files $uri $uri/ /index.html;
}
Similar case:flutter-web-app-blank-screen-in-release-modeI have an AWS EC2 cloud server. and I built a flutter web build in my EC2 Server. and cross-connect the flutter web index.html to Nginx.> $ flutter build web 💪 Building with sound null safety 💪 Compiling lib/main.dart for the Web... 1,491msso EC2 can 200 OK.and routed navigate another page and refreshed.my EC2 Server nginx config is:server { listen 80 default_server; listen [::]:80 default_server; root /var/www/flutter_web_project/build/web; # <- the flutter build web released result folder index index.html index.htm index.nginx-debian.html; server_name _; location / { try_files $uri $uri/ =404; } }and i used url_strategy dependencydependencies: flutter: sdk: flutter url_strategy: ^0.2.0and main.dart source:... import 'package:url_strategy/url_strategy.dart'; void main() { setPathUrlStrategy(); runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, title: 'flutter web sample', theme: ThemeData( visualDensity: VisualDensity.adaptivePlatformDensity, ), home: locationPlatform(), onGenerateRoute: WebRouter.generate, initialRoute: '/', ); } ...
flutter web built release but nginx route 404
As you mentioned, you're using the new Mongo extension for PHP 7. The class names have changed from the older version, i.e.:

MongoClient is now MongoDB\Driver\Manager
MongoDate is now MongoDB\BSON\UTCDateTime

I'm not sure how backwards compatible everything is, but this should get you started!
I am trying to configure MongoDB to work with my Laravel 5.1 Homestead instance on a virtual Ubuntu 14.04 machine. I was able to successfully install the latest version of MongoDB which supports PHP 7.0 usingsudo pecl install mongodb(this is correct for 7.0,notsudo pecl install mongoanymore).I then added the extension in my php.ini files (all three) on my Ubuntu machine, each in:/etc/php/7.0/cli/php.ini/etc/php/7.0/fpm/php.ini/etc/php/7.0/cgi/php.iniThis is the extension I wrote which is correct for use with PHP 7.0:extension=mongodb.so(not mongo.so anymore)When I runphpinfo()in my browser, it states that MongoDB is properly configured with my PHP 7.0.If MongoDB is properly configured, how come I keep getting:Fatal error: Class 'MongoDate' not foundwhen I try to run my migrations and seeds withphp artisan migrate:refresh --seed?I already tried:rebooting the Ubuntu machine withvagrant reloadandvagrant reload --provisionRestarting PHP and Nginx withsudo service nginx restartandsudo service php7.0-fpm restartNeither have worked.
Fatal error: Class 'MongoDate' not found when using mongodb php driver 1.1.2 and PHP 7.0.2 - Laravel 5.1
I'm assuming you are using a "single page" angular app, so one html page that uses ng-view to load all the other partials.In this case you need to do something like this:Express 4:var express = require('express'), app = express(), server = require('http').Server(app), bodyParser = require('body-parser'), db = require('./db'), io = require('./sockets').listen(server), apiRoutes = require('./routes/api'), webRoutes = require('./routes/web'); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use('/api', apiRoutes); app.use(express.static(__dirname + '/public')); // Here's the new code: app.use('/*', function(req, res){ res.sendfile(__dirname + '/public/index.html'); }); server.listen(3000, function() { console.log('Listening on port %d', server.address().port); });The problem you're facing is that even though you have routes setup for '/login' before the routes are fired they need to be loaded. So the server tries to find a match for the route '/login' which it can't returning the 404. In the case of single page angular apps all the routes you use in routing must be caught by a route,app.get('/*', ...in this case, and then return the main angular.js html page. Note that this is the last call so it will be evaluated last, if you put it first it will prevent all the subsequent rules from running as express just runs the handler for the first rule it encounters.
I am using Express 4 to host my AngularJS app on my backend, with Nginx as my frontend server. However html5 mode does not seem to work, as I will get a Cannot /GET error when I try to enter the page link (e.g.http://localhost/login) via the browser. Is there any routing configuration I need to do for my Express/Nginx? Here's my config code:Express 4:var express = require('express'), app = express(), server = require('http').Server(app), bodyParser = require('body-parser'), db = require('./db'), io = require('./sockets').listen(server), apiRoutes = require('./routes/api'), webRoutes = require('./routes/web'); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use('/api', apiRoutes); app.use(express.static(__dirname + '/public')); server.listen(3000, function() { console.log('Listening on port %d', server.address().port); });AngularJS:'use strict'; var nodeApp = angular.module('nodeApp',['ngRoute']); nodeApp.config(function($routeProvider, $locationProvider, $controllerProvider) { $routeProvider.when('/', { templateUrl: 'partials/home.html' }).when('/login', { templateUrl: 'partials/login.html' }); $locationProvider.html5Mode(true); nodeApp.controllerProvider = $controllerProvider; });Nginx:# the IP(s) on which your server is running upstream test-app { server 127.0.0.1:3000; } # the nginx server instance server { listen 0.0.0.0:80; server_name test-app.cloudapp.net; access_log /var/log/nginx/test-app.log; # pass the request to the nodejs server with correct headers location / { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Nginx-Proxy true; proxy_pass http://test-app/; proxy_redirect off; } }
Express 4, NodeJS, AngularJS routing
You can install any nginx version. Check this: https://centos.pkgs.org/7/nginx-x86_64/nginx-1.14.2-1.el7_4.ngx.x86_64.rpm.html

If you want to install nginx 1.14.2, follow this:

wget https://nginx.org/packages/centos/7/x86_64/RPMS/nginx-1.14.2-1.el7_4.ngx.x86_64.rpm
sudo rpm -Uvh nginx-1.14.2-1.el7_4.ngx.x86_64.rpm
nginx -v
How to install Nginx with an exact version on Amazon Linux 2?What I triedsudo yum install nginxsudo amazon-linux-extras install nginx1sudo yum install nginx:1.14.2Both get nginx 1.20.0 or no package available. How can I get other versions, ex: nginx 1.14.2?
How to install Nginx with an exact version on Amazon Linux 2?
For me this was because SELinux was enabled. Check with:

selinuxenabled && echo enabled || echo disabled

If it is enabled, try to disable it:

nano /etc/sysconfig/selinux
SELINUX=disabled

then reboot.
Okay so there have been some previous posting of this yet no solution fixes my problem.I have site configured which is just straight up HTML, CSS & JS and I'm trying to add a wordpress site. My config for the wordpress site is as follows.####################### server { listen 80; root /usr/share/nginx/threadtheatre/wordpress; index index.php; server_name threadtheatre.co.uk; access_log /var/log/nginx/thread.access.log; error_log /var/log/nginx/thread.error.log; location / { # try_files $uri $uri/ =404; try_files $uri $uri/ /index.php?q=$uri&$args; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_index index.php; include fastcgi_params; } }This is the error thats in my logs"/usr/share/nginx/threadtheatre/wordpress/index.php" failed (13: Permission denied), client: 109.155.53.189, server: threadtheatre.co.uk, request: "GET / HTTP/1.1", host: "threadtheatre.co.uk"nginx is using the nginx user and likewise for php-fpm. The nginx directory and all its sub directories have the following permissions.drwxrwxr-x. 3 root nginx 4096 Feb 8 18:23 ..If I browse to threadtheatre.co.uk on the web i get 404.hope someone can help with this.Lee.
Nginx stat() failed (13: Permission Denied)
If you use http.get it should decode it automatically, but it looks like request might not do it for you. There clearly is code to decompress the gzip response here, but only for the get method: https://github.com/ruby/ruby/blob/v1_9_3_327/lib/net/http.rb#L1031
I haveruby-1.9.3-p327with zlib installed.localhost:80is the nginx simple test page.require "net/http" => true Net::HTTP::HAVE_ZLIB => true res = Net::HTTP.start("localhost", "80") do |http| req = Net::HTTP::Get.new "/" req["accept-encoding"] = "gzip" http.request req end => # res.get_fields "content-encoding" => ["gzip"] res.body => "\x1F\x8B\b\x00\x00\x00\x00\x00\x00\x03\xEC\xBDi..."The body was not decoded. Why?
Ruby Net::HTTP not decoding gzip?
There are various ways to inspect the POST request body in ngx_lua, depending on your needs:

Fully buffered way: use ngx.req.read_body, ngx.req.get_body_data, and ngx.req.get_body_file.
Streaming processing way: use ngx.req.socket to read and process the request body stream in chunks.
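A minimal buffered-mode sketch of the first approach (the location, the upstream name, and the cjson-based JSON decoding are illustrative additions, not part of the answer above):

location /inspect {
    access_by_lua_block {
        ngx.req.read_body()                     -- buffer the request body in memory
        local body = ngx.req.get_body_data()    -- nil if it was spooled to a temp file
        if body then
            local ok, doc = pcall(require("cjson").decode, body)
            if ok and type(doc) == "table" and doc.transaction then
                ngx.log(ngx.INFO, "transaction: ", doc.transaction)
            end
        end
    }
    proxy_pass http://backend;   # hypothetical upstream
}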
My goal is to inspect a body of the POST request and compare it to some list of key-value pairs on nginx. In my situation POST requests will always be in JSON format. Each request will contain akey:valuepair like this:"transaction":"12345"or"transaction":"098765". Mean the key "transaction" will always be there and value will change some time. I was thinking to uselua-nginx-moduleto inspect a post body and than compare it with key-value from let's saymemcached. I don't have any code to show yet, but I will try to update a question, some time soon. I was wondering if someone could help me get started, with this or show how it can be done.
How to inspect POST body in nginx (HttpLuaModule)
server {
    root /path/to/site;
    error_page 504 502 500 = /html/error/500.html;
    location /get_error {
        return 500;
    }
}

See the docs: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
I'm trying to trigger an nginx error to test my error pages. This is what I tried:server { // ... root /path/to/site error_page 504 502 500 = /html/error/500.html; # absolute path: /path/to/site/html/error/500.html return 500; }But I keep getting the default nginx error. Testing the html is not enough, I wanna make sure that nginx will show the correct error pages.Any ideas?
How to test nginx errors?
You can set the environment for each instance using the rails_env option. For example:

server {
    listen 443;
    server_name staging.myapp.com;
    root /apps/myapp/staging/public;
    passenger_enabled on;
    rails_env staging;
}
I'm creating an app that in addition to the live production environment requires a development and staging environment. The production environment is currently live and on its own VPS instance. A record:myapp.com 1.2.3.4The development and staging environments will be on their own VPS instance. I've configured the appropriate DNS records so each environment has its own sub-domain (A record in the myapp.com domain pointing to the dev/staging server:dev.myapp.com 5.6.7.8 staging.myapp.com 5.6.7.8The Nginx confix (Rails, Passenger) sets the root for each server (wild card SSL is configure in the http definition and port 80 redirects to port 443):server { listen 443; server_name dev.myapp.com root /apps/myapp/dev/public } server { listen 443; server_name staging.myapp.com root /apps/myapp/staging/public }I'm a bit confused on the Rails side what else do I need to do to configure the environments so I can access the individual dev and staging environments by URL:staging.myapp.com dev.myapp.comI know Capistrano allows you to set production and staging environments but I need both the dev and staging URLs to be live or should this be sufficient?
development, staging, and production environments rails app
I was able to solve this using only Nginx, programming it with OpenResty's Lua module.

https://github.com/openresty/lua-nginx-module gives the ability to program in nginx.conf, where one can use existing Lua libraries such as https://github.com/bungle/lua-resty-template for templating!

myapp.lua:

local template = require("resty.template")
local template_string = ngx.location.capture("/templates/greet.html")
template.render(template_string.body, {
    param1 = ngx.req.get_uri_args()["param1"],
    param2 = ngx.req.get_uri_args()["param2"]
})

greet.html:

Nice to see you {{param1}}. Platform greets you "{{param2}}".

nginx.conf:

worker_processes 1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    root ./;
    server {
        listen 8090;
        location /myapp {
            default_type text/html;
            content_by_lua_file ./lua/myapp.lua;
        }
    }
}

content_by_lua_file is where the power of OpenResty comes in. I described the complete process here: https://yogin16.github.io/2018/03/04/nginx-template-engine/

Hopefully someone finds this helpful.
I have a requirement for basic html template webapp such as:http://localhost:3000/myapp?param1=hello&param2=Johnis called it should returntext/htmlresponse which looks like this: Nice to see you John. Platform greets you "hello". the name and greeting is templated from param. so template is something like this: Nice to see you {{param1}}. Platform greets you "{{param2}}". I have currently done this in node server using express.js and then the server is exposed publicly via nginx.conf:server { listen 80; # server_name example.com; location / { proxy_pass http://private_ip_address:3000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } }I was wondering if this could be possible with some plugins or other configuration with bare nginx without hosting the node server on 3000 port.
Can we use NGINX as webapp for template engine
If anyone runs into this, the problem was with how I set up Nginx for this particular domain. This block in my /etc/nginx/sites-enabled/default file didn't work with query strings:

location / {
    try_files $uri $uri/ /index.php$query_string;
}

After changing it to the following, everything worked as it was supposed to:

location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

Hope it helps someone.
For some reason I can't get any GET parameters from a url in my controllers using theIlluminate\Http\Requestfacade. I tested in multiple controllers, but no success.Using the following code, nothing is returned on the remote server when accessingdomain.com/admin/dashboard?test=test, but on my local machine it returnstest:The dashboard function is called by the route /admin/dashboard /** * Dashboard page * * @return view */ public function dashboard(Request $request) { echo ''; var_dump($request->all()); echo ''; // Return here return; // ...instead of here return view('backend::pages.dashboard'); }I'm running Laravel 5.1 on Ubuntu 14.04 LTS using Nginx and php5-fpm. The code works fine on my local Homestead instance as well as on MAMP. I checked my Nginx configuration and everything seems fine. I'm hosting multiple sites on my server and I can get route parameters on all other sites.
Can't retrieve URL GET parameters with Laravel 5.1 on remote server
Well, I don't know what caused the problem. I checked my PCRE, and it was at the latest version. With no better option, I just uninstalled it and reinstalled it again... then it worked.
I use homebrew to install nginx. However, when I start nginx, it prompts:dyld: Library not loaded: /usr/local/lib/libpcre.1.dylib Referenced from: /usr/local/bin/nginx Reason: Incompatible library version: nginx requires version 4.0.0 or later, but libpcre.1.dylib provides version 2.0.0 Trace/BPT trap: 5Any ideas?
nginx installed successfully but cannot start
The nginx cookbook version was bumped to 2.0.0 to emphasize breaking changes. In particular, you should now specify all modules with the nginx:: prefix and not use extra_modules at all. So it should look like this now:

"default_attributes": {
    "nginx": {
        "source": {
            "modules": [
                "nginx::http_gzip_static_module",
                "nginx::http_ssl_module",
                "nginx::http_realip_module",
                "nginx::http_stub_status_module",
                "nginx::upload_progress_module"]
        }
    }
}

Please look at this ticket and the relevant changeset for details.
I'm using berkshelf to manage cookbooks, chef 11.6.2, and nginx cookbook v 2.0.0my settings to compile nginx from source:set[:nginx][:source][:modules] = ["http_gzip_static_module", "http_ssl_module"]The provisioning gives me the error:Cookbook http_gzip_static_module not found. If you're loading http_gzip_static_module from another cookbook, make sure you configure the dependency in your metadataIs it a bug from nginx cookbook and how do you solve it? Everything works well with nginx cookbook v 1.7.0Many thanks.
Nginx cookbook v 2.0.0: Cookbook http_gzip_static_module not found
I think it is similar to the Node solution: you should repeat all your routes in the nginx config to return the 404 status code correctly. The main idea is that you should use the "equals" modifier in locations and define error_page to return the same index.html file but with a 404 status code. Example:

server {
    listen 80;
    server_name localhost;
    root /my/dir/with/app;
    error_page 404 /index.html;
    location = / {
        try_files $uri $uri/ /index.html;
    }
    location = /books {
        try_files $uri $uri/ /index.html;
    }
    # example nested page
    location = /books/authors {
        try_files $uri $uri/ /index.html;
    }
    # example dynamic route to access a book by id
    location ~ /books/\d+$ {
        try_files $uri $uri/ /index.html;
    }
}

This config can probably be simplified or improved, because I am not very good at nginx configuration, but it works.
I have set up my vue-cli version 3 SPA so that any requests not found in my routes.js file will default to my 404 view as shown in the officialdocumentation:Inserted near bottom ofroutes.jsfile:{ // catches 404 errors path: '*', name: '404', component: () => import(/* webpackChunkName: "NotFoundComponent" */ './views/NotFoundComponent.vue'), },Inserted into nginx configuration file:location / { try_files $uri $uri/ /index.html; }This successfully alerts the user that the page they requested doesn't exist.My Question:I would like for the error 404 component to return a 404 response header (it current returns the 200 status code) and also log this error to the nginx error.log file. I imagine this is only possible through using nginx configuration. Has anyone achieved this goal?I noticed that this issue is addressed in the following page in the vue-cli official docs, but it only is concerned with node express servers and not nginx:https://router.vuejs.org/guide/essentials/history-mode.html#caveat
How to handle 404 error request in vuejs SPA with nginx server
RTFM: https://github.com/openresty/lua-nginx-module#ngxvarvariable

Read and write Nginx variable values:

value = ngx.var.some_nginx_variable_name
ngx.var.some_nginx_variable_name = value

Note that only already-defined nginx variables can be written to. For example:

location /foo {
    set $my_var ''; # this line is required to create $my_var at config time
    content_by_lua_block {
        ngx.var.my_var = 123;
        ...
    }
}

That is, nginx variables cannot be created on-the-fly.
I have a variable$aetthat I initialize in lua, but I wish I could use it in nginx too.Here is my code:location /getIp { default_type 'application/json'; rds_json on; content_by_lua ' if ngx.var.host:match("(.*).nexus$") ~= nil then aet = ngx.var.host:match("(.-)%.") $aet = aet; end '; postgres_pass database; postgres_query "SELECT ip FROM establishment_view WHERE aet = $aet"; postgres_output rds; }It does not work because in the query it does not know the variable aet :nginx: [emerg] unknown "aet" variable
Use variable in lua and nginx
Use the nginx map directive to set the $maintenance value according to $remote_addr:

map $remote_addr $maintenance {
    default on;
    127.0.0.1 off;
    10.1.1.10 off;
    10.*.1.* off;
}

server {
    server_name doamin.tld;
    if ($maintenance = on) {
        return 503;
    }
    # ... your code ...
}

Take a look at the include directive if you want to keep the IP list in a separate file.
I have this server block:server { server_name doamin.tld; set $maintenance on; if ($remote_addr ~ (127.0.0.1|10.1.1.10)) { set $maintenance off; } if ($maintenance = on) { return 503; } error_page 503 @maintenance; location @maintenance { root /var/www/html/global; rewrite ^(.*)$ /holding-page.html break; } root html; access_log logs/doamin.tld.access.log; error_log logs/doamin.tld.error.log; include ../conf/default.d/location.conf;}What is the correct way to pass a list to the$remote_addrinstead of coding it like (127.0.0.1| etc...)?
How can I set a range of remote IP addresses without passing a list?
The proxy_pass statement may optionally modify the URI before passing it upstream. See this document for details.

In this form:

location ^~ /api/ {
    proxy_pass http://myserver/;
}

the URI /api/foo is passed to http://myserver/foo.

By deleting the trailing / from the proxy_pass statement:

location ^~ /api/ {
    proxy_pass http://myserver;
}

the URI /api/foo is now passed to http://myserver/api/foo.
My Nginx installed and running, below is the config from/etc/nginx/nginx.conf, I want to forward all/api/*to my tomcat server, which is running on the same server at port 9100(typehttp://myhost:9100/api/appsworks) , otherwise, serve static file under '/usr/share/nginx/html'. Now I typehttp://myhost/api/appsgive an 404. What's the problem here?upstream myserver { server localhost:9100 weight=1; } server { listen 80 default_server; listen [::]:80 default_server; server_name _; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location ^~ /api/ { proxy_pass http://myserver/; } location / { } }
Nginx reverse proxy return 404
Probably the nginx user does not have rights to read the second file. Options:

1) change the chmod of that file so it can be read by everyone
2) add the nginx user and the file-owner user to the same group and allow the group to read that file
I just set up an nginx server. I can visit my webpage ( an "under construction" page ), but although one image is server properly by the server ( named "logo.png" ), another image on the same directory ( I have everything under the root directory of nginx ) is not served and throws a "403 - Forbidden" error ). Below I show you the "http" part of my nginx.conf file.http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name ***********.com; root html; location / { index index.html index.htm; } } }Do you have any suggestions about how I might solve this??
Nginx does not serve image ( 403 - forbidden error )
Like elektronaut indicated, this is probably something that should be handled in your proxy's configuration. That said, ActiveSupport::UrlFor#url_for has some information that might be useful. Take a look athttp://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/routing/url_for.rbWhat I think it boils down to is passing two arguments into your url_for and/or link_to calls. First is the:port => 123argument, the second is:only_path => falseso that it generates the full link including domain, port, etc.So when generating a link, you might do:link_to 'test', root_url(:port => 80, :only_path => false)and when creating a custom url you might do:url_for :controller => 'test', :action => 'index', :port => 80, :only_path => falseFor a redirect:redirect_to root_url(:port => 80, :only_path => false)I hope this helps, and if it doesn't, can you be more specific about how you generating your URLs, what rails is generating for you, and what you would like it to generate.Update:I wasn't aware of this, but it seems you can set defaults for the URL's rails generates with url_for, which is used by everything else that generates links and/or URLs. There is a good write up about it here:http://lucastej.blogspot.com/2008/01/ruby-on-rails-how-to-set-urlfor.htmlOr to sum it up for you:Add this to yourapplication_controler.rbdef default_url_options(options) { :only_path => false, :port => 80 } endand this:helper_method :url_forThe first block sets defaults in the controllers, the second causes the url_for helper to use the one found in the controllers, so the defaults apply to that as well.
I have an Rails application server that is listening on port 9000, and is being called through haproxy. All my redirects from that server are being redirected back through port 9000, when they should be sent back on port 80.I am using a combination of haproxy + nginx + passenger. Is there a way to make sure all redirects are being sent through port 80, regardless of what port the actual server is listening on?I don't care if its a haproxy, nginx, Passenger, or Rails change. I just need to make sure most requests unless specified otherwise, are sent back to port 80.Thanks!
Send Redirects To Specific Ports
In any middleware you can use this example:

public function handle($request, Closure $next)
{
    $response = $next($request);

    return $response instanceof \Symfony\Component\HttpFoundation\Response
        ? $response->header('pragma', 'no-cache')
            ->header('Cache-Control', 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0')
            ->header('X-ANY-HEADER', 'any header value')
        : $response;
}

But I don't know whether this fixes your problem.
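If you would rather strip the header at the web-server layer instead, nginx can hide the upstream header and emit its own; a rough sketch for a php-fpm vhost (the socket path and cache values are assumptions, not from the question):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;   # hypothetical socket path
    fastcgi_hide_header Cache-Control;                # drop the header PHP sends
    add_header Cache-Control "public, max-age=300";   # emit the one you want instead
}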
My team and I are working on a Laravel API which communicates with a Vue.js frontend that uses the Apollo client to consume the GraphQL responses.We have an issue with cache-control headers being added to the response.Apollo cannot cache the contents because the response contains this header:Cache-Control: no-cache, privateIn php.ini, we have this to disable sending cache-control headers by PHP:; Set to {nocache,private,public,} to determine HTTP caching aspects ; or leave this empty to avoid sending anti-caching headers. ; http://php.net/session.cache-limiter session.cache_limiter =In the nginx config we cannot find anything that is setting those headers. I checked the global nginx.conf and config file we setup in sites/available.I can add this to the nginx config, but it will only add another header:add_header Cache-Control "public"; Cache-Control: no-cache, private Cache-Control: publicIf this header is not coming from PHP or nginx, then where could it be coming from? And how can I remove or overwrite it?Laravel 5.5Folkloreatelier/laravel-graphqlPHP 7.1nginx 1.14.0Ubuntu 16.04
How to remove Cache-control header no-cache
I run a similar setup and I ran into this problem as well. According to the docs:

"By default, when you specify an external_url starting with 'https', Nginx will no longer listen for unencrypted HTTP traffic on port 80."

I see that you are forwarding your traffic over HTTP and port 80, but telling GitLab to use an HTTPS external URL. In this case, you need to set the listening port:

nginx['listen_port'] = 80 # or whatever port you're using

Also, remember to reload the GitLab configuration after making changes to gitlab.rb. You do that with this command:

sudo gitlab-ctl reconfigure

For reference, here is how I do the redirect.

Nginx config on the reverse proxy server:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_pass http://SERVER_2_IP:8888;
}

The GitLab config file, gitlab.rb, on the GitLab server:

external_url 'https://gitlab.domain.com'
nginx['listen_addresses'] = ['SERVER_2_IP']
nginx['listen_port'] = 8888
nginx['listen_https'] = false
I'm running Gitlab behind my Nginx.Server 1 (reverse proxy): Nginx with HTTPS enabled and following config for/git:location ^~ /git/ { proxy_pass http://134.103.176.101:80; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; }If I dont change anything on my GitLab settings this will work but is not secure because of external http request like:'http://www.gravatar.com/avatar/c1ca2b6e2cd20fda9d215fe429335e0e?s=120&d=identicon'. This content should also be served over HTTPS.so if I change the gitlab config on hidden server 2 (http gitlab):external_url 'https://myurl' nginx['listen_https'] = falseas said in the docu. I will get a bad gateway error 502. with no page loaded.what can I do ?EDIT: Hacked it by setting:gitlab_rails['gravatar_plain_url'] = 'https://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'to https... this workes but is not a clean solution. (clone url is still http://)
Gitlab behind Nginx and HTTPS -> insecure or bad gateway
This ended up working for me.

Procfile:

web: vendor/bin/heroku-php-nginx -C nginx.conf public/

nginx.conf:

location / {
    # try to serve file directly, fallback to rewrite
    try_files $uri @rewriteapp;
}

location @rewriteapp {
    # rewrite all to app.php
    rewrite ^(.*)$ /index.php$1 last;
}
I have the following in `nginx.conf in my project rootlocation / { try_files $uri $uri/ /index.php?$query_string; }But only in the/path works, all others are coming up with a 404 error.How can I make Laravel work on heroku with nginx?
Laravel nginx.conf with official Heroku php buildpack?
The default storage backend for media files is local storage.

Your settings.py defines these two environment variables:

MEDIA_ROOT (link to docs) -- this is the absolute path to the local file storage folder
MEDIA_URL (link to docs) -- this is the webserver HTTP path (e.g. '/media/' or '//%s/media' % HOSTNAME)

These are used by the default storage backend to save media files. From Django's default/global settings.py:

# Default file storage mechanism that holds media.
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'

This configured default storage is used in FileFields for which no storage kwarg is provided. It can also be accessed like so: from django.core.files.storage import default_storage.

So if you want to vary the storage between local development and production use, you can do something like this:

# file_storages.py
from django.conf import settings
from django.core.files.storage import default_storage
from whatever.backends.s3boto import S3BotoStorage

app_storage = None
if settings.DEBUG == True:
    app_storage = default_storage
else:
    app_storage = S3BotoStorage()

And in your models:

# models.py
from file_storages import app_storage
# ...
result_file = models.FileField(..., storage=app_storage, ...)

Lastly, you want nginx to serve the files directly from your MEDIA_URL. Just make sure that the nginx URL matches the path in MEDIA_URL.
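On the nginx side, serving those locally stored media files is then usually just a static location whose filesystem path mirrors MEDIA_ROOT; a sketch with placeholder paths:

location /media/ {
    alias /srv/myproject/media/;   # must point at MEDIA_ROOT
    expires 7d;                    # optional client-side caching
}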
I use Amazon S3 as a part of my webservice. The workflow is the following:User uploads lots of files to web server. Web server first stores them locally and then uploads to S3 asynchronouslyUser sends http-request to initiate job (which is some processing of these uploaded files)Web service asks worker to do the jobWorker does the job and uploads result to S3User requests the download link from web-server,somedbrecord.result_file.urlis returnedUser downloads result using this linkTo work with files I useQueuedStoragebackend. I initiate myFileFieldslike this:user_uploaded_file = models.FileField(..., storage=queued_s3storage, ...) result_file = models.FileField(..., storage=queued_s3storage, ...)Wherequeued_s3storageis an object of class derived from...backends.QueuedStorageandremotefield is set to'...backends.s3boto.S3BotoStorage'.Now I'm planning to deploy the whole system on one machine to run everything locally, I want to replace this'...backends.s3boto.S3BotoStorage'with something based on my local filesystem.The first workaround was to use FakeS3 which can "emulate" S3 locally. Works, but this is not ideal, just extra unnecessary overhead.I have Nginx server running and serving static files from particular directories. How do I create my "remote storage" class that actually stores files locally, but provides download links which lead to files served by Nginx? (something likehttp://myip:80/filedir/file1). Is there a standard library class for that in django?
Local filesystem as a remote storage in Django
After experimenting around a bit I finally got the configuration right, so the server works exactly how I wanted: server { location /node_app/ { proxy_pass http://localhost:3000/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Lesson learned: remember the slashes!
I run my node app onlocalhost:3000and it is serving a default page for the route/. If I accesshttp://localhost:3000the default page is displayed accordingly. I have also running a Nginx server that is basically configured as followed:server { listen 80; server_name localhost; location /node_app { proxy_pass http://127.0.0.1:3000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } }If I runhttp://localhost/node_appnow, my node app throws an error saying that it cannot find route/node_app.How can I configure either my node app or the nginx server in a way that I can access the app by callinghttp://localhost/node_app, yet the app itself thinks it is at/?UpdateIf I add a/tohttp://127.0.0.1:3000it is actually matching/node_appto the/route. But now every stylesheet for instance within the default page is now pointing to the wrong path.
Forwarding port to Node.js app with Nginx and routing
You can just use the rewrite directive without a location block: rewrite ^/$ /index.html last; or, for a permanent redirect: rewrite ^/$ /index.html permanent; To rewrite with parameters, e.g. http://www.foo.com/?param=value -> http://www.foo.com/index.html?param=value: rewrite ^/(\?.*)?$ /index.html$1 permanent;
I'm runnning nginx v 1.0.4 and we're trying to do the following:location ~ ^/$ { rewrite  ^.*$  /index.html  last; }Basically: If the user gets to the the default domainhttp://www.foo.comorhttp://www.foo.com/redirect them tohttp://www.foo.com/index.htmlWhen I add this to my conf file, I get the following: Starting nginx: nginx: [emerg] unknown directive " " in /etc/nginx/myconf.confThanks in advance.
nginx location fix: Redirect to index.html
I can answer my own question (after several hours of looking in completely the wrong place). A good read-up on Authlogic::Session::Config did the trick. class UserSession < Authlogic::Session::Base allow_http_basic_auth false end
I am in the early stages of building an app using Rails 3. User authentication is powered by Authlogic which I have setup pretty much as standard (as per the example docs) and everything is working as expected locally.I have just deployed the app to a clean server install of Centos 5.4 / NginX / Passenger so staff can start to log in and enter content, etc. However, we're a long way from this being ready for public eyes so I have used NginX's basic auth module to keep the entire site behind another level of authentication.Unfortunately Authlogic's authentication and NginX's basic authentication seem to be conflicting with one another. If basic auth is on then it is impossible to log in with Authlogic, yet if I disable basic auth then Authlogic works as expected.I haven't posted any code as I'm really not sure what code would be relevant. I wonder whether this is a known issue and if there is any changes I can make to the configuration to get round the issue?
Rails 3, Authlogic, NGINX and HTTP basic authentication no working nicely together
The .ebextensions method is not working now. Please try the .platform method. Create a folder called .platform in your project root folder: .platform/ nginx/ conf.d/ timeout.conf 00_myconf.config Content of file 1 - timeout.conf (inside the .platform/nginx/conf.d/ folder): keepalive_timeout 600s; proxy_connect_timeout 600s; proxy_send_timeout 600s; proxy_read_timeout 600s; fastcgi_send_timeout 600s; fastcgi_read_timeout 600s; Content of file 2 - 00_myconf.config (inside the .platform/ folder): container_commands: 01_reload_nginx: command: "service nginx reload" Re-upload your application and see the changes.
I want to increase the default timeout of nginx in a nodejs environment in AWS elastic beanstalk, i'm following this guide:https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6adbut it's not working, if i upload my application i receive this error Unsuccessful command execution on instance id(s) 'i-xxxxxxxxxx'. Aborting the operation. any suggestion? i'm trying to use .ebextension and this is the code of my 01-timeout.config filefiles: "/etc/nginx/conf.d/01-timeout.conf": mode: “000644” owner: root group: root content: | keepalive_timeout 600s; proxy_connect_timeout 600s; proxy_send_timeout 600s; proxy_read_timeout 600s; fastcgi_send_timeout 600s; fastcgi_read_timeout 600s; container_commands: nginx_reload: command: "sudo service nginx reload"Thanks for any help.Updatenow the deploy it's ok, but the timeout doesn't work, it's like before with the timeout of 60s, reading the logs seems that the reload of nginx it's made, this is the message: Command nginx_reload succeeded , any clue of what is the problem?
aws beanstalk nodejs: how to override 60s timeout of nginx
OpenResty is an enhanced version of Nginx which combines Nginx with Lua. Unless you are planning to use Lua, there is no benefit to choosing OpenResty over Nginx. Since you are running a Laravel-based website, there will be no benefit for you.
As a novice web developer, I tend to use Nginx when deploying & running my Larvel PHP sites.I've recently come across OpenResty and, from what I believe, it appears to be webserver software like that of Nginx.As someone who is always looking to improve the websites I make, will using Open Resty over Nginx improve the development and overall quality or experience of my Laravel websites?
What's the difference between OpenResty and Nginx?
Funnily enough, I solved the issue while writing the question. Adding fastcgi_buffering off; to the Nginx config fixes the issue. But I still don't understand what the problem was and why disabling buffering fixed it, so if anyone can explain it, I don't mind marking that answer as the solution.
This issue happens on a pure PHP files served by Nginx & PHP-FPM. I've stumbled upon this issue while developing my website using Symfony but the problematic content length range is 3702-15965 for that (I wonder why it's different than vanilla PHP).What I've tried so far:Timeout duration is 15 seconds but I've tried increasing it to 300 seconds and it still timeouts. So I'm guessing it's infinite loopy thing.It doesn't look like it's resource related because it works even if content length is 5 million characters.Created various tests with different characters to see if I can cause changes to the problematic content length range. Answer is no, range stayed same for all my tests.I have tried disabling gzip. It didn't change the length range but the response changed. Gzip enabled response: "upstream request timeout" | Gzip disabled response: Completely blankNotes:This issue doesn't exist on my localhost.Itrarelyopens the page normally. I can't reproduce this consistently.There are no errors in Nginx, PHP or GCR logs besides the "request timed out" lines.Any help is appreciated. Thanks.
Google Cloud Run website timeouts when content length is between 4013-8092 characters. What is going on?
Remove: location = /status { <============== HERE stub_status on; default_type text/plain; access_log off; allow 127.0.0.1; deny all; } And just create a new config file status.conf with the following content: server { listen localhost; server_name status.localhost; keepalive_timeout 0; access_log off; allow 127.0.0.1; deny all; location /nginx_status { stub_status on; } } Reload the Nginx config: sudo nginx -s reload Test the URL: $ curl http://127.0.0.1/nginx_status Active connections: 6 server accepts handled requests 1285282 1285282 17041299 Reading: 0 Writing: 6 Waiting: 0 New Relic config: url=http://localhost/nginx_status
I'm trying to create an /status to use with newrelic, but it's always returning 404.$ curl 127.0.0.1/status 404 Not Found 404 Not Found nginx/1.17.1 Here is my nginx conf file (it's uses certbot as well).server { server_name mysite.com api.mysite.com painel.mysite.com; location / { root /var/www/mysite-prod/public; index index.php index.html index.htm; try_files $uri $uri/ /index.php?$query_string; } location ~ \.(php|phar)(/.*)?$ { root /var/www/mysite-prod/public; fastcgi_split_path_info ^(.+\.(?:php|phar))(/.*)$; fastcgi_intercept_errors on; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass php-fpm; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { location = /status { <============== HERE stub_status on; default_type text/plain; access_log off; allow 127.0.0.1; deny all; } if ($host = panel.mysite.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = api.mysite.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = mysite.com) { return 301 https://$host$request_uri; } # managed by Certbot server_name mysite.com api.mysite.com painel.mysite.com; listen 80; return 404; # managed by Certbot }Am I doing something wrong?I'm using AWS Linux and followed this guide:https://www.scalescale.com/tips/nginx/nginx-new-relic-plugin/#
nginx - how to create /status with stub_status
First of all, not all variables need to be specified using environment variables. Keep variables that do not differ per system in a separate yaml file.When you have just one environment per server you can specify the environment variables globally in/etc/environment. (Might be different depending on your Linux flavour)Personally I find that using DotEnv poses more difficulties than solutions when you run multiple environments on the same server. Specifying the variables in a global configuration like/etc/environmentdoesn't work in that case.Specifying the environment variables in nginx isn't a solution either since, as you mentioned, they won't be picked up by cron, supervisor, the console, etc. For me, this was the reason to completely remove DotEnv and work with the good oldparameters.yamlfile again. Nothing will stop you from doing that.Another solution however is to keep using DotEnv in your development environment and to include a separateparameters.yamlin production. You can then define the environment variables as follows:parameters: env(APP_ENV): prod env(APP_SECRET): 3d05afda019ed4e3faaf936e3ce393ba ...A way to include this file is to put the following in your services.yaml file:imports: - { resource: parameters.yaml, ignore_errors: true }This way, the import will be ignored when no parameters.yaml file exists. Another solution is to add a line toconfigureContainer()in your Kernel class:$loader->load($confDir.'/parameters'.self::CONFIG_EXTS, 'glob');
.env files are very handy with docker, kubernetes, etcBut what if I have simple nginx server without any orchestration and a pack of cron workers and a pack of daemons(systemd/supervisord/etc)?I can write these env variables to nginx server section, but I have to set up hundreds of env variables to each cron worker or daemon.I found a quick solution: using symfony/dotenv component in production.But it seems to me dirty. Who can suggest better solution?
Symfony 4, .env files and production
Use a separating colon (:). For example: proxy_pass http://unix:/home/ubuntu/projects/UsersDB-api/app.sock:/; See this document for details.
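Put together, a sketch of the full location block (reusing the socket path and the /usersDB/ prefix from the question) could look like this; the URI part after the second colon is what replaces the matched /usersDB/ prefix before the request reaches gunicorn:

location /usersDB/ {
    include proxy_params;
    # /usersDB/helloWorld is forwarded to the socket as /helloWorld
    proxy_pass http://unix:/home/ubuntu/projects/UsersDB-api/app.sock:/;
}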
I have one server that has several APIs running on it. One of them isusers-DBThe following gets down to gunicorn just fine:location /usersDB/ { include proxy_params; proxy_pass http://unix:/home/ubuntu/projects/UsersDB-api/app.sock; }Except when I try to access the usersDB API's /helloWorld route, and look in the logs at gunicorn.err I see:GET /usersDB/helloWorldI was hoping to see:GET /helloWorldOf course, gunicorn returns 404s and that is what I see in my browser. I've tried rewrite rules:location /usersDB/ { rewrite /usersDB/(.*) /$1 last; include proxy_params; proxy_pass http://unix:/home/ubuntu/projects/UsersDB-api/app.sock; }But the above results in the requests making their way to/var/www/htmlhelloWorldinstead of app.sock.I know that if you use a url for the proxy_pass you just add atrailing/, but I'm not sure what to do in the case of a sock file.How do I get rid of the/usersDB/suffix that is now included on all routes in nginx?
Redirect/rewrite nginx location to .sock file without prefix
According to this doc, put this in /etc/gitlab/gitlab.rb: # Disable the built-in Postgres postgresql['enable'] = false # Fill in the connection details for database.yml gitlab_rails['db_adapter'] = 'postgresql' gitlab_rails['db_encoding'] = 'utf8' gitlab_rails['db_host'] = '127.0.0.1' gitlab_rails['db_port'] = 5432 gitlab_rails['db_username'] = 'USERNAME' gitlab_rails['db_password'] = 'PASSWORD' And run this command to apply these values: sudo gitlab-ctl reconfigure. Also, you need to seed your database if you choose an external one. This command will do it with omnibus-gitlab: sudo gitlab-rake gitlab:setup
When installingGitlabby default Nginx and Postgres .. among other things are installed regardless of whether you have them already or not. So since I have these two already, I am trying to configure gitlab to use them, I have done this for Nginx, Using:$ vi /etc/gitlab/gitlab.rb: # Disable GitLab's nginx completely nginx['enable'] = false # Set external web user which is 'nginx' on CentOS 7 web_server['external_users'] = ['nginx']but I need to know how to do the samepostgres.
How to configure gitlab to use existing postgres server
I think I figured out what you were trying to do. The proper way is to use try_files together with a named location. Try the following configuration: # IP which nodejs is running on upstream app_x { server 127.0.0.1:3000; } # nginx server instance server { listen 80; server_name x.x.x.x; #access_log /var/log/nginx/x.log; location / { root /var/www/x/public; index index.html index.htm index.php; try_files $uri $uri/ @node; } location @node { proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://app_x; } } Note: when you have an upstream defined, you should use it in your proxy_pass. Also, when proxying, always add the X-Forwarded-For header.
I have a problem with my current nginx configuration. What I am trying to do is:For requests without any path, get the index.html (works)Get existing files directly (works)If the requested file or path does not physically exist, proxy request to nodejs (404)I have tried several configurations found here on stackoverflow, but none of them fit my needs.Here is my current configuration:# IP which nodejs is running on upstream app_x { server 127.0.0.1:3000; } # nginx server instance server { listen 80; server_name x.x.x.x; #access_log /var/log/nginx/x.log; root /var/www/x/public; location / { root /var/www/x/public; index index.html index.htm index.php; } location ^/(.*)$ { if (-f $request_filename) { break; } proxy_set_header Host $http_host; proxy_pass http://127.0.0.1:3000; } }
nginx + nodejs configuration
Emperor mode is for handling multi-application environments. It basically monitors the directories you specify for new apps and for events you want it to respond to. Pros: You can gracefully reload a site when you update your code by touching the vassal file. Apps respawn on crashes and reboots. It scales very nicely if you need to add multiple servers. It throttles your vassals to prevent denial of service (DoS). Cons: I'm not sure there are any. I believe this is the preferred way to run apps (even if only one). I'm not 100% certain, but I believe launching with the settings provided in the docs will only launch an app Nginx passes. There are two issues I see with this: you're stuck with Nginx (not saying that's bad, but if you wanted to play around with, or decided to move to, another server, you might need to redo the setup), and it does not provide any of the benefits I mentioned earlier. A minimal emperor setup is sketched below.
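As a rough sketch (directory and app names here are placeholders, not taken from the question), emperor mode watches a directory of vassal configs and spawns one uWSGI instance per file:

# start the emperor, watching a directory for vassal ini files
uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data

# /etc/uwsgi/vassals/myapp.ini -- one vassal file per application
[uwsgi]
chdir = /srv/www/myapp
module = wsgi:application
master = true
processes = 4
socket = 127.0.0.1:9001

Touching a vassal file (touch /etc/uwsgi/vassals/myapp.ini) gracefully reloads just that app.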
I am doing multi-application nginx+uWSGI setup and I wonder if I should use dynamic mode of uWSGI as documentedhere(under Dynamic apps) or theEmperor mode. I am slightly more inclined to use the emperor mode but maybe it is not the best choice. What are pros/cons of each?
nginx+uWSGI: dynamic vs emperor mode
Answer found here: Unfortunately add_header won't work with status codes other than 200, 204, 301, 302 or 304. You can find this in the documentation here. You may be able to use this module to do what you want: http://wiki.nginx.org/NginxHttpHeadersMoreModule
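For illustration (the wildcard origin below is a placeholder; use your real origin), the headers-more module sets headers regardless of the status code, and on nginx 1.7.5 or newer the stock add_header directive accepts an 'always' flag that does the same:

# with the headers-more module
more_set_headers 'Access-Control-Allow-Origin: *';

# or, on nginx >= 1.7.5, without an extra module
add_header Access-Control-Allow-Origin * always;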
I am making a cross domain request in my web app.I have set the CORS headers on Nginx. Everything is working fine except when the service returns an error like 404, 400, 500 etc, instead of receiving the error code, the service is failing with an error saying that theOrigin *********** is not allowed by Access-Control-Allow-Origin.Any ideas why this might be happening?
Nginx services fails for cross-domain requests if the service returns error
WSGI is not like PHP. You can't just point uwsgi at a directory with a bunch of .py files. In fact, never, ever make your Python modules available in a public directory accessible from the server. You need to hook uwsgi up to a WSGI application, preferably a framework. Read more about WSGI here. Check out bottle, which is a small, simple WSGI framework. It has great docs, and it's easy to get started with. There are actually tons of great web frameworks for Python though, so feel free to look around :)
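As a minimal sketch of what "hooking uwsgi up to a WSGI application" looks like (the file name and route are made up for illustration):

# app.py -- a tiny bottle application exposing a WSGI callable named 'app'
from bottle import Bottle

app = Bottle()

@app.route('/')
def index():
    return 'Hello World!'

# uwsgi is then pointed at the module rather than a directory, e.g.:
#   uwsgi -s 127.0.0.1:9001 --module app --callable app --pythonpath /srv/www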
I'm using uwsgi on Nginx to run some Python code.I'd like to bind uwsgi to a directory and make it render any .py file that I call from the server in the browser. I'm thinking like PHP, here (/index.php executes that file, /login.php executes that file).Is this a possibility? Or am I only able to explicitly specify a single module/app/file in uwsgi?Here is my init syntax:/opt/uwsgi/uwsgi -s 127.0.0.1:9001 -M 4 -t 30 -A 4 -p 4 -d /var/log/uwsgi.log --pidfile /var/run/uwsgi.pid --pythonpath /srv/wwwI thought that would allow/srv/wwwto act as the folder where any .py files are executed.Here is my nginx config:server { listen 80; server_name DONT_NEED_THIS; access_log /srv/www/logs/access.log; error_log /srv/www/logs/error.log; location / { root /srv/www; # added lines include uwsgi_params; uwsgi_pass 127.0.0.1:9001; }As it stands, when I try to call web root (ie www.site.com/) I get a:wsgi application not foundWith the following index.py file:import sys import os sys.path.append(os.path.abspath(os.path.dirname(__file__))) def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output]Any ideas?Thanks!
uwsgi + python + nginx + willy nilly file execution
http://nginx.org/en/docs/http/ngx_http_referer_module.html#valid_referers valid_referers server_names ~.; if ($invalid_referer) { return 403; } With this configuration, $invalid_referer is set only when the Referer header is missing or empty (any non-empty referer matches the ~. regex), so the 403 is returned exactly in that case.
How do I know when the nginx variable $http_referer is not set or empty?I receive some requests that don't have a http referer. In nginx logs $http_referer appears like that: "-". What I am trying to do is to "return 403;" if the $http_referer is not set or empty as in this case.Thanks!
Nginx - How do I know when $http_referer is not set or empty?
I'd second Passenger + Nginx. Very low memory use, and it's not too difficult to set up. What type of server are you deploying to? Specs? OS? I'd take that into consideration as well, considering your available hardware. If you've got enough memory already, then it shouldn't be an issue whether it's Passenger or Apache; just optimize and cache your app efficiently.
I have a Ruby on Rails application that will be a CMS in way which means it's mostly DB intensive. I expect it to have decent amount of traffic so before designing I'm choosing what servers to use. Most important for me is performance.I heard good things about Nginx and many developers in the Rails community recommends it my only concern about it was that its version is 0.8 which is Beta I believe so I was concerned about potential problems. What is your say?Also, I want to decide between using Mongrel cluster or Phusion Passenger. What do you think?I'm planning to user Ruby 1.9 as it has better performance that Ruby 1.8 and I will be using VPS to host my website.My main things is performance even if it takes longer to setup one over the other.Your opinion is highly appreciated.Thanks,Tam
Should I user Apache or Nginx & Passenger or Mongrel for my Rails application
Is your ELB using HTTP/HTTPS listeners or TCP/SSL listeners? WebSockets only work with the latter protocol types. Change the listener to TCP and it will work. Alternatively, if you built your environment using the CLI or API, you can also rebuild your Elastic Beanstalk app using an Application Load Balancer (ALB) instead of a Classic Load Balancer (ELB), as the ALB also supports WebSockets. This option is not available via the web console.
There is a Laravel/Vue.JS app hosted on AWS behind a Classic Load Balancer (Elastic Beanstalk) and proxied internally via Nginx down to socket.io server. SSL is terminated on the Nginx.This is the nginx config:location /socket.io { proxy_pass http://127.0.0.1:6001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; }Right now the long-polling mode works fine, but it fails to initiate an upgrade:WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=HmDFtq-aj1WgfGUyAAAJ' failed: Error during WebSocket handshake: Unexpected response code: 400P.S Chrome'sFramestab I can only see this weird message:(Opcode -1)Has anybody successfully got socket.io working on an AWS Elastic Beanstalk environment? I just wasted two weeks dealing with this issue, would be very thankful for ANY suggestions or ideas. Thanks!Update. I turned on a verbose logging and here are the variables within Nginx:$host example.com $proxy_add_x_forwarded_for 134.xxx.xxx.xxx $http_upgrade - $remote_addr 172.31.10.208 $remote_user - $server_name _ $upstream_addr 127.0.0.1:6001 $request GET /socket.io/?EIO=3&transport=polling&t=Lw26sYn&sid=6L5iHma-GJOeE3JQAAAX HTTP/1.1 $upstream_response_time 24.658 msec $request_time 24.658Maybe someone will find some of these values incorrect so I would appreciate any advise.
AWS EB: Error during WebSocket handshake: Unexpected response code: 400
This variable is set by the root directive. You cannot use it in the root directive itself, because it would lead to an infinite loop. See http://nginx.org/r/root: "The path value can contain variables, except $document_root and $realpath_root." Use your own variable instead: set $my_root folder/my_root; root /$my_root; ... location = /404.html { root /$my_root/error_pages; } And don't try to put a leading slash into the variable: root $var would look for $var in some default directory like /usr/local/nginx or /etc/nginx.
While setting up my nginx configuration I came upon this. Does anyone have any idea on why this happens exactly?root /folder/my_root; index index.php index.html index.htm; error_page 404 /404.html; location = /404.html{ root $document_root/error_pages; //FAILS HERE with the error in the title internal; }
The $document_root variable cannot be used in the "root" directive
The error message nginx: [emerg] invalid host in upstream "172.17.0.2:5000/tcp" shows an upstream server parameter with the spurious characters /tcp following the port number. See the upstream module documentation. @Fishman confirmed that the erroneous parameter appeared within the nginx configuration file, that this file was generated from the Dockerfile, and that the problem was corrected by changing the value of the EXPOSE parameter (within the Dockerfile) from 5000/tcp to 5000.
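In other words, only the EXPOSE line in the question's Dockerfile needs to change; a sketch of the corrected file:

FROM microsoft/aspnet:latest
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
# plain port number -- Elastic Beanstalk copies this value into the nginx upstream block
EXPOSE 5000
ENTRYPOINT ["dnx", "-p", "project.json", "web"]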
I am trying to deploy .net asp web api application in Docker container to Amazon via elasticbeanstalk, but here is what I got:ERROR: Failed to start nginx, abort deployment ERROR: [Instance: i-ecf0d365] Command failed on instance. Return code: 1 Output: nginx: [emerg] invalid host in upstream "172.17.0.2:5000/tcp" in /etc/nginx/conf.d/elasticbeanstalk-nginx-docker-upstream.conf:2 nginx: configuration file /etc/nginx/nginx.conf test failedImage is here:https://hub.docker.com/r/wedkarz/awsm8-api/DockerfileFROM microsoft/aspnet:latest COPY . /app WORKDIR /app RUN ["dnu", "restore"] EXPOSE 5000/tcp ENTRYPOINT ["dnx", "-p", "project.json", "web"]elasticbeanstalk-nginx-docker-upstream.conf fileupstream docker { server 172.17.0.2:5000/tcp; keepalive 256; }
Nginx fail on Docker deployment to Amazon
This is what worked for me:sudo cp /usr/local/opt/nginx/*.plist /Library/LaunchDaemons sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plistThe trick to this is that Mac OSX won’t let anything other than “root” or “system” level services use a port number below 1024.Read more here:http://derickbailey.com/2014/12/27/how-to-start-nginx-on-port-80-at-mac-osx-boot-up-log-in/
I installed NGINX with homebrew then I got info and followed the instructions to load the launchd plist$ brew info nginx nginx: stable 1.6.2, devel 1.7.7, HEAD ... To load nginx: launchctl load ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist Or, if you don't want/need launchctl, you can just run: nginxThe problem is nginx doesn't load when I restart.The plist looks like this: Label homebrew.mxcl.nginx RunAtLoad KeepAlive ProgramArguments /usr/local/opt/nginx/bin/nginx -g daemon off; WorkingDirectory /usr/local
Launchd not loading nginx on startup
After some searching I found a working solution. I had to add the following lines to my /etc/nginx/nginx.conf: http { ... fastcgi_buffers 8 16k; fastcgi_buffer_size 32k; ... } Don't forget to edit with root rights, using sudo: sudo nano /etc/nginx/nginx.conf And then restart nginx: sudo /etc/init.d/nginx restart Source of the info: https://laracasts.com/discuss/channels/general-discussion/whoops-doesnt-show-any-errors-homestead-20
My Homestead Vagrant virtual machine is returning me a502 Bad Gatewayinstead of a Laravel Whoops error for some PHP errors (like class not found, some kind of parse errors etc ...).Does someone have the solution for briging Whoops for all PHP errors ?I could get the error reading manually/var/log/nginx/.app-error.loglike this :2014/11/27 15:15:44 [error] 1300#0: *12 FastCGI sent in stderr: "PHP message: PHP Fatal error: on line But it is very annoying for debugging ...Homestead version : 0.2.0. Laravel version : 4.2
Homestead 502 Bad Gateway instead of Whoops for PHP errors
It turns out that the WebDAV module built into nginx only implements part of the protocol; to enable full WebDAV we need to add the following external third-party module: nginx-dav-ext-module. Link to its GitHub: https://github.com/arut/nginx-dav-ext-module.git The configure parameters would now be: ./configure --with-http_dav_module --add-module=/path/to/the/above/module The built-in module just provides the PUT DELETE MKCOL COPY MOVE dav methods. The nginx-dav-ext-module adds the following additional dav methods: PROPFIND OPTIONS You will also need to edit the configuration file to add the following line: dav_ext_methods PROPFIND OPTIONS; After doing so, check that the syntax of the conf file is intact by issuing nginx -t and then soft reload (gracefully) nginx: nginx -s reload And voila! You should now be able to use cadaver or any other DAV client program to get into the directories. I cannot believe that I solved this, it drove me nuts for a while!
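For reference, a sketch of the server block from the question with the extra directive added (all paths unchanged from the question):

server {
    listen 8080;
    server_name localhost;
    root /root;
    auth_basic "Restricted";
    auth_basic_user_file /root/.htpasswdfile;
    create_full_put_path on;
    dav_access user:rw group:r all:r;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    # provided by nginx-dav-ext-module
    dav_ext_methods PROPFIND OPTIONS;
    autoindex on;
}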
I have built nginx on a freebsd system with the following configuration parameters:./configure ... –with-http_dav_moduleNow this is my configuration file:user www www; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; # reserve 1MB under the name 'proxied' to track uploads upload_progress proxied 1m; sendfile on; #tcp_nopush on; client_max_body_size 500m; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; #upload_store /var/tmp/firmware; client_body_temp_path /var/tmp/firmware; server { server_name localhost; listen 8080; auth_basic "Restricted"; auth_basic_user_file /root/.htpasswdfile; create_full_put_path on; client_max_body_size 50m; dav_access user:rw group:r all:r; dav_methods PUT DELETE MKCOL COPY MOVE; autoindex on; root /root; location / { } } }Now, the next things I do are check the syntax of the confiuration file by issuing anginx -tand then do a graceful reload as follows:nginx -s reload.Now, when I point my web-browser to the nginx-ip-address:8080 i get the list of my files and folders and so on and so forth (I think that is due to the autoindex on feature).But the problem is that when I try to test the webdav using cadaver as follows:cadaver http://nginx-ip-address:8080/It asks me to enter authorization credentials and then after I enter that it gives me the following error:Could not open Collection: 405 Not AllowedAnd the following is the nginx-error-log line which occurs at the same time:*125 no user/password was provided for basic authentication, client: 172.16.255.1, server: localhost, request: "OPTIONS / HTTP/1.1", host: "172.16.255.129:8080"The username and pass work just fine wheni try to access it from the web-browser, then what is happening here?
nginx webdav could not open collection
After banging my head against the wall for a couple of minutes, I just decided "what the heck, I'll fix the first error and see what happens." Lo and behold, removing the extraneous text/javascript MIME type in the gzip_types declaration fixed the problem.Hope this is helpful!
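Concretely, the fix was dropping text/javascript from the gzip_types line in the question's nginx.conf, leaving something like:

gzip_types text/css text/plain text/xml application/xml application/xml+rss application/x-javascript;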
Here's my nginx.conf:user www-data; worker_processes 1; worker_rlimit_nofile 8192; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 2048; # debug_connection 192.168.1.1; # multi_accept on; } http { server_tokens off; include mime.types; access_log /var/log/nginx/access.log; sendfile on; tcp_nodelay on; gzip on; # http://wiki.nginx.org/HttpGzipModule#gzip_disable gzip_disable "msie6"; gzip_types text/javascript text/css text/plain text/xml application/xml application/xml+rss application/x-javascript; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*.ngx; #tcp_nopush on; #keepalive_timeout 0; }When I try to start nginx, here's what I see:nginx: [warn] duplicate MIME type "text/javascript" in /etc/nginx/nginx.conf:27 nginx: [emerg] could not build the test_types_hash, you should increase either test_types_hash_max_size: 2048 or test_types_hash_bucket_size: 64This identical configuration has worked previously with no issues. What am I missing?
Nginx won't start, says "could not build test test_types_hash"
Still not sure how this happens, but the following solution seems to work: lsof /tmp/my_app.socket - lists the pids; kill -9 pid - (replace 'pid' with one of those listed). Then cap deploy:start from the local terminal.
I am trying to deploy code using Capistrano, and it fails ondeploy:startordeploy:stopbecause the Unicorn process is already killed. However if I try tocap deploy:start, I get a stderr claiming thatAddress already in use - /tmp/my_app.socket. How would this happen, and how might I get out of this mess?
Unicorn/Nginx process missing, socket open
First, remove the forwarding. Then you need to change the nameservers of the domain in your domain DNS management (if your domain is registered somewhere other than DigitalOcean). Add the nameservers below: ns1.digitalocean.com ns2.digitalocean.com ns3.digitalocean.com Now check if they are propagating by using whatsmydns.net (enter your domain, change the record type from A to NS, and click Search). Once they are propagating, add the domain to your DigitalOcean account: go to your DigitalOcean dashboard and click on Networking, add the domain and click Save, then edit the domain, add the droplet to the domain and save. Now click on the domain name and add an A record which points to your droplet. Hope this will resolve your issue.
I created a small hello world node app, then i hosted the app on digital ocean droplet, after that i can access my application onhttp://my_public_ip:3000Felt happy 😍Then i bought a domain name calledhelloworld.tkfree domain from freenom.com After that i install nginx as a webserver in my droplet then i added a reverse proxy code in /etc/nginx/sites-enable/defaultMy code looks like:server { listen 80; server_name helloworld.tk location / { proxy_pass http://localhost:3000; } }After that i went to domain management panel in my freenom.com and set url forwarding tohttp://my_public_ipSo if i enter my domain namehelloworld.tkin browser my node app successfully works 🤩 but wait what 🤔 my ip address is showing on left side below corner on chrome and if i refresh the page multiple times i get402 Too many request error page on nginxSo i deleted my url forwarding and in my domain management panel instead of url forwarding i set my nameservers like this ns1.digitalocean.com bla.bla.bla...Then i added my domain in my digitalocean panel. Now yes everything is working perfect.If i hit my url no ip address is showing, also notoo many requesterrors 😌My node app successfully getting executed!Wait i am a beginner for hosting node app, so i need help whether it is correct good setup for nodeapp on production?What is the difference between url forwarding and nameservers? Whether my nginx reverse proxy code is correct? is my reverse proxy working correctly?NOTE: I used pm2 for running node app on background.
Connecting my domain name on digital ocean droplet
Using the following piece of code const DIST_FOLDER = join(process.cwd(), 'dist'); app.set('views', join(DIST_FOLDER, 'browser')); means that the view engine will look for views in the directory dist/browser relative to the current working directory. The current working directory is the directory the node process was started from. So if you want your code to work the same way for the local and production environments (using nginx), you need to make sure that the directory you start node from is always the parent directory of the dist/browser directory. So you should run node (or pm2) from /var//
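One way to pin that working directory, if you happen to use pm2, is an ecosystem file that sets cwd explicitly. This is just a sketch with hypothetical paths, not part of the original answer:

// ecosystem.config.js -- hypothetical paths for illustration
module.exports = {
  apps: [{
    name: 'angular-ssr',
    script: 'server.js',
    // pm2 launches the process from here, so process.cwd() resolves predictably
    cwd: '/var/proj_name',
  }],
};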
Following up the official Angular tutorial on setting up SSR using Express server:https://angular.io/guide/universal#configure-for-universalThe tutorial would setup paths like this:... const DIST_FOLDER = join(process.cwd(), 'dist'); ... app.set('views', join(DIST_FOLDER, 'browser'));This works pretty well on the local server.However once deployed at the server (powered by Nginx), getting the error:Error: Failed to lookup view "index" in views directory "/home/user_name/dist/browser" at Function.render (/var/proj_name/server.js:44670:17) at ServerResponse.render (/var/proj_name/server.js:53701:7) at /var/proj_name/server.js:121:9 at Layer.handle [as handle_request] (/var/proj_name/server.js:46582:5) at next (/var/proj_name/server.js:46330:13) at Route.dispatch (/var/proj_name/server.js:46305:3) at Layer.handle [as handle_request] (/var/proj_name/server.js:46582:5) at /var/proj_name/server.js:45805:22 at param (/var/proj_name/server.js:45878:14) at param (/var/proj_name/server.js:45889:14)How to handle this correctly so the app works properly both locally (for development) and on the production server?EDIT:Have also tried to use__dirnameinstead:app.get('.', express.static(join(__dirname, 'browser')));But this fails both locally and on production server:Error: Failed to lookup view "index" in views directory "/browser"EDIT2:I have managed to make this work by movingbrowserfolder into~/dist/browser. But I don't want the app to work this way.Looks like the failing code is inserver.ts:// All regular routes use the Universal engine app.get('*', (req, res) => { res.render('index', { req }); });When ran locally, theconst DIST_FOLDER = join(process.cwd(), 'dist');returns correct output. However when ran on the real server (Ubuntu, Nginx) it gets:/home//dist/browserinstead. Using__dirnamedidn't help.So need some way to make sureres.render('index', { req });gets the correct resource.
Angular SSR 'Failed to lookup view' on production (Ubuntu, Nginx)
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file./opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh: mode: "000755" owner: root group: root content: | #!/bin/bash -xe rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf status=`/sbin/status nginx` if [[ $status = *"start/running"* ]]; then echo "stopping nginx..." stop nginx echo "starting nginx..." start nginx else echo "nginx is not running... starting it..." start nginx fi
My configuration worked up until yesterday. I have added thenginx NodeJS https redirect extension from AWS. Now, when I try to add a new Environment Variable through the Elastic Beanstalk configuration, I get this error:[Instance: i-0364b59cca36774a0] Command failed on instance. Return code: 137 Output: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 27395 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}. Hook /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.When I look at the eb-activity.log, I see this error:[2018-02-18T17:24:58.762Z] INFO [13848] - [Configuration update 1.0.61@112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity... [2018-02-18T17:24:58.939Z] INFO [13848] - [Configuration update 1.0.61@112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError) caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)What am I doing wrong? And what has changed recently since this worked fine when I changed an Environment Variable a couple months ago.
Errors adding Environment Variables to NodeJS Elastic Beanstalk
Shall I recreate the Nginx container after each update? If so, how? No, you just need to reload the nginx service most of the time. You can use: docker exec <nginx container name or id> nginx -s reload or docker kill -s HUP <nginx container name or id> Another option would be using a custom image that checks the nginx config checksum and reloads nginx whenever it changes. Example script: nginx "$@" oldcksum=`cksum /etc/nginx/conf.d/default.conf` inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \ /etc/nginx/conf.d/ | while read date time; do newcksum=`cksum /etc/nginx/conf.d/default.conf` if [ "$newcksum" != "$oldcksum" ]; then echo "At ${time} on ${date}, config file update detected." oldcksum=$newcksum nginx -s reload fi done You need to install the inotifywait package (provided by inotify-tools on most distributions).
We are using Nginx as a reverse proxy for docker-cloud services. A script is implemented to update the config file of Nginx whenever new service deploys on docker cloud or if service gets new url on docker-cloud.The Nginx and the script have been run in a docker container separately. The Nginx config file is mounted in Host(ECS). After updating the config file using script, it needs to reload the Nginx in order to apply the changes.First, I would like to know if this is the best way of updating Nginx config file and also what is the best way to reload the Nginx without any downtime?Shall I recreate the Nginx container after each update? if so, how?or it's fine to reload the Nginx from Host by monitoring the changes in the config file(using a script) and reload it with below command?docker exec NginxcontainerID | nginx -s reload
Update Nginx config file in a container with zero down time
The problem in my case was insufficient disk space mounted on root. I have a huge disk mounted on /home, but only had about 4 GB left on /. I assume that nginx was saving incoming request bodies there, and after it had filled up, the request was shut down. The way I fixed it was to add these lines to the nginx.conf file (not all of them are necessarily required): http { (...) client_max_body_size 100G; client_body_timeout 300s; client_body_in_file_only clean; client_body_buffer_size 16K; client_body_temp_path /home/nginx/client_body_temp; } The last line is the important part: it tells nginx to keep its temporary body files under /home, where there is space.
I have an Artifactory behind nginx and uploading files larger than 4 GB fails. I am fairly certain that this is nginx's fault, because if the file is uploaded from/to localhost, no problem occurs.nginx is set up to haveclient_max_body_sizeandclient_body_timeoutlarge enough for this not to be an issue.Still, when uploading a large file (>4 GB) via curl, after about half a minute it fails. The only error message I get isHTTP 500 Internal Server Error, nothing is written to the nginx's error logs.
nginx returns Internal Server Error when uploading large files (several GB)
You can use map, something like this: map $host $host_without_img { default ...; ~*img[0-9]\.(?<x_host_without_img>.*) $x_host_without_img; }
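A sketch of how the mapped variable could then be used in the redirect the question asks about (server names are illustrative, and the map above must sit at the http level):

server {
    listen 80;
    server_name img1.domain.com img2.domain.com;
    # e.g. img1.domain.com/pic.jpg -> domain.com/pic.jpg
    return 301 $scheme://$host_without_img$request_uri;
}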
I need to get only part of the $host variable. Domain is in the form img1.domain.comand I need to get "domain.com" and then use it in redirect.I am trying it wrong like this:$host ~* img[0-9]\.(.*); set $host_without_img $1;I know it would work, if I would put in in IF condition like this:if ($host ~* img[0-9]\.(.*)) { set $host_without_img $1; }But I just don't want to use IF, when it is not necessary.
How to get part of the $host variable in nginx?
With the fine help of the great people in #RubyOnRails on IRC I got this figured out. So thanks crankharder and sevenseacat for your input and advice. What I ended up with was this: class DomainConstraint def initialize(domain) @domains = domain end def matches?(request) @domains.include? request.host end end and: require 'domain_constraint' Rails.application.routes.draw do constraints DomainConstraint.new('api.project.dev') do resources :statuses root :to => 'statuses#index', as: 'api_root' end constraints DomainConstraint.new('admin.api.project.dev') do resources :statuses root :to => 'statuses#new' end end
$ rails -v Rails 4.2.1$ ruby -v ruby 2.2.2p95 (2015-04-13 revision > 50295) [x86_64-linux]I am building an API for a mobile app, which will have an admin interface to it. What i'm attempting to do, is run this through nginx using unicorn ( which I have running on my dev environment)I have 2 domains routed to the exact same rails project. These domains are:api.project.devandadmin.api.project.devI've read this:http://guides.rubyonrails.org/routing.html#advanced-constraintsand tried:Separate Domain for Namespaced Routes in Rails 4( see answer )I've tried a few other things to try and get this to work, the only thing that comes up ( for either sub-domain ) is:Invalid route name, already in use: 'root'My current implementation of this is:class DomainConstraint def initialize(domain) @domains = domain end def matches?(request) @domains.include? request.domain end endandrequire 'domain_constraint' Rails.application.routes.draw do resources :statuses constraints (DomainConstraint.new('api.project.dev')) do root :to => 'statuses#index' end constraints(DomainConstraint.new('admin.api.project.dev')) do root :to => 'statuses#new' end endkeep in mind that the roots are different pages only for now, but ultimately will be completely different systems.Not quite sure where to go from here in order to get this functioning as I would hope.
Rails Domain Constraints ( to serve multiple domains )
Your process most likely dies before it manages to finish its work. That's because PHP kills it after the response is returned to the client and the connection is closed. Process::start() is used to start a process asynchronously. You need to either wait() for it to finish or check whether it has finished yet with isRunning(): $process->start(); $process->wait(function ($type, $buffer) { // do sth while you wait }); Alternatively, use Process::run() instead of Process::start(). Use message queues if you want to process something in the background.
In a fresh symfony2-project (installation as describedhere), I would like to start a console-process as part of a request. The app runs on a "standard" ubuntu 14.04 box with nginx + php-fpm.Consider this controller-code:get('kernel')->getRootDir(); $env = $this->get('kernel')->getEnvironment(); $commandline = $rootDir . '/console --env=' . $env . ' acme:hello --who jojo' $process = new Process($commandline); $process->start(); return new JsonResponse(array('command' => $commandline)); } }When I issue a request to /command, I get my expected result and the process starts, e.g. I see it with htop and the like. When I issue this request again, I get my expected result, but the process to be started does not show up anywhere. No errors, no nothing.Restarting the php5-fpm service enables me to start one process through a request again, so basically I need to restart the whole php-service after each request. So this maybe is no programming-issue. But I don't know yet, honestly. The problem was described on stackoverflow before,Symfony2 - process launching a symfony2 command, but the workaround with exec is not working for me.Does somebody have a clue?Thanks, regards, jojo
symfony/process - Process silently not starting
The error was due to a conflict between rules in the nginx config file. So, the solution was: location ^~ /protected_files { # ^~ needed according to the nginx docs to stop nginx from checking more locations internal; alias /path/to/static/files/directory; } # avoid processing of calls to non-existing static files by my app location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ { try_files $uri =404; } Hope this helps many of you.
I am developing a webapp and X-Accel-Redirect header works fine only in files without extension. For some reason, if I add an extension to the file name the X-Accel-Redirect doesn't work.Working example:X-Accel-Redirect: /protected_files/myfile01.zNon-working example:X-Accel-Redirect: /protected_files/myfile01.zipI'm using nginx 1.7.1.Initially, The weird part is that if I change the extension part (in this case ".zip") with something not registed in the mime.types file, it works fine (Obviously I rename the file accordingly), but with a extension pointing to a know mime type (something like "zip", "jpg", "html") will generate a "404 Not found" error.UPDATE:It seems that the issue is due to this rule I have in the conf file:location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ { try_files $uri =404; }For some reason, it seems that nginx tests the existence of the file in the file system first and after that it tries the the "internal/aliased" path.Any ideas about how to let nginx to filter all the "/protected_files" coming from X-Accel-Redirect directly to the "internal" instead of trying to find in other paths first?Thanks in advance.
Nginx: X-Accel-Redirect not working in files with know MIME extension
Probably you managed to insert Windows line endings when you copied and pasted. If you have dos2unix, use it (dos2unix scriptfile). Otherwise, there are a number of similar utilities.
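If dos2unix isn't installed, a common stand-in (generic commands, shown here against the script path from the question) is stripping the carriage returns with sed or tr:

# either of these removes the trailing \r characters that break the shell parser
sed -i 's/\r$//' /etc/init.d/dropbox
# tr variant writes to a temp file; re-check the executable bit afterwards
tr -d '\r' < /etc/init.d/dropbox > /tmp/dropbox && mv /tmp/dropbox /etc/init.d/dropbox && chmod +x /etc/init.d/dropbox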
I'm trying to run this dropbox script on my nginx server , but im getting:Syntax error: word unexpected (expecting "do")I copy pasted the script for a website, and I tried removing special characters, but im still getting the same error.script:#!/bin/sh # /etc/init.d/dropbox ### BEGIN INIT INFO # Provides: dropbox # Required-Start: $network $syslog $remote_fs # Required-Stop: $network $syslog $remote_fs # Should-Start: $named $time # Should-Stop: $named $time # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start and stop the dropbox daemon for debian/ubuntu # Description: Dropbox daemon for linux ### END INIT INFO DROPBOX_USERS="root" start() { echo "Starting dropbox..." for dbuser in $DROPBOX_USERS; do start-stop-daemon -b -o -c $dbuser -S -x /home/$dbuser/.dropbox-dist/dropboxd done } stop() { echo "Stopping dropbox..." for dbuser in $DROPBOX_USERS; do start-stop-daemon -o -c $dbuser -K -x /home/$dbuser/.dropbox-dist/dropboxd done } status() { for dbuser in $DROPBOX_USERS; do dbpid=`pgrep -u $dbuser dropbox` if [ -z $dbpid ] ; then echo "dropboxd for USER $dbuser: not running." else echo "dropboxd for USER $dbuser: running." fi done } case "$1" in start) start ;; stop) stop ;; restart|reload|force-reload) stop start ;; status) status ;; *) echo "Usage: /etc/init.d/dropbox {start|stop|reload|force-reload|restart|status}" exit 1 esac exit 0
bash script Syntax error: word unexpected (expecting "do")
You can force HTTP headers to influence the browser caching behavior; however, this is probably not a good idea in a production environment where you want caching. So simply use something like expires -1 to force a Cache-Control: no-cache header. Check here for more information: http://wiki.nginx.org/HttpHeadersModule That being said, I have gotten myself in the habit of just changing image and static file names as I revise them. Perhaps this comes from working with CDNs, where this can be incredibly helpful. So say I have static files that I might update often (i.e. they are not part of some specific piece of content). I would name them like: someimagev1.jpg someimagev2.jpg somejs1.js somejs2.js etc. I change the names (and the links in the HTML source) as needed.
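A sketch of how that directive might be scoped to just scripts, styles and images (extensions chosen for illustration):

location ~* \.(js|css|png|jpg|gif)$ {
    # sends an Expires date in the past plus Cache-Control: no-cache
    expires -1;
}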
I am updating my site frequently after finishing updates my clients reporting that old images & scripts are getting loaded instead of new ones. I know they are coming from their browser cache but is there any way i can force scripts not to load from cache in server.I am using nginx with php-fpm.
nginx prevent loading from cache
map lets you define a variable's value based on another variable. map should be declared at the http level (i.e. outside of server): map $http_x_header $file_suffix { default "2"; OK "1"; } Then the following location should do the trick, using your new variable $file_suffix: location ~ ^(/files_dir/.+)\.js$ { root html; try_files $1$file_suffix.js =404; }
I have nginx 1.0.8 installed. here is my problem: I have 2 files :file1.jsandfile2.js. the requested path is something like this:www.mysite.com/files_dir/%user%/file.jsIf the requested header : "X-Header" exists and has the value "OK" then the responded content should be file1.js else file2.js.The files are situated in "html/files_dir" and %user% is a set of directories that represents the usernames registered through my service.How do I configure this in nginx? I'm not interested in php, asp or similar technologies only if it's possible with nginx.Thanks
nginx - response based on the requested header
I've found the answer, this solution. It's the 2nd post down the thread. Comment by [email protected], Oct 15, 2009: The directives are almost identical if you're using nginx. Open /etc/nginx/mime.types and add the following three lines inside your types {} declaration (in recent versions of nginx they're already there): text/x-component htc; application/x-shockwave-flash swf; image/svg+xml svg;
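For clarity, the relevant part of /etc/nginx/mime.types then looks roughly like this (surrounding entries omitted):

types {
    # ...existing entries...
    text/x-component              htc;
    application/x-shockwave-flash swf;
    image/svg+xml                 svg;
}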
I'm trying to run rounded corners on <= IE8 using border-radius.htc locatedhere. I've run the URL to the .htc file in my browser, and I can view the code so my path is correct in the css file. I'm using nginx to host my webpages.Does anyone know how I can get this file to run so that the styling works in < IE9? I've read somehereaboutMIMEtypes for .htc extensions, but I don't know what to do for nginx or even ifMIMEtype is the issue. If there is some other way to get the rounded corners without using an .htc file, I'm open to try that solution as well. Thanks.
how to configure .htc files to work on nginx
You don't use the dist folder. You should publish the application using dotnet publish -c Release and copy the folder bin\Release\netstandard2.1\publish\wwwroot to /srv/sites/app.mysite.local. The nginx.conf should be: events { } http { include mime.types; types { application/wasm wasm; } server { listen 80; location / { root /srv/sites/app.mysite.local; try_files $uri $uri/ /index.html =404; } } }
I have built a blazor webassembly site which I host on an ubuntu server using nginx. The configuration of the site in nginx is like this:server { server_name app.mysite.local; root /srv/sites/app.mysite.local; index index.html; }I published the site using visual studio to a local folder and copied the /dist folder to the root of /srv/sites/app.mysite.local. The site is now working on app.mysite.local but I get the folowing error messages:WASM: wasm streaming compile failed: TypeError: Failed to execute 'compile' on 'WebAssembly': Incorrect response MIME type. Expected 'application/wasm'.WASM: falling back to ArrayBuffer instantiationI tried adding 'application/wasm wasm' to the files /etc/mime.types and /etc/nginx/mime.types and restarted the server but without any effect. I don't know if these error messages are connected.What I also noticed is when I go to /account/companyname using the menu on the index page, the page is displayed but I when I type the url app.mysite.local/account/companyname in the address bar I get an nginx 404 error. Maybe this is solved when I solve the mime type issue but I don't know that for sure.Can anybody help me solving the mime type issue? Let me know if you need more information. Thanks in advance!
How to serve the right mime type for a blazor site on ubuntu nginx
If you have recently switched to Python 3, please take a look here for a reference to octal literals in Python 3. Changing your settings as follows should fix it: FILE_UPLOAD_PERMISSIONS = 0o644 This is also helpful in writing Python 2/3 compatible code.
Today I noticed that whenever I upload a file through my Django site the file is uploaded with the file permissions 0600 meaning whenever a non root user wants to view the file (nginx) a 403 is shown.This only started happening today from what I can tell. I have checked both the file_upload_permissions and file_upload_directory_permissions in the Django settings file and they are both set to 0644.I haven't done any Linux/Django updates recently so that shouldn't be the cause, any help would be greatly appreciated.Thanks,Sam
Nginx/Django File Upload Permissions
TL;DR nginx: fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for php: $_SERVER['HTTP_MERGED_X_FORWARDED_FOR'] Explanation: You can access all HTTP headers with the $http_ variables. When using these variables, nginx will even do header merging for you, so CustomHeader: foo CustomHeader: bar gets translated to the value foo, bar. Thus, all you need to do is pass this variable to PHP with fastcgi_param: fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for Proof of concept, in your nginx server block: location ~ \.php$ { fastcgi_pass unix:run/php/php5.6-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for; include fastcgi_params; } test.php: GET /test.php HTTP/1.1 > Host: localhost > User-Agent: curl/7.47.0 > X-Forwarded-For: 127.0.0.1 > X-Forwarded-For: 8.8.8.8 > < HTTP/1.1 200 OK < Server: nginx/1.10.3 (Ubuntu) < Date: Wed, 01 Nov 2017 09:07:51 GMT < Content-Type: text/html; charset=UTF-8 < Transfer-Encoding: chunked < Connection: keep-alive < * Connection #0 to host localhost left intact 127.0.0.1, 8.8.8.8 Boom! There you go, you have access to all X-Forwarded-For headers as a comma-delimited string in $_SERVER['HTTP_MERGED_X_FORWARDED_FOR']. Of course, you can use whatever name you want, not just HTTP_MERGED_X_FORWARDED_FOR.
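On the PHP side, a minimal sketch of consuming the merged header (the parameter name matches the fastcgi_param defined above; the splitting logic is illustrative):

<?php
// test.php -- echoes each forwarded client IP on its own line
$merged = isset($_SERVER['HTTP_MERGED_X_FORWARDED_FOR']) ? $_SERVER['HTTP_MERGED_X_FORWARDED_FOR'] : '';
foreach (array_map('trim', explode(',', $merged)) as $ip) {
    echo $ip, PHP_EOL;
}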
When user using proxy (Google data saver etc), the browser adds X-Forwarded-For for clients' real ip address to server. Our load balancer passes all headers + the clients' ip address as X-Forwarded-For header to nginx server. The example request headers:X-Forwarded-For: 1.2.3.4 X-Forwarded-Port: 80 X-Forwarded-Proto: http Host: *.*.*.* Accept-Encoding: gzip, deflate, sdch Accept-Language: en-US,en;q=0.8,tr;q=0.6 Save-Data: on Scheme: http Via: 1.1 Chrome-Compression-Proxy X-Forwarded-For: 1.2.3.5 Connection: Keep-aliveIs there any way to pass both of the X-Forwarded-For headers to php, respectively?
Can nginx handle duplicate X-Forwarded-For headers?
This is a default module. You don't have to do anything to compile it; if you have already compiled nginx with HTTP support, it is included.
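As a quick sanity check (generic directives, nothing project-specific), you can drop a directive from that module into any server or location block and run nginx -t; it would only complain about an unknown directive if the module were missing, and nginx -V shows the build's configure arguments:

# both directives below come from ngx_http_headers_module, built in by default
add_header X-Test "ok";
expires 1h;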
I found the reference on how to use the ngx_http_headers_module, but nowhere an explanation of how to compile Nginx with this module. Any help would be appreciated.
how to compile nginx with ngx_http_headers_module
I finally figured it out.

Step 1: Make sure Nginx is sending the necessary forwarding headers, for example:

server {
    # other stuff ...
    location / {
        # other stuff ...
        proxy_set_header X-Forwarded-Proto $scheme; # you could also just hardcode this to https if you only accept https
    }
}

Step 2: By default, AspNetCore will ignore these headers. Install the middleware that processes them:

PM> Install-Package Microsoft.AspNetCore.HttpOverrides

Step 3: In your Configure function, apply the middleware:

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedProto
});

This should correctly change the Context.Request.Scheme value to https, which will cause the authentication middleware to generate the correct redirect_uri.
I am creating an AspNetCore application with Google authentication. I am deploying this app behind an nginx reverse proxy on an Ubuntu server. Almost everything is working, but I am having trouble with the callback URL.

In the Google developer console, I have http://localhost:5000/signin-google set as an authorized redirect URI. This works as expected and allows me to use Google authentication when running from my workstation.

For production, I have https://myserver/signin-google set as an authorized redirect URI. However, when I try to use it, I get an error from accounts.google.com that http://myserver/signin-google (notice the missing s) is not authorized. That's true; it shouldn't be authorized, and my server doesn't even respond to port 80 requests.

How can I tell the authentication middleware that I need it to use HTTPS for the callback URL?
How to force an HTTPS callback using Microsoft.AspNetCore.Authentication.Google?
This is similar to what the X-Forwarded-For and X-Forwarded-Proto headers are used for, but there is no standard header for communicating the HTTP protocol version to the backend. I recommend using this:

proxy_set_header X-Forwarded-Proto-Version $http2;

The $http2 variable comes from ngx_http_v2_module, which I presume you are using with Nginx to serve HTTP/2.

The difference between $http2 and $server_protocol is that $http2 works more like a boolean, appearing blank if the HTTP/1 protocol was used. $server_protocol will contain values like "HTTP/1.1" or "HTTP/2.0", so it could also be a good choice depending on your needs.
I have a simple rails application:

(browser) -> (nginx latest; proxy_pass) -> rails (latest)

How do I configure nginx to notify rails that it received an HTTP/2 request via a different header, i.e. my_http_version = "2.0"? proxy_pass communicates with rails via HTTP 1.1, and I want to know if the original request was HTTP/2. Thank you!
how to forward http 2 protocol version via proxy?
You could also simply use nginx as a proxy for your minecraft server, and forward traffic from ingress port 25565 to the minecraft server. That way all traffic goes through one Service
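A minimal sketch of that TCP forwarding with nginx's stream module (the upstream name minecraft-service is a placeholder for however the Minecraft pod is reachable inside the cluster):

# requires an nginx build that includes the stream module; this block sits at the top level of nginx.conf
stream {
    server {
        listen 25565;                         # port exposed alongside 80/443
        proxy_pass minecraft-service:25565;   # placeholder in-cluster address of the Minecraft server
    }
}

If you are running the nginx ingress controller, the same effect is normally achieved through its TCP services ConfigMap rather than by editing nginx.conf by hand.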
I have the following services hosted in my Kubernetes cluster on AWS:

An nginx server, on ports 80 and 443.
A Minecraft server, at port 25565.

Both are working great. I currently have both of them set to type: LoadBalancer, so they both have Elastic Load Balancers that are providing ingress to the cluster. I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP.

I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. Here's the setup on a gist. When I try and curl the allocated nodePort from inside the cluster, it just hangs.

Is there a way to have one service balance to multiple services?
How can I have one Kubernetes LoadBalancer balance to multiple services?
To disable SSLv3, you'll have to edit the default server configuration, not just an arbitrary virtual host config. It can only be disabled for a listen socket, not just a virtual server. The configuration snippet you've provided suggests that you are using per-server included configuration files, so you'll have to find the one with default_server in the appropriate listen directive, and disable SSLv3 there:

server {
    listen 443 default_server ssl;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ...
}

Or, better yet, edit the configuration at the http level, in nginx.conf:

http {
    ...
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ...
}

You may also consider upgrading nginx to a recent version. In nginx 1.9.1+ SSLv3 is disabled by default.
Why is SSLv3 still enabled on my server? I want to disable it because some computers cannot open my page due to security issues. I found this guide, but I have already applied its settings. My server is hosted in Google Cloud, and I currently have this Nginx configuration file:

...
ssl on;
ssl_certificate /etc/nginx/dba_certs/dba_ssl2/ssl-bundle.crt;
ssl_certificate_key /etc/nginx/dba_certs/dba_keys/dba.key;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
...

The OpenSSL version is 1.0.1f 6 Jan 2014.

What could be wrong?
Disable SSLv3 on Nginx
Rather than doing an HTTP proxy, I would use Nginx's built-in capacity to communicate with uWSGI. (This will still work if you are using separate Docker containers for Nginx and uWSGI, since the communication is done over TCP.) A typical configuration (mine) looks like this:

location / {
    uwsgi_pass 127.0.0.1:8001;
    include uwsgi_params;
}

You will have to remove the --http argument (or config-file equivalent) from your uWSGI invocation.

Additionally, in uwsgi_params (found in /etc/nginx or a custom location you specify) there are several directives to pass metadata through. Here's an excerpt from mine that looks like it could be related to your problem:

...
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param HTTPS $https if_not_empty;

Relevant docs: http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html#putting-behind-a-full-webserver
I've been working on a django app recently and it is finally ready to be deployed to QA and production environments. Everything worked perfectly locally, but since adding the complexity of a real-world deployment I've had a few issues.

First, my tech stack is a bit complicated. For deployments I am using AWS for everything, with my site deployed on multiple EC2's backed by a load balancer. The load balancer is secured with SSL, but the connections to the load balancer are forwarded to the EC2's over standard HTTP on port 80. After hitting an EC2 on port 80 they are forwarded to a docker container on port 8000 (if you are unfamiliar with docker just consider it to be a standard VM). Inside the container nginx listens on port 8000; it handles a redirection for the static files in django, and for web requests it forwards the request to django running on 127.0.0.1:8001. Django is being hosted by uwsgi listening on port 8001.

server {
    listen 8000;
    server_name localhost;
    location /static/ {
        alias /home/library/deploy/thelibrary/static/;
    }
    location / {
        proxy_set_header X-Forwarded-Host $host:443;
        proxy_pass http://127.0.0.1:8001/;
    }
}

I use X-Forwarded-Host because I was having issues with redirects from Google OAuth, and redirects to prompt the user to log in were making the browser request the URL 127.0.0.1:8001, which will obviously not work. Within my settings.py file I also included USE_X_FORWARDED_HOST = True to force django to use the correct host for redirects.

Right now general browsing of the site works perfectly: static files load, redirects work and the site is secured with SSL. The problem however is that CSRF verification fails. On a form submission I get the following error:

Referer checking failed - https://qa-load-balancer.com/projects/new does not match https://qa-load-balancer.com:443/.

I'm really not sure what to do about this; it's really through Stack Overflow questions that I got everything working so far.
Django CSRF Error Caused by Nginx X-Forwarded-Host
I ended up using a variable. This solved the problem:server { listen 80; ssl off; location / { proxy_http_version 1.1; proxy_set_header Host 'some-host.s3.amazonaws.com'; proxy_set_header Authorization ''; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header Set-Cookie; proxy_ignore_headers "Set-Cookie"; proxy_buffering off; proxy_intercept_errors on; resolver 172.16.0.23 valid=300s; set $indexfile "some-host.s3.amazonaws.com/front-end/index.html"; proxy_pass http://$indexfile; } }
I have an S3 bucket with the static contents of my site and I have an EC2 instance that receives all of the traffic to the site.I want to have every request to the EC2 return a specific file from my S3, but I want to keep the URL the same as the user inserted it.Example: Let's assume that my file is located in /path/index.html If the user makes a request to www.mydomain.com, I want to serve that file, and if the user makes a request to www.mydomain.com/some/random/path/ I still want to serve the same file. The last requirement is that the location will stay the same. That is, the user will still see www.mydoamin.com and www.mydoamin.com/some/random/path/ in the browser, even though the same file was served.Here's the nginx config file I have so far, which doesn't seem to work:server { listen 80; ssl off; location / { proxy_http_version 1.1; proxy_set_header Host 'some-host.s3.amazonaws.com'; proxy_set_header Authorization ''; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header Set-Cookie; proxy_ignore_headers "Set-Cookie"; proxy_buffering off; proxy_intercept_errors on; resolver 8.8.4.4 8.8.8.8 valid=300s; resolver_timeout 10s; proxy_pass http://some-host.s3.amazonaws.com/front-end/index.html; } }Any thoughts on how to make this work?Thanks!
Nginx pass all requests to specific S3 file
You want to do the rewrite only if the file doesn't exist, so use a named location as the fallback in try_files:

location /subfolder {
    try_files $uri $uri/ @rewrite;
}

location @rewrite {
    rewrite ^/subfolder/(.*) /subfolder/index.php?do=/$1;
}
I have the following NGINX rewrite rule which works great for a PHP script installed in a subfolder:

location /subfolder/ {
    if (!-e $request_filename) {
        rewrite ^/subfolder/(.*) /subfolder/index.php?do=/$1;
    }
}

But the Nginx wiki says using "if" is evil (http://wiki.nginx.org/IfIsEvil), so I tried the following:

location /subfolder/ {
    try_files $uri $uri/ /subfolder/index.php?$args;
}

But it doesn't work as a replacement for the one above, although it works for WordPress and most PHP scripts. Is there a way to translate it to use "try_files"? Thank you!
Convert Nginx rewrite from "if" to "try_files"
It is possible to run the nginx master process as a different user by just running the init script as non-root (i.e. /etc/init.d/nginx start). If this is really what you want to do, you will need to ensure the log and pid locations (usually /var/log/nginx and /var/run/nginx.pid) are writable by that user, and that all your listen calls are for ports greater than 1024 (because binding to ports <= 1024 requires root privileges).

In most situations, however, you run the nginx master process as root and specify the user directive so that the nginx worker processes run as that user.
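A minimal sketch of what the non-root setup might look like, assuming a hypothetical user nginxuser and placeholder paths under its home directory:

# /home/nginxuser/nginx.conf -- all paths are placeholders and must be writable by nginxuser
pid        /home/nginxuser/run/nginx.pid;
error_log  /home/nginxuser/log/error.log;

events { }

http {
    access_log /home/nginxuser/log/access.log;
    # depending on the build, client_body_temp_path and the other *_temp_path
    # directives may also need to point somewhere writable

    server {
        listen 8080;                   # ports <= 1024 would require root
        root   /home/nginxuser/www;
    }
}

Started as that user with something like nginx -c /home/nginxuser/nginx.conf, both the master and the worker processes run unprivileged.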
According to http://wiki.nginx.org/CoreModule#user, the master process runs as the root user. Is it possible to run the nginx master process as a different user?
How to run nginx master process with different user
Your precompiled assets should reside in public/assets; see the Rails guides. Normally you create them by running

RAILS_ENV=production bundle exec rake assets:precompile

as part of your deployment. The shared directory is there to carry assets from older releases across deploys. See also this question.
I'm building a VPS, and it's deployed via Capistrano, database connected etc., but there are no assets available to the page - it is basic HTML only. The assets appear to be compiled, and exist in the shared directory.

From the page HTML: the asset files appear to exist in the shared directory:

assay@assaypipeline:~/apps/assay/shared/assets$ ls
application-a1b5d69aeaff709fd3dce163c559b38b.css

When I view source and then click on the hyperlink to the asset path, I get a 404 Not Found from Nginx.

SOLUTION

Thanks to Martin M (accepted answer) for help. The steps I took, from the ~/apps/(app name)/current directory on the server:

$ bundle install
$ RAILS_ENV=production bundle exec rake assets:precompile
$ sudo service nginx restart

Obviously it would be better to include this in the Capistrano recipe.

EDIT - Capfile

load 'deploy'
load 'deploy/assets'
load 'config/deploy'
Rails assets missing after Capistrano deploy
Reference: http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html

Assume the subdirectory URL is http://www.example.com/admin/.

Step 1: create 2 files storing username/password (encrypted) pairs using htpasswd:

# to secure the admin site
htpasswd -bc /tmp/admin_passwd.txt admin adminpassword
# to secure the main site
htpasswd -bc /tmp/site_passwd.txt user userpassword

Step 2: set up your nginx config:

server {
    listen 80;
    server_name www.example.com;
    root /tmp/www;

    location ^~ /admin/ {
        auth_basic "secured site admin";
        auth_basic_user_file /tmp/admin_passwd.txt;
    }
    location / {
        auth_basic "secured site";
        auth_basic_user_file /tmp/site_passwd.txt;
    }
}
I'm trying to set up basic HTTP authentication with Nginx that's multi-layered. I'd like to have one username & password for the entire site except for a certain subdirectory (URL), and a separate username & password for that subdirectory. If somebody navigates to that subdirectory, I want them to be prompted for the subdirectory-specific credentials only, and not the ones for the site root.How do I set this up?
Nginx password protect root, and separate password for subdirectory
You have to create two virtual hosts using server blocks.

Let's suppose /var/www contains domain1.com and domain2.com directories with whatever HTML pages, CGI scripts, ...

server {
    listen 12.34.56.78:80;
    server_name domain1.com;
    index index.html;
    root /var/www/domain1.com;
}

server {
    listen 98.76.54.32:80;
    server_name domain2.com;
    index index.html;
    root /var/www/domain2.com;
}
I want to use two different domains with different IP addresses, for example:

domain1.com - 12.34.56.78
domain2.com - 98.76.54.32

I am using nginx on Linux. What should I add to my nginx.conf?
Different Domains on Different IP's in Nginx?
I chose HttpPushStreamModule (https://github.com/wandenberg/nginx-push-stream-module). It's better. It now supports websockets.
On the nginx website there are two modules for HTTP push. Here they are: http://pushmodule.slact.net/ and http://wiki.nginx.org/HttpPushStreamModule. Which one is better? Have you used one of them? Which one do you prefer? Thanks in advance.
Which module for Nginx is the best for HTTP PUSH? [closed]
Ok, the problem was pretty simple... I was missing a / at the end of the proxy_pass argument:

location /FOO/ {
    proxy_pass http://127.0.0.1:3000/;
}
http://mydomain.com/ => 127.0.0.1:4567

but

http://mydomain.com/FOO => 127.0.0.1:3000

Is that possible? So far I have:

upstream myserver {
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
}

location / {
    proxy_pass http://myserver;
}

location /FOO/ {
    proxy_pass http://127.0.0.1:3000;
}

But this points to http://127.0.0.1:3000/FOO/ and I want to pass only what comes after /FOO/. Thx
Nginx proxy_pass without parameters
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller

If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.

More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip

You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, loadBalancerSourceRanges should let you whitelist any client IP ranges by updating the associated NSG.
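As a rough sketch of the nginx side (not the exact config from this setup; the trusted proxy range 10.240.0.0/16 and the whitelisted range 203.0.113.0/24 are placeholders), the realip and geo modules can be combined at the http level like this:

# recover the real client address from X-Forwarded-For,
# trusting only the in-cluster / load balancer range (placeholder)
set_real_ip_from  10.240.0.0/16;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

# build a whitelist flag keyed on the recovered client address
geo $is_whitelisted {
    default         0;
    203.0.113.0/24  1;   # placeholder office range
}

server {
    listen 80;

    # reject anything not on the whitelist
    if ($is_whitelisted = 0) {
        return 403;
    }
}

The simpler allow/deny directives from the access module would work too, since $remote_addr has already been rewritten by the realip module at that point.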
I'm currently working on copying an AWS EKS cluster to Azure AKS. In our EKS setup we use an external Nginx with proxy protocol to identify the client's real IP and check whether it is whitelisted in our Nginx. In AWS, to do so, we added the aws-load-balancer-proxy-protocol annotation to the Kubernetes service to support Nginx's proxy_protocol directive. Now the day has come and we want to run our cluster on Azure AKS as well, and I'm trying to build the same mechanism.

I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here: AKS Load Balancer Standard. I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges API instead of doing it at the Nginx level. But I think the Load Balancer sends the IP to the cluster already hashed (is that the right term?) and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.

I'm stuck now trying to understand where I lack the knowledge; I tried to handle it from both ends (load balancer and Kubernetes service) and neither seems to cooperate with me. Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
Getting client original ip address with azure aks
You need to define a catch-all server. Use the default_server parameter on the listen directive. For example:

server {
    listen 80 default_server;
    listen 443 ssl default_server;
    ssl_certificate /path/to/any/cert.pem;
    ssl_certificate_key /path/to/any/key.pem;
    return 444;
}

The server needs a certificate to block https connections; any certificate will do. The client's browser will throw warnings, but they shouldn't be trying to connect to a secure server without a correct domain name anyway. The server_name directive is not required. The non-standard code 444 closes the connection without sending a response header.

See this document for details.
This is my configuration:

server {
    listen 80;
    listen [::]:80;
    server_name domain.tld www.domain.tld;
    return 301 https://erp.uni.mk$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.tld;
    ssl_certificate "/etc/nginx/ssl/ca_full.crt";
    ssl_certificate_key "/etc/nginx/ssl/private.key";
    ...
}

What I am trying to achieve is to block access via the IP and only allow it via the domain. I've seen some solutions with regex, but I am using both IPv4 and IPv6, and it should not impact performance. Any suggestions on how to solve this?
Nginx allow via Domain but not via the IP
I ran into the same error... I suspect that it's because I'm using a mix of private and public Azure DNS entries, and the record needs to get added to the public entry so Let's Encrypt can see it; however, cert-manager performs a check that the TXT record is visible before asking Let's Encrypt to perform the validation. I assume that the default DNS cert-manager looks at is the private one, and because there's no TXT record there, it gets stuck on this error.

The way around it, as described on cert-manager.io, is to override the default DNS using extraArgs (I'm doing this with terraform and helm):

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"

  set {
    name  = "installCRDs"
    value = "true"
  }

  set {
    name  = "extraArgs"
    value = "{--dns01-recursive-nameservers-only,--dns01-recursive-nameservers=8.8.8.8:53\\,1.1.1.1:53}"
  }
}
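For completeness, if you are installing the chart with plain Helm rather than terraform, the same override should look roughly like this (the release name and namespace are placeholders):

# add the chart repository once
helm repo add jetstack https://charts.jetstack.io

# install or upgrade with the DNS-01 resolver override
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true \
  --set 'extraArgs={--dns01-recursive-nameservers-only,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}'

The comma inside the braces has to be escaped with a backslash so Helm does not split the value into separate list items.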
I have created cert-manager on aks-engine using the command below:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml

my certificate spec
issuer spec

I'm using nginx as the ingress. I can see the TXT record in the Azure DNS zone, created by my azuredns service principal, but I'm not sure what the issue with the nameservers is.
cert manager is failing with Waiting for dns-01 challenge propagation: Could not determine authoritative nameservers
I have no idea why, but adding last to the vue application configuration fixed it. Here is how the config looks now:

server {
    listen 80;

    location /app {
        alias /var/www/app/dist;                 # removed the / at the end
        try_files $uri $uri/ /index.html last;   # here is the trick
    }

    location / {
        alias /var/www/landing/dist/;
        try_files $uri $uri/ /index.html;
    }
}
I need to open the landing page on /, and the vueJS application on /app. Here is my current nginx setup:

server {
    listen 80;

    location /app {
        alias /var/www/app/dist/;
        try_files $uri $uri/ /index.html;
    }

    location / {
        alias /var/www/landing/dist/;
        try_files $uri $uri/ /index.html;
    }
}

It opens the landing on /, and the vueJS app when I go to /app; however, if I open /app/login it goes to the landing page instead of the vue application. What am I doing wrong?
nginx + vueJS + static landing page
You have to use the AWS ACM API (IAM certificates and ACM certificates are different). The equivalent API is GetCertificate in ACM:

aws acm get-certificate --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012

Now, I think you are trying to get the certificate and the chain to use on your instance, but an Amazon-issued certificate cannot be used on EC2 instances, as you can't get the private key. You have to use the certificate with an ELB.

If you want to install an SSL certificate on your instance, you can get a certificate from another CA or use a Let's Encrypt certificate (which is free as well).
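If you go the Let's Encrypt route on an nginx-fronted instance, a typical invocation (assuming certbot and its nginx plugin are installed, and yourdomain.com is a placeholder that already resolves to the instance) would be along these lines:

# obtain a certificate, install it into the nginx config, and redirect HTTP to HTTPS
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com --redirect

# certificates expire after 90 days; a dry run confirms that automated renewal works
sudo certbot renew --dry-run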
I am using AWS and I have used ACM to generate a certificate. (This process is different from what I am used to, where I generate a certificate signing request and give it to a signing authority.) I requested a certificate:

Now I am trying to install it using the instructions from AWS:

aws iam get-server-certificate --server-certificate-name <>

Only, I am not sure what I am supposed to replace <> with. Notice that in the picture above, the Name column for my AWS certificate is blank. (Note: I made sure to temporarily give the IAM user that is configured with the API IAMFullAccess to do this, so there aren't permission issues.) The same happens if I try to use the identifier and the ARN.

My end goal is to have a signed SSL certificate on NGINX to serve the web content of my EC2 instance.

A: Is this the right track? (Are these the right preliminary steps?)
B: If so, what do I use to reference the certificate? Or do I use a different API?
How to reference AWS ACM Certificate when using AWS API to get certificate [duplicate]
This can now be configured within GKE, by using a custom resource, BackendConfig:

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bconfig
spec:
  timeoutSec: 60

And then configuring your Service to use this configuration with an annotation:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bconfig"}}'
spec:
  ports:
  - port: 80
  ... other fields

See Configuring a backend service through Ingress
I'm running Kubernetes on Google Compute Engine (GCE). I have an Ingress set up. Everything works perfectly except when I upload large files, the L7 HTTPS Load Balancer terminates the connection after 30 seconds. I know that I can bump this up manually in the "Backend Service", but I'm wondering if there is a way to do this from the Ingress spec. I worry that my manual tweak will get changed back to 30s later on.The nginx ingress controller has a number of annotations that can be used to configure nginx. Does the GCE L7 Load Balancer have something similar?
Kubernetes on GCE: Ingress Timeout Configuration