Response
stringlengths 8
2k
| Instruction
stringlengths 18
2k
| Prompt
stringlengths 14
160
|
---|---|---|
Turns out there was another reverse proxy in front of my server that I had no control of.
Changed my server set up to be directly internet facing and it works as expected. | I am using nginx and proxying to my app that uses socket.io on node.js for the websocket connection.I get the error above when accessing the app through the domain.I have configured nginx according tohttps://github.com/socketio/socket.io/issues/1942to ensure that websockets are properly proxied to the node.js backend.My nginx configuration is below:server {
listen 80;
server_name domain.com;
location / {
proxy_pass http://xxx.xx.xx.xx:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}In my react client, I start the websocket connection this way:import io from 'socket.io-client';
componentWillMount() {
this.socket = io();
this.socket.on('data', (data) => {
this.state.output.push(data);
this.setState(this.state);
});
}Any advice is appreciated!edit 1:After more investigation.My server set up is as follows:Domain accessible from internet: web-facing.domain.comDomain accessible from intranet: internal.domain.comWhen I access the app from the intranet, it works fine, but get the error when accessing from the internet.I suspect it's due to the creation of the socket usingthis.socket = io()which sets up a socket connection with the current domain.Since the socket in node is listening tows://internal.domain.com, when connecting via web-facing.domain.com, the wrong socket,ws://web-facing.domain.com, is created.So now the question is, how can I create a socket to the internal domain when accessing from a different domain?edit 2:I have addedapp.set('trust proxy', true)to configure Express to accept proxy connections, but it still does not work. | WebSocket connection to 'ws://.../socket.io/' failed: Error during WebSocket handshake: net::ERR_CONNECTION_RESET |
You could use a common root and try the three directories in thetry_filesstatement:location /static {
root /usr/local/www;
try_files /style1$uri /style2$uri /style3$uri =404;
} | I have looked hi and low and found no such implementation and am wondering if what I am trying is even possible. I have 3 relative paths that serve up static content:Path1: /usr/local/www/style1/static/...
Path2: /usr/local/www/style2/static/...
Path3: /usr/local/www/style3/static/...The 3 different roots are static unto themselves but the content from /static on down is only semi-static (might be a bit different depending on the exact file being served up and may exist in one path and not in another). For example/static/css/custom.css
/static/css/form/button.css
/static/css/form/images/buttondisabled.png
/static/images/ui/buttons/add_item.png
/static/images/ui/menu/help.pngThe following is what I would like to do. Which is basically, if "/static" content is requested I want to check the relative path associated with path1 and if not found there then check the relative path associated with path2 and if not found check the relative path associated with path3. This seems fairly simple but I have not found any examples that outline how this might be done. Could I set the 3 different roots up as variables perhaps:path1root /usr/local/www/style1;
path2root /usr/local/www/style2;
path3root /usr/local/www/style3;
location /static
{
try_files path1root/$uri path2root/$uri path3root/$uri (=404);
}Or might that be done as follows since it is only needed for /static content:location /static
{
path1root /usr/local/www/style1;
path2root /usr/local/www/style2;
path3root /usr/local/www/style3;
try_files path1root/$uri path2root/$uri path3root/$uri (=404);
}Or can what I am attempting to do even be done at all ?? If I cannot do it with 3 roots could it be done with just 2 roots without defining one of them as an overall arching base root. If it is possible to do this and stay away from regular expressions that would be better -- but if that is needed then that is needed. | How to use try_files with 2 or more roots |
The problem could be fixed withnginx rewrite rules. Using the bellow rule redirect all urls withindex.phpparameter to theoriginal laravel route urllocation /index.php {
rewrite ^/index.php(.*)$ http://$server_name$1;
}Also I've put the follow to therobots.txtDisallow: /*index.php* | I discovered that any laravel website is accessible with index.php as a parameter.This is a big problem, index.php in url parameter breaks all images.
Look at a real example to understand what I mean:http://www.example.com/main-thing/sightseeinghttp://www.example.com/index.php/main-thing/sightseeingGooglebot read some urls with index.php as a parameter of url. This has effect to breaks all images when someone get access to the website from google search with index.php.Also, this is a bad seo practice because produce duplicate content.What is the best way to fix that? | index.php as a parameter in the url of laravel website |
I think the problem insrcpath of your JS file in script tag.Instead of, use: | Router configuration:const routes =
[
{
path: '/',
component: Layout,
onEnter: sessionFilter,
indexRoute:
{
component: ActivityIndex,
onEnter: sessionFilter
},
childRoutes:
[
{
path: 'login',
component: LoginPage
},{
path: 'activity-new',
component: ActivityNew,
onEnter: sessionFilter
},{
path: 'activity-edit/:id',
component: ActivityEdit,
onEnter: sessionFilter
}
]
}
];
ReactDOM.render(, Node);Nginx configuration:server {
listen 5002;
location / {
root www/bc;
index index.html;
try_files $uri $uri/ /index.html;
}
}All files transpiled with babel(webpack). It works fine when I accesshttp://server:5002/somethingbut throws anUnexpected token <if I accesshttp://server:5002/something/1orhttp://server:5002/something/.When I looked at the Sources tab in the Developer Tools I noticed that the js file has been returned with the index.html as its content which is caused by the Request URL pointing tohttp://server:5002/something/app.jsinstead ofhttp://server:5002/app.js. Do I need to add something to the configuration to solve this issue? | Unexpected token < with React Router & Nginx |
For completeness sake what I ended up doing three years ago was what @user8215365 suggested.
Simply invokingsudo gitlab-ctl stop nginxdid the trick. | I'm using Gitlabs latest Omnibus-package on an EC2 Ubuntu machine.To refresh my SSL certificate (issued via Let's Encrypt) I need to stop Gitlab's Nginx so Let's Encrypt can verify that I possess the domain.
Therefore I hitsudo gitlab-ctl stop.Thesudo gitlab-ctl statusafterwards is:down: gitlab-workhorse: 325s, normally up; run: log: (pid 1109) 5361843s
down: logrotate: 324s, normally up; run: log: (pid 1104) 5361843s
down: nginx: 324s, normally up; run: log: (pid 1103) 5361843s
down: postgresql: 324s, normally up; run: log: (pid 1101) 5361843s
down: redis: 323s, normally up; run: log: (pid 1102) 5361843s
down: sidekiq: 322s, normally up; run: log: (pid 1112) 5361842s
down: unicorn: 322s, normally up; run: log: (pid 1100) 5361843sHowever when I access my domain I get Nginx'502 Bad Gateway.How can I truly stop its internal Nginx.Besides the certificate part theetc/nginx/gitlab.rbis still the default.Here's the output ofps -eaf|grep -i nginxroot 1091 985 0 2015 ? 00:07:15 runsv nginx
root 1103 1091 0 2015 ? 00:04:14 svlogd -tt /var/log/gitlab/nginx
gitlab-+ 24669 1 0 2015 ? 01:03:38 nginx: worker process
root 27272 1091 0 13:12 ? 00:00:00 /opt/gitlab/embedded/sbin/nginx -p /var/opt/gitlab/nginx
ubuntu 27275 27254 0 13:12 pts/2 00:00:00 grep --color=auto -i nginx | Can't stop Gitlab's built-in Nginx |
This is definitely possible withFluentd. Amazon published its own plugin for AWS Kinesis. Fluentd has a feature to tail the nginx logs. Please check the URLs below.http://docs.fluentd.org/articles/kinesis-streamhttp://docs.fluentd.org/articles/in_tailhttps://github.com/awslabs/aws-fluent-plugin-kinesis | I am looking for a scalable solution.
Nginx will server a pixel (1x1 gif) with a query string to an html page.
This query string will be in the nginx access logs.
I need to stream, or send this data to Amazon kinesis so that we can then process it later.
I have done some reading about Logstash, Fluentd, Ect.
Is anyone doing this?
What is the recommended why to turn the access log into events that can then be processed?Thanks
Brian | Nginx Access Log To Kinesis |
Passenger author here. Byebug integration is available in Passenger Enterprise:https://www.phusionpassenger.com/library/admin/nginx/debugging_console/ruby/ | I developed a simple rails app which works in development environment under WEBrick. However, when I move to production environment it doesn't work. I fixed trivial errors relating to assets and things. However, some things just don't work. It would be extremely helpful to be able to see what is going on interactively with debugger.I can insertbyebugin the code, and it pauses code from running, but since neitherpassengernornginxlogs to STDOUT (by default) I can't get to the byebug prompt. (and neither read STDIN)Is there a way to runbyebugunderpassenger + nginx?EditThis time my issuewas relatedto https. | Any way to run byebug under passenger + nginx? |
I can replicate the problem when the$ifstatements are directly inside theserverblock, but it works correctly when they are inside a top-level location block. This worked for me:location / {
if ($args !~ ^$){
return 404;
}
if ($request ~* (^.*\?.*$)){
return 404;
}
}Of course, if you have other location blocks, you may need toincludethe rules there too. | It's my CDN and conf file contain "few lines".The situation is really strange:In configuration i haveerror_page 403 = /e403;
error_page 404 = /e404;andlocation =/e403 {
default_type text/html;
return 403 "somehtml403";
}
location =/e404 {
default_type text/html;
return 404 "somehtml404";
}Same time i have args fitler (args are forbidden, it's a CDN):if ($args !~ ^$){
return 404;
}
if ($request ~* (^.*\?.*$)){
return 404;
}When i request/cookies.txti have my custom 404 page. When i request/cookies.txt?onemoreor/cookies.txt?i have nginx's 404 page.The question is: why? | Nginx not returning custom error pages from if expression |
Got it.In my 'production.rb' I had a force_ssl setting and I didn't set up SSL yet since I was just starting out. | So I'm migrating from Heroku to AWS Elastic Beanstalk and testing out the waters. I'm following this documentation:AWS Docs :: Deploy Rails app to AWSHowever after following the documentation I keep receiving a Bad Gateway 502 (error).Here's the specs of my app:Rails 4.1.8Ruby 2.1.7Server PumaSo I checked my/log/nginx/error.logand here is what I see:2015/11/24 06:44:12 [crit] 2689#0: *4719 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.13.129, server: _, request: "G ET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "my-app-env-mympay5afd.elasticbeanstalk.com"From thisAWS Forum threadit appears as though Puma is not starting correctly.So the three log files that I have taken a look at are:/var/log/eb-activity.log/var/log/eb-commandprocessor.log/var/log/eb-version-deployment.logand none of them seem to indicate any errors except for the "secret_key_base" error which I fixed (I used theeb setenv SECRET_KEY_BASE=[some_special_key]command).One thing that could hint at the source of the issue is/var/log/nginx/rotated/error.log1448330461.gzhas the following content2015/11/24 01:06:55 [warn] 2680#0: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:39
2015/11/24 01:06:55 [warn] 2680#0: conflicting server name "localhost" on 0.0.0.0:80, ignoredBut they seem to be warnings rather than severe show stoppers.Are there any other files that I should be taking a look at?As another point of reference, I've looked at thisSO Postwhich would seem to imply that I need to enable SSL in order for all of this to work.Thanks in advance! | Rails app migrating to AWS Elastic Beanstalk :: Bad Gateway (502) |
You should specify the protocol scheme (http,https, ...) in the URL:$http.post('http://localhost:7001/testApp/user/login', { username: username, password: password })
.success(function (response) {
callback(response);
}); | I have an AngularJS app hosted in nginx server at localhost and Rest service hosted in Weblogic at localhost:7001.I'm trying to do this post inside of Angular part:$http.post('localhost:7001/testApp/user/login', { username: username, password: password })
.success(function (response) {
callback(response);
});And this is the error I'm getting:XMLHttpRequest cannot load localhost:7001/testApp/user/login. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource. | AngularJS $http.post() to another port at the same machine |
Unless you have either solution: proxy_read_timeout 1d or a ping message to keep connection alive, Nginx closes connections in 60sec otherwise. This default value was chosen by a reason.See what Nginx core developersays:There is proxy_read_timeout (http://nginx.org/r/proxy_read_timeout)
which as well applies to WebSocket connections. You have to bump it
if your backend do not send anything for a long time. Alternatively,
you may configure your backend to send websocket ping frames
periodically to reset the timeout (and check if the connection is
still alive).Having said that nothing should stop you from using USR2+QUIT signals combination that usually used when yougracefullyrestart Nginx while binary upgrade. Nginx master/worker processes rare consume more than 50MB of memory, so to keep multiple masters isn't that expensive. USR2 helps to fork new master and spawn its workers followed by gracefully shutdown old workers and master. | I plan to use nginx for proxying websockets. When performing nginx reload / HUP , I understand that nginx waits for the old worker processes to stop processing all requests. In websocket connection however, this may not happen for long time as the connection is persistent. Is there an option / roadmap to forceibly kill old worker process after timeout on reload?References:http://nginx.org/en/docs/control.htmlhttp://forum.nginx.org/read.php?21,247573,247651#msg-247651Thanks | nginx - ungraceful worker termination after timeout |
I managed to do it fromhere.It basically needed two rewrites.rewrite ^ $request_uri;
rewrite /.*/(.*)/(.*) /$1/$2;The first rewrite modifies uri from/index.php -> /a/b/c.The second rewrite modifies uri from/a/b/c -> /b/c. | Suppose the value of $request_uri is /a/b/c .
The current value of $uri is /index.php .
Is this possible to change my $uri to /b/c .I have tried this, which doesn't seem to be working,if ($request_uri ~* /a/(.*)/(.*)){
set $uri /$1/$2;
}But this gives error ofduplicate "uri" variable.
I also tried,if ($request_uri ~* /a/(.*)/(.*)){
rewrite ^ /$1/$2 break;
}But $variables don't seem to store values.Is there a way out? Thanks. | How to modify $uri to part of uri from request_uri in nginx? |
I suspect the mobile network is forcing use of an HTTP proxy that tries to buffer files before forwarding them to the browser. Buffering will make SSE messages wait in the buffer.With SSE there are a few tricks to work around such proxies:Close the connection on the server after sending a message. Proxies will observe end of the "file" and forward all messages they've buffered.This will be equivalent to long polling, so it's not optimal. To avoid reducing performance for all clients you could do it only if you detect it's necessary, e.g. when a client connects always send a welcome message. The client should expect that message and if the message doesn't arrive soon enough report the problem via an AJAX request to the server.Send between 4 and 16KB of data in SSE comments before or after a message. Some proxies have limited-size buffers, and this will overflow the buffer forcing messages out.Use HTTPS. This bypasses all 3rd party proxies. It's the best solution if you can use HTTPS. | I am having an issue with Server Sent events.
My endpoint is not available on mobile 3G network.One observation I have is that a https endpoint like the one below which is available on my mobile network.https://s-dal5-nss-32.firebaseio.com/s1.json?ns=iot-switch&sse=trueBut the same endpoint when proxy passed using an nginx and accessed over http (without ssl) is not available on my mobile network.http://aws.arpit.me/live/s1.json?ns=iot-switch&sse=trueThis is available on my home/office broadband network though. Only creates an issue over my mobile 3g network.
Any ideas what might be going on?I read that mobile networks use broken transparent proxies that might be causing this. But this is over HTTP.Any help would be appreciated. | Not able to access Server-Sent-Events over Mobile 3g Network |
I found the solution here:https://github.com/jfryman/puppet-nginx/blob/master/spec/defines/resource_location_spec.rbrewrite: '^(.*) /index.php?_controller=$1 last'Should look like:rewrite_rules: ['^(.*) /index.php?_controller=$1 last'] | Example from "config.yaml" file:locations:
...
some_location_root:
location: /
try_files:
- $uri
- '@rwr'
some_location_rewrite:
location: '@rwr'
rewrite: '^(.*) /index.php?_controller=$1 last'
... | How do I set up Nginx rewrite rules in a config file generated from PuPHPet? |
I solved the problem.It was because of my unicorn server started in development environment, not in production. Unicorn was trying to connect to development database, but credentials for the dev db in database.yml were missing.
After I started unicorn in production env everything connected well. | Im trying to setup Nginx, Unicorn and Rails application to work together.
Nginx and Nnicorn are running, I checked that using ps command.But when trying to access my page Ive got 502 Bad GatewayNginx error log has line:2015/03/18 19:53:26 [error] 14319#0: *1 connect() to
unix:/var/sockets/unicorn.mypage.sock failed (11: Resource temporarily
unavailable) while connecting to upstreamWhat can be the problem?my /etc/nginx/conf.d/default.confupstream app {
server unix:/var/sockets/unicorn.mypage.sock fail_timeout=0;
}
server {
listen 80;
server_name mypage.com;
# Application root, as defined previously
root /home/rails/mypage/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}/home/rails/mypage/config/unicorn.rbworking_directory "/home/rails/mypage"
pid "/home/rails/mypage/pids/unicorn.pid"
stderr_path "/home/rails/mypage/log/unicorn.log"
stdout_path "/home/rails/mypage/log/unicorn.log"
listen "/var/sockets/unicorn.mypage.sock", backlog: 1024
worker_processes 2
timeout 30 | Nginx, Unicorn and Rails = 502 Bad Gateway |
S3 HTTPS access from a different region is extremely slow especially TLS handshake. To solve the problem we invented Nginx S3 proxy which can be find over the web. S3 is the best as origin source but not as a transport endpoint.By the way try to avoid your "folder" as a subdomain but specify only S3 regional(!) endpoint URL instead with the long version of endpointURL, never usehttps://s3.amazonaws.comOne the good example that reduces number of DNS calls is the following below:https://s3-eu-west-1.amazonaws.com/folder/file.jpg | I have an application which is a static website builder.Users can create their websites and publish them to their custom domains.I am using Amazon S3 to host these sites and a proxy servernginxto route the requests to the S3 bucket hosting sites.
I am facing a load time issue.As S3 specifically is not associated with any region and the content being entirely HTML there shouldn't ideally be any delay.I have a few css and js files which are not too heavy.What can be the optimization techniques for better performance? eg: Will setting headers ? or Leverage caching help? I have added an image of pingdom analysis for reference.Also i cannot use cloudfront as when the user updates an image the edge locations have a delay of few minutes before the new image is reflected.It is not instant update,hence restricting the use for me. Any suggestions on improving it? | Websites hosted on Amazon S3 loading very slowly |
Yes, you can usehttp://wiki.nginx.org/HttpLuaModulelocation /file {
content_by_lua 'os.execute("php cli.php ',ngx.var.remote_addr,'")';
}Not sure about the syntax passing IP but smth like this should work. You can also parse log file | I'm setting up download logging on a server that hosts podcast recordings. We just want to easily log into MySQL the files downloaded with the timestamp and requesting IP address.As these files average at least 150MB, I figured using readfile() would be a bad idea (don't want PHP running the entire time the file is downloading), and instead would have to have the files stored in a different location that PHP redirects them to after logging.The problem of course is that once they're redirected, they could potentially copy that redirected link and use that, inadvertently bypassing the download logging. I'd like to avoid that.I'm thinking my best bet would be to have nginx configured to call a secondary script before serving the file, passing the request data to it for processing. Is there a way to do this? | Have nginx call an external script (passing request info) while serving static file? |
You have a couple of options, at least:Option 1 is not to send both types of traffic to the same port on the instances. Instead, configure the application to listen on an additional port, such as 81 or 8080, and send the SSL-originating traffic there. Then use the port where the traffic arrives at the instance, to differentiate between the two types of traffic.Option 2 is to enable thePROXYprotocol on the ELB, after modifying the application to understand it. This has the advantage of also giving you the client IP address.http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.htmlhttp://www.haproxy.org/download/1.5/doc/proxy-protocol.txt | While using HTTP/HTTPS as load balancer protocol we get the requested origin protocol (i.e. it is HTTP or HTTPS) fromx-forwarded-protocolheader.
Now, using this header in nginx configuration it can be determined that whether the originating call was from HTTP or HTTPS and action could be performed accordingly.But if the ELB listeners configuration is as shown in the below image, then how to determine that the request has come via port 80 or port 443? | How to get the port number of client request with AWS ELB using TCP and Nginx on server |
You need to create differentserverformydomain.com:3001andmydomain.comupstream app_tools {
server 127.0.0.1:3000;
keepalive 64;
}
server {
root /var/www/example;
listen 0.0.0.0:80;
server_name mydomain.com;
access_log /var/log/nginx/app_example.log;
location / {
proxy_pass http://app_tools;
}
}
server {
root /var/www/example;
listen 0.0.0.0:3001;
server_name mydomain.com;
access_log /var/log/nginx/app_example_secure.log;
location / {
auth_basic "secured site tools";
auth_basic_user_file /var/www/example/.htpasswd;
proxy_pass http://app_tools;
}
}But remember what security through obscurity - bad idea. | I've got a website with a reverse front-end proxy in which my app listens on port 3000. I've got another app that sits on 3001 which is part of the same directory that serves the contents for the site on port 3000.What I want to do.Anyone going tomydomain.com:3001would be prompted forauth_basiccredentials. Anyone going tomydomain.comwould get through normally.My current nginx config:upstream app_example {
server 127.0.0.1:3000;
keepalive 64;
}
server {
root /var/www/example;
listen 0.0.0.0:80;
server_name example.com example;
access_log /var/log/nginx/app_example.log;
}Would it be something like this?upstream app_tools {
server 127.0.0.1:3001;
}
server {
listen 80;
server_name example.com example;
location / {
auth_basic "secured site tools";
auth_basic_user_file /var/www/example/.htpasswd;
proxy_pass http://app_tools;
}
} | Http basic auth with nginx on different port |
It is easily accomplished using Nginx:location /api/ {
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://localhost:9000;
} | How do I map www.somesite.com/api(.*) to www.somesite.com/$1:9000?
(I need to map /api to Play framework application running @ port 9000)I did the following:$HTTP["url"] =~ "^/api" {
proxy.server = ( "" =>
( ( "host" => "127.0.0.1", "port" => 9000 ) ) )
}This gets me to somesite.com/api:9000 when I go to somesite.com/api, and I get "Action not found: For request 'GET /api'" | How to map URL to port and modified URL? |
your question is not very structured, i will try giving some hints never the less.if you want to have a look at a working example for locales and domains have a look atthe onruby project. it's a multitenant application where each tenant can configure his own domains and default locales.this is the way that it is implemented:the site uses a cookie to track the currently selected locale.if no cookie is set, it uses the domain name to resolve it. to test that in development, you will have to use/etc/hostsand add the domains in there, so that they can get resolved locally. in your case this would be gettheskill.com:3000 to access your app.i usually use a slightly different domain name, either appending a subdomain or a different tld, so that i can still access the domains over the internet without editing/etc/hosts.the language switches work simply by appending the parameterlocale=XYto a link. soparams[:locale]always takes precedence and switches the selected locale and sets a new value to the cookie.i hope this is enough information to get it working for you.and NO, there is nothing that has to be configured outside of rails or the/etc/hostsfile. | I'm trying to implement internationalization on my rails app. Here's part of my application controllerbefore_action :set_locale
def set_locale
I18n.locale = extract_locale_from_tld || I18n.default_locale
end
def extract_locale_from_tld
parsed_locale = request.host.split('.').last
I18n.available_locales.include?(parsed_locale.to_sym) ? parsed_locale : nil
endIt seems not to work and I can only set the locale from url params making me usescope "(:locale)", locale: /en|se/ doin my routes which I don't want to.From the rails guide, they point out that the switching menu should be implemented like this.link_to("Deutsch", "#{APP_CONFIG[:deutsch_website_url]}#{request.env['REQUEST_URI']}")how do you implement this?
My current switching menu looks like this.<% if I18n.locale == I18n.default_locale %>
<%= link_to image_tag("eng.png", :alt => "England", :class =>"round"), :locale=>'en' %>
<%= link_to_unless I18n.locale == :se, "Swedish", "#{'http://www.natkurser.se'}" %>
<% else %>
<%= link_to image_tag("swe.png", :alt => "Sweden", :class =>"round"), :locale=>'se' %>
<%= link_to_unless I18n.locale == :en, "English", "#{'http://gettheskill.com'}" %>
<%end%>I've added 127.0.0.1 gettheskill.com and 127.0.0.1 natkuser.se to /etc/hosts but it still doesn't work on developement. What file(s) do I modify so that it works on production? I'm thinking nginx configuration files. And ultimately, how is the routes supposed to appear. This is the main thing that seems to have been left out in rails internationalization documentation. A detailed answer would be appreciated. | Rails Internationalization: Setting the Locale from the Domain Name |
Here is the working configuration:location ~ ^/lib.*\.(gif|png|ico|jpg)$ {
expires 30d;
} | I'm trying install DocuWiki script on nginx web server. Documentation says that I need to put following directive at nginx config file:location ^~ /lib/ {
expires 30d;
}When I try to add this, nginx stops sending .php files from lib directory to php-fpm, and send it to me like octet-streams for download. How can I correct this? | Setting expires for dokuwiki lib directory on nginx stops processing .php files |
You might try moving away from your proxy caching for some pages, or even all.There's no reason not to use a CDN for static assets and media library assets, so stick with thatLeverage Sitecore's built-in html cache for sublayouts/renderings - there are quite a few options for cachingUse Sitecore's Debug feature to track down the slowest components on your siteConsider using indexes instead of doing "fast" or Sitecore queriesDon't do descendants query "//*" (I often see this when calculating selected state for navigation - hint: go the other way, calculate the ancestors of the current page)@jammykam wrote anexcellent answer on this over here.John West wrotea great blog post on this also, though a bit older.Good luck! | We're planning to introduce DMS to our customer's Sitecore installation. It's a rather popular site in our country and we have to use proxy caching server (it's Nginx in this case) to make it high-traffic-proof.However, as far as we know, it's not possible to use all the DMS features with caching proxy enabled - for example personalization of content - if it gets cached it won't be personalized.Is there a way to make use of all the DMS features with proxy cache turned on? If not, how do you handle this problem for high-traffic sites - is it buying more Content Delivery servers to carry the load, or extending current server with better hardware (RAM, CPU, bandwidth)? | Sitecore with DMS vs caching server - how do you handle it? |
Nginx only informs FastCGI of the request method viaREQUEST_METHODparam. So you may simply override the value and report anything you want to PHP. You'll have to declare another variable in your Nginx configuration, let's name it$fcgi_method, based on the original request method:map $request_method $fcgi_method {
default $request_method;
PUT POST;
}(note thatmapsections should be athttplevel, i.e. the same configuration level asserverblocks)Then you may use it in your location like so:fastcgi_param REQUEST_METHOD $fcgi_method;It's important that this snippet is after typicalinclude fastcgi_paramsor alike. | I'm using nginx for a PHP REST API that I'm working on. To be fully REST-ful, I'm usingPUT/DELETErequests where appropriate. However, PHP doesn't parse the post body onPUTrequests - which I need for this particular scenario.I had considered parsing it myself, but a) I'd rather let PHP do it in C as it's considerably faster than any implementation I could come up with in PHP and b) there are a lot of edge cases that people have already spent a lot of time working around - I'd rather not duplicate those efforts.On the API side, I have already added support to read theX-HTTP-Method-Overrideheader and use that when available over the actual verb.All I'm looking for now is a way, in nginx, to take aPUTrequest, and change it to aPOSTrequest with that header set.I feel ike I have looked all over the place but can't find a solution. Anything would be helpful (even if you recommend a different parsing technique so I don't have to deal with this). | Nginx Change Request from PUT to POST |
This is the job of a load balancer instead of django or nginx.You can setup two zones with nginx and limit the bandwidth on one and not on the other. You can direct logged in users to the unlimited zone on nginx and anon users to the limited zone.This module lets you limit on a per-ip basis:https://github.com/trbs/Nginx-limit-traffic-rate-module | I need to throttle upload/download speed based on whether a user is logged in or not for my application. I'm using nginx and Django. Is there a way for me to do this? | Throttling upload and download speed optionally either in Django or in nginx |
Theserver_namedirective is useful in cases where you want to host different sites on the same server, and handling them differently depending on the "Host" header field (ex: mysite1.com => a PHP website, mysite2.com => a django website,...)
It's actually a virtual server (see also [the server directive])1.Fromthis article:[...] nginx tests only the request’s header field
“Host” [against the server_name directive] to determine which server the request should be routed to. If
its value does not match any server name, or the request does not
contain this header field at all, then nginx will route the request to
the default server for this port.If I understood, you don't want that. So you can use theunderscore character(in theMiscellaneous Namessection).When I don't need to handle specific domains, I generally use "localhost". To be honest I couldn't find any explanation on what it does. I just found examples with this value, and it seems to work exactly as the underscore character.So I'd go withserver_name _;orserver_name localhost; | I saw example configurations ofnginx, most of them use example.com as aserver_nameanduwsgi_passsimilar tounix:/var/www/run/blog.sock;or in combination with ip/port address. but what should I use in case of amazon ec2 instance, since it has long public name, ip is private and if I restart my instance it gets different public name and ip. I need shutdown instances sometime. I want to configure it for using uwsgi+django, but I am totally beginner in web area and servers. | How to configure nginx for Amazon ec2 |
Turns out Rainbows had a configuration option calledclient_max_body_sizethat defaulted to 1 MB
The option isdocumented hereIf this options is on, Rainbows will413to large requests silently. You might not know it's breaking unless you run something in front of it.Rainbows! do
# let nginx handle max body size
client_max_body_size nil
end | I am running ThreadPool rainbows + nginx (unix socket)On large file uploads I am getting the following in nginx error log (nothing in the application log):readv() failed (104: Connection reset by peer) while reading upstreamThe browser receives response:413 Request Entity Too LargeWhy does this happen?"client_max_body_size 80M;" is set both on http and server level (just in case) in nginxnginx communicates with rainbows over a unix socket (upstream socket + location @ proxy_pass)I don't see anything in the other logs. I have checked:rainbows logforeman logapplication logdmesg and /var/log/messagesThis happens when uploading a file ~> 1 MB size | 104: Connection reset by peer: nginx + rainbows + over 1 mb uploads |
I ended up solving the problem with a workaround in my C::A app. And I am documenting it here.So I didn't managed to have nginx pass along thePATH_INFOdown to my C::A app. To work around this, I set thePATH_INFOwith the value ofREQUEST_URIin my C::A app so it picks up the correct runmode.Also, nginx isn't passingQUERY_STRINGeither so I had to append$query_stringto the catch all route in order to pass alongQUERY_STRINGdown as well.my nginx config ends up like this:server {
listen 80;
server_name example.com;
root /var/www/example.com/htdocs;
index index.pl index.html;
location / {
try_files $uri $uri/ /index.pl?$query_string;
}
location ~ .*\.pl$ {
include fastcgi_params; # this is the stock fastcgi_params file supplied in debian 6.0
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PERL5LIB "/var/www/example.com/lib";
fastcgi_param CGIAPP_CONFIG_FILE "/var/www/example.com/conf/my.conf";
fastcgi_pass unix:/var/run/fcgiwrap.socket;
}
} | I am trying to get a C::A app work in nginx fastcgi environment (debian 6.0) and using spawn-fcgi.C::A route is configured using$self->mode_param( path_info=> 1, param => 'rm' );the problem is that whatever C::A app urls (example.com/cities,example.com/profile/99etc ) I am requesting, it always displays the homepage which is what theexample.com/index.pldoes.my nginx setup isserver {
listen 80;
server_name example.com;
root /var/www/example.com/htdocs;
index index.pl index.html;
location / {
try_files $uri $uri/ /index.pl;
}
location ~ .*\.pl$ {
include fastcgi_params; # this is the stock fastcgi_params file supplied in debian 6.0
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PERL5LIB "/var/www/example.com/lib";
fastcgi_param CGIAPP_CONFIG_FILE "/var/www/example.com/conf/my.conf";
fastcgi_pass unix:/var/run/fcgiwrap.socket;
}
}I have successfully setup few php apps in similar fashion.in this case, however, I suspect that I am not passing essentialfastcgi_paramdown to C::A which is required by it.what's your thoughts? | nginx fastcgi configuration for CGI::Application app |
Putting nginx in the front of your stack not only allows you to route static content requests to your s3 storage but also give you the ability to do things like caching your django requests and lower the hits in your app and database. You can set up fine grain cache policies and have more control of exactly where requests will go, all while still under the same url structure as your set up in django. | I'm building app in django which I want to deploy on aws ec2 server. The app will run on gunicorn, and I want to place static files on s3. So my question is - do I need to use nginx at all?Is there any other benefit of using nginx beside serving static files?Arek | aws, django, unicorn and s3 - do I need nginx then? |
This issue is documented inAppendix C.3of the Phusion Passenger manual. The usual method is to hook into the post-fork callback and re-establish the connection there. | I have test app written on ruby, using Sinatra+Sequel.config.ru:require './main'
run Sinatra::ApplicationExample code:require 'sinatra'
require 'haml'
require 'sequel'
DB=Sequel.connect('oracle://test:test@test')
class Tarification < Sequel::Model(DB[:test_table])
end
get '/' do
haml :index
endEverything was all right until I started using Phusion Passenger in my test environment. Now I've got exception in nginx error.log:Sequel::DatabaseError - RuntimeError: The connection cannot be reused
in the forked process.Is the right thing to place DB connection routine to rackup file config.ru or it's better to do it in a different way? If the first variant than how to make call to the connection correct from application code?P.S.: I know that I can dopassenger_spawn_method conservativeand continue opening connection in app code, but it's not the way I'm looking for because of it's speed and resource usage issues. | Right place for Sequel DB connection while working on Phusion Passenger with nginx |
You're right, as far as i can tell the two modules (gzip and gzip_static) don't really interact. Anything compressed on the fly by gzip will possibly be cached for a short period of time, but will not be saved for gzip_static. A bash script to automatically update the .gz files is a good idea, and if you're using source control, could be done as a post-command in Git or Hg.It's worth noting that for small files the overhead is arguably in the disk access rather than the compression.. but every little bit helps. | I'm just curious...nginx will detect the gz files in the same dir,if it does not exists,it will use on-the-fly gzip and return a response(if gzip on)so...when we turn gzip_static on,why nginx not to create a gz file with the output gzipped response?it's about trunked encoding or something else?So do I really need to write a bash script to create/update the gz files everytime I modify the static files,right?Thanks ^_^ | nginx gzip_static won't create the gz file that doesn't exist automatically? |
As pointed out in the comments on thesession_name doc page, there are some issues w/ session_name that can cause your site to break:Make sure that you callsession_namebefore anything else, includingsession_startMake sure there are no periods in your session name (i.e.: don't use "mysite.com" as your session name) | After reading through the PHP manual, I addedsession_name('mysite');to my code to ensure that collisions between sessions won't happen if I will run multiple apps in the future.Unfortunately thesession_name()-function call it kills my site completely . A fatal error is thrown (which appears in the error logs, but doesn't say anything!) and the below error is shown in the browser:I have PHP-FPM running along with suhosin+nginx at a freshly installed Ubuntu VM.What am I doing wrong? | session_name breaks my site? |
There is an option for HTTPSproxy '/path', :to => 'server', :https => true | I don't seem to be able to proxy an https (SSL) request with sc-server. The server keeps saying REDIRECTING TO, but I get no results whatsoever, because the request gets canceledI hope there is some configuration possible. | Proxying HTTPS requests with sc-server |
You can host multiple http servers on single VPS. Conflict will happen only if both, nginx and node.js, are bound to the same port. For example if your nginx web server is listening on port 80, then your node.js http server should listen on other than 80, lets say port 8080. You can also set upreverse proxy(in case you need to abstract your internal network and serve clients on the same port) where you will accept incoming connections on port 80 and nginx will forward communication specific for node.js to port 8080. | I only have one VPS hosting and using nginx for Django web application. Now, I prepare to start new app with Node.js and can I host on current Server ? I think, Node.js is running the own http server and it can conflict with nginx server. | Can I host node.js and Django in one server? |
You could useSetEnvIfin htaccess to set a variable, and then access it from within PHP. That will tell you if htaccess is being used (and the SetEnvIf module is running). But it won't tell you much more - it won't tell you if mod_rewrite is available, for example. To tell what modules are running, you could check the output of phpinfo programatically. | For security reasons, I need to check if a directory is not readable by a client,i.e.that the Apache server supports the.htaccess(noAllowOverride Nonein the apache configuration files). This is very important as I have recently discovered that a LOT of products and framework are not checking this (including Zend and Symphony). Is there a way to check this using only PHP ?BTW, correct me if I am wrong but it seems that other servers (nginx or lighttpd) do not support.htaccess. In these cases, how can I check that my directory is not readable by a client ? | How to detect support for .htaccess in PHP on Apache server (and other servers) |
You probably want to use something like Zenoss.There is some specific nginx integration graphs here:http://community.zenoss.org/docs/DOC-7441 | Are there any tools that I can run on my server to monitor multiple Pylons applications?I need to monitor the number of requests each application receives, how much memory each application is using, how much of the cpu is being used and other stats similar to those. I need to see the stats for each individual Pylons application.All information needs to be stored in a database for me to retrieve later (preferably SQLite, PostgreSQL, or MySQL).Thanks*UPDATE*It is a Unix server and it is running Ubuntu. It's using Nginx.Each application must store its data in its own database for just the application. | Monitor Multiple Pylons Application |
Assuming your 403 error returns with theContent-Typeheader being set totext/xml, you can transform this XML response to the HTML with the nginx usingXSL Transformations. To do it you'll need theXSLT module, and you should be aware this module is not built by default, it should be installed additionally as a dynamic module (or enabled with the--with-http_xslt_moduleconfiguration parameter when you build nginx from the sources).After you install the module, you should specify thexslt_stylesheetdirective under the location used to proxy requests to the MinIO backend:location ... {
xslt_stylesheet /path/to/error.xslt;
...
}Here is an example of the XSLT file that can be used to transform the XML response you've showed in your question:
<!DOCTYPE html>
Additional information:
:
The above file, being applied to the response sample, will give you the following:You can style the output whatever you like. I think this question is not about web design (and I'm not a designer), however provided information should be enough to be an example that you can adapt to your needs.UpdateIf your MinIO response comes with somethat different MIME content type, e.g.application/xml, you'd need to add that content type to the list of MIME types processed by the XSLT module with thexslt_typesdirective:location ... {
xslt_types application/xml;
xslt_stylesheet /path/to/error.xslt;
...
}Digging futher into the XSLT I finished up with somewhat different XSLT file. This one will transform only error messages containingErrortop level node, leaving any other response unchanged:
Additional information:
:
| When the expiration date of MinIO links passes, It responds to an XML like this:
AccessDenied
Request has expired
key-of-the-resource
bucket-name
/path-to/teh-resource
16FC78B1C6185XC7
5d405266-91b9-XXXX-ae27-c48694f203d5
Is there any way to customize this page by some sort of configuration inside the MinIO? I didn't find any related config on their documents.Other potential solutions:Use redirect links on my backend, and check if this link was expired, then redirect it to another pageMaybe we can use Nginx, but I don't know what the directives are. I appreciate your help with that.Updatecomplete response headers:$ curl -I
HTTP/2 403
date: Tue, 05 Jul 2022 12:51:13 GMT
content-length: 0
accept-ranges: bytes
content-security-policy: block-all-mixed-content
strict-transport-security: max-age=15724800; includeSubDomains
vary: Origin
vary: Accept-Encoding
x-amz-request-id: 16FEEFE391X98X88
x-content-type-options: nosniff
x-xss-protection: 1; mode=blockcomplete response:$ curl
AccessDeniedRequest has expirednew_structure/7553257.jpgstorage/decodl-storage/new_structure/7553257.jpg16FEEFFB573XXXXC5d405266-91b9-xxxx-ae27-c48694f203d5 | How to customize AccessDenied page of MinIO? |
Try the following conf:server {
listen 80 default_server;
server_name MY_DOMAIN_NAME;
location / {
proxy_pass http://127.0.0.1:8501/;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
}and then, use this command line:streamlit run my_app.py --server.port 8501 --server.baseUrlPath / --server.enableCORS false --server.enableXsrfProtection false | ProblemThe app started by runningstreamlit run main.pywill displayhttp://IP_ADDRESS:8501is displayed correctly, buthttp://DOMAIN_NAMEstops with "Please wait... " and stops.EnvironmentDomain name already resolved with Route53Deploy Streamlit App on EC2 (Amazon Linux) and runStreamlit run main.pyon TmuxUse Nginx to convert access to port80 to port8501Changed Nginx settings/etc/nginx/nginx.confserver {
listen 80; #default
listen [::]:80; #default
server_name MY_DOMAIN_NAME;
location / {
proxy_pass http://MY_IP_ADDRESS:8501;
}
root /usr/share/nginx/html; #defaultWhat I triedI tried the following, but it did not solve the problem.https://docs.streamlit.io/knowledge-base/deploy/remote-start#symptom-2-the-app-says-please-wait-foreverstreamlit run my_app.py --server.enableCORS=falsestreamlit run my_app.py --server.enableWebsocketCompression=false | The Streamlit app stops with "Please wait... " and then stops |
If your Cloud Run service are internally accessible (ingress control set to internal only), you need to perform your request from your VPC.Therefore, as you perfectly did, you plugged a serverless VPC connector on your NGINX service.The set up is correct. Now, why it works when you route ALL the egress traffic and not only the private traffic to your VPC connector?In fact, Cloud Run is a public resource, with a public URL, and even if you set the ingress to internal. This param say "the traffic must come to the VPC" and not say "I'm plugged to the VPC with a private IP".So, to go to your VPC and access a public ressource (Your cloud run services), you need to route ALL the traffic to your VPC, even the public one. | I'm currently working on a project where we are using Google Cloud. Within the Cloud we are usingCloudRunto provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results and also to evaluate the quality of changes to the service, I would like to proceed as follows:in addition to the existing service I deploy another instance of the service which contains the changesI mirror all incoming requests and let both services process them, only the responses from the initial service are returned, but the responses from both services are storedThis allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.For the implementation I have setup a NGINX which mirrors the requests. This is also deployed as a CloudRun service. This now accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured in such a way that they can only be accessed internally and should therefore be accessed via a VPC network.I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.I have tried setting the NGINX service to the HTTP and HTTPS routes from the service, and I have tried all the VPC Connector settings. When I set the ingress from the service toALLit works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress toInternalI get errors with HTTPS -> 403 and with HTTP -> 502.Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help. | Mirror requests from cloudrun service to other cloudrun service |
Configure env variablePROXY_ADDRESS_FORWARDING=true, because Keycloak is running behind Nginx reverse proxy -https://hub.docker.com/r/jboss/keycloak/Enabling proxy address forwardingWhen running Keycloak behind a proxy, you will need to enable proxy address forwarding.docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak | I have successfully created a client inside Keycloak using Dynamic Client RegistrationThe response body contains:"registration_client_uri":"https://127.0.0.1:8443/auth/realms...",This is because Keycloak is installed with Docker, and is fronted by NginX. I want to replace the IP address/port with the actual public hostname.Where are the docs / configurations for this?I started keycloak as follows:docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=previewAnd inside keycloak.env, I have setKEYCLOAK_HOSTNAME=example.com | keycloak dynamic client registration registration_client_uri docker hostname override |
By default, the Janus REST api is at the /janus endpoint. To allow Nginx to proxy for the web socket and REST interfaces, create a location entry for /janus that passes to http://yourip:8088/janus and a second one for / that passes to http://yourip:8188.server {
server_name janusserver5.example.com;
location /janus {
proxy_pass http://10.10.30.20:8088/janus;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
proxy_pass http://10.10.30.20:8188;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_set_header Connection "upgrade";
proxy_read_timeout 90;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/janusserver5.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/janusserver5.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = janusserver5.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name janusserver5.example.com;
listen 80;
return 404; # managed by Certbot
}With this configuration I can now connect tohttps://janusserver5.example.com/janus/info, and wss://janusserver5.example.com with protocol "janus-protocol" | I have a Janus Gateway which exposes a REST api on port 8088. The web socket transport is also enabled on my janus server on port 8188. I have an Nginx reverse proxy set up for https traffic to reach my Janus server. How do I add wss support to my Nginx reverse proxy? Here is my config file "janusserver5.example.com" in nginx/sites-available:server {
server_name janusserver5.example.com;
location / {
proxy_pass http://10.10.30.27:8088;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/janusserver5.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/janusserver5.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = janusserver5.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name video518.doctogether.com;
listen 80;
return 404; # managed by Certbot
} | How can I set up a reverse proxy for the Janus REST api and socket api in Nginx? |
After some research, I was able to track this down withtcpdumpon the proxy node like below :After running wrk on the proxy, I ran tcpdump like below :tcpdump -i ens192 port 80 -nnAnd the result - though quite big - had some interesting insights :10:53:33.317363 IP x.x.x.x.80 > y.y.y.y.28375: Flags [P.], seq 389527:389857, ack 37920, win 509, options [nop,nop,TS val 825684835 ecr 679917942], length 330: HTTP: HTTP/1.1 502 Bad GatewayThe reason I could not see the error in nginx logs is that in reverse proxy mode logging, ngnix will log the results only in debug mode, which, itself, makes the processing so slow that the above error could not surface. Using tcpdump, I could find out what can be the issue inside the packets. | We are doing some test with NGINX as reverse proxy in front of two NGINX sample web servers. The tool being used in our tests iswrk. The web servers' configuration are very simple. Each of them has a static page (similar to default welcome page) and the NGINX proxy is directing traffic in a round robin fashion. The aim of the test is to measure the impact of different OSes with a NGiNX reverse proxy on the results (We are doing this with CentOS 7, Debian 10 and FreeBSD 12)
In our results, (except FreeBSD) we have a lot of non-2xx or 3xx errors inside:10 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 74.50ms 221.36ms 1.90s 91.31%
Req/Sec 5.88k 4.56k 16.01k 43.96%
Latency Distribution
50% 4.68ms
75% 7.71ms
90% 196.01ms
99% 1.03s
3509526 requests in 1.00m, 1.11GB read
Socket errors: connect 0, read 0, write 0, timeout 875
Non-2xx or 3xx responses: 3285230
Requests/sec: 58431.20
Transfer/sec: 18.96MBAs you can see, about 90 percent of the responses are in this category.
I've tried several different configurations on NGINX logging to "catch" some of these errors. But all I get is200 OKin the log. How can I get more information about these responses? | How to find out the errors behind a lot of non-2xx or 3xx responses when load testing nginx reverse proxy with wrk |
You are correct that this header itself doesn't do anything, you need additional logic in your PHP application.In my case I'm using a fastcgi variable rather than a header:fastcgi_param TLS_EARLY_DATA $ssl_early_data;Then in PHP you need to perform a check for any request that is at risk for a replay attack:if ($_SERVER['TLS_EARLY_DATA'] === '1') {
http_response_code(425);
exit;
}
You need this sort of check on everything you want replay protection on (e.g. POST /transfer_money), while you can leave it off of something that has no side effects (e.g. GET /account_balance). Because the attacker cannot decode the payload in the replay, the GET has no teeth and you can allow those requests to use TLS Early Data. Finally, most browsers do not yet support HTTP 425 Too Early, so I would strongly recommend returning an error page telling users to "Refresh and resubmit" the form. As browser support improves, fewer people will see that error page and browsers will handle 425 errors transparently, but we are not there yet. "425 Too Early" is currently supported in: Firefox 58+. And you can track other browsers here: Chrome & friends; no Safari bug yet. | I'm preparing to turn on nginx ssl_early_data to enable 0-RTT with TLS 1.3. I understand that, if I don't do it right, replay attacks become possible. I understand that, to prevent this, you need to also use $ssl_early_data: Requests sent within early data are subject to replay attacks. To
protect against such attacks at the application layer, the
$ssl_early_data variable should be used. What I don't understand is whether it's enough to put this directive in the nginx configuration, or if/how the PHP application on my server should somehow use this $ssl_early_data variable and do some additional checks. | $ssl_early_data from nginx: should the application use it somehow?
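For context, a hedged sketch of the nginx side that pairs with the PHP check in the answer above; the directives are real nginx directives, but the socket path is assumed:
server {
listen 443 ssl;
ssl_early_data on; # enables 0-RTT; replay protection must happen in the application
location ~ \.php$ {
fastcgi_pass unix:/run/php/php-fpm.sock;
# $ssl_early_data is "1" while the handshake is not yet confirmed
fastcgi_param TLS_EARLY_DATA $ssl_early_data;
include fastcgi_params;
}
}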
In your default.conf, replace https://app:3000 with http://app:3000, as SSL termination is happening at Nginx itself; the app is still using http. Update your docker-compose.yaml: use depends_on instead of links, which has been deprecated. version: "3.8"
services:
app:
build: roundmap/
container_name: app
command: [ "node", "app.js"]
ports:
- 3000:3000
nginx:
image: nginx:1.17.2-alpine
container_name: nginx
ports:
- 80:80
- 443:443
depends_on:
- app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
- /etc/ssl/roundmap/roundmap.crt:/etc/ssl/roundmap/roundmap.crt
- /etc/ssl/roundmap/roundmap.key:/etc/ssl/roundmap/roundmap.key | Trying to run nginx and a web application via docker compose. dockerfile: FROM node:12.16.2 as build
RUN mkdir /app
COPY package*.json ./
RUN npm install
COPY . /app
RUN npm run-script build
COPY --from=build /app/build /var/www/roundmap.app
EXPOSE 3000
nginx config default.conf: server {
listen 80;
listen 443 ssl;
listen 3000;
server_name *.roundmap.app 185.146.157.206;
root /var/www/roundmap.app;
index index.html;
ssl_certificate /etc/ssl/roundmap/roundmap.crt;
ssl_certificate_key /etc/ssl/roundmap/roundmap.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
location / {
proxy_pass https://app:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
docker-compose.yml: version: "4"
services:
app:
build: roundmap/
container_name: app
ports:
- 3000:3000
nginx:
image: nginx:1.17.2-alpine
container_name: nginx
ports:
- 80:80
- 443:443
links:
- app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
- /etc/ssl/roundmap/roundmap.crt:/etc/ssl/roundmap/roundmap.crt
- /etc/ssl/roundmap/roundmap.key:/etc/ssl/roundmap/roundmap.keyrunning via docker-compose up
and getting error: nginx | 2020/07/20 16:39:19 [emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/default.conf:22
nginx | nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:22
nginx exited with code 1
Can you please help me see where I am mistaken? | Docker compose error: nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:21
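A sketch of the corrected default.conf the answer describes; only the scheme in proxy_pass changes, everything else follows the question:
server {
listen 80;
listen 443 ssl;
server_name *.roundmap.app;
ssl_certificate /etc/ssl/roundmap/roundmap.crt;
ssl_certificate_key /etc/ssl/roundmap/roundmap.key;
location / {
# plain http to the app container; TLS terminates at nginx
proxy_pass http://app:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}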
You need to use RUN to perform commands when building a docker image, as mentioned in @tgogos' comment; see the reference. You can try something like this: FROM nginx
RUN apt-get update && \
apt-get install git
RUN rm -rf /usr/share/nginx/html && \
git clone https://github.com/Sonlis/kubernetes/html /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]Also, I would like to recommend you to take a look inthis part of documentationof how to optmize your image taking advantages of cache layer and multi-stages builds. | I want to create a nginx container that copies the content of my local machine /home/git/html into container /usr/share/nginx/html. However I cannot use Volumes and Mountpath, as my kubernetes cluster has 2 nodes.
I decided instead to copy the content from my github account. I then created this dockerfile: FROM nginx
CMD ["apt", "get", "update"]
CMD ["apt", "get", "install", "git"]
CMD ["git", "clone", "https://github.com/Sonlis/kubernetes/html"]
CMD ["rm", "-r", "/usr/share/nginx/html"]
CMD ["cp", "-r", "html", "/usr/share/nginx/html"]The dockerfile builds correctly, however when I apply a deployment with this image, the container keeps restarting. I know that once a docker has done its job, it shutdowns, and then the deployment restarts it, creating the loop. However when applying a basic nginx image it works fine. What would be the solution ? I saw solutions running a process indefinitely to keep the container alive, but I do not think it is a suitable solution.Thanks ! | How to cleanly stop containers from restarting Kubernetes |
Well, as it turns out, it had nothing to do with nginx. The ingress element was configured incorrectly, which rewrote every request to "/" and thus every request returned the default page (= index.html). In more detail, I had the following configuration in spec.rules (note the last lines): http:
paths:
- backend:
serviceName: my-app
servicePort: 80
path: /
which had to be changed to: http:
paths:
- backend:
serviceName: my-app
servicePort: 80
path: /(.*) | I have set up a solution where an Angular (v9) application is built as a Docker image (with nginx as the webserver) and deployed to Kubernetes. Everything works, except that for each request, both for the root application itself and for its javascript files, I receive the content of index.html. My nginx configuration file looks as follows (mostly the default one): pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
types {
module;
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
add_header X-Content-Type-Options nosniff;
server {
location / {
# First attempt to serve request as file, then
# as directory, then redirect to index(angular) if no file found.
try_files $uri $uri/ /index.html;
}
}
}
Even if I comment out the location/try_files config lines, the situation is still the same. There is a lot of guidance online for creating such a rewrite, but nowhere did I find anything that would explain why this rewriting happens without me even configuring it. | nginx returns index.html content for all Angular files (including scripts)
This nginx.conf file is what I have been using; it works like a charm for me: server {
listen 80 default_server;
listen [::]:80 default_server;
root /your/web/root/path/nginx/html;
index index.html;
location / {
# Adds support for HTML5 history mode
try_files $uri $uri/ /index.html;
# -or-
# try_files $uri /index.html;
}
} | I have a URL in the form of www.example.com, and am using nginx as the web server. When I go to www.example.com, the site works fine; however, whenever I go to www.example.com/anyUri, I receive a 404. Here is the location element in my sites-available file: location ~*/(.*) {
try_files $uri $uri/ = $404 ;
}
The website is built in React, so there is no real directory, but rather different routes. When I click on a link to navigate to a different route, it loads correctly, but if I try to access that same route directly through the URL, I receive the 404 as well. For example, if from my home page I click "Contact", the URL changes to www.example.com/contact and loads the "Contact" component as desired. If I refresh the page or type in www.example.com/contact manually, I receive the 404. I have my website set up to handle the nonexistent URIs accordingly, and do not need nginx to handle those. Instead, I want nginx to go to www.example.com/anyUri and let the website logic take over from there. I have tried looking up the different patterns online; however, none seem to be working as desired. | Visiting Any URI other than Root Resulting in 404 - nginx
root can optionally take a trailing / - it doesn't matter and it is ignored. The file elements of a try_files statement (as with all URIs in Nginx) require a leading /. See this document for details. For example: root /usr/share/project;
location = / {
try_files /index.html =404;
}
location / {
proxy_pass ...;
}
This works because the URI is internally rewritten to /index.html and processed within the same location. If you use the index directive, the URI is internally rewritten to /index.html and Nginx will search for a matching location to process the request. In this case, you need another location to process the request. For example: root /usr/share/project;
location = / {
index index.html;
}
location = /index.html {
}
location / {
proxy_pass ...;
}
The empty location block inherits the value of root from the outer block. The index statement is the default value anyway, so strictly speaking, you do not need to specify that statement either. Notice that the values of the index directive do not require a leading /. See this document for details. | I'm trying to set up Nginx to work as a reverse proxy, but also handle a single static page (greeting) on its own: root /usr/share/project/;
location = / {
try_files index.html =404;
}
This config always returns 404. When I tried to figure out what exactly happens, I rewrote the try_files directive to make it fail: try_files index.html index.html; and was surprised at what I saw in the error.log: 2019/05/07 17:30:39 [error] 9393#9393: *1 open() "/usr/share/projectindex.html" failed (2: No such file or directory), client: 10.25.88.214, server: , request: "GET /index.html HTTP/1.1"
As you can see, the resulting file name is projectindex.html. The slash was missed. I tried adding / and ./ in different places but it did not lead to anything. Finally I replaced my config with the following: root /usr/share/project/;
location = / {
try_files /index.html =404;
}
location = /index.html {
}
and it works. I do not understand what is wrong with the first config. And I also do not understand the meaning of an empty location: location = /index.html {
}
and why it works properly. Maybe there is a better way to do the same? | Why try_files directive missing slash between root and filename?
I suspect the issue is that Django is rejecting the API requests with a 400 response because of a mismatching Host header. Those ajax requests are probably using 127.0.0.1 for their Host header, whereas you're initiating the main page using localhost. You should tell Nginx to set the header in the location configuration for /nokia-sdn/api/v1/ - for example: location /nokia-sdn/api/v1/ {
proxy_set_header Host $http_host;
proxy_pass http://127.0.0.1:8000/nokia-sdn/api/v1/;
} | I have an application whose frontend is in React and whose backend is in Django, with APIs developed using DRF for the frontend. Now I am using Nginx as the web server along with Gunicorn. The following is my Nginx conf file in sites-available: server {
listen 8000;
server_name localhost;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/nokia-ui/build/;
}
location /nokia-sdn/api/v1/ {
proxy_pass http://127.0.0.1:8000/nokia-sdn/api/v1/;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://unix:/home/nokia-sdn/nokia.sock;
}
}
Now when I access http://localhost:8000/ I get the login page, but I am not able to make any API requests to the backend. Do we need to define the path of the API in the nginx file, and if so, what is the format? | Configuring Nginx and Gunicorn for DRF, Django and React Frontend
The pathname of the requested file is constructed by concatenating the value of the root directive with the URI. So you can only use root with subfolders if (for example) second_path and second_folder are actually the same name. See this document for details. For example: location /foo {
root /path/to/root;
}
The URI /foo/index.html is located at /path/to/root/foo/index.html. Where second_path and second_folder are different names, you will need to use the alias directive. See this document for details. For example: location /foo {
alias /path/to/root/bar;
}
The URI /foo/index.html is located at /path/to/root/bar/index.html. | I have three static sites. I am using Vue 2 and running a build for each folder. I want to host all three static sites on the same server instance. Right now I don't have a domain, so I want to host on the server's IP itself. I have these folders in the html/www folder: first_folder
second_folder
third_folder
All of the above three folders have an index.html file in them. Let's say that I have the IP address 3.12.178.229. I want to access the folders like: http://3.12.178.229 // i.e. path for first_folder
http://3.12.178.229/second_path // i.e. path for second_folder
http://3.12.178.229/third_path // i.e. path for third_folder
I am able to access the index.html file which first_folder has, but when I try to access second_folder using the IP, http://3.12.178.229/second_folder, it does not show anything. {
listen 80;
server_name 3.12.178.229;
location / {
root path_to_first_folder/first_folder; // I am able to access this
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /second_path {
root path_to_first_folder/second_folder; // I am able to access this
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /third_path {
root path_to_first_folder/third_folder; // I am able to access this
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
} | Nginx configuration for multiple static sites on same server instance |
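To make the alias answer concrete for this layout, a hedged sketch; the filesystem root is assumed to be /var/www/html:
server {
listen 80;
server_name 3.12.178.229;
root /var/www/html/first_folder; # http://3.12.178.229/ serves first_folder
index index.html;
location /second_path/ {
alias /var/www/html/second_folder/; # URI prefix and folder name differ, so alias, not root
}
location /third_path/ {
alias /var/www/html/third_folder/;
}
}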
Click this button as shown on the screenshot: | I am running a Django project in PyCharm and deploying it to EC2 in AWS. The guide tells me that to use nginx I have to create a file called nginx_someName.conf, however no matter how much I try I can't create a .conf file and write in it. I tried to add Scala to PyCharm using the question below (IntelliJ IDEA plugin to fold .conf files?), but somehow the Scala plugin is no longer available in PyCharm. (Scala was supposed to be a plugin that allows .conf files.) However, I am able to create .config files, so I named my file nginx_someName.config. Is it the same thing? Image below in relation to @yole's solution. Following @yole's advice (related image): reached here now. | How to add .conf files to Pycharm
I had a similar issue before. It might be because you are zipping and uploading the whole project folder instead of just the project folder contents. | I tried changing the nginx config using: Attempt 1: .ebextensions/000_nginx.config container_commands:
01_reload_nginx:
command: "sudo echo 'underscores_in_headers on;' >> /etc/nginx/conf.d/elasticbeanstalk/00_application.conf"andAttempt 2.ebextensions/000_nginx.configfiles:
"/tmp/proxy.conf":
mode: "000644"
owner: root
group: root
content: |
underscores_in_headers on;
container_commands:
00-add-config:
command: cat /tmp/proxy.conf >> /etc/nginx/conf.d/elasticbeanstalk/00_application.conf
01-restart-nginx:
command: /sbin/service nginx restart
and Attempt 3: .ebextensions/nginx/conf.d/elasticbeanstalk/00_application.conf location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
underscores_in_headers on;
But each try updates the files and then clears out the changes after code deploy to Beanstalk. How can I prevent the config files from being overwritten, or basically how do I change the Nginx config? | Nginx config overwrites during beanstalk deployment
Yes, just make sure paths in your urls.py do not overlap with routing from your CHANNEL_LAYERS: CHANNEL_LAYERS = {
"default": {
# ...
"ROUTING": "websockets.routing.channel_routing",
},
} | I currently have a Django REST API which is developed using docker, nginx, uWSGI, redis, Django & Angular. I am adding a couple of websocket endpoints; I would like to keep the existing architecture and keep serving http requests via uWSGI & nginx, and use Django channels (with nginx) for the websocket connections. Is that possible? If so, can I use just one container and start uWSGI and daphne on different ports? Or do I need a separate Django app for channels altogether, and a separate container? | Django Channels Along with uWSGI
You should use the following code: location / {
# First attempt to serve request as file, then
# as directory, then fall back to proxy
try_files $uri $uri/ @proxy;
}
location @proxy {
proxy_pass http://www.example.com; # proxy_pass requires a scheme
} | So I am using nginx to reverse proxy to another server. This wasn't serving static files until I linked them in the location. The location block is super long, but looks similar to the code below. I'm sure that I'm doing this the wrong way, but it works; it's just tedious to write all the paths. I'm wondering if there's a better way: location / {
proxy_pass www.example.com;
}
location /sytlesheet.css {
proxy_pass www.example.com/stylesheet.css;
}
location /page1 {
proxy_pass www.example.com/page1;
}
#this goes on and on
Is there a way to get everything past the '/', for example 'page1', and pass that to the proxy without manually typing it? I'm hoping there's a way to use a variable or something to link all the pages and resources with a single location block: location / {
proxy_pass www.example.com;
}
location /$variable {
proxy_pass www.example.com/$variable;
}
Thanks! | nginx proxy_pass to all pages
A given media file will only get downloaded after you have clicked on a link. You can confirm this yourself by getting onto the page in question, then hitting F12 or ctrl-shift-i in your browser (firefox/chrome/opera) to open up your developer tools, then hitting the Network tab, which will display network traffic ... once there, do a page refresh and observe traffic ... next to none, since no media files have been requested. Now click on a media link to request a download, and only then will you see significant network traffic as the media packets come tumbling into the browser. By default the above setup will just download the mp3, not stream it ... to stream an mp3 file, create on the server side a text file called mysong.m3u which contains the URL of the actual mp3 file: http://sorabhdomain.com/mymedia/mysong.mp3 then have the browser link use the m3u URL, not the mp3 URL, and the browser should now stream rather than download. | I have my web application which has many audio files. I have kept these files on my Nginx server. On my HTML page, I am using the audio tag.
My question is: when my HTML page loads in the web browser, do all the audio files get downloaded at the same time? Or when the user plays a particular audio file, does only that audio get streamed and downloaded? Since my page has many audio files, I need only the audio that the user plays to be streamed/downloaded. | Audio mp3 stream from static server NGINX
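If nginx is not already sending useful Content-Type headers for the media and playlist files, a small types block like this sketch can help the browser stream rather than save; the /mymedia/ location is assumed from the answer's example URL:
location /mymedia/ {
types {
audio/mpeg mp3;
audio/x-mpegurl m3u;
}
}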
Yes, it's appropriate to do these kinds of redirects in the webserver. If it's https, your certificate needs to cover both domains. | I have a Docker + Gunicorn + Nginx + Django setup on AWS EC2 and Route 53. Right now I want to redirect mydomain.com to www.mydomain.com. Is it appropriate to do a redirect in an Nginx configuration? Or are there better solutions? Here is my docker-compose.yml, using gunicorn to start the Django server. version: '2'
services:
nginx:
image: nginx:latest
container_name: dj_nginx
ports:
- "80:8000"
- "443:443"
volumes:
- ./src/my_project/static:/static
- ./src:/src
- ./config/nginx:/etc/nginx/conf.d
depends_on:
- web
web:
build: .
container_name: dj_web
command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn my_project.wsgi -b 0.0.0.0:8000"
depends_on:
- db
volumes:
- ./src:/src
- ./apps/django_rapid:/src/my_project/django_rapid
expose:
- "8000"
db:
image: postgres:latest
container_name: dj_db
Here is my Nginx conf: upstream web {
ip_hash;
server web:8000;
}
# portal
server {
listen 8000;
location / {
proxy_pass http://web/;
}
location /media {
alias /media; # your Django project media files - amend as required
}
location /static {
alias /static; # your Django project static files - amend as required
}
server_name localhost;
}
# portal (https)
server {
listen 443;
server_name localhost;
ssl on;
ssl_certificate /etc/nginx/conf.d/mynginx.crt;
ssl_certificate_key /etc/nginx/conf.d/mynginx.key;
location /media {
alias /media; # your Django project media files - amend as required
}
location /static {
alias /static; # your Django project static files - amend as required
}
location / {
proxy_pass http://web/;
}
} | Docker + Gunicorn + Nginx + Django: redirect non-www to www on AWS Route 53 |
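Since the answer recommends doing the redirect in the webserver, a minimal sketch of a non-www to www redirect server block; the domain name is assumed from the question:
server {
listen 80;
server_name mydomain.com;
return 301 $scheme://www.mydomain.com$request_uri;
}
Place it alongside the existing server blocks; requests for the bare domain then get a permanent redirect before ever reaching Django.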
The above configuration works. The issue on my side was using the wrong nginx config file, which had the old settings. | I'm setting up a server with a Phoenix app that will use websockets. Locally websockets work, but I have problems setting them up on my staging server. Can someone help me with setting up websockets on my server? I have nginx configured like this: map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream my_app {
server 0.0.0.0:6443;
}
server {
listen 80;
listen 443;
server_name example.com;
ssl on;
ssl_certificate /path/to/wildcard.crt;
ssl_certificate_key /path/to/wildcard.key;
ssl_prefer_server_ciphers on;
if ($request_uri ~ "^[^?]*//") {
rewrite "(.*)" $scheme://$host$1 permanent;
}
if ( $scheme = http ){
rewrite ^ https://$host$request_uri permanent;
}
location / {
allow all;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Cluster-Client-Ip $remote_addr;
# WebSockets
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass https://my_app;
}
}
and my phoenix config is: use Mix.Config
config :my_app_core, MyAppCoreWeb.Endpoint,
load_from_system_env: false,
url: [host: "https://example.com", port: {:system, "PORT"}],
http: [port: {:system, "PORT"}],
https: [otp_app: :my_app_core, port: 6443, keyfile: "/path/to/wildcard.key", certfile: "/path/to/wildcard.crt"],
server: true,
root: ".",
watchers: [],
version: Mix.Project.config[:version],
check_origin: false
config :logger, :console, format: "[$level] $message\n"
config :phoenix, :stacktrace_depth, 2
config :phoenix, :serve_endpoints, true
import_config "workspace.secret.exs"I'm testing the connection withhttp://phxsockets.io/and I getfailed: Error during WebSocket handshake: Unexpected response code: 400Can someone help with this? | How to configure Elixir, NGINX, Websockets on server |
This feature now exists for both Linux and Windows and is in the same section of the Azure Portal for Web App Services.
Go to your Web App Service
Go to Settings
Go to Configuration
Select the tab for General Settings
Beneath Platform Settings, choose "2.0" from the Http version drop-down. | SSL configuration is handled upstream on Azure App Service. So running an App Service as a Docker container and configuring Nginx for server { listen 443 ssl h2; } is not necessary and in fact will not render a webpage. How do I modify the configuration on Azure App Service for Linux to run the http2 protocol when SSL with a custom domain is set up on the service? Thanks, | Does Azure App Service for Linux support http2?
This is an NGINX maintenance page example for docker-compose. Just keep nginx in a separate container in the same docker-compose.yml and deploy like this: docker-compose up -d --build --force-recreate your-app-service
Add some logic to put the maintenance page in nginx; the nginx service won't be touched by compose. Use something like this to enable a maintenance site. Your nginx config: upstream backend {
server app:80;
server maintenance:80 backup; # <-- note the backup flag
}
server {
location / {
proxy_pass http://backend;
proxy_connect_timeout 1s;
}
}
Then in your docker-compose.yml: version: "3"
services:
app:
(...)
nginx:
(...)
maintenance:
image: nginx
volumes:
- ./maintenance.html:/usr/share/nginx/html/index.html
- ./maintenance.conf:/etc/nginx/conf.d/default.conf
maintenance.conf: server {
root /usr/share/nginx/html;
listen 8080;
location / {
rewrite ^ /index.html break;
}
}
I have a complete working example here: https://github.com/xbx/docker-compose-nginx-maintenance-page-example | I think I have a chicken-and-egg situation: My Rails app is Docker based and I have several images for nginx, Rails, a Resque worker, Redis and MySQL. My current implementation of deployment is (simplistically): docker-compose build
docker-compose down
... compile assets
... migrate
docker-compose up
Which works great, but of course if I browse to the app during deployment I don't get any response, which isn't a great user experience. I know of setting a 'maintenance' page in nginx that is served while the site is in maintenance mode, but the nginx image is part of the docker-compose spec, so that will go down as well. Having all the images in one docker-compose spec does make deployment easier - if anything changes in any image (including nginx), that will be deployed automatically. And especially because nginx, Rails, MySQL, etc. are all in the same net. How could I keep serving a maintenance page while the app is redeploying, if nginx is part of the docker-compose spec? (If it makes a difference, I'm using gitlab and a gitlab-runner container on the host to do the deployment from the repo.) Thanks | nginx maintenance page for Docker based Rails app
nginx always has a default server - the one that is used if the server_name does not match. If you only have one server block with listen 443, then that is the implicit default server for all https connections, irrespective of server name. You will need to set up an explicit catch-all server for https connections, or add listen 443 ssl to an existing server block to act as the catch-all server. You can reuse the same certificate file, and you will continue to get certificate errors if anyone attempts to use it, but at least your other domains will not be exposed. For example: ssl_certificate /path/to/crt;
ssl_certificate_key /path/to/key;
server {
listen 443 ssl;
server_name domain1;
...
}
server {
listen 443 ssl default_server;
return 403;
}
See this document and this document for more. | I have two domains set up on a Digital Ocean droplet (with nginx). I've installed an SSL certificate on one of them (domain1) and everything is fine with that one. The second domain (domain2) does not require an SSL certificate, but if I try to access https://domain2 it shows me the content of domain1 and gives me a certificate error (This page is not secure). I understand the certificate error, but I don't want the contents of domain1 being displayed at https://domain2. Is it a configuration problem? | Domain without ssl certificate redirecting to different ssl domain
The speed increase you're probably seeing in PHP using NFS over memcached is a deceptive one by nature. PHP session storage defaults to lock acquisition on a first-come-first-served basis, which means that two concurrent requests made to PHP for the same session will cause the first request to lock the session until either PHP is done or you explicitly call session_write_close() from your code to release the lock. But in a file based session store, PHP relies on flock, which doesn't work in NFS: The NFS (Versions 2 and 3) protocol does not support file locking. See this answer on unix stackexchange. So for a distributed session store, you rarely want a slow file-system based lock. Most in-memory stores work faster anyway. And since NFS can't typically handle flock calls, your sessions will get corrupted if two concurrent requests try to write to the same session file. In other words, what you're seeing as faster is basically your requests potentially corrupting their sessions faster, because there's no lock on the session for concurrency. If your requests take a really long time and don't require the session, it's best to explicitly call session_write_close as early in the code as possible when you are done with the session, so that any other concurrent requests coming in can get at the session. This is typically a problem when you're doing a lot of long-polling requests to PHP (say over AJAX). | I have one question. I am using nginx and PHP-FPM.
I am using a load balancer for 2 php-fpm servers. To keep sessions from both php-fpm servers synced, I used memcached.
But when I use memcached, I see that the page is slowing down. When I use files as the session save type, the web is running faster, but sessions are not synced immediately (I guess the files are overwriting each other). I am using NFS to share sessions. Any ideas on how to sync sessions when using an nginx load balancer for php-fpm servers? | NGINX + phpFPM load balancer and sessions
It's better to do it at the network level. For unix: iptables -A INPUT ! -s 127.0.0.1 -p tcp -m tcp --dport 8080 -j DROP
See https://serverfault.com/a/248864 | I have the following config: server {
listen 80;
server_name my_server.com;
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
The application runs on port 8080. How can I block this port? I mean, when some user opens http://my_server.com:8080, the server should send no response. Thank you! | Block a specific port in Nginx
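As an nginx-level alternative to the iptables answer above - a sketch of a different technique, plainly named: keep nginx on port 80 as the only public entry point and have the application bind to loopback, so port 8080 is simply unreachable from outside:
server {
listen 80;
location / {
proxy_pass http://127.0.0.1:8080; # app bound to 127.0.0.1 only
}
}
The application-side change (listening on 127.0.0.1 instead of 0.0.0.0) is what actually closes the port; nginx config alone cannot block a port another process listens on.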
The issue was due to the PHP session lock. When I made a certain request, PHP would lock the session file and release it only after the request was completed. To avoid this, you may use session_write_close(). In my case, I implemented redis sessions. Problem solved! | I have a php-fpm & nginx stack installed on my server. I'm running a JS app which fires an AJAX request that internally connects to a third party service using curl. This service takes a long time to respond, say approximately 150s. Now, when I connect to the same page in another browser tab, it doesn't even return the javascript code on the page which triggers the ajax requests. Basically, all subsequent requests keep loading until either the curl returns a response or it times out. Here, I have proxy_read_timeout set to 300 seconds. I want to know why nginx is holding the resource and not serving other clients. | Nginx PHP-FPM and curl hangs subsequent browser to server requests
There is a problem in your config: you have SSI enabled in both servers, due to ssi on; defined at http{} level. This results in SSI directives being expanded in the second server{}. The response as cached in the first server doesn't have any SSI directives in it (they are already expanded), and hence it stays the same all the time. If you want the included fragment to be fresh on every request, you have to enable SSI only in the first server, e.g.: proxy_cache_path /path/to/cache keys_zone=my_cache:20m;
server {
listen 80;
server_name first.example.com;
location / {
proxy_pass http://127.0.0.1:81;
proxy_cache my_cache;
ssi on;
}
}
server {
listen 81;
server_name second.example.com;
location ~ ^/.+\.php {
fastcgi_pass 127.0.0.1:9000;
}
}
Note that ssi on is in the first server, along with proxy_cache my_cache. This way nginx will cache backend responses with SSI directives in them, and will redo SSI processing on every request, caching includes separately if needed. | I'm trying to set up a basic working Nginx+SSI example. Nginx config (just the relevant parts, for brevity): ssi on;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:20m max_size=20m inactive=60m use_temp_path=off;
server {
listen 80;
server_name localhost;
location / {
proxy_cache my_cache;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_buffering on;
proxy_pass http://127.0.0.1:81;
}
}
server {
listen 81;
root /path/to/root;
location ~ ^/.+\.php {
fastcgi_pass 127.0.0.1:9000;
}
}
ssi.php:
Time: <?php echo time(); ?>
Fragment time: <!--#include virtual="time.php" -->
time.php: <?php
header('Cache-Control: no-cache');
echo time();
The include works nicely: Time: 1466710388
Fragment time: 1466710388
Now, a second later I would expect the page (ssi.php) to still be cached, but the time.php fragment to be fresh: Time: 1466710388
Fragment time: 1466710389
However, it stays completely the same for 5 seconds, after which the ssi page is updated along with the fragment: Time: 1466710393
Fragment time: 1466710393
I've done this before with ESI and Varnish, and expected this to work the same with SSI. Am I wrong in assuming this? I can't find an answer online for this, and have experimented with different cache-control headers, but I'm fairly sure this is the right way to do this. What am I missing here? | Nginx/SSI independent fragment caching
You can simplify by capturing the "." in $sub: server {
listen 80;
listen 443 ssl;
server_name ~^(?<sub>\w+\.)?firstdomain\.org$;
ssl_certificate /path/to/certificate;
ssl_certificate_key /path/to/certificatekety;
return 301 "$scheme://${sub}seconddomain.org$request_uri";
} | I'm currently playing around with nginx and am trying to redirect all traffic for e.g. firstdomain.org to seconddomain.org. This is working fine with a simple redirect, but I now also want it to hand on the URI, the scheme and the subdomain. E.g. http(s)://firstdomain.org/ redirects to http(s)://seconddomain.org/, http(s)://firstdomain.org/test redirects to http(s)://seconddomain.org/test, http(s)://test.firstdomain.org/ redirects to http(s)://test.seconddomain.org/, and so on. My current set up is like this: server {
listen 80;
listen 443 ssl;
server_name ~^(?<sub>\w+)\.firstdomain\.org$, firstdomain.org;
ssl_certificate /path/to/certificate;
ssl_certificate_key /path/to/certificatekety;
location / {
if ($sub = '') {
return 301 $scheme://seconddomain.org$request_uri;
}
return 301 $scheme://$sub.seconddomain.org$request_uri;
}
}
This is redirecting links without a subdomain just fine, but as soon as it's e.g. http(s)://test.subdomain.org or http(s)://test.subdomain.org/test it does not work anymore. Is there anything I have missed, or is there maybe even an easier way nginx supports to achieve what I want to do? | redirect all traffic for a certain domain using nginx
This is a known bug in Kestrel RC1: https://github.com/aspnet/KestrelHttpServer/issues/341. You can work around it by forcing Connection: keep-alive: proxy_set_header Connection keep-alive; | I am trying to get nginx, ASP.NET 5, Docker and Docker Compose working together in my development environment but I cannot get it working so far. This is the state where I am now, and let me briefly explain here as well. I have the following docker-compose.yml file: webapp:
build: .
dockerfile: docker-webapp.dockerfile
container_name: hasample_webapp
ports:
- "5090:5090"
nginx:
build: .
dockerfile: docker-nginx.dockerfile
container_name: hasample_nginx
ports:
- "5000:80"
links:
- webapp:webapp
docker-nginx.dockerfile file: FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
and docker-webapp.dockerfile file: FROM microsoft/aspnet:1.0.0-rc1-update1
COPY ./WebApp/project.json /app/WebApp/
COPY ./NuGet.Config /app/
COPY ./global.json /app/
WORKDIR /app/WebApp
RUN ["dnu", "restore"]
ADD ./WebApp /app/WebApp/
EXPOSE 5090
ENTRYPOINT ["dnx", "run"]nginx.conffile:worker_processes 4;
events { worker_connections 1024; }
http {
upstream web-app {
server webapp:5090;
}
server {
listen 80;
location / {
proxy_pass http://web-app;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
}
All good, and when I run docker-compose up it gets two containers up and running, which is also nice. From the host, when I hit localhost:5000, the request just hangs, and when I terminate the request, nginx writes out a log through docker compose indicating a 499 HTTP response. Any idea what I might be missing here? Update: I added some logging to the ASP.NET 5 app, and when I hit localhost:5000 I can verify that the request is being sent to ASP.NET 5, but it's being terminated immediately, giving a healthy response judging from the 200 response. Then nginx sits on it until I terminate the request through the client. | Request hangs for nginx reverse proxy to an ASP.NET 5 web application on docker
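A sketch of where the workaround from the answer lands in the question's nginx.conf; only the Connection header changes, the rest follows the posted config:
location / {
proxy_pass http://web-app;
proxy_http_version 1.1;
# Kestrel RC1 mishandles "Connection: upgrade"; force keep-alive instead
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
}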
By default, nginx catches HTTP error codes. It is a good thing, for security purposes. It is possible to disable this behaviour: you can set uwsgi_intercept_errors off. http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_intercept_errors You can use custom static error pages, served by nginx. Example: error_page 413 /custom_413.html;
location = /custom_413.html {
root /usr/share/nginx/html;
internal;
}
Just set it for all error codes you want to handle. | I am using the following errorhandler in my flask app: @app.errorhandler(413)
def error413(e):
return render_template('error413.html'), 413
which shows an error page if error 413 happens (filesize too large). This works fine on my localhost, but on the server I get the nginx 413 error page instead: 413 Request Entity Too Large
nginx/1.4.6 (Ubuntu)
Is there anything which is different between the nginx server and localhost regarding error handling?
I use gunicorn together with nginx...
thanks
carl | errorhandler in flask and nginx server |
The problem was a trailing slash in the proxy_pass directive. It should've been proxy_pass http://default;. Thanks to anatoly for pointing that out. UPDATE: the Nginx documentation is a bit confusing, but it highlights the difference between proxy_pass with/without a URI: A request URI is passed to the server as follows: If the proxy_pass directive is specified with a URI, then when a
request is passed to the server, the part of a normalized request URI
matching the location is replaced by the URI specified in the directive: location /name/ {
proxy_pass http://127.0.0.1/remote/;
}
If proxy_pass is specified without a URI, the request URI is passed to
the server in the same form as sent by a client when the original
request is processed, or the full normalized request URI is passed
when processing the changed URI:location /some/path/ {
proxy_pass http://127.0.0.1;
} | My question is: how do I get Nginx to forward a domain (www.example.com) to a meteor app on the same server, without ssl. Here's the details:
I'm trying to use Nginx to host an app made by meteor on my own server. I've checked a ton of different config files that I've found online (most of which are dated) but I can't seem to get Nginx to forward my domain name to port 3000, where meteor can pick it up and handle the web page. The most recent config for Nginx to proxy a port is this: upstream default {
server 127.0.0.1:3000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://default/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forward-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}I've slightly modified it to what I believe is correct for my setup. I'm using the default config file in Nginx and I have created a meteor app in /usr/share/nginx/html using "meteor create html."I know it's bad habit to use the defaults for all this but I'm just trying to get a meteor app up and running first.I should have all the dependencies installed: meteor, nodejs, mongodb, and nginx.A lot of the more up to date nginx configs I've found are using SSL which I don't intend to use. I'm not sure how to modify them for what I need either.Could someone explain either why this config doesn't work or what I'm missing to get Nginx to point to my meteor app at www.example.com:3000?Thanks in advance.P.S.I've been able to get the same setup working using a VM, with the exact same config file. I'm at a loss as to where I'm missing a step. | Nginx config for meteor |
In order to dynamically create the ts segments from a static file like an mp4, the filename and extension must be present in the m3u8 playlist filename: myvideo_high.mp4.m3u8 for myvideo_high.mp4. For myvideo_high.m3u8 it assumes the segments already exist. The Serving Media with NGINX Plus whitepaper shows an example of a manually created m3u8 variant playlist which is incorrect due to page formatting (line wrapping): #EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=545600,RESOLUTION=416x234,
CODECS="avc1.42e00a,mp4a.40.2"
/hls/myvideo_low.mp4.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,RESOLUTION=640x360,
CODECS="avc1.42e00a,mp4a.40.2"
/hls/myvideo_high.mp4.m3u8THe#EXT-X-STREAM-INFinformation should be on a single line (no new-line characters):#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=545600,RESOLUTION=416x234,CODECS="avc1.42e00a,mp4a.40.2"
/hls/myvideo_low.mp4.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,RESOLUTION=640x360,CODECS="avc1.42e00a,mp4a.40.2"
/hls/myvideo_high.mp4.m3u8 | I have installed Nginx Plus and configured HLS for streaming. While requesting them3u8file I'm getting the error:2015/09/29 13:32:34 [error] 5814#5814: *1 open() "/usr/video/hls/CODECS="avc1.42e00a,mp4a.40.2"" failed (2: No such file or directory)Them3u8file has the following contents:#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=545600,RESOLUTION=416x234,CODECS="avc1.42e00a,mp4a.40.2"
/usr/video/hls/myvideo_low.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,RESOLUTION=640x360,CODECS="avc1.42e00a,mp4a.40.2"
/usr/video/hls/myvideo_high.m3u8The Nginx configuration is:location /hls {
root /usr/video;
hls;
hls_fragment 5s;
hls_buffers 10 10m;
hls_mp4_buffer_size 1m;
hls_mp4_max_buffer_size 5m;
types {
application/vnd.apple.mpegurl m3u8;
video/mp2t ts;
}
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Cache-Control' 'no-cache';
}In the browser I am getting a warning:
"No TS Fragments found". | Nginx Plus not streaming HLS |
AddActionController::Base.relative_url_root = "/app1"to the end of yourconfig/environment.rbof app1 (similarly for other two apps). This will make Rails add proper prefix to URLs.If you don't want to mess with Rails config, you probably could force Nginx to go through all of your assets folder until it finds the one it needs, if I'm not mistaken it could be archived like this:location /assets/ {
try_files /app1/$uri /app2/$uri /app3/$uri;
}Please note that you must have different filenames for assets of different apps. That is already so if you are using asset pipeline everywhere, as it hashes file names.UPD.You can also try 'Referer'-based routing:location /assets/ {
if ($http_referer ~* /app1) {
rewrite ^(.*)$ app1/$1 break;
}
if ($http_referer ~* /app2) {
rewrite ^(.*)$ app2/$1 break;
}
if ($http_referer ~* /app3) {
rewrite ^(.*)$ app3/$1 break;
}
} | I'd like to use nginx in order to map all my rails apps on port 80.Currently, I have 3 rails apps running on port 3000 3001 and 3002 and I'd like to use nginx on port 80 to map them so :http://127.0.0.1/app1 => 127.0.0.1:3000
http://127.0.0.1/app2 => 127.0.0.1:3001
http://127.0.0.1/app3 => 127.0.0.1:3002Here's what I did :server {
listen 80;
location /app1/ {
proxy_pass http://127.0.0.1:3000/;
}
location /app2/ {
proxy_pass http://127.0.0.1:3001/;
}
location /app3/ {
proxy_pass http://127.0.0.1:3002/;
}
}However, when I try to accesshttp://127.0.0.1/app1, I only get the HTML content, no assets/js/css as the browser tries to get them fromhttp://127.0.0.1/assetsinstead ofhttp://127.0.0.1/app1/assets.Is there a way to fix this? | Using nginx to map rails applications |
I solved the problem quite easily by symlinking the package of interest in .env/lib/python2.7/site-packages. I originally tried to symlink the entire project folder but that didn't work as it couldn't find the package.It seems that my uWSGI/Nginx just follows the virtualenv's version of pythonpath, so whatever I configure there is used.It will be a bit of a pain to have to remember to symlink every package, but at least I only have to do it once for each package.I'm using PyDev, and it was masking the issue because I was using the default Python interpreter, not the one in virtualenv. Once I changed that, it was easier to solve. | I have a somewhat intricate project setup consisting of several components that work together. Each component is a separate Python project that is hosted as a uWSGI application behind an Nginx proxy. The components interact with each other and with the outside world through the proxy.I noticed myself about to cut-and-paste some code from one component to another, as they perform similar functions, but interact with different services. Obviously, I want to avoid this, so I am going to pull out common functionality and put it into a separate 'library' project, to be referenced by the different components.I am running these apps in a virtual environment (using virtualenv), so it should theoretically be easy to simple drop the library project into .env/includes.However, I have a bit of a strange setup. First of all, I am running the project from /var/www (i.e. uWSGI hosts the apps from here), but the projects actually are present in another source controlled directory. For various reasons, I don't want to move them, so I created symlinks for the project directories in /var/www. This works fine. However, now I have a potential problem, namely, where do I put the library project (which is currently in the same directory as the other components), which I also want to symlink?Do I symlink it in .env/includes? And if so, how should I reference the library from my other components? Do I reference it from sys.path or as a sibling directory? Is Nginx/uWSGI with virtualenv following the symlinks and taking into account the actual directory or is it blindly assuming that everything is in /var/www?I have not tried either approach because there seems to be a massive scope for problems, so I wanted to get some input first. Needless to say, I am more than a little confused. | Virtualenv: How to make a custom Python include shared by multicomponents hosted by uWSGI |
Get rid of the file extensions in your@importdeclarations. It should be:@import "jquery-ui";
@import "admin/jquery.datepick"; | I had put many css files in the fileactive_admin.css.scss:// Active Admin's got SASS!
@import "active_admin/mixins";
@import "active_admin/base";
@import "admin/plugins/*";
@import "admin/calendar";
@import "jquery-ui.css";
@import "admin/jquery.datepick.css";But the files"jquery-ui.css"and"admin/jquery.datepick.css"are creating problems. I am getting the404 Not Founderror in the browser console for below :http://staging.xxx.com/assets/jquery-ui.css
http://staging.xxx.com/assets/admin/jquery.datepick.cssI also checked the assets in browsers, these 2 files are present, but they don't have content inside it. I am using Nginx as my webserver in Ec2. All is working in development, but not in production.My Ngnix is configured as mentioned inthis answer. I am usingCapistranoto deploy. Everything is working but not those 2 files.I have the below settings inproduction.rbtoo :config.assets.compile = true
config.assets.precompile += %w[active_admin.css active_admin.js]And still it didn't work. I found the above suggestion fromhere. | Why getting empty CSS files in Production? |
You should install docker:- name: install docker
shell: curl -sSL https://get.docker.com/ | sh
args:
creates: /usr/bin/dockerAnd you should check that it works:- name: Wait for the Docker server to start
action: raw docker version
register: docker_version
until: docker_version.stdout.find("Client") != -1
retries: 30
delay: 10And you need met all dependencies(http://docs.ansible.com/ansible/docker_module.html):Requirements (on host that executes module)
python >= 2.6
docker-py >= 0.3.0
The docker server >= 0.10.0 | I get an error onTASK: nginx container:failed: [localhost] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
FATAL: all hosts have already failed -- abortingWhen play nextAnsibleplaybook:---
- name: Play
hosts: localhost
vars: []
tasks:
- name: nginx container
docker:
name: my.nginx2
image: nginx
state: startedWhat I do wrong? Is this a bug?P.S. More verbose output got with-vvvvis: REMOTE_MODULE docker state=started name=my.nginx2 image=nginx
EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1431434101.65-11072088770561 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1431434101.65-11072088770561 && echo $HOME/.ansible/tmp/ansible-tmp-1431434101.65-11072088770561']
PUT /tmp/tmp7ySlXq TO /home/victor/.ansible/tmp/ansible-tmp-1431434101.65-11072088770561/docker
EXEC ['/bin/sh', '-c', u'LANG=C LC_CTYPE=C /usr/bin/python /home/victor/.ansible/tmp/ansible-tmp-1431434101.65-11072088770561/docker']
failed: [localhost] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
FATAL: all hosts have already failed -- aborting | msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),) |
First, you have to edit your Nginx configuration files to change fastcgi_read_timeout. There is no getting around that, you have to change that setting.I'm not sure why you say "I shouldn't modify these files in my case". I think your reason might be that you want to change the timeout for one of your websites, but not others. The best way I've found to accomplish that is go ahead and set fastcgi_read_timeout to a very long timeout (the longest you would want for any of your sites).But you won't really be counting on using that timeout, instead let PHP handle the timeouts. Edit your PHP's php.ini and set max_execution_time to a reasonable amount of time that you want to use for most your websites (maybe 30 seconds?).Now, to use a longer timeout for a particular website or web page, use the set_time_limit() function in PHP at the beginning of any scripts you want to allow to run longer. That's really the only easy way to have a different setting for some websites but not others on a Nginx / PHP-FPM set up. Other ways of changing the timeout are difficult to configure because of the way PHP-FPM shares pools of PHP threads with multiple websites on the server. | I would like to increase timeout of one php site on nginx so I don't get "504 Gateway timeout". I've tried set_time_limit but it doesn't work. I've found some solutions which are based on modification of configuration files (e.g.Prevent nginx 504 Gateway timeout using PHP set_time_limit()). However I shouldn't modify these files in my case. Is there such a way?Thanks for any efforts. | Nginx PHP set_time_limit() not working |
I solved it.
The answer was easy, indeed the user "http" was not allowed to execute things from the wiringpi library which the C-program needed to run.In the end I simply did:chmod +s action(This sets modifies the executable (called "action") to always run with root privileges.)
...and the code ran as expected with the following PHP file (index.php):Thanks for all the help! | I would like to execute a program using PHP, a piece of code that will use an RF transmitter to switch in my lamp.This is achieved from the command line by:action 63 A onIt is just a C program someone wrote to control the GPIO pins on my raspberry pi. So I make an index.phpIt does nothing while:Gives me the default text output of the program (an explanation on parameters to use). This tells me PHP works, the program is located, it can be executed. But my lamp is not switching on. Moreover, typing on the command line:php index.phpDoes switch my lamp on/off as expected! (Using the first variety of the file)
Is Nginx (user http) not allowed to switch the lamp on/off? It is allowed to execute the file, at least, it can make it generate text output.I also tried:And some more varieties like shell_execAnd thoughts? | Execute code to switch light on with PHP |
You have commented the configuration data.First remove all the # from your configuration file.Then use the below code inside the server {}location / {
root data/www;
}
location /images/ {
root data;
}Note- the location of the static file inside your nginx root folder should be the (root+location) data and the access of the file should be "location" data. e.g from the first location configuration the static file should present inside the folder "data/WWW/" and in the second location configuration the static file should present inside the folder "data/images/".URL folder inside nginx home path
----- --------------------------
localhost/hello.html data/WWW/hello.html
localhost/images/img1.png data/images/img1.png | I want to configure the nginx server in my windows7 PC.For storing images and html files.
I followed up the following steps:
1.I downloaded the nginx-1.2.9 and unziped into c:\ filder.
2. created one folder "data" and within 'data" folder created another two folders say "WWW" and "images".
3.Keeping all images in the "images"folder .and .html file in folder "WWW".
4.Now started the nginx server using command C:\nginx-1.2.9>start nginx5.Made changes in nginx.conf file.`
#server {
#location / {
# proxy_pass http://127.0.0.0:8080;
#}
#location /images/ {
# root /C:/data/images;
# }
}Not able to access images and html page.
Please help me to solving this problem. I'm sure doing mistake in config file only..
Thanks in Advance,
Satya | configuring nginx server to store the static images and html |
It would sound to me like you want to have basic auth enabled for the whole server, and not just a singlelocation. The waylocations work, is that only one applies at a time, hence if you specify an auth policy in a singlelocation, but it does arewriteto some otherlocation, then the otherlocationwill not be subject to the auth policy from the previouslocationAccording to the documentation forauth_basic, it seems like it's allowed to be used not just in thelocationcontext, but inserverandhttp, too.Therefore, if you want your wholeserverto require authentication, simply move all of yourauth_basicdirectives up one level, from being in a singlelocation, to being in a singleserver. | I have the following nginx configuration I would like to add basic authentication to this configuration to this.upstream phpfcgi {
server 127.0.0.1:7777;
# server unix:/var/run/php5-fpm.sock; #for PHP-FPM running on UNIX socket
}
server{
listen 80;
server_name qu.abc.com;
root /home/qu/website/current/web;
location / {
# try to serve file directly, fallback to rewrite
try_files $uri @rewriteapp;
}
location @rewriteapp {
# rewrite all to app.php
rewrite ^(.*)$ /app.php/$1 last;
}
location ~ ^/(app|app_dev|config)\.php(/|$) {
fastcgi_pass phpfcgi;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
}I have edit the location block to thislocation / {
# try to serve file directly, fallback to rewrite
auth_basic "Restricted";
auth_basic_user_file /home/qu/website/current/web/.htpassword;
try_files $uri @rewriteapp;
}But this one doesn't work with rewrite.
Could somebody help on this. | How can I add basic auth to an nginx configuration with rewrite |
I don't know a ready-to-use library. But it seems pretty easy to write a script which generates an Nginx config file from the application's routes (for example, during application setup). This file can be included into the main configuration of the server using the "include" command of the Nginx config: server {
listen 80;
server_name example.com;
include /path/to/application/routes.conf;
} | I've been following the Web Frameworks Benchmark and have noticed that a number of web frameworks suffer from the same performance penalty: they do HTTP routing within the framework itself and do not leverage the highly performant HTTP server of NGINX to do routing. For example, in the Flask python framework, you might have: @app.route('/add', methods=['POST'])
def add_entry():
...Which makes your application much easier to follow than doing it directly within NGINXconfigfile like so:server {
listen 80;
server_name example.com;
location /add {
... // defer to Flask (python) app
}
Question: How can you gain the performance of NGINX's built-in HTTP routing (using NGINX's own config file to define routing), while also keeping the ease of application development by defining the HTTP routing within your web framework? | How to dynamically load HTTP routing into NGINX from your webframework?
What you're looking for is rolling restarts. Phusion Passenger Enterprise supports this: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#PassengerRollingRestarts | When I push new code from my Sinatra application to my production server, I am currently triggering a restart of Passenger by touching tmp/restart.txt, which loads the new changes. The problem is that the site is essentially down for about 10 seconds during this process.
How can I set up my server so that I can completely avoid any downtime? That is, I want the application to keep serving the old version of the code until the new code is completely loaded, and then to instantly switch to the new code.
Using shotgun or sinatra/reloader will not work here, since this is a production environment. Finally, if the answer depends on the application server, I'd be interested in how to do it with both Unicorn and Passenger. | Sinatra: Hot Code Pushes In Production?
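For reference, enabling the feature the answer above points to appears to be a single nginx directive in the Enterprise edition (sketch, per the Passenger Enterprise documentation):
http {
    # ...
    passenger_rolling_restarts on;
}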
I solved it. I had to add:
set_real_ip_from 0.0.0.0;
where the IP 0.0.0.0 is the proxy. | I'm running an Nginx 1.2.4 webserver here, and I'm behind a proxy of my hoster to prevent DDoS attacks. The downside of being behind this proxy is that I need to get the real IP information from an extra header. In PHP it works great by doing $_SERVER[HTTP_X_REAL_IP], for example.
Before I was behind this proxy of my hoster, I had a very effective way of blocking certain IPs with include /etc/nginx/block.conf, allowing/denying IPs there. But now, due to the proxy, Nginx sees all traffic coming from one IP.
I have configured Nginx with --with-http_realip_module, so I should now be able to get people's real IPs. In my nginx.conf I have added:
real_ip_header X-Forwarded-For;
include blockips.conf;
I have also tried:
real_ip_header X-Real-IP;
include blockips.conf;
In both cases, IPs listed in blockips.conf are not being blocked. Also, in my log files I do not see the real IPs; only the proxy IP shows up.
What am I doing wrong? | Using Nginx to block IP's behind proxy
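Putting the answer and question above together, the working shape is roughly the following sketch - though in practice set_real_ip_from should name the proxy's actual address rather than trusting everyone with 0.0.0.0:
http {
    # Trust the X-Forwarded-For header only from the hoster's proxy.
    set_real_ip_from 203.0.113.10;   # hypothetical proxy address
    real_ip_header  X-Forwarded-For;

    # allow/deny rules now see the client IP, not the proxy IP.
    include blockips.conf;
}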
tmuxinator lets you configure a tmux session, launched with a single command, containing any number of windows (tabs), and it executes commands in each window (like starting a server). Just configure it to load the appropriate gemset for the appropriate Rails server. https://github.com/aziz/tmuxinator | For the site I am currently working on we have two Rails 3.2 projects. One project is basically an API, and the other is a web front end. In order to develop on the web front end I need to have the API project running. I've tried using the foreman and subcontractor gems to manage this but it doesn't seem to work. Both projects run the Thin application server and have their own RVM gemsets. We also run Nginx in production.
How would you go about managing this setup for development? I want there to be one command to fire up everything, similar to how Foreman works.
Requirements:
- RVM support
- Thin for development
- One command I can run from the API application to start both applications
- Cannot use Pow (it always seems to get hung up and is incredibly slow)
- Setup should work for other developers with minimal setup (easily scriptable)
- Works on OS X
Thanks! | How do I setup multiple rails applications for development?
Okay, well, this is a very broad subject as you know, but I will try to help.
The ELB is generally not very good at burst scaling. After speaking with Amazon engineers about this, I figured out they actually won't scale the ELB on bursts because it is not possible: you need to have consistent load over time to get the ELB scaled up. Because of this, I switched to haproxy. In addition to the ELB not scaling on burst load, it also uses a CNAME for the DNS lookup, which is going to affect your performance as well. So if you are planning on having burst load often, or demanding DNS lookups, it's probably best to get off the ELB.
RDS is a black box. Well, that is not totally true, but in general I avoid RDS unless I know the backend is a simple setup that is easy to scale. Having said that, RDS does help with scaling, but I would dumb down the backend and ensure your query runs quickly. Run it on a regular MySQL instance and see if it is subsecond. In my experience, when you say the query is "optimized", that doesn't really mean there is not another way to make it more "optimized", if you catch my drift. | The setup: EC2 servers autoscaling behind an ELB, connecting to an RDS MySQL database, all static files served from CloudFront.
I'm running nginx as the web server on the EC2 servers, with keepalive set to 20 and 4 worker processes. CodeIgniter is the backend, using CodeIgniter sessions. I've been running lots of benchmarks to test the performance: siege, Apache Bench, blitz.io.
I'm testing two particular pages. The first performs extremely well; it uses CodeIgniter sessions, so it hits the database to read and update the ci_sessions table. The second page is the one I'm having trouble with: it runs a query with several joins which completes in roughly 0.4 seconds with a single user. This query is optimised, and I'm using InnoDB tables. Under Apache Bench with c10 and n1000, 100% of requests come back within 634 ms.
When I run concurrent users > 200 I start running into problems. Adding more EC2 servers doesn't help; the CPUs are around 50% utilised. The RDS database monitoring also shows CPU and memory usage below 70%, and the average DB connections are < 35.
Performance has been improved by moving to a large RDS instance and large EC2 instances, which makes me wonder whether I/O is coming into play here.
If I boot up a server outside of the ELB during load tests and hit this page, it comes back in less than a second, but through another server within the ELB it takes up to 4 or 5 seconds. This suggests that I'm not overloading the RDS.
I tried ramping up the ELB slowly with 5-minute bursts and this didn't seem to help.
I'm wondering where to look next for this problem, whether it's some kind of I/O issue or something else, because the RDS and EC2 servers don't seem pushed to their capabilities. Any suggestions or ideas where to look next would be much appreciated. | ELB, RDS mysql, EC2, NGINX where to look next for concurrency performance issues
From what I understand, you can't proxy WebSocket traffic with a plain proxy_pass. Since WebSockets are done over HTTP 1.1 connections (where the handshake and upgrade are completed), your backend needs to support HTTP 1.1, and from what I have researched, they break the HTTP 1.0 spec...
I've seen some people try to do the same thing with socket.io and HAProxy (see links). I would guess that you could try to swap socket.io out for em-websocket and expect similar results.
1: http://www.letseehere.com/reverse-proxy-web-sockets
2: HAProxy + WebSocket Disconnection | I have an EventMachine WebSocket application (using the em-websocket gem) and it runs fine. The problem is that I need to deploy it on port 80 through nginx (I can't compile it with the TCP proxy module). Is it possible to use a simple nginx proxy_pass pointing to a Thin server and have the Thin server pass the requests to my WebSocket server? | How to access event machine websockets through Thin/nginx?
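The answer above predates nginx 1.3.13, which added WebSocket proxying; on a modern nginx the upgrade can be forwarded directly, roughly like this (sketch, backend address assumed):
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    # Forward the Upgrade handshake so the WebSocket tunnel is established.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}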
The push stream module will technically do what you want it to do -- set up a URL to which you can push updates that in turn can be polled via pubsub in your client-side code.
In order to install the push stream module, you need to get the latest source of nginx, get the source for that module, and then recompile your nginx with the path to the new module source as one of the flags. See how I did that here: Recompiling nginx after using apt-get install nginx
If restarting your nginx server does not list that module among the flags for the current instance, then you didn't properly overwrite the nginx files during recompile. Make sure you include the --sbin-path flag to ensure overwriting to the correct directory.
Once you've confirmed that it is in fact installed and running in nginx, then follow the steps provided by @baba. | I'm creating a simple chat app. I have installed nginx on Ubuntu 11.10, with PHP via FastCGI. To get a feel for performance, I made a simple PHP file that sleeps 10 seconds and then reports the time. Calling this with several browser instances (different browsers, different machines), the response becomes sluggish after about 10 instances - a lot less than expected (I was hoping to not see any deterioration until the hundreds, though that would not be practical using manual browser testing).
I'm a web dev, not a sysadmin; maybe I'm out of my depth? I'm not looking for the optimal solution (searching reveals nginx should be able to handle 10k per core), but a few hundred would be great.
There's also the Nginx Push Stream Module, but I can't figure out how to install it, and it seems like yet another technology to get to grips with. Should basic out-of-the-box nginx be able to cope with my expectations, i.e. 100+ long-term connections using PHP? | how to configure nginx for long poll (and php)
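Once the module is compiled in as the answer above describes, a minimal publisher/subscriber setup looks roughly like this (a sketch based on the nginx-push-stream-module README; channel naming is illustrative):
http {
    push_stream_shared_memory_size 32M;

    server {
        # The backend POSTs updates here: /pub?id=chat
        location /pub {
            push_stream_publisher admin;
            push_stream_channels_path $arg_id;
        }
        # Browsers long-poll here: /sub/chat
        location ~ /sub/(.*) {
            push_stream_subscriber long-polling;
            push_stream_channels_path $1;
        }
    }
}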
I don't have a server to stand in for SomeServer to test out my suggestions, but I'll give it a shot anyway. If I'm wrong, then I guess you'll just have to use Flash (sample code from VK).
How about using an iframe to upload the file to SomeServer, receive the JSON response, and then use postMessage to pass the JSON response from the iframe to your site's main window? As I understand it, that is pretty much the motivation for creating postMessage in the first place.
Overall, I'm thinking of something like this or YUI's io() module, but with postMessage added to get around the same-origin policy. Or, in VK's case, using their explicit iframe support. It looks to me like you can add a method to the global VK object and then call that method from the VK origin domain using VK.callMethod(). You can use that workaround to create a function that can read the response from the hidden iframe.
So you use VK.api('photos.getUploadServer', ...) to get the POST URL. Then you use JS to insert that URL as the action for the FORM that you use to upload the file. Follow the example under "Uploading Files in an HTML Form" in the io() docs, and in the complete function use postMessage to post the JSON back to your parent window. See example and docs here. (If it doesn't work with io(), you can certainly make it work using the roll-your-own example code, if I'm right about VK.callMethod().) | I am creating a web service of scheduled posts to some social network and need help dealing with file uploads under high traffic.
Process overview:
1. User uploads files to SomeServer (not mine).
2. SomeServer then responds with a JSON string.
3. My web app should store that JSON response.
Option 1: Save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file further to SomeServer, getting the response.
Option 2: JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer, from within an iframe; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?
Option 3: nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.
Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer? | What is the best way to upload files to another domain from a browser? [closed]
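For what the asker's Option 3 would look like on the nginx side, a rough sketch (hostname hypothetical) - note that plain proxying passes SomeServer's JSON response straight back to the browser; having MyWebApp also capture it would need extra machinery beyond this:
location /fileupload {
    # Hand the multipart upload body straight to SomeServer.
    proxy_pass https://someserver.example/upload;
    client_max_body_size 50m;   # allow large uploads
}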
You cannot match arguments in rewrite rules; they may include paths only. The reasons are simple: the arguments may come in another order, and there can be additional arguments you did not take into account (e.g. keywords from Google). So your rules should be rewritten to match the path first and then check the arguments. Like this:
rewrite ^/index_([0-9]+)(.*)$ /forum-$1-1.html last;
location /index.asp {
if ($arg_boardid ~ "^([0-9]+)") {
rewrite ^ /forum-$1-1.html break;
}
rewrite ^ /index.php break;
}
location /dispbbs.asp {
rewrite ^ /thread-$arg_ID-1-1.html break;
} | rewrite ^/index\.asp /index.php last;
rewrite ^/index\.asp\?boardid=([0-9]+)$ /forum-$1-1.html last;
rewrite ^/index\.asp\?boardid=([0-9]+)(.*)$ /forum-$1-1.html last;
rewrite ^/index_([0-9]+)(.*)$ /forum-$1-1.html last;
rewrite ^/dispbbs\.asp\?boardID=([0-9]+)&ID=([0-9]+)$ /thread-$2-1-1.html last;
I have tried out the rewrite rules above, and they do not work at all.
I have referred to many posts and articles, with no help. Are there any mistakes?
V/R,
gavin
Thanks for your reply. :) I have altered my nginx config to:
rewrite ^/index\.asp$ /index.php last;
rewrite ^/index\.asp\?boardid=([0-9]+)(.*)$ /forum-$1-1.html last;
rewrite ^/index\.asp\?boardid=([0-9]+)$ /forum-$1-1.html last;
rewrite ^/dispbbs\.asp\?boardID=([0-9]+)&ID=([0-9]+)$ /thread-$2-1-1.html last;
Still not working. But I find no mistake in the rules. | nginx rewrite rule not working?
Updating Docker to the latest Apple Silicon Preview release fixed it for me. Here | I am trying to run nginx in Docker and connect it to one service in Docker as well. Another service runs without Docker and uses this nginx too. My docker-compose:
version: "3.9"
services:
nginx:
restart: always
image: nginx
volumes:
- ./config/nginx.conf:/etc/nginx/nginx.conf:ro
ports:
- "3000:80"
- "9000:81"
web:
restart: always
image: "xxxx/xxxx/web"And nginx conf at./config/nginx.conf:events {}
http{
upstream ws-api {
server host.docker.internal:8081;
}
upstream ws-web {
server web;
}
server {
server_name localhost;
listen 80;
location / {
include fastcgi_params;
fastcgi_split_path_info ^(/)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_pass ws-api;
}
}
server {
server_name localhost;
listen 81;
location / {
include fastcgi_params;
proxy_pass http://ws-web;
}
location /api {
rewrite ^([^.]*[^/])$ $1/;
rewrite ^/api(.*)$ $1 break;
include fastcgi_params;
fastcgi_split_path_info ^(/)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_pass ws-api;
}
}
}
When I open localhost:9000 I get the error message:
lookup ws-web on 192.168.65.5:53: no such host
localhost:3000 works fine. What is my mistake? | Lookup on 192.168.65.5:53: no such host
I believe that the main problem in this case is that the NGINX service had been started initially, before the needed /etc/nginx/nginx.conf was placed. In my opinion, this order makes NGINX search for the PID file defined in nginx.conf, but the PID location policy was different during the first start, so the PID file was not in the place expected by the reloading service.
However, service.running watches file: /etc/nginx/nginx.conf. But that is not enough, because the first start of the service occurs just after the package installation, with the default nginx.conf.
To sum up, the solution is to place /etc/nginx/nginx.conf with a different pid directive before the package installation (if the package is already installed, ensure that all NGINX processes are killed and start the service with the needed nginx.conf, OR [be careful, back up configs...] just fully remove the package, disabling services and removing configs). In the case of Salt Stack, put - require_in: pkg: nginx (nginx here is the name of the package installation state) into the state that manages /etc/nginx/nginx.conf. | Using Salt I applied states that install and run NGINX (1.14.0-0ubuntu1.7) as a service. The service's status is active, but systemctl reload nginx keeps failing, so an updated config cannot be applied.
Full logs:
systemd[1]: Reloaded A high performance web server and a reverse proxy server.
systemd[1]: Reloading A high performance web server and a reverse proxy server.
nginx[18095]: nginx: [error] open() "/var/run/nginx.pid" failed (2: No such file or directory)
systemd[1]: nginx.service: Control process exited, code=exited status=1
systemd[1]: Reload failed for A high performance web server and a reverse proxy server.
systemd[1]: Reloading A high performance web server and a reverse proxy server.
nginx[1209]: nginx: [error] invalid PID number "" in "/var/run/nginx.pid"
systemd[1]: nginx.service: Control process exited, code=exited status=1
systemd[1]: Reload failed for A high performance web server and a reverse proxy server. | After the first reload: "nginx: [error] open() "/var/run/nginx.pid" failed (2: No such file or directory)" |
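For concreteness, the directive the answer above refers to: nginx.conf must name the same PID file the service manager expects (the path below matches the error messages in the question):
# /etc/nginx/nginx.conf
pid /run/nginx.pid;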
There is already an nginx image with the RTMP module installed:
docker pull tiangolo/nginx-rtmp
How to use: https://hub.docker.com/r/tiangolo/nginx-rtmp/
Now for your problem: if you go to the official nginx Docker image repository at https://github.com/nginxinc/docker-nginx, line 38 is:
exec "$@"
So what does that do? More information here: https://unix.stackexchange.com/questions/466999/what-does-exec-do | I'm not sure what I'm doing is correct, so correct me if I'm mistaken. This is my Dockerfile:
FROM nginx:latest
RUN apt update && apt install build-essential libpcre3 libpcre3-dev libssl-dev libnginx-mod-rtmp -y
I'm trying to add the RTMP module to my nginx. I'm trying to run the image with the command below:
docker run --rm --name mynginx -p 80:80 -v ~/nginx/conf/nginx.conf:/etc/nginx/nginx.conf mynginx:latest
This is what I received:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
/docker-entrypoint.sh: 38: exec: nginx: not found
What is that /docker-entrypoint.sh: 38: exec: nginx: not found? How do I fix it? | docker-entrypoint exec nginx not found
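If the prebuilt image from the answer above is used, a minimal RTMP block in nginx.conf looks roughly like this (standard nginx-rtmp-module configuration; the application name is illustrative):
rtmp {
    server {
        listen 1935;           # default RTMP port
        application live {
            live on;           # accept live streams at rtmp://host/live/<key>
        }
    }
}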
You can configure your API Gateway with CORS headers, methods and URLs. You just need to edit the configuration (to add new entries), and after that you can redeploy your API Gateway configuration (changes are only visible after a deploy of the API Gateway). If you just save, it only saves your current configuration state but does not apply it; in order to apply your current configuration you have to deploy your API Gateway.
Here is the configuration. As stated by the documentation:
Every time you update an API, you must redeploy the API to an existing stage or to a new stage. Updating an API includes modifying routes, methods, integrations, authorizers, and anything else other than stage settings. | I am running into a CORS problem which says that I'm unable to load my webpage due to the following:
"Access to fetch at 'ALB Load balancer dns address:port' from origin 'ALB Load balancer dns address' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled."
After doing some troubleshooting and googling around, I am pretty confident that the issue is to do with AWS's load balancers not supporting CORS. I have read that an API gateway can be used as a proxy to apply CORS headers to the ALB address to get around this, but I have tried this approach and it doesn't seem to be resolving the issue.
Anyone have any suggestions as to what else can be done to bypass this problem?
P.S. I have tried applying CORS to my webserver (NGINX), my JavaScript code, and my Flask application, which didn't seem to make a difference when trying to access it from my application load balancer's DNS address. I have also tried contacting the mentioned ALB address via Postman and it doesn't return an error about CORS. | Application Load Balancer having problems with CORS
The problem was an IIS Express port access issue. By default, IIS Express does not allow the external network to access the port, and this access needs explicit configuration. If you are facing the same problem, you can find the code snippet and other details here: Accessing IISExpress for an asp.net core API via IP | I have a dotnet core application built on dotnet core 3.1, and when I try to deploy it on an Ubuntu 18.04 server by following the steps given in this doc, I am not able to access the app on port 80 (accessing through the public IP).
Here is the updated Nginx configuration.
The dotnet application is running on ports 5000 and 5001 (for now I didn't configure a service for it). I am getting the following error when accessing through the browser (public IP).
Am I missing any configuration? | How to deploy dotnet core application on Ubuntu server with Nginx server?
This often happens if the index.php (or any other script) you are calling does not exit correctly, for example by throwing an exception. Have a look at the error.log. | server {
listen 80;
server_name xx.cn;
index index.php index.html index.htm;
root /data/www_deploy/xx/backend/web;
location ~* /\. {
deny all;
}
location / {
try_files $uri /index.php?$args;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
client_max_body_size 512m;
}
The nginx error log shows "client closed connection while waiting for request, client: x.x.x.x, server: 0.0.0.0:80" when visiting the domain via a client browser.
On the server, wget xx.cn shows:
Connecting to xx.cn (xx.cn)|x.x.x.x|:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2019-09-13 19:48:18 ERROR 500: Internal Server Error.
I wonder how to deal with it? | nginx error -client closed connection while waiting for request, client: x.x.x.x, server: 0.0.0.0:80
Command line arguments are accepted by the Ingress controller executable. They can be set in the container spec of the nginx-ingress-controller Deployment manifest.
Annotations documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
Command line arguments documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
If you run the command kubectl describe deployment/nginx-ingress-controller --namespace you will find this snippet:
Args:
--default-backend-service=$(POD_NAMESPACE)/default-http-backend
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--annotations-prefix=nginx.ingress.kubernetes.io
These are all command line arguments of the controller, as suggested. From here you can also change --annotations-prefix=nginx.ingress.kubernetes.io; the default annotation prefix in nginx is nginx.ingress.kubernetes.io.
Note: the annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io. | I feel like I'm missing something pretty basic here, but can't find what I'm looking for.
Referring to the NGINX Ingress Controller documentation regarding command line arguments: how exactly would you use these? Are you calling a command on the nginx-ingress-controller pod with these arguments? If so, what is the command name? Can you provide an example? | What is the command to execute command line arguments with NGINX Ingress Controller?
On GCP GKE the GCE ingress controller is enabled by default and will always lead to a new LB for any ingress definition, even if the ingress class is specified: https://github.com/kubernetes/ingress-nginx/issues/3703
So to fix it, we should remove the GCE ingress controller from the cluster as mentioned in https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller | I have a set of services that I want to expose via an ingress load balancer. I selected nginx as the ingress because of the ability to force HTTP to HTTPS redirects.
Having an ingress config like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: api-https
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: true
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.org/ssl-services: "api,spa"
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- api.some.com
- www.some.com
secretName: secret
rules:
- host: api.some.com
http:
paths:
- path: /
backend:
serviceName: api
servicePort: 8080
- host: www.some.com
http:
paths:
- path: /
backend:
serviceName: spa
servicePort: 8081
GKE creates the nginx ingress load balancer, but also another load balancer with backends and everything, as if GCP rather than nginx had been selected as the ingress. The screenshot below shows in red the two unexpected LBs and in blue the two nginx ingress LBs, one for our QA and prod environments respectively.
Output from kubectl get services:
xyz@cloudshell:~ (xyz)$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api NodePort 1.2.3.4 8080:32332/TCP,4433:31866/TCP 10d
nginx-ingress-controller LoadBalancer 1.2.6.9 12.13.14.15 80:32321/TCP,443:32514/TCP 2d
nginx-ingress-default-backend ClusterIP 1.2.7.10 80/TCP 2d
spa NodePort 1.2.8.11 8082:31847/TCP,4435:31116/TCP 6d
Screenshot from the GCP GKE services view of the ingress with the wrong info. Is this expected? Did I miss any configuration to prevent this extra load balancer from being created? | gke nginx ingress create additional load balancer
Step 1
To edit the nginx configuration on AWS Elastic Beanstalk, you need to add a configuration file under .ebextensions: create the folder .ebextensions/nginx/ and in it a proxy.config file:
files:
/etc/nginx/conf.d/proxy.conf:
mode: "000644"
owner: root
group: root
content: |
underscores_in_headers on;
It will start accepting headers with underscores.
Step 2
In case it is still not accepting headers with underscores, access your instance with SSH and run the following command:
sudo service nginx reload
Hope it helps. | I'm new to AWS EBS. I'm trying to modify /etc/nginx/nginx.conf. I just wanted to add the line underscores_in_headers on; inside http { }, and I'm able to change it by accessing the instance by IP using PuTTY. But the problem is that when auto scaling scales the environment with a new IP, the line will be removed from the new instance.
So I want any newly deployed snapshot/instance to be the same as the main server, or you could say with the same configuration. I tried to solve my issue with this link. | How to customize nginx config on Node.js Elastic Beanstalk
nginx uses the Host header to determine which server block to use to process a request. When the request passes through the proxy_pass http://load; statement, the Host header is set to the value load by default.
To make nginx choose the server block containing the server_name loadapi.example.com; statement, it either needs to be the default_server, or include the name load in its server_name, or you set the Host header using:
proxy_set_header Host loadapi.example.com;
Of course, using upstream for load balancing means that both servers receive the same value for the Host header, and must both respond correctly to it. See this document for more. | I am configuring Nginx load balancing with the Nginx upstream module, configuration as follows:
upstream load {
server loadapi.example.com;
server loadapi.anotherdomain.com down;
}
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://load;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 80;
server_name loadapi.example.com;
root /disk/projects/load/loadapi;
index index.html index.htm index.shtml index.php;
...
...
error_page 404 /404.html;
}
Notice that api.example.com and loadapi.example.com are on the same server; loadapi.anotherdomain.com resolves to another server which provides the same service. Everything works fine with loadapi.anotherdomain.com, which is on another server. But when I use loadapi.example.com as the backend, it seems that Nginx cannot handle it correctly. I can get my service up and running on loadapi.example.com, but it is not working through the upstream (it looks like Nginx cannot resolve the subdomain name correctly). Any advice? Thanks in advance. | nginx upstream subdomain on the same server
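A sketch of the fix described in the answer above, applied to the asker's front-end server block - pinning the Host header so the proxied request matches the intended server block (note that both upstream servers will then receive the same Host value):
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://load;
        # Without this, upstream requests carry "Host: load" and fall
        # through to the default server instead of loadapi.example.com.
        proxy_set_header Host loadapi.example.com;
    }
}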
The solution was to use the tls library:
const tls = require('tls');
const options = {
port: 443,
host: 'myhost'
};
const socket = tls.connect(options, () => {
console.log('connected to', socket.remoteAddress);
socket.write('GET /endpoint?term=one two HTTP/1.1\r\n' +
'Host: myhost\r\n' +
'\r\n');
socket.setEncoding('utf8');
socket.on('data', data => {
console.log('SOCKET RESPONSE:', data);
}).on('end', () => {
console.log('Connection end');
}).on('close', (had_error) => {
console.log('Connection closed. Had error:', had_error);
}).on('error', error => {
console.log('ERROR:', error);
});
}); | I need to simulate malformed HTTP requests to my server for testing purposes - I have a clientError handler in my Node.js server and want to create a functional test for it. For example:
curl "https://myhost/endpoint?term=one two"
will trigger this handler (due to the unencoded space between the words one and two). I'm struggling to find a way to do a similar request in Node. As far as I know, higher-level request libraries all do encoding automatically, so I couldn't use them. Using the built-in net library instead, I managed to get this far:
const net = require('net');
const socket = new net.Socket();
const options = {
port: 443,
host: 'myhost'
};
socket.connect(options, () => {
console.log('connected to', socket.remoteAddress);
socket.write('GET /endpoint?term=one two HTTP/1.1\r\n' +
'Host: myhost\r\n' +
'\r\n');
socket.on('data', (data) => {
console.log('SOCKET RESPONSE: ' + data);
}).on('end', () => {
console.log('SOCKET ENDED');
});
The problem is, calls to my Node.js service are proxied through Nginx, so running the above code results in a 400 error from Nginx:
The plain HTTP request was sent to HTTPS port
I see using the --verbose flag that curl is smart enough to perform the TLS handshake. Any ideas how to update my code to achieve this? | Simulate invalid HTTP request
If the device_id field is supposed to be unique in the table, then add a unique index for it. Then you'll be able to run a MySQL ON DUPLICATE KEY query, like:
INSERT INTO users (...) VALUES(:device_id, :ip, ...)
ON DUPLICATE KEY UPDATE ip = values(ip), ...
I don't know if it's possible to run such a query with Slim-PDO, but at least you can go with generic insert and update queries, using exceptions, as shown in my article:
$this->_device_id = $device_id;
$this->_ip = $ip;
try {
$this->createUser();
} catch (PDOException $e) {
$search = "!Integrity constraint violation: 1062 Duplicate entry\ .*? for key 'device_id'!";
if (preg_match($search, $e->getMessage())) {
$this->updateInfo();
} else {
throw $e;
}
}
It's very important to update on that certain error only, and re-throw it otherwise. | I am faced with an unknown problem. I created a PHP API (Slim framework + Slim PDO) connected to MySQL. I use Nginx as the HTTP server. The API uses a "device-id" header to recognize the client (an Android application). The concern is that a recent update of the Android application makes it send 2 asynchronous requests to the API at launch; as a result, if the user is unknown I find myself with two entries in the users table carrying the same device-id.
In a middleware:
$user = new User($device_id, $ip);
In the user class:
function __construct($device_id, $ip)
{
$this->_device_id = $device_id;
$this->_ip = $ip;
if ($this->isExists())
$this->updateInfo();
else
$this->createUser();
}
private function isExists()
{
global $db_core;
$selectStatement = $db_core->select(array('id', 'current_group'))
->from('users')
->where('device_id', '=', $this->_device_id);
$stmt = $selectStatement->execute();
if ($stmt->rowCount() > 0)
{
$u = $stmt->fetch();
$this->_id = $u['id'];
$this->_current_group = $u['current_group'];
return true;
}
return false;
}
The createUser() function creates an entry in the users table with the device-id as well as other information such as the date and so on.
User lists
Thank you in advance for your help. | Several requests at the same time bad SQL results
deny all was not working for me because the traffic was being forwarded internally through a proxy. Here is what ended up working for me:
upstream backend_solr {
ip_hash;
server ip_address:port;
}
server {
listen 80;
server_name www.example.com;
index /example/admin.html;
charset utf-8;
access_log /var/log/nginx/example_access.log main;
location / {
# **
set $allow false;
if ($http_x_forwarded_for ~ " 12\.22\.22\.22?$")-public ip {
set $allow true;
}
set $allow false;
if ($http_x_forwarded_for ~ " ?11\.123\.123\.123?$")- proxy ip {
set $allow true;
}
if ($allow = false) {
return 403 ;
}
# **
proxy_pass http://backend_solr-01/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ /favicon\.ico {
root html;
}
location ~ /\. {
deny all;
}
} | I created a new conf file to block access from all public IPs and give only one public IP address (the office public IP) access, but when I try to access it, it shows "403 Forbidden nginx".
upstream backend_solr {
ip_hash;
server ip_address:port;
}
server {
listen 80;
server_name www.example.com;
index /example/admin.html;
charset utf-8;
access_log /var/log/nginx/example_access.log main;
location / {
allow **office_public_ip**;
deny all;
proxy_pass http://backend_solr-01/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ /favicon\.ico {
root html;
}
location ~ /\. {
deny all;
}
}
But in the logs it shows the request coming from the public IP, yet forbidden:
IP_Address - - [31/Jul/2017:12:43:05 +0800] "Get /example/admin.html HTTP/1.0" www.example.com "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" "my_office _IP" "-" "-" "-" 403 564 0.000 - - - | Nginx configuration for allow ip is not working deny all is working fine
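An alternative to the if-based workaround in the answer above is the realip module, assuming nginx was built with --with-http_realip_module - allow/deny then evaluate against the forwarded client address (addresses follow the asker's examples):
location / {
    # Trust X-Forwarded-For only when the request comes from the internal proxy.
    set_real_ip_from 11.123.123.123;
    real_ip_header   X-Forwarded-For;

    allow 12.22.22.22;   # office public IP
    deny  all;

    proxy_pass http://backend_solr;
}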
Using _local preference in this scenario is fine, since you have two nodes and one replica for your indices, which means each node has exactly the same data.
Preference _local will run the query you are sending to a node on that particular node's data. If that node doesn't have the data that needs to be queried, it will send the requests to other nodes as well.
Also, when querying an Elasticsearch cluster you need to send your search requests either via a client node, or via a load balancer, or your code needs to target BOTH nodes. Basically you want all your nodes to perform the "gatherer" job. This is important because the node that receives the search request is the only one that gathers the results from all other nodes, performs the final search and aggregations, and sends the results back to the user. So the node that gets the request is the one doing more work.
In a two-node scenario with preference _local, load balancing the queries is even more important, because the node that gets the request will always perform all the work while the other one stays idle. | I am currently tuning Elasticsearch for a search API. The specification is:
- 2 nodes, single cluster, with a single index, on a VM (2 cores, 2 GB of RAM)
- 5 shards
- 1 replica
- load balanced using nginx
When I test it using JMeter through Nginx I get a throughput of about ~220 req/s, but when I specify ?preference=_local it can get to ~320 req/s. That's a very good performance improvement. What I want to ask is:
- What does ?preference=_local actually do, and how can it improve the performance of the query?
- What's the tradeoff of using ?preference=_local?
Query:
{
"query": {
"multi_match": {
"query": "trump",
"type": "most_fields",
"operator": "and",
"fields": ["title", "content"]
}
},
"sort": {
"published_at": {
"order": "desc"
},
"_score": {
"order": "desc"
}
},
"from": 0,
"size": 20,
"min_score": 1
} | Using preference _local in elasticsearch |
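For the "target BOTH nodes" point in the answer above, a sketch of the nginx side of such a setup (node addresses hypothetical):
upstream elasticsearch {
    # Round-robin across both data nodes so neither one
    # always does the gather/aggregate work.
    server 10.0.0.1:9200;
    server 10.0.0.2:9200;
}
server {
    listen 8080;
    location / {
        proxy_pass http://elasticsearch;
    }
}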