Response
stringlengths
8
2k
Instruction
stringlengths
18
2k
Prompt
stringlengths
14
160
Your first capture is probably too greedy, you should limit it by using:rewrite ^(/[^/]+/[^/]+)/(.*)$ $1/index.php?$2 last;Seethis useful resourceon regular expressions.
At work we have a single staging server with a staging domain, something likehttps://staging.example.com. We recently decided to switch from Apache to NGINX on a new server and we're having issues with our Laravel routing.All of our laravel apps sit in sub-directories on the staging server, like so.https://staging.example.com/app1/publichttps://staging.example.com/app2/publicI've tried configuring the NGINX conf file as specified in theLaravel docsbut get a 404 when accessing any 2nd level route, i.e.https://staging.example.com/app1/public/a/bUsing something like the below config, I can access all the routes in an app.location @laravel { rewrite /app1/public/(.*)$ /app1/public/index.php?$1; } location / { try_files $uri $uri/ @laravel; }However, we have many apps hosted on this server and we don't want to have to update an NGINX conf file every time we want to add an app to the server.Is there a way of constructing a rewrite to apply to any sub-directory and keep Laravel's routing system working?Note: I've also tried this rewriterewrite (.*)/(.*)$ $1/index.php?$2and that doesn't work for 2nd level routes.
Multiple Laravel Projects on a single domain with NGINX
Possible duplicate ofWhere to add .ebextensions in a WAR?, though since you are not using war packaging you can use Procfile-basedconfigurationand archive your jar and .ebextensions into additional zip layer. Then your zip file structure should be looking like this:your_app.zip | |_.ebextensions | |_ nginx-timeout.config | |_ your_app.jar |_ ProcfileAnd your Procfile should contain your jar file launching instructions$ cat Procfile web: java -jar your_app.jar
So I am using AWS Elastic Beanstalk to host my Java Spring app, and there are certain requests which take more than 60 seconds to complete. I wanted to raise the timeout cap so these could complete, so I began to followthistutorial.I succeeded in changing the Load Balancer timeout in the ELB console, but I am having trouble changing settings for the nginx proxy. The tutorial suggests to create a file called.ebextensions/nginx-timeout.configwhere.ebextensionsis in the "root of my project." The tutorial is assuming that we are using Beanstalk with Docker, which I am not, so I foundthislink which suggests to fill the contents ofnginx-timeout.configwith these contents:files: "/tmp/proxy.conf": mode: "000644" owner: root group: root content: | proxy_send_timeout 1200; proxy_read_timeout 1200; send_timeout 1200; container_commands: 00-add-config: command: cat /tmp/proxy.conf >> /var/elasticbeanstalk/staging/nginx/conf.d/elasticbeanstalk/00_application.conf 01-restart-nginx: command: service nginx restartOne of my problems is that I do not know exactly where the root of my application is. I am using Maven with Java Spring Boot, so my structure is as follows:I am not sure whether I should place.ebextensionsin the base directory where mypom.xmlfile is, or somewhere else. Also the method in which I am deploying this application is using maven to build a jar, and then uploading the jar, I'm not sure if this changes anything.Any advice on this problem? I'm currently also trying to see how I might ssh into my instance to possibly change the configuration of the nginx server there, but I am not sure if that will be possible.
AWS Elastic Beanstalk - Configuring my nginx settings to increase timeout for Java Spring maven app
Assuming you have generated a private key and a certificate request for your user and signed it with your client CA. You need to get the private key and the signed certificate into the list of personal certificates in the browser.I have found that the best way is to create a password protected PKCS#12 (as some browsers insist on password protection). I use the following OpenSSL command:cat user.key user.crt | openssl pkcs12 -export -out user.p12
I´m trying to use nginx as a reverse proxy to an internal webserver running Tomcat, which hosts a front-end to our ERP system.It is already working fine: I can perfectly connect to the nginx server (which is locked up on our network, different VLAN, firewall, etc etc etc) and then reverse proxy to my ERP server.However, I want do add an extra layer of protection, by requiring users to have a digital certificate on their computer, so they can access the first (nginx) server. The certificate is not used/necessary to the back-end server.I´ve been through this tutorialhttp://nategood.com/client-side-certificate-authentication-in-ngiwhich allowed me to generate my self-signed certificates and everything else.When usingssl_verify_client optionalon nginx configuration, I can connect normally to my back-end server, but no certificate is asked/required.When I switch it tossl_verify_client on, all access are then blocked by a400 Bad Request No required SSL certificate was sentNo matter which browser I am using (Chrome, IE, Edge, Firefox). Of course I´ve put all certificates/chain on my client computer, but no certificate is asked on any browsers. What I am missing?Here is my full nginx config:server { listen 443; ssl on; server_name 103vportal; ssl_password_file /etc/nginx/certs/senha.txt; ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; ssl_client_certificate /etc/nginx/certs/ca.crt; ssl_verify_client on; location / { proxy_pass http://10.3.0.244:16030; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 300; proxy_send_timeout 300; } }
nginx - reverse proxy certificate authentication
The error you're receiving,ERR_NAME_NOT_RESOLVED, is a DNS error. Couple that with the fact that GitLab works on the base domain, and I'd say that you probably forgot to setup the DNS record for your subdomain.
This is my /etc/gitlab.rb file :I've ransudo gitlab-ctl reconfigureandsudo gitlab-ctl restart. Yet, when I go tohttp://gitlab.hop-child.comI get a ERR_NAME_NOT_RESOLVED. However,http://hop-child.combrings me to GitLab...What am I doing wrong? Restarting my whole server isn't really an option.
Gitlab omnibus external url not working
The exporting server is something that you deploy on your server side (i.e. you have to deploy a server to do exporting for you). However, if you only need to export PNG and SVG, then you can do with client-side only solution as per their docs.http://www.highcharts.com/docs/export-module/client-side-exportIf their server seems to have a limit on how big requests it will serve. Means that you have to deploy your own server and configure it (its has to do with actual http server configuration I think) to accept larger requests. Not much you can do on the client, but to limit the amount of data you display on the chart.P.S. it always directs you to highcharts export server because export functionality by default users their server.
I am having trouble with the exportation of a certain graph. I have made a JSFiddle (http://jsfiddle.net/oy73rgc4/3/) to show with what I am working. This example doesn't contain all the data points with are used, because then my browser (Chrome) crashes. In total i am using about 80K of data points. The HighCharts is displayed like normal and doesn't cause any problems. The problem comes when I want to export the chart!When I export the chart, doesn't matter if it's PNG/JPG/PDF it always directs tohttps://export.highcharts.com/with the message413 Request Entity Too Large. I have tried some google'ingoffline-export.jsOther people who have experienced this problem had tried to use the JS offline-export. I tried this, but it didn't have any effect.. It just removed the export button in the chart.https://github.com/highcharts/highcharts/issues/4614Data groupingSome suggested to others to use HighCharts Data grouping. I checked the API but I find that there is too little explanation about this. I think that I can't implement this from scratch and I am unable to find an examplehttp://api.highcharts.com/highstock/plotOptions.series.dataGroupingcustom exporting server with increased size limit in nginx.confI also found that this option could help. I tried to find instructions, but I don't understand how I need to implement this in my web application (Laravel 5.2)http://www.highcharts.com/docs/export-module/setting-up-the-serverDoes someone have a new suggestion for me on how I could solve this problem? Or could someone help me out with one of the options which I have suggested?
413 Request Entity Too Large HighCharts
Did you figure it out yet? A link to your cdn or the website that was to use the cdn would be helpful.It could be that Chrome does a preflight OPTIONS request to your distribution, and that your origin doesn't have the header set for that. You can check this withcurl -I -X OPTIONS http://cdn.example.com/ThemeIcons.woff?387osh
I'm using a nginx as my host and a Amazon Cloudfront as my cdn.my header from my cdn has the Access-Control-Allow-Origin set to *From my site:curl -I http://example.com/ThemeIcons.woff?387osh HTTP/1.1 200 OK Server: nginx Date: Thu, 08 Dec 2016 03:01:23 GMT Content-Type: application/font-woff Content-Length: 18068 Last-Modified: Sat, 25 Jul 2015 05:35:17 GMT Connection: keep-alive ETag: "55b32015-4694" Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 Access-Control-Allow-Origin: * Accept-Ranges: bytesFrom my cdn site:curl -I http://cdn.example.com/ThemeIcons.woff?387osh HTTP/1.1 200 OK Content-Type: application/font-woff Content-Length: 18068 Connection: keep-alive Server: nginx Date: Thu, 08 Dec 2016 03:01:35 GMT Last-Modified: Sat, 25 Jul 2015 05:35:17 GMT ETag: "55b32015-4694" Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 Access-Control-Allow-Origin: * Accept-Ranges: bytes X-Cache: Miss from cloudfront Via: 1.1 251651f117f01cad42a0ea283b85cb0a.cloudfront.net (CloudFront) X-Amz-Cf-Id: iFxwbrqD8DWkxlnqsvFMHnO6M4BLU5bywk5MsicXZ00whNzV32U_Rw==but still the message in chrome console showed,Access to Font at 'http://cdn.example.com/ThemeIcons.woff?387osh' from origin 'http://cdn.example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://example.com' is therefore not allowed access.I have researched this for 2 days, is there anyone can help me out ? Thanks!
Cloudfront CDN Font blocked by CORS policy blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource
Finally we've found out, that there was another expire-statement in the vhost-config. Reduce both to one single statement solved our issue
Analyzing an online shop (Shopware) withGooglePageSpeedresults in many "expiration not specified"-Lines on every image.I am wondering about because the webserver (nginx) addsLast-Modified-Timestamps andETAGheaders to the response of all images, resulting in an expected 304-Response on the second request.Is ETAG/LastModified not supported by Google Page Speed?I will provide the appropriate parts of the nginx-configuration:location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ { expires 1M; access_log off; add_header Cache-Control "public"; } ## All static files will be served directly. location ~* ^.+\.(?:css|cur|js|jpe?g|gif|ico|png|html|xml)$ { ## Defining rewrite rules rewrite files/documents/.* /engine last; rewrite backend/media/(.*) /media/$1 last; expires 1w; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; access_log off; # The directive enables or disables messages in error_log about files not found on disk. log_not_found off; tcp_nodelay off; ## Set the OS file cache. open_file_cache max=3000 inactive=120s; open_file_cache_valid 45s; open_file_cache_min_uses 2; open_file_cache_errors off; ## Fallback to shopware ## comment in if needed #try_files $uri @shopware; }Is there anythong wrong or missing?
How to solve Google Page Speed: "expiration not specified"
I figured out a workaround (Don't know if it will qualify as an answer). I wrote the background process as a job in database and used a cronjob to check if I have any job pending and if there are any the cron will start a background process for that job and will exit.The cron will run every minute so that there is not much delay. This helped in improved performance as it helped me execute heavy tasks like this to run separate from main application.
I am starting a process using python's multiprocessing module. The process is invoked by a post request sent in a django project. When I use development server (python manage.py runserver), the post request takes no time to start the process and finishes immediately.I deployed the project on production using nginx and uwsgi.Now when i send the same post request, it takes around 5-7 minutes to complete that request. It only happens with those post requests where I am starting a process. Other post requests work fine.What could be reason for this delay? And How can I solve this?
python process takes time to start in django project running on nginx and uwsgi
proxy_pass http://unix:/root/myproject/myproject.sock;The socket is in the superuser's home folder. That's pretty much inaccessible to all other users including your nginx users. Please more the socket to a different location. /var/log/gunicorn/ is a good place.Also do i see you running gunicorn as root?. Not recommended.setuid rootPlease use some other user here.
I am working at mydjangoproject withnginxandgunicorn, as it said here:https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04My site works locally, but when I startnginxandgunicornserver I had502 Bad Gateway error.OS isUBUNTU 14.04I'm trying to make my project working, and reinstall everything as root (I know its bad) - the same mistake.Here is my "error.log":2016/04/20 20:15:10 [crit] 10119#0: *1 connect() tounix:/root/myproject/myproject.sock failed (13: Permission denied) while connecting to upstream, client: 46.164.23When i run comand "nginx":nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] still could not bind()My gunicorn.confdescription "Gunicorn application server handling myproject" start on runlevel [2345] stop on runlevel [!2345] respawn setuid root setgid www-data chdir /root/myproject exec myprojectenv/bin/gunicorn --workers 3 --bind unix:/root/myproject/myproject.sock myproject.wsgi:applicationThats my "/etc/nginx/sites-available/myproject"server { listen 80; server_name www.mysite.ru; error_log /nginx_error.log; location = /favicon.ico { access_log off; log_not_found off; } location /static/ { root /root/myproject; } location / { include proxy_params; proxy_pass http://unix:/root/myproject/myproject.sock; }}I will appreciate your help very much!!!
Nginx, django, gunicorn, ubuntu 14.04 (13: Permission denied) while connecting to upstream
I have not done any measurement. But in theory, proxygen server would be more performant because it runs in the same process as the php worker threads, thus avoiding some overhead inter-process communication. Proxygen server is used at Facebook and some efforts are made to make it more reliable, e.g., protection mechanisms when the JIT compiler isn't fully warmed up. However, these should not matter much for other users. If you already have your favorite apache/nginx setup and do not want to spend the time to tune settings for another http server, use FastCGI.
HHVM has a built in Server, Proxygen. You can run HHVM with the Proxygen server or run it in FastCGI mode, using another server such as nginx or apache to handle web requests.I cannot find any benchmarks or authoritative source that provides any indication of which of the two option performs best. Obviously I could provision two systems an manually test various loads under different concurrency combinations and put together a benchmark, but I'd rather avoid the work if someone has already done such a comparison.Does anyone know in general which is the better option from a sheer performance standpoint?
Is it more performant to use Proxygen or NGINX + FastCGI local socket with HHVM?
Thenginx documentationspecifies that thegzipoption is allowed in the following contextsContext: http, server, location, if in locationThis means you need to wrap thegzipswitch inside alocationblock.gzip off; server { listen 80; server_name localhost; valid_referers server_names; location / { root /var/www/; index index.html index.htm; if ($invalid_referer = "") { gzip on; } } }
In order to mitigate against the BREACH attack, I would like to selectively enable gzip only when$http_referer's hostname matches one of my server names.How would I do this? I tried usingvalid_referers server_names;, but it seems like nginx doesn't allowgzip oninside if statements. When I include this in my conf:valid_referers server_names; if ($invalid_referer = "") { gzip on; gzip_vary on; }I get[emerg] "gzip" directive is not allowed here. The must be a way to selectively enable gzip.
Nginx: Selectively enable compression based on referrer hostname
The new$request_idvariable now is available since Nginx v1.11.0.
I found thissbange's answerabout unique request ID in nginx upstreams. Here is a quote:location / { proxy_pass http://upstream; proxy_set_header X-Request-Id $pid-$msec-$remote_addr-$request_length; }It's looks nice, but it generates long and not very useful string. It would be better with a short hash (md5 for example).Then I found this third-partynginx module ngx_http_set_hash. And of course, I can use a perl_modules for md5 functions. But, I trying to find some out-of-box, just with Nginx.Can Nginx make hash value of some string or maybe someone know better method for generating short unique request id?
Can Nginx make hash sum from some strings (unique Request Id)?
I have had success with the URI changing on the backend while processing with the following.location /apps/phpauthentication/1 { rewrite ^(.*)//(.*)$ /$1/$2 permanent; ##First matches double slash and rewrites try_files $uri /app_dev.php$is_args$args; ##URI is now /apps/1/signup if (!-e $request_filename) { rewrite ^/(.*)$ /app_dev.php last; ## Matches all request that pass from above }Now the URL in the browser never changes but the backend server now appears to have a valid path.
I have a double slash in my URL (which is not ideal).So my app is hit at//signup.Error message:Uncaught PHP Exception Symfony\Component\HttpKernel\Exception\NotFoundHttpException: "No route found for "GET //signin""Anyway to change it to just/signup?I have tried the below in the first location block (which is the one catching the proxy).Maybe something along the lines of the...location /apps/phpauthentication/1 { rewrite ^\//(.*)/$ /$1 break; try_files $uri /app_dev.php$is_args$args; if (!-e $request_filename) { rewrite ^/(.*)$ /app_dev.php last; } }Full config:server { listen 80; server_name localhost; root /srv/http/web; index app_dev.php index.php index.html; location /apps/phpauthentication/1 { rewrite ^\//(.*)/$ /$uri permanent; try_files $uri /app_dev.php$is_args$args; if (!-e $request_filename) { rewrite ^/(.*)$ /app_dev.php last; } } location ~ ^/(app_dev|config)\.php(/|$) { fastcgi_pass app:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; fastcgi_param APP_ENV dev; include fastcgi_params; } location ~ \.php$ { fastcgi_pass app:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param APP_ENV dev; include fastcgi_params; } }Thanks :)
Remove leading slash NGINX
AsDaniel Rosemanindicated, Amazon doesn't seem to allow me to do what I was trying to do here without additional configuration.I took the advice and configured Nginx and now I am able to usegunicorn MyApp.wsgi:application --bind=172.31.21.65:8000to spin it up.
Using AWS and Gunicorn, my Django site is accessible and fully functional if I spin up the Django's built-in server, but I can't access it through Gunicorn.If I try:gunicorn MyApp.wsgiIt seems to start up:[2015-11-18 17:53:30 +0000] [18752] [INFO] Starting gunicorn 19.3.0 [2015-11-18 17:53:30 +0000] [18752] [INFO] Listening at: http://0.0.0.0:8001 (18752) [2015-11-18 17:53:30 +0000] [18752] [INFO] Using worker: sync [2015-11-18 17:53:30 +0000] [18755] [INFO] Booting worker with pid: 18755But the browser just returns the Nginx splash page at the basic URL/IP address and times out when attempting to visit:8001or:8001.I get the same result if I run:gunicorn MyApp.wsgi:application --bind 0.0.0.0:8001EDIT: If I try the same command with the specific URL in place of0.0.0.0it outputs the following:[2015-11-18 18:24:17 +0000] [18902] [INFO] Starting gunicorn 19.3.0 [2015-11-18 18:24:17 +0000] [18902] [ERROR] Retrying in 1 second. [2015-11-18 18:24:18 +0000] [18902] [ERROR] Retrying in 1 second. [2015-11-18 18:24:19 +0000] [18902] [ERROR] Retrying in 1 second. [2015-11-18 18:24:20 +0000] [18902] [ERROR] Retrying in 1 second. [2015-11-18 18:24:21 +0000] [18902] [ERROR] Retrying in 1 second. [2015-11-18 18:24:22 +0000] [18902] [ERROR] Can't connect to ('myurl.xyz', 8001)If I use the server's IP address, I see:[2015-11-18 18:26:22 +0000] [18911] [INFO] Starting gunicorn 19.3.0 [2015-11-18 18:26:22 +0000] [18911] [ERROR] Invalid address: ('', 8001)I spent 3 days unsuccessfully trying to get uWSGI to work and I'm on day 2 with Gunicorn and I can't seem to get past this point with either. None of the docs or tutorials specify what to do if this first step doesn't work. I'm very new at this, perhaps there is something simple that I'm overlooking that everyone else just assumes would be taken care of?All relevant IP addresses and urls are set correctly in theALLOWED_HOSTSportion of Django's settings module.
Gunicorn - Can't Access Django Project (Browser Times Out)
The proxy protocol specificationsays:The receiver MUST be configured to only receive the protocol described in this specification and MUST not try to guess whether the protocol header is present or not. This means that the protocol explicitly prevents port sharing between public and private access. Otherwise it would open a major security breach by allowing untrusted parties to spoof their connection addresses.I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.Another couple of options spring to mind too:Simply point your health checks at the thing that's adding the header (i.e. ELB?), rather than directly at your Nginx instance. Not sure if this is possible with Elastic Beanstalk, it's not a service I use.Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance a HAProxy running on the same machine as your Nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.
I need to use http health checks on a Elastic Beanstalk application, with proxy protocol turned on. That is currently not possible, and the health check fails with a an error -->*58 broken header while reading PROXY protocolI figured I have two optionsPerform the health check on another port, and setup nginx to listen to http requests on that port and proxy to my app.If it is possible to catch thebroken headererrors, or detect regular http requests in theproxy_protocolserver block, then redirect those requests to a port that listens to http.I would prefer the latter(#2), if possible. So is there any way to do this?Ideally, I would prefer not to have to do any of this. Afeature requestto fix this has been submitted to AWS, but it has no ETA.
Nginx catch "broken header" when listening to proxy_protocol
Give the following setting a try, sounds funny but just copy past the entire block instead of typing them in (believe you me that helped a couple of colleague of mine)xdebug.remote_host=10.0.2.2 xdebug.remote_enable=1 xdebug.remote_port=9000 xdebug.remote_autostart=1 xdebug.show_exception_trace=0 xdebug.show_local_vars=0 xdebug.var_display_max_data=10000 xdebug.var_display_max_depth=20 xdebug.max_nesting_level=200p.s.I assume you have the extinction file is exists in your guest machine (Virtual Machine) since you said it will stops at the breakpoint for split of soundAlso i assume your browser is sending the correct "PHPSTORM"
I've followed multiple tutorials to set up XDebug with PhpStorm but it seems likeI'm not lucky with it at all. No matter what I try, it's always stuck withWaiting for incoming connection with ide key 'PHPSTORM'But when I reload the page with CTRL + R I can see for asplit secondconnected.However, then it switches back to "Waiting.."I've tried the XDebug Chrome Plugin and the PHPStorm XDebug Generator Bookmarksaswell as enabling"Start listening for PHP Debug Connections"in PHPStorm.I'm Using NginX with php5-fpm and triedtcpdump 9089.As said, for a split second it dumps it. But then it's lost again..Can someone please help me?My php.ini config :[xdebug] zend_extension="/usr/lib/php5/20121212/xdebug.so" xdebug.remote_enable=1 xdebug.remote_port=9089 xdebug.remote_connect_back=1 xdebug.profiler_enable=1 xdebug.profilter_output_dir="/tmp/xdebug.log" xdebug.idekey=PHPSTORMMy PHPStorm Settings :[EDIT: I have NO IDEA why, but removingxdebug.remote_connect_back=1and replacing it withxdebug.remote_host=my.ip.add.essmade it work?!As I've read the docs I had the understanding that the first setting is for implicit requestswhile the later one is for an explicit ip request..
PhpStorm and Remote XDebug not working
Normally, one would set up Nginx (or some other general web server) on the "main" port, and then configure it to forward certain requests to your application server (in this case, Flask) on a secondary port which is invisible/unknown to the client browser. Flask would provide the result to Nginx which would then forward the result to the user.This is called areverse-proxy, and Nginx iswidely considereda good choice for this setup. In this way, all files are served to the client by Nginx, so the client doesn't notice that some of them actually come from your application server.This is good from an architectural standpoint, because it isolates your webapp (somewhat) from the client, and allows it to conserve resources, e.g. by not serving static files and by having Nginx cache some of the webapp's results when it makes sense to do so.If you're doing development, this may seem like a lot of overhead; but for production it makes a lot more sense. However, it is a good idea to have your dev environment mimic your prod environment as closely as possible.
I’m having trouble serving static files (image assets, etc.) for a small game I’m working on inPhaser. I’m usingflask-socketioon the server (and socket.io on the client-side) for networking which is why I’m trying to get this working under Flask. As far as I can tell, I must use Flask to serve the static resources because otherwise I run into the problem of theSame-origin policy.Indeed, I tried serving the assets with nginx on a different port but I got this message in my browser console (Safari in this case):SecurityError: DOM Exception 18: An attempt was made to break through the security policy of the user agent.I looked in the Flask documentation on how to serve static files and it said to use “url_for.” However, that only works for HTML template files. I’m trying to load the static resources inside my javascript code using the Phaser engine like so (just for example):this.load.image('player', 'assets/player.png’); //this.load.image('player’, url);where I cannot obviously use ‘url_for’ since it’s not a template file but javascript code.So my question is, how do I serve my static resources so that I don’t violate the same-origin policy?Is there another secure way to serve static resources in Flask besides using ‘url_for’?Or should I be using nginx as areverse proxy? In the flask-socketio documentation I found this nginx configuration snippet:Flask-SocketIO documentation(please scroll down to where it says "Using nginx as a WebSocket Reverse Proxy”)Regarding #2, I don’t quite understand how that should work. If I should be doing #2, can someone kindly explain how I should configure nginx if Flask is listening on port 5000? Where in that snippet do I configure the path to my static assets on the filesystem? And in my javascript code, what url path do I use to reference the assets?
Serving static resources with Flask - running afoul of Same-origin policy
The links are propably not working when installed on Heroku. Use the net panel of firebug or your browser developer tools to see which absolute URL the browser is trying to load. Most likely it is invalid and a HTML 404 error page is returned (starting with a '<')
When I click on the index.html file on my local computer, the page is correctly loaded in my browser. However, when I open the site on herokuapp, I get "SyntaxError: expected expression, got '<'" errors for all of the .js files I link to in the html.I would appreciate any help on how to fix this.Thanks!
SyntaxError: expected expression, got '<' on Heroku
First, requests to port8000completely bypass nginx, so nothing strange here. You should go tolocalhostwithout port number.Second, you have to symlink this config to/etc/nginx/sites-enabledand reload nginx.Third, your static location is wrong. You havelocationwithout trailing slash andaliaswith one. They should always be with or without trailing slash simultaneously. And in this case it's even better to haverootdirective.server { root /home/www/flask_project; index index.html; location / { proxy_pass http://localhost:8000/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /static/ { # empty. Will serve static files from ROOT/static. } }
I am using thistutorial - part 1, but i am not sure how to test if the app is running with nginx serving static files or not.I have exactly the same code./etc/nginx/sites-available/flask_project server { location / { proxy_pass http://localhost:8000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /static { alias /home/www/flask_project/static/; } }And then:gunicorn app:app -b localhost:8000All routes are working fine. However if I dohttp://localhost:8000/statici will seeNot Found The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.And apparently I should see the page withTest!from static folder.What i am doing wrong?Basically i want to know how configure nginx to serve static files and then confirm.-app.py -static -index.html
Confirm if application is using nginx to serve static files
Through trial and error (with curl + firefox-dev-tools) I have found, that the character0x11in combination with:Content-type: text/plainmakes nginx deliverContent-type: application/octet-stream.I don't know why this happens, but i found that the C-program produces the error, because it prints0x11or^Qordc1. This phenomenom also happens with files containing this character.
I am new to this forum. This is my first question.I have a nginx-server + fcgiwrapper set up to run Programs on user request (no PHP).For testing I have a simple bash script, which displays the environment variables and sets two cookies, a second bash-script prints "Hello World" as text/plain and another bash-script prints "Hello World" as text/html.Another Program written in C is supposed to read Text from stdin, parse it and print Text based in the input to stdout, which the should be displayed as text/plain in the requesting webbrowser. (the requesting browser needs to use POST).However sometimes it displays the returned text as "text/plain" (which it should do), but sometimes the browser wants to download the returned text, as if it was "application/octet-stream".But, if I test the C-Program in a prepared EnvironmentEnvironment Variables: CONTENT_LENGTH=30 REQUEST_METHOD=POST HTTP_COOKIE=NAME=TEST; ID=200it works every time, shows no errors and at the beginning it prints:Content-type: text/plain (plus two newlines)I have found that depending on the contents length it sometimes works and sometimes doesn't. (This only happens when the program is started through a webbrowser.) In Firefox, using the dev-tools, I could see that the answers Content-type wasapplication/octet-streamand if I save it, it turns out to be a text file which contains the text that should have been displayed in the browser directly. What am I doing wrong?Edit: I have already searched for similar problems with no success + all other things work perfectly + This also happens with different browsers (epiphany, lynx, internet explorer on Windows)
nginx + fcgiwrapper sporadic Prblem: Delivers application/octet-stream instead of text/plain
I could not fix this problem. Also we supposed to use EC2 free instance only instead of BeanStalk.We have now moved to Free EC2 instance with RDS and deployed the rails application using Capistrano with Nginx + Unicorn. Though it was not easy[1][2]but finally we got it working.
The deployment was successful and everything is green. But when we try to access the application URL, it gives502 Bad Gatewayerror.Checking for puma process withps -aux | grep pumadoesn't return any process attached to puma server butpgrepreturns following.$pgrep -fl puma 18009 su -s /bin/bash -c bundle exec puma -C /opt/elasticbeanstalk/support/conf/pumaconf.rb webapp 18031 ruby /opt/rubies/ruby-2.0.0-p598/bin/puma -C /opt/elasticbeanstalk/support/conf/pumaconf.rb
502 bad gateway nginx + puma + rails 3.2 on Elastic Beanstalk
Even if you've enabled custom recipes have you configured OPsWorks to go to your GIT repo?http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable.htmlIf your GIT repo is on Github for instance you'll have to also authorize Amazon to access your repo using the proper SSH Key.Also ensure the nginx has all the dependencies it needs in order to run. It might be easier to just put the cookbooks in a tar.gz and upload it to S3 to use.
I'm trying to figure out the best way to add an nginx server as a proxy for my node.js AWS OpsWorks machines. I have not been able to get it working, as Chef/OpsWorks cannot seem to find the cookbook. Here is my setup: I am using the node.js layer, and have created a git repo for the chef recipes for nginx. I have enabled custom recipes, and I have successfully used a custom HAproxy attributes file from this repo. The structure of the repo is as follows:nginx-custom --recipes --templates --attributes haproxy --attributesThe weird thing is that the HAproxy overrides work. The nginx cookbook is basically copy-pasted from the OpsWorks version, with some of my own attributes (Maybe b/c it is a full cookbook is the problem?). So when I try to run the nginx-custom cookbook as part of the setup step (I've added the name of the cookbook to the setup step with the default recipe like 'nginx-custom::default'), I get the "No such cookbook" error. I've tried running it as a standalone command with the sam result. Am I doing something obviously wrong? Should I use Berkshelf for this? Should I make a custom layer instead of trying to modify an existing one? Any help appreciated. Thank you.
No such cookbook -- OpsWorks can't find custom cookbook
Please open a support ticket & we can take a look. We wouldn't add a redirect unless you set a PageRule instructing us to do so, so this sounds like you may have something configured incorrectly. Forwarding & cache status are too different things & really shouldn't have any play together here.
I've an issue with Cloudflare, sometimes (randomly) I have 302 redirections on random an non-existing subfolders, I give you some examples :GET /en/home > 302 > /en/home/sWetZ > 302 > /en/home > 302 > /en/home/qUTIs > 302 > /en/home > 200 GET /en/home > 302 > /en/home/zaIue > 302 > /en/home/zaIue/widUT > 302 > /en/home > 200When I disable Cloudflare, everything seems to work well and no sign of strange redirections. I've noticed that these redirections happen on resources when header "CF-cache-status" is set to MISS.It's very annoying because "ERR_TOO_MANY_REDIRECTIONS" happen really often which totally breaks the website : javascript, styles and images are not loaded...
Cloudflare bug random 302 redirections
Use location with trailing slash, remove rewrite and useproxy_passwith/uri. Nginx will take of replacing/site1/with/. Also, you may need to setHostheader tosite1.mysite.comnot the$host.location /site1/ { proxy_pass http://site1/; proxy_set_header Host site1.mysite.com; ... }
Is there a way to use nginx as a router while keeping the requested domain in the URL? For example, if I hit mysite.com, the nginx routing server looks at the URL and directs traffic to a particular server, all while maintaining the original requested domain in the URL.E.g.mysite.com/site1/params Router -> site1.mysite.com/paramsBut even though behind the scenessite1.mysite.com/paramsis being called, the user seesmysite.com/site1/paramsin the URL.I've taken a stab at the configuration, but seem to be getting 404's.upstream site1 { server site1.mysite.com; } location /site1 { rewrite ^(.*)$ /$1 break; proxy_pass http://site1; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; }
Routing to different servers with nginx
This isn't anything to do with Nginx, there's a bug in the code generating your URLs.On the pagehttp://git.example.org/fooyou have a link written as:Click to commitIt should either be absolute to the server as:Click to commitOr relative to the current directory as:Click to commitPresumably somewhere in the code where you init cgit you are pasing infoowhere you should be passing in/foo.
Wired URL rewriting issueswhile I gotohttp://git.example.org/fooit works fine, repos shows up. However the links on that page appended /foo again i.e.http://git.example.org/foo/foo/commitWhen I goto URL likehttp://git.example.org/foo/commit?id=123123It works, but each links on that page looks likehttp://git.example.org/foo/commit/foo/snapshot/foo/4f0be51d35fe3160a9122894723b69df69a6fb7e.zip?id=4f0be51d35fe3160a9122894723b69df69a6fb7eHere is my nginx.conf, did I miss something?server { listen 80; server_name git.example.org; root /var/www/htdocs/cgit; index cgit.cgi; location ~* ^.+\.(css|png|ico)$ { expires 30d; } if ($request_filename = cgit.cgi){ rewrite ^/([^/]+/.*)$ /cgit.cgi?url=$1 last; } location / { try_files $uri @cgit; } location @cgit { fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi; fastcgi_param HTTP_HOST $server_name; fastcgi_param PATH_INFO $uri; include fastcgi_params; } access_log /var/log/nginx/cgit_access.log; error_log /var/log/nginx/cgit_error.log warn; }Update, SOLVEDit's cgit setting of virtual-root=/ And I updated my nginx.conf too, urls rewrite works now!!server { listen 80; server_name git.mengzhuo.org; root /var/www/htdocs/cgit; location ~* ^.+\.(css|png|ico)$ { expires 30d; } location / { index cgit.cgi; fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param HTTP_HOST $server_name; fastcgi_param PATH_INFO $uri; fastcgi_param QUERY_INFO $uri; include "fastcgi_params"; } access_log /var/log/nginx/cgit_access.log; error_log /var/log/nginx/cgit_error.log warn; }
Cgit and Nginx URL rewrite
Ok. I got this work. Not sure the solution is correct but:First I need to setup a subdomain in Tomcat.Here is the answer I was inspired with. Then, configure nginx with:server { listen 80; server_name example.com; location / { root /path/to/domain/root; index index.html; } } server { listen 80; server_name subdomain.example.com; location / { proxy_pass http://subdomain.example.com:8080; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } }Domain config remains as in the question - it is good.And finally I just putsubdomain.warto thewebappdirectory of my Tomcat and it works like a charm!
Here are the preconditions:I've registered domainexample.comin godaddy.I run a Tomcat on8080port with a couple of applications. 8080 port is hidden to the world.I want to mapsubdomain.example.comtoserver:8080/subdomain. I want it be possible to do similar mappings (subdomain2.example.comtoserver:8080/anotherContext, for example) in the future.example.comshould be mapped toserver:8080/mainPageAppAll other applications should be accesible via their context:example.com/app->server:8080/app.For now, I've only configured godaddy (I do not know if this is good):Can you help me with nginx config for this case? Here is mine, but it sends infinite redirects:server { listen 80; server_name localhost; location / { root html; index index.html; } } server { listen 80; server_name subdomain.localhost; location / { proxy_pass http://127.0.0.1:8080/subdomain; } }
Subdomains, nginx and godaddy
Problem solved:In system there were two Mercurial versions installed:/usr/bin/hg == v1.4 /usr/local/bin/hg == v2.2Althoughwhich hgshows/usr/local/bin/hgnginx was using/usr/bin/hg.
There's a lot of questions about bookmarks but none of them answers my question:What should I do to allow for creating bookmarks while pushing to my hgweb server?Here's what I'm getting while trying to push bookmark:$ hg push -B feature1 pushing to http://local_server/hg/Project searching for changes no changes found exporting bookmark feature1 updating bookmark feature1 failed!Is there anything I should put into.hgrcor inhgwebconfig?When bookmarks already exist in remote repo then they are updated, also creating bookmarks by push works on bitbucket so I'm sure it's possible.Problem solved:In system there were two Mercurial versions installed:/usr/bin/hg == v1.4 /usr/local/bin/hg == v2.2Althoughwchich hgprints/usr/local/bin/hgnginx was using/usr/bin/hg.Thank you for your help.
Creating remote bookmarks while pushing to hgweb server
I'd probably use rewrite.server { listen 80; server_name test.dev; root /srv/www; location / { index foo.html; rewrite ^/foo(.*)$ /$1 last; rewrite ^/bar(.*)$ /$1 last; break; } }Something like that yah, I think you can figure it out from here :)
I have spent an hour looking for an answer to this, but unless you know the vocabulary it is very difficult to search. What I want is dead simple. I have a server like so:server { listen 80; server_name test.dev; root /srv/www; location / { index foo.html; } location /foo { index foo.html; } location /bar { index foo.html; } }What I want is for /, /foo, and /bar to all point to theexact same file. In other words, I want the location part to be completely ignored. Just serve the file from the root directory that I tell you to serve.Alias doesn't seem to be the answer, doesn't know which file it is supposed to serve.
nginx ignore location part
Is the visitor being redirected to the affected page from another? If so, it's likely to be a known IE bug.Take a look at this similar question:javascript location.hash refreshing in IE
When I make an XMLHttpRequest, I also changewindow.location.hash.For example,mysite.com/gallery/q#1becomesmysite.com/gallery/q#2.When this happens, IE8, as Fiddler and nginx logs show, makes this strange extra request tomysite.com/gallery/(which is 404).The page isn't reloading, it's like an XMLHttpRequest.GET http://mysite.com/gallery/ HTTP/1.1 Accept: */* Referer: http://mysite.com/gallery/q User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727) Accept-Encoding: gzip, deflate Host: mysite.com Connection: Keep-AliveSeparately, hash change or Ajax-request won't trigger this extra one.Another thing to note – the extra request occursnot on everyAjax-request. It happens seemingly randomly.Can it be nginx misconfiguration? Or is it simply one of the many IE8 bugs?Is there a workaround? I don't want this extra load.UpdateHere's the Ajax code ($stands for jQuery):var id = link.getAttribute('data-id') var xhr = $.ajax({ cache: false, url: '/stock-items', method: 'GET', data: { id: id }, dataType: 'json' }) xhr.success(function (data) { if (currentId === id) { toggleLoader(false) displayData(data) } })And hash manipulating code:function setHash(link) { var index = $(link).index() globals.location.hash = index + 1 }Also tried with the hash-symbol with the same result:globals.location.hash = '#' + index + 1The Ajax-request is on click on gallery image links:links.on('click', function (e) { setHash(this) loadData(this) e.preventDefault() })I also tried theselinksto have thehrefattribute set to#1,#2and so on in the HTML (and removede.preventDefault()). So that the hash changes naturally. Nope, the extra request is made anyway.
IE makes extra GET-request on hash change
CGI::Fastwill handle most of the work for you, including setting up the daemon.use CGI::Fast; local $ENV{FCGI_SOCKET_PATH} = ":9000"; local $ENV{FCGI_LISTEN_QUEUE} = 20; while ($q = CGI::Fast->new) { print $q->header; print "The foo input is ", $cgi->param('foo'), ""; }An alternative isNginx::Simplewhich gives you more control over the behavior of your cgi-script-as-daemon.
There are any number of tutorials out there on how to use FastCGI to CGI wrappers to serve Perl code using nginx. But I'm comfortable working with Perl modules myself, so I don't need the wrapper. I'm trying to figure out the right way to set this up. Here's the code I have so far:#!perl use CGI; use FCGI; my $s = FCGI::OpenSocket(':9000',20); my $r = FCGI::Request( \*STDIN, \*STDOUT, \*STDERR, \%ENV, $s); while ($r->Accept >= 0) { my $cgi = CGI->new; print "Content-type: text/html\n\n"; print "The foo input is ", $cgi->param('foo'), ""; $r->Finish; }And enable it in nginx like so:location /foo { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.pl; }The problem is that no matter how many times I call the script,paramreturns the same value that was passed the very first time it was called since starting the program. Is there a better way of doing this? I'm open to alternatives toCGI.pmas well.
Perl web serving with nginx and FastCGI - not able to read parameters
Worker processes in nginx can handle multiple incoming and outgoing requests simultaneously. The answer to the question you linked (3436808) is also applicable to this question.
From my limited understanding of nginx I know that nginx seperates itself from Apache by using a single thread that handles all requests instead of Apache which throws threads at the problem. In theory with a bunch of small requests its faster. But what about long running requests.Lets say a user is downloading a large file or there's some long running PHP script that's slow because of something its depending on (disk IO, database) is slow. With Apache everything has its own thread so while PHP is waiting for a response from the database another request can come in and be simultaneously processed. With nginx however, wouldn't something like that lock the thread and therefor the whole server? I know that you can have multiple nginx processes but creating more processes for just file downloads just seems like trying to recreate Apache.I know I'm missing something here as nginx handles situations like this, but what? How does nginx do this with its threading model?And before you say it, this isn't a duplicate ofthis questionas it only talks about incoming connections
How does nginx handle long running requests like file downloads?
Nginx isn't able to connect to your backend (gunicorn) or gunicorn is refusing the connection. You provided no details about the configuration so that's all the help you'll get. You are correct that the application code has nothing to do with it. It's a configuration error on your part.
I am trying to add an application to an existing Django project, but once I have done it I get a 502 error.The server is running Ubuntu. I don't think it has to do with the applications code because I got it running on the django development server. It goes away when I take out the app's name from settings.py and restart gunicorn.Here's a part of the log2011/07/15 01:24:45 [error] 16136#0: *75593 connect() failed (111: Connection refused) while connecting to upstream, client: 24.17.8.152, server: staging.site.org, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8020/", host: "staging.site.org"Here's the nginx config file.Nginx Config FileI'm not sure what other information is needed. Not sure where the gunicorn logs are located. My server admin skills are kind of lacking.
502 error after adding application to a Django project running on nginx and gunicorn
Ok, it wasn't memory or database issue, it was... IonCube issue... i was debuging core classes and found that script stops on Enteprise Modules and... if You don't have IonCube installed it just simply display blank page.But, now Magento returns 404: Page not found...Thx, guys for help and if You have some advice on second issue fell free to post it here :)After applying little fix:/* Store or website code */ $mageRunCode = isset($_SERVER['MAGE_RUN_CODE']) ? $_SERVER['MAGE_RUN_CODE'] : ''; /* Run store or run website */ $mageRunType = isset($_SERVER['MAGE_RUN_TYPE']) ? $_SERVER['MAGE_RUN_TYPE'] : 'store'; Mage::run('', 'store'); //<-this //Mage::run($mageRunCode, $mageRunType);Front and Back are loading, but there is an problem with controllers... but not for long !A and if i type in url /admin nginx will return Input file not found, but when i type index.php/admin it load... part. It's and issue with rewrite and server vars.EDIT:I won ! iconv wasn't installed... now everything work except rewriting...SUMMARY: I need to find a way to properly get server var for index.php file and rewrite index.php to /Thx for help !
anybody knows how to configure server {} in configuration file of nginx server? I have something like this below:server { server_name local.com; root some_path; index index.php; #location / { #try_files $uri $uri/ index.php; #proxy_pass http://127.0.0.1:9000; #} # set a nice expire for assets #location ~* "^.+\.(jpe?g|gif|css|png|js|ico|pdf|zip|tar|t?gz|mp3|wav|swf)$" { # expires max; # add_header Cache-Control public; #} # the downloader has its own index.php that needs to be used #location ~* ^(/downloader|/js|/404|/report)(.*) { # include fastcgi_params; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$1/index.php$1; # fastcgi_read_timeout 600; # fastcgi_pass 127.0.0.1:9000; #} location ~* \.php { include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_read_timeout 18000; fastcgi_pass 127.0.0.1:9000; } #location ~* ^(/index.php)?(.*) { # include fastcgi_params; # fastcgi_param SCRIPT_FILENAME $document_root/index.php$1; # fastcgi_pass 127.0.0.1:9000; # fastcgi_read_timeout 18000; #} }Browser returns blank page and doesn't exec php...EDIT:After spending some time with nginx configuration and php stuff i ended on having site that in some cases load properly and in some not...Ex: I have two pages that are identical, but for some reason first is loading lie a charm, and second is loading partially...Sometimes page is loading half way...Nginx isn't logging anything...And for some reason when i try to go to backend, nginx loads frontend with backend url :/Does anybody can provide me with other magento 1.8 configuration ?
Nginx configuration with Magento 1.8
I think one solution could be using a combination of PHP as apache module or through FastCGI and use mod_proxy apache module to do some reverse proxy to access your administration app running with gunicornYou can have a setup like :Front HTTP Server apache on port 80 : www.host.com:80Backend HTTP Server gunicorn on another port : other.host.com:8080 or localhost:8080 publicly accessed with mod_proxy and url like www.host.com/admin/Media HTTP Server : media.host.com, if it has to be on the same system you can use mod_proxy and run the NGINX server on another TCP port.Note that you should not be able to get the best performance with the NGINX as a media server hidden behind apache with mod_proxy.This part of setup relies upon the possibility of having more than one public IP adress on this slice.
I'm curious... I'm looking to have a really efficient setup for my slice for a client. I'm not an expert with servers and so am looking for good solid resources to help me set this up... It's been recommended to me that using FastCGI for PHP, Green Unicorn (gunicorn) for Django and Nginx for media is a good combination to have PHP and Django running on the same slice/server. This is needed due to have a main Django website and admin, but also to have a PHP forum on there too.Could anyone push me to some useful resources that would help me set this up on my slice? Or at least, any views or comments on this particular setup?
PHP and Django: Nginx, FastCGI and Green Unicorn?
You could use theproxy_passconfiguration in Ngxinx.server { gzip on; listen 80; server_name books-stuff.com ; location / { proxy_pass http://general-stuff.com/books/; break; } }Should do exactly what you want
I'm running Django behind Nginx (as FASTCGI) and I need to "deeplink" to a page in one domain from the root of another without redirecting or forwarding e.g.Given that I have a domain general-stuff.com and a matching URLhttp://general-stuff.com/books/and that I have a second domain books-stuff.com I need a way to get the page served byhttp://general-stuff.com/books/at the URLhttp://books-stuff.com/how would I go about this?Edit:Note that I also need the tree below these urls to work e.g.http://books-stuff.com/book1/should serve the page athttp://general-stuff.com/books/book1/etc.Thanks in advanceRichard.
Django & Nginx deeplinking domains (re-write rules or django urls?)
I found another solution for my use case. Based on some people's comments, it looks like it may not be possible to find the private ip address of a device that connects to your website, only the public ip address.@Cerceis os.networkInterfaces() answer may work. I did a quick test, but was unable to know for sure if it works. I don't have time to test it out more fully. If you are hoping to find an answer, I would try out os.networkInterfaces() in Node.js and that might get you the ip address you're looking for.
I am using this code to get the ip address in Node.js:const ip = await (req.headers['x-forwarded-for'] || '').split(',').pop().trim() || req.socket.remoteAddress;For all the devices on my home wifi network and when I access my website using data on my phone, I get this ip address: ::ffff:127.0.0.1I'm trying to get the ip address of each individual device (phone, laptop) that visits my site. But all of the devices show the same ip address.How do I get the individual device ip address of each device in Node.js?EDIT:I made some updates and no longer get ::ffff:127.0.0.1. I now get the ip address of the internet connection. So if I'm connected to wifi, I get the wifi modem ip address. If I'm using data, I get the data connection ip address.But I need to get the device ip address. I do NOT want the connection ip address. I want the device ip address.Here are the changes I made:I set 'trust proxy' to true:app.set('trust proxy', true);I updated the etc/nginx/sites-available/mysite file to look like this:location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:5050; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection upgrade; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; }I updated the etc/nginx/proxy_params file to look like this:proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme;What did I do wrong? How do I fix this? From what I'm reading, it sounds like I should be able to use req.headers['x-forwarded-for'] to get the right ip address, but req.headers['x-forwarded-for'] returns the same thing as req.headers['x-real-ip'] except it is in an array.
How to get the ip address in Node.js Express?
I have this issue as well; if you find a way to set it up on Google CDN please let me know. Here is my workaround: choose "Use origin settings based on Cache-Control headers". Since most browsers cache static JS assets, I reduce my Cache-Control header for .html to be reasonably low or normal, like 5-60 minutes, whereas the javascript files have a much longer cache time, like a week. Some context: after deployment, if Google serves the old index.html from its CDN cache, the user's browser will request old JS files. If it's time for those JS files to be re-validated, Google will see they are now 404s, and send a 404 response instead of the JS file. The workaround above makes sure that the JS files are highly likely to be available in the cache, while the index.html is updated more frequently. Update: This works... but there appears to be a caveat: if the page isn't a frequently trafficked page, Google will eventually return a 404 on the javascript file before the specified time. Even though the Google docs state it won't get revalidated for 30 days, this appears to be false. Update 2: Google's response: "The expiration doc says 'Cloud CDN revalidates cached objects that are older than 30 days.' It doesn't say that Google won't revalidate prior to 30 days. Things fall out of cache arbitrarily quickly and max-age is just an upper bound."
My frontend runs on nginx, and I'm serving a bunch of .chunk.js files that are built from react. Every time I update my frontend, I rebuild the docker image and update the kubernetes deployment. However, some users might still be trying to fetch the old js files. I would like Google Cloud CDN to serve the stale, cached version of the old files, however it seems that it will only serve stale content in the event of errors or the server being unreachable, not a 404. Cloud CDN also has something called "negative caching", however that seems to be for deciding how long a 404 is cached. --> What's the best way to temporarily serve old files on Google Cloud? Can this be done with Cloud CDN? (Ideally without some funky build process that requires deploying the old files as well)
Getting Google Cloud CDN to serve stale file on 404
I believe I have figured out what the error was. My containers had been built before WSL 2 was enabled in my Docker for Windows, so when I switched to WSL 2 the containers weren't rebuilt. After pruning the containers and rebuilding them, it works OK.
I was recently trying to use the thecodingmachine/php:7.1-v3-fpm-node10 image as my fpm container for local development and was surprised by this error message: php-fpm_1 | [24-Sep-2020 20:05:32] ALERT: [pool www] user has not been defined php-fpm_1 | [24-Sep-2020 20:05:32] ALERT: [pool www] user has not been defined php-fpm_1 | [24-Sep-2020 20:05:32] ERROR: failed to post process the configuration php-fpm_1 | [24-Sep-2020 20:05:32] ERROR: failed to post process the configuration php-fpm_1 | [24-Sep-2020 20:05:32] ERROR: FPM initialization failed php-fpm_1 | [24-Sep-2020 20:05:32] ERROR: FPM initialization failed When I looked into /etc/php/7.1/fpm/pool.d/www.conf I discovered that the line with user = www-data is still commented out, although I have absolutely no idea why. The same image worked OK a week ago. Can anyone please help me with this?
ALERT: [pool www] user has not been defined when using thecodingmachine/php:7.1-v3-fpm-node10
Some certificates don't support it. You need to set --enable-ssl-chain-completion=false; then the errors stop.
I am running the nginx ingress on my kubernetes 1.9 cluster, using an internally signed certificate for the application URL. I have included the root & intermediate certificates as part of the TLS secret. In my nginx log file, I see this message frequently: backend_ssl.go:139] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate. How do I get more details about this error message? error message: E0129 01:11:39.582118 7 backend_ssl.go:139] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate. E0129 01:11:39.582689 7 backend_ssl.go:139] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate. E0129 01:11:39.583031 7 backend_ssl.go:139] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate. E0129 01:11:39.583308 7 backend_ssl.go:139] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate.
nginx ingress - unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate
You can see the GPUs available for your project and zone via gce_list_gpus(). Try this: gce_list_gpus(project = "your-project"). Note from the comments: this requires the googleComputeEngineR package; the asker was looking for a way to check the GPU using only base R and the xgboost package.
I am writing a script that uses xgboost package to train machine learning models in R. The model training can be accelerated if CUDA GPUs are available and the GPU version of the package is installed. Is there a function that can check if GPUs are available (preferably without installing additional package other than xgboost)?
How to check if CUDA GPUs are available in R?
According to the TensorFlow tested build configurations, you need to install CUDA 10.1 for TensorFlow version 2.3. If there is any incompatibility between the versions of TensorFlow, CUDA and cuDNN, it won't detect the GPU. In your case I would recommend upgrading the TensorFlow version to 2.4 or 2.5, which support CUDA 11.x. From TensorFlow 2.0 onward Keras is integrated with TensorFlow, hence there is no need of a separate Keras installation; you can use the tf.keras module.
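To double-check the result after aligning the versions, here is a quick verification sketch (the tf.test.is_built_with_cuda() call and the printed checks are my addition, not part of the answer above):

import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

If the last line prints an empty list even with matching versions, the driver/CUDA installation is the next thing to inspect.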
I have installed all things needed to run my GPU with Tensorflow, such as NVIDIA driver, Microsoft VisualStudio 2017 C++ distribution, CUDNN in the correct folder. But still I am unable to use the GPU. I receive the following message. The following are my software and hardware specifications. My GPU version is NVIDIA Quadro P620, Tensorflow, and TF-GPU versions are 2.3.0, Keras and Keras-GPU 2.4.3 CUDA version 11.2.2 NVIDIA driver version: 27.21.14.6192 Where can I start my debugging to solve this problem? Any lead?
Cannot use NVIDIA Quadro P620 GPU with Tensorflow
I have looked more into it and it appears that there is no way to halt GPU threads. I have also been told that threads all end at the same time if they have a constant number of instructions. A correction from the comments: this is both not fully true, but also not fully wrong. For one, there is a way to halt GPU threads: AllMemoryBarrierWithGroupSync(), but it only halts until all threads in that group reach that point, not ALL threads you dispatch. Also, all threads within a wave front will execute at the same time, since they're running on the same SIMD processor; however, threads on two different wave fronts will not execute at the same time - depending on what you do in your shader, it's not even guaranteed that they take approximately the same time, even with a constant number of instructions.
I am currently working on a compute shader that does CA-based fluid simulation. My current algorithm writes to the direct neighbours of the cell currently being computed. My current idea is to have one thread compute a 3x3 area of my CA grid, and subsequently, each 3x3 area adjacent to the other. This in theory would ensure that a pixel that is being written to would not be prematurely read. I need some way to prevent a thread from continuing until all other threads have reached the same point. Pseudo code: for (int i = 0; i < 9; i++) { // do all calculations necessary while (true) { if (allThreadsDone) break; } } This may not be necessary though if the computation of each thread was constant time, which I do not know is true.
Is there a way to halt a GPU thread until all threads being executed reach that point?
The total number of GPU devices visible should be verified with list_logical_devices: logical_gpus = tf.config.experimental.list_logical_devices('GPU') print(logical_gpus) #[LogicalDevice(name='/device:GPU:0', device_type='GPU'), # LogicalDevice(name='/device:GPU:1', device_type='GPU')]
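Putting this together with the virtual-device setup from the question, a minimal sketch of the intended flow (assuming TF 2.4's experimental API and that this runs before the GPU is first initialized):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    # Split the first physical GPU into two 2 GB logical devices.
    # Must run before the GPU is initialized by any other operation.
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048),
         tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])

print(tf.config.experimental.list_logical_devices("GPU"))
# Expect two entries: /device:GPU:0 and /device:GPU:1

list_physical_devices will still report one physical GPU; the split only shows up in the logical device list.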
I would like to test using multiple GPUs for training a network. However, I only have installed one physical GPU (RTX 2070) on my machine. Would I be able to split this device into two virtual devices? My current attempt is based on the tf.config.experimental.set_virtual_device_configuration function, however, it does not seem to be working. Full example: import tensorflow as tf print(tf.__version__) # 2.4.1 phisical_gpus = tf.config.experimental.list_physical_devices("GPU") print(phisical_gpus) # [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] tf.config.experimental.set_virtual_device_configuration( phisical_gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048), tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)] ) print(tf.config.experimental.list_physical_devices("GPU")) # [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Can I split a physical GPU into two virtual devices in tensorflow 2.x?
You can load one batch of data at a time onto the GPU. You should use a data loader to fetch a batch of data and also initialize a torch device instance to use the GPU. You can check the following tutorial; it uses a data loader to get data as batches and loads them onto the GPU via a torch device. https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
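A minimal sketch of that pattern (the tiny TensorDataset here is only a stand-in for the real 33 GB dataset, and a CUDA-enabled PyTorch build is assumed):

import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Small stand-in dataset; only one batch at a time is copied to the GPU.
dataset = TensorDataset(torch.randn(10000, 64), torch.randint(0, 2, (10000,)))
loader = DataLoader(dataset, batch_size=256, shuffle=True)

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)  # per-batch GPU copy
    # forward / backward pass would go here

Because only each 256-sample batch lives on the GPU at once, the full dataset never needs to fit in the 16 GB of GPU RAM.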
When loading a dataset into the GPU for training, would a Pytorch NN load the entire dataset or just the batch? I have a 33GB dataset that fits comfortably on my normal RAM (64GB) but i only have a 16GB of GPU RAM (T4). As long as Pytorch only loads one batch at a time into the GPU, that should work fine without any memory problems?
How does a Pytorch neural network load dataset into GPU
Known issue. Should be resolved in next build. The issue can be tracked here: https://github.com/microsoft/WSL/issues/6773 Update: It has been resolved as of build 21359
I’m in the windows insider program to utilize the gpu-compute features that were implemented last summer. Things have worked great. However, after updating to the most recent build, nvidia-smi no longer works. I get an error saying “your operating system doesn’t allow it” https://blogs.windows.com/windows-insider/2021/04/07/announcing-windows-10-insider-preview-build-21354/ This is the release doc. I’m unsure whether my error is a result of the last bullet point in the known issues section, or if my driver has somehow been corrupted, or if something needs to be toggled back on in windows.
GPU Compute unavailable in wsl2 in new build?
IMHO you don't need to specify the GPU explicitly in TF/Keras - current versions on Colab will use it when it is available. GPU loading usually takes place at fit and predict times, not at model building - and then you can fine tune memory consumption using the batch size. Please try your code without the with blocks. And please use proper code copy & paste instead of pictures in future. Follow-up from the comments: the error occurs even without the 'with' statements and after a "factory reset" of the runtime (with ~4300 MiB already allocated according to nvidia-smi before the crash); the answerer also suggested rethinking the number of units in that dense layer - if the data is pictures, convolutions and max pooling layers before the flatten give features instead of pixels - while the asker noted the architecture replicates a paper.
I am running a model which allocates [32768,32768] float weight (around 4.29 GB) in its first layer. But it gives an oom error while adding the layer in the sequential model. This is the output of nvidia-smi before adding layer - This is the error - And this is the output of nvidia-smi after the error - When the Colab GPU is of 13 GB size, why can't it allocate a weight of 4.29 GB? The other answers on this for e.g., allowing GPU growth doesn't work. (Note - the GPU and CPU code division in the model creation was originally meant to be on gpu1 and gpu2, but since Colab provides only one GPU, I divided it between CPU and GPU to use RAM from both)
Colab giving OOM for allocating 4.29 GB tensor on GPU in tensorflow
In my case, reinstalling the GPU driver and reinstalling CUDA solved the issue.
I installed Cuda 11.2 on a Nvidia Quadro P2000 GPU. The driver version is R460 U5 (461.92). Unfortunately, after executing nvidia-smi, it shows me the following screen: ... +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| Internal error Does anyone know how to debug or fix this issue? nvidia-smi -v is not possible. Thank you so much!
Nvidia-smi shows internal error after 11.2 Cuda installation on P2000 GPU
To untrack a single file that has already been added/initialized to your repository, i.e., stop tracking the file but not delete it from your system use: git rm --cached filename To untrack every file that is now in your .gitignore: First commit any outstanding code changes, and then, run this command: git rm -r --cached . This removes any changed files from the index(staging area), then just run: git add . Commit it: git commit -m ".gitignore is now working" To undo git rm --cached filename, use git add filename. Make sure to commit all your important changes before running git add . Otherwise, you will lose any changes to other files. Please be careful, when you push this to a repository and pull from somewhere else into a state where those files are still tracked, the files will be DELETED
I have an already initialized Git repository that I added a .gitignore file to. How can I refresh the file index so the files I want ignored get ignored?
Ignore files that have already been committed to a Git repository [duplicate]
The Chrome DevTools can disable the cache. Right-click and choose Inspect Element to open the DevTools. Or use one of the following keyboard shortcuts: F12 Control+Shift+i Command+Shift+i Click Network in the toolbar to open the network pane. Check the Disable cache checkbox at the top. Keep in mind, as a tweet from @ChromiumDev stated, this setting is only active while the devtools are open. Note that this will result in all resources being reloaded. Should you desire to disable the cache only for some resources, you can modify the HTTP header that your server sends alongside your files. If you do not want to use the Disable cache checkbox, a long press on the refresh button with the DevTools open will show a menu with the options to Hard Reload or Empty Cache and Hard Reload which should have a similar effect. Read about the difference between the options to know which option to choose. The following shortcuts are available: Command+Shift+R on Mac Control+Shift+R on Windows or Linux
I am modifying a site's appearance (CSS modifications) but can't see the result on Chrome because of annoying persistent cache. I tried Shift+refresh but it doesn't work. How can I disable the cache temporarily or refresh the page in some way that I could see the changes?
Disabling Chrome cache for website development
Update: Turns out that this method, while following STL idioms well, is actually surprisingly inefficient! Don't do this with large files. (See: http://insanecoding.blogspot.com/2011/11/how-to-read-in-file-in-c.html) You can make a streambuf iterator out of the file and initialize the string with it: #include <string> #include <fstream> #include <streambuf> std::ifstream t("file.txt"); std::string str((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>()); Not sure where you're getting the t.open("file.txt", "r") syntax from. As far as I know that's not a method that std::ifstream has. It looks like you've confused it with C's fopen. Edit: Also note the extra parentheses around the first argument to the string constructor. These are essential. They prevent the problem known as the "most vexing parse", which in this case won't actually give you a compile error like it usually does, but will give you interesting (read: wrong) results. Following KeithB's point in the comments, here's a way to do it that allocates all the memory up front (rather than relying on the string class's automatic reallocation): #include <string> #include <fstream> #include <streambuf> std::ifstream t("file.txt"); std::string str; t.seekg(0, std::ios::end); str.reserve(t.tellg()); t.seekg(0, std::ios::beg); str.assign((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>());
I need to read a whole file into memory and place it in a C++ std::string. If I were to read it into a char[], the answer would be very simple: std::ifstream t; int length; t.open("file.txt"); // open input file t.seekg(0, std::ios::end); // go to the end length = t.tellg(); // report location (this is the length) t.seekg(0, std::ios::beg); // go back to the beginning buffer = new char[length]; // allocate memory for a buffer of appropriate dimension t.read(buffer, length); // read the whole file into the buffer t.close(); // close file handle // ... Do stuff with buffer here ... Now, I want to do the exact same thing, but using a std::string instead of a char[]. I want to avoid loops, i.e. I don't want to: std::ifstream t; t.open("file.txt"); std::string buffer; std::string line; while(t){ std::getline(t, line); // ... Append line to buffer and go on } t.close() Any ideas?
Read whole ASCII file into C++ std::string [duplicate]
As far as I know a common solution is to add a ?<version> to the script's src link. For instance: <script type="text/javascript" src="myfile.js?1500"></script> I assume at this point that there isn't a better way than find-replace to increment these "version numbers" in all of the script tags? You might have a version control system do that for you? Most version control systems have a way to automatically inject the revision number on check-in for instance. It would look something like this: <script type="text/javascript" src="myfile.js?$$REVISION$$"></script> Of course, there are always better solutions like this one.
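If there is any server-side rendering step available, the version can also be derived from the file itself; here is a rough Python sketch of that idea (the helper name and the demo file are illustrative, not from the answer):

import os

def versioned_src(path):
    # Use the file's last-modified time as the cache-busting token,
    # so the URL only changes when the file actually changes.
    return "%s?v=%d" % (path, int(os.path.getmtime(path)))

open("myfile.js", "a").close()  # ensure the demo file exists
print('<script type="text/javascript" src="%s"></script>' % versioned_src("myfile.js"))

Tying the token to the file's mtime (or a content hash) avoids having to bump version numbers by hand on every release.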
We are currently working in a private beta and so are still in the process of making fairly rapid changes, although obviously as usage is starting to ramp up, we will be slowing down this process. That being said, one issue we are running into is that after we push out an update with new JavaScript files, the client browsers still use the cached version of the file and they do not see the update. Obviously, on a support call, we can simply inform them to do a ctrlF5 refresh to ensure that they get the up-to-date files from the server, but it would be preferable to handle this before that time. Our current thought is to simply attach a version number onto the name of the JavaScript files and then when changes are made, increment the version on the script and update all references. This definitely gets the job done, but updating the references on each release could get cumbersome. As I'm sure we're not the first ones to deal with this, I figured I would throw it out to the community. How are you ensuring clients update their cache when you update your code? If you're using the method described above, are you using a process that simplifies the change?
How can I force clients to refresh JavaScript files?
As @Bradford20000 pointed out in the comments, there might be a gradle.properties file as well as global gradle scripts located under $HOME/.gradle. In such case special attention must be paid when deleting the content of this directory. The .gradle/caches directory holds the Gradle build cache. So if you have any error about build cache, you can delete it. The --no-build-cache option will run gradle without the build cache. Daemon on MS Windows If you're on Windows, you'll need to kill the daemon before it allows you to clear those directories. See Kill all Gradle Daemons Regardless Version? for more info.
I'm trying to use Android Studio, and the first time I boot it up, it takes like 45 MINUTES to compile... If I don't quit the application, it is okay - each subsequent compilation/running the app will take around 45 seconds. I've tried to check some of my caches: there's a .gradle/caches folder in my home directory, and it's contains 123 MB. There's also a .gradle folder in my project folder... one of the taskArtifacts was like 200 MB. I'm scared to just randomly nuke them both. What parts of the folders are safe to delete? Is there a better explanation for why my Android Studio is taking forever to run the gradle assemble task upon first time loading the application? Do I also have to clear the intellij cache too?
How to clear gradle cache?
For modern web browsers (After IE9) See the Duplicate listed at the top of the page for correct information! See answer here: How to control web page caching, across all browsers? For IE9 and before Do not blindly copy paste this! The list is just examples of different techniques, it's not for direct insertion. If copied, the second would overwrite the first and the fourth would overwrite the third because of the http-equiv declarations AND fail with the W3C validator. At most, one could have one of each http-equiv declarations; pragma, cache-control and expires. These are completely outdated when using modern up to date browsers. After IE9 anyway. Chrome and Firefox specifically does not work with these as you would expect, if at all. <meta http-equiv="cache-control" content="max-age=0" /> <meta http-equiv="cache-control" content="no-cache" /> <meta http-equiv="expires" content="0" /> <meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" /> <meta http-equiv="pragma" content="no-cache" /> Actually do not use these at all! Caching headers are unreliable in meta elements; for one, any web proxies between the site and the user will completely ignore them. You should always use a real HTTP header for headers such as Cache-Control and Pragma.
I read that when you don't have access to the web server's headers you can turn off the cache using: <meta http-equiv="Cache-Control" content="no-store" /> But I also read that this doesn't work in some versions of IE. Are there any set of <meta> tags that will turn off cache in all browsers?
Is there a <meta> tag to turn off caching in all browsers? [duplicate]
Generally speaking: F5 may give you the same page even if the content is changed, because it may load the page from cache. But Ctrl+F5 forces a cache refresh, and will guarantee that if the content is changed, you will get the new content.
Is there a standard for what actions F5 and Ctrl+F5 trigger in web browsers? I once did experiment in IE6 and Firefox 2.x. The F5 refresh would trigger a HTTP request sent to the server with an If-Modified-Since header, while Ctrl+F5 would not have such a header. In my understanding, F5 will try to utilize cached content as much as possible, while Ctrl+F5 is intended to abandon all cached content and just retrieve all content from the servers again. But today, I noticed that in some of the latest browsers (Chrome, IE8) it doesn't work in this way anymore. Both F5 and Ctrl+F5 send the If-Modified-Since header. So how is this supposed to work, or (if there is no standard) how do the major browsers differ in how they implement these refresh features?
What requests do browsers' "F5" and "Ctrl + F5" refreshes generate?
They are slightly different - the ETag does not have any information that the client can use to determine whether or not to make a request for that file again in the future. If ETag is all it has, it will always have to make a request. However, when the server reads the ETag from the client request, the server can then determine whether to send the file (HTTP 200) or tell the client to just use their local copy (HTTP 304). An ETag is basically just a checksum for a file that semantically changes when the content of the file changes. The Expires header is used by the client (and proxies/caches) to determine whether or not it even needs to make a request to the server at all. The closer you are to the Expires date, the more likely it is the client (or proxy) will make an HTTP request for that file from the server. So really what you want to do is use BOTH headers - set the Expires header to a reasonable value based on how often the content changes. Then configure ETags to be sent so that when clients DO send a request to the server, it can more easily determine whether or not to send the file back. One last note about ETag - if you are using a load-balanced server setup with multiple machines running Apache you will probably want to turn off ETag generation. This is because inodes are used as part of the ETag hash algorithm which will be different between the servers. You can configure Apache to not use inodes as part of the calculation but then you'd want to make sure the timestamps on the files are exactly the same, to ensure the same ETag gets generated for all servers.
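To make the ETag round trip concrete, here is a framework-agnostic sketch in Python (the function names are illustrative; in practice Apache computes and checks the ETag for you):

import hashlib

def etag_for(body):
    # Content-based ETag: changes exactly when the bytes change.
    return '"%s"' % hashlib.md5(body).hexdigest()

def build_response(body, request_headers):
    etag = etag_for(body)
    if request_headers.get("If-None-Match") == etag:
        # The client's copy is still current: 304, no body re-sent.
        return 304, {"ETag": etag}, b""
    # Otherwise send the file along with both validation and expiry headers.
    return 200, {"ETag": etag, "Cache-Control": "max-age=86400"}, body

status, headers, payload = build_response(b"flash file bytes", {})
print(status, headers)

The Expires/max-age part decides whether the request is made at all; the ETag part decides whether the body has to be re-sent when it is.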
I've looked around but haven't been able to figure out if I should use both an ETag and an Expires Header or one or the other. What I'm trying to do is make sure that my flash files (and other images and what not only get updated when there is a change to those files. I don't want to do anything special like changing the filename or putting some weird chars on the end of the url to make it not get cached. Also, is there anything I need to do programatically on my end in my PHP scripts to support this or is it all Apache?
ETag vs Header Expires
If this is about .css and .js changes, then one way is "cache busting" by appending something like "_versionNo" to the file name for each release. For example: script_1.0.css // This is the URL for release 1.0 script_1.1.css // This is the URL for release 1.1 script_1.2.css // etc. or after the file name: script.css?v=1.0 // This is the URL for release 1.0 script.css?v=1.1 // This is the URL for release 1.1 script.css?v=1.2 // etc. You can check this link to see how it could work.
Is there a way I can put some code on my page so when someone visits a site, it clears the browser cache, so they can view the changes? Languages used: ASP.NET, VB.NET, and of course HTML, CSS, and jQuery.
Force browser to clear cache
Yes. Google Collections, or Guava as it is named now has something called MapMaker which can do exactly that. ConcurrentMap<Key, Graph> graphs = new MapMaker() .concurrencyLevel(4) .softKeys() .weakValues() .maximumSize(10000) .expiration(10, TimeUnit.MINUTES) .makeComputingMap( new Function<Key, Graph>() { public Graph apply(Key key) { return createExpensiveGraph(key); } }); Update: As of guava 10.0 (released September 28, 2011) many of these MapMaker methods have been deprecated in favour of the new CacheBuilder: LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder() .maximumSize(10000) .expireAfterWrite(10, TimeUnit.MINUTES) .build( new CacheLoader<Key, Graph>() { public Graph load(Key key) throws AnyException { return createExpensiveGraph(key); } });
Do any of you know of a Java Map or similar standard data store that automatically purges entries after a given timeout? This means aging, where the old expired entries “age-out” automatically. I know of ways to implement the functionality myself and have done it several times in the past, so I'm not asking for advice in that respect, but for pointers to a good reference implementation. WeakReference based solutions like WeakHashMap are not an option, because my keys are likely to be non-interned strings and I want a configurable timeout that's not dependent on the garbage collector. Ehcache is also an option I wouldn't like to rely on because it needs external configuration files. I am looking for a code-only solution.
Java time-based map/cache with expiring keys [closed]
I had the exact same problem - I was running my nginx in Virtualbox. I did not have caching turned on. But looks like sendfile was set to on in nginx.conf and that was causing the problem. @kolbyjack mentioned it above in the comments. When I turned off sendfile - it worked fine. This is because: Sendfile is used to ‘copy data between one file descriptor and another‘ and apparently has some real trouble when run in a virtual machine environment, or at least when run through Virtualbox. Turning this config off in nginx causes the static file to be served via a different method and your changes will be reflected immediately and without question It is related to this bug: https://www.virtualbox.org/ticket/12597
I use nginx to as the front server, I have modified the CSS files, but nginx is still serving the old ones. I have tried to restart nginx, to no success and I have Googled, but not found a valid way to clear it. Some articles say we can just delete the cache directory: var/cache/nginx, but there is no such directory on my server. What should I do now?
How to clear the cache of nginx?
Delete the artifacts (or the full local repo) from c:\Users\<username>\.m2\repository by hand. Notes from the comments: the folder location might vary on your system; on Windows, open file handles can block the deletion ("The action can't be completed because the folder or file in it is open in another program"); and if wiping out ~/.m2/repository alone does not help, mvn dependency:purge-local-repository worked for several commenters.
Recently, Apache Maven seems to be having caching issues. Performing clean installs on our projects using Windows Vista or Windows 7 sometimes produce artifacts with the same data as a previous build even though the newer artifact's files should have been updated. Is there any way to clear this cache to force maven to always trigger a clean build of the local artifact that should be built? In particular, we're having issues building a webapp with the war plugin. Maven version is 3.0.3. War plugin version is 2.1.1.
How do you clear Apache Maven's cache?
Command-Option-Shift-K to clean out the build folder. Even better, quit Xcode and clean out ~/Library/Developer/Xcode/DerivedData manually. Remove all its contents because there's a bug where Xcode will run an old version of your project that's in there somewhere. (Xcode 4.2 will show you the Derived Data folder: choose Window > Organizer and switch to the Projects tab. Click the right-arrow to the right of the Derived Data folder name.) In the simulator, choose iOS Simulator > Reset Content and Settings. Finally, for completeness, you can delete the contents of /var/folders; some caching happens there too. WARNING: Deleting /var/folders can cause issues, and you may need to repair or reinstall your operating system after doing so. EDIT: I have just learned that if you are afraid to grapple with /var/folders/ you can use the following command in the Terminal to delete in a more targeted way: rm -rf "$(getconf DARWIN_USER_CACHE_DIR)/org.llvm.clang/ModuleCache" EDIT: For certain Swift-related problems I have found it useful to delete ~/Library/Caches/com.apple.dt.Xcode. You lose a lot when you do this, like your spare copies of the downloaded documentation doc sets, but it can be worth it.
Jonathan suggest here: Xcode Includes .xib files that have been deleted! that cleaning all targets and empty the caches will fix the problem with Xcode including deleted .xib files but I cannot find a way to empty the cache in Xcode 4. How to do that in Xcode 4?
How to Empty Caches and Clean All Targets Xcode 4 and later
Python 3.8 functools.cached_property decorator https://docs.python.org/dev/library/functools.html#functools.cached_property cached_property from Werkzeug was mentioned at: https://stackoverflow.com/a/5295190/895245 but a supposedly derived version will be merged into 3.8, which is awesome. This decorator can be seen as caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments. The docs say: @functools.cached_property(func) Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable. Example: class DataSet: def __init__(self, sequence_of_numbers): self._data = sequence_of_numbers @cached_property def stdev(self): return statistics.stdev(self._data) @cached_property def variance(self): return statistics.variance(self._data) New in version 3.8. Note: This decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don't provide a __dict__ attribute at all).
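A tiny usage sketch (Python 3.8+) showing that the decorated body runs only once per instance:

import functools
import statistics

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @functools.cached_property
    def stdev(self):
        print("computing stdev...")   # printed only on the first access
        return statistics.stdev(self._data)

ds = DataSet([1, 2, 3, 4, 5])
print(ds.stdev)   # computes and caches on the instance
print(ds.stdev)   # served from ds.__dict__, no recomputation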
Consider the following: @property def name(self): if not hasattr(self, '_name'): # expensive calculation self._name = 1 + 1 return self._name I'm new, but I think the caching could be factored out into a decorator. Only I didn't find one like it ;) PS the real calculation doesn't depend on mutable values
Is there a decorator to simply cache function return values?
Reference the System.Web dll in your model and use System.Web.Caching.Cache public string[] GetNames() { string[] names = Cache["names"] as string[]; if(names == null) //not in cache { names = DB.GetNames(); Cache["names"] = names; } return names; } A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I have read lots of information about page caching and partial page caching in a MVC application. However, I would like to know how you would cache data. In my scenario I will be using LINQ to Entities (entity framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and on the second call to use the cached version if it exists. Can anyone show an example of how this would work, where this should be implemented (model?) and if it would work. I have seen this done in traditional ASP.NET apps , typically for very static data.
How to cache data in a MVC application
Instead of disabling caching for each single GET-request, I disable it globally in the $httpProvider: myModule.config(['$httpProvider', function($httpProvider) { //initialize get if not there if (!$httpProvider.defaults.headers.get) { $httpProvider.defaults.headers.get = {}; } // Answer edited to include suggestions from comments // because previous version of code introduced browser-related errors //disable IE ajax request caching $httpProvider.defaults.headers.get['If-Modified-Since'] = 'Mon, 26 Jul 1997 05:00:00 GMT'; // extra $httpProvider.defaults.headers.get['Cache-Control'] = 'no-cache'; $httpProvider.defaults.headers.get['Pragma'] = 'no-cache'; }]);
All the ajax calls that are sent from the IE are cached by Angular and I get a 304 response for all the subsequent calls. Although the request is the same, the response is not going be the same in my case. I want to disable this cache. I tried adding the cache attribute to $http.get but still it didn't help. How can this issue be resolved?
Angular IE Caching issue for $http
You have to use a more complex function like $.ajax() if you want to control caching on a per-request basis. Or, if you just want to turn it off for everything, put this at the top of your script: $.ajaxSetup ({ // Disable caching of AJAX responses cache: false });
I have the following code making a GET request on a URL: $('#searchButton').click(function() { $('#inquiry').load('/portal/?f=searchBilling&pid=' + $('#query').val()); }); But the returned result is not always reflected. For example, I made a change in the response that spit out a stack trace but the stack trace did not appear when I clicked on the search button. I looked at the underlying PHP code that controls the ajax response and it had the correct code and visiting the page directly showed the correct result but the output returned by .load was old. If I close the browser and reopen it it works once and then starts to return the stale information. Can I control this by jQuery or do I need to have my PHP script output headers to control caching?
Stop jQuery .load response from being cached
If the cache line containing the byte or word you're loading is not already present in the cache, your CPU will request the 64 bytes that begin at the cache line boundary (the largest address below the one you need that is multiple of 64). Modern PC memory modules transfer 64 bits (8 bytes) at a time, in a burst of eight transfers, so one command triggers a read or write of a full cache line from memory. (DDR1/2/3/4 SDRAM burst transfer size is configurable up to 64B; CPUs will select the burst transfer size to match their cache line size, but 64B is common) As a rule of thumb, if the processor can't forecast a memory access (and prefetch it), the retrieval process can take ~90 nanoseconds, or ~250 clock cycles (from the CPU knowing the address to the CPU receiving data). By contrast, a hit in L1 cache has a load-use latency of 3 to 5 cycles, and a store-reload has a store-forwarding latency of 4 or 5 cycles on modern x86 CPUs. Things are similar on other architectures. Further reading: Ulrich Drepper's What Every Programmer Should Know About Memory. The DRAM and cache details are still relevant. See also How much of ‘What Every Programmer Should Know About Memory’ is still valid? - The software-prefetch advice is a bit outdated: modern HW prefetchers are smarter, and hyperthreading is way better than in P4 days (so a prefetch thread is typically a waste). Also, the x86 tag wiki has lots of performance links for that architecture.
I understand that the processor brings data into the cache via cache lines, which - for instance, on my Atom processor - brings in about 64 bytes at a time, whatever the size of the actual data being read. My question is: Imagine that you need to read one byte from memory, which 64 bytes will be brought into the cache? The two possibilities I can see is that, either the 64 bytes start at the closest 64 bytes boundary below the byte of interest, or the 64 bytes are spread around the byte in some predetermined way (for instance, half under, half above, or all above). Which is it?
How do cache lines work?
Here is a good explanation; check it out: http://www.mssqltips.com/tip.asp?tip=1360 CHECKPOINT; GO DBCC DROPCLEANBUFFERS; GO From the linked article: If all of the performance testing is conducted in SQL Server, the best approach may be to issue a CHECKPOINT and then issue the DBCC DROPCLEANBUFFERS command. Although the CHECKPOINT process is an automatic internal system process in SQL Server and occurs on a regular basis, it is important to issue this command to write all of the dirty pages for the current database to disk and clean the buffers. Then the DBCC DROPCLEANBUFFERS command can be executed to remove all buffers from the buffer pool.
I've got a simple query running against SQL Server 2005 SELECT * FROM Table WHERE Col = 'someval' The first time I execute the query can take > 15 secs. Subsequent executes are back in < 1 sec. How can I get SQL Server 2005 not to use any cached results? I've tried running DBCC DROPCLEANBUFFERS DBCC FREEPROCCACHE But this seems to have no effect on the query speed (still < 1 sec).
How can I clear the SQL Server query cache?
Redis is a remote data structure server. It is certainly slower than just storing the data in local memory (since it involves socket roundtrips to fetch/store the data). However, it also brings some interesting properties: Redis can be accessed by all the processes of your applications, possibly running on several nodes (something local memory cannot achieve). Redis memory storage is quite efficient, and done in a separate process. If the application runs on a platform whose memory is garbage collected (node.js, java, etc ...), it allows handling a much bigger memory cache/store. In practice, very large heaps do not perform well with garbage collected languages. Redis can persist the data on disk if needed. Redis is a bit more than a simple cache: it provides various data structures, various item eviction policies, blocking queues, pub/sub, atomicity, Lua scripting, etc ... Redis can replicate its activity with a master/slave mechanism in order to implement high-availability. Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required.
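A minimal cache-aside sketch using the redis-py client (this assumes a Redis server on localhost and the redis package installed; the key name and the stand-in "database call" are illustrative):

import json
import redis

r = redis.Redis(host="localhost", port=6379)

def get_user_profile(user_id):
    key = "user:%d:profile" % user_id
    cached = r.get(key)                       # visible to every process/node
    if cached is not None:
        return json.loads(cached)
    profile = {"id": user_id, "name": "..."}  # stand-in for an expensive DB call
    r.set(key, json.dumps(profile), ex=300)   # expire after 5 minutes
    return profile

print(get_user_profile(42))

Unlike a local dictionary, every worker that connects to the same Redis instance sees the same cached entry and the same expiry.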
I have not used Redis yet, but I have heard about it and plan to try using it for caching data. I have heard that Redis uses memory as a cache store database. What's the point of Redis, since I can use an object or dictionary to store data? Like this: var cache = { key: { }, key: { } ... } What are the advantages of using Redis?
Redis cache vs. using memory directly
For Development you can also deactivate the browser cache - In Chrome Dev Tools on the bottom right click on the gear and tick the option Disable cache (while DevTools is open) Update: In Firefox there is the same option in Debugger -> Settings -> Advanced Section (checked for Version 33) Update 2: Although this option appears in Firefox some report it doesn't work. I suggest using firebug and following hadaytullah answer.
I have problem with caching partials in AngularJS. In my HTML page I have: <body> <div ng-view></div> <body> where my partials are loaded. When I change HTML code in my partial, browser still load old data. Is there any workaround?
AngularJS disable partial caching on dev machine
Angular's $http has a cache built in. According to the docs: cache – {boolean|Object} – A boolean value or object created with $cacheFactory to enable or disable caching of the HTTP response. See $http Caching for more information. Boolean value So you can set cache to true in its options: $http.get(url, { cache: true}).success(...); or, if you prefer the config type of call: $http({ cache: true, url: url, method: 'GET'}).success(...); Cache Object You can also use a cache factory: var cache = $cacheFactory('myCache'); $http.get(url, { cache: cache }) You can implement it yourself using $cacheFactory (especially handly when using $resource): var cache = $cacheFactory('myCache'); var data = cache.get(someKey); if (!data) { $http.get(url).success(function(result) { data = result; cache.put(someKey, data); }); }
I want to be able to create a custom AngularJS service that makes an HTTP 'Get' request when its data object is empty and populates the data object on success. The next time a call is made to this service, I would like to bypass the overhead of making the HTTP request again and instead return the cached data object. Is this possible?
Cache an HTTP 'Get' service response in AngularJS?
HttpContext.Current.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1)); HttpContext.Current.Response.Cache.SetValidUntilExpires(false); HttpContext.Current.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches); HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache); HttpContext.Current.Response.Cache.SetNoStore(); All requests get routed through default.aspx first - so assuming you can just pop in code behind there.
I am looking for a method to disable the browser cache for an entire ASP.NET MVC Website I found the following method: Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache); Response.Cache.SetNoStore(); And also a meta tag method (it won't work for me, since some MVC actions send partial HTML/JSON through Ajax, without a head, meta tag). <meta http-equiv="PRAGMA" content="NO-CACHE"> But I am looking for a simple method to disable the browser cache for an entire website.
Disable browser cache for entire ASP.NET website
There is now a php artisan view:clear command for this task since Laravel 5.1
I notice that Laravel cache views are stored in ~/storage/framework/views. Over time, they get to eat up my space. How do I delete them? Is there any command that could? I tried php artisan cache:clear, but it is not clearing the views cache. With that, I have to manually delete the files in the said folder. Also, how do I disable the views caching?
Laravel 5 Clear Views Cache
You can use std::hardware_destructive_interference_size since C++17. It's defined as: Minimum offset between two objects to avoid false sharing. Guaranteed to be at least alignof(std::max_align_t)
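Outside of C++17, on Linux the same number is also exposed through sysfs; here is a small Python sketch of that alternative (Linux-only, and the exact index path can vary with cache level and CPU topology):

# Linux-only: read the L1 data cache line size from sysfs.
path = "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size"
try:
    with open(path) as f:
        print("cache line size:", int(f.read().strip()), "bytes")
except OSError:
    print("sysfs entry not available on this system")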
All platforms welcome, please specify the platform for your answer. A similar question: How to programmatically get the CPU cache page size in C++?
Programmatically get the cache line size?
"Buffers" represent how much portion of RAM is dedicated to cache disk blocks. "Cached" is similar like "Buffers", only this time it caches pages from file reading. quote from: https://web.archive.org/web/20110207101856/http://www.linuxforums.org/articles/using-top-more-efficiently_89.html
To me it's not clear what's the difference between the two Linux memory concepts : buffer and cache. I've read through this post and it seems to me that the difference between them is the expiration policy: buffer's policy is first-in, first-out cache's policy is Least Recently Used. Am I right? In particular, I'm looking at the two commands: free and vmstat james@utopia:~$ vmstat -S M procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 5 0 0 173 67 912 0 0 19 59 75 1087 24 4 71 1 james@utopia:~$ free -m total used free shared buffers cached Mem: 2007 1834 172 0 67 914 -/+ buffers/cache: 853 1153 Swap: 2859 0 2859
What is the difference between buffer and cache memory in Linux?
I noticed it myself, and found the files inside the backup folder. You can check where it is using Menu:Settings -> Preferences -> Backup. Note : My NPP installation is portable, and on Windows, so YMMV.
On the most recent versions of Notepad++, when the application is closed, unsaved files are maintained when the application is restarted. I presume that those files are cached in temporary files. What is the location of those file(s)? Thank you
Notepad++ cached files location
I believe this is how it works. From what I remember reading, there is a proxy class generated that intercepts all requests and responds with the cached value, but 'internal' calls within the same class will not get the cached value. From https://code.google.com/p/ehcache-spring-annotations/wiki/UsingCacheable Only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual cache interception at runtime even if the invoked method is marked with @Cacheable.
Spring cache is not working when calling cached method from another method of the same bean. Here is an example to explain my problem in clear way. Configuration: <cache:annotation-driven cache-manager="myCacheManager" /> <bean id="myCacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"> <property name="cacheManager" ref="myCache" /> </bean> <!-- Ehcache library setup --> <bean id="myCache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:shared="true"> <property name="configLocation" value="classpath:ehcache.xml"></property> </bean> <cache name="employeeData" maxElementsInMemory="100"/> Cached service : @Named("aService") public class AService { @Cacheable("employeeData") public List<EmployeeData> getEmployeeData(Date date){ ..println("Cache is not being used"); ... } public List<EmployeeEnrichedData> getEmployeeEnrichedData(Date date){ List<EmployeeData> employeeData = getEmployeeData(date); ... } } Result : aService.getEmployeeData(someDate); output: Cache is not being used aService.getEmployeeData(someDate); output: aService.getEmployeeEnrichedData(someDate); output: Cache is not being used The getEmployeeData method call uses cache employeeData in the second call as expected. But when the getEmployeeData method is called within the AService class (in getEmployeeEnrichedData), Cache is not being used. Is this how spring cache works or am i missing something ?
Spring Cache @Cacheable - not working while calling from another method of the same bean
There are much more complicated solutions, but a very easy, simple one is just to add a random query string to your CSS include. Such as src="/css/styles.css?v={random number/string}" If you're using php or another server-side language, you can do this automatically with time(). So it would be styles.css?v=<?=time();?> This way, the query string will be new every single time. Like I said, there are much more complicated solutions that are more dynamic, but in testing purposes this method is top (IMO).
I'm currently editing a .css file inside of Visual Studio 2012 (in debug mode). I'm using Chrome as my browser. When I make changes to my application's .css file inside of Visual Studio and save, refreshing the page will not load with the updated change in my .css file. I think the .css file is still cached. I have tried: CTRL / F5 In Visual Studio 2012, Go to project properties, Web tab Choose Start External Program in the Start Action section Paste or browse to the path for Google Chrome (Mine is C:\Users\xxx\AppData\Local\Google\Chrome\Application\chrome.exe) In the Command line arguments box put -incognito Used the Chrome developer tools, click on the "gear" icon, checked "Disable Cache." Nothing seems to work unless I manually stop debugging, (close out of Chrome), restart the application (in debug). Is there any way to force Chrome to always reload all css changes and reload the .css file? Update: 1. In-line style changes in my .aspx file are picked up when I refresh. But changes in a .css file does not. 2. It is an ASP.NET MVC4 app so I click on a hyperlink, which does a GET. Doing that, I don't see a new request for the stylesheet. But clicking F5, the .css file is reloaded and the Status code (on the network tab) is 200.
How to force Chrome browser to reload .css file while debugging in Visual Studio?
I believe you can use... RESET QUERY CACHE; ...if the user you're running as has reload rights. Alternatively, you can defragment the query cache via... FLUSH QUERY CACHE; See the Query Cache Status and Maintenance section of the MySQL manual for more information.
Is there any way to clear mysql query cache without restarting mySQL server?
Clear MySQL query cache without restarting server
The benefit of write-through to main memory is that it simplifies the design of the computer system. With write-through, the main memory always has an up-to-date copy of the line. So when a read is done, main memory can always reply with the requested data. If write-back is used, sometimes the up-to-date data is in a processor cache, and sometimes it is in main memory. If the data is in a processor cache, then that processor must stop main memory from replying to the read request, because the main memory might have a stale copy of the data. This is more complicated than write-through. Also, write-through can simplify the cache coherency protocol because it doesn't need the Modify state. The Modify state records that the cache must write back the cache line before it invalidates or evicts the line. In write-through a cache line can always be invalidated without writing back since memory already has an up-to-date copy of the line. One more thing - on a write-back architecture software that writes to memory-mapped I/O registers must take extra steps to make sure that writes are immediately sent out of the cache. Otherwise writes are not visible outside the core until the line is read by another processor or the line is evicted.
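As a purely illustrative toy model (not how real hardware is implemented), the following Python sketch counts main-memory writes under each policy, which shows why write-back saves memory traffic while write-through keeps memory always up to date:

class ToyCache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.lines = {}        # address -> (value, dirty flag)
        self.memory = {}
        self.memory_writes = 0

    def write(self, addr, value):
        if self.write_back:
            self.lines[addr] = (value, True)   # defer: just mark the line dirty
        else:
            self.lines[addr] = (value, False)
            self.memory[addr] = value          # write-through: memory updated now
            self.memory_writes += 1

    def flush(self):
        for addr, (value, dirty) in self.lines.items():
            if dirty:
                self.memory[addr] = value      # write-back pays the cost here
                self.memory_writes += 1

for policy in (False, True):
    cache = ToyCache(write_back=policy)
    for i in range(100):
        cache.write(0x10, i)                   # 100 writes to the same line
    cache.flush()
    print("write-back" if policy else "write-through",
          "-> memory writes:", cache.memory_writes)

With 100 writes to one line, write-through touches memory 100 times while write-back touches it once at flush; the price is the extra bookkeeping (the dirty/Modify state) described above.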
My understanding is that the main difference between the two methods is that in the "write-through" method data is written to the main memory through the cache immediately, while in "write-back" data is written at a "later time". We still need to wait for the memory at that "later time", so what is the benefit of "write-through"?
Write-back vs Write-Through caching?
If I were doing this again from scratch today, I'd use Guava's CacheBuilder.
Please don't say EHCache or OSCache, etc. Assume for purposes of this question that I want to implement my own using just the SDK (learning by doing). Given that the cache will be used in a multithreaded environment, which datastructures would you use? I've already implemented one using LinkedHashMap and Collections#synchronizedMap, but I'm curious if any of the new concurrent collections would be better candidates. UPDATE: I was just reading through Yegge's latest when I found this nugget: If you need constant-time access and want to maintain the insertion order, you can't do better than a LinkedHashMap, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas. I was thinking almost exactly the same thing before I went with the LinkedHashMap + Collections#synchronizedMap implementation I mentioned above. Nice to know I hadn't just overlooked something. Based on the answers so far, it sounds like my best bet for a highly concurrent LRU would be to extend ConcurrentHashMap using some of the same logic that LinkedHashMap uses.
How would you implement an LRU cache in Java?
You can use the PHP function apc_clear_cache. Calling apc_clear_cache() will clear the system cache and calling apc_clear_cache('user') will clear the user cache.
I need to clear all APC cache entries when I deploy a new version of the site. APC.php has a button for clearing all opcode caches, but I don't see buttons for clearing all User Entries, or all System Entries, or all Per-Directory Entries. Is it possible to clear all cache entries via the command-line, or some other way?
How to clear APC cache entries?
Memoization is a specific form of caching that involves caching the return value of a function based on its parameters. Caching is a more general term; for example, HTTP caching is caching but not memoization. Wikipedia says: Although related to caching, memoization refers to a specific case of this optimization, distinguishing it from forms of caching such as buffering or page replacement.
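To make the distinction concrete, memoization in Python is often just a decorator that keys a cache purely on the function's arguments; this sketch uses the standard library's functools.lru_cache:

    from functools import lru_cache

    @lru_cache(maxsize=None)          # memoize: the key is the argument tuple
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(80))   # fast, because every sub-result is computed only once

A general-purpose cache, by contrast, can be keyed on anything (a URL, a query, a rendered page) and usually adds eviction and expiry policies on top.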
I would like to know what the actual difference between caching and memoization is. As I see it, both involve avoiding repeated function calls to get data by storing it. What's the core difference between the two?
What is the difference between Caching and Memoization?
It's possible, you can simply use jQuery to substitute the 'meta tag' that references the cache status with an event handler / button, and then refresh, easy, $('.button').click(function() { $.ajax({ url: "", context: document.body, success: function(s,x){ $('html[manifest=saveappoffline.appcache]').attr('content', ''); $(this).html(s); } }); }); NOTE: This solution relies on the Application Cache that is implemented as part of the HTML 5 spec. It also requires server configuration to set up the App Cache manifest. It does not describe a method by which one can clear the 'traditional' browser cache via client- or server-side code, which is nigh impossible to do.
I am looking for a way to programmatically empty the browser cache. I am doing this because the application caches confidential data and I'd like to remove it when you press "log out". This would happen either via the server or JavaScript. Of course, using the software on a foreign/public computer is still discouraged, as there are more dangers like key loggers that you just can't defeat at the software level.
How to programmatically empty browser cache?
Add a random query string to the src You could either do this manually by incrementing the querystring each time you make a change: <script src="test.js?version=1"></script> Or if you are using a server side language, you could automatically generate this: ASP.NET: <script src="test.js?rndstr=<%= getRandomStr() %>"></script> More info on cache-busting can be found here: https://www.curtiscode.dev/post/front-end-dev/what-is-cache-busting
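If you'd rather not bump the number by hand, a common variant is to derive the token from the file's contents, so the URL changes only when the file actually does. A rough server-side sketch in Python (the helper name and file path are made up for illustration):

    import hashlib
    import os

    def busted_url(path):
        # Append a short content hash; it only changes when the file changes.
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()[:8]
        return "/%s?v=%s" % (os.path.basename(path), digest)

    print(busted_url("static/test.js"))   # e.g. /test.js?v=3f2a9c1b

Whatever templating language you use, the idea is the same: compute the token once per deploy (or per request, if cheap) and interpolate it into the script tag.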
I have a simple html: <html> <body> <head> <meta charset="utf-8"> <meta http-equiv='cache-control' content='no-cache'> <meta http-equiv='expires' content='0'> <meta http-equiv='pragma' content='no-cache'> <script src="test.js"></script> </body> </html> In test.js I changed a Javascript function, but my browser is caching this file. How to disable cache for script src?
How to prevent caching of my Javascript file? [duplicate]
Use the -q flag for quiet mode, and tell wget to write to stdout with -O- (uppercase O) and redirect to /dev/null to discard the output: wget -qO- $url &> /dev/null. The > operator redirects application output (to a file). If > is preceded by an ampersand, the shell redirects all output (error and normal) to the file to the right of >. If you don't specify the ampersand, then only normal output is redirected. ./app &> file # redirect error and standard output to file ./app > file # redirect standard output to file ./app 2> file # redirect error output to file. If file is /dev/null then everything is discarded.
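If you are scripting the warm-up anyway, the same fetch-and-discard pattern can be expressed in Python with the requests library (the URL list is a placeholder); the body is read into memory and simply dropped, so nothing is written to disk:

    import requests

    urls = [
        "http://example.com/page1",   # placeholder URLs to warm
        "http://example.com/page2",
    ]

    for url in urls:
        resp = requests.get(url, timeout=30)
        print(url, resp.status_code)   # body is discarded when resp goes away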
I'm using Wget to make http requests to a fresh web server. I am doing this to warm the MySQL cache. I do not want to save the files after they are served. wget -nv -do-not-save-file $url Can I do something like -do-not-save-file with wget?
How do I request a file but not save it with Wget? [closed]
These are what's known as Shadow Copy Folders. Simplistically (and I really mean it): when ASP.NET runs your app for the first time, it copies any assemblies found in the /bin folder, copies any source code files (found, for example, in the App_Code folder) and parses your .aspx and .ascx files into C# source files. ASP.NET then builds/compiles all this code into a runnable application. One advantage of doing this is that it prevents the .NET assembly DLLs (in the /bin folder) from becoming locked by the ASP.NET worker process and thus not updatable. ASP.NET watches for file changes in your website and will, if necessary, begin the whole process all over again. Theoretically the folder shouldn't need any maintenance, but from time to time, and only very rarely, you may need to delete its contents. That said, I work for a hosting company where we run up to 1200 sites per shared server, and I haven't had to touch this folder on any of the 250 or so machines for years. This is outlined in the MSDN article Understanding ASP.NET Dynamic Compilation
I've discovered this folder in C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files and have a few questions. What does ASP.NET use this folder for, and what sort of files are stored here? How does a file get stored here, and when is it updated? Does the folder need any sort of maintenance?
What is the "Temporary ASP.NET Files" folder for?
Is it currently only possible to expire an entire key/value pair? As far as I know, and also according to the key commands and the documentation about expiration, you can currently set an expiration only on a specific key and not on its underlying data structure. However, there is a discussion on Google Groups about this functionality that outlines alternative solutions.
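One of the alternative solutions usually outlined there is a sorted set with the insertion timestamp as the score, trimming old members yourself on read. A rough sketch with redis-py (the key name and one-hour window are examples, and it assumes a Redis server on localhost):

    import time
    import redis

    r = redis.Redis()
    KEY, TTL = "recent:items", 3600   # example key, one hour

    def add(value):
        r.zadd(KEY, {value: time.time()})          # score = insertion time

    def get_live():
        cutoff = time.time() - TTL
        r.zremrangebyscore(KEY, "-inf", cutoff)    # purge members older than TTL
        return r.zrange(KEY, 0, -1)                # whatever is left is < 1 hour old

    add("hello")
    print(get_live())

The trade-off is that expiry happens lazily (on read) or via a periodic job, rather than being enforced by Redis itself.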
Is it currently only possible to expire an entire key/value pair? What if I want to add values to a List type structure and have them get auto removed 1 hour after insertion. Is that currently possible, or would it require running a cron job to do the purging manually?
Redis: possible to expire an element in an array or sorted set?
The param ?v=1.123 is a query string, and the browser will therefore treat it as a different URL from, say, ?v=1.0, causing it to load the file from the server rather than from the cache, which is what you want. And the browser will assume the source stays the same the next time you request ?v=1.123, so it will cache it under that string. It will therefore remain cached, however your server is set up, until you move to ?v=1.124 and so on.
We want to cache bust on production deploys, but not waste a bunch of time off the bat figuring out a system for doing so. My thought was to apply a param to the end of css and js files with the current version number: <link rel="stylesheet" href="base_url.com/file.css?v=1.123"/> Two questions: Will this effectively break the cache? Will the param cause the browser to then never cache the response from that url since the param indicates that this is dynamic content?
Cache busting via params
I finally figured this out - http://blog.serendeputy.com/posts/how-to-prevent-browsers-from-caching-a-page-in-rails/ - put the following in application_controller.rb. For Ruby on Rails 5 and later: class ApplicationController < ActionController::Base before_action :set_cache_headers private def set_cache_headers response.headers["Cache-Control"] = "no-cache, no-store" response.headers["Pragma"] = "no-cache" response.headers["Expires"] = "Mon, 01 Jan 1990 00:00:00 GMT" end end For Ruby on Rails 4 and older versions: class ApplicationController < ActionController::Base before_filter :set_cache_headers private def set_cache_headers response.headers["Cache-Control"] = "no-cache, no-store" response.headers["Pragma"] = "no-cache" response.headers["Expires"] = "Mon, 01 Jan 1990 00:00:00 GMT" end end
Ubuntu → Apache → Phusion Passenger → Rails 2.3. The main part of my site reacts to your clicks. So, if you click on a link, it will send you on to the destination, and instantly regenerate your page. But, if you hit the back button, you don't see the new page. Unfortunately, it's not showing up without a manual refresh; it appears the browser is caching it. I want to make sure the browser does not cache the page. Separately, I do want to set far-future expiration dates for all my static assets. What's the best way to solve this? Should I solve this in Ruby on Rails? Apache? JavaScript? Alas. Neither of these suggestions forced the behavior I'm looking for. Maybe there's a JavaScript answer? I could have Ruby on Rails write out a timestamp in a comment, and then have the JavaScript code check to see if the times are within five seconds (or whatever works). If yes, then fine, but if no, then reload the page? Do you think this would work?
How to prevent browser page caching in Rails
I must clarify that no-cache does not mean do not cache. In fact, it means "revalidate with server" before using any cached response you may have, on every request. must-revalidate, on the other hand, only needs to revalidate when the resource is considered stale. If the server says that the resource is still valid then the cache can respond with its representation, thus alleviating the need for the server to resend the entire resource. no-store is effectively the full do not cache directive and is intended to prevent storage of the representation in any form of cache whatsoever. I say whatsoever, but note this in the RFC 2616 HTTP spec: History buffers MAY store such responses as part of their normal operation But this is omitted from the newer RFC 7234 HTTP spec in potentially an attempt to make no-store stronger, see: https://www.rfc-editor.org/rfc/rfc7234#section-5.2.1.5
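If you want to watch how a browser treats these directives, it is easy to serve them yourself; here is a minimal Flask sketch (the route and response body are made up for illustration):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/account")
    def account():
        resp = make_response("sensitive per-user data")
        # no-cache: a copy may be stored, but must be revalidated before reuse.
        # no-store: nothing may keep a copy at all; use this for secrets.
        resp.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
        return resp

    if __name__ == "__main__":
        app.run()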
I'm told that to prevent user-info leaking, "no-cache" alone in the response is not enough; "no-store" is also necessary. Cache-Control: no-cache, no-store After reading this spec http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, I'm still not quite sure why. My current understanding is that it is just for the intermediate cache server. Even if "no-cache" is in the response, the intermediate cache server can still save the content to non-volatile storage. The intermediate cache server will decide whether to use the saved content for subsequent requests. However, if "no-store" is in the response, the intermediate cache server is not supposed to store the content. So, it is safer. Is there any other reason we need both "no-cache" and "no-store"?
Why both no-cache and no-store should be used in HTTP response?
The only difference is that with Private you are not allowing proxies to cache the data that travels through them. In the end, it all boils down to the data contained in the pages/files you are sending. For example, your ISP could have an invisible proxy between you and the Internet that is caching web pages to reduce the amount of bandwidth needed and lower costs. By using Cache-Control: private, you are specifying that it shouldn't cache the page (but allowing the final user to do so). If you use Cache-Control: public, you are saying that it's okay for everyone to cache the page, and so the proxy would keep a copy. As a rule of thumb, if it's something everybody can access (for example, the logo in this page) Cache-Control: public might be better, because the more people that cache it, the less bandwidth you'll need. If it's something that is related to the connected user (for example, the HTML in this page includes my username, so it won't be useful to anyone else) Cache-Control: private will be better, as the proxies would be caching data that won't be requested by other users, and they might also be keeping data that you don't want to be kept on servers that you don't trust. And, of course, everything that is not public should have a private cache. Otherwise the data might be stored in an intermediate proxy server, where it could be accessed by anyone with access to it.
Can you please describe an example indicating the difference between Public and Private Cache-Control in ASP.NET applications hosted in IIS? I read in MSDN that the difference is the following: Public: Sets Cache-Control: public to specify that the response is cacheable by clients and shared (proxy) caches. Private: Default value. Sets Cache-Control: private to specify that the response is cacheable only on the client and not by shared (proxy server) caches. I am not sure I have completely understood the pros and cons of each choice. An example of when to use it or not would be great. For example, what should I do if I have two web servers hosting the same application? Is there anything to watch out for if I choose Private or Public?
Private vs Public in Cache-Control
You can click the settings icon in the top right corner ... | More Tools | Developer Tools | Network | Disable cache (while DevTools is open). On Windows, F12 or CTRL + SHIFT + I opens DevTools, while on Mac it is CMD + SHIFT + I. New path as of the Chrome update in September 2018: click the settings icon in the top right corner ... | Settings | Preferences | Developer Tools | Network | Disable cache (while DevTools is open).
I will make a change to my JS files but it won't really change in the browser, I have to rename the files every time so that it reloads it. Is there some sort of .htaccess command I can add or something to make it stop caching? It is even caching my html pages hard core. I need to reopen my entire browser just to see changes. Could it possibly be a server problem?
Stop Chrome Caching My JS Files
Like their names say: "Memory Cache" stores and loads resources to and from Memory (RAM). So this is much faster, but it is non-persistent. Content is available until you close the browser. "Disk Cache" is persistent. Cached resources are stored and loaded to and from disk. Simple Test: Open Chrome Developer Tools/Network. Reload a page multiple times. The table column "Size" will tell you that some files are loaded from "memory cache". Close the browser, open Developer Tools/Network again, and load that page again. All cached files are loaded from "disk cache" now, because your memory cache is empty.
I am interested in Chrome's memory cache vs disk cache. I use webpack with the common chunks plugin and generate all my files with chunkhash. How does the memory cache differ from the disk cache? When I reload my page, some files are loaded from the memory cache and some from the disk cache (bundle.js and image.jpg from the memory cache and the CSS from the disk cache). Sometimes it's different. Can we control that and choose what gets loaded from where? The memory cache seems to be faster than the disk cache.
Chrome memory cache vs disk cache
You can set specific cache-headers for a whole folder in either your root web.config: <?xml version="1.0" encoding="UTF-8"?> <configuration> <!-- Note the use of the 'location' tag to specify which folder this applies to--> <location path="images"> <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="00:00:15" /> </staticContent> </system.webServer> </location> </configuration> Or you can specify these in a web.config file in the content folder: <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="00:00:15" /> </staticContent> </system.webServer> </configuration> I'm not aware of a built in mechanism to target specific file types.
I would like to set up rules in IIS7 for static content caching in my ASP.NET website. I have seen these articles, which details how to do it using the <clientCache /> element in web.config: Client Cache <clientCache> (IIS.NET) Add Expires or Cache Control Header to static content in IIS (Stack Overflow) However, this setting appears to apply globally to all static content. Is there a way to do this just for certain directories or extensions? For example, I may have two directories which need separate cache settings: /static/images /content/pdfs Is it possible to set up rules for sending cache headers (max-age, expires, etc) based on extensions and folder paths? Please note, I must be able to do this via web.config because I don't have access to the IIS console.
How to configure static content cache per folder and extension in IIS7?
Take a look at Beaker: Home Page Caching Documentation Good quick-start article about using Beaker with Django (but useful in any other apps too)
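For a quick start, Beaker's documented usage is roughly the following (a sketch based on its docs; double-check the option names against the documentation linked above):

    from beaker.cache import CacheManager
    from beaker.util import parse_cache_config_options

    cache = CacheManager(**parse_cache_config_options({
        "cache.type": "memory",            # "file", "memcached", ... also work
    }))

    @cache.cache("expensive", expire=300)  # results are kept for 300 seconds
    def expensive_lookup(key):
        print("computing", key)
        return key.upper()

    print(expensive_lookup("a"))   # computed
    print(expensive_lookup("a"))   # served from the cache for the next 5 minutes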
I'm looking for a Python caching library but can't find anything so far. I need a simple dict-like interface where I can set keys and their expiration and get them back cached. Sort of something like: cache.get(myfunction, duration=300) which will give me the item from the cache if it exists or call the function and store it if it doesn't or has expired. Does anyone know something like this?
Is there a Python caching library?
angular-cli resolves this by providing an --output-hashing flag for the build command (versions 6/7, for later versions see here). Example usage: ng build --output-hashing=all Bundling & Tree-Shaking provides some details and context. Running ng help build, documents the flag: --output-hashing=none|all|media|bundles (String) Define the output filename cache-busting hashing mode. aliases: -oh <value>, --outputHashing <value> Although this is only applicable to users of angular-cli, it works brilliantly and doesn't require any code changes or additional tooling. Update A number of comments have helpfully and correctly pointed out that this answer adds a hash to the .js files but does nothing for index.html. It is therefore entirely possible that index.html remains cached after ng build cache busts the .js files. At this point I'll defer to How do we control web page caching, across all browsers?
We're currently working on a new project with regular updates that's being used daily by one of our clients. This project is being developed using Angular 2 and we're facing cache issues; that is, our clients are not seeing the latest changes on their machines. The problem is mainly with the HTML/CSS files, as the JS files seem to get updated properly without giving much trouble.
How to prevent Browser cache on Angular 2 site?
Enter "about:config" into the Firefox address bar and set: browser.cache.disk.enable = false browser.cache.memory.enable = false If developing locally, or using HTML5's new manifest attribute you may have to also set the following in about:config - browser.cache.offline.enable = false
During development I have to "clear cache" in Firefox all the time in order to make it use the latest version of JavaScript files. Is there some kind of setting (about:config) to turn off caching completely for JavaScript files? Or, if not, for all files?
How to turn off caching on Firefox?
You can use the expiringdict module: The core of the library is ExpiringDict class which is an ordered dictionary with auto-expiring values for caching purposes. In the description they do not talk about multithreading, so in order not to mess up, use a Lock.
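Putting the two suggestions together, a minimal sketch might look like this (the sizes and key name are arbitrary; expiringdict is installed with pip install expiringdict):

    import threading
    from expiringdict import ExpiringDict

    cache = ExpiringDict(max_len=1000, max_age_seconds=20)
    lock = threading.Lock()

    def mark(key):
        with lock:                 # guard against concurrent writers
            cache[key] = True

    def current_keys():
        with lock:
            # keys() can still list just-expired entries, so re-check with get()
            return [k for k in list(cache.keys()) if cache.get(k) is not None]

    mark("job-42")
    print(current_keys())          # ['job-42'] for roughly the next 20 seconds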
I have multiple threads running the same process that need to be able to notify each other that something should not be worked on for the next n seconds; it's not the end of the world if they do, however. My aim is to be able to pass a string and a TTL to the cache and be able to fetch all the strings that are in the cache as a list. The cache can live in memory and the TTLs will be no more than 20 seconds. Does anyone have any suggestions for how this can be accomplished?
Python in-memory cache with time to live
And now the punchline: use the system cache. URL url = new URL(strUrl); URLConnection connection = url.openConnection(); connection.setUseCaches(true); Object response = connection.getContent(); if (response instanceof Bitmap) { Bitmap bitmap = (Bitmap)response; } Provides both memory and flash-rom cache, shared with the browser. Grr. I wish somebody had told ME that before I wrote my own cache manager. (A caveat raised in the comments: getContent() may return an InputStream rather than a Bitmap, in which case it can be decoded with BitmapFactory.decodeStream().)
How can I cache images after they are downloaded from web?
Android image caching
A common and simple solution to this problem that feels like a hack but is fairly portable is to add a randomly generated query string to each request for the dynamic image. So, for example - <img src="image.png" /> Would become <img src="image.png?dummy=8484744" /> Or <img src="image.png?dummy=371662" /> From the point of view of the web-server the same file is accessed, but from the point of view of the browser no caching can be performed. The random number generation can happen either on the server when serving the page (just make sure the page itself isn't cached...), or on the client (using JavaScript). You will need to verify whether your web-server can cope with this trick.
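If the page itself is generated server-side, the dummy parameter can be appended while the markup is built. A tiny Python sketch of that idea (the helper name and file name are made up):

    import random

    def img_tag(src):
        # A throwaway query parameter makes the browser treat every request as new.
        return '<img src="%s?dummy=%d" />' % (src, random.randint(0, 10**9))

    print(img_tag("image.png"))   # e.g. <img src="image.png?dummy=371662" />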
I generate some images using a PHP lib. Sometimes the browser does not load the newly generated file. How can I disable caching just for images created dynamically by me? Note: I have to use the same name for the created images over time.
Disable cache for some images
This is probably APC related. For the people having this problem, please specify your .ini settings. Specifically, your apc.mmap_file_mask setting. For file-backed mmap, it should be set to something like: apc.mmap_file_mask=/tmp/apc.XXXXXX To mmap directly from /dev/zero, use: apc.mmap_file_mask=/dev/zero For POSIX-compliant shared-memory-backed mmap, use: apc.mmap_file_mask=/apc.shm.XXXXXX
I've occasionally run up against a server's memory allocation limit, particularly with a bloated application like Wordpress, but never encountered "Unable to allocate memory for pool" and having trouble tracking down any information. Does anyone know what this means? I've tried increasing the memory_limit without success. I also haven't made any significant changes to the application. One day there was no problem, the next day I hit this error.
What is causing "Unable to allocate memory for pool" in PHP?