session.refresh(entity), or entityManager.refresh(entity) if you use JPA, will give you fresh data from the database.
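Conceptually, a Hibernate session acts as a first-level cache (an identity map): repeated lookups return the cached object until you refresh it. A minimal illustrative sketch in Python (the Database and Session classes here are toys, not Hibernate API):

```python
# Toy model of a first-level cache with refresh(); not real Hibernate API.
class Database:
    def __init__(self):
        self.rows = {1: {"name": "old"}}

    def load(self, entity_id):
        return dict(self.rows[entity_id])  # fresh copy of the row

class Session:
    def __init__(self, db):
        self.db = db
        self.cache = {}  # first-level cache: id -> entity

    def get(self, entity_id):
        if entity_id not in self.cache:      # only hit the DB on a miss
            self.cache[entity_id] = self.db.load(entity_id)
        return self.cache[entity_id]

    def refresh(self, entity_id):
        # overwrite the cached entity with the current DB state
        self.cache[entity_id] = self.db.load(entity_id)
        return self.cache[entity_id]

db = Database()
session = Session(db)
entity = session.get(1)       # reads from the DB
db.rows[1]["name"] = "new"    # someone else updates the row
stale = session.get(1)        # served from the session cache: still "old"
fresh = session.refresh(1)    # forces a DB read: now "new"
```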
I am using Hibernate and Spring for my web application. During database operations, Hibernate is caching entities and returning them on the next request without reading the actual database. I know this reduces the load on the database and improves performance. But while this app is still under construction, I need to load data from the database on every request (for testing reasons). Is there any way to force a database read? I became sure about the caching from this log4j message: Returning cached instance of singleton bean 'HelloController' DEBUG [http-bio-8080-exec-42] - Last-Modified value for [/myApp/../somePageId.html] is: -1
Force Hibernate to read database and not return cached entity
You can decorate the InputStream being passed to POIFSFileSystem with a version that responds to close() by calling reset():

class ResetOnCloseInputStream extends InputStream {

    private final InputStream decorated;

    public ResetOnCloseInputStream(InputStream anInputStream) {
        if (!anInputStream.markSupported()) {
            throw new IllegalArgumentException("marking not supported");
        }
        anInputStream.mark(1 << 24); // magic constant: BEWARE
        decorated = anInputStream;
    }

    @Override
    public void close() throws IOException {
        decorated.reset();
    }

    @Override
    public int read() throws IOException {
        return decorated.read();
    }
}

Test case:

static void closeAfterInputStreamIsConsumed(InputStream is) throws IOException {
    int r;
    while ((r = is.read()) != -1) {
        System.out.println(r);
    }
    is.close();
    System.out.println("=========");
}

public static void main(String[] args) throws IOException {
    InputStream is = new ByteArrayInputStream("sample".getBytes());
    ResetOnCloseInputStream decoratedIs = new ResetOnCloseInputStream(is);
    closeAfterInputStreamIsConsumed(decoratedIs);
    closeAfterInputStreamIsConsumed(decoratedIs);
    closeAfterInputStreamIsConsumed(is);
}

EDIT 2: You can read the entire file into a byte[] (slurp mode) and then pass it to a ByteArrayInputStream.
I have an InputStream of a file and I use Apache POI components to read from it like this: POIFSFileSystem fileSystem = new POIFSFileSystem(inputStream); The problem is that I need to use the same stream multiple times and POIFSFileSystem closes the stream after use. What is the best way to cache the data from the input stream and then serve more input streams to different POIFSFileSystems? EDIT 1: By cache I meant store for later use, not as a way to speed up the application. Also, is it better to just read the input stream into an array or string and then create input streams for each use? EDIT 2: Sorry to reopen the question, but the conditions are somewhat different when working inside desktop and web applications. First of all, the InputStream I get from the org.apache.commons.fileupload.FileItem in my Tomcat web app doesn't support marking and thus cannot reset. Second, I'd like to be able to keep the file in memory for faster access and fewer I/O problems when dealing with files.
How to Cache InputStream for Multiple Use
If you're using RewriteRule, just use R instead of R=301. For other purposes, you'll have to clear your browser cache whenever you change a redirect. from https://stackoverflow.com/a/7749784/1066234
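For example (hypothetical paths; adjust to your site):

```apache
# Temporary (302) redirect while testing: browsers will not cache it
RewriteEngine On
RewriteRule ^old-page$ /new-page [R,L]

# Once the move is final, make it permanent (301) so browsers may cache it:
# RewriteRule ^old-page$ /new-page [R=301,L]
```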
We are moving a site from one CMS to another. The .htaccess file has been changed, and it needs to be refreshed for the new site to work right. From what I understand, the .htaccess file will only be refreshed if the browser cache is cleared? It is fine for those creating the site to clear our own cache, but is there a way to get the users' browsers to pick up the new .htaccess rules without each user clearing the cache manually on his own initiative?
Can I force .htaccess to refresh?
With some oversimplification, let me try to explain in what appears to be the context of your question, because there are multiple answers. It appears you are working with memory caching of directory structures. An inode in your context is a data structure that represents a file. A dentry is a data structure that represents a directory. These structures can be used to build a memory cache that represents the file structure on a disk. To get a directory listing, the OS can go to the dentry cache; if the directory is there, it lists its contents (a series of inodes). If it is not there, the OS goes to the disk and reads the directory into memory so that it can be used again. The page cache can contain any memory mapping to blocks on disk. That could conceivably be buffered I/O, memory-mapped files, paged areas of executables: anything that the OS can hold in memory from a file. Your commands flush these buffers.
Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system: To free pagecache: # echo 1 > /proc/sys/vm/drop_caches To free dentries and inodes: # echo 2 > /proc/sys/vm/drop_caches To free pagecache, dentries and inodes: # echo 3 > /proc/sys/vm/drop_caches I am trying to understand what exactly pagecache, dentries and inodes are. What exactly are they? Does freeing them up also remove the useful memcached and/or redis cache? -- Why am I asking this question? My Amazon EC2 server's RAM was getting filled up over the days, from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches. Then memory usage drops to 6% again.
what are pagecache, dentries, inodes?
HTTP/1.1 defines a selection of caching mechanisms; the Expires header is merely one, and there is also the Cache-Control header. To directly answer your question: for a resource returned with no Expires header, you must consider the returned Cache-Control directives. HTTP/1.1 defines no caching behaviour for a resource served with no cache-related headers. If a resource is sent with no Cache-Control or Expires headers, you must assume the client will make a regular (non-conditional) request the next time the same resource is requested. Any deviation from this behaviour qualifies the client as not being a fully conformant HTTP client, in which case the question becomes: what behaviour is to be expected from a non-conformant HTTP client? There is no way to answer that. HTTP caching is complex; to fully understand what a conformant client should do in a given scenario, read and understand the HTTP caching spec.
Assume browser default settings, and content is sent without Expires headers. The user visits the website and the browser caches images etc. The user does not close the browser or refresh the page, and continues to surf the site normally. Assume the browser doesn't dump the cache for any reason. The browser will cache images etc. as the user surfs, but it's unclear when it will issue a conditional GET request to ask about content freshness (apart from refreshing the page). If this is a browser-specific setting, where can I see its value (for browsers like Safari, IE, Firefox, Chrome)? [edit: yes, I understand that you should always send Expires headers. However, this research is aimed at understanding how the browser works with content without Expires headers.]
No expires header sent, content cached, how long until browser makes conditional GET request?
In your case, you can use Apollo's client.resetStore() method. It will clear the previous cache and then reload the active queries.
I am using apollo graphql in my react application. Say I have the following query: query ListQuery($filter: String!) { items(filter: $filter) { id name } } This query lets me query a list of items using a filter. Say I used filter string A, and then used filter string B. The cache would now contain two entries: ListQuery(A) and ListQuery(B). Now let's say I use a mutation to add a new item. How can I remove all the cached queries from the cache? So in this case, I want to remove both ListQuery(A) and ListQuery(B) from the cache. How can I accomplish this?
Apollo GraphQl react. How to clear query cache for all variable combinations?
The config:clear command just deletes the bootstrap/cache/config.php file, so you can simply delete this file manually.
Currently I'm working on a small project that requires me to host my Laravel app on shared hosting (please ignore the reason why I didn't use a VPS to host my Laravel project), and this hosting provider disables escapeshellarg() for security reasons, so I can't use php artisan config:cache to clear the config cache. Is there any workaround for this?
Laravel 5.3 - Clear config cache in shared hosting
The main difference between the two is that if an object with the same name already exists in the cache, the Insert method call will replace the object, whereas the Add method call will fail (taken from the Remarks paragraphs of the Add and Insert methods on their respective MSDN reference pages): Add: Calls to this method will fail if an item with the same key parameter is already stored in the Cache. To overwrite an existing Cache item using the same key parameter, use the Insert method. Insert: This method will overwrite an existing cache item whose key matches the key parameter. The other main difference is that with the Add method some parameters are mandatory, whereas Insert offers various overloads, and some parameters will be set to default values like the absolute or sliding expirations. So there is no difference between the Add and the Insert method with the exact same parameters, except that Insert will not fail (i.e. do nothing) if an object with the same name is already in the cache.
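The Add-vs-Insert semantics can be sketched in a few lines (a toy key-value store for illustration, not the real ASP.NET Cache API):

```python
# Toy model of the Add-vs-Insert semantics described above; not the real ASP.NET API.
class Cache:
    def __init__(self):
        self.items = {}

    def add(self, key, value):
        # Does nothing and returns the existing item if the key is taken,
        # mirroring how Cache.Add fails for an existing key.
        if key in self.items:
            return self.items[key]
        self.items[key] = value
        return None

    def insert(self, key, value):
        # Always overwrites, mirroring Cache.Insert.
        self.items[key] = value

c = Cache()
c.add("k", "first")
c.add("k", "second")    # no effect: "first" stays
c.insert("k", "third")  # overwrites: "third" wins
```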
I'm working on an ASP.NET web application and I want to implement caching, so I want to know the difference between HttpContext.Current.Cache.Insert and HttpContext.Current.Cache.Add and which one is better?
What is the difference between HttpContext.Current.Cache.Insert and HttpContext.Current.Cache.Add
You could loop through all the cache items and delete them one by one (note the C# cast on the key):

foreach (System.Collections.DictionaryEntry entry in HttpContext.Current.Cache)
{
    HttpContext.Current.Cache.Remove((string)entry.Key);
}
How do I clear the server cache in asp.net? I have found out that there are two kinds of cache. There is the browser cache and the server cache. I have done some searching but I have yet to find a clear, step-by-step guide for clearing the server cache using asp.net (or not). (update) I just learned that the code-behind for this is in VB (Visual Basic .NET).
How do I clear the server cache in asp.net?
Running the query for the first time populates the InnoDB buffer pool with your tables' relevant blocks. Since re-running the query requires exactly the same blocks, the second run is spared the need to read them from disk, making it significantly faster.
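The effect can be imitated with a toy block cache (illustrative only; InnoDB's buffer pool is far more sophisticated): the first run pays for the disk reads, the second run is served from memory:

```python
# Toy buffer pool: counts how many "disk" reads each run of the same query needs.
disk = {b: f"block-{b}" for b in range(5)}  # pretend table blocks on disk
buffer_pool = {}
disk_reads = 0

def read_block(b):
    global disk_reads
    if b not in buffer_pool:   # cold: fetch from disk and keep it
        disk_reads += 1
        buffer_pool[b] = disk[b]
    return buffer_pool[b]      # warm: served from memory

def run_query():
    return [read_block(b) for b in range(5)]

run_query()                    # first run: all blocks come from disk
first = disk_reads
run_query()                    # second run: no additional disk reads
second = disk_reads - first
```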
I am trying to do some benchmarking of different queries on different columns but MySQL just won't let me do it. After the first execution of a query, I can never get the same execution time for that query again. For example, if the query executes in 0.062 secs the first time, I can never get the same execution time for the 2nd, 3rd etc. runs. It becomes either 0 seconds or something like 0.015. I have read many posts on disabling and clearing the MySQL query cache but none of them has been of any help to me, like this SO post here. No matter what I do, MySQL seems to insist on using cached results. I restart MySQL Workbench, then I run: set global query_cache_type=0; set global query_cache_size=0; flush query cache; reset query cache; The execution time keeps showing 0 secs. The only server variable that I haven't been able to change is "have_query_cache". Its value is "yes" and when I try to set it to "no", Workbench says it is read-only. I also do: set profiling=1; run my select query; show profile for query 2; The profiling result shows this:

'starting', '0.000077'
'checking permissions', '0.000007'
'Opening tables', '0.000016'
'init', '0.000035'
'System lock', '0.000009'
'optimizing', '0.000013'
'statistics', '0.000094'
'preparing', '0.000008'
'executing', '0.000002'
'Sending data', '0.000016'
'end', '0.000002'
'query end', '0.000003'
'closing tables', '0.000005'
'freeing items', '0.000139'
'cleaning up', '0.000009'

If I am not wrong, this shows that no caches are being used, right? But I still see 0 secs for the execution time. Edit: The query I am running is a SELECT query using "SQL_NO_CACHE" like so: SELECT SQL_NO_CACHE col1,now() from mytable where col2="some_value" (I added the now() function to help me prevent query caching) Edit2: I am using InnoDB, MySQL 5.6.10. Could someone please help me, because I can't see what is going on here. Thanks a lot!
How to disable MySQL Query Caching
Chrome should certainly treat requests with varying query strings as different requests; a cached result for style.css?v=123 should never be used for style.css?v=124. If you're seeing different behavior, please file a bug at http://new.crbug.com/ and post the bug ID here. That said, I'd first check to see whether the page was cached longer than you expected. If a new version of the page itself wasn't downloaded, then it would still be requesting ?v=123 as the HTML wouldn't have changed. If you're sending long-lived cache headers with the page, it's certainly possible that Chrome is caching it more aggressively than you expected. If that's the behavior you're seeing, please star http://crbug.com/8742 for updates.
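As an aside, a common way to avoid stale ?v= counters is to derive the version from the file contents, so the URL changes exactly when the file does. A minimal sketch (the busted_url helper is made up for illustration):

```python
import hashlib

def busted_url(path: str, content: bytes) -> str:
    # Version the asset URL by a short hash of its contents.
    v = hashlib.md5(content).hexdigest()[:8]
    return f"{path}?v={v}"

busted_url("style.css", b"body { color: red }")
```

Same content always yields the same URL, and any edit to the file yields a new one, so no manual counter needs bumping on deploy.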
I've got a web service that, like most others, uses JS and CSS files. I use the old trick of appending a version number to the JS and CSS files, like ?v=123, and that gets changed every time we update the service on production. Now, this works fine in all browsers except Chrome. Chrome seems to prefer its cached version over getting the new one and therefore seems to ignore the appended variable. In some cases, forcing it to refresh the cache (cmd+r / ctrl+f5) wasn't enough, so I had to go into options and clear out the cache for it to load the new content. Has anyone experienced this issue with Chrome? And if so, what was the resolution to the problem?
Chrome caching like a mad browser
If you use any of the default Request classes implemented in Volley (e.g. StringRequest, JsonRequest, etc.), then call setShouldCache(false) right before adding the request object to the Volley RequestQueue:

request.setShouldCache(false);
myQueue.add(request);

If you have your own implementation of the Request class, then you can call setShouldCache(false) in the constructor of your class. This solution disables caching for each request individually. If you want to disable caching globally for the Volley library, you can permanently set the mShouldCache variable to false in the Request class.
Is there a way I could disable the Volley cache management? My app is using the Google Volley library to manage the transport layer, but I have my own cache manager implementation because the server does not use the Cache-Control header. I want to save the space that the Volley cache is using because it is totally useless. Is there any easy way, or should I implement my own version of RequestQueue? Any suggestions appreciated.
Disable Volley cache management
You can set a header to prevent the request from being cached. Example below:

return fetch(url, {
    headers: {
        'Cache-Control': 'no-cache'
    }
}).then(function (res) {
    return res.json();
}).catch(function (error) {
    console.warn('Request Failed: ', error);
});
I am building an app in React Native which makes fetch calls that rely on the most up-to-date information from the server. I have noticed that it seems to cache the response, and if I run that fetch call again it returns the cached response rather than the new information from my server. My function is as follows:

goToAll() {
    AsyncStorage.getItem('FBId')
        .then((value) => {
            api.loadCurrentUser(value)
                .then((res) => {
                    api.loadContent(res['RegisteredUser']['id'])
                        .then((res2) => {
                            console.log(res2);
                            this.props.navigator.push({
                                component: ContentList,
                                title: 'All',
                                passProps: {
                                    content: res2,
                                    user: res['RegisteredUser']['id']
                                }
                            })
                        });
                });
        })
        .catch((error) => { console.log(error); })
        .done();
}

and the function from api.js I'm calling is as follows:

loadContent(userid) {
    let url = `http://####.com/api/loadContent?User_id=${userid}`;
    return fetch(url).then((response) => response.json());
}
React Native - Fetch call cached
Create a simple file that includes none of your PHP libraries but lives in the same folder as the file that serves up your images through a PHP file: test.php. Request this file through a browser and check the headers. If you see the response headers that you don't want, you know that they're configured via Apache and not generated via a PHP file, and you can concentrate your searches on the .htaccess files in the directory tree, and on httpd.conf and the other included Apache config files. You'll want to search for <Directory ...> and <VirtualHost> sections that may apply to your site. If you don't see the headers in a request for that simple PHP file, you know that PHP is setting the headers somewhere. At the end of your image-serving file (or right after it echoes the image and exits), put the following PHP snippet: var_dump(get_included_files()); Request an image through the image-serving URL. The above snippet will print out all the PHP files used in the request. (You'll probably need to view source or use curl to see the raw output, as the browser will report an invalid image.) Having a subset of your files to work with, search through them for calls to the header() function. The header function is the only way (I think) that raw PHP code can set response headers. You'll also want to search for call_user_func, eval, and $$ in case there's any dynamic code on the page that's using PHP's meta-programming capabilities to call the header function. Good luck!
I have a website whose maintenance I've inherited, which is a big hairy mess. One of the things I'm doing is improving performance. Among other things, I'm adding Expires headers to images. Now, there are some images that are served through a PHP file, and I notice that they do have the Expires header, but they also get loaded every time. Looking at the response headers, I see this: Expires: Wed, 15 Jun 2011 18:11:55 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Which obviously explains the problem. Now, I've looked all over the code base, and it doesn't say "pragma" anywhere. .htaccess doesn't seem to have anything related either. Any ideas what could be setting those "Pragma" (and "Cache-Control") headers, and how can I avoid it?
What could be adding "Pragma:no-cache" to my response Headers? (Apache, PHP)
Can someone please answer how to disable caching in persistence.xml. The second-level cache and query cache are disabled by default (and queries are not cached unless you explicitly cache them). The first-level cache can't be disabled. I tried to disable by changing properties (...) This would disable the second-level cache and query cache, if they were enabled. But it did not work. To be honest, "it did not work" is a very poor description of the current behavior vs the expected one. Providing more details, (pseudo) code, and SQL traces would probably help. That being said, if the question is about HQL, an HQL query should definitely hit the database upon subsequent execution (without any query cache). Activate SQL logging if required to observe this. If the question is about Session#get() or Session#load(), then you could reload the state of an entity using Session#refresh() or call Session#clear() to completely clear the session.
I am trying to write a unit test class which will have to use the same query to fetch results from the database two times in the same test method. But as the Hibernate cache is enabled, the second time it is not actually hitting the database and simply fetches the results from the cache. Can someone please explain how to disable caching in persistence.xml? I tried to disable it by changing the properties hibernate.cache.use.query_cache = false and hibernate.cache.use_second_level_cache = false. But it did not work.
How to disable hibernate caching
Yes. I even made a test to be sure:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = CacheableTest.CacheConfigurations.class)
public class CacheableTest {

    public static class Customer {

        final private String id;
        final private String name;

        public Customer(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() { return id; }

        public String getName() { return name; }
    }

    final public static AtomicInteger cacheableCalled = new AtomicInteger(0);
    final public static AtomicInteger cachePutCalled = new AtomicInteger(0);

    public static class CustomerCachedService {

        @Cacheable("CustomerCache")
        public Customer cacheable(String v) {
            cacheableCalled.incrementAndGet();
            return new Customer(v, "Cacheable " + v);
        }

        @CachePut("CustomerCache")
        public Customer cachePut(String b) {
            cachePutCalled.incrementAndGet();
            return new Customer(b, "Cache put " + b);
        }
    }

    @Configuration
    @EnableCaching()
    public static class CacheConfigurations {

        @Bean
        public CustomerCachedService customerCachedService() {
            return new CustomerCachedService();
        }

        @Bean
        public CacheManager cacheManager() {
            return new GuavaCacheManager("CustomerCache");
        }
    }

    @Autowired
    public CustomerCachedService cachedService;

    @Test
    public void testCacheable() {
        for (int i = 0; i < 1000; i++) {
            cachedService.cacheable("A");
        }
        Assert.assertEquals(cacheableCalled.get(), 1);
    }

    @Test
    public void testCachePut() {
        for (int i = 0; i < 1000; i++) {
            cachedService.cachePut("B");
        }
        Assert.assertEquals(cachePutCalled.get(), 1000);
    }
}
@CachePut or @Cacheable(value = "CustomerCache", key = "#id")
public Customer updateCustomer(Customer customer) {
    sysout("i am inside updateCustomer");
    ....
    return customer;
}

I found the below documentation under the CachePut source code: CachePut annotation does not cause the target method to be skipped - rather it always causes the method to be invoked and its result to be placed into the cache. Does it mean that if I use @Cacheable, the updateCustomer method will be executed only once and the result will be updated in the cache? Subsequent calls to updateCustomer will not execute updateCustomer; they will just use the cache. While in the case of @CachePut, the updateCustomer method will be executed on each call and the result will be updated in the cache. Is my understanding correct?
Spring Cacheable vs CachePut?
As Jake Wharton suggested in the issues, do this to ignore the cache: request.setCacheControl(CacheControl.FORCE_NETWORK); To disable caching globally instead, create the client with a null cache: new OkHttpClient().newBuilder().cache(null).build();
In my android application, I am using Retrofit with OkHttpClient with caching enabled to access some APIs. Some of our APIs sometimes return empty data. We provide a "Refresh" button in the app for the client to reload data from a specific API. How do I tell OkHttpClient that a specific request should ignore the cached entry. Alternatively, is there a mechanism to delete the cached response corresponding to a single request? I see Cache.remove(request) method but it is marked as private.
How to tell OkHttpClient to ignore cache and force refresh from server?
MemoryCache is embedded in the process, hence it can only be used as a plain key-value store from that process. A separate-server counterpart of MemoryCache would be memcached. Redis, on the other hand, is a data structure server which can be hosted on other servers and interacted with over the network, just like memcached, but Redis supports a long list of complex data types and operations on them, to provide logical and intelligent caching.
Redis is often used as a cache, although it offers a lot more than just in-memory caching (it supports persistence, for instance). What are the reasons why one would choose to use Redis rather than the .NET MemoryCache? Persistence and data types (other than key-value pairs) come to mind, but I'm sure there must be other reasons for using an extra architectural layer (i.e. Redis).
Redis vs MemoryCache
You may be interested in checking out the following Google Code article: Optimize caching: Leverage browser caching In a nutshell, all modern browsers should be able to cache your images appropriately as instructed, with those HTTP headers.
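As a sketch, headers like the ones described in the question could be generated as follows (the helper name and the two-day policy are assumptions for illustration):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def image_cache_headers(days: int = 2) -> dict:
    # Build aggressive-caching headers: cacheable by anyone, expiring in `days`.
    now = datetime.now(timezone.utc)
    return {
        "Cache-Control": f"public, max-age={days * 86400}",
        "Expires": format_datetime(now + timedelta(days=days), usegmt=True),
        "Last-Modified": format_datetime(now - timedelta(days=30), usegmt=True),
    }

image_cache_headers()
```

The server would still answer 304 Not Modified to conditional requests whose If-Modified-Since matches the Last-Modified value.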
This is about a web app that serves images. Since the same request will always return the same image, I want the accessing browsers to cache the images as aggressively as possible. I pretty much want to tell the browser Here's your image. Go ahead and keep it; it's really not going to change for the next couple of days. No need to come back. Really. I promise. I do, so far, set Cache-Control: public, max-age=86400 Last-Modified: (some time ago) Expires: (two days from now) and of course return a 304 not modified if the request has the appropriate If-Modified-Since header. Is there anything else I can do (or anything I should do differently) to get my message across to the browsers? The app is hosted on the Google App Engine, in case that matters.
Asking browsers to cache as aggressively as possible
I ran into the same problem. As @monsur said, the problem is that S3 doesn't set the "Vary: Origin" header, even though it should. Unfortunately, as far as I know there is no way to get S3 to send that header. However, you can work around this by adding a query string parameter to the request, such as ?origin=example.com, when you need CORS. The query string forces the browser not to use the cached resource. Ideally, CloudFront and S3 would send the Vary: Origin header when CORS is enabled, and/or WebKit would implicitly vary on the Origin header, which I assume Firefox does since it doesn't have this problem.
Gist: I have a page that uses tag loading of an image from S3 (HTML img tag) and I have a page that uses XMLHttpRequest. The tag loading gets cached without the CORS headers, so the XMLHttpRequest sees the cached version, checks its headers and fails with a cross-origin error. Details: edit: Fails in both Safari 5.1.6 and Chrome 21.0.1180.89. Works fine in Firefox 14. Using S3's new CORS support, I set up a CORSRule as so:

<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>0</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>

If I request an image from S3 without setting the origin in the request headers, I get back the image without any CORS headers in the response. This gets cached, and subsequent CORS requests (ones that do set the origin in the request header) get rejected as the browser uses the non-CORS version from the cache. What's the best way to solve this? Can I set something so the non-CORS version never gets cached? Should I differentiate the CORS requests by appending a ?some_flag to the URL of the request? Ideally I'd have S3 ALWAYS send back the needed CORS headers even if the request doesn't contain "origin".
Cached non CORS response conflicts with new CORS request
You could use the built-in MemoryCache to store entire resultsets you have retrieved from the database. A typical pattern:

MyModel model = MemoryCache.Default["my_model_key"] as MyModel;
if (model == null)
{
    model = GetModelFromDatabase();
    MemoryCache.Default["my_model_key"] = model;
}
// you could use the model here
I'm building my own CMS using ASP.NET MVC 4 (C#), and I want to cache some database data, like localization and search categories (it's long-tail; each category has its own sub and sub-sub categories), etc. It would be overkill to query the database all the time, because it can be more than 30-100 queries for each page request; however, the users update this data rarely. So what is the best way (performance and convenience) to do it? I know how to use the OutputCache attribute of the action, but it's not what I need in this situation: it caches the HTML, but what I need is, for example, that my own helper @html.Localization("Newsletter.Tite") will take the value for the language, or any other helper that interacts with data, etc. I think (not really sure) that I need to cache the data I want only when the application is invoked for the first time, and then work with the cached copy, but I don't have any experience with it.
How to cache database tables to prevent many database queries in Asp.net C# mvc
# 480 weeks
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Cache-Control "max-age=290304000, public"
</FilesMatch>

# 2 DAYS
<FilesMatch "\.(xml|txt)$">
    Header set Cache-Control "max-age=172800, public, must-revalidate"
</FilesMatch>

# 2 HOURS
<FilesMatch "\.(html|htm)$">
    Header set Cache-Control "max-age=7200, must-revalidate"
</FilesMatch>

<ifModule mod_gzip.c>
    mod_gzip_on Yes
    mod_gzip_dechunk Yes
    mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
    mod_gzip_item_include handler ^cgi-script$
    mod_gzip_item_include mime ^text/.*
    mod_gzip_item_include mime ^application/x-javascript.*
    mod_gzip_item_exclude mime ^image/.*
    mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</ifModule>
Can someone provide me with an optimized .htaccess configuration that handles compression, browser caching, proxy caching, etc. for a typical website? Aside from my visitors, I'm also trying to make Google PageSpeed happy. I wanna use caching and gzip compression through .htaccess, please help me with its code! I want to cache ico, pdf, flv, jpg, png, gif, js, css, swf files for a long time. I want to cache xml, txt files for 2 days. I want to cache html files for 2 days. And I wanna compress my html, txt, css, js, php files because they have large file sizes. Is there any way to gzip images using .htaccess?
Caching and gzip compression by htaccess
It's normal behavior. Doctrine stores a reference to the retrieved entities in the EntityManager so it can return an entity by its id without performing another query. You can do something like:

$entityManager = $this->get('doctrine')->getEntityManager();
$repository = $entityManager->getRepository('KnowledgeShareBundle:Post');
$post = $repository->find(1);
$entityManager->detach($post);

// as the previously loaded post was detached, it loads a new one
$existingPost = $repository->find(1);

But be aware that because the $post entity was detached, you must use the ->merge() method if you want to persist it again.
I want to be able to retrieve the existing version of an entity so I can compare it with the latest version. E.g. editing a file, I want to know if the value has changed since being in the DB.

$entityManager = $this->get('doctrine')->getEntityManager();
$postManager = $this->get('synth_knowledge_share.manager');
$repository = $entityManager->getRepository('KnowledgeShareBundle:Post');
$post = $repository->findOneById(1);
var_dump($post->getTitle()); // This would output "My Title"
$post->setTitle("Unpersisted new title");
$existingPost = $repository->findOneById(1); // Retrieve the old entity
var_dump($existingPost->getTitle()); // This would output "Unpersisted new title" instead of the expected "My Title"

Does anyone know how I can get around this caching?
How to stop Doctrine 2 from caching a result in Symfony 2?
Usually "too big cache size" warnings are issued under the assumption that you have little physical memory, so the cache itself will need to be swapped or will take resources that are required by the OS (like the file cache). If you have enough memory, it's safe to increase the query_cache size (I've seen installations with a 1GB query cache). But are you sure you are using the query cache right? Do you have lots of verbatim repeating queries? Could you please post an example of a typical query?
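The "verbatim" point is the crux: the MySQL query cache keys entries on the exact query text, so queries that differ by even one character miss the cache. A toy model in Python (names are illustrative; this is not MySQL's implementation):

```python
# Toy model of a query cache keyed on the literal SQL text, the way the
# MySQL query cache works: byte-for-byte identical queries hit the cache,
# anything else (different case, whitespace, literals) misses.
class QueryTextCache:
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def execute(self, sql, run_query):
        if sql in self.store:  # exact-text match only
            self.hits += 1
            return self.store[sql]
        self.misses += 1
        result = run_query(sql)
        self.store[sql] = result
        return result

cache = QueryTextCache()
run = lambda sql: "rows for: " + sql  # stand-in for a real DB round-trip

cache.execute("SELECT * FROM t WHERE id = 1", run)
cache.execute("SELECT * FROM t WHERE id = 1", run)   # verbatim repeat: hit
cache.execute("select * from t where id = 1", run)   # different text: miss
```

This is why applications that build queries with varying literals (rather than repeating identical statements) see heavy pruning and little benefit from a large query cache.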
My application is very database intensive, so I've tried really hard to make sure the application and the MySQL database are working as efficiently as possible together. Currently I'm tuning the MySQL query cache to get it in line with the characteristics of queries being run on the server. query_cache_size is the maximum amount of data that may be stored in the cache, and query_cache_limit is the maximum size of a single result set in the cache. My current MySQL query cache is configured as follows:

query_cache_size=128M
query_cache_limit=1M

tuning-primer.sh gives me the following tuning hints about the running system:

QUERY CACHE
Query cache is enabled
Current query_cache_size = 128 M
Current query_cache_used = 127 M
Current query_cache_limit = 1 M
Current Query cache Memory fill ratio = 99.95 %
Current query_cache_min_res_unit = 4 K
However, 21278 queries have been removed from the query cache due to lack of memory
Perhaps you should raise query_cache_size
MySQL won't cache query results that are larger than query_cache_limit in size

And mysqltuner.pl gives the following tuning hints:

[OK] Query cache efficiency: 31.3% (39K cached / 125K selects)
[!!] Query cache prunes per day: 2300654
Variables to adjust:
query_cache_size (> 128M)

Both tuning scripts suggest that I should raise the query_cache_size. However, increasing the query_cache_size over 128M may reduce performance according to mysqltuner.pl (see http://mysqltuner.pl/). How would you tackle this problem? Would you increase the query_cache_size despite mysqltuner.pl's warning, or try to adjust the querying logic in some way? Most of the data access is handled by Hibernate, but quite a lot of hand-coded SQL is used in the application as well.
MySQL query caching: limited to a maximum cache size of 128 MB?
Use the following to remove all objects from the cache:

IDictionaryEnumerator enumerator = HttpContext.Current.Cache.GetEnumerator();
while (enumerator.MoveNext())
{
    HttpContext.Current.Cache.Remove((string)enumerator.Key);
}

Also, it is a bit of a sledgehammer option, but you can restart the entire application as follows:

System.Web.HttpRuntime.UnloadAppDomain();
How can I manually clear ASP.NET server cache (IIS 7) on a given application/website, like what can be done in IE to clear browser cache for a given domain?
Manually clear ASP.NET server cache for a single application/web site?
You can specify it in the 4th parameter of Cache.Add():

public Object Add(
    string key,
    Object value,
    CacheDependency dependencies,
    DateTime absoluteExpiration, // After this DateTime, it will be removed from the cache
    TimeSpan slidingExpiration,
    CacheItemPriority priority,
    CacheItemRemovedCallback onRemoveCallback
)

Edit: If you access the cache via the indexer (i.e. Cache["Key"]), the method that is called uses no expiration and the item remains in the cache indefinitely. Here is the code that is called when you use the indexer:

public void Insert(string key, object value)
{
    this._cacheInternal.DoInsert(true, key, value, null, NoAbsoluteExpiration, NoSlidingExpiration, CacheItemPriority.Normal, null, true);
}
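The absolute-expiration behaviour that absoluteExpiration controls can be modelled in a few lines. A sketch in Python under stated assumptions (the injectable clock is purely for illustration and has no counterpart in the real Cache API):

```python
import time

class ExpiringCache:
    """Tiny cache with per-entry absolute expiration, roughly analogous to
    passing absoluteExpiration to Cache.Add(). Entries added without a TTL
    behave like the Cache["Key"] indexer: they never expire."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (value, expires_at or None)

    def add(self, key, value, absolute_expiration=None):
        expires_at = None
        if absolute_expiration is not None:
            expires_at = self._clock() + absolute_expiration
        self._store[key] = (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and self._clock() >= expires_at:
            del self._store[key]  # lazily evict on read
            return None
        return value

# Fake, manually advanced clock so the example is deterministic.
now = [0.0]
cache = ExpiringCache(clock=lambda: now[0])
cache.add("indexer-style", "stays forever")              # no expiration
cache.add("with-ttl", "expires", absolute_expiration=10) # absolute expiration
now[0] = 11.0                                            # advance past expiry
```

After the clock passes the expiry point, the TTL entry is gone while the indexer-style entry remains.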
Is there a way to specify how long data is held in HttpContext.Cache?
HttpContext.Cache Expiration
A quick answer: Use the Repository pattern (see Domain Driven Design by Evans) to fetch your entities. Each repository will cache the things it will hold, ideally by letting each instance of the repository access a singleton cache (each thread/request will instantiate a new repository but there can be only one cache). The above answer works on one machine only. To be able to use this on many machines, use memcached as your caching solution. Good luck!
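The pattern suggested above - short-lived repository instances that all consult one process-wide cache - can be sketched briefly. A minimal Python illustration with hypothetical names (EntityRepository and fetch_from_db are not from any framework):

```python
class SingletonCache:
    """One cache per process; each repository instance reuses it."""
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.store = {}

DB_CALLS = {"count": 0}

def fetch_from_db(entity_id):
    # Stand-in for a real database query.
    DB_CALLS["count"] += 1
    return {"id": entity_id, "name": "entity-%d" % entity_id}

class EntityRepository:
    """A new instance per request/thread, but all share the singleton cache."""
    def __init__(self):
        self._cache = SingletonCache.instance()

    def get(self, entity_id):
        if entity_id not in self._cache.store:
            self._cache.store[entity_id] = fetch_from_db(entity_id)
        return self._cache.store[entity_id]

repo_a = EntityRepository()   # e.g. created for request 1
repo_b = EntityRepository()   # e.g. created for request 2
first = repo_a.get(42)
second = repo_b.get(42)       # served from the shared cache, no second DB call
```

As the answer notes, this only holds within one machine; to share the cache across machines the singleton's store would be replaced by a memcached client.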
We've just started using LINQ to SQL at work for our DAL and we haven't really come up with a standard for our caching model. Previously we had been using a base 'DAL' class that implemented a cache manager property that all our DAL classes inherited from, but now we don't have that. I'm wondering if anyone has come up with a 'standard' approach to caching LINQ to SQL results? We're working in a web environment (IIS) if that makes a difference. I know this may well end up being a subjective question, but I still think the info would be valuable. EDIT: To clarify, I'm not talking about caching an individual result; I'm after more of an architecture solution, as in how do you set up caching so that all your LINQ methods use the same caching architecture.
How do you implement caching in Linq to SQL?
If this is a straight array, then you could use var_export() rather than serialize() (wrapping the exported literal in an appropriate <?php $var = ...; ?> assignment) and write it to a .php file; then include() that in your script. Best done if you can write it outside the htdocs tree, and it is only really appropriate for large volumes of data that memory caches would consider excessive.
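The same trick translates to other scripting languages: dump the data as source code once, then let the interpreter parse it back instead of unserializing. A hedged Python sketch of the idea, with repr() standing in for var_export() and exec() for include():

```python
import os
import tempfile

def write_cache_module(data, path):
    # repr() plays the role of var_export(): emit the data as a source literal.
    with open(path, "w") as f:
        f.write("DATA = %r\n" % (data,))

def load_cache_module(path):
    # Plays the role of include(): the interpreter parses the literal directly,
    # so there is no unserialize() step on each load.
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)
    return namespace["DATA"]

rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]  # pretend query result
path = os.path.join(tempfile.mkdtemp(), "cache_rows.py")
write_cache_module(rows, path)
restored = load_cache_module(path)
```

In PHP the win is larger than this sketch suggests, because an opcode cache (e.g. APC/OPcache) can keep the included file pre-parsed in memory.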
I am currently building a PHP framework (original, I know) and I'm working on some optimisation features for it. One dilemma I have come across is: what is the best way to cache MySQL results? I know some people will say, first optimise your MySQL etc., but let's say for argument's sake my query takes 1 minute to run and is as optimised as possible. What would be the best way to cache the results in PHP so I don't have to rerun the query on every page load? My first thought was to maybe loop through the results, add them to an array, serialize it and then store it in a file. Now as creating the cache only occurs once, I can afford the overhead of the serialize function if the array contains, say, 1 million results. However, loading the cached file and then unserializing the array on every page load could have a performance impact. Would it then be a better idea, while caching, instead of serializing the results and writing them to file, to write to the file in a way that presents the results as a PHP-readable array? So when it loads, there is no unserialize overhead. Are there any other (read: faster) ways to cache a slow query for frequent use?
PHP Best way to cache MySQL results?
We have recently done a fair amount of comparing Velocity and Memcached. In a nutshell, we found Velocity to be 3x - 5x slower than Memcached, and (even more crucially) it does not currently have support for a multi-get operation. So at the moment, I would recommend going with Memcached. Also, another lesson we learned was that the slowest operation in distributed caching is serialization and deserialization (at least in ASP.NET). The in-process ASP.NET cache is orders of magnitude faster. So you have to choose caching strategies much more carefully.
I've been paying some attention to Microsoft's fairly recent promoting of Velocity as a distributed caching solution that would compete with the likes of Memcached. I've been looking for a 64bit version of Memcached for Windows for some time now with no luck, and since everything about the ASP.Net MVC project I'm working on is 64bit, it doesn't make sense to use anything but 64bit. Now we're already hedging our bets with ASP.NET MVC in Beta (RTM soon hopefully), but StackOverflow doesn't seem to be doing too badly, so I have limited concerns there. But Velocity is still very much an unknown quantity and will still be Beta (or CTP) for ages - but it does have 64bit! Does anyone have relevant experience or point of view to offer in this situation? Should we bide our time for Velocity - is it even anywhere near good enough to compete with a giant like Memcached, or should we invest time trying to get a 64bit version of Memcached going?
MS Velocity vs Memcached for Windows?
A Last-Modified and Expires header might also be useful additions. Your server should also check for requests featuring an If-Modified-Since header, and return a 304 Not Modified response if possible to speed things along.
I am writing a small application which serves images from the local computer, so they can be accessed as http://localhost:12345/something/something (which returns a JPEG). How can I force the browser to cache this, so only a single request is sent to the server? Would this header be sufficient:

HTTP/1.1 200 OK
Cache-Control: public, max-age=99936000
Content-Length: 123456
Content-Type: image/jpeg

This seems to work with Firefox 3.x, but would it be sufficient globally for other browsers as well?
How to force a web browser to cache Images
Solved with model.connection.clear_query_cache
I'm currently creating a Rails app with some cronjobs etc, but I have some problems because the sql is cached by Rails. So anyone know how to disable the SQL Cache in Rails? Not globally, but for this code. Really don't want to create one method for every model, so is there anyway to just disable it temporary? Terw
Disable SQL Cache temporary in Rails?
That's a loaded question with a very long answer. I doubt it is going to be helpful; let's talk about the real problem you are trying to solve. Those performance counters have to be registered first before you can see them. Start an elevated console prompt (right-click the shortcut and use Run as Administrator) and type these commands:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
lodctr netmemorycache.ini

That adds the required registry entries so the MemoryCache can create these counters at runtime. Start your program so an instance of MemoryCache is created. Run Perfmon.exe, right-click the graph, Add Counters and pick from the added ".NET Memory Cache 4.0" category. Also select the instance of your program.
Where are .NET 4.0 MemoryCache performance counters? I am looking for their name and I can't find any. Thank you,
Where are .NET 4.0 MemoryCache performance counters?
No, it's not possible to cache https directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache the content, it needs to be able to read it, i.e. decrypt the encryption. You can do something to cache it: you basically terminate the SSL at your proxy, intercepting the SSL sent to the client. The data is encrypted between the client and your proxy, where it is decrypted, read and cached; the outgoing request is then encrypted and sent on to the server. The reply from the server is likewise decrypted, read and re-encrypted. I'm not sure how you do this on major proxy software (like Squid), but it is possible. The only problem with this approach is that the proxy will have to use a self-signed cert to encrypt the traffic to the client, so the client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
Can a (||any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring, or the http headers, I reckon they can't. I'm considering a desktop application, run by a number of people behind their companies proxy. This application may access services across the internet and I'd like to take advantage of the in-built internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL delivered content, would simply encrypting the content of a response be a viable option? I am considering all GET requests that we wish to be cachable be requested over http with the body encrypted using asymmetric encryption, where each client has the decryption key. Anytime we wish to perform a GET that is not cachable, or a POST operation, it will be performed over SSL.
Can a proxy server cache SSL GETs? If not, would response body encryption suffice?
First of all, Xaqron makes a good point that what you're talking about probably doesn't qualify as caching. It's really just a lazily-loaded globally-accessible variable. That's fine: as a practical programmer, there's no point bending over backward to implement full-on caching where it's not really beneficial. If you're going to use this approach, though, you might as well be Lazy and let .NET 4 do the heavy lifting:

private static Lazy<IEnumerable<Category>> _allCategories
    = new Lazy<IEnumerable<Category>>(() => /* Db call to populate */);

public static IEnumerable<Category> AllCategories
{
    get { return _allCategories.Value; }
}

I took the liberty of changing the type to IEnumerable<Category> to prevent callers from thinking they can add to this list. That said, any time you're accessing a public static member, you're missing out on a lot of flexibility that Object-Oriented Programming has to offer. I'd personally recommend that you:

1. Rename the class to CategoryRepository (or something like that),
2. Make this class implement an ICategoryRepository interface, with a GetAllCategories() method on the interface, and
3. Have this interface be constructor-injected into any classes that need it.

This approach will make it possible for you to unit test classes that are supposed to do things with all the categories, with full control over which "categories" are tested, and without the need for a database call.
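The guarantee Lazy<T> provides - the factory runs at most once, even under concurrent access - is easy to sketch outside .NET. An illustrative Python version (not the .NET implementation, just the same double-checked locking pattern):

```python
import threading

class Lazy:
    """Run the factory at most once, even when accessed concurrently."""
    _SENTINEL = object()

    def __init__(self, factory):
        self._factory = factory
        self._value = self._SENTINEL
        self._lock = threading.Lock()

    @property
    def value(self):
        # Fast path without the lock; double-check inside it.
        if self._value is self._SENTINEL:
            with self._lock:
                if self._value is self._SENTINEL:
                    self._value = self._factory()
        return self._value

calls = {"count": 0}

def load_all_categories():
    # Stand-in for the expensive DB call in the answer's example.
    calls["count"] += 1
    return ["bikes", "cars", "trucks"]

all_categories = Lazy(load_all_categories)
first = all_categories.value
second = all_categories.value  # factory is not invoked again
```

Every subsequent read returns the already-built value without touching the lock, which is exactly why Lazy<T> is preferable to hand-rolled lock-on-every-get code.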
Long time listener - first time caller. I am hoping to get some advice. I have been reading about caching in .NET - both with System.Web.Caching and System.Runtime.Caching. I am wondering what additional benefits I can get vs simply creating a static variable with locking. My current (simple minded) caching method is like this:

public class Cache
{
    private static List<Category> _allCategories;
    private static readonly object _lockObject = new object();

    public static List<Category> AllCategories
    {
        get
        {
            lock (_lockObject)
            {
                if (_allCategories == null)
                {
                    _allCategories = //DB CALL TO POPULATE
                }
            }
            return _allCategories;
        }
    }
}

Other than expiration (and I wouldn't want this to expire) I am at a loss to see what the benefits of using the built-in caching are. Maybe there are benefits for more complex caching scenarios that don't apply to me - or maybe I am just missing something (it would not be the first time). So, what is the advantage of using cache if I want a cache that never expires? Don't static variables do this?
Why use System.Runtime.Caching or System.Web.Caching Vs static variables?
I assume you are getting at System.Runtime.Caching, which is similar to System.Web.Caching but lives in a more general namespace. See http://deanhume.com/Home/BlogPost/object-caching----net-4/37 and, on Stack Overflow, is-there-some-sort-of-cachedependency-in-system-runtime-caching and performance-of-system-runtime-caching. These could be useful.
I understand the .NET 4 Framework has caching support built into it. Does anyone have any experience with this, or could provide good resources to learn more about this? I am referring to the caching of objects (entities primarily) in memory, and probably the use of System.Runtime.Caching.
.NET 4 Caching Support
I was able to hunt down some helpful documentation. SizeLimit does not have units. Cached entries must specify size in whatever units they deem most appropriate if the cache size limit has been set. All users of a cache instance should use the same unit system. An entry will not be cached if the sum of the cached entry sizes exceeds the value specified by SizeLimit. If no cache size limit is set, the cache size set on the entry will be ignored.

It turns out that SizeLimit can function as the limit on the number of entries, not the size of those entries. A quick sample app showed that with a SizeLimit of 1, given the following:

var options = new MemoryCacheEntryOptions().SetSize(1);
cache.Set("test1", "12345", options);
cache.Set("test2", "12345", options);
var test1 = (string)cache.Get("test1");
var test2 = (string)cache.Get("test2");

test2 will be null. In turn, SetSize() allows you to control exactly how much of your size limit an entry should take. For instance, in the following example:

var cache = new MemoryCache(new MemoryCacheOptions
{
    SizeLimit = 3,
});
var options1 = new MemoryCacheEntryOptions().SetSize(1);
var options2 = new MemoryCacheEntryOptions().SetSize(2);
cache.Set("test1", "12345", options1);
cache.Set("test2", "12345", options2);
var test1 = (string)cache.Get("test1");
var test2 = (string)cache.Get("test2");

both test1 and test2 will have been stored in the cache. But if SizeLimit is set to 2, only the first entry will be successfully stored.
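The accounting described above (unitless per-entry sizes, an entry silently not cached when the running total would exceed SizeLimit) can be mimicked in a few lines. A toy Python sketch, not the actual MemoryCache logic (the real cache also compacts and evicts, which this omits):

```python
class SizeLimitedCache:
    """Toy model of MemoryCache's SizeLimit accounting: sizes are unitless,
    and an entry that would push the total over the limit is simply not
    stored (no eviction in this sketch)."""

    def __init__(self, size_limit):
        self.size_limit = size_limit
        self.used = 0
        self.store = {}  # key -> (value, size)

    def set(self, key, value, size):
        if self.used + size > self.size_limit:
            return False  # rejected, mirrors the silent non-caching
        self.store[key] = (value, size)
        self.used += size
        return True

    def get(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

# Mirrors the answer's second example: limit 3, sizes 1 and 2 both fit.
cache = SizeLimitedCache(size_limit=3)
cache.set("test1", "12345", size=1)
cache.set("test2", "12345", size=2)   # total 3 <= 3: stored

# With limit 2, the second entry would make the total 3 and is rejected.
small = SizeLimitedCache(size_limit=2)
small.set("test1", "12345", size=1)
small.set("test2", "12345", size=2)   # total would be 3 > 2: not cached
```

Setting every entry's size to 1 turns SizeLimit into an entry-count limit, which is exactly the behaviour observed in the first sample.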
In a controller class, I have:

using Microsoft.Extensions.Caching.Memory;

private IMemoryCache _cache;

private readonly MemoryCacheEntryOptions CacheEntryOptions = new MemoryCacheEntryOptions()
    .SetSize(1)
    // Keep in cache for this time
    .SetAbsoluteExpiration(TimeSpan.FromSeconds(CacheExpiryInSeconds));

And in Startup.cs, I have:

public class MyMemoryCache
{
    public MemoryCache Cache { get; set; }

    public MyMemoryCache()
    {
        Cache = new MemoryCache(new MemoryCacheOptions
        {
            SizeLimit = 1024,
        });
    }
}

What do these various size settings mean? This is .NET Core 2.1.
What do the size settings for MemoryCache mean?
SQL Server does not cache the query results, but it caches the data pages it reads in memory. The data from these pages is then used to produce the query result. You can easily see if the data was read from memory or from disk by setting:

SET STATISTICS IO ON

which returns the following information on execution of the query:

Table 'ProductCostHistory'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The difference between logical and physical reads is the data read from memory. SQL Server will also claim memory for caching until the maximum (configured, or physical maximum) is reached, and then the least recently used pages are flushed.
When I run a query, does SQL Server cache the results? I ask because when I run the below query:

SELECT id
FROM Foo
WHERE Foo.Name LIKE '%bar%'

the query runs for 40 seconds the first time, but on the second run it takes only a few seconds. Is this because the execution plan is somehow cached, or is the data actually cached so that I can retrieve it much faster on the 2nd run?
Does SQL Server CACHES Query Results? [duplicate]
This seems to be fixed in 3.2 M1; see https://jira.springsource.org/browse/SPR-8696
I have a util module that produces a jar to be used in other applications. I'd like this module to use caching and would prefer to use Spring's annotation-driven caching. So Util-Module would have something like this:

DataManager.java

...
@Cacheable(cacheName="getDataCache")
public DataObject getData(String key) { ... }
...

data-manager-ehcache.xml

...
<cache name="getDataCache" maxElementsInMemory="100" eternal="true" />
...

data-manager-spring-config.xml

...
<cache:annotation-driven cache-manager="data-manager-cacheManager" /> <!-- ???? -->
<bean id="data-manager-cacheManager" class="org.springframework.cache.ehcache.EhcacheCacheManager" p:cache-manager="data-manager-ehcache"/>
<bean id="data-manager-ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:config-location="data-manager-ehcache.xml"/>
...

I would also like my deployable unit to have caching via Spring annotations, while including the above jar as a dependency. So my Deployable-Unit would have something like this:

MyApp.java

...
@Cacheable(cacheName="getMyAppObjectCache")
public MyAppObject getMyAppObject(String key) { ... }
...

my-app-ehcache.xml

...
<cache name="getMyAppObjectCache" maxElementsInMemory="100" eternal="true" />
...

my-app-spring-config.xml

...
<cache:annotation-driven cache-manager="my-app-cacheManager" /> <!-- ???? -->
<bean id="my-app-cacheManager" class="org.springframework.cache.ehcache.EhcacheCacheManager" p:cache-manager="my-app-ehcache"/>
<bean id="my-app-ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:config-location="my-app-ehcache.xml"/>
...

Question: Is it possible to use annotation-driven caching in both your main project and a dependency module while keeping the configurations separated? If not, an explanation of why it isn't would be appreciated. If so, an explanation of what needs to change in the above configuration would be appreciated.
Using Spring cache annotation in multiple modules
The Yarn website says the following: Yarn stores every package in a global cache in your user directory on the file system.

You can use yarn cache list and it will print out every cached package, in case you want to go through them. You can use yarn cache clean and it will clean the cache (note that it's a time-consuming process, and considering your cache size this will take forever!). You can use yarn cache dir and it will display the path where the cache is stored. You know what to do next: run rm -rf on that directory; this will free up your space and clear all the cache as well. Note that generally your yarn cache dir can be found at ~/.cache/yarn/. On my system the cache is stored in ~/.cache/yarn/v6/.

UPDATE: It's recommended to use the latest version of yarn.
I was shocked a few minutes ago when I figured out that the Yarn cache at /Users/user/Library/Caches/Yarn takes more than 50GB of my disk space. What the heck? Why is every existing package in this universe on my computer? I am glad that Yarn advertises itself as ultra fast, but at the cost of eating that much disk space I am not a fan anymore. Now I even understand why yarn cache clean takes years to finish its job. Does Yarn cache all versions of a particular package? Are there any plans in the Yarn dev team to implement some kind of warning that the cache is taking a lot of space and could be cleared?
Yarn cache takes a lot of space
Here is a good comparison between the features of NCache and AppFabric.
Has anyone done a thorough comparison of AppFabric and NCache or AppFabric and ScaleOut? We are currently looking to implement either AppFabric, NCache or ScaleOut for distributed caching in geographically distant locations and I would like to know anyone's thoughts who has compared them side by side. I appreciate that many people use one or the other and tell me why their chosen solution is great but I am really looking for a comparison of the two products. Such things as what does AppFabric not do or not do well (if anything), partially from a features point of view but also from developer's point of view. Is working with one compared to the other nicer, easier, more flexible, more powerful, etc. There are plenty of lists of features which I can compare but am really looking for a comparison from someone who has perhaps been in a similar position to us and has performed the evaluation that we are about to launch into which will give us some food for thought whilst we do so. Thanks in advance.
Caching Solutions
Because to get the data from the database's cache, you still have to:

1. Generate the SQL from the ORM's "native" query format
2. Do a network round-trip to the database server
3. Parse the SQL
4. Fetch the data from the cache
5. Serialize the data to the database's over-the-wire format
6. Deserialize the data into the database client library's format
7. Convert the database client library's format into language-level objects (i.e. a collection of whatevers)

By caching at the application level, you don't have to do any of that. Typically, it's a simple lookup of an in-memory hashtable. Sometimes (if caching with memcache) there's still a network round-trip, but all of the other stuff no longer happens.
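The step list above is the whole argument: a database-side cache still pays for the round-trip and the (de)serialization, while an application-side cache is a hash lookup. A toy illustration (the pipeline steps are labels, not real work):

```python
# Every database query pays the full pipeline, even when the DB serves the
# result from its own cache; an application-level cache skips all of it.
DB_PIPELINE = [
    "generate SQL", "network round-trip", "parse SQL",
    "fetch from DB cache", "serialize", "deserialize", "map to objects",
]

steps_executed = []

def query_via_database(sql):
    for step in DB_PIPELINE:  # runs in full even on a DB-side cache hit
        steps_executed.append(step)
    return [("row", 1)]

app_cache = {}

def query_via_app_cache(sql):
    if sql not in app_cache:               # only the first call pays
        app_cache[sql] = query_via_database(sql)
    else:
        steps_executed.append("hashtable lookup")
    return app_cache[sql]

query_via_app_cache("SELECT * FROM t")   # cold: full pipeline executes
query_via_app_cache("SELECT * FROM t")   # warm: one in-memory lookup
```

The warm call performs a single in-process lookup; with memcached you would add back the round-trip step, but nothing else.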
Modern databases provide caching support. Most ORM frameworks cache retrieved data too. Why is this duplication necessary?
Why use your application-level cache if database already provides caching?
If you want to completely clear the cache, do the following:

Objective-C:

- (IBAction)clearCache:(id)sender {
    [[SDImageCache sharedImageCache] clearMemory];
    [[SDImageCache sharedImageCache] clearDisk];
}

Swift 5:

SDImageCache.shared.clearMemory()
SDImageCache.shared.clearDisk()

Swift 3.0:

@IBAction func clearCache(sender: UIButton) {
    SDImageCache.shared().clearMemory()
    SDImageCache.shared().clearDisk()
}
I have all images loaded on my app via SDWebImage. The downloading and caching works great, but I wanted to make a button that can clear all cached images in the entire app. I have a "Clear Cache" button as a UIButton on one of my tab bar views. How can I make it so when this button is tapped, all the cached images are removed and need to be re-downloaded? Using Swift. Thank you!
How to clear all cached images loaded from SDWebImage?
So I was looking for an answer to the same question as the OP but was not really satisfied with the solutions. So I started playing around with this recently, and going through the source code of the framework, I found out that the remember() method accepts a second param called key, which for some reason has not been documented on their site (or did I miss that?). Now the good thing about this is that the database builder uses the same cache driver that is configured under app/config/cache.php; in other words, the same cache system that has been documented here - Cache. So if you pass min and key to remember(), you can use the same key to clear the cache using the Cache::forget() method, and in fact you can pretty much use all the Cache methods listed on the official site, like Cache::get(), Cache::add(), Cache::put(), etc. But I don't recommend using those other methods unless you know what you're doing. Here's an example for you and others to understand what I mean.

Article::with('comments')->remember(5, 'article_comments')->get();

Now the above query result will be cached and will be associated with the article_comments key, which can then be used to clear it at any time (in my case, I do that when I update). So now if I want to clear that cache, regardless of how much time it is remembered for, I can just call Cache::forget('article_comments') and it should work just as expected. Hope this helps everyone :)
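The remember-with-a-key / forget-by-key contract is framework-agnostic, so it can be sketched without Laravel. A hedged Python sketch whose function names merely mirror remember()/forget() (this is not Laravel code, and the real remember() also takes a TTL, omitted here):

```python
store = {}

def remember(key, compute):
    """Return the cached value for key, computing and storing it on a miss."""
    if key not in store:
        store[key] = compute()
    return store[key]

def forget(key):
    """Drop the cached entry so the next remember() recomputes it."""
    store.pop(key, None)

queries_run = {"count": 0}

def fetch_articles():
    # Stand-in for Article::with('comments')->get().
    queries_run["count"] += 1
    return ["article 1", "article 2"]

remember("article_comments", fetch_articles)
remember("article_comments", fetch_articles)   # cached: no second query
forget("article_comments")                     # e.g. after a model update
remember("article_comments", fetch_articles)   # cache was cleared: recomputed
```

This is the same invalidate-on-write pattern the answer describes: cache under an explicit key, then forget that key from the update path.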
A theoretical question on Laravel here. An example of the caching I'd do is:

Article::with('comments')->remember(5)->get();

Ideally I'd like to have an event for Article updates such that when the ID of an instance of that model (that's already cached) is updated, I want to forget that key (even if it's the whole result of the query that's forgotten instead of just that one model instance). Is it possible to do so? If not, is there some way to implement this reasonably cleanly?
How would you forget cached Eloquent models in Laravel?
You could create a class (it doesn't matter that it's only used here) that overrides GetHashCode and Equals. Thanks theDmi (and others) for improvements...

public class CarKey : IEquatable<CarKey>
{
    public CarKey(string carModel, string engineType, int year)
    {
        CarModel = carModel;
        EngineType = engineType;
        Year = year;
    }

    public string CarModel { get; }
    public string EngineType { get; }
    public int Year { get; }

    public override int GetHashCode()
    {
        unchecked // Overflow is fine, just wrap
        {
            int hash = (int)2166136261;
            hash = (hash * 16777619) ^ (CarModel?.GetHashCode() ?? 0);
            hash = (hash * 16777619) ^ (EngineType?.GetHashCode() ?? 0);
            hash = (hash * 16777619) ^ Year.GetHashCode();
            return hash;
        }
    }

    public override bool Equals(object other)
    {
        if (ReferenceEquals(null, other)) return false;
        if (ReferenceEquals(this, other)) return true;
        if (other.GetType() != GetType()) return false;
        return Equals(other as CarKey);
    }

    public bool Equals(CarKey other)
    {
        if (ReferenceEquals(null, other)) return false;
        if (ReferenceEquals(this, other)) return true;
        return string.Equals(CarModel, other.CarModel)
            && string.Equals(EngineType, other.EngineType)
            && Year == other.Year;
    }
}

If you don't override those, ContainsKey does a reference equals.

Note: the Tuple class does have its own equality functions that would basically do the same as above. Using a bespoke class makes it clear that this is what is intended to happen - and is therefore better for maintainability. It also has the advantage that you can name the properties, so the code is clearer.

Note 2: the class is immutable, as dictionary keys need to be to avoid potential bugs with hashcodes changing after the object is added to the dictionary. See here. GetHashCode taken from here.
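The reason for overriding Equals and GetHashCode is that two distinct key objects with equal field values must land in the same dictionary slot. Python makes that contract easy to see, since frozen dataclasses generate value-based __eq__ and __hash__ automatically; a sketch for comparison:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True generates __eq__ and __hash__, and
class CarKey:            # makes the key immutable, as dictionary keys should be
    car_model: str
    engine_type: str
    year: int

cache = {}
cache[CarKey("F150", "V8", 2020)] = "cached car data"

# A *different* key object with equal field values still finds the entry,
# which is exactly what the overridden Equals/GetHashCode buy you in C#.
probe = CarKey("F150", "V8", 2020)
found = cache.get(probe)

# Without value equality (a plain class hashes by identity, like a C# class
# that doesn't override Equals/GetHashCode), an equal-looking key misses.
class NaiveKey:
    def __init__(self, model):
        self.model = model

naive_cache = {NaiveKey("F150"): "data"}
missed = naive_cache.get(NaiveKey("F150"))  # identity comparison: miss
```

The plain-class miss is the Python analogue of ContainsKey falling back to reference equality when Equals and GetHashCode are not overridden.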
I have a cache that I implement using a ConcurrentDictionary. The data that I need to keep depends on 5 parameters. So the method to get it from the cache is (I show only 3 parameters here for simplicity, and I changed the data type to CarData for clarity):

public CarData GetCarData(string carModel, string engineType, int year);

I wonder what type of key would be better to use in my ConcurrentDictionary. I can do it like this:

var carCache = new ConcurrentDictionary<string, CarData>();
// check for car key
bool exists = carCache.ContainsKey(string.Format("{0}_{1}_{2}", carModel, engineType, year));

Or like this:

var carCache = new ConcurrentDictionary<Tuple<string, string, int>, CarData>();
// check for car key
bool exists = carCache.ContainsKey(new Tuple<string, string, int>(carModel, engineType, year));

I don't use these parameters together anywhere else, so there is no justification to create a class just to keep them together. I want to know which approach is better in terms of performance and maintainability.
Tuple vs string as a Dictionary key in C#
I believe the issue is that when you cache the fragment in your view, a cache digest is being added to the cache key (views/all_available_releases/41cb0a928326986f35f41c52bb3d8352), but expire_fragment is not using the digest (views/all_available_releases). If you add skip_digest: true to the cache call in the view it should prevent the digest from being used. <% cache "all_available_releases", skip_digest: true do %> <% @releases.each do |release| %> <% cache(release) do %> <html code with> <%ruby code @release.name blah blah blah%> <%end%> <%end%> <%end%> Cache digests are only intended to be used with automatic cache expiration. If you need to manually expire cache keys then you can't use cache digests.
I have been trying to use the caching capabilities of Rails, but I am unable to expire some cache fragments even though the log says they are expired. Using 'Russian Doll Caching' as pointed out on the Rails tutorial site, I am using this configuration <% cache "all_available_releases" do %> <% @releases.each do |release| %> <% cache(release) do %> <html code with> <%ruby code @release.name blah blah blah%> <%end%> <%end%> <%end%> I expire the outer caching in the release_controller.rb controller, where I use expire_fragment("all_available_releases") to expire the fragment. I use it in every method of the controller that updates, deletes or adds an entry. This is the WEBrick log where, although the expire_fragment call gets registered, 5 lines later the supposedly expired fragment is read and used when it shouldn't be. This example is after a destroy call. Processing by ReleasesController#destroy as HTML Parameters: {"authenticity_token"=>"***/***/********************+********=", "id"=>"2"} Release Load (0.1ms) SELECT "releases".* FROM "releases" WHERE "releases"."id" = ? LIMIT 1 [["id", "2"]] (0.1ms) begin transaction SQL (2.0ms) DELETE FROM "releases" WHERE "releases"."id" = ? [["id", 2]] (148.0ms) commit transaction Expire fragment views/all_available_releases (0.1ms) Redirected to http://127.0.0.1:3000/releases Completed 302 Found in 180ms (ActiveRecord: 150.2ms) Started GET "/releases" for 127.0.0.1 at 2013-07-03 13:09:51 +0300 Processing by ReleasesController#index as HTML Read fragment views/all_available_releases/41cb0a928326986f35f41c52bb3d8352 (0.1ms) Rendered releases/index.html.erb within layouts/application (0.6ms) Completed 200 OK in 5ms (Views: 4.0ms | ActiveRecord: 0.0ms) I even tried using Rails.cache.delete("all_available_releases") and it didn't work either. If I delete <%cache "all_available_releases"%> (and one <%end%>) from my html.erb, the caching works fine and gets expired whenever it should.
Rails 4.0 expire_fragment/cache expiration not working
Use <cache-path>, not <external-path>. See the documentation.
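A minimal paths file using <cache-path> might look like the sketch below (the name value is illustrative — pick whatever suits your provider setup):

```xml
<!-- res/xml/provider_paths.xml -->
<paths>
    <!-- maps the files in getCacheDir() to a shareable content:// root -->
    <cache-path name="cache_files" path="." />
</paths>
```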
I found many links related to FileProvider, but I didn't find a solution for the cache directory: java.lang.IllegalArgumentException: Failed to find configured root that contains /data/data/pkg name/cache/1487876607264.png I want to use it for the CACHE DIRECTORY. How can I give the path in the provider? <paths> <external-path name="external_files" path="." /> </paths> I used it as: File file = new File(context.getCacheDir(), System.currentTimeMillis() + ".png"); Uri uri = FileProvider.getUriForFile(context, context.getApplicationContext().getPackageName() + ".provider", file); It's working fine if I give an application folder path, but not with the cache directory. Any help?
Android FileProvider for CACHE DIR : Failed to find configured root that contains
To a first approximation: ActiveRecord::Base.connection.query_cache.clear
I'm building a command line application using ActiveRecord 3.0 (without rails). How do I clear the query cache that ActiveRecord maintains?
Clearing ActiveRecord cache
You can do it at the input level in HTML by adding autocomplete="off" to the input. http://css-tricks.com/snippets/html/autocomplete-off/ You could also do it via JS such as: someForm.setAttribute( "autocomplete", "off" ); someFormElm.setAttribute( "autocomplete", "off" );
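At the markup level the same attribute can be applied to the whole form or per field — a small sketch (form action and field names are illustrative):

```html
<form action="/submit" method="post" autocomplete="off">
  <!-- autocomplete can also be set on individual inputs -->
  <input type="text" name="email" autocomplete="off">
</form>
```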
Is there a way to disable autofill in Chrome and other browsers on form fields through HTML or JavaScript? I don't want the browser automatically filling in answers on the forms from previous users of the browser. I know I can clear the cache, but I can't rely on repeatedly clearing the cache.
Disable autofill on a web form through HTML or JavaScript?
There are two basic options: disable the caching altogether (the caching is done with the cacheprovider plugin): pytest -p no:cacheprovider -p is used to disable plugins. change the cache location by tweaking the cache-dir configuration option (requires pytest 3.2+), which sets the directory where the cache plugin stores its content. The default directory is .cache, created in rootdir. The directory may be a relative or absolute path; a relative path is created relative to rootdir. Here is a sample PyCharm run configuration with no:cacheprovider (screenshot omitted).
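If you go the configuration-file route, a minimal pytest.ini using the cache-dir option might look like this (the /tmp path is just an example):

```ini
[pytest]
cache_dir = /tmp/pytest-cache
```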
I'm using Pycharm for this years Advent of Code and I'm using pytest for testing all of the examples and output. I'd prefer it if pytest didn't create the .cache directories throughout my directory tree. Is there anyway to disable the creation of .cache directories when tests fail?
Preventing pytest from creating .cache directories in Pycharm
The main reason is: performance. Another reason is power consumption. Separate dCache and iCache make it possible to fetch instructions and data in parallel. Instructions and data have different access patterns. Writes to the iCache are rare. CPU designers optimize the iCache and the CPU architecture based on the assumption that code changes are rare. For example, the AMD Software Optimization Guide for 10h and 12h Processors states that: Predecoding begins as the L1 instruction cache is filled. Predecode information is generated and stored alongside the instruction cache. The Intel Nehalem CPU features a loopback buffer, and in addition the Sandy Bridge CPU features a µop cache (see The microarchitecture of Intel, AMD and VIA CPUs). Note that these are features related to code, and have no direct counterpart in relation to data. They benefit performance, and since Intel "prohibits" CPU designers from introducing features which result in an excessive increase of power consumption, they presumably also benefit total power consumption. Most CPUs feature a data forwarding network (store to load forwarding). There is no "store to load forwarding" in relation to code, simply because code is modified much less frequently than data. Code exhibits different patterns than data. That said, most CPUs nowadays have a unified L2 cache which holds both code and data. The reason for this is that having separate L2I and L2D caches would pointlessly consume the transistor budget while failing to deliver any measurable performance gains. (Surely, the reason for having separate iCache and dCache isn't reduced complexity, because if the reason were reduced complexity then there wouldn't be any pipelining in any of the current CPU designs. A CPU with pipelining is more complex than a CPU without pipelining. We want the increased complexity. The fact is: the next CPU design is (usually) more complex than the previous design.)
This question already has an answer here: What does a 'Split' cache means. And how is it useful(if it is)? (1 answer) Closed 3 years ago. Can someone please explain what we gain by having separate instruction and data caches? Any pointers to a good link explaining this will also be appreciated.
why are separate icache and dcache needed [duplicate]
Can you tell what hash_strategy you are using for memcache? I've had problems in the past using the default standard but everything has been fine since changing to consistent: http://php.net/manual/en/memcache.ini.php#ini.memcache.hash-strategy – Jason
We are using Zend Cache with a memcached backend pointing to an AWS ElastiCache cluster with 2 cache nodes. Our cache setup looks like this: $frontend = array( 'lifetime' => (60*60*48), 'automatic_serialization' => true, 'cache_id_prefix' => $prefix ); $backend = array( 'servers' => array( array( 'host' => $node1 ), array( 'host' => $node2 ) ) ); $cache = Zend_Cache::factory('Output', 'memecached', $frontend, $backend); We have not noticed any problems with the cache in the past when using a single EC2 server to write and read from the cache. However, we have recently introduced a second EC2 server and suddenly we're seeing issues when writing to the cache from one server and reading from another. Both servers are managed by the same AWS account, and neither server has issues writing to or reading from the cache individually. The same cache configuration is used for both. Server A executes $cache->save('hello', 'message'); Subsequent calls to $cache->load('message'); from Server A return the expected result of hello. However, when Server B executes $cache->load('message');, we get false. As far as my understanding of ElastiCache goes, the server making the read request should have no bearing on the cache value returned. Can anyone shed some light on this?
Inconsistent cache values using Zend Cache with AWS ElastiCache across multiple servers
You can prevent the creation of .cache/ by disabling the "cacheprovider" plugin: py.test -p no:cacheprovider ... – vog
I need to be able to change the location of pytest's .cache directory to the env variable, WORKSPACE. Due to server permissions out of my control, I am running into this error because my user does not have permission to write in the directory where the tests are being run from: py.error.EACCES: [Permission denied]: open('/path/to/restricted/directory/tests/.cache/v/cache/lastfailed', 'w') Is there a way to set the path of the .cache directory to the environment variable WORKSPACE?
Is there a way to change the location of pytest's .cache directory?
Memcached is faster, but the memory is limited. HDD is huge, but I/O is slow compared to memory. You should put the hottest things to memcached, and all the others can go to cache files. (Or man up and invest some money into more memory like these guys :) For some benchmarks see: Cache Performance Comparison (File, Memcached, Query Cache, APC) In theory: Read 1 MB sequentially from memory       250,000 ns Disk seek 10,000,000 ns http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf
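As a rough illustration of the gap (not a substitute for the linked benchmarks — note the file read below will often be served from the OS page cache, so a genuinely cold disk seek would be far slower than what you measure here):

```python
import os
import tempfile
import time

payload = b"x" * (1 << 20)  # 1 MB of data

# In-memory "cache": a plain dict standing in for memcached's RAM store.
mem_cache = {"key": payload}

# File-backed "cache": a temp file standing in for a cache file on disk.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(payload)

t0 = time.perf_counter()
data_mem = mem_cache["key"]          # memory lookup
t1 = time.perf_counter()
with open(path, "rb") as f:          # open + read through the filesystem
    data_file = f.read()
t2 = time.perf_counter()

print("memory read: %.6fs" % (t1 - t0))
print("file read:   %.6fs" % (t2 - t1))
```

Both reads return the same bytes; the point is only the relative cost of the lookup path, which is why real deployments put the hottest keys in memory and overflow to files.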
I don't think it's clear to me yet: is it faster to read things from a file or from memcached? Why?
Which is faster/better for caching, File System or Memcached?
According to the documentation, you can ignore data from previous volumes with this: docker-compose up -d --force-recreate --renew-anon-volumes See https://docs.docker.com/compose/reference/up/
We use our gitlab-ci to build fresh images with the latest version of our code. These images are day to day built with the latest tag. We tag images during the release process. My problem is related to the latest tag. We deploy automatically these images on servers to test our product. However, on a test server if we pull the latest docker image (verified by its checksum), stop the compose and up it again, we sometime still have the content of the old image (for example a configuration file). We tried with docker-compose up -d --force-recreate but it doesn't help. The only way to fix it was: docker-compose down docker system prune -f docker rmi $(docker images -q) docker-compose pull docker-compose up -d Any better idea ?
docker-compose keeps using old image content
Server-side cache control headers should look like: Expires: Tue, 03 Jul 2001 06:00:00 GMT Last-Modified: {now} GMT Cache-Control: max-age=0, no-cache, must-revalidate, proxy-revalidate Avoid rewriting URLs on the client because it pollutes caches, and causes other weird semantic issues. Furthermore: Use one Cache-Control header (see rfc 2616) because behaviour with multiple entries is undefined. Also the MSIE specific entries in the second cache-control are at best redundant. no-store is about data security. (it only means don't write this to disk - caches are still allowed to store the response in memory). Pragma: no-cache is meaningless in a server response - it's a request header meaning that any caches receiving the request must forward it to the origin. Using both Expires (http/1.0) and cache-control (http/1.1) is not redundant since proxies exist that only speak http/1.0, or will downgrade the protocol. Technically, the last modified header is redundant in light of no-cache, but it's a good idea to leave it in there. Some browsers will ignore subsequent directives in a cache-control header after they come across one they don't recognise - so put the important stuff first.
What is the definitive solution for avoiding any kind of caching of HTTP data? We can modify the client as well as the server - so I think we can split the task between them. The client can append a random parameter to each request: http://URL/path?rand=6372637263 – My feeling is that this way alone is not 100% reliable - there might be some intelligent proxies which can detect that… On the other hand, I think that if the URL is different from the previous one, the proxy cannot simply decide to send back some cached response. On the server we can control a bunch of HTTP headers: Expires: Tue, 03 Jul 2001 06:00:00 GMT Last-Modified: {now} GMT Cache-Control: no-store, no-cache, must-revalidate, max-age=0 Cache-Control: post-check=0, pre-check=0 Pragma: no-cache Any comments on this - what is the best approach?
Avoid caching of the http responses
Summary: False sharing and cache-line ping-ponging are related but not the same thing. False sharing can cause cache-line ping-ponging, but it is not the only possible cause since cache-line ping-ponging can also be caused by true sharing. Details: False sharing False sharing occurs when different threads have data that is not shared in the program, but this data gets mapped to a cache line that is shared. For example, imagine a program that had an array of integers where one thread performed reads and writes to all of the array entries with an even index, and the other thread performed reads and writes to entries with an odd index. In this case the threads would not actually be sharing data, but they would share cache lines since each cache line would contain both odd and even indexed values (assuming the cache line was bigger than an integer, which is typically true). Cache-line ping-ponging Cache line ping-ponging is the effect where a cache line is transferred between multiple CPUs (or cores) in rapid succession. This can be caused by either false or true sharing. Essentially if multiple CPUs are trying to read and write data in the same cache line then that cache line might have to be transferred between the two threads in rapid succession, and this can cause a significant performance degradation (possibly even worse performance than if a single thread were executing). False sharing can make this problem particularly difficult to detect, because a programmer might have tried to write an application so that the threads weren't sharing data, without realizing that the data was mapped to the same cache line. But false sharing is not the only possible cause of cache-line ping-ponging. This could also be caused by true sharing where multiple threads are trying to read and write the same data.
For my bachelor thesis I have to evaluate common problems on multicore systems. In some books I have read about false sharing and in other books about cache-line ping-pong. The specific problems sound very similar, so are these the same problems just given different names? Can someone give me names of books which discuss these topics in detail? (I already have literature from Darryl Gove, Tanenbaum, ...)
Are cache-line-ping-pong and false sharing the same?
At the beginning of your HTML you need to use this: <!DOCTYPE html> <html manifest="manifest.appcache"> ... Then create manifest.appcache with this content: CACHE MANIFEST # Cache manifest version 1.0 # no cache NETWORK: *
I converted a plain vanilla HTML page to HMTL5/CSS3 with a responsive layout, and for security reasons (dictated by the security people) the page must never cache. The page previously used <meta http-equiv="Pragma" content="no-cache"> and <meta http-equiv="Expires" content="-1"> to prevent the page from being cached. What replaces this in HTML5? How do you prevent an html page from caching in the client? I've spent a week reading about manifest files, but they seem to do exactly opposite of what I want as attaching a manifest file explicitly causes the page it is attached to to cache. And please don't refer me back to the w3c definition of which meta elements are now allowed — I understand that HTML5 does not include the cache-control or Pragma in meta elements. I need to know what it does include that will prevent a page from being cached.
How to prevent html5 page from caching?
As long as you don't abuse the application state, I don't see a problem in using it for items that you don't want to expire. Alternatively, I would probably use a static variable near the code that uses it. That way you avoid going through HttpApplicationState and being forced to have a reference to System.Web when you want to access your data. But be sure to think through how you use the object(s) that you store in HttpApplicationState. If it's a DataSet which you keep adding stuff to for each request, then at some point you end up eating too much memory on the web server. The same could happen if you keep adding items to HttpApplicationState when you process requests: at some point you will force the application to restart. That's probably the advantage of using Cache in your situation. Consuming larger amounts of memory isn't as fatal because you allow ASP.NET to release the items in your cache when memory becomes scarce.
I know that most people recommend using HttpRuntime.Cache because it has more flexibility... etc. But what if you want the object to persist in the cache for the life of the application? Is there any big downside to using the Application[] object to cache things?
HttpRuntime.Cache[] vs Application[]
after some research, I found a solution: The idea is to log the cached images, binding a log function on the images' 'load' event. I first thought of storing sources in a cookie, but it's not reliable if the cache is cleared without the cookie. Moreover, it adds one more cookie to HTTP requests... Then I found the magic: window.localStorage (details) The localStorage attribute provides persistent storage areas for domains Exactly what I wanted :). This attribute is standardized in HTML5, and it already works on nearly all recent browsers (FF, Opera, Safari, IE8, Chrome). Here is the code (without handling window.localStorage non-compatible browsers): var storage = window.localStorage; if (!storage.cachedElements) { storage.cachedElements = ""; } function logCache(source) { if (storage.cachedElements.indexOf(source, 0) < 0) { if (storage.cachedElements != "") storage.cachedElements += ";"; storage.cachedElements += source; } } function cached(source) { return (storage.cachedElements.indexOf(source, 0) >= 0); } var plImages; //On DOM Ready $(document).ready(function() { plImages = $(".postLoad"); //log cached images plImages.bind('load', function() { logCache($(this).attr("src")); }); //display cached images plImages.each(function() { var source = $(this).attr("alt") if (cached(source)) $(this).attr("src", source); }); }); //After page loading $(window).bind('load', function() { //display uncached images plImages.each(function() { if ($(this).attr("src") == "") $(this).attr("src", $(this).attr("alt")); }); });
Short version question: Is there a navigator.mozIsLocallyAvailable equivalent that works on all browsers, or an alternative? Long version :) Hi, here is my situation: I want to implement an HtmlHelper extension for ASP.NET MVC that handles image post-loading easily (using jQuery). So I render the page with empty image sources, with the source specified in the "alt" attribute. I insert image sources after the "window.onload" event, and it works great. I did something like this: $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { $(this).attr("src", $(this).attr("alt")); }); }); The problem is: after the first loading, post-loaded images are cached. But if the page takes 10 seconds to load, the cached post-loaded images will be displayed after those 10 seconds. So I thought to specify image sources on the "document.ready" event if the image is cached, to display them immediately. I found this function: navigator.mozIsLocallyAvailable to check if an image is in the cache. Here is what I've done with jQuery: //specify cached image sources on dom ready $(document).ready(function() { var plImages = $(".postLoad"); plImages.each(function() { var source = $(this).attr("alt") var disponible = navigator.mozIsLocallyAvailable(source, true); if (disponible) $(this).attr("src", source); }); }); //specify uncached image sources after page loading $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { if ($(this).attr("src") == "") $(this).attr("src", $(this).attr("alt")); }); }); It works in Mozilla's DOM but it doesn't work in any other browser. I tried navigator.isLocallyAvailable: same result. Is there any alternative?
Post-loading : check if an image is in the browser cache
Thanks a lot it is exactly what I was looking for. Just made a small fix to the ETagFilter that will handle 304 in case that the content wasn't changed public class ETagAttribute : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { filterContext.HttpContext.Response.Filter = new ETagFilter(filterContext.HttpContext.Response, filterContext.RequestContext.HttpContext.Request); } } public class ETagFilter : MemoryStream { private HttpResponseBase _response = null; private HttpRequestBase _request; private Stream _filter = null; public ETagFilter(HttpResponseBase response, HttpRequestBase request) { _response = response; _request = request; _filter = response.Filter; } private string GetToken(Stream stream) { byte[] checksum = new byte[0]; checksum = MD5.Create().ComputeHash(stream); return Convert.ToBase64String(checksum, 0, checksum.Length); } public override void Write(byte[] buffer, int offset, int count) { byte[] data = new byte[count]; Buffer.BlockCopy(buffer, offset, data, 0, count); var token = GetToken(new MemoryStream(data)); string clientToken = _request.Headers["If-None-Match"]; if (token != clientToken) { _response.Headers["ETag"] = token; _filter.Write(data, 0, count); } else { _response.SuppressContent = true; _response.StatusCode = 304; _response.StatusDescription = "Not Modified"; _response.Headers["Content-Length"] = "0"; } } }
I would like to create an ETag filter in MVC. The problem is that I can't control the Response.OutputStream, if I was able to do that I would simply calculate the ETag according to the result stream. I did this thing before in WCF but couldn't find any simple idea to do that in MVC. I want to be able to write something like that [ETag] public ActionResult MyAction() { var myModel = Factory.CreateModel(); return View(myModel); } Any idea?
Create ETag filter in ASP.NET MVC
This has to do with how your browser handles resource requests. Flash has similar issues and there are a couple workarounds. Here's an article that details the issue and possible solutions. I would suggest doing something like this: Say you have this for your xap in your html: <param name="source" value="ClientBin/myApp.xap"/> I would version it so whenever you do a push you change the version number. Example: <param name="source" value="ClientBin/myApp.xap?ver=1"/>
I have a Silverlight control packaged up and deployed to a SharePoint web part. I'm having trouble with the browser loading new versions of the control after I push an update. I'm updating the assembly and file version of my xap project, but it doesn't seem to matter. The only way to get the browser to load the new xap is to go in and delete temporary Internet files. For me, during development, that's OK, but I'll need to find a solution before it's time for production. Any ideas?
Forcing browsers to reload Silverlight xap after an update
Use different URLs. If the main entry point to your website (like the main index file) is cached, then you're screwed... maybe you should register another domain name? Comment: "The site ranks high in search engines; a different URL would not work in our case because of the SEO penalty." – Chris. Reply: "You can make it a different URL by adding a URL parameter, like a timestamp: ?new=<timestamp>. This would not screw up the SEO." – Julien
I have a website that because of an ill-prepared apache conf file has instructed users to cache a website URL several years into the future. As a result, when a person visits the site, they often make no attempt to even request the page. The browser just loads the HTML from cache. This website is about to get a major update, and I would like for users to be able to see it. Is there a way for me to force a user to actually re-request the webpage? I fear that for some users, unless they happen to press F5, they may see the old webpage for several years.
How to force a browser to refresh a cached version of a webpage
Three things to try: Run it in a sampling profiler, including a "cold" run (first thing after a reboot). Should usually be enough. Check memory usage: does it grow so high (even transiently) that the OS would have to swap things out of RAM to make room for your app? That alone could be an explanation for what you're seeing. Also look at the amount of free RAM you have when you start your app. Enable system performance tools and check the I/O counters or file accesses, and make sure under FileMon / Process Explorer that you don't have some file or network accesses you've forgotten about (leftover log/test code). – Eric Grange
Something I've noticed when testing code I write is that long-running operations tend to run much longer the first time a program is run than on subsequent runs, sometimes by a factor of 10 or more. Obviously there's some sort of cold cache/warm cache issue here, but I can't seem to figure out what it is. It's not the CPU cache, since these long-running operations tend to be loops that I feed a lot of data to, and they should be fully loaded after the first iteration. (Plus, unloading and reloading the program should clear the cache.) Also, it's not the disc cache. I've ruled that out by loading all data from disc up-front and processing it afterwards, and it's the actual CPU-bound data processing that's going slowly. So what can cause my program to run slow the first time I run it, but then if I close it and run it again, it runs dramatically faster? I've seen this in several different programs that do very different things, so it seems to be a general issue. EDIT: For clarification, I'm writing in Delphi, though I don't really think this is a Delphi-specific issue. But that means that whatever the problem is, it's not related to JIT issues, garbage collection issues, or any of the other baggage that managed code brings with it. And I'm not dealing with network connections. This is pure CPU-bound processing. One example: a script compiler. It runs like this: Load entire file into memory from disc Lex the entire file into a queue of tokens Parse the queue into a tree Run codegen on the tree to produce bytecode If I feed it an enormous script file (~100k lines,) after loading the entire thing from disc into memory, the lex step takes about 15 seconds the first time I run, and 2 seconds on subsequent runs. (And yes, I know that's still a long time. I'm working on that...) I'd like to know where that slowdown is coming from and what I can do about it.
What can cause a program to run much faster the second time?
The documentation for the Cache constructor says that it is for internal use only. To get your Cache object, call HttpRuntime.Cache rather than creating an instance via the constructor.
Context: .Net 3.5, C# I'd like to have caching mechanism in my Console application. Instead of re-inventing the wheel, I'd like to use System.Web.Caching.Cache (and that's a final decision, I can't use other caching framework, don't ask why). However, it looks like System.Web.Caching.Cache is supposed to run only in a valid HTTP context. My very simple snippet looks like this: using System; using System.Web.Caching; using System.Web; Cache c = new Cache(); try { c.Insert("a", 123); } catch (Exception ex) { Console.WriteLine("cannot insert to cache, exception:"); Console.WriteLine(ex); } and the result is: cannot insert to cache, exception: System.NullReferenceException: Object reference not set to an instance of an object. at System.Web.Caching.Cache.Insert(String key, Object value) at MyClass.RunSnippet() So obviously, I'm doing something wrong here. Any ideas? Update: +1 to most answers, getting the cache via static methods is the correct usage, namely HttpRuntime.Cache and HttpContext.Current.Cache. Thank you all!
How can I use System.Web.Caching.Cache in a Console application?
If you are using php to check if the user is logged in before outputting the message, then you don't want the browser to cache the image. The entire point of caching is to call the server once and then never call it again. If the browser caches the image, it won't call the server and your script won't run. Instead, the browser will pull your image from cache and display it, even if the user is no longer logged in. This could potentially be a very big security hole.
I'm totally new to how to cache images. I output all images in a gallery with PHP, and want the images already shown, to be cached by the browser, so the PHP script don't have to output the same image again. All I want is the images to show up faster. When calling an image I do like this: <img src="showImage.php?id=601"> and the showImage.php-file does: $id = (int) $_GET['id']; $resultat = mysql_query(" SELECT filename, id FROM Media WHERE id = $id "); $data = mysql_fetch_assoc($resultat); ... //Only if the user are logged in if(isset($_SESSION['user'])){ header("Content-Type: image/jpeg"); //$data['filename'] can be = dsSGKLMsgKkD3325J.jpg echo(file_get_contents("images/".$data['filename']."")); }
How to get the browser to cache images, with PHP?
Disclaimer: My rails is a bit rusty, but this or something like it should work ActionController::Base.new.expire_fragment(key, options = nil)
I've pretty much tried everything, but it seems impossible to use expire_fragment from models? I know you're not supposed to and it's non-MVC, but surely there much be some way to do it. I created a module in lib/cache_helper.rb with all my expire helpers, within each are just a bunch of expire_fragment calls. I have all my cache sweepers setup under /app/sweepers and have an "include CacheHelper" in my application controller so expiring cache within the app when called via controllers works fine. Then things is I have some external daemons and especially some recurring cron tasks which call a rake task that calls a certain method. This method does some processing and inputs entries into the model, after which I need to expire cache. What's the best way to do this as I can't specify cache sweeper within the model. Straight up observers seem to be the best solution but then it complains about expire_fragment being undefined etc etc, I've even tried including the ActionController caching classes into the observer but that didn't work. I'd love some ideas of how to create a solution for this. Thanks.
How to call expire_fragment from Rails Observer/Model?
Could you please tell me in what cases should I use rdd.cache() and rdd.broadcast() methods? RDDs are divided into partitions. These partitions themselves act as an immutable subset of the entire RDD. When Spark executes each stage of the graph, each partition gets sent to a worker which operates on the subset of the data. In turn, each worker can cache the data if the RDD needs to be re-iterated. Broadcast variables are used to send some immutable state once to each worker. You use them when you want a local copy of a variable. These two operations are quite different from each other, and each one represents a solution to a different problem.
It looks like the broadcast method makes a distributed copy of an RDD in my cluster. On the other hand, execution of the cache() method simply loads data in memory. But I do not understand how a cached RDD is distributed in the cluster. Could you please tell me in what cases I should use the rdd.cache() and rdd.broadcast() methods?
Spark cache vs broadcast
Guava team member here. The Guava Cache implementation expires entries in the course of normal maintenance operations, which occur on a per-segment basis during cache write operations and occasionally during cache read operations. Entries usually aren't expired at exactly their expiration time, just because Cache makes the deliberate decision not to create its own maintenance thread, but rather to let the user decide whether continuous maintenance is required. I'm going to focus on expireAfterAccess, but the procedure for expireAfterWrite is almost identical. In terms of the mechanics, when you specify expireAfterAccess in the CacheBuilder, then each segment of the cache maintains a linked list access queue for entries in order from least-recent-access to most-recent-access. The cache entries are actually themselves nodes in the linked list, so when an entry is accessed, it removes itself from its old position in the access queue, and moves itself to the end of the queue. When cache maintenance is performed, all the cache has to do is to expire every entry at the front of the queue until it finds an unexpired entry. This is straightforward and requires relatively little overhead, and it occurs in the course of normal cache maintenance. (Additionally, the cache deliberately limits the amount of work done in a single cleanup, minimizing the expense to any single cache operation.) Typically, the cost of cache maintenance is dominated by the expense of computing the actual entries in the cache.
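To make those mechanics concrete, here is a toy Python model of an expire-after-access cache (my own simplified sketch for illustration, not Guava's actual implementation): entries sit in an ordered map from least- to most-recently accessed, every read moves the entry to the back, and cleanup only has to pop expired entries off the front until it meets an unexpired one.

```python
import time
from collections import OrderedDict

class AccessExpiryCache:
    """Toy model of expire-after-access: entries are kept in
    least-recently-accessed order, so cleanup only inspects the front."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = OrderedDict()  # key -> (value, last_access_time)

    def _cleanup(self):
        now = self.clock()
        # Expire from the front (least recently accessed) until an
        # unexpired entry is found; everything behind it is newer.
        while self._entries:
            key, (value, last_access) = next(iter(self._entries.items()))
            if now - last_access < self.ttl:
                break
            self._entries.popitem(last=False)

    def put(self, key, value):
        self._cleanup()
        self._entries[key] = (value, self.clock())
        self._entries.move_to_end(key)

    def get(self, key):
        self._cleanup()
        if key not in self._entries:
            return None
        value, _ = self._entries[key]
        self._entries[key] = (value, self.clock())  # refresh access time
        self._entries.move_to_end(key)              # move to back of queue
        return value
```

With a fake clock you can watch an old entry expire during normal operations while a fresher one survives, which is the behaviour described above: expiry happens as a side effect of reads and writes, not on a dedicated thread.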
I want to use a CacheBuilder, as recommended here: Java time-based map/cache with expiring keys However I don't understand when Guava knows to expire entries. How does Guava do it and what performance cost does it incurr?
How does Guava expire entries in its CacheBuilder?
There is a very good cache solution named MapDB (formerly JDBM4). It supports HashMap and TreeMap, but it is only application-embedded. It also supports a persistent file-based cache. Example for an off-heap cache:

    DB db = DBMaker.newDirectMemoryDB().make();
    ConcurrentNavigableMap<Integer, String> map = db.getTreeMap("MyCache");

Or a persistent file-based cache:

    DB db = DBMaker.newFileDB(new File("/home/collection.db")).closeOnJvmShutdown().make();
    ConcurrentNavigableMap<Integer, String> map = db.getTreeMap("MyCache");
Is there any open-source alternative for Terracotta BigMemory? Actually I didn't even manage to find any commercial alternative. I'm interested in pure Java solution which will work inside JVM without any JNI and C-backed solution.
Is there a open-source off-heap cache solution for Java?
Yes, redis is good for that. But to get the gist, there are basically two approaches to caching. Depending on whether you use a framework (and which one) or not, you may have the first option available as standard or through a plug-in:

1. Cache database queries: selected queries and their results will be kept in redis for quicker access for a given time or until the cache is cleared (useful after updating the database). In this case you can use built-in mysql query caching, which will be simpler than using an additional key-value store, or you can override the default database integration with your own class that makes use of the cache (for example http://pythonhosted.org/johnny-cache/).

2. Custom caching: creating your own structures to be kept in cache and periodically or manually refilling them with data fetched from the database. It is more flexible and potentially more powerful, because you can use built-in redis features such as lists or sorted sets, which make the update overhead much smaller. It requires a bit more coding, but it usually offers better results, since it is more customized. A good example is keeping top articles in the form of a redis list of ids, and then accessing the serialized article(s) by id, also from redis. You can keep the article denormalized, i.e. the serialized object can contain the user id as well as the user name, so that you can keep the overhead of additional queries to a minimum.

It is up to you to decide which approach to take; I personally almost always go with approach number two. But, of course, everything depends on how much time you have and what the application is supposed to do: you might as well start with mysql query caching and, if the results are not good enough, move to redis and custom caching.
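As an illustration of the second approach, here is a minimal Python sketch. The dict kv is only a stand-in for Redis (with the real redis-py client these would be GET/SET calls and a Redis list manipulated via LPUSH/LTRIM), and the article fields are hypothetical:

```python
import json

# A plain dict stands in for Redis here; with redis-py, kv reads/writes
# would be client.get/client.set, and the id list a Redis list.
kv = {}

def cache_article(article):
    # Denormalized: the author name is stored alongside the article to
    # avoid an extra lookup on read (the trade-off described above).
    kv["article:%d" % article["id"]] = json.dumps(article)
    top = kv.setdefault("top_articles", [])
    top.insert(0, article["id"])        # LPUSH-style: newest id first
    del top[10:]                        # LTRIM-style: keep the top 10

def top_articles():
    # One id-list read plus one read per article, no SQL involved.
    return [json.loads(kv["article:%d" % aid])
            for aid in kv.get("top_articles", [])]
```

The cron job or write path calls cache_article after updating MySQL; the page handler only ever calls top_articles.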
I need to create a solution using PHP, with a MySQL database with lots of data. My program will have many requests; I think that if I work with a cache and an OO database, I'll have a good result, but I don't have experience. I think, for example, that if I cache the information that is saved in MySQL in a redis database, performance will be improved, but I don't know if this is a good idea, so I would like someone to help me choose. Sorry if my English is not very good, I'm from Brazil.
Using redis as a cache for a mysql database
The warm up is just the period of loading a set of data so that the cache gets populated with valid data. If you're doing performance testing against a system that usually has a high frequency of cache hits, without the warm up you'll get false numbers because what would normally be a cache hit in your usage scenario is not and will drag your numbers down.
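The effect is easy to demonstrate with a toy counting cache (a Python sketch I wrote for illustration, not taken from any simulator): the same hot workload shows a perfect hit rate after warm-up, while the cold run pays the compulsory misses first and reports a lower, misleading number.

```python
class CountingCache:
    """Cache that counts hits and misses so hit rate can be measured."""

    def __init__(self, loader):
        self.loader = loader   # fallback used on a miss
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = self.loader(key)
        return self.store[key]

def hit_rate(cache, workload):
    cache.hits = cache.misses = 0
    for key in workload:
        cache.get(key)
    return cache.hits / (cache.hits + cache.misses)

workload = [1, 2, 3] * 5                   # hot set of three keys

cold = CountingCache(loader=lambda k: k * k)
cold_rate = hit_rate(cold, workload)       # compulsory misses drag this down

warm = CountingCache(loader=lambda k: k * k)
for key in {1, 2, 3}:
    warm.get(key)                          # warm-up: pre-load the hot set
warm_rate = hit_rate(warm, workload)       # steady-state: every access hits
```

The warm run's rate is the one representative of steady-state behaviour, which is why simulators offer the warm-up option before measurement starts.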
I am working with some multicore simulators such as GEMS or M5. In all of them there is an option to "Warm up the cache". What does that term mean?
What is a Warm-Up Cache?
In a very simplified form, Pragma: no-cache and Pragma: cache are now "almost" obsolete ways of passing caching instructions to client implementations, specifically browsers and proxies. The way client implementations respond to Pragma headers varies, which is why the specification says it is implementation-specific. The more modern Cache-Control is what you can safely depend on, as almost all client implementations follow it rigidly. Also, if you have both Cache-Control and Pragma set for the same instruction, say caching, then Cache-Control takes precedence. This is an excellent article about everything related to caching and I think it makes a very interesting and useful read: http://www.mnot.net/cache_docs/
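The precedence rule can be expressed as a tiny header-resolution function (a toy model of the client-side decision, not any specific browser's logic):

```python
def caching_policy(headers):
    """Resolve a response's caching instruction as described above:
    Cache-Control, when present, wins over Pragma."""
    normalized = {k.lower(): v for k, v in headers.items()}
    if "cache-control" in normalized:
        return ("cache-control", normalized["cache-control"])
    if "pragma" in normalized:
        return ("pragma", normalized["pragma"])
    return ("default", None)
```

So for the headers in the question, a conforming client acts on max-age=600 and the Pragma value is irrelevant.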
So I am sending a header in php to cache my page (this integrates into our "CDN" (contendo/akamai) as well). I always use this pragma: cache header, I've seen various examples use it as well; however, I just checked fiddler to test traffic for this .net application we developed and it says: Legacy Pragma Header is present: cache !! Warning IE supports only an EXACT match of "Pragma: no-cache". IE will ignore the Pragma header if any other values are present. ... I suppose that is ok. The rest of the response seems fine and to my specs. Here is my code: function headers_for_page_cache($cache_length=600){ $cache_expire_date = gmdate("D, d M Y H:i:s", time() + $cache_length); header("Expires: $cache_expire_date"); header("Pragma: cache"); header("Cache-Control: max-age=$cache_length"); header("User-Cache-Control: max-age=$cache_length"); } The question is does this matter? What does the pragma header even do? Do I need it? I checked the HTTP header spec documentation and it said it is implementation specific and the only Pragma that is enforced is "Pragma: no-cache". Is this the best choice of headers to cache for a specific amount of time?
What is the Pragma Header? Caching pages.. and IE
I just got my portfolio website to 99/100. Google says: We recommend a minimum cache time of one week and preferably up to one year for static assets.

    "headers": [
      {
        "source": "**/*.@(eot|otf|ttf|ttc|woff|font.css)",
        "headers": [ { "key": "Access-Control-Allow-Origin", "value": "*" } ]
      },
      {
        "source": "**/*.@(js|css)",
        "headers": [ { "key": "Cache-Control", "value": "max-age=604800" } ]
      },
      {
        "source": "**/*.@(jpg|jpeg|gif|png)",
        "headers": [ { "key": "Cache-Control", "value": "max-age=604800" } ]
      },
      {
        // Sets the cache header for 404 pages to cache for 5 minutes
        "source": "404.html",
        "headers": [ { "key": "Cache-Control", "value": "max-age=300" } ]
      }
    ]

Use this, it works for me.
I have hosted my personal blog on Google's Firebase. My blog is based on Jekyll. Firebase provides a firebase.json file from where the owner of the project can modify the HTTP headers. I have my css files at https://blogprime.com/assets/css/init.css and my fonts at https://blogprime.com/assets/font/fontname.woff (http cache control not working). My images are stored inside https://blogprime.com/assets/img/imagename.extension (http cache control working), even though both images and css/fonts are two directories deep from the root. Now here's my .json file code:

    {"hosting": {"public": "public", "headers": [
      {"source": "**/*.@(eot|otf|ttf|ttc|woff|css)",
       "headers": [ {"key": "Access-Control-Allow-Origin", "value": "*"} ]},
      {"source": "**/*.@(jpg|jpeg|gif|png)",
       "headers": [ {"key": "Cache-Control", "value": "max-age=30672000"} ]},
      {"source": "404.html",
       "headers": [ {"key": "Cache-Control", "value": "max-age=300"} ]}
    ]}}

Before adding this, my images and everything had 1 hour of cache lifespan... but now only my css files along with font files have a 1-hour cache lifespan. Can you please tell me how to fix the "Leverage Browser Caching" for my css files? I think there's some problem with the directory link structure I have in "source": "**/*.@(eot|otf|ttf|ttc|woff|css)". I really don't know how to fix it. You can check the Google PageSpeed test: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fblogprime.com%2Fwordpress%2Fdns-prefetch-in-wordpress.html
How to Leverage Browser Caching in Firebase hosting
First thing to note is that when you hit F5 (refresh) in Chrome, Safari or IE the images will be requested again, even if they've been cached in the browser. To tell the browser that it doesn't need to download the image again you'll need to return a 304 response with no content, as per below. Response.StatusCode = 304; Response.StatusDescription = "Not Modified"; Response.AddHeader("Content-Length", "0"); You'll want to check the If-Modified-Since request header before returning the 304 response though. So you'll need to check the If-Modified-Since date against the modified date of your image resource (whether this be from the file or stored in the database, etc). If the file hasn't changed then return the 304, otherwise return with the image (resource). Here are some good examples of implementing this functionality (these are for a HttpHandler but the same principles can be applied to the MVC action method) Make your browser cache the output of a httphandler HTTP Handler implementing dynamic resource caching
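The same conditional-GET decision, independent of ASP.NET, can be sketched in Python (the helper below is hypothetical, but it shows the logic described above: compare If-Modified-Since against the resource's modification time and answer with an empty 304 when nothing changed):

```python
from email.utils import parsedate_to_datetime

def respond(resource_mtime, if_modified_since=None):
    """Decide between a full 200 response and an empty 304.

    resource_mtime: timezone-aware datetime of the resource's last change.
    if_modified_since: raw request header value, or None if absent.
    """
    if if_modified_since is not None:
        client_copy_time = parsedate_to_datetime(if_modified_since)
        if resource_mtime <= client_copy_time:
            # Client's copy is current: no body, just the 304 status.
            return 304, b""
    # First request, or the resource changed since the client cached it.
    return 200, b"...resource bytes..."
```

The key point is the branch order: only fall through to serving the full image when the header is missing or the resource really is newer.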
I have an actionmethod that returns a File and has only one argument (an id). e.g. public ActionResult Icon(long id) { return File(Server.MapPath("~/Content/Images/image" + id + ".png"), "image/png"); } I want the browser to automatically cache this image the first time I access it so the next time it doesn't have to download all the data. I have tried using things like the OutputCacheAttribute and manually setting headers on the response. i.e: [OutputCache(Duration = 360000)] or Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetExpires(Cache.NoAbsoluteExpiration); But the image is still loaded every time I hit F5 on the browser (I'm trying it on Chrome and IE). (I know it is loaded every time because if I change the image it also changes in the browser). I see that the HTTP response has some headers that apparently should work: Cache-Control:public, max-age=360000 Content-Length:39317 Content-Type:image/png Date:Tue, 31 Jan 2012 23:20:57 GMT Expires:Sun, 05 Feb 2012 03:20:56 GMT Last-Modified:Tue, 31 Jan 2012 23:20:56 GMT But the request headers have this: Pragma:no-cache Any idea on how to do this? Thanks a lot
ASP.NET MVC: Make browser cache images from action
It really depends on which cache storage you're using. Rails provides several; one of the most popular is Memcached. One of the key features of Memcached is that it automatically expires old unused records, so you can forget about the :expires_in option. Other Rails cache stores, like the memory store or the redis store, will not expire entries unless you explicitly specify when to do so. More about how cache key expiration works in Rails.
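The semantics can be modelled in a few lines of Python (a sketch of the behaviour, not Rails internals): an entry written without a TTL simply carries no expiry timestamp, so only an explicit delete, or the store's own eviction policy in Memcached's case, ever removes it.

```python
import time

class SimpleStore:
    """Toy model of Rails.cache.fetch for memory/redis-style stores:
    an entry without expires_in stays until explicitly deleted."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.data = {}   # key -> (value, expires_at or None)

    def fetch(self, key, expires_in=None, compute=lambda: None):
        entry = self.data.get(key)
        if entry is not None:
            value, expires_at = entry
            if expires_at is None or self.clock() < expires_at:
                return value          # cached and still valid
        # Miss or expired: recompute and store with optional deadline.
        value = compute()
        expires_at = None if expires_in is None else self.clock() + expires_in
        self.data[key] = (value, expires_at)
        return value
```

Fetching with expires_in behaves like Rails' :expires_in option; fetching without it yields the "never expires" default the question asks about.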
I've done some googling and couldn't find the answer to this question. Rails allows to specify expiry times for its cache like that: Rails.cache.fetch("my_var", :expires_in => 10.seconds) But what happens if I specify nothing: Rails.cache.fetch("my_var") It never expires? Is there a default value? How can I explicitly define something that never expires?
What is the default expiry time for Rails cache?
You have to distinguish between the object reference and the actual object. For the reference, your field modifier is relevant. When you change the reference to a different object (i.e. reference a different String), the change might not be noticed by a different thread. If you want to enforce visibility you have to use final or volatile. The actual object on the heap is not affected by the field modifier. Instead, how you see each field of this object is determined by its own field modifier according to the same rules (is it volatile or final? If not, visibility for concurrent threads is not enforced). So the answer is: yes, you have to add volatile or final. Stylistically it would be much better to make the field final, though. It has the same effect thread-wise but is a stronger statement, too: this field cannot be changed, which is the reason why it can be cached freely by the JVM. And for the same reason it comes with a little performance benefit compared to volatile: Java need not care whether the field changes again and does not need to add overhead.
We often use volatile to ensure that a condition variable can be visible to every Thread. I see the volatile fields are all primitive type in code so far. Does object field has this problem? For example: class a { public String str; public List list; } If there are some threads which will access str and list, must I add 'volatile'? I guess each access to Object will get directly from Heap, and the Object will not be cached like primitive type. Is that right?
Do I need volatile for variables of reference types, too?
I have been working in a similar situation, where I just want to cache specific elements, and want these elements to be loaded once on startup and kept in cache until the application is shut down. This is a read-only cache, and is used to populate a list of countries, so that a user can select their country from the list. I used Fluent NHibernate mappings, and defined my Country class map with Cache.ReadOnly():

    public class CountryMap : ClassMap<Country> {
        public CountryMap() {
            Schema("Dropdowns");
            Cache.ReadOnly();
            // Class mappings underneath
        }
    }

My user class map looks like this:

    public class UserMap : ClassMap<User> {
        public UserMap() {
            Id(x => x.Id).Column("UserId");
            Map(x => x.FirstName);
            Map(x => x.LastName);
            References(x => x.Country)
                .Column("CountryId");
        }
    }

I manually configure Fluent NHibernate to use the second-level cache. So in my fluent configuration I have:

    var sessionFactory = Fluently.Configure()
        .Database(...)   // set up db here
        .Mappings(...)   // set up mapping here
        .ExposeConfiguration(c => {
            // People advise not to use NHibernate.Cache.HashtableCacheProvider for production
            c.SetProperty("cache.provider_class", "NHibernate.Cache.HashtableCacheProvider");
            c.SetProperty("cache.use_second_level_cache", "true");
            c.SetProperty("cache.use_query_cache", "true");
        })
        .BuildSessionFactory();

I have checked in SQL Profiler, and when I get a list of countries for a user, they are loaded once, and I get cache hits on every subsequent request. The nice thing is that when displaying the user's country name, it loads from the cache and does not make a request to the database. I got some tips from this posting by Gabriel Schenker. Hope that helps? If you found a better/proper way, please let me know? Thanks!
Does anyone have an example how to set up and what entities to cache in fluent nhibernate. Both using fluent mapping and auto mapping? And the same for entity relationships, both one-to-many and many-to-many?
Set up caching on entities and relationships in Fluent Nhibernate?
Yes, it is safe. I have deleted the npm and npm-cache folders manually and reinstalled node, and it is working fine. (From the comments: it is not strictly necessary to reinstall node after deleting them.)
npm cache clean -f is not able to clear the npm_cache folder located at the path C:\Users\jerry\AppData\Roaming\npm-cache. Though it clears some of the files in this folder. Output of command: npm WARN I sure hope you know what you are doing. However, Node.js page says clean command will delete all data out of the cache folder. So, why is it not happening? Would it be okay if I manually delete the folder? I'm on Windows 10 with node 8.7.0
Is it safe to remove npm-cache folder in windows?
So it's been a while since I asked this question and I thought i'd give some info about how we solved it. Basically we encoded the data into PNG's using a similar technique to this: http://audioscene.org/scene-files/yury/pngencoding/sample.html Then cached the image on the mobile device using html5 local storage and accessed it as needed. The PNG's were pretty big but this worked for us.
I've been experimenting with the audio and local storage features of HTML5 of late and have run into something that has me stumped. I'd like to be able to cache or store the source of the audio element locally to enable speedier and offline playback. The problem is I can't see how this is possible with the current implementation. I have tried the following using WebKit: Creating a manifest file to set up local caching, but the audio file appears not to be a cacheable item, maybe due to the way it is streamed or something. I have also attempted to use JavaScript to put an audio object into local storage, but the size of the mp3 makes this impossible due to memory issues (I think). I have tried to use the data URI and base64 to use the HTML as an audio transport that can be cached, but again the file size makes this prohibitive. Also, the audio element does not seem to like this in WebKit (works fine in Mozilla). I have tried several methods of putting the data into the local database store, again suffering the same issues as the other cases. I'd love to hear any other ideas anyone may have as to how I could achieve my goal of offline playback using caching/local storage in WebKit.
HTML5 Local Storage of audio element source - is it possible?
I'll answer my own question since no one else did and it could help others. The problem I had when dealing with this issue was a misconception about cache usage. My need posted in this question was related to how to update members of a cached list (method response). This problem cannot be solved with the cache, because the cached value was the list itself and we cannot update a cached value partially. The way I wanted to tackle this problem is related to a "map" or a distributed map, but I wanted to use the @Cacheable annotation. Using a distributed map would have achieved what I asked in my question without using @Cacheable, so the returned list could have been updated. So I had (wanted) to tackle this problem using @Cacheable from another angle: any time the data is modified, I evict the whole cache. I used the code below to fix my problem:

    @Cacheable("users")
    List<User> list() {
        return userRepository.findAll()
    }

    // Refresh the cache any time the data is modified
    @CacheEvict(value = "users", allEntries = true)
    void create(User user) {
        userRepository.create(user)
    }

    @CacheEvict(value = "users", allEntries = true)
    void update(User user) {
        userRepository.update(user)
    }

    @CacheEvict(value = "users", allEntries = true)
    void delete(User user) {
        userRepository.delete(user)
    }

In addition, I have enabled the logging output for Spring cache to ensure/learn how the cache is working:

    # Log Spring Cache output
    logging.level.org.springframework.cache=TRACE
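The cache-plus-evict-everything pattern generalizes beyond Spring. Here is a small Python sketch (hypothetical decorators of my own, loosely mimicking @Cacheable and @CacheEvict(allEntries = true)) that shows both the staleness problem and how eviction on every write fixes it:

```python
_cache = {}

def cacheable(name):
    """Cache the function's result under (name, *args), like @Cacheable."""
    def wrap(fn):
        def inner(*args):
            key = (name,) + args
            if key not in _cache:
                _cache[key] = fn(*args)
            return _cache[key]
        return inner
    return wrap

def cache_evict_all(name):
    """Drop every entry in the named region after the call,
    like @CacheEvict(allEntries = true)."""
    def wrap(fn):
        def inner(*args):
            result = fn(*args)
            for key in [k for k in _cache if k[0] == name]:
                del _cache[key]
            return result
        return inner
    return wrap

users_db = []           # stand-in for the repository/data engine

@cacheable("users")
def list_users():
    return list(users_db)

@cache_evict_all("users")
def create_user(user):
    users_db.append(user)
```

A write that bypasses the evicting path leaves the cached list stale, which is exactly why every mutating operation has to carry the evict.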
I'm trying to add caching to a CRUD app. I started doing something like this:

    @Cacheable("users")
    List<User> list() {
        return userRepository.findAll()
    }

    @CachePut(value = "users", key = "#user.id")
    void create(User user) {
        userRepository.create(user)
    }

    @CachePut(value = "users", key = "#user.id")
    void update(User user) {
        userRepository.update(user)
    }

    @CacheEvict(value = "users", key = "#user.id")
    void delete(User user) {
        userRepository.delete(user)
    }

The problem I have is that I would like the create/update/delete operations to update the elements already stored in the cache for the list() operation (note that list() is not pulling from the database but from a data engine), but I am not able to do it. I would like to cache all elements returned by list() individually, so all other operations can update the cache by using #user.id. Or perhaps make all operations update the list already stored in the cache. I read that I could evict the whole cache when it is updated, but I want to avoid something like:

    @CacheEvict(value = "users", allEntries=true)
    void create(User user) {
        userRepository.create(user)
    }

Is there any way to create/update/remove values within a cached collection? Or to cache all values from a collection as individual keys?
Spring cache all elements in list separately
I recommend generating a hash of the content, e.g. md5($content). Additionally, to prevent hash collisions, you might want to add e.g. the ID of the content element to it (if this is appropriate). (From the comments: with large files, reading the whole content to hash it can be slow; in that case file metadata, size and timestamp, should be used instead.)
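Both variants can be sketched in a few lines of Python (function names are mine, for illustration): the content hash with an optional ID mixed in to disambiguate, and a metadata-based tag that avoids reading large files at all.

```python
import hashlib

def etag_from_content(content: bytes, resource_id: str = "") -> str:
    # Content hash; the optional resource id guards against collisions
    # between distinct resources that happen to share the same bytes.
    digest = hashlib.md5(resource_id.encode() + content).hexdigest()
    return '"%s"' % digest    # ETag values are quoted strings

def etag_from_metadata(size: int, mtime: float) -> str:
    # Cheaper alternative for large files: derive the tag from
    # file size and modification time instead of the full content.
    return '"%x-%x"' % (size, int(mtime))
```

The content hash changes if and only if the bytes change; the metadata tag is what servers like Apache traditionally derive their default ETags from.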
What's a good method of programmatically generating an ETag for web pages, and is this practice recommended? Some sites recommend turning ETags off, others recommend producing them manually, and some recommend leaving the default settings active. What's the best way here?
What's the best way to create an etag? [closed]
Make sure your backend does not return Set-Cookie header. If Nginx sees it, it disables caching. If this is your case, the best option is to fix your backend. When fixing the backend is not an option, it's possible to instruct Nginx to ignore Set-Cookie header proxy_ignore_headers "Set-Cookie"; proxy_hide_header "Set-Cookie"; See the documentation proxy_ignore_header will ensure that the caching takes place. proxy_hide_header will ensure the Cookie payload is not included in the cached payload. This is important to avoid leaking cookies via the NGINX cache.
I'm trying to cache static content, which is basically inside the paths below in the virtual server configuration. For some reason files are not being cached. I see several folders and files inside the cache dir, but it's always something like 20mb, no higher, no lower. If it were caching images, for example, it would take at least 500mb of space. Here is the nginx.conf cache part:

    ** nginx.conf **
    proxy_cache_path /usr/share/nginx/www/cache levels=1:2 keys_zone=static$
    proxy_temp_path /usr/share/nginx/www/tmp;
    proxy_read_timeout 300s;

Here's the default virtual server:

    **sites-available/default**
    server {
        listen 80;
        root /usr/share/nginx/www;
        server_name myserver;

        access_log /var/log/nginx/myserver.log main;
        error_log /var/log/nginx/error.log;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location ~* ^/(thumbs|images|css|js|pubimg)/(.*)$ {
            proxy_pass http://backend;
            proxy_cache static;
            proxy_cache_min_uses 1;
            proxy_cache_valid 200 301 302 120m;
            proxy_cache_valid 404 1m;
            expires max;
        }

        location / {
            proxy_pass http://backend;
        }
    }
nginx as cache proxy not caching anything
According to http://code.google.com/speed/page-speed/docs/using_mod.html#htaccess you can turn off the module with the line ModPagespeed off in a .htaccess file. The best solution would be to have a non-live development environment that didn't have mod_pagespeed on at all, or where it could be added only for some final testing.
I'm working on a site which I haven't coded from scratch, and in Firebug the css files are being displayed as style.css.pagespeed.ce.5d2Z68nynm.css, with the pagespeed extension. Can anyone tell me what's doing this, as I can't find it? I'm guessing mod_pagespeed possibly running on the server? I want to turn it off for now because it's caching my css and stopping updates, which is really annoying to develop with. Thanks in advance.
Pagespeed caching css, annoying to develop
Generally, dirty flags are used to indicate that some data has changed and needs to eventually be written to some external destination. It isn't written immediately because adjacent data may also get changed, and writing data in bulk is generally more efficient than writing individual values.
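A minimal sketch of the pattern in Python (illustrative, not from any particular codebase): changes only raise the flag, and the expensive write happens once, in bulk, when flush() runs.

```python
class Document:
    """Track changes with a dirty flag and write them out in bulk."""

    def __init__(self, backend):
        self.backend = backend   # e.g. a file or database writer
        self.fields = {}
        self.dirty = False

    def set(self, key, value):
        self.fields[key] = value
        self.dirty = True        # mark: in-memory state diverged

    def flush(self):
        if not self.dirty:
            return False         # clean: skip the I/O entirely
        self.backend.append(dict(self.fields))   # one bulk write
        self.dirty = False
        return True
```

Any number of set() calls between flushes costs only one write, and a flush on clean data costs nothing, which is the whole point of the flag.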
I see some variables named 'dirty' in some source code at work and some other code. What does it mean? What is a dirty flag?
What does 'dirty-flag' / 'dirty-values' mean?
From Wiki: In computer science, a cache (pronounced /kæʃ/, kash) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache operates as a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data. whereas A Translation lookaside buffer(TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed. It was the first cache introduced in processors. All current desktop and server processors (such as x86) use a TLB. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. It is typically a content-addressable memory (CAM), in which the search key is the virtual address and the search result is a physical address. Also have a look at this.
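The relationship is easiest to see in a toy model (a Python sketch, wildly simplified compared to real hardware): the TLB is just a small cache sitting in front of the page-table lookup, keyed by virtual page number.

```python
PAGE_SIZE = 4096

page_table = {0x1: 0x9, 0x2: 0x4}   # virtual page number -> physical frame
tlb = {}                             # small cache of recent translations
stats = {"tlb_hits": 0, "walks": 0}

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                   # TLB hit: no page-table walk needed
        stats["tlb_hits"] += 1
        frame = tlb[vpn]
    else:                            # TLB miss: walk the page table
        stats["walks"] += 1
        frame = page_table[vpn]
        tlb[vpn] = frame             # cache the translation for next time
    return frame * PAGE_SIZE + offset
```

Two accesses to the same page trigger only one page-table walk; a data cache works the same way but holds memory contents rather than address translations.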
What is the difference between Cache and Translation LookAside Buffer [TLB] ?
Difference between Cache and Translation LookAside Buffer[TLB]
You can use the IIS Rewrite Module 2.0 to remove the ETag. The following rewrite rule should do it: <rewrite> <outboundRules> <rule name="Remove ETag"> <match serverVariable="RESPONSE_ETag" pattern=".+" /> <action type="Rewrite" value="" /> </rule> </outboundRules> </rewrite> You can see an example image of the rule configuration in IIS Manager on my blog.
I know this question has been asked alot of times, however most of them were in 2009-2010. I am pretty sure a while back a project I was working on removed them, however I cannot find any way to remove them at the moment. So has there been any advances in this field? It seems crazy that microsoft has made IIS to not be able to easily configure these headers. Currently have tried: Adding a blank etag header to the web.config Adding an etag with quotes inside to the web.config Adding a blank etag header directly through IIS Adding a custom module which removes an etag on BeginResponse Same as above but for EndResponse Same as both above but instead of removing an etag, make it empty I hear there is an ISAPI filter you can get to remove them, but I cannot find it anywhere, and have no experience in writing one from scratch but may end up being the only way to do it. Just so there is some reason why I want to remove Etags for everything. I let the clients cache everything (expires & last-modified) so once my static files are gotten from the server it never needs to query the server again until it expires. As if you use Etags you still need to make a request to the server for each file to find out if the tag still matches. So using the client cache you only make requests for the content you need. I also have a versioning system in place so when a change happens the static content is then referenced as my.js?12345 rather than my.js?12344. Anyway the point is I currently believe removing Etags will greatly improve one of the bottlenecks on my current project.
IIS 7.5 remove etag headers from response
Looking into the tornado/web.py it seems that the easiest way is to subclass the StaticFileHandler and override the set_extra_headers method. def set_extra_headers(self, path): self.set_header("Cache-control", "no-cache")
By default, Tornado puts a Cache-Control: public header on any file served by a StaticFileHandler. How can this be changed to Cache-Control: no-cache?
Disable static file caching in Tornado
Yes, any change in any part of the URL (excluding HTTP and HTTPS protocols changes) is interpreted as a different resource by the browser (and any intermediary proxies), and will thus result in a separate entity in the browser-cache. Update: The claim in this ThinkVitamin article that Opera and Safari/Webkit browsers don't cache URLs with ?query=strings is false. Adding a version number parameter to a URL is a perfectly acceptable way to do cache-busting. What may have confused the author of the ThinkVitamin article is the fact that hitting Enter in the address/location bar in Safari and Opera results in different behavior for URLs with query string in them. However, (and this is the important part!) Opera and Safari behave just like IE and Firefox when it comes to caching embedded/linked images and stylesheets and scripts in web pages - regardless of whether they have "?" characters in their URLs. (This can be verified with a simple test on a normal Apache server.) (I would have commented on the currently accepted answer if I had the reputation to do it. :-)
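A toy illustration of why this works (hypothetical Python, modelling the browser cache as a dict keyed by the full URL):

```python
def versioned_url(path, version):
    # Any change in the URL, path segment or query string alike,
    # yields a distinct cache key, so bumping the version forces
    # a fresh fetch.
    return "/r%d/%s" % (version, path)

browser_cache = {}

def fetch(url, origin):
    if url not in browser_cache:        # the full URL is the cache key
        browser_cache[url] = origin[url]
    return browser_cache[url]
```

Once /r20/example.js is cached, even a changed file on the server is never re-fetched under that URL; publishing under /r21/example.js is what makes clients pick up the new revision.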
It's common to want browsers to cache resources - JavaScript, CSS, images, etc. until there is a new version available, and then ensure that the browser fetches and caches the new version instead. One solution is to embed a version number in the resource's filename, but will placing the resources to be managed in this way in a directory with a revision number in it do the same thing? Is the whole URL to the file used as a key in the browser's cache, or is it just the filename itself and some meta-data? If my code changes from fetching /r20/example.js to /r21/example.js, can I be sure that revision 20 of example.js was cached, but now revision 21 has been fetched instead and it is now cached?
Is it the filename or the whole URL used as a key in browser caches?
A quick Googling says that APC is 5 times faster than Memcached. My experience says that APC is nearly 7-8 times faster than Memcached. But Memcached can be accessed by different services (for example, if you run mainly on Apache and delegate some traffic, e.g. static content like images or pure HTML, to another web service like lighttpd), which can be really useful, if not indispensable. APC has fewer features than Memcached and is easy to use and optimize, but this depends on your needs.
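The speed gap comes down to an in-process memory lookup versus a network (or socket) round trip. A rough, language-agnostic benchmark skeleton, sketched in Python with a stand-in for the remote call (swap in real APC/Memcached clients to measure your own setup):

```python
import time

def bench(fn, n=10000):
    # Time n repeated calls; crude, but enough to expose order-of-magnitude gaps.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

local_store = {"key": "value"}

def local_get():
    # In-process lookup: the APC-style case, no serialization, no socket.
    return local_store["key"]

def fake_remote_get():
    # Stand-in for a Memcached round trip; replace with a real client
    # call when benchmarking, since the socket hop dominates the cost.
    time.sleep(0)
    return "value"

t_local = bench(local_get)
t_remote = bench(fake_remote_get)
```

On a single server the in-process path wins; Memcached earns its cost only once multiple processes or hosts need to share the cache.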
I have a single-server site that's pushing 200k uniques per day, and the traffic doubles roughly every 40 days (for the last 5 months anyway). I pretty much only plan to cache the output of mysql_query functions for an hour or so. If the cache is older than that, run the query and put the result back into the cache for another hour. My MySQL DB is about 200MB in size (grows by maybe 10-20MB/month). I'm doing a lot of file caching by writing HTML outputs and using them for a few minutes, and then regenerating the HTML. Unfortunately, since it's a database site that allows many sorting, searching and ordering methods, as well as pagination... there are over 150,000 cached pages. I'm also not caching the search queries, which cause most of the load. I'd like to implement a caching system, and I wanted to know which one is faster. Would love to see some benchmarks.
Memcache vs APC for a single server site data caching
By default, the locations of Temporary Internet Files (for Internet Explorer) are:

Windows 95, Windows 98, and Windows ME
c:\WINDOWS\Temporary Internet Files

Windows 2000 and Windows XP
C:\Documents and Settings\[User]\Local Settings\Temporary Internet Files

Windows Vista and Windows 7
%userprofile%\AppData\Local\Microsoft\Windows\Temporary Internet Files
%userprofile%\AppData\Local\Microsoft\Windows\Temporary Internet Files\Low

Windows 8
%userprofile%\AppData\Local\Microsoft\Windows\INetCache

Windows 10
%localappdata%\Microsoft\Windows\INetCache\IE

Cache for Microsoft Edge
%localappdata%\Microsoft\Edge\User Data\Default\Cache

Some information came from The Windows Club.
Where is cache for IE for current user located?
Internet Explorer cache location
NO, you CANNOT set an expiration for each item in a LIST. You can only set an expiration for the entire LIST. In order to achieve what you want, you need to have a key for each item:

SET user1:item1 value EX 86400
SET user1:item2 value EX 86400
SET user2:item1 value EX 86400

To get all items of a specified user, you can use the SCAN command with a pattern (or use Keyspace Notifications to achieve better performance, but with more complex work):

SCAN 0 MATCH user1:*
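The per-item-key pattern above can be illustrated without a Redis server. This is a toy in-memory sketch (class and method names are made up), mirroring SET user:item value EX ttl plus a SCAN MATCH user:* sweep; Redis expires keys for you, while the sketch drops them on read:

```python
import time

class ExpiringItems:
    """Toy model of per-item TTLs: each item is its own keyed entry."""
    def __init__(self):
        self._store = {}  # (user, item) -> (value, expires_at)

    def set(self, user, item, value, ttl):
        # Equivalent of: SET user:item value EX ttl
        self._store[(user, item)] = (value, time.monotonic() + ttl)

    def items_for(self, user, now=None):
        # Equivalent of SCAN 0 MATCH user:* -- collect live entries,
        # discarding any that have expired.
        now = time.monotonic() if now is None else now
        out = {}
        for (u, item), (value, expires_at) in list(self._store.items()):
            if u != user:
                continue
            if expires_at <= now:
                del self._store[(u, item)]
            else:
                out[item] = value
        return out

store = ExpiringItems()
store.set("user1", "item1", "v1", ttl=86400)  # 24-hour lifetime per item
```

The cost is one key per item instead of one list per user, which is exactly the trade the answer describes.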
I'm looking for a way to store a list of items for a user, that will expire within 24 hours. Is there a way to accomplish this using Redis? I was thinking of just using the list and setting an expiration for each individual item, is there a better way?
Redis list with expiring entries?
The function that you need is on the IServer interface, and can be reached with:

ConnectionMultiplexer m = CreateConnection();
m.GetServer("host").Keys();

Note that prior to version 2.8 of the Redis server this will use the KEYS command you mentioned, which can be very slow in certain cases. However, if you use Redis 2.8+ it will use the SCAN command instead, which performs better. Also ensure that you really need to get all keys; in my practice I've never needed this.
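The KEYS-vs-SCAN distinction is worth seeing: KEYS returns everything in one blocking call, while SCAN walks the keyspace in small cursor-driven batches. A toy model of the cursor contract (real Redis cursors are opaque tokens, not array offsets, but the loop shape is the same):

```python
def scan(keys, cursor, count=2):
    # Toy cursor-based scan over a sorted snapshot of the keys.
    # Returns (next_cursor, batch); next_cursor == 0 means "done".
    ordered = sorted(keys)
    batch = ordered[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(ordered):
        next_cursor = 0
    return next_cursor, batch

keys = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}

seen = []
cursor = 0
while True:
    cursor, batch = scan(keys, cursor)  # each call is a short, cheap hop
    seen.extend(batch)
    if cursor == 0:
        break
```

Because each SCAN call touches only a slice of the keyspace, the server stays responsive between calls, which is why the library prefers it on 2.8+.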
I am using a Redis cache for caching purposes (specifically the StackExchange.Redis C# driver). I was wondering whether there is any way to get all the keys available in the cache at any point in time. I mean the similar thing I can do with the ASP.NET Cache object (below code sample):

var keys = Cache.GetEnumerator();
while(keys.MoveNext())
{
    keys.Key.ToString() // Key
}

The Redis documentation talks about the KEYS command, but does StackExchange.Redis have an implementation of that command? Debugging through the connection.GetDatabase() instance, I don't see any method / property for that. Any idea?
Get all keys from Redis Cache database
The way to "force" facebook to clear their cache for a specific url is to use the Debugger tool. I tried using the debugger with the url of the image and it shows the new image and not the old one, though when trying the cached link you posted the old image still appears. I suspect that if you try to post new posts the new icon will be used and not the old cached version, but that link you posted probably won't be changed.
I have a Facebook application that's created several wall-posts on behalf of my users. The image in the wall posts is cached by Facebook's servers. I've replaced the original image on my server and I would like to clear Facebook's image cache so all of the other wall posts update with the new image. What Facebook has cached: http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=236915563048749&v=1&size=z&cksum=aebffc27f986977797a9903c2b6e08df&src=http%3A%2F%2Fvcweb2.s3.amazonaws.com%2Fassets%2Fweb%2Fimages%2Ficon_square.png Original cached URL, now updated with new image. http://vcweb2.s3.amazonaws.com/assets/web/images/icon_square.png Is this possible? Thanks!
How to clear Facebook's image cache
Generally that is done using the .htaccess file on your host. Here is an example cut and pasted from HTTP cache headers with .htaccess:

<IfModule mod_headers.c>
  # WEEK
  <FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
    Header set Cache-Control "max-age=604800, public"
  </FilesMatch>
</IfModule>

If delivering materials from a PHP shell you could use PHP to create the header, in which case you would refer to the HTTP protocol outlined in section 14.9 Cache-Control: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

<?php
/* This file is a wrapper */
header( 'Cache-Control: max-age=604800' );
/* now get and send images */
?>

I consider the .htaccess the easier of the two methods.
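Whatever the server stack, the headers themselves are the same two values: a relative max-age and an absolute HTTP-date for Expires. A small stdlib sketch (the helper name is made up) of computing both, matching the one-week example above:

```python
from email.utils import formatdate
import time

def cache_headers(max_age_seconds):
    # Cache-Control takes a relative lifetime in seconds;
    # Expires takes an absolute HTTP-date (RFC 1123, always GMT).
    return {
        "Cache-Control": "max-age=%d, public" % max_age_seconds,
        "Expires": formatdate(time.time() + max_age_seconds, usegmt=True),
    }

headers = cache_headers(604800)  # one week, as in the .htaccess example
```

Modern clients honor Cache-Control and fall back to Expires, so sending both (as the .htaccess/PHP snippets effectively do) keeps older caches happy too.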
This question already has answers here: How do you set the expiry date or a maximum age in the HTTP headers for static resources in IIS (3 answers) Closed 10 years ago. I just finished a website that I designated and submitted it to google insights http://developers.google.com/speed/pagespeed/insights/ for performance reviews, and this is the result I got. It says, I need to set expiry date or a maximum age in the the HTTP headers, but I don't know how it is possible to set expiry date for anything other than cookies/sessions.
Setting an expiry date or a maximum age in the HTTP headers [duplicate]
In my case a tool called Dell SupportAssist had cleaned up browser caches by deleting their temp folder contents, while the browser still had database entries pointing to those cached files somewhere else and thought the cached data was still available. The solution was to delete those references to nonexistent files by clearing the browser cache within Chrome. From the comments: Norton 360 can cause the same issue by cleaning up browser caches. If you are getting ERR_CACHE_READ_FAILURE errors, check whether you have software that cleans up browser caches; for Norton 360, see Settings -> Task Scheduling -> Automatic Tasks -> Chrome Temporary Files.
more often than not, I get a list of ERR_CACHE_READ_FAILURE errors when loading a web page in google chrome - this results in assets not being loaded, images, style sheets etc. what would be the cause of this? I have tried disabling browser extensions, clearing cache etc. It is causing me issues when testing websites, as they work fine on other machines or browsers
ERR_CACHE_READ_FAILURE in google chrome
Go to http://developers.facebook.com/tools/debug and enter the URL followed by fbrefresh=CAN_BE_ANYTHING. Examples:

http://www.example.com?fbrefresh=CAN_BE_ANYTHING
http://www.example.com?postid=1234&fbrefresh=CAN_BE_ANYTHING

Or visit:

http://developers.facebook.com/tools/debug/og/object?q=http://www.example.com/?p=3568&fbrefresh=89127348912

I was having the same issue last night, and I got this solution from some website. I hope this helps. Two caveats from the comments: there is nothing special about fbrefresh; any query-string change has the same effect, because the modified URL is simply treated as a new object. It also does not work for images: to refresh og:image you need to change the image's URL, since Facebook downloads the image to its own servers and only re-fetches it when that URL changes.
It seems the facebook debug tool http://developers.facebook.com/tools/debug is using a cache. I made an update to my site but facebook debug tool is still showing up the old data. Is their any way to force facebook to refresh its data? It has been a few days now and it seems the cache will not expire.
How to clear debug tool cache data?
You can use a filter of larger-than:1 to hide all requests that returned less than 1 byte. When I tested this, requests served from the cache have (from cache) in the size column and are excluded by this filter. Negating it (-larger-than:1, note the minus sign) shows only cached requests. Granted, this will also exclude/include 0B responses from the server. If that's a concern, you might be able to add mimetype or status-code filters to achieve your aims, depending on the exact responses.
Background Chrome devtools "Network" tab has the option to filter requests based on string-match of the URL and some predefined content type filters (CSS/JS/...). If you set a filter, the bottom bar of the network tab, contains extra information related only to the matching filter. Question Is it possible to filter requests if they were served (or not) by browser cache? Usecase If someone has an alternate approach to do this: I would like to measure the "real" request-count/transferred-size of my HTML-UI. The bottom of the network tab already contains the transferred-size properly, however the request-count contains the cached requests also. I could use wireshark/tcpdump however, the HTML-UI could request resources from other domains, maybe I could write a complicated filter, however this seems a normal use-case.
How to filter cached requests in chrome devtools?
Guava's Cache type is generally intended to be used as a computing cache. You don't usually add values to it manually. Rather, you tell it how to load the expensive-to-calculate value for a key by giving it a CacheLoader that contains the necessary code. A typical example is loading a value from a database or doing an expensive calculation.

private final FooDatabase fooDatabase = ...;
private final LoadingCache<Long, Foo> cache = CacheBuilder.newBuilder()
    .maximumSize(10)
    .build(new CacheLoader<Long, Foo>() {
      public Foo load(Long id) {
        return fooDatabase.getFoo(id);
      }
    });

public Foo getFoo(long id) {
  // never need to manually put a Foo in... will be loaded from DB if needed
  return cache.getUnchecked(id);
}

Also, I tried the example you gave and mycache.get("key123") returned "value123" as expected.
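The two moving parts of a LoadingCache (a bounded LRU map plus a loader invoked on misses) are easy to see in a language-neutral sketch. This Python toy (class name made up, not Guava's API) mirrors maximumSize eviction and the load-on-get behavior:

```python
from collections import OrderedDict

class LoadingCache:
    """Toy model of Guava's LoadingCache: bounded LRU plus a loader
    function that computes missing values inside get()."""
    def __init__(self, maximum_size, loader):
        self._max = maximum_size
        self._load = loader
        self._data = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)      # hit: mark as recently used
            return self._data[key]
        value = self._load(key)              # miss: invoke the loader
        self._data[key] = value
        if len(self._data) > self._max:
            self._data.popitem(last=False)   # evict least recently used
        return value

loads = []
cache = LoadingCache(2, lambda k: loads.append(k) or k.upper())
```

The design point is the same as Guava's: callers only ever call get(), and the loader centralizes the expensive computation, so there is no manual put() in normal use.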
I'm new to Google's Guava library and am interested in Guava's caching package. Currently I have version 10.0.1 downloaded. After reviewing the documentation, the JUnit test source code, and even after searching Google extensively, I still can't figure out how to use the caching package. The documentation is very short, as if it was written for someone who has been using Guava's library, not for a newbie like me. I just wish there were more real-world examples of how to use the caching package properly. Let's say I want to build a cache of 10 non-expiring items with a Least Recently Used (LRU) eviction method. So from the example found in the API, I built my code like the following:

Cache<String, String> mycache = CacheBuilder.newBuilder()
    .maximumSize(10)
    .build(new CacheLoader<String, String>() {
      public String load(String key) throws Exception {
        return something; // ?????
      }
    });

Since the CacheLoader is required, I have to include it in the build method of CacheBuilder. But I don't know how to return the proper value from mycache. To add an item to mycache, I use the following code:

mycache.asMap().put("key123", "value123");

To get an item from mycache, I use this method:

mycache.get("key123")

The get method will always return whatever value I returned from CacheLoader's load method instead of getting the value from mycache. Could someone kindly tell me what I missed?
Can someone help me understand Guava CacheLoader?
You might also want to add this line if you are setting the max age that far out:

// Summary:
//     Sets Cache-Control: public to specify that the response is cacheable
//     by clients and shared (proxy) caches.
Response.Cache.SetCacheability(HttpCacheability.Public);

I do a lot of response header manipulation with documents and images from a file handler that processes requests for files saved in the DB. Depending on your goal you can really force browsers to cache almost all of your page for days locally (if that's what you want/need).

Edit: I also think you might be setting the max age wrong:

Response.Cache.SetMaxAge(new TimeSpan(dt.Ticks - DateTime.Now.Ticks));

This line sets a 30-minute cache time in the local browser [max-age=1800]. As for the two Cache-Control lines, you might want to check whether IIS has been set to add the header automatically. From the comments: in IIS you can also set the cache headers somewhat granularly (for the whole site, for a folder, or for individual files), and in IIS 6 the Metabase editor gives even more control.
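The underlying bug is passing an absolute timestamp where a duration is expected: SetMaxAge wants "time from now", not "ticks since epoch", which is why the original code emitted max-age=31536000. A small Python sketch (helper name made up) of the correct calculation:

```python
from datetime import datetime, timedelta

def max_age_header(expires_at, now):
    # max-age must be a *duration* measured from now, not an absolute
    # timestamp -- the equivalent of (dt - DateTime.Now), never dt.Ticks.
    seconds = max(0, int((expires_at - now).total_seconds()))
    return "max-age=%d" % seconds

# 30 minutes out, matching the question's intent:
now = datetime(2009, 6, 3, 16, 54, 36)
header = max_age_header(now + timedelta(minutes=30), now)
```

Feeding the raw tick count in produces an enormous (and wrong) lifetime; subtracting "now" first yields the intended 1800 seconds.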
I am trying to set the cache-control headers for a web application (and it appears that I'm able to do it), but I am getting what I think are odd entries in the header responses. My implementation is as follows: protected override void OnLoad(EventArgs e) { // Set Cacheability... DateTime dt = DateTime.Now.AddMinutes(30); Response.Cache.SetExpires(dt); Response.Cache.SetMaxAge(new TimeSpan(dt.ToFileTime())); // Complete OnLoad... base.OnLoad(e); } And this is what the header responses show: ----- GET /Pages/Login.aspx HTTP/1.1 Host: localhost:1974 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive X-lori-time-1: 1244048076221 Cache-Control: max-age=0 HTTP/1.x 200 OK Server: ASP.NET Development Server/8.0.0.0 Date: Wed, 03 Jun 2009 16:54:36 GMT X-AspNet-Version: 2.0.50727 Content-Encoding: gzip Cache-Control: private, max-age=31536000 Expires: Wed, 03 Jun 2009 17:24:36 GMT Content-Type: text/html; charset=utf-8 Content-Length: 6385 Connection: Close ----- Why does the "Cache-Control" property show up twice? Do I need both "Cache-Control" and the "Expires" properties? Is "Page_Load" the best place to put this code? Thanks!
Cache-Control Headers in ASP.NET
Reuse

In the same thread those objects can and should be reused. For example, you can use the DocumentBuilder to parse multiple documents.

Thread Safety

DocumentBuilderFactory used to explicitly state it was not thread safe; I believe this is still true:

An implementation of the DocumentBuilderFactory class is NOT guaranteed to be thread safe. It is up to the user application to make sure about the use of the DocumentBuilderFactory from more than one thread.
http://download.oracle.com/javase/1.4.2/docs/api/javax/xml/parsers/DocumentBuilderFactory.html

From Stack Overflow, DocumentBuilder does not appear to be thread safe either. However, in Java SE 5 a reset method was added to allow you to reuse DocumentBuilders:

Is DocumentBuilder.parse() thread safe?
http://download-llnw.oracle.com/javase/6/docs/api/javax/xml/parsers/DocumentBuilder.html#reset()
http://www.junlu.com/msg/289939.html (about DocumentBuilder.reset())

XPath is not thread safe, from the Javadoc:

An XPath object is not thread-safe and not reentrant. In other words, it is the application's responsibility to make sure that one XPath object is not used from more than one thread at any given time, and while the evaluate method is invoked, applications may not recursively call the evaluate method.
http://download-llnw.oracle.com/javase/6/docs/api/javax/xml/xpath/XPath.html

Node is not thread safe, from the Xerces website:

Is Xerces DOM implementation thread-safe? No. DOM does not require implementations to be thread safe. If you need to access the DOM from multiple threads, you are required to add the appropriate locks to your application code.
http://xerces.apache.org/xerces2-j/faq-dom.html#faq-1

ErrorHandler is an interface, so it is up to your implementation of that interface to ensure thread safety. For pointers on thread safety you could start here: http://en.wikipedia.org/wiki/Thread_safety
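A common way to reconcile "reuse within a thread" with "not thread safe" is a thread-local holder: each thread builds its parser once and reuses it, and no instance is ever shared. Here is a sketch of that pattern in Python (the class name is made up; in Java the equivalent is a ThreadLocal<DocumentBuilder>):

```python
import threading

class PerThread:
    """Cache one instance of a non-thread-safe object per thread,
    the usual fix for things like DocumentBuilder or XPath."""
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def get(self):
        if not hasattr(self._local, "value"):
            self._local.value = self._factory()  # built once per thread
        return self._local.value

made = []
builders = PerThread(lambda: made.append(1) or object())

def worker():
    a = builders.get()
    b = builders.get()
    assert a is b  # reused within the thread, never shared across threads

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This keeps the construction cost amortized (the point of reuse) without any locking, because no two threads ever touch the same instance.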
I'd like to know which objects can be reused (in the same or different document) when using the Java API for XML processing, JAXP: DocumentBuilderFactory DocumentBuilder XPath Node ErrorHandler (EDIT: I forgot that this has to be implemented in my own code, sorry) Is it recommended to cache those objects or do the JAXP implementations already cache them? Is the (re)use of those objects thread-safe?
Java and XML (JAXP) - What about caching and thread-safety?