Response
stringlengths
8
2k
Instruction
stringlengths
18
2k
Prompt
stringlengths
14
160
I don't know a better way. You can create your own attribute ForeverOutputCache, which will set Duration = Int32.MaxValue in constructor, but this is almost same approach.
What would be the best way to cache an actionresult forever, that is untill the application is restarted. // Cache this result forever, it will reset on app restart [OutputCache(Duration=999999)] [ChildActionOnly] public ActionResult Footer() { if (HttpContext.Application != null && HttpContext.Application.AllKeys.Contains("Version")) return Content(HttpContext.Application["Version"].ToString()); return null; } Is there a better way to do this?
Cache an actionresult for the lifetime of the application
Know Your Problem Space See memcached's own wiki: Can using memcached make my application slower? Yes, absolutely. If your DB queries are all fast, your website is fast, adding memcached might not make it faster. Get It Working First Any optimization before you have a stable, working product is premature. You'll spend all your time fiddling with knobs that have zero performance impact on your application. Your best bet is to get it into a working state and then do several things: Profile performance Until you gather performance data you'll have no idea where your bottlenecks truly are. Most likely they're in areas you haven't considered. Identify Optimization Payoffs Evaluate each problem area for potential performance improvements. Prioritize the areas that will give you the best ROI (return on investment). In some cases the best ROI may be more hardware. Implement Optimizations Once you've identified what needs to be optimized, create a plan and implement it. Sources http://code.google.com/p/memcached/wiki/NewProgrammingFAQ
I'm working on a web app that's not even online yet. Is implementing memcache or memcached as of right now a bit premature? Should I use it only if: the web app is up, and the database is having a poor performance due to a high traffic/load? Or is better to implement during development? Also, in which cases is using a caching interface unnecessary or even disencouraged?
Is using memcache(d) premature optimization?
Fast forward 7 years, and you can now easily do this with a Service Worker. You could even get crafty and cache/combine various Range requests if you wanted. https://developers.google.com/web/ilt/pwa/caching-files-with-service-worker
On my website, I intend to stream background audio using the HTML5 <audio> tag. However, even after cutting down on the track length, my two files (MP3 and OGG Vorbis, for different browsers) end up at just short of 5MB a piece. Due to this, it would be nice to ensure loading time and bandwidth is conserved by caching the files. What I would like to know, but can't seem to find, is if it's possible to force the files to cache, or if browsers would normally cache the files at all. Thanks for your input!
Force caching of MP3/OGG in <audio> tag
The most notable impact with having multiple JavaScript files is the time required to render the page. Each script tag is processed separately and adds time to the overall render process. A pretty good answer can be found here @ multiple versus single script tags If we are talking a large number of scripts then you may see an improvement in render time; if it is just two or three files then it likely won't bring abount a noticable difference once the files have been cached. I would suggest testing the page render time in both cases and see how much improvement you see in your case and decide based on that information. As a useful example, here are some stats from Xpedite (runtime minification tool I created a while back); note the difference in time from load to ready for combined vs uncombined scripts.
I'm working on improving the page performance of my company's intranet page. We're looking to (dynamically) combine our javascript files as well as cache them for 30+ days. The page launches on login for everyone. One of my coworkers asked if it's worth the time to combine the JS files if we're already caching them for a month. He's hesitant to do both because the combining tool is server side and doesn't run on our desktop, requiring a somewhat hacky workaround. I've been doing some research and most of the recommendations for performance I've seen are for external sites. Since we're in a closed system it would seem like we wouldn't get much benefit from combining the files once everyone's cache is primed. What would combining the files buy us that aggressive caching wouldn't? We're on IE8 if that makes any difference.
Aggressive Caching vs Combining of Javascript files for an Intranet Site
You could use JPA with an in memory database so it would effectively just be a cache, yes. Using it 'without a database' at all would take huge amounts of work to build a custom JPA provider that works against whatever your storage is. If it's truly a full JPA implementation that simply leaves off the 'Persistent' part, you'd spend months if not years alone just reinventing the wheel to implement the query language against your non-RDBMS cache and so forth. I haven't worked everywhere, but personally would certainly not file such a setup under 'standard practices.' :)
I'm new to JPA, so forgive this question if this is pretty standard functionality, but can you use JPA without having a database and basically use it as a cache to store objects across your application? If so, is that standard practice?
Using JPA as a caching mechanism?
3 Read: http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode It explains in what situations code reloading occurs. Share Improve this answer Follow answered Nov 1, 2011 at 20:52 Graham DumpletonGraham Dumpleton 58.1k66 gold badges121121 silver badges136136 bronze badges Add a comment  | 
I am running Django on a shared webhost using WSGI and Apache. My problem is that everything is cached, thus making it very difficult to test changes. Even if I remove an app (such as admin) from the URLconf or remove it from settings.py, I'm able to access it through the URL I removed from the URLconf. Is there a way to prevent this "caching"? I understand that it's ideal to use Django's runserver while developing, but I'd prefer to use this webhost and I don't have access to run runserver there. I'm also aware that I could restart Apache everytime I change something, but as this is a shared host I obviously don't have access to do that.
Django running on Apache with WSGI caches everything
2 Maybe Boost is the one you are searching for. http://drupal.org/project/boost Share Improve this answer Follow answered Oct 31, 2011 at 14:16 KristofferKristoffer 62655 silver badges1515 bronze badges 1 Yah - I saw this but no stable release on 7. Same with Varnish. Any others out there? Thanks! – Joseph Steven Shell Oct 31, 2011 at 15:53 Add a comment  | 
Looking for modules to speed up a Drupal 7 with Drupal Commerce install on a Media Temple Virtual host. Open topic!
What is the equivalent Drupal 7 module for the W3 Total Cache Wordpress plugin?
You can pass the current timestamp as a variable in the url, like this : var timestamp = new Date().getTime(); ajax.open("GET", url+'?ts='+timestamp, true); Also, you can force the page to be reloaded on server-side, using the proper headers
I'm created this class to fetch a file from web to check for new version using Ajax. This code run on a Windows gadget, on IE8. But I'm having trouble because of the cache. Is there a way to fix this Ajax class to disable cache? PS: I don't use any library or frameworks. var ClassAjax = function() { this.data = null; var that = this; this.get = function(url, send) { var ajax = new function ObjAjax() { try{ return new XMLHttpRequest(); } catch(e){try{ return new ActiveXObject("Msxml2.XMLHTTP"); } catch(e){ return new ActiveXObject("Microsoft.XMLHTTP"); }} return null; } ajax.onreadystatechange = function() { if(ajax.readyState == 1) { that.onLoading(); } if(ajax.readyState == 4) { that.data=ajax.responseText; that.onCompleted(that.data); } } ajax.open("GET", url, true); ajax.send(send); }; this.onLoading = function() { //function called when connection was opened }; this.onCompleted = function(data) { //function called when download was completed }; } var request = new ClassAjax(); request.onCompleted = function(data) { alert(data); } request.get('http://exemple.com/lastversion.html', null);
How to prevent Ajax caching
PEAR's Cache_lite works well, even if is based (and compatible with) PHP 4, as most pear packages.
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 12 years ago. Could you recommend me any good PHP written File-Cache library which is up2date (PHP >= 5)?
File Cache Library for PHP [closed]
Yii can cache the dynamic content it generates a couple of different ways by storing it as static content, including caching database results, page fragments, etc. Anything queried from the database and dynamically turned into an HTML document by PHP can be cached by the framework. CSS, JS and Images are (usually) already static content, which the framework is not generating, so it cannot cache it. Static content is mostly cached in the client's web browser, or on fast distributed content delivery (CDN) servers. That said, Yii can do a few things to help speed up CSS and JS: Yii's CAssetManager allows you to use Yii to compress your static scripts (using 3rd party tools) and then "cache" the optimized scripts (in the assets folder). This can also be done with server scripts and extensions. You can also specify different cache backends, like Memcached and APC, where Yii will store the HTML it caches, but again that does not directly affect your images and CSS/JS.
Does Yii Cache external javascript files and CSS files? I want to cache my home page using Yii cache. The file size of my home page is quite small(excluding js and css). The cached size is 5.7K in my database. I am wondering whether Yii cache external js and css files? I do think It does not cache external images files.
Does Yii Cache external javascript files and CSS files?
I would avoid memory completely as IIS is not that great with it, if you found your self in the IIS need for refreshing the Application Pool for some sort of reason, your list is simply gone! Maybe a MemCache system? If it does not loose things in the above way, but... I would advice to be in the middle, IO File is fast that request data to a Database, specially if it's not in the same machine (witch for security reasons, it should never be), so... why not, and just to hold your list, you don't use one of the currently famous NoSQL database? MongoDB is a document database that has a .NET Library and it's easy to use, it is not as fast as Memmory, but extremely quicker than Physical databases for what you want. Normally the NoSQL Database will be hosted in the App_Data folder so it will be extremely fast to access and you can just hold there the task_id and user_id of all locked tasks.
I have a project which provides users with a list of current tasks that need to be completed. Any user can complete any task, and so to ensure that only one user is working on a task at a time I need to be able to 'lock' it. I'm using SignalR for this, so a user requests a lock on a task, and if they are successful (ie. if noone else has locked it) then they will be able to access the further information that they need. My problem is how to store the list of locked tasks. The original plan was simply to add an additional bit field 'IsLocked' to the Task table and update this when the user requested a lock and when the task was unlocked. We have about 300 concurrent users, however, and a task takes only about 3-4 minutes, meaning huge numbers of additional - and tiny - queries on the database. Therefore we were wondering about in-memory storage, simply storing a list of task ids in a 'lockedTasks' list. I had considered using caching, but am unsure on the best ways to do this, or even if better alternatives exist. If anyone has any experience in this then some advice would be great thanks
ASP.NET MVC 3 in-memory data store
Right, interesting question! Let's split this into two problems you need to handle: the lower quality image generation and the caching. Lower Quality Images For this I would look at adding a new processor into the getMediaStream pipeline. This processor could check the UserAgent of the incoming request and resample the image accordingly. For example if it determines the browser is a standard desktop browser it will do nothing, but if it is a mobile browser it will take the image stream and create a new one with a resampled image. Take a look at this example which shows how you can add your own processor into the pipeline. Also you could reflect on the Sitecore.Resources.Media.ResizeProcessor for an idea on how to do this. Caching Looking through the code for the Sitecore media caching, it uses the current MediaOptions of the request to generate a unique key for the cache. MediaOptions has a CustomOptions property which is a string dictionary, anything you add into there will also get used in the hash to create the key. I would look at extending the handler for image processing, override the ProcessRequest method and get the current MediaOptions for the request. At that point you could identify the browser (using similar code for the first part, you could share this logic), and if it is a mobile browser you could insert something into the Dictionary of the MediaOptions, e.g. "Mobile":"1". Then you would just call the base.ProcessRequest method and let the standard process continue on. This page shows an example of extending the MediaRequestHandler. Hope that helps, I haven't looked into it in more detail and there are likely some potential problems I've overlooked, but hopefully that's enough to allow you to make a start. It's a bit of code but it seems like a good library to have with plenty of reuse possibilities, and you won't need to alter the tags across your site.
I'm looking for an easy way to serve lower resolution images to mobile browsers. I'd like to generate these lower quality images on the fly and store them in media cache for use by similar browsers. Any ideas as to a good implementation of this? (Note: We have sc:image tags all over the site, and I want to avoid changing these.)
Sitecore: Serving lower resolution images to low bandwidth clients
You can add autocomplete="off" to the afflicted fields, MDN Document. If that doesn't work and you can use jQuery I would go with $(form).reset() on page load as an easy fix.
How do i prevent firefox from caching page state? I'm developing a web app and firefox is automatically setting previously checked checkboxes without triggering any events. Is there a way to prevent firefox from doing this or do I just have to reset my entire ui on startup with js?
prevent firefox page state caching
If the information you're saving in the session is not sensitive, I suggest you store the information in cookies instead. Cookies won't be affected by the session timeout. It also gives you the ability to use the data that you've persisted in another session at a later date. For example: You could keep track of the items/products that the user views. Next time they visit, you could show them "Recently viewed items". Amazon.com uses cookies for this purpose (and many, many others). If you really want to store the information in the database OR the information is too much / too sensitive to store in a cookie, you could save it to the DB and then write a cookie with the ID of the record that contains the information. This way, when the user returns to the site, you could look up the previous sessions information in the DB based on the ID in the cookie. ASP.NET Cookies Overview
So there are a few things I know about this topic but I am having a hard time finding where or what to research for this application. I am developing a web app for c# and asp.net web forms and am saving information in session for each user. I am just wondering when I should timeout session and any information about persisting session to a database so that it can be accessed later. The variables are being changed in session on a frequesnt basis and I do not want it to impact performance to persist these changes in the database each time. The goal of this is to have variables saved in session for each customer's information as they preform several transactions and pay at the end. If someone walks away for 10 mins and the session times out, I want a way to get that information back to complete the order. I also do not want this to affect performance. I may also implement a session timeout that triggers a popup screen that will persist these changes to the database only if the user wishes and not each time something changes. This is my first time dealing with session or caching in a web app and I would like to know more information or where to look or how you would attack this problem. Thank you.
Session variables and persisting them to a database
4 It should be obvious that option 4 is definitely not the least effort. I have had good results with Hazelcast. It provides a good return from minimal effort. Configuration is simple/straightforward, and the library as a whole "just works." I am not familiar with redis or webdis. You didn't include it in your list, but consider using Ehcache if what you really need is a cache. Share Improve this answer Follow edited Apr 21, 2016 at 21:02 answered Aug 13, 2011 at 22:15 Matt BallMatt Ball 357k101101 gold badges648648 silver badges715715 bronze badges 2 Well we are not talking about rolling our own version of hazelcast, I meant creating our own simple object store. I would argue that after what it takes to learn a framework, it might be close, but that is why I am asking. I would also like to include JCS in the discussion. Forgot to list that one. – Jesse Demarco Aug 13, 2011 at 22:21 I'm also not familiar with JCS. I did find that Hazelcast and Ehcache are both very straightforward to get started with - not much of a learning curve. It would help if you could provide more specific/detailed requirements. (BTW, these are really more libraries than frameworks.) – Matt Ball Aug 13, 2011 at 22:24 Add a comment  | 
I am working on a web application, that will require some memory caching of potentially very large and changing data sets. My partners and I are starting to debate several solutions, but would like to gain some insight into what we can expect for a couple of different solutions. Our app is written in Java and will run under glassfish 3.1 redis and webdis hazelcast Apache JCS Create our own with java We are also considering apache solr or possible lucene alone (if we use hazelcast). Should we count solr as a memory cache solution, or is the solr cache not really comparable with the solutions listed above. Thanks in advance for your recommendations
What mem cache implementation will take the least amount of effort to build
EJB singletons are very convenient for providing access to small amounts of application scoped data, but I wouldn't recommend using only them for managing very large amounts of data shared in a cluster. Typical caching solutions like Coherence or Infinispan have a lot of features like eviction policies, cluster replication topologies, spill over to disk, etc, that you'll really appreciate in those situation. Singletons can be conveniently used to give your other beans access to global resources like an Infinispan cache though.
I wish to ask if you would recommend EJB3.1 singleton beans as storage for application memory shared data. Imagine some simple application that needs to hold data in memory (rather than in a database) - e.g. some instant messaging server got data who is and who isn't online (user status). Would you recommend usage of EJB3.1 singletons or do you prefer some typical caching mechanisms like Coherence and so on? I can imagine a cluster with multiple JVMs, then comes to my mind to use JMS to tell other singletons that application memory changed.
EJB singleton as shared memory?
Use these rules: Options +FollowSymLinks -MultiViews RewriteEngine On RewriteBase / # do not do anything for already existing files RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule .+ - [L] # serve cached file if exists RewriteCond %{DOCUMENT_ROOT}/cache/$1.html -f RewriteRule (.*) /cache/$1.html [L] This needs to be placed in .htaccess file in website root folder. If placed elsewhere some tweaking may be required. The rule will check if cache exist before rewriting. It will do the check for ANY resource that does not exist (the "# do not do anything for already existing files" rule will not pass requests for real files that far). If you wish, you can get rid of "# do not do anything for already existing files" rule -- it still should work OK (it may be required to do so based on your app/website logic). If /cache/ folder or any other folder in question is not actual folder within a website root but alias, then this most likely will not work.
I've been messing around with .htaccess files for the past day, but with only moderate successes. I wrote a caching script that generates a cached version of each page and stores it in www.mysite.com/cache/ and maintains the same directory structure of the actual file, but it adds .html to the end. So if the actual file was: www.mysite.com/blue/turtleneck the cache file would be: www.mysite.com/cache/blue/turtleneck.html I need to check if the cached version exists and if so, load it. If it doesn't exist I need to load the actual file. Also, I need some way of forcing it to load the non-cached version. I was thinking perhaps adding /nocache/ to the end of the URL to load the noncached copy. Example: www.mysite.com/blue/turtleneck/nocache/ I've been struggling with this and any help would be very appreciated.
How to check if a file exists in cache using .htaccess, load normal script if not
4 Looks like the static files don't have expires set. Read - http://www.absolutelytech.com/2010/08/02/howto-add-expire-headers-to-cache-static-files-using-htaccess/ You'll need to post the below code in your .htaccess # Turn on the Expires engine ExpiresActive On # Expires after a month client accesses the file ExpiresByType image/jpeg A2592000 ExpiresByType image/gif A2592000 ExpiresByType image/png A2592000 ExpiresByType image/x-icon A2592000 ExpiresByType text/plain A2592000 # Good for one week ExpiresByType application/x-javascript M604800 ExpiresByType text/css M604800 ExpiresByType text/html M604800 Share Improve this answer Follow answered Jul 23, 2011 at 19:01 SukumarSukumar 3,54233 gold badges2727 silver badges2929 bronze badges 2 How to do it with web.config file? (in Windows), can this be done in shared hosting? – Benny Jun 20, 2013 at 4:23 I don't have much idea about windows systems. If it is a directory specific configuration which the hosting provider hasn't restricted, it should work in shared hosting environment as well. – Sukumar Jun 20, 2013 at 15:28 Add a comment  | 
I got this error from Google Speed test: The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources: http://localhost/english/favicon.ico (expiration not specified) http://localhost/english/images/bg_center.png (expiration not specified) http://localhost/english/images/bg_top.jpeg (expiration not specified) http://localhost/english/images/footer_bg2.png (expiration not specified) http://localhost/english/images/m_facebook.png (expiration not specified) http://localhost/english/images/m_rss.png (expiration not specified) http://localhost/english/images/top_bg.png (expiration not specified) http://localhost/english/javascript/gram.js (expiration not specified) http://localhost/english/javascript/top_start.js (expiration not specified) http://localhost/english/jquery.js (expiration not specified) http://localhost/english/style/gram.css (expiration not specified) http://localhost/english/style/style.css (expiration not specified) Should I do somthing in my htaccess file?
Google Speed Leverage browser caching
I found a better way to do it. If anyone else is having trouble after following the link android system cache, use this Google developer's blog post instead. The source code in that blog post is designed for a ListView but I am using it for all image retrievals. It downloads the image in an AsyncTask, puts a temporary image while downloading, and has an image cache. This last part is listed as a "Future Item" in the blog post, but if you download the source code, the cache is implemented. I had to modify the code slightly because the AndroidHttpClient isn't supported in 2.1. I switched it to a URL connection. So far, this looks to be a great image downloader class. Let's just hope it doesn't impact our already struggling memory management issues.
I would like to use Android's system cache when downloading images as per these previous instructions: android system cache. I was able to get the following code working but the log statements are telling me that the images are never being read from the cache. try { //url = new URL("http://some.url.com/retrieve_image.php?user=" + username); URL url = new URL("http://some.url.com/prof_pics/b4fe7bdfa174ff372c9f26ce6f78f19c.png"); URLConnection connection = url.openConnection(); connection.setUseCaches(true); Object response = connection.getContent(); if (response instanceof Bitmap) { Log.i("CHAT", "this is a bitmap"); current_image.setImageBitmap((Bitmap) response); } else { Log.i("CHAT", "this is not a bitmap"); Log.i("CHAT", response.toString()); InputStream is = connection.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is); current_image.setImageBitmap(BitmapFactory.decodeStream(bis)); } } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } I have tried two different types of requests, one is to go through a PHP script that returns the image and another that is directly accessing the image file. I refresh the same image multiple times in a row and it never seems to get cached. For the direct image access, I get: 05-31 23:45:12.177 I/CHAT ( 2995): this is not a bitmap 05-31 23:45:12.177 I/CHAT ( 2995): org.apache.harmony.luni.internal.net.www.protocol.http.FixedLengthInputStream@40c1c660` For the indirect image access, I consistently get: 05-31 23:45:14.550 I/CHAT ( 2995): this is not a bitmap 05-31 23:45:14.550 I/CHAT ( 2995): org.apache.harmony.luni.internal.net.www.protocol.http.ChunkedInputStream@40c02448
Using Android's System Cache
If you can use boost, look at boost::unordered_map, otherwise you can use a std::map. You will have to provide functor to generate the key.
I want to do some caching in my project. Let my API is int foo(int a, float b, float c, int d, char e) Now in my project, there is lot of calls to above time consuming API with repeating values of a, b, c ,d and e. Now I want to store return value of this function with these arguments as keys. suppose my call sequence is foo(23, 3.45, 4.5, 90, 'd') // returns 1000, so I need to store it in cache as (23,3.45, 4.5, 90, 'd')->1000 foo(30, 1.2, 3.5, 100, 'e') // returns 2000, so I need to store it in cache as (30, 1.2, 3.5, 100, 'e')->2000 foo(23, 3.45, 4.5, 90, 'd') // No need to call this API, I just check in my cache value associated with //(23, 3.45, 4.5, 90, 'd'), which is already stored as 1000 What should be best strategy to implement above in C++? which data structure would be best to make cache table?
caching multiple key hash
Are you making heavy use of AJAX? Make sure each AJAX request is unique, otherwise IE9 will serve up a cached version of the request response. For example, if your AJAX request URL normally looks like: http://www.mysite.com/ajax.php?species=dog&name=fido Instead, add a unique value to each request so IE doesn't just use the cached response. The easiest way to do that in Javascript is a variable that increments each time you make a request: var request_id = 0; var request_url = "http://www.mysite.com/ajax.php?species=dog&name=fido&request_id="+request_id; request_id++;
I'm developing a dynamic web application (running on IIS7), it works fine in all the major browsers, except IE9. It seems, that it caches practically everything, and that leads to quite many problems, like Often changing contents remain unchanged User visits an authorized content, then signs out, then tries to go back to the secured content and gets it from cache! I've tried disabling cache with <meta http-equiv="Expires" CONTENT="0"> <meta http-equiv="Cache-Control" CONTENT="no-cache"> <meta http-equiv="Pragma" CONTENT="no-cache"> but no luck so far...
IE9 caching dynamic pages
If you're using a HashSet, the underlying implementation actually uses a HashMap anyway, so I suggest you go with a HashMap.
I'm trying to cache a lot of similar values with only set-like requirements. Unfortunately Set<?> allows me only to check whether an element exists inside - it won't give the existing element back to me. What I'd like to do is: Element e = receiveSomeElement(); e = elements.cache(e); // now e is either the original e, or one that was already in the cache doSomeWorkOn(e); I could probably simulate that with SortedSet and getting .subSet(e, e), but it seems like waste of time to keep the set sorted. I could also use HashMap<Element, Element> and store the same referrence as the key and value, but that seems just as dirty... Is there some better way to do this?
Caching set-like collection
Answer from the SQLite User Mailing List: In short, because SQLite cannot read your mind. To understand the answer compare speeds of executing one query (with one TABLE_A) and creating an in-memory database, creating a table in it and using that table in one query (with the same TABLE_A). I bet the first option (straightforward query without in-memory database) will be much faster. So SQLite selects the fastest way to execute your query. It cannot predict what the future queries will be to understand how to execute the whole set of queries faster. You can do that and you should split your query in two parts. Pavel
I have two tables to join. TABLE_A (contains column a) and TABLE_BC (contains columns b and c). There is a condition on TABLE_BC. The two tables are joined by rowid. SELECT a, b, c FROM main.TABLE_A INNER JOIN main.TABLE_BC WHERE (b > 10.0 AND c < 10.0) ON main.TABLE_A.rowid = main.TABLE_BC.rowid ORDER BY a; Alternatively: SELECT a, b, c FROM main.TABLE_A AS s1 INNER JOIN ( SELECT rowid, b, c FROM main.TABLE_BC WHERE (b > 10.0 AND c < 10.0) ) AS s2 ON s1.rowid = s2.rowid ORDER BY a; I need to do this a couple of times with different TABLE_As, but TABLE_BC does not change. I could therefore speed things up by creating a temporary in-memory database (mem) for the constant part of the query. CREATE TABLE mem.cache AS SELECT rowid, b, c FROM main.TABLE_BC WHERE (b > 10.0 AND c < 10.0); followed by (many) SELECT a, b, c FROM main.TABLE_A INNER JOIN mem.cache ON main.TABLE_A.rowid = mem.cache.rowid ORDER BY a; I get the same result set from all the queries above, but the last is by far the fastest one. I would like to avoid splitting the query into two parts. I would expect SQLite to do that automatically (at least in the second scenario), but it does not. Why?
Why doesn't SQLite split this query into two parts automatically?
If the URL is different, different resources are assumed. And this fact does also need to be reflected by the cache. So the two URLs in your example will result in two cache entities. Besides the URL, caches do also take further information of the request and response into account: with the Vary response header field the server can indicate “the set of request-header fields that fully determines […] whether a cache is permitted to use the response to reply to a subsequent request without revalidation.” So it is possible that there are even more than just two cached entities.
For example, if I have a .js file, will browsers have a seperate cached copy of: http://www.mysite.com/myfile.js and https://www.mysite.com/myfile.js Or will they only cache a single copy?
Do browsers vary cached content by protocol?
Yes, sublayout caching can vary by several different criteria by default. You can leverage varying by parameters to do this. The vary-by's are: Vary by Data Vary by Device Vary by Login Vary by Parameters Vary by Query String Vary by User The approach for you to customize here is Vary by Parameters and YOU define what the parameters are. You can do this in Presentation Details where you dynamically assign a sublayout to an item (there is a section at the bottom of the control properties to define parameters) or you can set this via C# code. Here an example using C# code to statically assign a sublayout into my layout: <h1>My website</h1> <h2>My site is great</h2> <sc:Sublayout ID="slMyControl" path="~/path/to/my/control.ascx" VaryByParm="true" Cachable="true" runat="server" /> (One thing to note in the above code, the attribute for VaryByParam is actually VaryByParm in Sitecore, which is obviously a typo in their code.) Now in the C#, set the parameters programatically: slMyControl.Parameters = "myKey1=MyVal1&myKey2=myVal2"; If you can get a Moon Position in C#, then convert it to a string and assign it to the parameters: slMyControl.Parameters = "position=" + getMoonPosition().ToString(); I recently cached a calendar by the month and year which appear in the query string. Simple example with no error handling: slEventCalendar.Parameters = string.Format("m={0}&y={1}", Request.QueryString["m"], Request.QueryString["y"]); The parameter string you end up with eventually becomes part of the actual cache key. Coupling this with other vary by options just making a more complex cache key with more criteria and thus more cached instances. The general rule is, cache by the least amount of criteria you need to which will result in the most amount of use from that cached instance.
When using WebControls in Sitecore, there is a way to customize caching behavior - override GetCachingID method. Is there a way to achieve something like this with Sublayouts(UserControls)? I'd like to add custom "VaryBy" options(example - "Vary By Moon Position").
Customizing sublayout caching in Sitecore
Caching is a broad term that can happen at a number of different points. The optimum solution may be a combination of some or all. For example, you can add page, or output caching as described here, which caches output on the web server, which I think is what you were referring to. In addition, you can cache the data in memory using something like memcached, so that your data is more available to the web server as it builds the page, but you need to look at cache hit rate to know for sure that you are caching the right data. Also, although slightly off the topic of improving db heavy pages, you can cache static resources that change infrequently like images, css and include files using a content delivery network. Any CDN will almost certainly have a higher bandwidth and a cheaper data plan than your own connection because of the economies of scale, so the more of your content you can serve from there the better, in general. Your first question was "I was wondering if it's worth "caching" a static version of these pages". I guess the answer to that depends on whether there is a performance problem at the moment, and where the cause of that problem is. If the pages are being served quickly and reliably, then quite possibly it's not worth implementing caching. If there is a performance problem, then where is it? Is it in db read time, or is it in the time spent building the page once the data has been returned?
I have an asp.net web site with 10-25k visitors a day (peaks of over 60k before holidays). Pages/visit is also high, since it's a content site. I have a few specific pages which generate about 60% of the traffic. These pages are a bit complex and are DB heavy (sql server 2008 r2 backend). I was wondering if it's worth "caching" a static version of these pages (I hear this is possible) and only re-render them when something changes (about once in 48hs). Does this sound like a good idea? Where would be the best place to implement this? (asp.net, iis, db) Update: Looks like a good option for me is outputcache with SqlDependency. I see a reference to some kind of SQL server notification for invalidating the cache, but I only see talk of SQL server 2005. Has this option been deprecated by Microsoft? Any new way to handle this?
Is caching a good idea? If so, where?
You may add the data in the Cache using a separate thread. i.e., create a separate thread and start caching the data using that thread. You application should serve perfectly in the meanwhile.
I am doing a ton of work currently in Application_Start, and it takes an hour or two to cache the 2 gigs of data into memory that make my application operate efficiently. Using this method, the Azure web role instance(s) are not available until these processees are complete. I am inserting into the HTTPRuntime cache, so I cannot use the WebRole.cs OnStart() or Run() methods (they don't have acceess to this cache). Can you think of alternate ways that I can get this data loaded into the cache, while also making the website available during this caching period? The website operates fine while the data is loading, just not as fast. Thanks so much, -Kevin
Caching lots of data in global.asax Application_Start, windows azure role
Tadaa, did it :) Just paste this piece op php in my script: $deletecachesql = "DELETE FROM cache"; $deletecachequery = mysql_query($deletecachesql) or die ("error").mysql_error(); $deletecacheresult = mysql_fetch_array($deletecachequery); The script does clear the cache, but I'm not sure it's a good thing to do. The website also told me to delete: DELETE FROM cache; DELETE FROM cache_menu; DELETE FROM cache_filters; DELETE FROM cache_page; DELETE FROM watchdog; Is it a wise thing to do? To clear (delete) the cache like this?
I'm writing a php script which will enable people to change the theme of their drupal website. So far, so good but one last thing I couldn't figure out. Every time when I submit the form, the database is changed but the theme doesn't change. Apparently, I have to clear the cache as well. I found this on the Drupal website: <?php include_once './includes/bootstrap.inc'; drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL); drupal_flush_all_caches(); ?> I should make a little file 'clear.php' with this script, and every time I want to clear the cache, I should go to this file and the cache shout be cleared... But their is my problem. I don't know how to call this page in my script. Sure I can make a button which will redirect the user to this page, but I'ld like it in one script. Any idea's? Or are their other ways to flush the Drupal cache using php? Thanks in advance!
clear drupal cache - php script
Caching issues... Note that IE is the only browser platform that utilizes caching in AJAX requests because they treat an AJAX request no different then a normal browser request. $(document).ready(function() { $('#ticketsearch').click(function() { var ticketcode = $('[name=ticketcode]').val(); $.getJSON('/import/envelope.json', function(data) { $('.ticket').text(data[ticketcode][3]); $('.envstatus').text(data[ticketcode][1]); $('.track').text(data[ticketcode][2]); $('.track').attr("href", data[ticketcode][2]); $.ajaxSetup({ cache: false }); // <------this will disable caching }); }); });
The following code works only in Firefox, but not in IE. The word "Meanscoil na mBraithre Criostaí" brakes the JSON file: "2028425":[19, "Awaiting Correction", "", "Meanscoil na mBraithre Criostaí"], $(document).ready(function() { $('#ticketsearch').click(function() { var ticketcode = $('[name=ticketcode]').val(); $.getJSON('/import/envelope.json', function(data) { $('.ticket').text(data[ticketcode][3]); $('.envstatus').text(data[ticketcode][1]); $('.track').text(data[ticketcode][2]); $('.track').attr("href", data[ticketcode][2]); }); }); }); PS> How can I clear cache after each JSON request.
json request not working in IE and issue with cache
This will work: Rails.cache.write('key_test', Rails.cache.read('key_test'), :expires_in => 10.seconds)
I am using Ruby on Rails 3 and I would like to increase the expiration time for a cache key during the execution time. I use memcached. For example, I have Rails.cache.write("key_test", "value_test", :expires_in => 10.seconds) so that the key_test will expire in 10 seconds. In order to make available that key value for longer, what I can do? Is it possible to increase only the expires_in without to set again the value_test? The following code doesn't work, but maybe I have to do something like this: Rails.cache.write("key_test", :expires_in => 10.seconds)
Ruby on Rails cache: Is it possible to increase the expiration time for a cache fragment during the execution time?
Of course you can. First of all, only you know whether a resource is expired or not, the resource might from a file, an article from database, therefore, there won't be an universe "expired or not" method for you. Here is a simple example: class WSGICache(object): def __init__(self, app): self.app = app self.cache = {} def is_expired(self, environ): """Determine is the resource the request for already expired? """ # FIXME: check is the resource expired, by looking # PATH_INFO, if it is a file, it might be last modified time # if it is an object from database, see what is the last modified time return False def __call__(self, environ, start_response): path = environ['PATH_INFO'] cached = self.cache.get(path) # do we have valid cache? if self.is_expired(environ) or not cached: cached = list(self.app(environ, start_response)) self.cache[path] = cached return cached But for production usage, I suggest use some already built caching system like Beaker, I think it should be good enough to do what you want. I didn't test the code above, but a middleware like this be able to do what you want.
I'd like to build a caching proxy as a Python WSGI middleware and wonder how this middleware could find out whether a cached page is expired. As far as I know WSGI doesn't support something like the getLastModified(HttpServletRequest req) method of Java Servlets. What I'm not looking for is a per client caching strategie with "if modified since" or "etags". I want to cache content for all clients like a proxy server. So the cache have to check whether the WSGI app, or resource in terms of REST, was modified and thus expired in the cache. client cache wsgi app ------ ----- -------- | get /some/x | | |------------------>| /some/x expired? | | |------------------->| | | | | | update /some/x | | | if modified | | return /some/x |<-------------------| |<------------------| Is it possible to implement it without by-passing WSGI?
How to implement caching with WSGI?
You heard right. The files remain cached in the browser, so don't worry about that.
I have an intranet site where half of the pages use jQuery, and ~20% use jQuery UI. Users who navigate the site will almost certainly open a page containing jQuery UI during their visit. What I'd like to do is use for all pages a standard template that includes calls to jQuery and jQuery UI. It would make the site maintenance easier, but it also means that for 80% of my pages jQuery UI will be loaded for nothing. What I have been told is that I should not worry too much about that. Calls to jQuery and jQuery UI will be cached by the browser, so anyway they'll only be called once during the user's visit. Is this correct? Or are there other performance factors that could make the systematic call a bad idea?
What is the cost of calling jQuery and jQuery UI on every page?
4 1) EHCache is a caching implementation. JPA2 L2 is a caching interface. You can use EHCache as your JPA L2 cache provider. The L2 cache provider that you pick should depend entirely on your requirements. If you think EHCache is the best fit, then use it. 2) I'm going to leave this as it's a bit vague. There are many good strategies for many different scenarios. 3) If you have something that's updating your database and not hitting your cache, then your cache will become stale. If that's okay in your domain, then fine. Otherwise, you'll need to come up with a different solution to either your caching or updating needs so that they both remain in sync. Share Improve this answer Follow answered Jan 17, 2011 at 9:11 GaryFGaryF 24.1k1010 gold badges6060 silver badges7373 bronze badges Add a comment  | 
HI all, I'm new to the world of L2 Caching so please go easy on me :). I have a few questions: 1) What would be the difference between EhCache vs JPA 2.0 L2 Cache? To my understanding, EhCache is distributed (can also be stand-alone), while JPA 2.0 L2 Cache is not (per JVM). 2) Caching Strategy Please share your caching 101 strategy. How to cache collection (issues and tips)? How to search an object in your cache pool (if you know you're caching it). 3) Caching and Stored Procedures Let's say the database supports 2 different applications on top of it. How would one solve update problem when one application updates the data via Stored Procedure, while the other (cache) reads it? To the reader, it is as if there is no update. I heard stories that certain big websites cache everything. Does this mean they write their own data access layer on top of both their cache library and their JPA / ORM? PS: I understand that the golden rule is to avoid caching early on or better yet to increase the hardware capability. I'm asking this question for learning purposes. I'm also not asking for a specific scenario, but more of the general rule, general scenario, lowest common denominator, it doesn't have to solve everybody's problem. Thanks!
EhCache, JPA 2.0 L2 cache, Caching strategy
The MemoryCache object is maintained in volatile memory so you will need to extend ObjectCache object to store in the file system or some other non-volatile storage mechanism.
I've got a C# WPF application and I'm trying to implement Caching using the new System.Runtime.Caching. When I add something to the Cache, I then check it and confirm it is there, which it is. However, when I restart the application, it is gone from the Cache. So, in the below example, the output will always be "Found It". What I expect it to be is that the first time it is run it outputs "Found It", but all subsequent times (for the next 10 days) it would output "Was already present", but it doesn't. I've run it through the debugger and confirmed that every time you restart the application, cache["MYDATA"] will equal null. What am I doing wrong? Thanks! Dictionary<Type, Dictionary<string, object>> _CachedObjects = null; ObjectCache cache = MemoryCache.Default; _CachedObjects = cache["MYDATA"] as Dictionary<Type, Dictionary<string, object>>; if (_CachedObjects == null) { _CachedObjects = new Dictionary<Type, Dictionary<string, object>>(); //code that fills _CachedObjects CacheItemPolicy policy = new CacheItemPolicy(); policy.AbsoluteExpiration = DateTimeOffset.Now.AddDays(10.0); cache.Set("MYDATA", _CachedObjects, policy); if(cache["MYDATA"] != null) Console.WriteLine("Found It"); } else Console.WriteLine("Was already present");
Using System.Runtime.Caching, But When I Go To Retrieve it, it's null?
The proper way is to send these HTTP headers in the response: Pragma: no-cache Expires: -1 Cache-Control: no-cache, no-store Using them makes everything work in IE without any other modifications.
I've tried numerous ways to get IE8 to reload a page but failed. IE just keeps using it's internal cache without asking the webserver for it. I'm sending the following headers from my webserver: Response.Add(new StringHeader("Expires", DateTime.UtcNow.AddYears(-1).ToString("r"))); Response.Add(new StringHeader("Cache-Control", "no-store, no-cache, must-revalidate, max-age=0")); Response.Add(new StringHeader("Pragma", "no-cache")); Response.Add(new StringHeader("Last-Modified", DateTime.UtcNow.ToString("r"))); It's just one of many combinations that I've tried. How do I make IE fetch the page every time (without forcing my users to turn off caching inside IE)?
Force IE to get a page
It could be that caching is not properly set up for the .htc file extension in your web server. Check the response headers, e.g. using Firebug, for what caching instructions get served. Also using Firebug's "Net" tab, you'll be able to see whether the URL gets loaded in non-IE browsers. It shouldn't, but you never know.
A site I am working on just exceeded the monthly bandwidth our host provides (25,000 MB) and when looking at the server stats and logs, I found TwinHelix's iepngfix.htc to be the #4 largest bandwidth drain. #4 hits:73939 KBytes:181035 /iepngfix.htc I find this especially interesting because a .swf used as a background image on every page had only 3,918 hits compared to the 73,939 hits that iepngfix.htc received. Hard for me to believe that there are even that many IE6 users visiting this site. This file is being called within screen.css in the following way: img, div, input { behavior: url("iepngfix.htc") } The only way I can explain this 4KB file eating so much bandwidth, is if it is being read and re-read for every single img, div, and input element, whether or not there is a PNG used and possibly for more browsers than just IE. Am I understanding this correctly? If anyone could help me understand how all this works, it would be much appreciated. Thanks!
In CSS does the iepngfix.htc file get called once, or is it re-read for each element?
The three things you need to do are: Go to Site Configuration -> Performance: Set the following options, and click Save configuration: Caching mode: Disabled Minimum cache lifetime: none Page compression: Disabled Block cache: Disabled Optimize CSS files: Disabled Optimize JavaScript files: Disabled Click Clear cached data. Go to Site building -> Views -> Tools: Check Disable views data caching and click Save configuration. Click Clear Views' cache. Install the Devel module, and go to Site Configuration -> Devel settings: Check Rebuild the theme registry on every page load and click Save configuration. This will make sure all registries and caches except for the menu router will be rebuilt on every page, effectively preventing caching in practice. If you really need the menu router to be rebuilt on every page (it's completely unnecessary, as you only need to worry about it when you change your implementation for hook_menu() or hook_menu_alter()), you could add menu_rebuild() to hook_init() in a custom module: function mymodule_init() { menu_rebuild(); }
I'm developing a site in Drupal 6, and I'm going mad trying to work out why pages (specifically pages containing views), I'm working on locally are caching content instead refreshing the contents of the page, and that of linked js files, I'm relying on for making a mashup - is there a checklist I can check against to be sure I'm not missing when trying to deactivate caching? These are the steps I'm taking: On the server: set the site to rebuild the theme on each load cleared cache using drush (as in drush @dev cc all`) on each page load checked that the json output from a view isn't caching disabled any css or js caching in admin/settings/performance On Firefox/firebug using the web developer extension, disabled the cache been refreshing the site using shift-F5 to force a clear of the cache I'm not using varnish or memcached, nor any other caching modules like boost - it's straight Apache-PHP through to Drupal and MySQL. What am I missing here?
Where can I check to be sure Drupal's caching is switched off for local development?
2 Safari refuses to cache URLs with query parameters. So instead of a query parameter you can use something like a versioned path and use mod_rewrite to remove it. Something like: <script type='text/javascript' src='/scripts/1377815076/GamesCharts.js'></script> And in Apache config file (config for other servers left as homework): RewriteEngine On RewriteRule ^/scripts/[0-9]+/(.+)$ /scripts/$1 Share Improve this answer Follow answered Sep 22, 2010 at 7:03 slebetmanslebetman 112k1919 gold badges143143 silver badges175175 bronze badges Add a comment  | 
I have a .NET web applications which uses a lot of javascript. The .aspx and the .js files go hand-in-hand together. Problem: The .aspx files are always up-to-date on the client (not cached) but the .js files might well be cached. This is even a problem if the files are only cached for one session since users are spending many hours on my site and everytime I update a .aspx/.js pair users are running into a problem. Now, I found a solution but I am not sure if there is perhaps a better solution or if my solution has a performance drawback. This is my solution: .js-References in .aspx: <script type='text/javascript' src='../scripts/<%# GetScriptLastModified("MyScript.js") %>'></script> So, the "GetScriptLastModified" will append a ?v= parameter like this: protected string GetScriptLastModified(string FileName) { string File4Info = System.Threading.Thread.GetDomain().BaseDirectory + @"scripts\" + FileName; System.IO.FileInfo fileInfo = new System.IO.FileInfo(File4Info); return FileName + "?v=" + fileInfo.LastWriteTime.GetHashCode().ToString(); } So, the rendered .js-Link would look like this to the client: <script type='text/javascript' src='/scripts/GamesCharts.js?v=1377815076'></script> The link will change every time, when I upload a new version and I can be sure that the user immediately gets a new script or image when I change it.
Howto: Javascript files always up-to-date
About the cache requests with GET parameters: Cache a django view that has URL parameters Filesystem caching is usually fast enough, easy to setup, and maintenance is same as managing any directory. Delete the cache by removing the files in the cache directory.
I'm developing a Django application on a shared server (Dreamhost). A view that I'm implementing takes several HTTP GET parameters to perform database lookups and return serialized data. Some of these lookups generate several hundreds of kilobytes of data that is expensive to compute. Caching this data would be ideal as it would save both DB access and computation time. I have two questions: The Django documentation mentions that the cache middleware doesn't cache requests with GET or POST parameters. Is there any way around this? The Dreamhost wiki indicates that either Filesystem caching or Database caching are best suited for Dreamhost sites. Which of these will be better in terms of performance, setup, and maintainability. Are there any alternatives for shared hosting? I'm also open to suggestions for other solutions to my problem. Thanks in advance! -Advait
Cache a Django view with GET parameters on a shared server
Cake is automatically caching the model schema, whatever you set in Cache::config has absolutely no impact on this behavior. In debug mode (Configure::write('debug', > 0)) Cake is pretty much constantly refreshing the model schema to allow you to make changes to your database at any time and have these changes properly reflect in the application. In production mode (Configure::write('debug', 0)) the model cache will rarely be refreshed. And BTW, you should read the core.php documentation: ;-P /** * CakePHP Debug Level: * * Production Mode: * 0: No error messages, errors, or warnings shown. Flash messages redirect. * * Development Mode: * 1: Errors and warnings shown, model caches refreshed, flash messages halted. * 2: As in 1, but also with full debug messages and SQL output. * * In production mode, flash messages redirect after a time interval. * In development mode, you need to click the flash message to continue. */
cache model files in app\tmp\cache\models\ I set config Cache::config('default', array( 'engine' => 'File', 'duration' => 3600000, 'serialize' => false ) ); why model seem only cache in 3s , if > 3s it reloading model. ( because my app loading >4s if i dont cache , if I refresh page in <3s it loading only 1s, but if >3s it loading >4s. I assumes slow loading because app model in plugin ) WHY I SET duration 3600000 or '+5minutes' it still cache <3s model file AND serialize => false it still serialize ( i checked in file ) AND error usually happen is C:\xampp\htdocs\myapp\app\tmp\cache\models\cake_model_default_poll_votes) [function.fopen]: failed to open stream: Invalid argument [CORE\cake\libs\file.php, line 154] Anyone help I very appreciated >< ( i read documentation very many , please dont suggest read documentation...)
Cache model files in CakePHP
One way is to use the EF cache provider. Another is to use your web server's cache. But make sure you are not prematurely optimizing, or optimizing the wrong thing. Doing so will make your life miserable. It is generally better to optimize a web site with front end caching.
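A minimal sketch of the web-server-cache option, reusing the query from the question below; MyEntities stands in for the asker's context type, and the "students" key and 10-minute lifetime are arbitrary illustrative choices:

using System;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Web;
using System.Web.Caching;

// Sketch: hold a read-only EF result set in the ASP.NET cache.
public List<Student> GetStudents(MyEntities context)
{
    var cached = HttpRuntime.Cache["students"] as List<Student>;
    if (cached != null)
        return cached;

    // NoTracking detaches the entities, so the cached copies outlive the context.
    context.Students.MergeOption = MergeOption.NoTracking;
    var students = context.Students.Where(s => s.Name == "Adam").ToList();

    HttpRuntime.Cache.Insert("students", students, null,
        DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
    return students;
}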
I have EntityTypes generated from a database using Entity Framework 4. I would like to use Cache to store some of these EntityTypes for performance reasons. Is it safe to do the following provided that the object will be used for read-only actions: context.Students.MergeOption = MergeOption.NoTracking; var students = context.Students.Where(s => s.Name == "Adam").ToList(); Cache["students"] = students; Thanks.
Caching Entity Framework EntityTypes
I've heard good things about Redis, that's one. I've also heard extremely positive things about memcached. It is suitable for binary data as well. Take Facebook for example: these guys use memcached, also for the images! As you know, images are binary. So, get memcached, get a machine to utilize it, a binding for PHP or whatever you use for your sites, and off you go! Good luck!
I pre-generate 20+ million gzipped html pages, store them on disk, and serve them with a web server. Now I need this data to be accessible by multiple web servers. Rsync-ing the files takes too long. NFS seems like it may take too long. I considered using a key/value store like Redis, but Redis only stores strings as values, and I suspect it will choke on gzipped files. My current thinking is to use a simple MySQL/Postgres table with a string key and a binary value. Before I implement this solution, I wanted to see if anyone else had experience in this area and could offer advice.
What's the best way to cache binary data?
Most heap implementations will get you the lowest key in your collection in O(1) time, but there are no guarantees regarding the speed of random lookups or removal. I'd recommend pairing up two data structures: any simple heap implementation and any out-of-the-box hashtable. Of course, any balanced binary tree can be used as a heap, since the smallest and largest values are on the left-most and right-most leaves respectively. A red-black tree or AVL tree should give you O(lg n) heap and dictionary operations.
I'm implementing something like a cache, which works like this: If a new value for the given key arrives from some external process, store that value, and remember the time when this value arrived. If we are idle, find the oldest entry in the cache, fetch the new value for the key from external source and update the cache. Return the value for the given key when asked. I need a data structure to store key-value pairs which would allow to perform the following operations as fast as possible (in the order of speed priority): Find the key with the lowest (unknown) value. Update a value for the given key or add a new key-value pair if the key does not exist. Other regular hash-table operations, like delete a key, check if a key exists, etc. Are there any data-structures which allow this? The problem here is that to perform the first query quickly I need something value-ordered and to update the values for the given key quickly I need something key-ordered. The best solution I have so far is something like this: Store values an a regular hashtable, and pairs of (value, key) as a value-ordered heap. Finding the key for the lowest value goes like this: Find the key for the lowest value on the heap. Find the value for that key from the hashtable. If the values don't match pop the value from the heap and repeat from step 1. Updating the values goes like this: Store the value in the hashtable. Push the new (value, key) pair to the heap. Deleting a key is more tricky and requires searching for the value in the heap. This gives something like O(log n) performance, but this solution seems to be cumbersome to me. Are there any data structures which combine the properties of a hashtable for keys and a heap for the associated values? I'm programming in Python, so if there are existing implementations in Python, it is a big plus.
Data structure to store key-value pairs and retrieve the key for the lowest value quickly
If you have recently upgraded Smarty to version 3.x, some method names have changed; in your case "clear_assign" has changed to "clearAssign". For more information, get the offline documentation for Smarty 3.0.x from HERE or the online documentation from HERE
I need to remove the cache when a user changes language, but I get an error message. $smarty = new Smarty; //$smarty->force_compile = true; $smarty->debugging = true; $smarty->caching = false; $smarty->cache_lifetime = 120; if (isset($_COOKIE['country'])) { $country = $_COOKIE['country']; $language = "eng"; if ($country == "NO"){ $language = "nor"; $smarty->clear_all_cache(); } } I also get this message when I use clear_assign: function call 'clear_assign' is unknown or deprecated
Error in Smarty: function calls 'clear_all_cache' and 'clear_assign' are unknown or deprecated?
Pass it to your AsyncTask constructor, or make it a static variable.
Can I use getCacheDir() only in a class that extends Activity? I would like to use it in an AsyncTask so that I can do the time-intensive cache file saving there. Thanks, Chris
Android getCacheDir in AsyncTask
Zend_Cache_Frontend_Output may be what you need here: if (!($cache->start('indexpublic'))) { // output everything as usual $this->render('indexpublic'); $cache->end(); // output buffering ends } Before that, the cache manager needs to be initialized (could be in the bootstrap), e.g: $frontendOptions = array( 'lifetime' => 7200 ); $backendOptions = array( 'cache_dir' => '/tmp/' ); // getting a Zend_Cache_Frontend_Output object $cache = Zend_Cache::factory('Output', 'File', $frontendOptions, $backendOptions);
I have an action that renders two different view scripts based on whether the user is logged in or not. class IndexController extends Zend_Controller_Action { .... public function indexAction() { $auth = Zend_Auth::getInstance(); if($auth->hasIdentity()) { $this->render('indexregistered'); return; } else { $this->render('indexpublic'); return; } } .... } I have seen quite some useful examples on how to use the Zend Cache and they seem to be based on the fact that the action renders one particular script. What am really looking at is the best approach to cache the indexpublic script which gets quite some hits and I would really like to avoid the Zend MVC overhead if possible.
How do I use Zend Cache for this particular problem
In a real ASP.NET MVC site you will hardly have the chance to use OutputCache. The problem is that there is no safe way to use donut caching. This means you cache most of the page but have some user-specific parts that you don't cache. This works in old-school ASP.NET. Phil Haack wrote about donut caching, but it turned out to be not feasible in MVC2: http://haacked.com/archive/2008/11/05/donut-caching-in-asp.net-mvc.aspx Often I started using output caching without thinking about this little user-specific detail everybody forgets. You might think: I can work around it by doing some JavaScript stuff that loads the dynamic data... bad idea for a public-facing website. There are non-JavaScript clients, and there is the Google bot. At first I was frightened that this might bring our high-traffic site down, but it works very well by caching the results of calls into the data layer.
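A rough sketch of that data-layer caching; the helper and all names here are invented for illustration, not taken from the site in question:

using System;
using System.Web;
using System.Web.Caching;

// Hypothetical helper: cache expensive data-layer results, not the rendered
// page, so user-specific markup in the view stays fully dynamic.
public static T GetCached<T>(string key, TimeSpan lifetime, Func<T> load) where T : class
{
    var hit = HttpRuntime.Cache[key] as T;
    if (hit != null)
        return hit;

    T value = load();
    HttpRuntime.Cache.Insert(key, value, null,
        DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
    return value;
}

// Usage inside an action:
// var products = GetCached("products-home", TimeSpan.FromMinutes(5),
//     () => repository.GetProducts());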
I have an ecommerce site built in ASP.NET MVC. I'm using caching to improve performance in my pages, and it's working. I'd like to know what offers better performance: for example, I can set OutputCache in my views and use this cache for the whole page, OR I could get my list of products in the controller, put it in the cache (like the code below) and send it to the view to render for the user. private IEnumerable<Products> GetProductsCache(string key, ProductType type) { if (HttpContext.Cache[key] == null) HttpContext.Cache.Insert(key, ProductRepository.GetProducts(type), null, DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration); return (IEnumerable<Products>)HttpContext.Cache[key]; } public ActionResult Index() { var home = new HomeViewModel() { Products = GetProductsCache("ProductHomeCache", ProductType.Product), Services = GetProductsCache("ServiceHomeCache", ProductType.Service) }; return View(home); } Both work, but I'd like to know which is the preferred method to improve performance, or whether there are other, better ways to do this.
Cache of Objects or Output in View: Which is better?
You can add a random seed to your js file URL. I mean <script src='jsFile.js?seed=12345'>, and every time you want to empty the cache, change the seed number. Update: as I understand it, you have to write it like this: config.templates_files = [ '/mytemplates.js?seed=12345' ];
I've created a page that uses the CKEditor javascript rich edit control. It's a pretty neat control, especially seeing as it's free, but I'm having serious issues with the way it allows you to add templates. To add a template you need to modify the templates js file in the CKEditor templates folder. The documentation page describing it is here. This works fine until I want to update a template or add a new one (or anything else that requires me to modify the js file). Internet Explorer caches the js file and doesn't pick up the update. Emptying the cache allows the update to be picked up, but this isn't an acceptable solution. Whenever I update a template I do not want to tell all of the users across the organisation to empty their IE cache. There must be a better way! Is there a way to stop IE caching the js file? Or is there another solution to this problem? Update Ok, I found this section in the CKEditor API that will allow me to use the "insert timestamp into the url" solution suggested by several people. So the script now looks like this: config.templates_files = [ '/editor_templates/site_default.js?time=' + utcTimeMilliseconds ]; Thanks for your help guys.
How do I stop js files being cached in IE?
Depending on how you're downloading the files, they may already be getting added to the cache. How are you downloading them now? You can add items to the IE cache. Natively you have a couple of options: URLDownloadToCacheFile() will do it in one nice "easy" step. CommitUrlCacheEntry() is the hardcore way of doing it. I assume the sample you found uses FindFirst/FindNextUrlCacheEntry() to enumerate the cache, so you should be able to add the interop you need for CommitUrlCacheEntry() fairly easily. However, as a former member of the IE team, I cannot recommend enough that you should not use the Wininet cache for anything. It is not reliable, it can be cleared out from underneath you, it frequently gets corrupted, it has some hard limits on how many things it can store, it's subject to various rules you don't understand, and it's going to be optimized for IE's usage, not yours. Seriously, don't do this. If you really need a cache, write your own.
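For the URLDownloadToCacheFile() route, the C# interop looks roughly like this; treat the exact signature as an assumption and check it against pinvoke.net before relying on it:

using System;
using System.Runtime.InteropServices;
using System.Text;

class WinInetCache
{
    // Assumed P/Invoke signature for urlmon!URLDownloadToCacheFile; verify before use.
    [DllImport("urlmon.dll", CharSet = CharSet.Auto)]
    static extern int URLDownloadToCacheFile(IntPtr lpUnkCaller, string szURL,
        StringBuilder szFileName, uint cchFileName, uint dwReserved, IntPtr pBSC);

    // Downloads the URL into the WinInet cache and returns the local file path.
    public static string Fetch(string url)
    {
        var path = new StringBuilder(260); // MAX_PATH
        int hr = URLDownloadToCacheFile(IntPtr.Zero, url, path,
            (uint)path.Capacity, 0, IntPtr.Zero);
        return hr == 0 ? path.ToString() : null; // 0 == S_OK
    }
}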
I'm writing a program in C# using the WPF framework. I need to display images, and I'd like to cache them to avoid downloading them constantly. I can code my own cache, however, IE already has a caching system. I can find code to read entries out of the IE cache, however I've found nothing dealing with the issue of adding items to the cache. Is there a good way to do it, or should I just implement a separate cache?
Working with the IE cache
For Java, take a look at H2. It has in-memory databases, is written in Java and should provide the performance you need. Derby or HSQLDB are other Java alternatives.
In my project I need two tables, each with about 2000 rows. I want my application to be fast, so my DB should be loaded into memory (cached) when the app starts, and before it closes the DB has to be saved to disk. I am using Java and I want to use SQL.
Cached database
This is how I resolved it. NorthScale was not throwing any error when the serialization failed. I serialized my domain object with binary serialization and was able to find out which classes were failing (since they were not marked as [Serializable]). Fixed it and it worked. Seeing the results of protobuf-net, I am thinking about switching my serializer too.
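The probing trick described above can be as simple as this sketch; parentObject stands in for the asker's real domain object, and the exception message names the first non-serializable type in the graph:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

// Probe: round-trip the graph through BinaryFormatter and read the failure.
using (var ms = new MemoryStream())
{
    try
    {
        new BinaryFormatter().Serialize(ms, parentObject);
        Console.WriteLine("Whole graph is serializable.");
    }
    catch (SerializationException ex)
    {
        // e.g. "Type 'X' in Assembly '...' is not marked as serializable."
        Console.WriteLine(ex.Message);
    }
}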
I have this huge domain object (say parent) which contains other domain objects. It takes a lot of time to "create" this parent object by querying a DB (OK, we are optimizing the DB). So we decided to cache it using memcached (with NorthScale to be specific). So I have gone through my code and marked all the classes (I think) as [Serializable], but when I add it to the cache, I see a SerializationException getting thrown in my VS.NET output window. var cache = new NorthScaleClient("MyBucket"); cache.Store(StoreMode.Set, key, value); This is the exception: A first chance exception of type 'System.Runtime.Serialization.SerializationException' occurred in mscorlib.dll So my guess is I have not marked all classes as [Serializable]. I am not using any third-party libraries and can mark any class as [Serializable], but how do I find out which class is failing when the cache is trying to serialize the object? Edit1: casperOne's comments make me think. I was able to cache these domain objects with the Microsoft Cache Application Block without marking them [Serializable], but not with NorthScale memcached. It makes me think there might be something to do with their implementation, but just out of curiosity, I am still interested in finding where it fails when trying to add the object to memcached.
Serialization for memcached
Imagine the media server as an S3 bucket; that would probably make it easier to understand what should happen where. Install lighttpd on the media server and serve the images directly from there. For storage, process the image on the main server, upload the image to the media server, and store all the info related to the image in the database, so that when you want to serve it, you already have all the info available and you assume, for all the right reasons, that the image is still there :) As for the way you want to do it, I think it would cause a serious bottleneck and raise a lot of network traffic; you are kind of trying to implement "messages" found in distributed systems, and we all know the pitfalls involved there. I say keep it simple!
So, I have three servers, and the idea was to keep all media (images, files, movies) on a media server. I never got around to doing it, but I think I probably should. So these are the three servers: WWW server, DB server, Media server. Visitors obviously connect to the WWW server, and currently image resizing and caching is done on the WWW server, as the original files are kept there. So the idea is that my image functions, which do all the image compositing, resizing and caching, would just pipe the command over to the media server, which would return the path to the finished file. What I don't know is how to handle functions such as file_exists() and figuring out image dimensions when needed, before any image management even comes into play. Do I pipe all these commands to the other server via HTTP? I was thinking along the lines of doing it this way: function image(##ARGS##){ if ($GLOBALS["media_host"] != "localhost"){ list($src, $width, $height) = file('http://$GLOBALS[media_host]/imgfunc.php?args=##ARGS##'); return "<img src='$src' height and width >"; } .... do other stuff here } Am I approaching this the wrong way? Is there a better way to do this?
How to handle media kept on a separate server (PHP)
The System.Web.Caching.Cache object is already available to you in the controller via: this.HttpContext.Cache That is the same built-in cache that's also available in Web Forms.
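Applied to the controller in the question below, that might look like this sketch; BuildIndexViewData is a hypothetical stand-in for the asker's loading logic:

// Sketch: use the framework-provided cache instead of "new Cache()".
public ActionResult Home()
{
    var view = this.HttpContext.Cache["viewHome"] as IndexViewData;
    if (view == null)
    {
        view = BuildIndexViewData(); // hypothetical: builds the view model
        this.HttpContext.Cache.Insert("viewHome", view);
    }
    return View(view);
}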
Is it correct to implement my caching object like this in my controller: public class HomeController : BaseController { public static Cache cachingControl = new Cache(); ... And I use it like this: public ActionResult Home() { IndexViewData view = new IndexViewData(); view.UserId = UserId; if (cachingControl.Get("viewHome") != null) { view = (IndexViewData)cachingControl.Get("viewHome"); } else { view.allAdsList = AllAds(5000, 0); if (Request.QueryString["voirTous"] != null) view.loadGeneral(true); else view.loadGeneral(false); cachingControl.Insert("viewHome", view); } view.adsList = SetupSearch(5, false, 0); return View(view); } But when I call this line: if (cachingControl.Get("viewHome") != null) { it throws the error NullErrorException. But I know it can be null; this is why I put this condition in. Do you have an alternative or any tips for me? Thank you! P.S.: I know the code is weird :P but I must support it ...
Implement System.Web.Caching.Cache Object in a controller to cache specific query
The only way you can consistently do this is if you are using HTTPS. If not, you have no way to force the browser not to use a cached page. There are the hacks you mentioned, but they are not foolproof. If it is really important, use HTTPS, because each request will force a reload.
Basically all pages on this site I am building cannot be accessed when the user clicks on "Back" (or with key control) in the browser, and the page should expire if one is trying to navigate back in history. I put into Global.asax::Application_BeginRequest Response.Cache.SetCacheability(HttpCacheability.NoCache) Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1)) Response.Cache.SetValidUntilExpires(False) Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches) Response.Cache.SetNoStore() This would clear out the cache and disallow going back to any pages when the user is logged out, but doesn't do the job while the user is logged in. I saw posts where people suggested using a javascript approach, by calling History.Forward(1) on the page. But I wouldn't like to do this, as it will require javascript enabled to work (which user can disable). Appreciate any suggestions.
ASP.Net: Expiring a page when navigating back
You don't need access to the client machines for this. Best practices are all server-side: GZip everything; Minify all JavaScript and CSS; Minimize the number of external HTTP requests. Try to keep these to, say, 1-5 JS files, 1-5 CSS files and a few images. If you have lots of images you may want to use CSS spriting; Version your images, CSS and JavaScript; and Use a far-future Expires header for all images, CSS and JavaScript. The last two points mean the content is cached until it changes. You can use an actual version number for official releases (eg jquery-1.4.2.min.js, the 1.4.2 is a version number). For code in your own application, you'll often use something like the mtime (last modified time) of the file as a query string. For example: <script type="text/javascript" src="/js/code.js?1233454356"></script> The value after the ? is generated from the modified time of the file. The client downloads it once, and because of the far-future Expires header it won't be downloaded again until the version number changes, which won't happen until the file changes.
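In ASP.NET terms, the mtime trick might be sketched like this; the helper name is invented, and error handling for missing files is omitted:

using System.IO;
using System.Web;
using System.Web.Hosting;

// Hypothetical helper: append the file's last-write time so browsers
// re-fetch only when the file actually changes.
public static string Versioned(string virtualPath)
{
    string physical = HostingEnvironment.MapPath(virtualPath);
    long stamp = File.GetLastWriteTimeUtc(physical).Ticks;
    return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + stamp;
}

// In the page:
// <script type="text/javascript" src="<%= Versioned("~/js/code.js") %>"></script>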
I am working for an intranet application. Therefore I have some control on the client machines. The JavaScript library I am using is somewhat big in size. I would like to pre-install OR pre-load OR cache the JavaScript library on each machine (each browser as well) so that it does not travel for each request. I know that browsers do cache a JavaScript library for subsequent requests but I would like the library to be cached once for all subsequent requests, sessions and users. What is the best mechanism to achieve this?
How best to pre-install OR pre-load OR cache JavaScript library to optimize performance?
You shouldn't have permanent js files located in your files folder. They should be either in your theme or in a module that uses them. The files folder is meant for uploaded files and other files that Drupal creates on the fly. The reason for your problem is probably that Drupal has write access to the folder where you have placed the files and cleans out files in there, since that folder is only used for its compressed copies. You should think twice about which files you let Drupal write to. Letting Drupal have write access to a script file you use is an added security risk. Generally, you don't want to let Drupal write to js or php files, because if a cracker were able to get Drupal to write to those files, he would be able to more or less gain control over your entire site. This is also why the compressed js files that Drupal uses have a long and random name. So try moving those files into your theme and see if that doesn't fix it. If you want to link to them you can do drupal_add_js(drupal_get_path('theme', 'name_of_your_theme') . '/path/to/file.js');
When I delete "Page requisites" cache, my 2 Javascript files that I use for my home page image rotator get deleted. Here is how I'm adding the javascript for those 2 files into a WYSIWYG editor with PHP code enabled: <? drupal_add_js('sites/default/files/js/jquery.jcarousellite.js'); drupal_add_js('sites/default/files/js/cycle.js'); ?> Some html here for the rotator...... Then I also have some JS code added to the home page only using the js Injector . Any ideas why this is happening? thanks
Drupal flushing "Page requisites" cache also deletes javascript files
Double check the security permissions for the changed files and the temp files. If the web server's account cannot read them, then your changes won't show up. Ensure you have an entry for the "neutral" culture resource. If you do not have an entry in your main resx file, then the compiler will not emit code to translate that particular Label/Button, etc. Also forcing the site to recompile is a good idea (but you have that up there). Sometimes what I do after a resource file change is to open the aspx page and just add a space to the end, save and close. Voila it will be recompiled.
I'm having some strange problems with a site using .net 2.0 and IIS 6. The site uses resx files so it's localized in many languages. In some of those files we make changes to the resx and recompile, and the changes don't show up on the site. Ever. It's primarily in one language (Arabic) that this happens, but occasionally other languages as well. My first thought was that the changes were not drastic enough for visual studio to build them in, but after publishing I can verify the changes in the new resx files. Also I can go on the server and physically confirm the changes in the resx, and open the webpage and the changes are not showing. I thought there may be a server at work caching it, but it's the same from multiple internet connections. What I've already tried I have restarted IIS, both the individual sites and the whole service. I have restarted the server. I have recycled the application pool. I have cleared out all the temp files in the .net folder. I have inserted headers into the site setting it to no-cache and verified it I have inserted meta tags to indicate no-cache I have deleted the resx, accessed the site and got the error, then reuploaded it. I have drastically changed the size of the resx file in hopes it would trigger something. I have drastically changed the amount of text in the field, none of which shows up. I have triple checked that I'm asking for the right fields in the resx. I have verified my webhost is not using any caching service. For some reason the changes just aren't taking. Any ideas an help would be greatly appreciated, this one has me and other knowledgeable folks here completely stumped.
Strange IIS Caching Behavior with Resx files
KeywordExtension will let you put a keyword in a file whose results you can tear apart in order to get the hash.
I'd like to do the same thing that they're doing here in stackoverflow. <link rel="stylesheet" href="http://sstatic.net/so/all.css?v=6274"> <script type="text/javascript" src="http://sstatic.net/so/js/master.js?v=6180"></script> <script src="http://sstatic.net/so/js/question.js?v=6274" type="text/javascript"></script> Do you see those ?v=... ? I'd like, at each commit, to change some variable in my code in order to make browsers refresh their cache when needed. It may be even just one for each commit (it doesn't need to monitor each file in an independent way) but I'd like it to be automatically generated when I commit. The difference is that I'm using mercurial and not subversion. Any hint?
How to include in code an unique ID related to a mercurial commit?
You have two options; you can use either one or both. 1) Cache the call at the web service. You need to ensure that the cache is indexed against the exact parameters used, so you don't send back "the wrong answer" to a request. For example, "http://webservice/GetSomething/983" should only cache the result of "GetSomething" where the id parameter is 983. If another request for 983 comes in, you can use your cache; otherwise you'll make a new request. 2) Cache the response at the client. Be careful about doing this with large volumes of data, as you'll start consuming too much memory. Essentially, you create a JavaScript cache for the response data; you'll still need to bear in mind the parameters used for the call to ensure you don't use an item in the cache that was retrieved using different parameters.
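Option 1 server-side could be sketched like this, folding the exact parameters into the cache key; all names here are invented for illustration:

using System;
using System.Web;
using System.Web.Caching;

// Hypothetical service method with parameter-aware caching.
public string GetSomething(int id)
{
    string key = "GetSomething:" + id; // the parameter is part of the key
    var hit = HttpRuntime.Cache[key] as string;
    if (hit != null)
        return hit;

    string result = LoadSomethingFromDatabase(id); // stands in for the real work
    HttpRuntime.Cache.Insert(key, result, null,
        DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
    return result;
}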
Any clues on how to do it?
caching remote json (or xml) calls (from webservices)
This technique is called memoization. This post from Bart de Smet's blog should get you started: Memoization for dummies
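The idea translates to C# roughly as follows; this is a single-argument sketch and deliberately ignores thread safety:

using System;
using System.Collections.Generic;

// Sketch: wrap any Func<TArg, TResult> so repeated calls with the same
// argument return the cached result instead of recomputing it.
static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> f)
{
    var cache = new Dictionary<TArg, TResult>();
    return arg =>
    {
        TResult result;
        if (!cache.TryGetValue(arg, out result))
        {
            result = f(arg);
            cache[arg] = result;
        }
        return result;
    };
}

// Usage: var slowSquare = Memoize<int, int>(n => ExpensiveSquare(n));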
C# WinForms application. I require caching of a function's return values based on its parameters: if the function's parameters change between calls it must be called again, but for the same parameters it should return the value directly from the cache. Is there any existing C# facility available, or any quick and easy technique or link? I would appreciate it.
Is there a ready-made decorator available to cache the function return value based on parameters?
You can use System.Web.HttpRuntime.Cache to access the cache statically.
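So Session_End can reach the cache without a request context; a sketch using the cast and key from the question below:

// Sketch: HttpRuntime.Cache works even when HttpContext.Current is null.
void Session_End(object sender, EventArgs e)
{
    var cacheBL = (CacheBL)System.Web.HttpRuntime.Cache.Get("MyCache_CacheSlot");
    if (cacheBL != null)
    {
        // remove this session's user from the cached dictionary here
    }
}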
I have a wrapper class for caching (CachingBL) where I store users that are currently signed in (some of their session info). In the CachingBL wrapper there is actually a dictionary of users, and I am putting that dictionary in the cache like this: HttpContext.Current.Cache.Insert(...): At session end I would need to access the cache like this: var cacheBL = (CacheBL)HttpContext.Current.Cache.Get("MyCache_CacheSlot"); But the problem is that HttpContext.Current is empty, so I cannot access the Cache object. The Cache itself is not empty (tested), but I can't figure out how to access it at Session_End.
asp.net - deleting cache object at session end
The solution for this is to create a scheduled task/crontab that will simulate a sync for you. You will have to create a branchspec for this client, then run:
export P4CLIENT=prefetch
p4 -Zproxyload sync //depot/main/...
This command will not copy the files to your client; it will only tell the proxy to cache them.
The Perforce proxy only caches files when a user syncs them, so if you are the only user of a Perforce proxy that is syncing some locations, you will gain almost nothing from it. The question is: how do you make the Perforce proxy cache the files for you?
How do I precache files on a Perforce proxy server in order to obtain some decent speed?
You need to use IShellItemImageFactory::GetImage(). .NET interop is here. http://www.pinvoke.net/default.aspx/Interfaces.IShellItemImageFactory There is also a sample using Direct2D and this interface on msdn.microsoft.com.
I want to access the thumbnail cache of Vista and Windows 7 to use it in my ImageList. I know how to do it in XP by means of the thumbs.db files, but in Vista and 7 thumbs.db is not present, so how do I do it?
How to access the thumbnail cache of Vista and 7 using C#
I don't think I'm at the point where I need to go through and get memcached setup for my Rails app, but I would like to do some simple caching on a few things Then use your existing database to store your cached items. (You are using a database, right?) memcached is only a fast-but-dumb database. If you don't need the ‘fast’ part(*) then don't introduce the extra complexity, inconsistency and overhead of having a separate cache layer. memcache with file_store is a dumb-but-not-even-fast database, and thus of little use to anyone except for compatibility/testing. (*: and really, most sites don't. Memcache should be a last resort when you can't optimise your schema, denormalise it for common queries or pre-calculate complex operations any further. It's not something the average web application should be considering as essential for scalability.)
I don't think I'm at the point where I need to go through and get memcached setup for my Rails app, but I would like to do some simple caching on a few things. Is using file_store for as the config.cache_store setting sufficient enough? Or will having to access files for data over and over kill the benefit of caching in the first place from a server load stand point? Or maybe I'm not really understanding the difference between file_store and mem_cache_store...
Is there a big performance hit with using file_store for storing the cache as opposed to mem_cache_store?
Try adding these headers: header("Cache-Control: no-cache, must-revalidate"); header('Pragma: no-cache'); Also, maybe pass a random number and/or time() in the query string?
I have an application written in PHP/Javascript which uses AJAX extensively. I am concerned that the default caching behaviour for IE7 and IE8 set for our organisation, of 'Automatic' will scupper my application. There are approximately 1500 users and my IT department say that they won't change the caching option in IE for all those users. My question is: How can I absolutely guarantee that if I make a change to my application, that all users will immediately see that change? Also, how can I guarantee that AJAX will always bring back fresh results? Do I really have to resort to making all my URLs unique for every call? There seems to be a fair amount of uncertainty on this topic on the internet. There must be a definitive answer that always works. More Questions Why doesn't just setting the HTTP headers in the AJAX files do the trick? Also, how do I know that these solutions really work? What is the correct procedure for testing caching behaviour?
PHP Ajax Caching for IE7 and IE8
I have defined a block function that excludes small blocks of the templates from cache. function smarty_block_dynamic($param, $content, $smarty) { return $content; } $smarty->register_block("dynamic", "smarty_block_dynamic", false); Thus, anything in the template surrounded by {dynamic}{/dynamic} will not be cached. This allows for the output of, for example, session based data such as the logged in user name et cetera.
I was wondering: how do you deal with the scenario of a website that has logged-in and logged-out states at the top? So if someone is logged in, you say "Hello Scott"; if someone's not logged in, it says "Log In". I am using force_compile = false, and using (!$smarty->is_cached('index.tpl',$template_cache_id)) { do something } What do you use to keep some sections uncached and others cached in such a common scenario? My site is photoidentify.com. Thanks!
smarty cache and login states
A relation will be kept, but it will be different than you expect. When you serialize two instances of Character that both refer to the same EquipmentHandler, you're going to get two separate instances of this EquipmentHandler, instead of the single one you expected. As this example illustrates: <?php echo "BEFORE SERIALIZE:\n"; class A { } class B { } $a = new A; $b = new B; $a -> b = $b; $a2 = new A; $a2 -> b = $b; var_dump($a->b); var_dump($a2->b); echo "AFTER SERIALIZE:\n"; $a3 = unserialize(serialize($a)); $a4 = unserialize(serialize($a2)); var_dump($a3->b); var_dump($a4->b); The output of this is: BEFORE SERIALIZE: object(B)#2 (0) { } object(B)#2 (0) { } AFTER SERIALIZE: object(B)#5 (0) { } object(B)#7 (0) { } Look for the number after the pound. This refers to the object ID within PHP. Before serializing both $a->b and $a2->b refer to an object with object ID #2: the same instance. But after the serialization they refer to object IDs #5 and #7: different instances. This may, or may not, be a problem for you. To restore the connection to one single B object, you're going to have to get a little tricky. You could use the __sleep() handler in A to flatten the actual reference to an INSTANCE of B to just a mentioning of B: "I had a reference to B". Then implement the __wakeup() handler using that mentioning of a B instance in A to acquire a single instance of a new B object. BTW. The PHP session extension already does serializing automatically, no need for you to pre-serialize it yourself :)
I am writing a fairly complex PHP applications where a single user action can trigger changes in many other sub-systems, and I'm contemplating using an observer pattern. However, I am wondering if I have to re-create all the objects involved. Is it possible to while serializing objects to store their relationships? For example $equipmentHandler = new EquipmentHandler(); $character = new Character(); $character->subscribeOnEquipmentChanged($equipmentHandler); $_SESSION['character'] = serialize($character); $_SESSION['subscriber'] = serialize($equipmentHandler); Will the relationship be preserved after unserializing? Or do I have do lump them all into one object? $cache['character'] = $character; $cache['subscriber'] = $equipmentHandler; $_SESSION['cache'] = serialize($cache); Any advice would be appreciated. (PS. The character data requires many DB requests to create and I am thinking of storing it by doing a write to cache and DB, but only read from cache policy, so it will be serialized anyway)
PHP 5 - serializing objects and storing their relationship
1) I would cache them. You can always set CacheItemPriority.Low if you are worried about the cache 'filling up'. 2) Yes, the cache is designed to be accessed regularly. It can lead to huge performance improvements.
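For point 1, the priority is just another argument to Insert; a sketch, where the key and lifetime are illustrative:

using System;
using System.Web;
using System.Web.Caching;

// Sketch: low-priority items are evicted first when memory gets tight.
HttpRuntime.Cache.Insert(
    "stats:" + userId,             // hypothetical key
    userStatistics,                // the expensive-to-build object
    null,                          // no dependencies
    DateTime.UtcNow.AddMinutes(30),
    Cache.NoSlidingExpiration,
    CacheItemPriority.Low,         // fine for nice-to-have cached items
    null);                         // no removal callback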
My first time really getting into caching with .NET so wanted to run a couple of scenarios by you. Question 1: Many expensive objects I've got some small objects (simple int/string properties) which are pretty expensive to instantiate. These are user statistic objects which each user may have 1 - 10 of. Is it good or bad practice to fill up the cache with these fellas? Question 2: Few cheap regularly used objects Also got a few objects (again small) which are used many times on every page load. Is the cache designed to be accessed so regularly? Fanks! stackoverflow: Cracking question suggestion tool btw.
ASP.NET object caching - how much is too much?
Remember that in UNIX, everything is a file. When you put that many files into a directory, something has to keep track of those files. If you do an ls -la, you'll probably note that the '.' entry has grown to some size. This is where all the info on your 10000 files is stored. Every seek and every write into that directory will involve parsing that large directory entry. You should implement some kind of directory hashing system. This will involve creating subdirectories under your target dir, e.g. /somedir/a/b/c/yourfile.txt /somedir/d/e/f/yourfile.txt This will keep the size of each directory entry quite small, and speed up IO operations.
I'm using PHP to make a simple caching system, but I'm going to be caching up to 10,000 files in one run of the script. At the moment I'm using a simple loop with $file = "../cache/".$id.".htm"; $handle = fopen($file, 'w'); fwrite($handle, $temp); fclose($handle); ($id being a random string which is assigned to a row in a database) but it seems a little bit slow; is there a better method of doing that? Also, I read somewhere that on some operating systems you can't store thousands and thousands of files in one single directory; is this relevant to CentOS or Debian? Bear in mind this folder may well end up having over a million small files in it. Simple questions I suppose, but I don't want to start scaling this code and then find out I'm doing it wrong; I'm only testing with caching 10-30 pages at a time at the moment.
PHP writing large amounts of files to one directory
You may find you need to use the PRG pattern (Post/Redirect/Get). With this pattern, the handler for the POST will: perform the heavy computations, determine the search results, and store them in the user's session (or store them in the db keyed by the user's session). Send a response with a redirect header to an idempotent page, which is then fetched by the browser using a GET, when it follows the redirection. When the redirected-to page is accessed, the server displays the search results page, computed from the stored data in the session, and at a different URL from the URL that was POSTed to. You should be able to use normal caching headers for this (search results) page, depending on how volatile your search results will be.
I have got a familiar problem. I am using Django-0.97, and cannot upgrade -- though the version of Django being used should not play any part in the cause of the problem. I have a search view that presents a form to the user, and, on submission of the form via POST, performs heavy computations and displays a list of items that are generated as a result of those computations. Users may click on the "more info" link of any of those items to view the item detail page. Users on IE, once they are on the item detail page for any item from the search results page, get the familiar "webpage has expired, click on refresh button, yadda yadda yadda" error when they hit the "back" button on the browser. Sadly, a good majority of the users of the site use IE, are not tech savvy, and are complaining about this problem. Thinking that setting up a cache backend may solve the problem, I configured a simple cache backend. I juggled with per-site cache and per-view cache, but to no avail. And now, I am not too sure I have set up the cache stuff properly. Any hints, suggestions that may help in mitigating the problem will be hugely appreciated. Thanks. UPDATE (20 July 2009) I have used Fiddler to inspect the HTTP headers of both the request and response. IE is sending the Pragma: no-cache header in the POST request. The HTTP response generated as a result of the request has the following headers: Cache-Control: public, max-age=3600 Date: someDateHere Vary: Cookie And, yes, I am not using the PRG pattern.
Setting up cache with Django to work around the "page has expired" IE problem
I think you are on the right track with putting version numbers on your js and css files. You may also want to use a build tool, like Ant (http://ant.apache.org/) or NAnt (http://nant.sourceforge.net/), to put all of this together for you.
What are your tricks on getting the caching part of web application just right? Make the expiry date too long and we'll have a lot of stale caches, too short and we risk the servers overloaded with unnecessary requests. How to make sure that all changes will refresh all cache? How to embed SVN revision into code/url? Does having multiple version side-by-side really help to address version mismatch problem?
How to deal with browser cache?
What I have done is use page caching, and then make an AJAX call to fetch either: The entire header. Specific parts of the header that are dynamic. Also, if you are just looking to include the users name, a better way exists. Simply store their name in a cookie and then use javascript to display it in the header. With no cookie, show a link to go login or register.
I have a set of largely static pages which I'd be happy to page cache for relatively long periods apart from the fact that their layout includes a much more dynamic header. The most promising idea so far seems to be using action caching without layout :- class SomethingController < ApplicationController caches_action :index, :layout => false end Then at least the main content of the page is cached. Does that make sense? Or would I be better off doing something else, e.g. fragment caching, server-side include, etc...?
What is the best Rails caching option for largely static pages with a dynamic header
It appears that you cannot update a DOM element by id when it is inside a form tag in IE. Has anyone found a way around this? My code works fine when I move it outside the form tag, and also when I just comment out the form tag and don't move the DOM element.
I am trying to submit a form using Ajax.Updater and have the result of that update a div element in my page. Everything works great in IE6, FF3, Chrome and Opera. However, In IE7 it sporadically works, but more often than not, it just doesn't seem to do anything. Here's the javascript: function testcaseHistoryUpdate(testcase, form) { document.body.style.cursor = 'wait'; var param = Form.serialize(form); new Ajax.Updater("content", "results/testcaseHistory/" + testcase, { onComplete: function(transport) {document.body.style.cursor = 'auto'}, parameters: param, method: 'post' } ); } I've verified using alert() calls that param is set to what I expect. I've read in many places that IE7 caches aggressively and that it might be the root cause, however every after adding the following to my php response, it still doesn't work. header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); To further try to fix a caching issue I've tried adding a bogus parameter which just gets filled with a random value to have different parameters for every call, but that didn't help. I've also found this, where UTF-8 seemed to be causing an issue with IE7, but my page is clearly marked: <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> Does anyone have any idea what could be wrong with IE7 as opposed to the other browsers I tested to cause this kind of issue?
Prototype's Ajax.Updater not actually updating on IE7
Evaluating an element by name (form.elementName) when non-existent returns undefined. Evaluating the property value of an object ($('elementId')) returns null. Undefined and null are treated differently.
I'm using mootools to toggle the display (and existence) of two DOM elements in one of my forms. Then, I am using javascript to validate the form to make sure that all of the required fields were filled in. The problem is that the the browser seems to be caching the elements. For example, I have html like this: <input name="inputbox" id="inputbox" type="text" /> <select name="selection" id="selection">...</select> And the javascript for validation is something like this: if (form.inputbox != null && form.inputbox.value == "") { //don't submit form { else if (form.selection != null && form.selection.value == 0) { //don't submit form } Now, this works fine when the page is first loaded and the input element has been removed. However, when I click the button that replaces the input element with the select element, from then on the form.inputbox and form.selection in the javascript code contain the respective element as it was in its last state in the DOM - even if it is no longer in the DOM. So is the javascript caching the DOM and not updating the elements when they are removed from the DOM? What is going on here, and, more importantly, how should I go about fixing it? Edit: I am using mootools to do the removing and replacing of the elements, the documentation for the respective functions can be found here and here.
Does javascript cache DOM elements?
3 Wow, this one is a bit of a doozy. I did some research and apparently lots of people have been experiencing this problem for years. It's possible that you are caching types that are defined in your web site. Such types do not have an assembly, so one is randomly generated for them at runtime. The next time you recycle your web server, your types are given a different randomly generated assembly, and your cache won't be able to deserialize because the old assembly no longer exists. Here are some possible fixes you can try: Define all types in a separate assembly rather than in your web site. On your local box, see if running the site in webdev rather than iis has the same behavior. If you're using out-of-proc (SQL Server) session/cache storage, try using in-proc (local in-memory) session/cache storage Delete all subdirectories under C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files (as Dimi mentioned, use Unlocker to break locks any applications might have) If you precompile your web site when deploying, make sure your web server is shut down before deploying. (Apparently if users are requesting pages on the old version of the site, it will screw things up when the new version is deployed.) Share Improve this answer Follow edited Feb 4, 2009 at 1:31 answered Feb 2, 2009 at 18:49 davogonesdavogones 7,3293232 silver badges3636 bronze badges Add a comment  | 
I have recently begun working with AJAX-Enabled WCF, and have been plagued with this .NET caching issue - Could not load file or assembly App__Web__hamznvwf I was having issues with this 4 to 5 times a day on my server (Win 2003) - see first post So I moved my files off of the server and started running the project locally (Win XP). Arghh! The issue came up again - locally. And it happened after a reboot! Do you think this is a network policy causing this issue on my local machine and server? Guess I am going to try to open C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files to Everyone. And see if this helps? Any other things I should try before I call MS Support? How do I delete my "AppNameFolder" in Temporary ASP.NET Files? I thought I could shutdown the built-in debugging web server and that would be it. Something is preventing me from deleting it.
ASP.NET: WCF and Could not load file or assembly 'App_Web_hamznvwf
For PHP 5, just make use of the destructor in the record class:
class Table {
    /* [...] */
    protected $record_cache = array();

    public function getRecord($id) {
        if (isset($this->record_cache[$id]))
            return $this->record_cache[$id];
        $r = $this->db->getRecord($id);
        if ($r instanceof Record) {
            $this->record_cache[$id] = $r;
        }
        return $r;
    }

    public function _unregister($id) {
        unset($this->record_cache[$id]);
    }
}

class Record {
    /* [...] */
    function __destruct() {
        $this->table->_unregister($this->id);
    }
}
If you don't want to have a public method for this in Table, you could probably use some callback trick or some other clever hack :)
I'd like to implement database caching functionality in PHP based on reference counts. For example, code to access the record in table foo with an ID of 1 might look like: $fooRecord = $fooTable->getRecord(1); The first time this is called, $fooTable fetches the appropriate record from the database, stores it in an internal cache, and returns it. Any subsequent calls to getRecord(1) will return another reference to the same object in memory. $fooRecord signals $fooTable when it destructs, and if there are no remaining references, it stores any changes back to the database and removes it from the cache. The problem is that PHP's memory management abstracts away the details about reference counts. I've searched PECL and Google for an extension to do so, but found no results. So question #1 is: does such an extension exist? In an alternative approach, $fooTable returns a super-sneaky fake object. It pretends to be the record by forwarding __call(), __set(), and __get(), and its constructor and destructor provide the appropriate hooks for reference counting purposes. Tests, works great, except that it breaks type-hinting. All my methods that were expecting a FooRecord object now get a Sneaky object, or maybe a FooSneaky if I feel like creating an empty subclass of Sneaky for every one of my tables, which I do not. Also, I'm afraid it will confuse maintenance programmers (such as myself). Question #2: Is there another approach I've missed?
Reference counting in PHP
I can give you some metrics for our environment. We run memcached for Win32 on 12 boxes (as a cache for a very database-heavy ASP.NET web site). These boxes each have their own other responsibilities; we just spread the memcached nodes across all machines with memory to spare. Each node has at most 512MB allocated by memcached. Our nodes have on average 500-1000 connections open. A typical node has 60,000 items in cache and handles 1,000 requests per second (!). All of this runs fairly stably and requires little maintenance. We have run into 2 kinds of limitations: 1. CPU use on the client machines. We use .NET serialization to store and retrieve objects in memcached. Works seamlessly, but CPU use can get very high with our loads. We found that some objects are better first converted to strings (or HTML fragments) and then cached. 2. We have had some problems with memcached boxes running out of TCP/IP connections. Spreading across more boxes helped. We run memcached 1.2.6 and use the .NET client from http://www.codeplex.com/EnyimMemcached/
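Basic usage of the Enyim client mentioned above goes roughly like this; this is a sketch from memory (the server list normally comes from the enyim.com/memcached config section), so verify it against the client version you ship:

using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

var client = new MemcachedClient(); // reads servers from app.config

// Store with a 10-minute lifetime; key and value are illustrative.
client.Store(StoreMode.Set, "page:60123", renderedHtml, TimeSpan.FromMinutes(10));

var fragment = client.Get<string>("page:60123");
if (fragment == null)
{
    // cache miss: rebuild from the database and store again
}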
Has anyone experienced memcached limitations in terms of: Number of objects in the cache store - is there a point where it loses performance? Amount of allocated memory - what are the basic numbers to work with?
memcached limitations
Generally the memory used by the cache wouldn't get swapped out; it would be marked as in-use. Edit: yes, virtual memory is memory blocks copied to disk because the RAM is full. In order to use it you have to copy it back into memory (which is slow). The cache keeps copies of recently used files in memory because this is quicker than going back to disk for them. There is a conflict here: the more RAM you use for cache, the more other memory you have to swap out to disk, assuming you have none left free. This isn't necessarily bad; there are lots of things running on your computer that only run occasionally, so pushing them out to disk to make space for cached copies of your photos might make sense.
Isn't the point of caching things in main memory to avoid the expensive disk i/o? If you're caching things in the swap space of a hard drive, how does that avoid disk i/o?
What's the point of a cache on the swap space?
Explicitly calling a MethodInfo is indeed slow - but you can make it much, much faster if you convert it into a delegate. See this blog post for example. That doesn't help in terms of finding methods etc of course, but if you're going to call the method repeatedly it's worth bearing in mind. The cache key sounds easy enough to build - types and strings compare nice and easily. Values are always relatively simple :) Once built, is the cache going to be read-only? Can you separate out the phases so that you can guarantee it won't be read before being fully built? If so, you should be able to get away without any explicit locking - basically a dictionary from your custom key type to your custom value type.
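The delegate conversion looks like this; note the open-instance form, where the instance becomes the delegate's first parameter:

using System;
using System.Reflection;

// Sketch: convert a MethodInfo to a typed delegate once, cache the delegate,
// and invoke it later at near-normal call speed.
MethodInfo mi = typeof(string).GetMethod("Trim", Type.EmptyTypes);

// Open instance delegate: the string instance is passed as the first argument.
var trim = (Func<string, string>)Delegate.CreateDelegate(
    typeof(Func<string, string>), mi);

string s = trim("  padded  "); // "padded"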
In .NET 3.5, I'm going to be working with System.Reflection to use AOP (probably in the context of Castle's Windsor Interceptors) to do things like define which security actions need to be performed at the method level, etc. I have heard that some parts of Reflection are slow (I've read the MSDN article around it), and would like to cache these parts (when I get closer to production code, at any rate). I would like to validate my approach: cache key is {type} + {case sensitive method name} + {list of parameter types} cache key objects can be compared via an Equals operation cache payload is a {MethodInfo} + {list of custom-attributes defined on the method} cache is injected to my interceptors via constructor injection cache can be maintained for a long time (based on the assumption that I'm not going to be writing self-modifying code ;-) ) Update: I'm not intending to call methods via Reflection something I'm writing myself; just (at the moment) look up attributes on the ones I want to inject functionality into, where the attributes define the behaviour to inject. My interceptors at the moment will be using Castle's Windsor IInterceptor mechanism until I notice a reason to change it.
What sorts of things should I do to make a performant and robust reflection cache?
You cannot set secondary caching settings at property level (as far as I know), but you can individually tune cache settings for each entity directly in their XML files. For instance: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="ClassName" table="Table"> <cache usage="nonstrict-read-write" /> <id name="Id" type="Int64" ... Where the cache "usage" property can be any of the following values: read-write: assures read committed isolation, makes sure data is consistent but doesn't reduce DB access as much as the other modes, nonstrict-read-write: objects with rare writes, slight chance of inconsistency between DB and cache, read-only: for data objects that never change, no chance of inconsistency.
How can I configure NHibernate to not cache a property? I know I can create a method that does it via HQL, but can I do it through a configuration setting in the <class>.xml file, or in the hibernate xml file itself, to not cache a property?
NHibernate to not cache a property
If you don't mind using Lombok then there is a neat trick to achieve that (annotation attributes must be compile-time constants, which is why an enum method call can't be used directly): @FieldNameConstants(onlyExplicitlyIncluded = true) public enum CacheName { @FieldNameConstants.Include CACHE_A, } and then in your annotation you have access to a static field through the generated Fields subclass: @Cacheable(CacheName.Fields.CACHE_A) public MyObject getObject(){ //return something } Tested on Lombok version 1.18.28.
I would like to use an enum in @Cacheable as its cache name, such as @Cacheable(CacheName.CACHE_A.getName()). I have a sample enum like public enum CacheName { CACHE_A("CACHE_A"); private final String name; CacheName(String name){ this.name=name; } public String getName(){ return name; } } I tried to use it like a constant String cache name in my service method: @Cacheable(CacheName.CACHE_A.getName()) public MyObject getObject(){ //return something } This is not working. It works when I declare a constants class with public static String CACHE_A = "CACHE_A"; Is there any workaround if I prefer an enum over a constants class? I do not see any difference, as enums are supposed to be fixed, right? Please correct me, thanks.
Can I use enum in @Cacheable
This is common on ARM platforms that support a cache. The cache line is being used as a temporary store for the exclusive lock. The term in ARM is exclusive reserve granule or the size of locked memory. On systems with a cache, you will find the granule is a cache line size. So internally, the ldrex and strex are implemented as part of the cache resolution policy. You can compare it to cortex-m systems, where the entire memory space is a reserve granule. The ldrex/strex pair are useless for synchronizing with external devices that are not part of an AXI structure. If you want to disable cache to work with an FPGA interface, I don't believe this can work. You would need to implement the cache protocol in the FPGA. For Cortex-M systems, there is no cache structure and custom logic implements a 'global monitor'. The cache mechanism actually seems useful as the cache line could be used as a transactional memory. Ie, either the whole line commits on not. It seems possible to create lock-free algorithm for structures with multiple pointers. The node do not lock an entire list but only an entry. However, I haven't seen it used like this ever. Mainly I think because the ARM documentation recommends not to do this (do not rely on the ERG size).
We are using a zynq-7000 based CPU, so an cortex-a9 and we encountered the following issue while using atomic_flags which are inside an library we are using (open-amp). We are using the second CPU on the SoC to execute bare-metal code. When disabling the dcache, atomic ints are no longer able to be set, here's a simple code which triggers the issue for us: #define XREG_CONTROL_DCACHE_BIT (0X00000001U<<2U) #define XREG_CP15_SYS_CONTROL "p15, 0, %0, c1, c0, 0" #define mfcp(rn) ({uint32_t rval = 0U; \ __asm__ __volatile__(\ "mrc " rn "\n"\ : "=r" (rval)\ );\ rval;\ }) #define mtcp(rn, v) __asm__ __volatile__(\ "mcr " rn "\n"\ : : "r" (v)\ ); static void DCacheDisable(void) { uint32_t CtrlReg; /* clean and invalidate the Data cache */ CtrlReg = mfcp(XREG_CP15_SYS_CONTROL); CtrlReg &= ~(XREG_CONTROL_DCACHE_BIT); /* disable the Data cache */ mtcp(XREG_CP15_SYS_CONTROL, CtrlReg); } int main(void) { DCacheDisable(); atomic_int flag = 0; printf("Before\n"); atomic_flag_test_and_set(&flag); printf("After\n"); } The CPU executes the following loop for atomic_flag_test_and_set: dmb ish ldrexb r1, [r3] ; bne jumps here strexb r0, r2, [r3] cmp r0, #0 bne -20 ; addr=0x1f011614: main + 0x00000060 dmb ish but the register r0 always stays 1. When omitting the function call to DCacheDisable, the code works flawlessly. I really can't find any any information about disabled dcache and atomic flags. Does anybody has a clue? Toolchain: We are using vitis 2022.2 which comes with a arm-xilinx-eabi-gcc.exe (GCC) 11.2.0. Compiler options are -O2 -std=c11 -mcpu=cortex-a9 -mfpu=vfpv3 -mfloat-abi=hard
Disabled DCache will prevent atomic_flag from being set
This is explained in documentation of @CacheEvict, more specifically in the beforeInvocation argument. Setting this attribute to true, causes the eviction to occur irrespective of the method outcome (i.e., whether it threw an exception or not). Defaults to false, meaning that the cache eviction operation will occur after the advised method is invoked successfully (i.e. only if the invocation did not throw an exception). So by default it will only evict when no exception occurs.
What is the behavior of CacheEvict when the exception is thrown in the following method? Will it clear the entries anyway? @CacheEvict(allEntries = true) public void deleteById(int id) { if (!this.repository.existsById(id)) { throw new Exception("Resource not found."); } this.repository.deleteById(id); } In another words, I want to CacheEvict only if no exception occurs.
Cache evict behavior when method throws exception
3 Here is my steps to make Redis work in Strapi 4: install packages: strapi-plugin-redis, strapi-plugin-rest-cache, strapi-provider-rest-cache-redis Create Redis host and add it to env, or directly to a config. You can use redislabs for tests Update config with new props (see below) And you can build a project if it will not work (don't remember for sure did I build it for this plugin.) Config: { "redis": { "enabled": true, "config": { "connections": { "default": { "connection": { "host": "REDIS_HOST", "port": "REDIS_PORT", "password": "REDIS_PASS" }, "settings": { "debug": true } } } } }, "rest-cache": { "enabled": true, "config": { "provider": { "name": "redis", "getTimeout": 5000 }, "strategy": { "contentTypes": [ { "contentType": "api::homework.homework", "hitpass": false } ], "debug": true } } } } Share Improve this answer Follow edited Aug 4, 2023 at 14:17 answered Nov 28, 2022 at 11:09 Ruslan KorkinRuslan Korkin 4,58311 gold badge2929 silver badges2626 bronze badges Add a comment  | 
Set up strapi-plugin-rest-cache with redis. Connection to the redis passes, debug shows that everything works. Entities appear in the redis. But requests are executed with the same time, both with cache and without cache. What else can be seen? "rest-cache": { config: { provider: { name: "redis", options: { max: 32767, connection: "default", }, }, strategy: { enableEtag: true, debug: true, maxAge: 3600000, hitpass: false, keys: { useQueryParams: true, }, contentTypes: [ "api::homework.homework", "api::homework-task.homework-task", "api::homework-part.homework-part", "api::task.task", ], }, }, }, I tried to change various parameters in the config, but it did not lead to anything. After starting the server, the first request is executed in ~1sec. Further, if you send the second one right away, then it runs in 500ms. If you wait about 5 seconds and send the request again, then again it will be ~ 1sec. It does not depend on the cache, it works with and without the cache.
How to enable rest cache in strapi?
I was wondering if it is possible to save it locally and compare it against the data in the collection to prevent it from reading all of the documents in the collection on every refresh? While saving the data locally, is indeed a good solution, remember that also Firestore has its own caching mechanism: For the web, offline persistence is disabled by default. To enable persistence, call the enablePersistence method. Besides that, you can also specify the source of your readings. There are three options, CACHE, DEFAULT, and SERVER. However, if try to read the data only from the cache, you'll lose the updates that are coming from the server. If you want to limit the number of reads, I recommend you read the following article: How to drastically reduce the number of reads when no documents are changed in Firestore?
I currently have a really small web app in which users can make song requests. In Firestore I have a collection called songRequests and every request is a document. For the admin, I created a dashboard in which he can see the incoming song requests. To try it out I used a onSnapshot listener which correctly shows all requests once the component gets mounted, also when I add a new song it gets added properly and it only reads the newly added document. The collection currently has around 10 documents and after a refresh, I get 10 full reads, I was wondering if it is possible to save it locally and compare it against the data in the collection to prevent it from reading all of the documents in the collection on every refresh? const [songs, setSongs] = useState([]); useEffect(() => { // getDocuments(); const unsub = onSnapshot(collection(db, "songrequests"), (snapshot) => { snapshot.docChanges().forEach((change) => { if (change.type === "added") { const newItem = change.doc.data(); console.log(newItem); setSongs((prevState) => [...prevState, newItem]); } }); }); return unsub; }, []);
Reduce firestore reads by saving data locally and comparing it against the data in Firestore?
RFC 7234 defines the cache key: The primary cache key consists of the request method and target URI.... If a request target is subject to content negotiation, its cache entry might consist of multiple stored responses, each differentiated by a secondary key for the values of the original request's selecting header fields. The secondary key uses the headers listed in the Vary response header. See section 4.1 for more detail. When a cache receives a request that can be satisfied by a stored response that has a Vary header field, it MUST NOT use that response unless all of the selecting header fields nominated by the Vary header field match in both the original request (i.e., that associated with the stored response), and the presented request. To answer your specific questions: But the question here is, what is the same request? Does it mean the same resource URI in the case of a GET request? Yes. What about the values ​​of headers included in the request or the content of the body? The content of the body doesn't matter, and the headers only matter if they're listed in the Vary header. I've seen that the Cache-Control header is also used for POST requests. If so, is the standard for the same Post request determined by comparing the contents of the body? Since the primary cache key consists of the URI and the method, two POST requests will have the same key if they have the same URI. However, as the standard notes, "since HTTP caches in common use today are typically limited to caching responses to GET, many caches simply decline other methods and use only the URI as the primary cache key."
I understand that the browser save the response, to return quickly at next same request. But the question here is, what is same request? Does it mean that same resource uri in the case of Get request? What about the values ​​of headers included in the request or the content of the body? I've seen that the Cache-Control header is also used for Post requests. If so, is the standard for the same Post request determined by comparing the contents of the body?
How browser cache determine that request is same or not?
Based on the current version (6.7.4) of code, it's not possible to disable the cache any other way than setting the env variable ETHEREAL_CACHE to something different than ['true', 'yes', 'y', '1']. Aka process.env.ETHEREAL_CACHE needs to be false Keep in mind that this is OS level env variable. Not the ones setup in Cypress. And the best thing is the great documentation which mentions ETHEREAL_CACHE variable ...
I created an E2E to test for signups, using Nodemailer with Ethereal. When the test runs the first time everything ends smoothly, but when I executed it a second time the test, for some reason, breaks. While investigating the above issue, I noticed that the createTestAccount returns the same email address (unless cypress is restarted). Here's the function code for createTestAccount: https://github.com/nodemailer/nodemailer/blob/master/lib/nodemailer.js#L58. Is createTestAccount using an internal cache? If yes, is there a way to disable it (besides setting and process.env.ETHEREAL_CACHE to false)?
Nodemailer. createTestAccount and cypress: generate same email address
3 The statement var imageCache = NSCache<UIImage, String>() creates a cache where the key type is UIImage and the objects stored in the cache are Strings. You want it to be the other way round with one caveat: NSCache doesn't support String for the key type, it has to be NSString: var imageCache = NSCache<NSString, UIImage>() That way you'll be able to put UIImages into the cache while identifying them by URL strings. Check out my demo project on GitHub: https://github.com/vadimbelyaev/ImagesCache Share Improve this answer Follow answered Sep 25, 2021 at 21:36 Vadim BelyaevVadim Belyaev 2,65122 gold badges1919 silver badges2626 bronze badges 2 I managed to find a nice and easy solution to caching images - kingfisher install on cocopods – jo1 Dec 27, 2021 at 17:14 Yes, there are many libraries that would do image loading and caching: Kingfisher, SDWebImage, Nuke to name a few. But sometimes you just need a simple solution that you can implement in one short class that uses NSCache instead of adding a fairly complex 3rd party dependency to the project. – Vadim Belyaev Jan 6, 2022 at 15:16 Add a comment  | 
This question already has answers here: Download and cache images in UITableViewCell (2 answers) Closed 2 years ago. I am having trouble finding a good example of how I can cache images in a table view using URLSession to get image from the internet (API). However, I am getting issues with the cache - Here is my code so far: func configure(with urlString: String, name: String, address: String, station: String) { var imageCache = NSCache<UIImage, String>() loader.startAnimating() loader.isHidden = false guard let url = URL(string: urlString) else { return } let task = URLSession.shared.dataTask(with: url) { [weak self] data, _, error in guard let data = data, error == nil else { return } let locationImage = UIImage(data: data) imageCache.setObject(locationImage!, forKey: urlString) imageCache[url] = locationImage DispatchQueue.main.async { self?.nameLabel.text = name self?.addressLabel.text = address self?.locationImageView.image = locationImage self?.loader.stopAnimating() self?.loader.isHidden = true } } task.resume() } errors include: Cannot convert value of type 'String' to expected argument type 'UIImage' Value of type 'NSCache<UIImage, AnyObject>' has no subscripts 'NSCache' requires that 'String' be a class type
how can I cache an image in swift 5? [duplicate]
forEach in Dart is async, so you can't predict execution order of iterations, especially if you have async routines inside the block. If you need to guarantee order with internal async calls use a regular for loop. for(int i=0;i<urls.length;i++) { var file = await DefaultCacheManager().getSingleFile(urls[i]); videos.add(file); }); This will guarantee videos[] is in the same order as urls[], and that videos[] is populated with File objects before being returned.
I have a requirement in a flutter app to play multiple videos in sequence. Basically, one video is played and when I click on a button, the next video should play and so on. The videos are very short (1-3 seconds) and very small in size. They are stored on Firebase Storage. The issue is that there can't be any delay in playing the videos. So I am looking for a way to download and cache multiple videos before navigating to the screen that contains the videos. I have tried using the flutter cache manager like this: Future<List<File>> fetchFile(urls) async { urls.forEach((url) async { var file = await DefaultCacheManager().getSingleFile(url); videos.add(file); }); return videos; } Here I try to fetch all the videos from a list of urls and then later I use a Future builder and navigate to the next page with the list of videos. Navigator.of(context).push(MaterialPageRoute(builder: (context) => LessonScreen(videos))); But I've faced a lot of problems with this approach, for example videos not playing in order, or not loading at all, or even crashing the app. Does anyone know how to achieve this with flutter cache manager or any other approach? Thanks for the help in advance!
Downloading and Caching multiple videos before moving to a new screen in Flutter
It is pretty literal: when the cache entry is removed from the cache. For every endpoint-argument-combination, you have one cache entry. So, when the first component does a useChatRoomQuery("flower"), a cache entry for that is added (and a onCacheEntryAdded function is run for the "chatroom" entry with the argument "flower"). If another component also uses useChatRoomQuery("flower"), the same cache entry is used (no call to onCacheEntryAdded). And if a component calls useChatRoomQuery("afterhour"), that will create a new cache entry (and start another onCacheEntryAdded). Once the last component using a cache entry stops using it (by unmounting or changing to another argument), a timer is started (usually, 60 seconds - you can configure this on api and endpoint level via keepUnusedDataFor). After that time, the cache entry is removed, and that promise resolves. So, generally it is probably a good idea to also unsubscribe from your socket connection - and when this was the last topic you were listening for, also to disconnect from it. After all, nobody was interested in that data for some time and you can always reconnect and add a new listener later.
The docs say this: cacheEntryRemoved - A Promise that allows you to wait for the point in time when the cache entry has been removed from the cache, by not being used/subscribed to any more in the application for too long or by dispatching api.utils.resetApiState. I've visually scanned the docs and I do not yet understand this part as well as I'd like to. Does the above quote mean that the promise is fulfilled when the component that uses the query unmounts? What about when the component is receiving streaming updates? Is it good practice to partially unsubscribe from an existing WebSockets connection immediately after the cacheEntryRemoved promise is fulfilled in onCacheEntryAdded, when a part of the messages being received from the WS server are not needed anymore? Or is it better to unsubscribe in a useEffect hook's cleanup function from that channel from the WS connection? I have a single Socket.IO connection that is used to receive more channels of messages in parallel, some component needs a channel, some other component needs another, some other components need the same channel as an existing mounted component. Is it OK to unsubscribe from a channel after this promise is fulfilled? I have put more information about this in this other question.
When is the cacheEntryRemoved promise fulfilled?
Just fyi, CaffeineCacheManager and CaffeineCache are Spring wrappers around the real Caffeine cache. org.springframework.cache.caffeine.CaffeineCache implements org.springframework.cache.Cache (emphasis on packages of both) As to your question, CaffeineCacheManager returned from your @Bean has NO caches actually. So when you call cacheManager.getCache("test_cache"), you get a cache created by Spring on the fly, called a dynamic cache. And this cache's expireAfterAccess and CaffeineCache0 are not set. Hence, the CaffeineCache1 you put in it is never evicted. To get expected behavior, you need to add CaffeineCache2 to the cache manager. Check my answer.
My configuration: @Bean public CaffeineCacheManager cacheManager() { return new CaffeineCacheManager(); } @Bean public CaffeineCache testCache() { return new CaffeineCache("test_cache", Caffeine.newBuilder() .maximumSize(10000) .expireAfterAccess(30, TimeUnit.SECONDS) .expireAfterWrite(30, TimeUnit.SECONDS) .recordStats() .build()); } test code:(read cache 3 times in a row with 45 seconds pause between reads) static int value = 1; ... Cache testCache = cacheManager.getCache("test_cache"); System.out.println("read " + testCache.get("myKey", () -> value++)); try { Thread.sleep(45000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("read " + testCache.get("myKey", () -> value++)); try { Thread.sleep(45000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("read " + testCache.get("myKey", () -> value++)); Actual result: read 1 read 1 read 1 Expected result: Cache is evicted after 30 seconds: read 1 read 2 read 3 What do I wrong ? How to fix ?
caffeine Cache is not evicted after expireAfterAccess/expireAfterWrite
Check your free resources when the project is running on the server and allocate the right amount of free resources to maximum size, do not worry about filling the RAM because there is a recycling strategy that manages when the space is full. The following article describes the types of strategies: Recycling strategy
We have developed an application using Springboot and used Springboot cache as well as Caffeine cache to reduce DB calls and improve performance, But we are thinking cache size may occur problem so kindly help us to know how much cache will work fine? Note : Server RAM size is 2 GB. Using Kubernetes & Docker Its a small Rest endpoints application, but has some complex task
How much cache can be stored using springboot and Caffeine cache?
If i read a file in pyspark: Data = spark.read(file.csv) Then for the life of the spark session, the ‘data’ is available in memory,correct? No. Nothing happens here due to Spark lazy evaluation, which happens upon the first call to show() in your case. So if i call data.show() 5 times, it will not read from disk 5 times. Is it correct? No. The dataframe will be re-evaluated for each call to show. Caching the dataframe will prevent that re-evaluation, forcing the data to be read from cache instead.
If i read a file in pyspark: Data = spark.read(file.csv) Then for the life of the spark session, the ‘data’ is available in memory,correct? So if i call data.show() 5 times, it will not read from disk 5 times. Is it correct? If yes, why do i need: Data.cache()
Pyspark caches dataframe by default or not?
3 S.Ray , You can use the below Rest API doc to purge content from Azure CDN: https://learn.microsoft.com/en-us/rest/api/cdn/cdn/endpoints/purgecontent In the request body, under contentPaths parameter, you can provide the path to the content to be purged. It can describe a file path or a wild card directory. Thanks! Share Improve this answer Follow edited May 19, 2021 at 15:10 answered May 19, 2021 at 14:30 Gitarani SharmaGitarani Sharma 77533 silver badges44 bronze badges Add a comment  | 
After integration of azure CDN in our site, we found that the cache is not purged while replacing the file. This is happening only for PDF file type. We want to purge the cache on replace of the file. Please help us if anyone has faced the similar issue and purged the CDN cache programmatically. Thanks
How to purge CDN cache programmatically for an item?
Here's the main point. In the end a BLOB is just a string of bytes. The major difference with a formal string type is that a BLOB doesn't have any sort of encoding or collation associated with it. It's a kind of binary string which is another way of saying "generic data". file_get_contents also generally returns this sort of "generic data". Laravel's cache is build to be generic like that so it doesn't have any specific data type associated with it. It's just a store of key/value pairs and the keys must be ascii strings while the values can be anything Laravel serialises things before they go in the cache so basically anything that can be represented as a variable in PHP and is serializable can be cached.
Hi everyone, Is there a way to cache BLOB types temporarily in Laravel ? Scenario: I'm gonna cache some data MEDIUMBLOB with the size of 2048KB temporarily. These data are actually parts of a large single file 16MB. After caching all parts, they will be combined together into a single file, then will be removed from cache. The content of each single part is given by file_get_contents function. I'm already doing this with MySQL. (However, there are lots of queries and takes time to be done.) Is there a better way to store MEDIUMBLOB data temporarily in Cache storage ? I've faced with Redis and Cache in Laravel, but I'm not sure they support MEDIUMBLOB.
How to cache BLOB type in Laravel
Per the docs there are three methods: Using the UI: "More options" -> "Caches" on the repo's page Using the CLI: travis cache --delete Using the API: DELETE /repos/{repository.id}/caches That said, Docker images are one of the examples explicitly called out as a thing not to cache: Large files that are quick to install but slow to download do not benefit from caching, as they take as long to download from the cache as from the original source In your example it's not clear what's involved in the pipeline beyond that Dockerfile - even if the file itself hasn't changed, any of the things that go into it (base image, source code, etc.) might have. Caching the image means you may get false positives, builds that pass even though docker build would have failed.
I cached a docker image on travis-ci. The docker image is created from a dockerfile. Now my dockerfile changed, and I need to remove caches and rebuild the docker image. How can I remove the caches on travis-ci? My current .travis.yml looks like this: language: C services: - docker cache: directories: - docker_cache before_script: - | echo Now loading... filename=docker_cache/saved_images.tar if [[ -f "$filename" ]]; then echo "Got one from cache." docker load < "$filename" else echo "Got one from scratch"; docker build -t $IMAGE . docker save -o "$filename" $IMAGE fi script: - docker run -it ${IMAGE} /bin/bash -c "pwd" env: - IMAGE=test04
How to remove caches on Travis CI?
You can use CachedNetworkImage package to avoid downloading the image every time. It's simple to use and you just need to pass the URL to the Widget: CachedNetworkImage( imageUrl: "http://via.placeholder.com/350x150", placeholder: (context, url) => CircularProgressIndicator(), errorWidget: (context, url, error) => Icon(Icons.error), ), To control how long the image is cached*, make sure you add cache header to your images when you upload them so they get cached properly (in the browser too if you're using flutter web): final contentType = 'image/*'; final cacheControl = 'public, max-age=31556926'; // seconds -- ie 31556926 == one year final uploadTask = reference.putData( image.data, SettableMetadata( cacheControl: cacheControl, contentType: contentType, )); So make sure to store the URL of the images when you upload them and just pass the URL to the users to get the images instead of downloading the images directly from FirebaseStorage in case you're doing that. *I believe the package default is 7 days if no cache header is available but I cannot confirm.
I'm enrolled in a project using Flutter and Firebase and I'm having trouble with bandwidth limits. The free quota is 1gb per day and I have a list with 100 images (and some files). Is there a way to minimize the bandwidth costs through caching this files in local phone cache to not have to get the items from DB each time I open the screen? Is there a package or something like this to do it?
How to store Firebase Storage Items to local cache to save bandwidth costs?
You can find the location of the local app data on any OS by doing console.log(nw.App.dataPath);. This folder is created automatically by Chromium and exists separately from your app in a user account specific location. When you delete the app, this folder remains because it may contain settings, in case you reinstall it or replace it with a newer version of the app.
In my NW.js app, I store some data in the Local Storage. Now I want to delete the app and surely the Local Storage data as well. I move the app to the trash and clean it up. Also, I delete cache information related to the app located in /Users/<username>/Library/Caches/<yourmagicapp>. But unfortunately, all this doesn't help. After building a new copy of the app and shipping it to the macOS Application folder the Local Storage data restores from somewhere like it never has been deleted at all. All the data still here. The question is how to delete all the stored data and where does it sit? Or at least from where to read about it because I couldn't find information in the nw.js official docs. Thank you.
NW.js Local Storage Cache Still Persists Even After App Uninstallation
No, size param is useful if you want to fetch results different than 10, as default size is 10, so if you are using a search query for which you need to fetch lets suppose 1000 results, than you specify size param to 1000, without this you will get only top 10 search results, sorted on their score in descending order. size=0, in shard request cache, is that it will not cache the exact results(ie number of documents with their score) but only cache the metadata like total number of results(which is hits.total) and other things.
By default, the requests cache will only cache the results of search requests where size=0, so it will not cache hits, but it will cache hits.total, aggregations, and suggestions. I do not understand the part where stated: "size=0". What is the the size context/meaning here? Does it mean that results cache will cache only for empty results? cache page 1 only (default 10 results I think)?
What does size=0 mean in Elasticsearch shard request cache?
The values 1-3 are just one time trigger for action. There is no value 0. sysctl is in package procps . apt install procps
Recently I tried to empty the buffers cache on a Debian webserver. The command I used was: free && sync && echo 3 > /proc/sys/vm/drop_caches && free So far so good, running cat /proc/sys/vm/drop_caches prints a value of 3. When I try to reset the value to 0 with sync && echo 0 > /proc/sys/vm/drop_caches, this error shows: bash: echo: write error: Invalid argument Which is due to drop_caches being a command, not a variable to be set. I found this: sudo sysctl -w vm.drop_caches=3 It immediately clears the pagecache, dentries and inodes and clears again only if you call sysctl -w again, there is no need to apparently set it back to 0 in an explicit manner. The command sysctl is not natively supported on Debian so I'm looking for an alternative command, more precisely: How to reset drop_caches or immediately empty the cache to reset the value to 0?
Reset /proc/sys/vm/drop_caches to default value 0