If the result of the first execution hasn't been cached yet, the second invocation will proceed. @Cacheable is centered on the contents of the cache, not on a thread's execution context (though the cache itself still needs to be thread-safe). When the method is executed, the cache is first checked for the key: if t1 is still running, its result has not been cached yet, so concurrent executions proceed without regard for t1's progress.
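For completeness: this default can be changed in newer Spring versions. Since Spring 4.3, @Cacheable has a sync attribute that makes concurrent callers for the same key wait while one of them computes the value. Below is a minimal sketch of both behaviours; ReportService and compute() are hypothetical names, and whether sync is honoured also depends on the cache provider.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // Default behaviour: while t1 is still computing, t2 misses the cache
    // and runs the method as well.
    @Cacheable("reports")
    public String slowReport(String id) {
        return compute(id);
    }

    // Spring 4.3+: concurrent callers with the same key block until the
    // first computation finishes, then read its cached result.
    @Cacheable(value = "reports", sync = true)
    public String slowReportSync(String id) {
        return compute(id);
    }

    private String compute(String id) {
        // stand-in for the 10-minute operation from the question
        return "report-" + id;
    }
}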
Suppose a method marked @Cacheable takes 10 minutes to complete and two threads, t1 and t2, access it. t1 calls at time 0 (the cacheable method now runs for the first time) and t2 calls 5 minutes later. Does this mean that t2 will not get the data for approximately 5 minutes, since t1 has already started the @Cacheable operation and it is due to complete in 5 minutes (having been running for 5 minutes)? Or will t2 trigger a new invocation of the method?
Does Spring @Cacheable block if accessed by more than one thread?
You can manually expire the cache using the .delete method, e.g. from your observer's after_create callback: Rails.cache.delete("recent_news")
I'm using the following low-level caching for the five most recent news articles in my Rails application: @recent_news = Rails.cache.fetch("recent_news", :expires_in => 1.hour) do News.order("created_at desc").limit(5) end Is there a way to keep this query cached until a new news article is created? I was thinking of manually expiring the cache using an observer, but wasn't sure if there was a way to actually do this, e.g.: class NewsObserver < ActiveRecord::Observer def after_create #expire recent_news cache end end
Manually expire low level cache
You can use the unwrap method to get at the vendor implementation when you want to use vendor-specific extensions, e.g.: org.hibernate.Query hquery = query.unwrap(org.hibernate.Query.class); Then you can work with the vendor-specific interface. Alternatively, you could just unwrap your EntityManager to a Session before ever creating the query. If you don't want any Hibernate imports in your code, you could also do query.setHint("org.hibernate.cacheable", Boolean.TRUE); It's really up to you which way you'd rather introduce the vendor dependence. I would favor the first, as it will fail with an exception if Hibernate is removed from your dependencies, sending up a big red "Hey, you, developer changing this: there was a vendor dependence here." The hint, by contrast, simply does nothing if it's not understood by the provider. Others would rather tolerate vendor-dependent magic strings in code over a compile-time vendor dependence.
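To put the two options side by side, here is a minimal sketch; it assumes the query cache is enabled in your persistence configuration (hibernate.cache.use_query_cache=true), and the Employee entity and JPQL string are purely illustrative.

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.Query;

public class CacheableQueries {

    public List<?> findCached(EntityManager em) {
        Query query = em.createQuery("select e from Employee e");

        // Option 1: unwrap to the Hibernate API. Fails fast with an
        // exception if Hibernate is ever removed from the classpath.
        org.hibernate.Query hquery = query.unwrap(org.hibernate.Query.class);
        hquery.setCacheable(true);
        return hquery.list();

        // Option 2: vendor hint, silently ignored by other JPA providers.
        // query.setHint("org.hibernate.cacheable", Boolean.TRUE);
        // return query.getResultList();
    }
}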
I am using Hibernate 3.5.1 and EntityManager for data persistence (with JPA 2.0 and EHCache 1.5). I can obtain a query with the following code: EntityManager em; ... Query query = em.createQuery(...); ... Now, the problem is that EntityManager's createQuery() method returns javax.persistence.Query which, unlike org.hibernate.Query (returned by the SessionFactory's createQuery() method), does not have the org.hibernate.Query.setCacheable() method. How am I, then, supposed to cache the queries with EntityManager (or some other part of Hibernate)?
Hibernate EntityManager: How to cache queries?
The underlying issue is that those are opaque responses, and by default they won't be used with a cacheFirst strategy. There's some background at https://workboxjs.org/how_tos/cdn-caching.html There's logging in Workbox to help debug this sort of thing, but as it's noisy, it's not enabled by default in the production build. Either switching your importScripts() to use the development build (e.g. importScripts('https://unpkg.com/workbox-sw@2.0.3/build/importScripts/workbox-sw.dev.v2.0.3.js')), or going into DevTools and explicitly setting workbox.LOG_LEVEL = 'debug', would give you a log message explaining why each opaque response wasn't cached. You have a few options for getting things working as you expect: Change to workboxSW.strategies.staleWhileRevalidate(), which supports opaque responses by default. Tell the cacheFirst strategy that you're okay with it using opaque responses: workboxSW.strategies.cacheFirst({cacheableResponse: {statuses: [0, 200]}}) Because your third-party CDNs all seem to support CORS, you could opt in to CORS mode for your CSS and image requests via the crossorigin attribute, and the responses will no longer be opaque: <img src='https://cors.example.com/path/to/image.jpg' crossorigin='anonymous'> or <link rel='stylesheet' href='https://cors.example.com/path/to/styles.css' crossorigin='anonymous'>
I am using Workbox runtime caching to cache external calls (materialize.css is one of those). In my network tab it shows that the request is coming from the serviceWorker (looks fine): But in cache storage my runtime cache looks empty: You can see my service worker on Chrome's Application tab, and this is the website: https://quack.surge.sh/ Service worker code: const workboxSW = new self.WorkboxSW(); workboxSW.precache(fileManifest); workboxSW.router.registerNavigationRoute("/index.html"); workboxSW.router.registerRoute(/^https:\/\/res.cloudinary.com\/dc3dnmmpx\/image\/upload\/.*/, workboxSW.strategies.cacheFirst({}), 'GET'); workboxSW.router.registerRoute('https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.min.css', workboxSW.strategies.cacheFirst({}), 'GET'); workboxSW.router.registerRoute('https://res.cloudinary.com/dc3dnmmpx/image/upload/(.*)', workboxSW.strategies.cacheFirst({}), 'GET'); Is this the expected behaviour? I'm pretty new to service workers and I am not sure what the correct result should be.
Using Workbox runtime caching, requests are not showing in Cache Storage in Chrome
I had a similar problem in my company. :) The problem was on the server side: the server response contained Pragma: no-cache When I removed that header, everything started working. Before removing it I kept getting exceptions like: 504 Unsatisfiable Request (only-if-cached) Here is how the implementation looks on my side: OkHttpClient okHttpClient = new OkHttpClient(); File httpCacheDirectory = new File(appContext.getCacheDir(), "responses"); Cache cache = new Cache(httpCacheDirectory, maxSizeInBytes); okHttpClient.setCache(cache); OkClient okClient = new OkClient(okHttpClient); RestAdapter.Builder builder = new RestAdapter.Builder(); builder.setEndpoint(endpoint); builder.setClient(okClient); If you have trouble testing which side the problem is on (server or app), you can use an interceptor to rewrite the headers received from the server: private static final Interceptor REWRITE_CACHE_CONTROL_INTERCEPTOR = new Interceptor() { @Override public Response intercept(Chain chain) throws IOException { Response originalResponse = chain.proceed(chain.request()); return originalResponse.newBuilder() .removeHeader("Pragma") .header("Cache-Control", String.format("max-age=%d", 60)) .build(); } }; and simply add it: okHttpClient.networkInterceptors().add(REWRITE_CACHE_CONTROL_INTERCEPTOR); As you can see, that let me strip the Pragma: no-cache header for testing. I also suggest you read about the Cache-Control header directives max-age and max-stale. Other useful links: List of HTTP header fields Cache control Another sample code
I want Retrofit with OkHttp to use cached data when there is no Internet connection. I prepare the OkHttpClient like this: RestAdapter.Builder builder= new RestAdapter.Builder() .setRequestInterceptor(new RequestInterceptor() { @Override public void intercept(RequestFacade request) { request.addHeader("Accept", "application/json;versions=1"); if (MyApplicationUtils.isNetworkAvaliable(context)) { int maxAge = 60; // read from cache for 1 minute request.addHeader("Cache-Control", "public, max-age=" + maxAge); } else { int maxStale = 60 * 60 * 24 * 28; // tolerate 4-weeks stale request.addHeader("Cache-Control", "public, only-if-cached, max-stale=" + maxStale); } } }); and I set up the cache like this: Cache cache = null; try { cache = new Cache(httpCacheDirectory, 10 * 1024 * 1024); } catch (IOException e) { Log.e("OKHttp", "Could not create http cache", e); } OkHttpClient okHttpClient = new OkHttpClient(); if (cache != null) { okHttpClient.setCache(cache); } I checked on a rooted device that files with the response headers and gzipped bodies are being saved in the cache directory. But I don't get the correct answer from the Retrofit cache when offline, even though my correct response is encoded in the gzip file. So how can I make Retrofit read the gzip file, and how does it know which file it should be (since I have a few files there with other responses)?
How can Retrofit with OkHttp use cached data when offline?
EDIT: If you're using the System.Runtime.Caching.MemoryCache there is a callback on the CacheItemPolicy object for deletion, as well as one for update. myMemoryCache.Set("key", "value", new CacheItemPolicy() {RemovedCallback = new CacheEntryRemovedCallback(CacheRemovedCallback) /* your other parameters here */}); public void CacheRemovedCallback(CacheEntryRemovedArguments arguments) { // do what's needed } Initial answer: When inserting data into the .NET cache via the System.Web.Caching namespace, you have the option to set a callback to be notified of removal: Cache.Insert("data", "", null, DateTime.Now.AddMinutes(1), System.Web.Caching.Cache.NoSlidingExpiration, CacheItemPriority.High, new CacheItemRemovedCallback(CacheRemovedCallback)); public void CacheRemovedCallback(String key, object value, System.Web.Caching.CacheItemRemovedReason removedReason) { // here you can log, renew the value, etc... } There is also a signature for the Insert method that lets you specify a callback to be notified before the item is removed.
I'm using a .NET MemoryCache with .NET 4.0 and C#, and I want my application to be notified when an item is removed (so I can log that it has been removed, or notify the UI). Is there any way to do this? I'm using System.Runtime.Caching.MemoryCache, not System.Web.Caching.
.net MemoryCache - notify on item removed
That loads all.css with a different query string so that if version 6637, for instance, is already cached on your machine, you'll get the new one (6638). Changing that number (in this case) will not give you a different file. This is just a cache trick so they can send the file down with no expiration (i.e. you never have to ask for it again), because when it does change, the "file name" changes. That said, you could make it so you load a different version based on the query string parameter. Doing so would be slightly non-trivial, and akin to how you get different questions when you pass a different question ID to the URL of this page.
I've noticed that on some websites (including SO) the link to the CSS will look like: <link rel="stylesheet" href="http://sstatic.net/so/all.css?v=6638"> I would say it's safe to assume that ?v=6638 tells the browser to load version 6638 of the CSS file. But can I do this on my websites, and can I serve different versions of my CSS file just by changing the numbers?
What does the question mark at the end of a CSS include URL do?
RewriteRule ^pages/([^/\.]+) cache/pages/$1.html [NC,QSA] # At this point, we would have already re-written pages/4 to cache/pages/4.html RewriteCond %{REQUEST_FILENAME} !-f # If the above RewriteCond succeeded, we don't have a cache, so rewrite to # the pages.php URI, otherwise we fall off the end and go with the # cache/pages/4.html RewriteRule ^cache/pages/([^/\.]+).html pages.php?p=$1 [NC,QSA,L] Turning off MultiViews is crucial (if you have them enabled) as well. Options -MultiViews Otherwise the initial request (/pages/...) will get automatically converted to /pages.php before mod_rewrite kicks in. You can also just rename pages.php to something else (and update the last rewrite rule as well) to avoid the MultiViews conflict. Edit: I initially included RewriteCond ... !-d but it is extraneous.
How can you use mod_rewrite to check whether a cache file exists and, if it does, rewrite to the cache file, otherwise rewriting to a dynamic file? For example, I have the following folder structure: pages.php cache/ pages/ 1.html 2.html textToo.html etc. How would you set up the RewriteRules so a request can be sent like this: example.com/pages/1 And if the cache file exists, rewrite to the cache file, and if the cache file does not exist, rewrite to pages.php?p=1 It should be something like this: (note that this does not work, otherwise I would not have asked) RewriteRule ^pages/([^/\.]+) cache/pages/$1.html [NC,QSA] RewriteCond %{REQUEST_FILENAME} -f [NC,OR] RewriteCond %{REQUEST_FILENAME} -d [NC] RewriteRule cache/pages/([^/\.]+).html pages.php?p=$1 [NC,QSA,L] I can of course do this using PHP, but I thought it had to be possible using mod_rewrite.
RewriteRule: checking whether a file in the rewritten file path exists
It's basically a two-or-three-step process, which cleans the project of all cached assets. Of course, if anyone uses this technique and a project still does not show updated assets, then please add an answer! It's definitely possible that someone out there has encountered situations that require a step that I'm not including. Clean your project with Shift-Cmd-K Delete derived data by calling a shell script (details below), defined in your bash profile Uninstall the app from the Simulator or device. For certain types of assets, you may also have to reset the Simulator (under the iOS Simulator menu) To call the shell script below, simply enter the function name (in this case 'ddd') into your terminal, assuming it's in your bash profile. Once you've saved your bash profile, don't forget to update your terminal's environment if you kept it open, with the source command: source ~/.bash_profile ddd() { #Save the starting dir startingDir=$PWD #Go to the derivedData cd ~/Library/Developer/Xcode/DerivedData #Sometimes, 1 file remains, so loop until no files remain numRemainingFiles=1 while [ $numRemainingFiles -gt 0 ]; do #Delete the files, recursively rm -rf * #Update file count numRemainingFiles=`ls | wc -l` done echo Done #Go back to starting dir cd $startingDir } I hope that helps, happy coding!
Is there a procedure I can follow that includes running a script in the terminal, to delete all the files under the derived data folder and reliably clean a project? Sometimes, a project's assets don't always get updated to my simulator or device. It's mostly trial and error, and when I find that an old asset made its way into a test build, it's too late, not to mention embarrassing! I've looked at this question, but it seems a little outdated: How to Empty Caches and Clean All Targets Xcode 4 I also checked out this question, but I don't want to waste time in Organizer, if I don't absolutely need to: How to "Delete derived data" in Xcode6? I've looked at other posts out there, but found nothing that solves the problem of reliably cleaning a project and saves time with a script.
How to Delete Derived Data and Clean Project in Xcode 5 and later?
You should cache the result of typeof(T).GetProperty(propName) rather than calling it on every get and set. Another possible approach is to combine PropertyInfo.GetGetMethod (or PropertyInfo.GetSetMethod for the setter) with Delegate.CreateDelegate and invoke the resulting delegate every time you need to get/set values. If you need this to work with generics, you can use the approach from this question: CreateDelegate with unknown types This should be much faster compared to reflection: Making reflection fly and exploring delegates There are also other ways to get/set values in a faster way. You can use expression trees or DynamicMethod to generate the IL at runtime. Have a look at these links: Late-Bound Invocations with DynamicMethod Delegate.CreateDelegate vs DynamicMethod vs Expression
I know that Reflection can be expensive. I have a class that gets/sets to properties often, and one way I figured was to cache the reflection somehow. I'm not sure if I'm supposed to cache an expression or what to do here really. This is what I'm currently doing: typeof(T).GetProperty(propName).SetValue(obj, value, null); typeof(T).GetProperty(propName).GetValue(obj, null); So... what would be the best way to make this quicker?
Best way to cache a reflection property getter / setter?
Whether a javascript file is cached depends on how your web server is set up, how the user's browser is set up, and also how any HTTP proxy servers between your server and the user are set up. The only bit you can control is how your server is set up. If you want the best chance of your javascript being cached, then your server needs to be sending the right HTTP headers with the javascript file. Exactly how you do that depends on what web server you are using. Here are a couple of links that might help: Apache - http://httpd.apache.org/docs/2.0/mod/mod_expires.html IIS - http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0fc16fe7-be45-4033-a5aa-d7fda3c993ff.mspx?mfr=true
How can I check for a javascript file in the user's cache? If he refreshes the page or visits the site after some time, I need not download that js file again. Do the js files get cleaned up after a site is closed?
Check browser's cache for a js file
If you want Postgres to automatically do something on the basis of an insert/update/delete - i.e. if you want this operation to trigger some other action - then you need to write a trigger. It's pretty straightforward. Simple enough that I doubt anyone would bother creating an extension (let alone a language feature) to save you the trouble. And it's certainly simpler (and as you pointed out, safer) than whatever ActiveRecord has going on under the hood. Something like this is generally all it takes (I haven't tested this, so you might want to do so...): CREATE FUNCTION maintain_comment_count_trg() RETURNS TRIGGER AS $$ BEGIN IF TG_OP IN ('UPDATE', 'DELETE') THEN UPDATE tasks SET comment_count = comment_count - 1 WHERE id = old.task_id; END IF; IF TG_OP IN ('INSERT', 'UPDATE') THEN UPDATE tasks SET comment_count = comment_count + 1 WHERE id = new.task_id; END IF; RETURN NULL; END $$ LANGUAGE plpgsql; CREATE TRIGGER maintain_comment_count AFTER INSERT OR UPDATE OF task_id OR DELETE ON comments FOR EACH ROW EXECUTE PROCEDURE maintain_comment_count_trg(); If you want it to be airtight, you'd need an additional trigger for TRUNCATEs on comments; whether it's worth the trouble is up to you. To handle updates to a tasks.id value which is being referenced (either via deferred constraints or ON UPDATE actions) then there's a bit more to it, but this is an uncommon case. And if your client library / ORM is naive enough to send through every field in every UPDATE statement, you may want a separate UPDATE trigger which fires only when the value has actually changed: CREATE TRIGGER maintain_comment_count_update AFTER UPDATE OF task_id ON comments FOR EACH ROW WHEN (old.task_id IS DISTINCT FROM new.task_id) EXECUTE PROCEDURE maintain_comment_count_trg();
In my database I have tasks and comments tables. Each task has many comments. I'd like to create a tasks.comments_count column that would be updated automatically by PostgreSQL, so I can get comments_count (and sort / filter by it) in O(1) time while selecting all tasks. I know there are language-specific solutions like ActiveRecord's counter cache, but I don't want to use them (I find them fragile). I'd like PostgreSQL to take care of such counter caches. I also know PostgreSQL supports triggers, but they are hard to write and use (not a solid solution). Ideally it would be a PostgreSQL extension or some native feature I'm not aware of. Lazy calculation of such counters would be a great bonus.
Counter cache column in PostgreSQL
Most of the discussion of cache line alignment deals with high-performance computing: working with many threads and keeping scalability as close to linear as possible. In those discussions, the reason for cache line alignment is to prevent a write to one data variable from invalidating the cache line that also contains another variable used by a different thread (false sharing). So, unless you are trying to write code that will scale to a very high number of processor cores, cache line alignment probably won't matter much to you. But again, test it and see.
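A quick way to observe the effect described above (false sharing) is a toy benchmark like the sketch below. It is only a heuristic: the JVM may reorder fields, the padding assumes 64-byte cache lines (typical for an i7), and the numbers will vary by machine.

public class FalseSharingDemo {

    // Both counters likely share one cache line.
    static class SharedPair { volatile long a; volatile long b; }

    // Padding pushes b onto a different cache line (assumes 64-byte lines).
    static class PaddedPair {
        volatile long a;
        long p1, p2, p3, p4, p5, p6, p7;
        volatile long b;
    }

    static final long ITERATIONS = 50_000_000L;

    public static void main(String[] args) throws InterruptedException {
        SharedPair s = new SharedPair();
        PaddedPair p = new PaddedPair();
        System.out.println("shared: " + time(() -> s.a++, () -> s.b++) + " ms");
        System.out.println("padded: " + time(() -> p.a++, () -> p.b++) + " ms");
    }

    // Runs the two updates on two threads and returns elapsed milliseconds.
    static long time(Runnable left, Runnable right) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (long i = 0; i < ITERATIONS; i++) left.run(); });
        Thread t2 = new Thread(() -> { for (long i = 0; i < ITERATIONS; i++) right.run(); });
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join(); t2.join();
        return (System.nanoTime() - start) / 1_000_000;
    }
}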
I know only the basic ideas of aligned memory allocation, and I never cared much about alignment because I am not an assembly programmer and have no experience with MMX/SIMD; I considered it one of those premature optimizations. These days people talk more and more about cache hits, cache coherence, optimization for size, etc. Some source code even allocates memory explicitly aligned to CPU cache lines. Frankly, I don't know the cache line size of my i7 CPU. I know there is no harm in aligning to a larger size, but will it really pay off, without SIMD? Let's say there are 100,000 items of 100-byte data in a program, and access to these data is the most intensive work the program does. If we change the data structure and align all the 100-byte items to 16 bytes, is it possible to gain a noticeable performance improvement? 10%? 5%?
Will cache-line-aligned memory allocation pay off?
It turns out that this line: HttpContext.Current.Cache[remoteIp] = ((int)HttpContext.Current.Cache[remoteIp]) + 1; removes the previous value and re-inserts the value with NO absolute or sliding expiration time. In order to get around this I had to create a helper class and use it like so: public class IncrementingCacheCounter { public int Count; public DateTime ExpireDate; } public void UpdateCountFor(string remoteIp) { IncrementingCacheCounter counter = null; if (HttpContext.Current.Cache[remoteIp] == null) { var expireDate = DateTime.Now.AddMinutes(5); counter = new IncrementingCacheCounter { Count = 1, ExpireDate = expireDate }; } else { counter = (IncrementingCacheCounter)HttpContext.Current.Cache[remoteIp]; counter.Count++; } HttpContext.Current.Cache.Insert(remoteIp, counter, null, counter.ExpireDate, Cache.NoSlidingExpiration, CacheItemPriority.Default, null); } This will get around the issue and let the counter properly expire at the absolute time while still enabling updates to it.
I am storing a single integer value in HttpContext.Cache with an absolute expiration time of 5 minutes from now. However, after waiting 6 minutes (or longer), the integer value is still in the Cache (i.e. it's never removed even though the absolute expiration has passed). Here is the code I am using: public void UpdateCountFor(string remoteIp) { // only returns true the first time its run // after that the value is still in the Cache // even after the absolute expiration has passed // so after that this keeps returning false if (HttpContext.Current.Cache[remoteIp] == null) { // nothing for this ip in the cache so add the ip as a key with a value of 1 var expireDate = DateTime.Now.AddMinutes(5); // I also tried: // var expireDate = DateTime.UtcNow.AddMinutes(5); // and that did not work either. HttpContext.Current.Cache.Insert(remoteIp, 1, null, expireDate, Cache.NoSlidingExpiration, CacheItemPriority.Default, null); } else { // increment the existing value HttpContext.Current.Cache[remoteIp] = ((int)HttpContext.Current.Cache[remoteIp]) + 1; } } The first time I run UpdateCountFor("127.0.0.1") it inserts 1 into the cache with key "127.0.0.1" and an absolute expiration of 5 minutes from now as expected. Every subsequent run then increments the value in the cache. However, after waiting 10 minutes it continues to increment the value in the Cache. The value never expires and never gets removed from the Cache. Why is that? It's my understanding that an absolute expiration time means the item will get removed approximately at that time. Am I doing something wrong? Am I misunderstanding something? I'm expecting the value to be removed from the Cache after 5 minutes time, however it stays in there until I rebuild the project. This is all running on .NET 4.0 on my local machine.
ASP.net Cache Absolute Expiration not working
Cache doesn't have an official abstraction or provider, but you can easily build one: http://weblogs.asp.net/zowens/archive/2008/08/04/cache-abstraction.aspx http://memcachedproviders.codeplex.com/SourceControl/changeset/view/15983#58762 ASP.NET 4.0 includes an output cache provider abstraction (AFAIK not a general cache abstraction but only for output caching)
I am migrating a MonoRail application to ASP.NET MVC 1.0. In my original application I wrote a custom cache provider (a distributed cache provider using memcached). In MonoRail this task was very easy because the framework used interfaces and there is ICacheProvider that looks like this: public interface ICacheProvider : IProvider, IMRServiceEnabled { void Delete(string key); object Get(string key); bool HasKey(string key); void Store(string key, object data); } An instance of this interface is available in every controller action. So, all I had to do was to implement a custom cache provider that uses memcached and tell MonoRail to use my cache provider instead of the default one. It was also very easy to mock and unit test my controller. In ASP.NET MVC 1.0 there's the System.Web.Abstractions assembly (name looks promising) that defines the HttpContextBase like this: public abstract class HttpContextBase : IServiceProvider { ... public virtual System.Web.Caching.Cache Cache { get; } ... } I don't understand how the Cache property used here is an abstraction of a cache provider. It is the legacy sealed Cache class. It seems that I am not the only one struggling to mock out the classes in the framework. I am very new to the ASP.NET MVC framework and I must be missing something here. I could write a CustomBaseController that uses an ICacheProvider interface that I define and all my controllers derive from this base class, but if there's a more elegant (ASP.NET MVCish) solution I would be glad to implement it. I've noticed that HttpContextBase implements IServiceProvider. Where's the GetService method going to look for services? Can this be easily mocked?
How to implement a custom cache provider with ASP.NET MVC
Linked lists are good for LRU caches. For indexed lookups inside the linked list (to move the entry to the most recently used end of the linked list), use a HashTable. The least recently used entry will always be last in the linked list.
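That combination, a hash table for O(1) lookup plus a doubly linked list for recency order, is exactly what Java's LinkedHashMap provides out of the box; a minimal sketch:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    // Called by LinkedHashMap after every insertion; returning true evicts
    // the eldest (least recently used) entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes most recently used
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}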
I intended to implement a HashTable to locate objects quickly, which is important for my application. However, I don't like the idea of scanning and potentially having to lock the entire table in order to locate which object was last accessed. Tables could be quite large. What data structures are commonly used to overcome that? For example, I thought I could throw objects into a FIFO as well as the cache in order to know how old something is, but that's not going to support an LRU algorithm. Any ideas? How does Squid do it?
What data structures are commonly used for LRU caches and quickly locating objects?
I suggest you look into gwt-presenter and its CachingDispatchAsync. It provides a single point of entry for executing remote commands and therefore a perfect opportunity for caching. A recent blog post outlines a possible approach.
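The general shape of such a single point of entry is easy to sketch. The types below (Dispatcher, Callback) are placeholders standing in for GWT's command pattern and AsyncCallback; this is not the actual gwt-presenter API.

import java.util.HashMap;
import java.util.Map;

public class CachingDispatcher {

    public interface Callback<T> { void onSuccess(T result); }

    public interface Dispatcher { <T> void execute(Object action, Callback<T> callback); }

    private final Dispatcher delegate;
    private final Map<Object, Object> cache = new HashMap<Object, Object>();

    public CachingDispatcher(Dispatcher delegate) {
        this.delegate = delegate;
    }

    @SuppressWarnings("unchecked")
    public <T> void execute(final Object action, final Callback<T> callback) {
        if (cache.containsKey(action)) {
            callback.onSuccess((T) cache.get(action)); // served from cache, no server call
            return;
        }
        delegate.execute(action, new Callback<T>() {
            public void onSuccess(T result) {
                cache.put(action, result); // keyed by the action; needs equals/hashCode
                callback.onSuccess(result);
            }
        });
    }
}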
We have a GWT client which receives quite a lot of data from our servers. Logically, I want to cache the data on the client side, sparing the server from unnecessary requests. As of today I have left it up to my models to handle the caching of data, which doesn't scale very well. It has also become a problem since different developers in our team develop their own "caching" functionality, which floods the project with duplication. I'm thinking about how one could implement a "single point of entry" that handles all the caching, leaving the models clueless about how the caching is handled. Does anyone have any experience with client-side caching in GWT? Is there a standard approach that can be implemented?
Client side caching in GWT
I am looking at my performance monitor and under the ASP.NET Apps v2.0.50727 category I have the following cache related counters: Cache % Machine Memory Limit Used Cache % Process Memory Limit Used There are also a lot of other cache related metrics under this category. These should be able to get you the percentage, then if you can get the total allowed with Cache.EffectivePrivateBytesLimit or some other call you should be able to figure it out. I do not have personal experience with these counters so you will have to do some research and testing to verify. Here is a quick start article on reading from performance counters: http://quickstart.developerfusion.co.uk/quickstart/howto/doc/PCRead.aspx
I'm using the ASP.net cache in a web project, and I'm writing a "status" page for it which shows the items in the cache, and as many statistics about the cache as I can find. Is there any way that I can get the total size (in bytes) of the cached data? The size of each item would be even better. I want to display this on a web page, so I don't think I can use a performance counter.
How to determine total size of ASP.Net cache?
Faster stuff costs more per bit. So you have a descending chain of storage, from a few registers at one end, through several levels of cache, down to RAM. Each level is bigger and slower than the one before. And all the way at the bottom you have disk. The underlying reason is the circuitry: L1/L2/L3 caches are built from SRAM, while main memory is DRAM. SRAM is faster but requires more transistors per bit, which makes it more expensive, more power hungry, and bulkier per byte.
Why do we need to cache in cache memory? Why can't RAM be made as fast as registers or cache memory, or the cache be as large as RAM (4 GB), so that everything can be in cache? Any good articles/books to understand these concepts?
Why isn't RAM as fast as registers/cache memory? [closed]
Yes, Memcache is shared across all instances of your app.
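A small sketch against the low-level API the question already uses, showing the shared behaviour (the "hits" key is arbitrary; entries may be evicted at any time, so callers must always tolerate a miss):

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class SharedCounter {

    // Every instance of the app sees the same value for this key, because
    // Memcache is a service shared by all instances, not per-JVM state.
    public long hit() {
        MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        // Atomic across instances; starts from 0 if the key is absent
        // (or was evicted, which Memcache is allowed to do at any time).
        Long count = cache.increment("hits", 1, 0L);
        return count == null ? 0L : count;
    }
}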
I'm new to Google App Engine, and I've spent the last few days building an app using GAE's Memcache to store data. Based on my initial findings, it appears as though GAE's Memcache is NOT global? Let me explain further. I'm aware that different requests to GAE can potentially be served by different instances (in fact this appears to happen quite often). It is for this reason that I'm using Memcache to store some shared data, as opposed to a static Map. I thought (perhaps incorrectly) that this was the point of using a distributed cache: so that data could be accessed by any node. Another definite possibility is that I'm doing something wrong. I've tried both JCache and the low-level Memcache API (I'm writing Java, not Python). This is what I'm doing to retrieve the cache: MemcacheService cache = MemcacheServiceFactory.getMemcacheService(); After deployment, this is what I examine (via my application logs): The initial request is served by a particular node, and data is stored into the cache retrieved above. The next few requests retrieve this same cache and the data is there. When a new node gets spawned to serve a request (from the logs I know when this happens because GAE logs the fact that "This request caused a new process to be started for your application .."), the cache is retrieved and is EMPTY!! Now I also know that there is no guarantee of how long data will be in Memcache, but from my findings it appears the data is gone the moment a different instance tries to access the cache. This seems to go against the whole concept of a distributed global cache, no? Hopefully someone can clarify exactly how this SHOULD behave. If Memcache is NOT supposed to be global and every server instance has its own copy, then why even use Memcache? I could simply use a static HashMap (which I initially did until I realized it wouldn't be global due to different instances serving my requests). Help?
Is Memcache (Java) for Google App Engine a global cache?
You need to configure this option in the API Gateway panel. Choose your API and click Resources. Choose the method and look at the URL Query String section. If there is no query string, add one. Mark the "caching" option of the query string. Perform the final tests and, finally, deploy the changes.
I'm configuring caching on the AWS API Gateway side to improve performance of my REST API. The endpoint I'm trying to configure uses a query parameter. I have already enabled caching on the API Gateway side, but unfortunately had to find out that it ignores the query parameters when building the cache key. For instance, when I make a first GET call with query parameter "test1": GET https://2kdslm234ds9.execute-api.us-east-1.amazonaws.com/api/test?search=test1 The response for this call is saved in the cache, and when I then make a call with another query parameter, "test2": GET https://2kdslm234ds9.execute-api.us-east-1.amazonaws.com/api/test?search=test2 I again get the response of the first call. The caching settings are pretty simple, and I didn't find anything related to parameter configuration. How can I configure Gateway caching to take query parameters into account?
AWS API Gateway caching ignores query parameters
Caching the collection using the caching abstraction is a duplicate of what the underlying caching system is doing. And because it is a duplicate, it turns out that you have to resort to some kind of duplication in your own code in one way or the other (the duplicate key for the set is the obvious representation of that). And because there is duplication, you have to sync state somehow. If you really need access to both the whole set and individual elements, then you should probably use a shortcut for the easiest leg. First, you should make sure your cache contains all elements, which is not something that is obvious. Far from it, actually. Considering you have that: //EhCacheCache cache = (EhCacheCache) cacheManager.getCache("products"); @Override public Set<Product> findAll() { Ehcache nativeCache = cache.getNativeCache(); Map<Object, Element> elements = nativeCache.getAll(nativeCache.getKeys()); Set<Product> result = new HashSet<Product>(); for (Element element : elements.values()) { result.add((Product) element.getObjectValue()); } return Collections.unmodifiableSet(result); } The elements result is actually a lazily loaded map, so a call to values() may throw an exception; you may want to loop over the keys or something. You have to remember that the caching abstraction eases access to the underlying caching infrastructure and in no way replaces it: if you had to use the API directly, this is probably what you would have to do in some form. Now, we can keep the conversation going on SPR-12036 if you believe we can improve the caching abstraction in that area. Thanks!
I am working with Spring and EhCache. I have the following method: @Override @Cacheable(value="products", key="#root.target.PRODUCTS") public Set<Product> findAll() { return new LinkedHashSet<>(this.productRepository.findAll()); } I have other methods working with @Cacheable, @CachePut and @CacheEvict. Now, imagine the database returns 500 products and they are cached through key="#root.target.PRODUCTS"; then another method inserts, updates or deletes an item in the database. Therefore the products cached through key="#root.target.PRODUCTS" are no longer the same as in the database. I mean, check the two following methods; they are able to update/delete an item, and that same item is also cached under the other key="#root.target.PRODUCTS": @Override @CachePut(value="products", key="#product.id") public Product update(Product product) { return this.productRepository.save(product); } @Override @CacheEvict(value="products", key="#id") public void delete(Integer id) { this.productRepository.delete(id); } I want to know if it is possible to update/delete the item located in the cache through key="#root.target.PRODUCTS"; it would be 500 with the Product updated, or 499 if the Product was deleted. My point is, I want to avoid the following: @Override @CachePut(value="products", key="#product.id") @CacheEvict(value="products", key="#root.target.PRODUCTS") public Product update(Product product) { return this.productRepository.save(product); } @Override @Caching(evict={ @CacheEvict(value="products", key="#id"), @CacheEvict(value="products", key="#root.target.PRODUCTS") }) public void delete(Integer id) { this.productRepository.delete(id); } I don't want to fetch the 500 or 499 products again just to cache them under key="#root.target.PRODUCTS". Is it possible to do this? How? Thanks in advance.
How to update/remove an item already cached within a collection of items
Reusing the pip cache between builds is a very good idea, but doing the same for the virtualenvs is a really bad idea. This is because a virtualenv can easily become messed up in a way that you cannot really detect at runtime. This not only happens, it happens more often than you could imagine, so please avoid it. PS. Advice from someone who learnt that the hard way.
I cached pip packages using a GitLab CI script, so that's not an issue. Now I also want to cache a Conda virtual environment, because it reduces the time needed to set up the environment. I cached the virtual environment. Unfortunately it takes a long time at the end to cache all the venv files. I tried to cache only the $CI_PROJECT_DIR/myenv/lib/python3.6/site-packages folder and it seems to reduce the run time of the pipeline. My question is: am I doing it correctly? The script is given below: gitlab-ci.yml image: continuumio/miniconda3:latest cache: paths: - .pip - ls -l $CI_PROJECT_DIR/myvenv/lib/python3.6/site-packages - $CI_PROJECT_DIR/myvenv/lib/python3.6/site-packages before_script: - chmod +x gitlab-ci.sh - ./gitlab-ci.sh stages: - test test: stage: test script: - python eval.py gitlab-ci.sh #!/usr/bin/env bash ENV_NAME=myenv ENV_REQUIREMENTS=requirements.txt if [ ! -d $ENV_NAME ]; then echo "Environment $ENV_NAME does not exist. Creating it now!" conda create --path --prefix "$CI_PROJECT_DIR/$ENV_NAME" fi echo "Activating environment: $CI_PROJECT_DIR/$ENV_NAME" source activate "$CI_PROJECT_DIR/$ENV_NAME" echo "Installing PIP" conda install -y pip echo "PIP: installing required packages" echo `which pip` pip --cache-dir=.pip install -r "$ENV_REQUIREMENTS"
Caching virtual environment for gitlab-ci
Caching is a typical approach to speeding up long-running operations. It sacrifices memory for the sake of computational speed. Suppose you have a function which, given a set of parameters, always returns the same result. Unfortunately this function is very slow, and you need to call it a considerable number of times, slowing down your program. What you could do is store a limited number of {parameters: result} combinations and skip the function's logic any time it is called with the same parameters. It's a dirty trick but quite effective, especially if the number of parameter combinations is small compared to the cost of the function. In Python 3 there's a decorator for this purpose (functools.lru_cache). In Python 2 a library can help, but you need a bit more work.
I am using python/pysam to analyze sequencing data. In its tutorial (pysam - An interface for reading and writing SAM files) it says for the command mate: 'This method is too slow for high-throughput processing. If a read needs to be processed with its mate, work from a read name sorted file or, better, cache reads.' How would you 'cache reads'?
How to cache reads?
Edit: After some testing, this seems to work just fine and doesn't get purged. This isn't 100% confirmed yet, but it worked fine on our basic tests (I'll post more thorough results as they come). It seems saving the data to the app's "Application Support" folder resolves these issues, as this folder isn't purged. The docs state: Use the Application Support directory constant NSApplicationSupportDirectory, appending your <bundle_ID> for: Resource and data files that your app creates and manages for the user. You might use this directory to store app state information, computed or downloaded data, or even user created data that you manage on behalf of the user / Autosave files.` You could get to that folder as follows (Notice appending of the bundle ID as requested by the official apple docs): [[NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:[[NSBundle mainBundle] bundleIdentifier]] Hope this helps anyone , and as I said, I will post more thorough test results during the weekend. Edit 2: Its very important to add this line to prevent syncing of temporary content from Application Support. I use it in the end of my applicationDidFinishLaunching: [[NSURL URLWithString:NSApplicationSupportDir] setResourceValue:@(YES) forKey:NSURLIsExcludedFromBackupKey error:nil];
I have an app that lets you download "modules" that can expand your app usage. When the user downloads a module, I fetch a ZIP file from a server and extract it to his Caches folder. (Each of these zips could be sized anywhere from 60k to 2MB). Unfortunately, there are over 300 modules available, and many of the users download at least 50-60 of these to their device. Lately, I got many complaints that modules just disappear off the user device, so I did some investigation and came across the following wording in Apple's documentation. iOS will delete your files from the Caches directory when necessary, so your app will need to degrade gracefully if it's data files are deleted. And also the following article explaining further about this situation: http://iphoneincubator.com/blog/data-management/local-file-storage-in-ios-5 My problem is, I have no actual way of degrading gracefully, since I can't automatically let the user download so many modules. It could take hours depending on the internet connection and size of the modules. So I have a few questions: Did any of you ever had to deal with a similar situation, and if yes, how? Does anyone know when exactly iOS purges the Cache? What is considered "low space" warning? This way I could at least give the user a warning that he doesnt have enough space to install a new module. Is there a way to receive some sort of warning before the Cache folder is cleared? This is a really frustrating move from Apple and I don't really see a way out. Would really love to hear some ideas from you.
Caches folder purged (Emptied/Cleared) automatically on iOS
I believe that this has already been answered: How to get expiry date for cached item? If that isn't what you're looking for, consider the following: The API doesn't support getting the policy back from the retrieval of the cached item. Since you can cache any object, you could cache the policy in conjunction with the object and do some time based comparisons when you need to later access the object.
Is it possible to read the expiration time of an item in MemoryCache? I'm using the .NET System.Runtime.Caching.MemoryCache to store my configuration information for 30 min before I reload it from the database. As part of a status page I would like to show how old a specific cache item is or when it will expire. object config = LoadSomeStuffFromDatabase(); CacheItemPolicy cachePolicy = new CacheItemPolicy() { AbsoluteExpiration = DateTime.Now.AddMinutes(30) }; Cache.Add("MyConfigKey", config, cachePolicy); // And now I want to do something like this :-) DateTime dt = Cache.SomeMagicMethod("MyConfigKey"); Today I'm using a variable that is updated every time I reload the configuration. But it would be much nicer to read this information from the cache itself.
How to check when an item in MemoryCache will expire?
According to the AMP Project FAQ, you cannot: By using the AMP format, content producers are making the content in AMP files available to be cached by third parties. As a content producer I dislike Google adding their own URL and branding around my content. From the consumer's perspective it looks like the content comes from Google. They say it is to improve speed, but you can see Google's intention behind this "free" technology.
Some results on Google Search come with an AMP (Accelerated Mobile Pages) icon on their links, at least when using a mobile device; as soon as you click the link, instead of loading the site, Google shows you a cached version of it. I want to disable this behaviour on my results, and I see at least two good reasons for it: When sharing the link, it is a pain in the neck to have the huge Google URL in place of the shorter original one. Security: when you access any site and see a URL other than the site you wanted to load, you should distrust it, even if it looks like Google (remember, you can get phished or even get caught in a trap hosted on Google Sites). Google should respect that instead of encouraging users to trust a URL just because it looks like Google's! Even worse when combined with the first reason, if you want to share the URL with a friend: I have to remove the Google AMP prefix over and over. Is there no advanced search option or cookie that makes Google give the clean URL?
How to disable AMP caching from Google Search? [closed]
You need to redefine every findBy* or findOneBy* function in a custom repository: this is the only way, as Doctrine2's default behaviour doesn't take this situation into account. It's up to you, unfortunately. Also, Ocramius (a Doctrine2 developer) says so here: https://groups.google.com/d/msg/doctrine-user/RIeH8ZkKyEY/HnR7h2p0lCQJ
I am working on a Symfony 2.5 project with Doctrine 2.4. I want to cache query results with a cache id and cache lifetime, so I can delete the cached result whenever needed through the admin. I am able to cache the query result with the "createQueryBuilder()" option. Example: $this->createQueryBuilder('some_table') ->select('some_table') ->where('some_table.deleted_date IS NULL') ->getQuery() ->useResultCache(true) ->setResultCacheLifetime(120) //Query Cache lifetime ->setResultCacheId($queryCacheId) //Query Cache Id ->getResult(); But I am not able to find a similar way to cache the query result for the "findOneBy()" option. Example: $this->findOneBy(array('some_field'=>$some_value)); I am looking for a proper solution; any help is much appreciated.
How to cache doctrine "findOneBy()" query with cache id and cache lifetime option in Symfony 2.4?
I think I've run into this problem, or a very similar one. What I did then was to implement my own DNS provider for the JVM, see how to change the java dns service provider for details. You can use the dnsjava mentioned there or roll your own.
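For reference, the JVM-side cache the question already disables is controlled by security properties (not system properties), and they must be set before the first lookup. Below is a minimal sketch; as noted above, this does nothing about the OS-level resolver cache sitting below InetAddress, which is why a custom provider or dnsjava is needed to bypass it.

import java.net.InetAddress;
import java.security.Security;

public class NoJvmDnsCache {
    public static void main(String[] args) throws Exception {
        // Disable the JVM's positive and negative DNS caches.
        Security.setProperty("networkaddress.cache.ttl", "0");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        // Each call now reaches the OS resolver, which may still serve
        // its own cached answer (e.g. the Windows DNS Client cache).
        System.out.println(InetAddress.getByName("mytest.com").getHostAddress());
    }
}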
I am facing a problem with the JVM and DNS. Everything I'm reading (including the docs and this) says that I can disable JVM DNS caching using networkaddress.cache.ttl, which can be set using java.security.Security.setProperty, but not through the standard approach of using system properties. I have successfully changed this to 0, so there is no more caching in my JVM. But now, on each call of InetAddress.getByName("mytest.com"), it seems that my JVM is using the system DNS cache (in my case Windows 8). Indeed, between 2 calls of the method, I changed the BIND9 record for "mytest.com", but the IP returned is still the same. Here is the workflow: setCachePolicyInJVM(0) in my Java code. Set mytest.com to 192.168.1.188 in BIND9, restart. InetAddress.getByName("mytest.com").getHostAddress(); -> 192.168.1.188 Set mytest.com -> 192.168.1.160 in BIND9, restart. InetAddress.getByName("mytest.com").getHostAddress(); -> 192.168.1.188 (should be 160 if there were no caching). Flush the Windows DNS. InetAddress.getByName("mytest.com").getHostAddress(); -> 192.168.1.160 I have read several times that the JVM does not use the system cache, but that is wrong: it clearly does. How do we force a new DNS resolution on each call, bypassing the OS DNS cache?
JVM and OS DNS Caching
The problem with the movntdqa instruction with an %%xmm register as target (loading from memory) is that this instruction is only available with SSE4.1 and on. This means newer Core 2 (45 nm) or i7 only so far. The other direction (storing data to memory) is available in earlier SSE versions. For this instruction, the processor moves the data into one of a very few small read buffers (Intel doesn't specify the exact size, but assume it is in the range of 16 bytes), where it is readily available but gets kicked out after a few other loads. And it does not pollute the other caches, so if you have streaming data, your approach is viable. Remember, you need to use an sfence instruction afterwards. Prefetching exists in two variants: prefetcht0 (prefetch data into all cache levels) and prefetchnta (prefetch non-temporal data). Usually prefetching into all caches is the right thing to do; for a streaming-data loop the latter would be better, if you make consistent use of the streaming instructions. You use it with the address of an object you want to use in the near future, usually some iterations ahead if you have a loop. The prefetch instruction doesn't wait or block, it just makes the processor start fetching the data at the specified memory location.
I want to read a memory location without polluting the cache. I am working on an x86 Linux machine. I tried using the MOVNTDQA assembler instruction: asm("movntdqa %[source], %[dest] \n\t" : [dest] "=x" (my_var) : [source] "m" (my_mem[0]) : "memory"); my_mem is an int* allocated with new; my_var is an int. I have two problems with this approach: The code compiles, but I am getting an "Illegal Instruction" error when running it. Any ideas why? I am not sure what type of memory is allocated with new. I would assume WB. According to the documentation, the MOVNTDQA instruction works only with the USWC memory type. How can I know what memory type I am working on? To summarize, my question is: How can I read a memory location without polluting the cache on an x86 machine? Is my approach in the right direction, and can it be fixed to work? Thanks.
How can I load values from memory without polluting the cache?
I found the SysInternals Sync worked well for me - although it flushes ALL cache, not just for the specific folder. Example of usage: IF EXIST Output RD /S /Q Output && Sync && MD Output By default it flushes all cached data for all drives - you can specify command-line options to restrict which drives but you cannot restrict it to just specific folders. Without it I would often get Access denied errors because the MD was trying to create a new folder while the system was still in the process of deleting the old one.
Does anyone know how to flush the disk write cache data from the cache manager for the current directory (or any given file or directory, for that matter), from a Windows command line?
Flush disk write cache from Windows CLI
Laravel has a feature explicitly for this scenario, called Retrieve Or Store: use Cache; public function findById($id) { return Cache::rememberForever("article-$id", function () use ($id) { return Article::find($id); }); } This returns the cached value when one exists; otherwise it executes the closure, caches its return value, and returns it.
On a news website, I have an Article model, and I want to cache the latest articles, since I expect they get the most hits. How can I write a method that operates this way: public function findById($id) { if(Article::inMemory($id)) return Article::findFromMemory($id); return Article::find($id); } If there are any better approaches, please mention them as well.
Laravel, using in-memory DB to cache results
I can't immediately see how to do this portably. However, GHC does have "weak pointers". (See System.Mem.Weak.) If you create items and hang on to them via weak pointers (only), then the garbage collector will automatically start deleting items if you run low on physical memory. (Unfortunately, this doesn't give you the ability to decide which items to delete first, e.g., the ones that are cheapest to recreate or the ones that have been least-used or something.)
I'm creating a program which implements some kind of cache. I need to use as much memory as possible, and to do that I need to do two things: Check how much memory is still available in the system (RAM only, not swap) Check how much memory my app is already using I need a platform-independent solution (Linux, Windows, etc.). Using these two pieces of information I will shrink or enlarge the cache. How can I get this information in Haskell? Are there any packages that can provide it?
Check memory usage in haskell
If you want to store this information in a file you manually create in the app/cache folder, you may use the following solution: https://stackoverflow.com/a/13410635/1443490 If you don't want/need to care about what folder is used inside the app/cache folder and your project already uses Doctrine, you can follow this solution: https://stackoverflow.com/a/8900283/1443490
I have a configuration table (id, name, value) containing some configuration variables for my Symfony application, such as email_expiration_duration. I want to write a service to read these configuration variables from the Symfony application, and I want to cache the data in the app/cache folder: that is, I will read the data from the database only if it is not in the cache, and I will clear the cached data whenever a configuration value changes. How can I do this in Symfony2?
How to store data in cache in symfony2
If you know which parameters are important for cache-key generation, you could specify them manually. Based on your example:

set $cache_key "$uri?id=$arg_id&type=$arg_type&sort=$arg_sort&limit=$arg_limit";

Or you could use embedded Perl and write your own function that generates the cache key; see the examples at http://wiki.nginx.org/Configuration#Embedded_Perl_examples (Note from the comments: listing parameters manually doesn't scale when there are many different API calls, each with its own set of parameters - the embedded-Perl sketch below is a generic alternative.)
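A sketch of an order-insensitive key using the embedded Perl module (requires nginx built with ngx_http_perl_module; the variable name $sorted_args is made up, while perl_set and $r->args are real):

perl_set $sorted_args 'sub {
    my $r = shift;
    # Canonicalize the query string: split on &, sort, re-join,
    # so id=53&limit=10 and limit=10&id=53 produce the same key.
    return join("&", sort split(/&/, $r->args));
}';

set $cache_key "$uri?$sorted_args";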
I am generating a cache key with nginx based on the request URI and query params, checking memcache directly and then serving the page from PHP-FPM if the cache key is not found. My problem is that many URLs have query-string options that come in in varying orders, and thus generate two or more separate cache keys per response. My cache setting is something like:

set $cache_key "$uri?$args";

So URLs that come in with query-string params in different orders end up generating multiple possible cache keys for the same content:

http://example.com/api/2.2/events.json?id=53&type=wedding&sort=title&limit=10
http://example.com/api/2.2/events.json?id=53&limit=10&type=wedding&sort=title
http://example.com/api/2.2/events.json?id=53&limit=10&sort=title&type=wedding

Ad nauseam for n! possibilities... The end result is that memcache often fills up a lot faster than it should, because I have potentially n!-1 duplicate copies of cached content simply because the query-string parameters came in a different order. Is there a way I can order them alphabetically before setting the cache key to avoid this? Are there other ways to elegantly solve this issue?
Nginx caching with variable param order
I am using interceptors: if the request URL includes the exact chunk of URL (the path to the templates), I set the header "Cache-Control: no-cache, must-revalidate":

$httpProvider.interceptors.push(function($q, ngToast) {
    return {
        request: function(config) {
            if (config.url.includes('some_url_to_your_template')) {
                Object.assign(config.headers, {"Cache-Control": "no-cache, must-revalidate"});
            }
            return config;
        }
    };
});
I've been researching back and forth on this issue, which is quite simple: modern browsers (Chrome/FF) cache things, HTML pages among others. When you release a new version, Angular GETs these templates; however, the browser serves a cached version of these pages and not the new, updated version. I've read about 2000 articles on how to work around this... None of the "meta" tags worked for me (for instance: Using <meta> tags to turn off caching in all browsers?). The only thing that works is manually managing the versions of the files by adding some param value, e.g. http://bla.com?random=39399339. However, this is really annoying and extremely tough to maintain if "clear caching" is only sometimes needed (mainly between versions). Is there really no simple, controlled way to manually "clear cache", either on the server or the client side? P.S. Angular templates make it even tougher to manage.
Prevent browser cache of angular templates
Just change your web URL to a local path URL. Try this code:

NSBundle *bundle = [NSBundle mainBundle];
NSString *moviePath = [bundle pathForResource:@"Movie" ofType:@"m4v"];
NSURL *movieURL = [[NSURL fileURLWithPath:moviePath] retain];
MPMoviePlayerController *theMovie = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
theMovie.scalingMode = MPMovieScalingModeAspectFill;
[theMovie play];
I am developing an iPhone application that streams video from a URL directly into a local cache; I then need to play the video in a movie player while it is still downloading into the cache. I followed http://lists.apple.com/archives/cocoa-dev/2011/Jun/msg00844.html, but I couldn't get it to work exactly. I am able to download the video into the cache, but I couldn't play the video from the cache. So how can I play it while it's downloading?
Play video from cache on iPhone programmatically
A workaround that works for me is to create an additional accessor backed by @OneToMany:

@OneToMany(cascade={}, fetch=FetchType.EAGER, mappedBy="a")
public Set<B> getBSet() {};

@Transient
public B getB() {
    return b.iterator().next();
}

I'm not very happy with this solution, but it works and I can't find another way.
I have code like:

@Entity
@Table(name = "A")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class A {
    @OneToOne(cascade={CascadeType.ALL}, fetch=FetchType.EAGER, mappedBy="a")
    public B getB() {};
}

@Entity
@Table(name = "B")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class B {
    @OneToOne(cascade={}, fetch=FetchType.LAZY)
    @JoinColumn(name="A_ID")
    public A getA() {};
}

Each time A is loaded, there is a query for B. Why is A.getB() not cached after A is loaded, and is it possible to cache it?
Hibernate not caching my OneToOne relationship on the inverse side
There are two typos in Dan Udey's rewrite example (and I can't comment on it); it should rather be:

RewriteCond %{REQUEST_URI} ^/images/cached/
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule (.*) /images/generate.php?$1 [L]

Regards.
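For context, a minimal sketch of what the generate.php endpoint behind that rule might look like - the hash scheme, paths and resize step here are assumptions for illustration, not the poster's actual code:

<?php
// The rewrite above passes the original path as the query string.
$path = $_SERVER['QUERY_STRING'];              // e.g. 200x200/guid.jpg
$cached = sys_get_temp_dir() . '/' . md5($path) . '.jpg';

if (!is_file($cached)) {
    // Parse the requested size and GUID out of $path, load the source
    // image, resize it (e.g. with GD), and write the result to $cached.
}

header('Content-Type: image/jpeg');
header('Cache-Control: public, max-age=2592000'); // let browsers cache too
readfile($cached);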
What would be the best-practice way to handle caching of images using PHP? The filename is currently stored in a MySQL database and is renamed to a GUID on upload, along with the original filename and alt tag. When the image is put into the HTML pages, it is done so using a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly - the source image may be smaller) the file size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the hash as its name exists in the TMP directory), it sends the file from the application TMP directory. If the hashed filename does not exist, it is created, written to disk, and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)
Best way to cache resized images using PHP and MySQL
You might also be running a too-new version of Java. Downgrading to Java 1.8 via https://adoptopenjdk.net/ fixed this issue for me. See BUG! exception in phase 'semantic analysis'
I am new to Android Studio and I keep getting this error. I have researched and tried deleting .gradle, closing the program and restarting, checking power-save mode, and cleaning and rerunning. Any other ideas to try would be greatly appreciated. It worked perfectly last night and now I am getting this error:

Error: Could not open cp_init class cache for initialization script 'C:\Users\Owner\AppData\Local\Temp\asLocalRepo6.gradle' (C:\Users\Owner\.gradle\caches\2.10\scripts\asLocalRepo6_4rdykqo5vjpjfhlk1g3pwkx76\cp_init). java.io.FileNotFoundException: C:\Users\Owner\.gradle\caches\2.10\scripts\asLocalRepo6_4rdykqo5vjpjfhlk1g3pwkx76\cp_init\cache.properties (The system cannot find the file specified)

Thanks!
Could not open cp_init class cache for initialization script
This behaviour is enabled with a meta tag named apple-mobile-web-app-capable. Details (and other meta tags useful for iPhone web apps): https://developer.apple.com/library/content/documentation/AppleApplications/Reference/SafariHTMLRef/Articles/MetaTags.html

<meta name="apple-mobile-web-app-capable" content="yes">

To set a nice icon for your app, you can specify a URL for your icon: https://developer.apple.com/library/content/documentation/AppleApplications/Reference/SafariWebContent/ConfiguringWebApplications/ConfiguringWebApplications.html

<link rel="apple-touch-icon" href="/custom_icon.png" />

and a startup screen:

<link rel="apple-touch-startup-image" href="/startup.png" />

Data can be cached locally: you can store data using the various HTML5 JavaScript APIs and the cache manifest.
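A minimal sketch of the cache-manifest part (the file names here are examples): reference a manifest from the html element, and list the resources that should be available offline.

<html manifest="offline.appcache">

and in offline.appcache:

CACHE MANIFEST
# v1 - change this comment to force clients to re-download
index.html
app.js
style.css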
How do I go about allowing my webapp to be installed as an icon on a user's homescreen? Is the data cached locally, so that the webapp can be run when the user is outside of 3G? I did a quick google, but my search terms were lacking. I noticed that Google Buzz allowed me to install locally, and I'm wondering what the process is for creating web apps, and if they get special treatment (full caching/running offline).
Install webapp to homescreen on iPhone?
I highly recommend "The Service Worker Lifecycle" as an authoritative source of information about the different stages of a service worker's installation and updating. To summarize some info from that article, as it applies to your question: The service worker first enters the installing phase, and however many install listeners you've registered, they will all get a chance to execute. As you suggest, Workbox creates its own install listener to handle precaching. Only if every install listener completes without error will the service worker move on to the next stage, which might either be waiting (if there is already an open client using the previous version of the service worker) or activating (if there are no clients using the previous version of the service worker). skipWaiting(), if you choose to use it, will bypass the waiting stage regardless of whether or not there are any open clients using the previous version of the service worker. Calling skipWaiting() will not accomplish anything if any of the install listeners failed, because the service worker will never leave the installing phase. It's basically a no-op. The one thing that you should be careful about is using skipWaiting() when you are also using lazy-loading of versioned, precached assets. As the article warns: Caution: skipWaiting() means that your new service worker is likely controlling pages that were loaded with an older version. This means some of your page's fetches will have been handled by your old service worker, but your new service worker will be handling subsequent fetches. If this might break things, don't use skipWaiting(). Because lazy-loading precached, versioned assets is a much more common thing to do in 2018, Workbox does not call skipWaiting() for you by default. It's up to you to opt-in to using it.
I use Workbox to pre-cache assets required to render the app shell, including a basic version of index.html. Workbox assumes that index.html is available in the cache; otherwise, page navigation fails because I have this registered in my service worker:

workbox.routing.registerNavigationRoute('/index.html');

I also have the self.skipWaiting() instruction in the install listener:

self.addEventListener('install', e => {
  self.skipWaiting();
});

As I understand it, there are 2 install listeners now:

One that's registered by Workbox for pre-caching assets (including index.html)
One that I registered manually in my Service Worker

Is it possible for self.skipWaiting() to succeed while Workbox's install listener fails? This would lead to a problematic state where assets don't get pre-cached but the Service Worker is activated. Is such a scenario possible, and should I protect against it?
Workbox: the danger of self.skipWaiting()
Adding this will make it work:

ExpiresByType text/x-javascript "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType application/x-javascript "access plus 1 month"

(From the comments: for at least one reader, application/javascript was the type that was missing.)
I am trying to modify my .htaccess file by specifying an expiration for resources. It has worked for images but not for JavaScript files: when running GTmetrix, it still recommends that the JavaScript files need expiration. I have tried "application/javascript" and "application/x-javascript", but to no avail. Not sure what I am doing wrong. Here is my code:

## EXPIRES CACHING ##
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType text/css "access 1 month"
ExpiresByType application/pdf "access 1 month"
ExpiresByType application/javascript "access 1 week"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresByType image/x-icon "access 1 year"
ExpiresDefault "access 2 days"
</IfModule>
## EXPIRES CACHING ##
Leverage browser caching | modifying .htaccess file | - not working for javascript files
As nruth suggests, Rails' built-in cache store is probably what you want. Try:

def get_listings
  Rails.cache.fetch(:listings) { get_listings! }
end

def get_listings!
  Hpricot.XML(open(xml_feed))
end

fetch() retrieves the cached value for the specified key, or writes the result of the block to the cache if it doesn't exist. By default, the Rails cache uses file store, but in a production environment, memcached is the preferred option. See section 2 of http://guides.rubyonrails.org/caching_with_rails.html for more details.
I have an expensive (time-consuming) external request to another web service I need to make, and I'd like to cache it. So I attempted to use this idiom, by putting the following in the application controller:

def get_listings
  cache(:get_listings!)
end

def get_listings!
  return Hpricot.XML(open(xml_feed))
end

When I call get_listings! in my controller everything is cool, but when I call get_listings, Rails complains that no block was given. And when I look up that method, I see that it does indeed expect a block, and additionally it looks like that method is only for use in views. So I'm guessing that although it wasn't stated, the example is just pseudocode. So my question is: how do I cache something like this? I tried various other ways but couldn't figure it out. Thanks!
How do I cache a method with Ruby/Rails?
Expiration times are useful when you don't need precise information, you just want it to be accurate to within a certain time. So you cache your data for (say) five minutes. When the data is needed, check the cache. If it's there, use it. If not (because it expired), then go and compute the value anew. Some cached values are based on a large set of data, and invalidating the cache or writing new values to it is impractical. This is often true of summary data, or data computed from a large set of original data.
Memcached provides a cache expiration time option, which specifies how long objects are retained in the cache. Assuming all writes are through the cache I fail to understand why one would ever want to remove an object from the cache. In other words, if all write operations update the cache before the DB, then the cache can never contain a stale object, so why remove it? One possible argument is that the cache will grow indefinitely if objects are never removed, but memcached allows you to specify a maximum size. Once this size is reached, memcached uses a least-recently-used (LRU) algorithm to determine which items to remove. To summarise, if a sensible maximum size has been configured, and all writes are through the cache, why would you want to expire objects after a certain length of time? Thanks, Don
memcached expiration time
Problem 1: you "want my password to be forgotten" by git. Problem 2 (implied): contradictory configuration settings.

Answer:

git config --unset-all credential.helper
git config --global --unset-all credential.helper
git config --system --unset-all credential.helper

Explanation: Git configuration is specified in three places:

(repository_home)/.git/config - for the subject repository.
~/.gitconfig - for this particular user.
/etc/gitconfig - for all users on this system.

The commands noted above will remove all settings related to credentials at the repository, user and system level... which (I think) answers your question. However, it sounds like your problem may be limited to having some sort of configuration contradiction related to one option of credential.helper, cache. If you'd prefer to reset only that option, do this:

git config --unset credential.helper 'cache'
git config --global --unset credential.helper 'cache'
git config --system --unset credential.helper 'cache'

...then set the timeout at the appropriate level, any of (note that there is no --set flag; assignment is implicit):

git config credential.helper 'cache --timeout=600'
git config --global credential.helper 'cache --timeout=600'
git config --system credential.helper 'cache --timeout=600'

For more, see the excellent documentation here: git config command, git credential caching
I want my password to be forgotten, so that I have to type it again. I have set up this:

git config credential.helper 'cache --timeout=600'

but much later on - several days - it still remembers the password and does not ask for it again... git version 1.7.10.4 (on Ubuntu). Did I run into a bug? (I see similar questions, but none I found answers this...) EDIT: or am I missing something? EDIT: now I know commit is local and push is remote. BUT my commits (with the RabbitVCS Git nautilus addon) seem to be performing the push, as the remote repo is being updated... When I issue push, it does ask for the password... but with the commit command it does not ask AND performs the remote update; I checked that 4 hours ago my commit updated the remote server :(
git credential.helper=cache never forgets the password?
In short, no: not at the SQL Server end. It will of course load the data into memory if possible, and cache the execution plan - so subsequent calls may be faster - but it can't cache the results. Options:

- tune the plan; the sort sounds aggressive - could you perhaps denormalize some data or add an index (perhaps even a clustered index)? There may be other things we can do with the query if you show it (but tuning without a fully working DB is guesstimation at best)
- cache the results at the web server, if it is sensible to do so
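A sketch of the index suggestions in T-SQL - the table and column names are invented, since the actual query wasn't shown:

-- Physically order the table by the column you sort on,
-- so the ORDER BY becomes a simple scan instead of a sort.
CREATE CLUSTERED INDEX IX_Listings_SortKey
    ON dbo.Listings (SortKey);

-- Or persist a pre-sorted projection as an indexed view
-- (indexed views require SCHEMABINDING and a unique clustered index):
CREATE VIEW dbo.ListingSummary WITH SCHEMABINDING AS
    SELECT Id, SortKey, Title FROM dbo.Listings;
GO
CREATE UNIQUE CLUSTERED INDEX IX_ListingSummary
    ON dbo.ListingSummary (SortKey, Id);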
There is a certain query that is being called from an ASP.NET page. I studied the execution plan of that query in Management Studio, and 87% of it is a sort. I badly need the sorting, or else the data displayed would be meaningless. Is there any way I can request SQL Server to cache a sorted result set, so it returns the data faster on subsequent runs? Or is SQL Server smart enough to handle the caching, and am I making a mistake by trying to force it to cache results, if that is even possible? Any relevant information will be highly appreciated; thanks a lot in advance :) UPDATE: I just read in an article that creating a view with a clustered index will increase performance because the index persists the view's data to disk. Is this true? How do I go about doing this? Any articles?
Can i request SQL Server to cache a certain result set?
Java Collections provide LinkedHashMap out of the box, which is well-suited to building caches. You probably don't have this in Java ME, but you can grab the source code here: http://kickjava.com/src/java/util/LinkedHashMap.java.htm If you can't just copy-paste it, looking at it should get you started implementing one for inclusion in your mobile app. The basic idea is just to include a linked list through the map elements. If you keep this updated whenever someone does put or get, you can efficiently track access order and use order. The docs contain instructions for building an MRU cache by overriding the removeEldestEntry(Map.Entry) method. All you really have to do is make a class that extends LinkedHashMap and override the method like so:

private static final int MAX_ENTRIES = 100;

protected boolean removeEldestEntry(Map.Entry eldest) {
    return size() > MAX_ENTRIES;
}

There's also a constructor that lets you specify whether you want the class to store things in order by insertion or by use, so you've got a little flexibility for your eviction policy, too:

public LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)

Pass true for use-order and false for insertion order.
What would be the best way to implement a most-recently-used cache of objects? Here are the requirements and restrictions... Objects are stored as key/value Object/Object pairs, so the interface would be a bit like Hashtable get/put A call to 'get' would mark that object as the most recently used. At any time, the least recently used object can be purged from the cache. Lookups and purges must be fast (As in Hashtable fast) The number of Objects may be large, so list lookups are not good enough. The implementation must be made using JavaME, so there is little scope for using third-party code or neat library classes from the standard Java libraries. For this reason I'm looking more for algorithmic answers rather than recommendations of off-the-peg solutions.
How to implement a most-recently-used cache
.order_by performs sorting at the database level. Here is an example. We store the lazy queryset in the variable results; no query has been made yet:

results = SampleModel.objects.filter(field_A="foo")

Touch the results, for example by iterating:

for r in results:
    # here the query was sent to the database
    ...

Now, if we do it again, no request to the database will be made, as we already have the result of this exact query:

for r in results:
    # no query sent to the database
    ...

But when you apply .order_by, the query is different, so Django has to send a new request to the database:

for r in results.order_by('?'):
    # new query sent to the database
    ...

Solution: when you do a query in Django and you know you will get all elements from it (i.e., no OFFSET and LIMIT), you can fetch them from the database once and process them in Python afterwards:

results = list(SampleModel.objects.filter(field_A="foo"))  # evaluates the queryset into a list

At that line the query is made, and you have all elements in results. If you need a random order, do it in Python now, for example:

import random
random.shuffle(results)

After that, results will be in random order without an additional query being sent to the database.
Here is sample code in Django.

[Case 1] views.py:

from sampleapp.models import SampleModel
from django.core.cache import cache

def get_filtered_data():
    result = cache.get("result")
    # make cache if result does not exist
    if not result:
        result = SampleModel.objects.filter(field_A="foo")
        cache.set("result", result)
    return render_to_response('template.html', locals(), context_instance=RequestContext(request))

template.html:

{% for case in result %}
    <p>{{ case.field_A }}</p>
{% endfor %}

In this case, no query is generated after the cache is made. I checked it with django_debug_toolbar.

[Case 2] views.py - added one line, result = result.order_by('?'):

from sampleapp.models import SampleModel
from django.core.cache import cache

def get_filtered_data():
    result = cache.get("result")
    # make cache if result does not exist
    if not result:
        result = SampleModel.objects.filter(field_A="foo")
        cache.set("result", result)
    result = result.order_by('?')
    return render_to_response('template.html', locals(), context_instance=RequestContext(request))

template.html - same as the previous one.

In this case, it generated a new query even though I cached the filtered queryset. How can I apply random ordering without an additional query? I can't put order_by('?') in when making the cache (e.g. result = SampleModel.objects.filter(field_A="foo").order_by('?')), because then even the random order is cached. Is this related to 'Django querysets are lazy'? Thanks in advance.
Django : random ordering(order_by('?')) makes additional query
You could use a plugin as suggested by mkoryak, or you could use the following (no plugin required - jQuery only):

// jQuery - wait until images (and other resources) are loaded
$(window).load(function(){
    // All images, css style sheets and external resources are loaded!
    alert('All resources have loaded');
});

Using the above method, you can also be sure that all the CSS stylesheets are loaded as well (to make sure your page is displayed properly when Isotope kicks in). "DOM ready" fires when the DOM is ready (i.e. the markup):

$(document).ready(function(){ ... });

"Window load" waits for all the resources and then fires:

$(window).load(function(){ ... });

Note (by @DACrosby): load() won't always fire if the images are cached (i.e., they're not presently being loaded from the site - you're using your local copy).
I've got something I've put together using jQuery Isotope here: http://jsbin.com/eziqeq/6/edit It seems to work in general, but on first load of a new tab, the Isotope plugin sets the height of the wrapper element to 0. If I refresh the page it does work, and sets the height of the parent element based on the items found inside. Any help would be greatly appreciated; I can't think why this isn't working on first load but is on subsequent reloads... unless it's perhaps something to do with caching the images it loads? EDIT: This is a caching issue in WebKit browsers, as it works in Firefox; and on a working tab, when I clear the cache and refresh the page, it won't work until refreshed again.
jQuery isotope on first load doesn't work, How do I wait for all resources/images to be loaded?
This will disable caching for jQuery ajax:

jQuery.ajaxSetup({
    cache: false
});
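If you'd rather not disable caching globally, a per-request sketch (the URL and data mirror the question; the success handler is a placeholder):

$.ajax({
    url: '/Order/JSON',
    data: { order: $("#Id").val() },
    dataType: 'json',
    cache: false,  // jQuery appends a timestamp parameter so IE can't serve a stale copy
    success: function (data) { /* ... */ }
});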
Hi! My JS requests JSON from the controller to edit an existing object - a populated dropdown list. The view then sends the current values from my autosuggest dropdown, so the new value can be compared to the old one and the new values stored. It is like a list of persons: when I load the page, there are some persons in my dropdown, and I can add or delete persons. This is my controller:

[HttpGet]
public JsonResult JSON(int order)
{
    IEnumerable<Person> persons = dataServ.Envolvidos.GetPerson(order);
    return this.Json(new { Result = persons }, JsonRequestBehavior.AllowGet);
}

And my JSON call:

$.getJSON("/Order/JSON", { order: $("#Id").val() }, function (data) {
    ...
});

Everything is going fine, except that IE is caching this JSON, so when I send the new values and come back to edit the page again, the old values are shown instead of the new ones - even though the new values are stored in the database, as they should be. I tested on Chrome and Firefox: after I edit and come back to edit again, a new JSON call is made and the new values are there, unlike in IE. Am I missing something? What should I do so the JSON result isn't cached?
Json is being cached incorrectly
I just went through System.Web.Caching.Cache in Reflector. It seems like everything that involves the expiry date is marked internal. The only place I found public access to it was through the Cache.Add and Cache.Insert methods. So it looks like you are out of luck, unless you want to go through reflection, which I wouldn't recommend unless you really, really need that date. But if you wish to do it anyway, here is some code that does the trick:

private DateTime GetCacheUtcExpiryDateTime(string cacheKey)
{
    object cacheEntry = Cache.GetType().GetMethod("Get", BindingFlags.Instance | BindingFlags.NonPublic).Invoke(Cache, new object[] { cacheKey, 1 });
    PropertyInfo utcExpiresProperty = cacheEntry.GetType().GetProperty("UtcExpires", BindingFlags.NonPublic | BindingFlags.Instance);
    DateTime utcExpiresValue = (DateTime)utcExpiresProperty.GetValue(cacheEntry, null);
    return utcExpiresValue;
}

Since .NET 4.5, the internal getter of HttpRuntime.Cache was replaced with a static variant, and thus you will need to invoke the static variant instead:

object cacheEntry = Cache.GetType().GetMethod("Get").Invoke(null, new object[] { cacheKey, 1 });
Is it possible to get the expiry DateTime of an HttpRuntime.Cache object? If so, what would be the best approach?
How can I get the expiry datetime of an HttpRuntime.Cache object?
Congratulations for realising that writing your own can be more trouble than it initially appears! I would check out the Guava cache solution. Guava is a proven library and the caches are easily available (and configurable) via a fluent factory API. All Guava caches, loading or not, support the method get(K, Callable<V>). This method returns the value associated with the key in the cache, or computes it from the specified Callable and adds it to the cache. No observable state associated with this cache is modified until loading completes. This method provides a simple substitute for the conventional "if cached, return; otherwise create, cache and return" pattern.
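A minimal sketch of that pattern with Guava (the one-hour TTL, the key type, and the expensiveCalculation helper are assumptions for illustration):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

class ListingCache {
    private final Cache<String, List<String>> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(1, TimeUnit.HOURS)   // entries go stale after an hour
            .build();

    List<String> get(String key) throws ExecutionException {
        // Returns the cached value, or runs the Callable (once) and caches its result.
        return cache.get(key, () -> expensiveCalculation(key));
    }

    private List<String> expensiveCalculation(String key) {
        // hypothetical slow computation
        return Collections.singletonList(key);
    }
}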
I need to frequently access the result of a time-consuming calculation. The result changes infrequently, so I have to recalculate the data from time to time, but it is OK to use the outdated result for a while. What would be the easiest way to do this, and is there an existing library method or design pattern? I am thinking of something like:

private static List myCachedList = null;
...
// refresh list once in 3600 seconds
if (needsRefresh(myCachedList, 3600)) {
    // run the calculation
    myCachedList = ...
}
// use either updated or previous value from here on

A proper implementation might not be trivial - it might have to deal with thread safety, race conditions etc. - so I would rather use a proven implementation than roll my own here.
Simple Java caching library or design pattern? [closed]
The problem you were asked about in the interview is the so-called cache miss storm: a scenario in which many users trigger regeneration of the cache at once, hitting the DB in the process. To prevent this, first set a soft and a hard expiration date. Let's say the hard expiration is 1 day and the soft one 1 hour. The hard one is actually set in the cache server; the soft one is stored in the cache value itself (or under another key in the cache server). The application reads from the cache, sees that the soft time has expired, sets the soft time 1 hour ahead, and hits the database. This way the next request will see the already-updated time and won't trigger another cache update - it will possibly read stale data, but the data itself will already be in the process of regeneration. The next point: you should have a procedure for cache warm-up, i.e. instead of a user triggering the cache update, a process in your application pre-populates the new data. The worst-case scenario is e.g. restarting the cache server, when you don't have any data at all. In that case you should fill the cache as fast as possible, and that's where a warm-up procedure can play a vital role. Even if you don't have a value in the cache, a good strategy is to "lock" the cache entry (mark it as being updated), allow only one query through to the database, and handle it in the application by requesting the resource again after a given timeout.
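A rough sketch of the soft/hard-expiry idea in PHP - the $cache client (Memcached-style), the key name, the TTLs, and the expensiveDbQuery helper are all assumptions:

// $cache is assumed to be a Memcached-style client.
$entry = $cache->get('items');

if ($entry !== false && time() < $entry['soft_expires']) {
    return $entry['data'];                        // fresh enough
}

if ($entry !== false) {
    // Soft TTL passed: push it forward so only this request regenerates,
    // while concurrent requests keep reading the (slightly stale) data.
    $entry['soft_expires'] = time() + 3600;
    $cache->set('items', $entry, 86400);          // hard TTL: 1 day
}

$data = expensiveDbQuery();                       // hypothetical helper
$cache->set('items', ['data' => $data, 'soft_expires' => time() + 3600], 86400);
return $data;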
I was asked this question in an interview: For a high traffic website, there is a method (say getItems()) that gets called frequently. To prevent going to the DB each time, the result is cached. However, thousands of users may be trying to access the cache at the same time, and so locking the resource would not be a good idea, because if the cache has expired, the call is made to the DB, and all the users would have to wait for the DB to respond. What would be a good strategy to deal with this situation so that users don't have to wait? I figure this is a pretty common scenario for most high-traffic sites these days, but I don't have the experience dealing with these problems--I have experience working with millions of records, but not millions of users. How can I go about learning the basics used by high-traffic sites so that I can be more confident in future interviews? Normally I would start a side project to learn some new technology, but it's not possible to build out a high-traffic site on the side :)
Dealing with concurrency issues when caching for high-traffic sites
Your wrapper function creates a new inner() function each time you call it. And that new function object is decorated at that time, so the end result is that each time outter() is called, a new lru_cache() is created and that'll be empty. An empty cache will always have to re-calculate the value. You need to create a decorator that attaches the cache to a function created just once per decorated target. If you are going to convert to a tuple before calling the cache, then you'll have to create two functions:

from functools import lru_cache, wraps

def np_cache(function):
    @lru_cache()
    def cached_wrapper(hashable_array):
        array = np.array(hashable_array)
        return function(array)

    @wraps(function)
    def wrapper(array):
        return cached_wrapper(tuple(array))

    # copy lru_cache attributes over too
    wrapper.cache_info = cached_wrapper.cache_info
    wrapper.cache_clear = cached_wrapper.cache_clear

    return wrapper

The cached_wrapper() function is created just once per call to np_cache() and is available to the wrapper() function as a closure. So wrapper() calls cached_wrapper(), which has a @lru_cache() attached to it, caching your tuples. I also copied across the two function references that lru_cache() puts on a decorated function, so they are accessible via the returned wrapper as well. In addition, I also used the @functools.wraps decorator to copy across metadata from the original function object to the wrapper, such as the name, annotations and documentation string. This is always a good idea, because it means your decorated function will be clearly identified in tracebacks, when debugging and when you need to access documentation or annotations. The decorator also adds a __wrapped__ attribute pointing back to the original function, which would let you unwrap the decorator again if need be.
I am trying to make a cache decorator for functions with numpy array input parameters:

from functools import lru_cache
import numpy as np
from time import sleep

a = np.array([1, 2, 3, 4])

@lru_cache()
def square(array):
    sleep(1)
    return array * array

square(a)

But numpy arrays are not hashable:

TypeError                Traceback (most recent call last)
<ipython-input-13-559f69d0dec3> in <module>()
----> 1 square(a)

TypeError: unhashable type: 'numpy.ndarray'

So they need to be converted to tuples. I have this working and caching correctly:

@lru_cache()
def square(array_hashable):
    sleep(1)
    array = np.array(array_hashable)
    return array * array

square(tuple(a))

But I wanted to wrap it all up in a decorator; so far I have tried:

def np_cache(function):
    def outter(array):
        array_hashable = tuple(array)

        @lru_cache()
        def inner(array_hashable_inner):
            array_inner = np.array(array_hashable_inner)
            return function(array_inner)

        return inner(array_hashable)
    return outter

@np_cache
def square(array):
    sleep(1)
    return array * array

But caching is not working: computation is performed but not cached properly, as I am always waiting 1 second. What am I missing here? I'm guessing lru_cache isn't getting the context right and it's being instantiated on each call, but I don't know how to fix it. I have tried blindly throwing the functools.wraps decorator here and there with no luck.
Cache decorator for numpy arrays
You can use ActiveRecord::QueryCache.uncached like this:

User.find_by_email('[email protected]')
User.find_by_email('[email protected]') # Will return cached result

User.uncached do
  User.find_by_email('[email protected]')
  User.find_by_email('[email protected]') # Will query the database again
end

In a controller, it would look something like this:

def show # users#index action
  User.uncached do
    @user = User.find_by_email('[email protected]')
    @another_user = User.find_by_email('[email protected]') # Will query database
  end
  User.find_by_email('[email protected]') # Will *not* query database, as we're outside of the User.uncached block
end

Obviously, in a model, you just have to do:

class User < ActiveRecord::Base
  def self.do_something
    uncached do
      self.find_by_email('[email protected]')
      self.find_by_email('[email protected]') # Will query database
    end
  end
end

User.do_something # Will run both queries
In Ruby on Rails you can find records from the database with this syntax:

<model_name>.find_by_<field_name>()

Examples: User.find_by_email('[email protected]'), User.find_by_id(1), ... Some time ago, if I am not wrong, I read somewhere that you can explicitly disable caching for 'find' operations, but I cannot remember how. Can someone help me remember?
Ruby on Rails: How to set up "find" options in order to not use cache
The cache object is thread-safe, but HttpContext.Current will not be available from background threads. This may or may not apply to you here - it's not obvious from your code snippet whether or not you are actually using background threads - but in case you are now, or decide to at some point in the future, you should keep this in mind. If there's any chance that you'll need to access the cache from a background thread, then use HttpRuntime.Cache instead. In addition, although individual operations on the cache are thread-safe, sequential lookup/store operations are obviously not atomic. Whether or not you need them to be atomic depends on your particular application. If it could be a serious problem for the same query to run multiple times - i.e. if it would produce more load than your database is able to handle, or if it would be a problem for a request to return data that is immediately overwritten in the cache - then you would likely want to place a lock around the entire block of code. However, in most cases you would really want to profile first and see whether or not this is actually a problem. Most web applications/services don't concern themselves with this aspect of caching because they are stateless and it doesn't matter if the cache gets overwritten.
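If you do decide you need atomicity, a double-checked lock around the question's block might look like this (the lock object is new; the 15-second expiry and names mirror the question):

private static readonly object CacheLock = new object();

var pblDataList = (List<blabla>)HttpRuntime.Cache.Get("pblDataList");
if (pblDataList == null)
{
    lock (CacheLock)
    {
        // Re-check inside the lock: another thread may have filled it while we waited.
        pblDataList = (List<blabla>)HttpRuntime.Cache.Get("pblDataList");
        if (pblDataList == null)
        {
            pblDataList = dc.ExecuteQuery<blabla>(@"SELECT blabla").ToList();
            HttpRuntime.Cache.Add("pblDataList", pblDataList, null,
                DateTime.Now.AddSeconds(15), Cache.NoSlidingExpiration,
                CacheItemPriority.Normal, null);
        }
    }
}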
I am using Cache in a web service method like this:

var pblDataList = (List<blabla>)HttpContext.Current.Cache.Get("pblDataList");

if (pblDataList == null)
{
    var PBLData = dc.ExecuteQuery<blabla>(@"SELECT blabla");
    pblDataList = PBLData.ToList();
    HttpContext.Current.Cache.Add("pblDataList", pblDataList, null,
        DateTime.Now.Add(new TimeSpan(0, 0, 15)),
        Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
}

But I wonder, is this code thread-safe? The web service method is called by multiple requesters, and more than one requester may attempt to retrieve data and add it to the cache at the same time while the cache is empty. The query takes 5 to 8 seconds. Would introducing a lock statement around this code prevent any possible conflicts? (I know that multiple queries can run simultaneously, but I want to be sure that only one query is running at a time.)
Using 'HttpContext.Current.Cache' safely
Poking around with Reflector reveals that the interval is hardcoded. Expiry is handled by an internal CacheExpires class, whose static constructor contains:

_tsPerBucket = new TimeSpan(0, 0, 20);

_tsPerBucket is readonly, so there can't be any configuration setting that modifies it later. The timer that will trigger the check for expired items is then set up in CacheExpires.EnableExpirationTimer():

DateTime utcNow = DateTime.UtcNow;
TimeSpan span = _tsPerBucket - new TimeSpan(utcNow.Ticks % _tsPerBucket.Ticks);
this._timer = new Timer(new TimerCallback(this.TimerCallback), null, span.Ticks / 0x2710L, _tsPerBucket.Ticks / 0x2710L);

The calculation of span ensures that the timer fires exactly on :00, :20, :40 seconds, though I can't see any reason to bother. The method that the timer calls is internal, so I don't think there's any way to set up your own timer to call it more often (ignoring reflection). However, the good news is that you shouldn't really have any reason to care about the interval: Cache.Get() checks that the item hasn't expired, and if it has, it removes the item from the cache immediately and returns null. Therefore you'll never get an expired item from the cache, even though expired items may stay in the cache for up to 20 seconds.
I noticed that ASP.NET cache items are inspected (and possibly removed) every 20 seconds (and, oddly enough, each time at HH:MM:00, HH:MM:20 and HH:MM:40). I spent about 15 minutes looking for how to change this parameter, without any success. I also tried setting the following in web.config, but it did not help:

<cache privateBytesPollTime="00:00:05" />

I'm not trying to do anything crazy, but it would be nice if it was, say, 5 seconds instead of 20, or at least 10 for my application.
Changing frequency of ASP.NET cache item expiration?
@herau You were right - I had to name the bean! The problem was that there was another bean named "cacheManager", so in the end I didn't annotate Application, and created a configuration like this:

@EnableCaching
@Configuration
public class CacheConf {
    @Bean(name = "springCM")
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("entities");
    }
}

and in MyEntityRepository:

@Cacheable(value = "entities", cacheManager = "springCM")
MyEntity findByName(String name);
I'm trying to replace my old:

@Component
public interface MyEntityRepository extends JpaRepository<MyEntity, Integer> {
    @QueryHints({@QueryHint(name = CACHEABLE, value = "true")})
    MyEntity findByName(String name);
}

with this:

@Component
public interface MyEntityRepository extends JpaRepository<MyEntity, Integer> {
    @Cacheable(value = "entities")
    MyEntity findByName(String name);
}

because I want to use advanced caching features like no caching of null values, etc. To do so, I followed the Spring tutorial https://spring.io/guides/gs/caching/ If I don't annotate my Application.java, caching simply doesn't work. But if I add @EnableCaching and a CacheManager bean:

package my.application.config;

@EnableWebMvc
@ComponentScan(basePackages = {"my.application"})
@Configuration
@EnableCaching
public class Application extends WebMvcConfigurerAdapter {

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("entities");
    }

    // ...
}

I get the following error at startup:

java.lang.IllegalStateException: No CacheResolver specified, and no bean of type CacheManager found. Register a CacheManager bean or remove the @EnableCaching annotation from your configuration

I get the same error if I replace my CacheManager bean with a CacheResolver bean like:

@Bean
public CacheResolver cacheResolver() {
    return new SimpleCacheResolver(new ConcurrentMapCacheManager("entities"));
}

Am I missing something?
Unable to use Spring @Cacheable and @EnableCaching
This is called the Standby List under Windows. You can purge it globally, for one volume, or for one file handle.

Globally: You can do it using a readily available program from Microsoft Technet, by selecting Empty → Empty Standby List. Programmatically, you can achieve the same thing using the undocumented NtSetSystemInformation function; for details see line 239 in a program which does the same thing as the previously mentioned one, among other things.

For one file handle: Open the file with FILE_FLAG_NO_BUFFERING. The documentation is lying insofar as it says that you open the file without buffering; the true, observable behavior on all Windows versions from Windows 98 up to Windows 8 is that it simply throws away the complete cache contents for that file (for everyone!) and doesn't repopulate the cache from reads that use this handle.

For one complete volume: A volume handle is just a file handle (a somewhat special one, but still), so assuming you have appropriate privileges to open a volume handle, you can do the same for a complete volume. Also, as pointed out in the answer here, there seems to be a feature/bug (feature-bug?) which allows you to invalidate a volume's cache even without proper privileges, merely by attempting to open it without shared writes, at least under one recent version of Windows. It makes perfect sense that this happens when any open which is valid for writing succeeds, as you may change filesystem-internal data by doing so (insofar as it is a feature), but apparently it also works when opening the volume fails (which is a bug).
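A sketch of the per-file trick from above (the path is a placeholder): merely opening and closing a handle with FILE_FLAG_NO_BUFFERING discards the cached pages for that file.

#include <windows.h>

int main(void)
{
    // Opening a file with FILE_FLAG_NO_BUFFERING drops its cached pages.
    HANDLE h = CreateFileW(L"C:\\data\\bigfile.bin",
                           GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL,
                           OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING,
                           NULL);
    if (h != INVALID_HANDLE_VALUE) {
        CloseHandle(h);   // cache for this file is now invalidated
    }
    return 0;
}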
I assume Windows has a concept similar to Linux's page cache for keeping in memory data from disks, like files, executables and dynamic libraries. I wonder if it is possible at all to disable such a cache, or at the very least to clear/flush it.
Disable or flush page cache on Windows
The Spray folks have a spray-caching module which uses Futures. There is a plain LRU version and a version that allows you to specify an explicit time-to-live, after which entries are expired automatically. The use of Futures obviously allows you to write code that does not block. What is really cool, though, is that it solves the "thundering herd" problem as a bonus. Say, for example, that a bunch of requests come in at once for the same entry which is not in the cache. In a naive cache implementation, a hundred threads might get a miss on that entry in the cache and then run off to generate the same data for that cache entry, but of course 99% of that is just wasted effort. What you really want is for just one thread to go generate the data and all 100 requestors to see the result. This happens quite naturally if your cache contains Futures: the first requestor immediately installs a Future in the cache, so only the first requestor misses. All 100 requestors get that Future for the generated result.
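A minimal sketch of how that looks with spray-caching (the capacity, TTL, and the loadFromDb stub are assumptions):

import scala.concurrent.Future
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import spray.caching.{Cache, LruCache}

object CachedLookup {
  val cache: Cache[String] = LruCache(maxCapacity = 500, timeToLive = 1.hour)

  def cachedOp(key: String): Future[String] =
    cache(key) {
      // Runs at most once per key while the entry lives; concurrent
      // callers for the same key all receive the same Future.
      Future { loadFromDb(key) }
    }

  def loadFromDb(key: String): String = s"value for $key" // hypothetical slow lookup
}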
I know Guava has an excellent caching library, but I am looking for something more Scala/functional friendly where I can do things like cache.getOrElse(query, { /* expensive operation */ }). I also looked at Scalaz's Memo, but that does not have LRU expiration.
LRUCache in Scala?
Quicksort changes the array in place - in the array it is working on [unlike merge sort, for instance, which creates a separate array for it]. Thus, it benefits from locality of reference. A cache rewards multiple accesses to the same place in memory, since only the first access actually has to be fetched from main memory - the rest are served from the cache, which is much faster than access to memory. Merge sort, for instance, needs many more main-memory [RAM] accesses, since every auxiliary array you create is accessed in RAM again. Trees are even worse, since two sequential accesses in a tree are not likely to be close to each other. [The cache is filled in blocks, so for sequential accesses only the first byte in the block is a "miss" and the rest are "hits".]
I have seen many places say quicksort is good because it fits cache-related behavior, such as this claim on Wikipedia: "Additionally, quicksort's sequential and localized memory references work well with a cache" (http://en.wikipedia.org/wiki/Quicksort). Could anyone give me some insight into this claim? How is quicksort related to the cache? What does "cache" mean in that statement? Why is quicksort better for a cache? Thanks.
How is quicksort related to cache?
The quick and dirty way would be to fire off a Task from Application_Start. But I've found that it's nice to wrap this functionality into a bit of infrastructure, so that you can create an ~/Admin/CacheInfo page to let you monitor the progress, state, and exceptions that may occur while loading up the cache.
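A minimal sketch of the quick-and-dirty version, reusing the FindXForX method from the question (the service instance, the location keys, and the logging are assumptions; requires System.Threading.Tasks):

protected void Application_Start()
{
    // ... usual route/area/bundle registration ...

    // Fire and forget: warm the popular locations in the background
    // so the first visitor doesn't pay the 40-60 second query cost.
    Task.Factory.StartNew(() =>
    {
        var warmer = new LocationService(); // hypothetical service exposing FindXForX
        foreach (var key in new[] { "new-york", "california" })
        {
            try { warmer.FindXForX(key); }
            catch (Exception) { /* log and keep warming the rest */ }
        }
    });
}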
I have an ASP.NET MVC 3 / .NET web application which is heavily data-driven, mainly around the concept of "Locations" (New York, California, etc). Anyway, we have some pretty busy database queries, which get cached after they finish. E.g.:

public ICollection<Location> FindXForX(string x)
{
    var result = _cache.Get(x.ToKey()) as Location; // try cache
    if (result == null)
    {
        result = _repo.Get(x.ToKey()); // call db
        _cache.Add(x.ToKey(), result); // add to cache
    }
    return result;
}

But I don't want the unlucky first user to be waiting for this database call. The database call can take anywhere from 40-60 seconds, well over the default timeout for an ASP.NET request. I want to "pre-warm" these calls for certain "popular" locations (e.g. New York, California) when my app starts up, or shortly after. I don't want to simply do this in Global.asax (Application_Start), because the app would take too long to start up (I plan to pre-cache around 15 locations, so that's a few minutes of work). Is there any way I can fire off this logic asynchronously? Maybe a service on the side is a better option? The only other alternative I can think of is an admin page with buttons for these actions, so an administrator (e.g. me) can fire off these queries once the app has started up. That would be the easiest solution. Any advice?
Strategies for "Pre-Warming" an ASP.NET Cache
Add QML_DISABLE_DISK_CACHE (set to 1) to your environment variables. You should be able to do it inside your application via qputenv - put it somewhere in main() before loading any QML content.
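For example (a sketch of a typical Qt Quick main(); the qrc path is a placeholder):

#include <QGuiApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    // Must be set before the QML engine loads anything.
    qputenv("QML_DISABLE_DISK_CACHE", "1");

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}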
Qt 5.8 was supposed to come with the optional use-ahead-of-time QtQuick compiler; instead it arrived with a sort-of-JIT compiler, a feature that's enabled by default and caches compiled QML files on disk in order to improve startup performance and reduce memory usage. The feature, however, arrived with serious bugs which greatly diminish - or in my case completely negate - its benefits, as I didn't have a problem with startup times to begin with, and testing didn't reveal any memory-usage improvements whatsoever. So what I would like to do is opt out of that feature in my project, but I can't find how to do that. Going back to Qt 5.7.1 is not an option, since my project relies on other new features introduced with 5.8.
Can QML caching in Qt 5.8 be disabled for a particular project?
You are correct. Think of FIFO as cars going through a tunnel. The first car to go in the tunnel will be the first one to go out the other side. Think of the LRU cache as cleaning out the garage. You will throw away items that you have not used for a long time, and keep the ones that you use frequently. An evolution of that algorithm (an improvement to simple LRU) would be to throw away items that have not been used for a long time, and are not expensive to replace if you need them after all.
I'm really sorry for such a simple question; I just want to be sure that I understand the FIFO cache model correctly, and I hope that someone will help me with that :) An LRU cache deletes the entry that was accessed least recently if the cache is full. A FIFO cache deletes the entry that was added earlier than all other entries if it needs free space (for example, if 'a' - 'v' - 'f' - 'k' are the entries in the cache and 'a' is the oldest entry, then the cache will delete 'a' if it needs free space). Am I right?
FIFO cache vs LRU cache
Thanks for the replies, guys! As jasonmp85 pointed out, LinkedHashMap has a constructor that allows access order; I missed that bit when I looked at the API docs. The implementation also looks quite efficient (see below). Combined with a max-size cap for each entry, that should solve my problem. I will also look closely at SoftReference. Just for the record, Google Collections seems to have a pretty good API for SoftKeys and SoftValues and Maps in general. Here is a snippet from the Java LinkedHashMap class that shows how they maintain LRU behavior:

/**
 * Removes this entry from the linked list.
 */
private void remove() {
    before.after = after;
    after.before = before;
}

/**
 * Inserts this entry before the specified existing entry in the list.
 */
private void addBefore(Entry<K,V> existingEntry) {
    after = existingEntry;
    before = existingEntry.before;
    before.after = this;
    after.before = this;
}

/**
 * This method is invoked by the superclass whenever the value
 * of a pre-existing entry is read by Map.get or modified by Map.set.
 * If the enclosing Map is access-ordered, it moves the entry
 * to the end of the list; otherwise, it does nothing.
 */
void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
    if (lm.accessOrder) {
        lm.modCount++;
        remove();
        addBefore(lm.header);
    }
}
Is there a simple, efficient Map implementation that allows a limit on the memory to be used by the map? My use case is that I want to allocate dynamically most of the memory available at the time of its creation, but I don't want an OutOfMemoryError at any time in the future. Basically, I want to use this map as a cache, but I want to avoid heavy cache implementations like EHCache. My need is simple (at most an LRU algorithm). I should further clarify that objects in my cache are char[] or similar primitives that will not hold references to other objects. I can put an upper limit on the max size of each entry.
Java fixed memory map
Kernel-mode caching essentially handles caching requests at the OS level, so content stored there can be served without ever going down the rest of the usual pipeline (i.e. it does not have to go down to the ASP.NET or IIS-level caches to check for the contents): the request hits the initial cache (http.sys), finds what it needs and sends it back, all without ever proceeding further down the pipeline. As a result, it's usually quite fast. A limitation, however, is that it does not support many user-level features such as authentication and authorization, so it may not fit all scenarios. User-mode caching, on the other hand, fills in the gaps where kernel mode cannot be used, which is primarily authorized/authenticated content (as it requires a check to see if the user can actually access the contents), but there are many other scenarios that can cause the http.sys cache not to be used. With regards to actually checking whether content is or is not being cached (and possibly why), you can use FREB (Failed Request Event Buffering). The following command can be used to find out which content is cached in kernel mode:

netsh http show cachestate
What is the difference between kernel-mode caching and user-mode caching, and how do I track both?
difference between kernel mode and user mode caching in IIS 8.0
Supposing there is a single server running nginx + PHP + MySQL instances with some free RAM remaining, the easiest way to use that RAM to cache data is simply to increase the buffer caches of the MySQL instances: databases already use LRU-like mechanisms to handle their buffers. Now, if you need to move part of the processing away from the databases, then pre-caching may be an option. Before talking about memcached/redis, a shared-memory cache integrated with PHP such as APC will be efficient provided only one server is considered (actually more efficient than redis/memcached in that case). Both memcached and redis can be considered for remote caching (i.e. sharing the cache between various nodes). I would not rule out redis for this: it can easily be configured for the purpose. Both allow you to define a memory limit, and handle the cache with LRU-like eviction behavior. However, I would not use Couchbase here, which is an elastic (i.e. meant to be used across several nodes) NoSQL key/value store, not a cache. You could probably move some data from your MySQL instances to a Couchbase cluster, but using it just for caching is over-engineering IMO.
I have a Debian server with about 16GB RAM that I'm using with nginx and several heavy MySQL databases, plus some custom PHP apps. I'd like to implement a memory cache between MySQL and PHP, but the databases are too large to store everything in RAM. From the research I've done so far, an LRU cache seems the better fit. Does this rule out Redis? Couchbase is also a consideration.
Memcached, Redis, or Couchbase [closed]
In your filter have this line: chain.doFilter(httpRequest, new AddExpiresHeaderResponse(httpResponse)); Where the response wrapper looks like: class AddExpiresHeaderResponse extends HttpServletResponseWrapper { public static final String[] CACHEABLE_CONTENT_TYPES = new String[] { "text/css", "text/javascript", "image/png", "image/jpeg", "image/gif", "image/jpg" }; static { Arrays.sort(CACHEABLE_CONTENT_TYPES); } public AddExpiresHeaderResponse(HttpServletResponse response) { super(response); } @Override public void setContentType(String contentType) { if (contentType != null && Arrays.binarySearch(CACHEABLE_CONTENT_TYPES, contentType) > -1) { Calendar inTwoMonths = GeneralUtils.createCalendar(); inTwoMonths.add(Calendar.MONTH, 2); super.setDateHeader("Expires", inTwoMonths.getTimeInMillis()); } else { super.setHeader("Expires", "-1"); super.setHeader("Cache-Control", "no-store, no-cache, must-revalidate"); } super.setContentType(contentType); } } In short, this creates a response wrapper, which, on setting the content type, adds the expires header. (If you want, you can add whatever other headers you need as well). I've been using this filter + wrapper and it works. See this question on one specific problem that this solves, and the original solution by BalusC.
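For context, here is a minimal sketch of the enclosing filter that the first line above assumes; the class name is illustrative, and GeneralUtils.createCalendar() in the wrapper is presumably the answerer's own helper (roughly Calendar.getInstance()):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class CacheHeaderFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        // Wrap the response so that setContentType() can inject the cache headers
        chain.doFilter(httpRequest, new AddExpiresHeaderResponse(httpResponse));
    }

    public void destroy() { }
}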
Does anyone know how to go about coding a servlet filter that will set cache headers on a response for a given file/content type? I've got an app that serves up a lot of images, and I'd like to cut down on bandwidth for hosting it by having the browser cache the ones that don't change very often. Ideally, I'd like to be able to specify a content type and have it set the appropriate headers whenever the content type matches. Does anyone know how to go about doing this? Or, even better, have sample code they'd be willing to share? Thanks!
Servlet filter for browser caching?
Clear the svn authentication from %HOMEPATH%\.subversion\auth\svn.simple. This resets the svn authentication, and git prompts for a username the next time. Note that earlier I deleted the authentication from under %appdata%\subversion\auth\svn.simple, and that did not work.
How do I get git-svn to forget the svn authentication details? We have a pairing machine running Windows Server 2008 on which we have a git repo, and we check in to a central Subversion repository. I want git to prompt me for my Subversion authentication details each time I check in. I have removed the Subversion files from under %APPDATA%\subversion\auth\svn.simple. Now whenever I use a regular Subversion client I get prompted for my Subversion auth, but git-svn still remembers the credentials. Is there any way I can make it forget the authentication details?
Git-SVN clear auth-cache
I am not positive that the no-cache meta tag is the way to go. It negates all caching and rather defeats the purpose of quickly accessible pages. Also, AFAIK, the meta tag works per page, so if you have a page without it that references your JS, the JS will be cached. The widely accepted way of preventing JS files (and, again, CSS) from being cached is to differentiate the requests for them. Say you have:

<script type="text/javascript" src="somescript.js"></script>

This one will be cached (unless the above meta tag is present). What you want is for the above line to look different (URL-wise) on each page load, like so:

<script type="text/javascript" src="somescript.js?some-randomly-generated-string"></script>

Same goes for CSS. If you were using some sort of JS framework, it would take care of that for you had you given it some kind of "no-cache" configuration option. You can, of course, do it in pure JS too; some sort of date-derived string is an option. Now, normally you would not want to negate all caching, so the way to do that is to add a version parameter to your URL:

<script type="text/javascript" src="somescript.js?version=1.0.0"></script>

and manage your scripts from there.

EDIT: There is no need for any additional extension. Unless I am sorely mistaken, "Chrome Developer Tools" is built into all Chrome versions (in beta and dev, for sure) and is accessible by pressing Ctrl-Shift-I. There, in the "Network" tab, you can see all your requests, their content and headers.
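A small sketch of the pure-JS, date-based variant mentioned above (the script name is a placeholder):

// Append a timestamp so the browser treats each load as a fresh URL
var script = document.createElement('script');
script.src = 'somescript.js?' + new Date().getTime();
document.getElementsByTagName('head')[0].appendChild(script);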
I recently made a website and I made a change to a .js file, but when I delete the .js file from the FTP server and upload the new one, the new file doesn't show up on the website. I checked the source code behind the .js file on the website and it's not right: it's showing the source of the old file, not the new one, even though the old one is gone. Is that because my browser cached the .js file? Note: I have this source <meta http-equiv="cache-control" content="no-cache" /> on my page to stop the browser from caching my page, and I know that works on the HTML, but with that source there, do the resource files still get cached? I don't have that line of code on my other pages, just my home page, but the .js file is still referenced on the other pages, so perhaps that is how it is getting cached? Furthermore, is there a way to check your browser's cache? I use Chrome. Edit: I just cleared my browser's cache and reloaded the website, and the file worked as it should now, so that means the file did get cached. So now my question becomes: how do I prevent a resource file from being cached?
Are the .js files being cached?
Another technique is to store your static images, CSS and JS on another server (such as a CDN) which has the Expires header set properly. The advantage of this is two-fold: the Expires header will encourage browsers and proxies to cache these static files, and the CDN will offload serving static files from your server. By using another domain name for your static content, browsers will also download faster, because serving resources from four or five different hostnames increases parallelization of downloads. If the CDN is configured properly and uses a cookieless domain, then you don't have unnecessary cookies going back and forth.
Any suggestions on how to do browser caching within an ASP.NET application? I've found some different methods online but wasn't sure which would be best. Specifically, I would like to cache my CSS and JS files. They do change, but usually only once a month at most.
Browser Caching in ASP.NET application
You can use a decorator and update UI Router's $templateFactory service to append a suffix to templateUrl function configureTemplateFactory($provide) { // Set a suffix outside the decorator function var cacheBuster = Date.now().toString(); function templateFactoryDecorator($delegate) { var fromUrl = angular.bind($delegate, $delegate.fromUrl); $delegate.fromUrl = function (url, params) { if (url !== null && angular.isDefined(url) && angular.isString(url)) { url += (url.indexOf("?") === -1 ? "?" : "&"); url += "v=" + cacheBuster; } return fromUrl(url, params); }; return $delegate; } $provide.decorator('$templateFactory', ['$delegate', templateFactoryDecorator]); } app.config(['$provide', configureTemplateFactory]);
I have noticed that from time to time I'll make a change to one of the templates within my AngularJS application and that at runtime, the change won't be visible. Rather, I'll have to refresh the application and if that fails, go to the path of the template itself and refresh it in order to see this change. What's the best way to prevent caching of these templates like this? Ideally, I'd like them to be cached during the current use of the Angular application, but that the next time I load the page, they retrieve the latest and greatest templates without having to manually refresh. I'm using ui-router if that makes any difference here. Thanks!
Disable template caching in AngularJS with ui-router
I think it ultimately depends on what you are caching. If you want to cache the result of rendered pages, that is tightly coupled to the HTTP nature of the request, which suggests an ActionFilter-level caching mechanism. If, on the other hand, you want to cache the data that drives the pages themselves, then you should consider model-level caching. In that case, the controller doesn't care when the data was generated; it just performs the logic operations on the data and prepares it for viewing. Another argument for model-level caching is if you have other dependencies on the model data that are not attached to your HTTP context. For example, I have a web app where most of my Model is abstracted into a completely different project. This is because there will be a second web app that uses the same backing, AND there's a chance we might have a non-web-based app using the same data as well. Much of my data comes from web services, which can be performance killers, so I have model-level caching that the controllers and views know absolutely nothing about.
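As a rough illustration of model-level caching with the System.Web.Caching cache the question mentions, here is a C# sketch; the repository, type and key names are invented for the example:

using System;
using System.Web;
using System.Web.Caching;

public class NewsRepository
{
    // Model-level cache: callers never know whether the data came from
    // the cache or from the expensive backing services.
    public NewsList GetRecentNews()
    {
        var cached = HttpRuntime.Cache["recent_news"] as NewsList;
        if (cached != null)
            return cached;

        NewsList fresh = LoadFromServices(); // the expensive call
        HttpRuntime.Cache.Insert("recent_news", fresh, null,
            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        return fresh;
    }

    private NewsList LoadFromServices() { return new NewsList(); }
}

public class NewsList { }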
I need to cache some data using System.Web.Caching.Cache. Not sure if it matters, but the data does not come from a database; it comes from a plethora of custom objects. ASP.NET MVC is fairly new to me, and I'm wondering where it makes sense for this caching to occur: Model or Controller? At some level it makes sense to cache at the Model level, but I don't necessarily know the implications of doing this (if any). If caching were done at the Controller level, would that affect all requests, or just the current HttpContext? So... where should application data caching be done, and what's a good way of actually doing it? Update: Thanks for the great answers! I'm still trying to work out where it makes most sense to cache in different scenarios. If one is caching the entire page, then keeping it in the view makes sense, but where to draw the line when it's not the entire page?
Where should caching occur in an ASP.NET MVC application?
You need to call trimComponentCache() on QQmlEngine after you have set the Loader's source property to an empty string. In other words:

helpLoader.source = "";
// call trimComponentCache() here!!!
helpLoader.source = "../dynamic.qml";

In order to do that, you'll need to expose some C++ object to QML which has a reference to your QQmlEngine (lots of examples in Qt and on StackOverflow to help with that). trimComponentCache tells QML to forget about all the components it's not currently using, and does just what you want.

Update - explaining in a bit more detail: For example, somewhere you define a class that takes a pointer to your QQmlEngine and exposes the trimComponentCache method:

class ComponentCacheManager : public QObject {
    Q_OBJECT
public:
    ComponentCacheManager(QQmlEngine *engine) : engine(engine) { }

    Q_INVOKABLE void trim() { engine->trimComponentCache(); }

private:
    QQmlEngine *engine;
};

Then when you create your QQuickView, bind one of the above as a context property:

QQuickView *view = new QQuickView(...);
...
view->rootContext()->setContextProperty(QStringLiteral("componentCache"), new ComponentCacheManager(view->engine()));

Then in your QML you can do something like:

helpLoader.source = "";
componentCache.trim();
helpLoader.source = "../dynamic.qml";
I have main.qml and dynamic.qml files, and I want to load dynamic.qml in main.qml using Loader {}. The content of dynamic.qml is dynamic, and another program may change and overwrite it. So I wrote some C++ code that detects changes to the file and fires a signal. My problem is that I don't know how to force the Loader to reload the file. This is my current work:

MainController {
    id: mainController
    onInstallationHelpChanged: {
        helpLoader.source = "";
        helpLoader.source = "../dynamic.qml";
    }
}

Loader {
    id: helpLoader
    anchors.fill: parent
    anchors.margins: 60
    source: "../dynamic.qml"
}

I think that the QML engine caches the dynamic.qml file, so whenever I want to reload the Loader, it shows the old content. Any suggestion?
QML Loader not shows changes on .qml file
To clear application data, please try this; I think it will help you.

public void clearApplicationData() {
    File cache = getCacheDir();
    File appDir = new File(cache.getParent());
    if (appDir.exists()) {
        String[] children = appDir.list();
        for (String s : children) {
            if (!s.equals("lib")) {
                deleteDir(new File(appDir, s));
                Log.i("TAG", "**************** File /data/data/APP_PACKAGE/" + s + " DELETED *******************");
            }
        }
    }
}

public static boolean deleteDir(File dir) {
    if (dir != null && dir.isDirectory()) {
        String[] children = dir.list();
        for (int i = 0; i < children.length; i++) {
            boolean success = deleteDir(new File(dir, children[i]));
            if (!success) {
                return false;
            }
        }
    }
    return dir.delete();
}

As for when to call it: your application will have an activity the user exits from (usually the main activity); override its onDestroy() and invoke the clearing code there.
What I want to do is clear the application's cache memory on exit. I can do this manually with these steps: Apps --> Manage Apps --> "My App" --> Clear Cache. But I want to do this programmatically on exit of the application. Please help me. Thanks in advance.
Clear Application cache on exit in android
Caches usually have more management logic than a map, which is nothing more than a fairly simple data structure. Some concepts JCaches may implement:

- Expiration: entries may expire and get removed from the cache after a certain period of time or since last use.
- Eviction: elements get removed from the cache if space is limited. There can be different eviction strategies, e.g. LRU, FIFO, ...
- Distribution: e.g. in a cluster, while maps are local to a JVM.
- Persistence: elements in the cache can be persistent and present after restart; the contents of a map are simply lost.
- More memory: cache implementations may use more memory than the JVM heap provides, using a technique called BigMemory where objects are serialized into a separately allocated byte buffer. This JVM-external memory is managed by the OS (paging) and not the JVM.
- The option to store keys and values either by value or by reference (in maps you handle this yourself).
- The option to apply security.

Some of these are general concepts of JCache; some are implementation details of specific cache providers.
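To make the contrast concrete, a minimal sketch of the standard javax.cache (JSR-107) API; the cache name and the one-hour expiry are arbitrary choices for the example:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class JCacheExample {
    public static void main(String[] args) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        // Unlike a HashMap, the cache carries management configuration:
        // here, entries expire one hour after creation.
        MutableConfiguration<String, String> config =
            new MutableConfiguration<String, String>()
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_HOUR))
                .setStoreByValue(true); // store by value, not by reference

        Cache<String, String> cache = manager.createCache("example", config);
        cache.put("key", "value");
        System.out.println(cache.get("key"));
    }
}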
I've gone through javax.cache.Cache to understand its usage and behavior. It's stated that JCache is a Map-like data structure that provides temporary storage of application data. Both JCache and HashMap store their elements in local heap memory and have no persistence behavior by default. By implementing a custom CacheLoader and CacheWriter we can achieve persistence. Other than that, when should it be used?
When to use Java Cache and how it differs from HashMap?
StackExchange.Redis had a race condition that could lead to leaked connections under some conditions. This has been fixed in build 1.0.333 or newer. If you want to confirm this is the issue you are hitting, get a crash dump of your client application and look at the objects on the heap in a debugger. Look for a large number of StackExchange.Redis.ServerEndPoint objects. Also, several users have had bugs in their own code that resulted in leaked connection objects. This is often because their code tries to re-create the ConnectionMultiplexer object when it sees failures or a disconnected state. There is really no need to re-create the ConnectionMultiplexer, as it has logic internally to re-create the connection as necessary. Just make sure to set abortConnect to false in your connection string. If you do decide to re-create the connection object, make sure to dispose the old object before releasing all references to it. The following is the pattern we recommend:

private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get { return lazyConnection.Value; }
}
I am using the Azure Redis Cache in a high-load scenario with a single machine querying the cache. The machine gets and sets roughly 20 items per second; this increases during the day and drops at night. So far, things have been working fine. Today I realized that the "Connected Clients" metric is extremely high, although I only have one client that just constantly gets and sets items (the metric screenshot is not reproduced here). My code looks like this:

public class RedisCache<TValue> : ICache<TValue>
{
    private IDatabase cache;
    private ConnectionMultiplexer connectionMultiplexer;

    public RedisCache()
    {
        ConfigurationOptions config = new ConfigurationOptions();
        config.EndPoints.Add(GlobalConfig.Instance.GetConfig("RedisCacheUrl"));
        config.Password = GlobalConfig.Instance.GetConfig("RedisCachePassword");
        config.ConnectRetry = int.MaxValue; // retry connection if broken
        config.KeepAlive = 60;              // keep connection alive (ping every minute)
        config.Ssl = true;
        config.SyncTimeout = 8000;          // 8 seconds timeout for each get/set/remove operation
        config.ConnectTimeout = 20000;      // 20 seconds to connect to the cache
        connectionMultiplexer = ConnectionMultiplexer.Connect(config);
        cache = connectionMultiplexer.GetDatabase();
    }

    public virtual bool Add(string key, TValue item)
    {
        return cache.StringSet(key, RawSerializationHelper.Serialize(item));
    }

I am not creating more than one instance of this class, so this is not the problem. Maybe I misunderstand the connections metric and it really means the number of times I access the cache; however, that would not make much sense in my opinion. Any ideas, or anyone with a similar problem?
Why are connections to Azure Redis Cache so high?
Caching a PHP array is pretty easy:

file_put_contents($path, '<?php return '.var_export($my_array,true).';?>');

Then you can read it back out:

if (file_exists($path)) $my_array = include($path);

You might also want to look into ADOdb, which provides caching internally.
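Applied to the question's 24-hour requirement, a small sketch; the cache path follows the question, and the id is cast to int so it is safe to use in a filename:

<?php
$id   = (int) $_GET['id'];              // sanitize before building the path
$path = __DIR__ . "/cache/{$id}.php";

if (file_exists($path) && (time() - filemtime($path)) < 86400) {
    // Cached copy is younger than 24h: reuse it
    list($ids, $thumbs, $artdesc) = include($path);
} else {
    // ... run the MySQL queries to fill $ids, $thumbs and $artdesc ...
    file_put_contents($path,
        '<?php return ' . var_export(array($ids, $thumbs, $artdesc), true) . ';');
}
?>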
Each time someone lands on my page list.php?id=xxxxx it re-runs some MySQL queries to return this:

$ids = array(..,..,..);     // not a big array - no longer than 50 number records
$thumbs = array(..,..,..);  // not a big array - no longer than 50 text records
$artdesc = "some text not very long"; // text field

Because the database I query is quite big, I would like to cache these results for 24h, perhaps in a file like xxxxx.php in a /cache/ directory, so I can use include("xxxxx.php") if it is present (or txt files, or any other way). Because the data is very simple, I believe it can be done with a few lines of PHP, with no need for memcached or other professional tools. Because my PHP is very limited, could someone just post the main PHP lines (or code) for this task? I would be very thankful!
The most simple way to cache MySQL query results using PHP?
Your best bet is to maximize register usage so that when you read a temporary you don't end up with extra (likely cached) memory accesses. The number of registers will depend on the system, and register allocation (the logic that maps your variables onto actual registers) will depend on the compiler. So your best bet is, I guess, to expect only one register and expect its size to be the same as the pointer's. That boils down to a simple for-loop dealing with the blocks interpreted as arrays of size_t.
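A sketch of that idea against the question's prototype; memcpy is used for the word-sized moves so the code stays portable in the face of alignment rules (compilers typically lower these fixed-size memcpy calls to single loads and stores):

#include <stddef.h>
#include <string.h>

void swap_elements_of_array(void *base, size_t size_of_element, int a, int b)
{
    unsigned char *pa = (unsigned char *)base + (size_t)a * size_of_element;
    unsigned char *pb = (unsigned char *)base + (size_t)b * size_of_element;
    size_t n = size_of_element;

    /* Swap one register-sized chunk at a time. */
    while (n >= sizeof(size_t)) {
        size_t tmp;
        memcpy(&tmp, pa, sizeof tmp);
        memcpy(pa, pb, sizeof tmp);
        memcpy(pb, &tmp, sizeof tmp);
        pa += sizeof tmp;
        pb += sizeof tmp;
        n  -= sizeof tmp;
    }

    /* Swap any remaining tail bytes. */
    while (n-- > 0) {
        unsigned char t = *pa;
        *pa++ = *pb;
        *pb++ = t;
    }
}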
What is the fastest way to swap two non-overlapping memory areas of equal size? Say, I need to swap (t_Some *a) with (t_Some *b). Considering space-time trade-off, will increased temporary space improve the speed? For example, (char *tmp) vs (int *tmp)? I am looking for a portable solution. Prototype: void swap_elements_of_array(void* base, size_t size_of_element, int a, int b);
C - fastest method to swap two memory blocks of equal size?
I found two methods for changing CMake variables. The first one is suggested in the previous answer:

cmake -U My_Var -D My_Var=new_value

The second approach (which I like somewhat more) is using CMake internal variables. In that case your variables will still be in the CMake cache, but they will be changed with each cmake invocation if they are specified with -D My_Var=.... The drawback is that these variables will not be visible from the GUI or from the list of the user's cache variables. I use the following approach for internal variables:

if (NOT DEFINED BUILD_NUMBER)
    set(BUILD_NUMBER "unknown")
endif()

It allows me to set the BUILD_NUMBER from the command line (which is especially useful on the CI server):

cmake -D BUILD_NUMBER=4242 <source_dir>

With that approach, if you don't specify your BUILD_NUMBER (but it was specified in previous invocations), it will use the cached value.
As I understand it, when you provide a variable via the command line with cmake (e.g. -DMy_Var=ON), that variable is stored inside the cache. When that variable is then accessed on future runs of the CMake script, it will always get the value stored inside the cache, ignoring any subsequent -DMy_Var=OFF parameters on the command line. I understand that you can force the cache variable to be overwritten inside the CMakeLists.txt file using FORCE or by deleting the cache file, however I would like to know if there is a nice way for the -DMy_Var=XXX to be effective every time it is specified? I have a suspicion that the answer is not to change these variables within a single build but rather have separate build sub-dirs for the different configs. Could someone clarify?
CMake override cached variable using command line
Yes. Actually, you must include external images in your manifest, or some browsers will not load them at all even if a network connection is available! (Unless you provide a NETWORK section, which may cause the images to be fetched every time, bypassing the regular browser cache.) The images will be cached (at least by Firefox, didn't test Chrome). The spec explicitly says: Offline application cache manifests can use absolute paths or even absolute URLs. http://manifest-validator.com/ also reports a manifest with external URLs as OK. I am not 100% sure this also applies to scripts, but a quick test with Firefox looked like the script is cached as expected.
I'm building an offline web application and want to use cache-manifest. Currently my cache-manifest looks like this:

CACHE MANIFEST
# Change the version number below each time we update a resource.
# Rev 1
index.html
photo.html
js/photo.js
css/photo.css
http://code.jquery.com/jquery-1.6.1.min.js
http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.js
http://code.jquery.com/mobile/1.0a4.1/jquery.mobile-1.0a4.1.min.css
http://maps.google.com/maps/api/js?sensor=false&region=GB

Is there any reason not to include external, CDN-hosted jQuery, jQuery Mobile and Google Maps files in the cache-manifest? I can't think of one, but I thought I would ask those wiser than myself :)
Is it OK to include external files in cache-manifest?
I have the same problem on iPhone. On iPad, though, I figured out a workaround. If your manifest contains less than 5MB of files the first time, and before calling window.applicationCache.update() you grow the manifest so that it still totals below 10MB, the update will work. If you keep doing that (growing the manifest by less than 5MB each time and then calling update()), you will see that the iPad can cache more than the 5MB limit. It is sad that Apple, having rejected Flash and MIDP in favour of HTML5 for web apps, gets this so wrong.
Does anyone know the max size of Safari's 'Offline Application Cache' on the iPad and iPhone? It looks like it's 5MB. Is there any way to enlarge this limit? Offline application cache docs: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/OfflineApplicationCache/OfflineApplicationCache.html
Max size iPad / iPhone Offline Application Cache
Responses in the 300 range are meant to be transparent. AFAIK, web browsers don't expose any of them to JavaScript, so handling the 303 yourself is not an option. Have you tried setting the cache property to false in ajaxSetup? It will append a timestamp to the request, preventing the browser from caching the response. You can also do that manually yourself. The browser should then match the first request URL to the second response.
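For reference, the jQuery side of that suggestion looks like this:

// Globally: every subsequent $.ajax call gets a '_=<timestamp>' parameter
$.ajaxSetup({ cache: false });

// Or per request:
$.ajax({
    url: '/some/endpoint',
    cache: false,
    success: function (data) { /* ... */ }
});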
When I send requests to a certain server, a 303 response comes back, followed by the requested response in combination with a 200 status code. The funny thing is that I only see this in my developer console's network view. When checking the status code and response of my $.ajax() request, it shows the response of the second request, along with a 200 status code. The problem is that the second request seems to be cached (despite the 200 status code), and I really need it to be non-cachable. Therefore I'd like to intervene in the forwarding that occurs with an HTTP 303 status code. I'd like my jQuery function to check for the status code, then send the GET request with explicit headers that tell the server not to cache the response. Well, I just don't know how to do this, since (as mentioned above) the jQuery.ajax method responds with the forwarded request's response and status code (200). Can you help me? Edit, from the spec: "10.3.4 303 See Other: The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable." Maybe I need to somehow prevent the user agent from redirecting itself, so I can do the redirect? Or is there a way to simply tell the server/browser not to cache the second request from the client side? Or to prevent it from redirecting my request? Or at least modify the second request before redirecting?
React to 303 status code in jquery ( Prevent from redirecting)
In Chrome you can use the FileSystem API (http://www.noupe.com/design/html5-filesystem-api-create-files-store-locally-using-javascript-webkit.html), which lets you save and read files from a sandboxed file system through the browser. Support elsewhere has not been confirmed as an addition to the HTML5 specification set, so it's only available in Chrome. (Note that the spec, File API: Directories and System, was later demoted to a W3C Note with a warning that it should not be referenced or used as a basis for implementation, so don't expect other browsers to pick it up.) You could also use IndexedDB, which is supported in all modern browsers. You can use either of these storage mechanisms inside a Service Worker to manage the loading and management of the content. However, I have to question why you would want to prevent yourself from ever updating your index.html.
I am designing a JavaScript secure loader. The loader is inlined in the index.html. The goal of the secure loader is to only load JavaScript resources are trusted. The contents of index.html are mostly limited to the secure loader. For security purposes, I want index.html (as stored in cache) to never change, even if my website is hacked. How can I cache index.html without the server being able to tamper with the cache? I am wondering if ServiceWorkers can help. Effectively, the index.html would register a service worker for fetching itself from an immutable cache (no network request is even made).
Permanent browser cache using ServiceWorker
Two easy options come to mind: Call [[NSURLCache sharedURLCache] cachedResponseForRequest:request] before you make the request, store the cached response, then do that again after you finish receiving data, and compare the two cached responses to see if they are the same. Make an initial request with the NSURLRequestReturnCacheDataDontLoad cache policy, and if that fails, make a second request with a more sane policy. The first approach is usually preferable, as the second approach will return data that exists in the cache even if it is stale. However, in some rare cases (e.g. an offline mode), that might be what you want, which is why I mentioned it.
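A rough Objective-C sketch of the first option; comparing cached responses like this is a heuristic rather than an official API, so treat it as an approximation:

NSURLSession *session = [NSURLSession sharedSession];
NSCachedURLResponse *before = [[NSURLCache sharedURLCache] cachedResponseForRequest:request];

NSURLSessionDataTask *task = [session dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    NSCachedURLResponse *after = [[NSURLCache sharedURLCache] cachedResponseForRequest:request];
    // If a cached response existed beforehand and the stored data is unchanged,
    // the response most likely came from the cache.
    BOOL likelyFromCache = (before != nil && [before.data isEqualToData:after.data]);
    NSLog(@"served from cache: %d", likelyFromCache);
}];
[task resume];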
I would like to determine whether the response from an NSURLSessionDataTask came from the cache or was served from the server. I'm creating my NSURLSessionDataTask from a request with request.cachePolicy = NSURLRequestUseProtocolCachePolicy;
How to know if NSURLSessionDataTask response came from cache?
What about checking out Microsoft Velocity? Another option, if you don't want to start using Microsoft CTP-ware, is to check out NCache, which allows distributed cache/session state management.
I am in the process of figuring out a cache strategy for our current setup. We have multiple web servers, and I wanted to know the best way to cache data in this environment. I have done research on memcached and the native ASP.NET caching, but wanted to get some feedback first. Should I go with a Linux box if I use memcached, or a Win32 port of memcached?
Best way to cache data
When configuring the disk tier in Ehcache 3.x there is a boolean value that indicates persistence: true means data will be preserved between JVM restarts if the CacheManager or UserManagedCache has been shut down properly using one of the close methods; false means data will not be preserved between JVM restarts, although the disk is used during cache operations. Note that false is the default. Usage depends on where your configuration is sourced from. In Java, use ResourcePoolsBuilder.disk(long size, MemoryUnit unit, boolean persistent) with the boolean as defined above. In XML, use <ehcache:disk unit="GB" persistent="true">100</ehcache:disk>, again with the boolean flag as defined above. So in order to achieve the equivalent of Strategy.LOCALTEMPSWAP in 2.x you can just work with the default. Note that as of 3.1.3 you can use a system property in the XML to configure the data folder location.
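A minimal Java sketch of a cache manager with a non-persistent (localTempSwap-like) disk tier, following the Ehcache 3 builder API; the names, sizes and data directory are illustrative:

import org.ehcache.PersistentCacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

public class DiskTierExample {
    public static void main(String[] args) {
        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .with(CacheManagerBuilder.persistence("/var/data/ehcache")) // data folder
            .withCache("bigCache",
                CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class,
                    ResourcePoolsBuilder.newResourcePoolsBuilder()
                        .heap(1000, EntryUnit.ENTRIES)
                        // 'false' = disk contents are NOT kept across restarts,
                        // i.e. the 2.x localTempSwap behavior
                        .disk(100, MemoryUnit.GB, false)))
            .build(true);

        // ... use the caches ...
        cacheManager.close();
    }
}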
In Ehcache 3.1.3 the 2.x API for setting the persistence strategy is missing; for instance, the enum net.sf.ehcache.config.PersistenceConfiguration.Strategy is no longer in the lib. I've read the docs (for version 3.1) but couldn't find anything about how to configure the persistence strategy, so I suppose version 3.x uses a different concept, or maybe the feature has been removed, but that sounds a bit odd. Can anyone tell me how to configure Ehcache 3.1.x to manage persistence like Strategy.LOCALTEMPSWAP? If it's not possible, is there any alternative or workaround?
Set persistence strategy to "localTempSwap" in EHCache 3.x
Found what I believe to be an acceptable solution at "How to force browser to reload cached CSS/JS files?" No idea how I missed this in my original investigation. For anyone who comes to this question, note I'm referring to the first answer on the linked page, which references Google's mod_pagespeed plugin for Apache. This works at the web-server level, thus "[it works] with PHP, rails, python, static HTML -- anything." This is precisely the kind of solution I was looking for. This tool, or something similar, should be in use by all web developers to keep caching logic orthogonal to the code itself.
I'm working on a moderately-sized web application and trying to come up with the best solution to make all browsers use the cache and only invalidate it when there is an update to the asset being loaded. According to the research I've done here and elsewhere, everyone seems to be in agreement that appending a ?v={version#} to an asset such as a css or js file is a great way to automatically invalidate the cache when an asset is updated. (As per Force browser to clear cache and Better way to prevent browser caching of JavaScript files) But it seems to me that this solution should be generalized to all assets that reside on a web server. So my question is, would it be a good practice to have a build script look through each src="" attribute across the entire website -- whether img, css, or js, and programmatically append ?={timestamp} where timestamp is the time when the file is last modified. This way whenever you push from dev to staging to production, only those files that have been modified will have a changed time stamp, and the browser will know to invalidate the cache for those files. Any flaws with that approach? NOTE: Thinking this over a bit more, timestamp would definitely be undesirable in the case of changes that are later reverted. Therefore, appending ?={md5(filecontents)} is a more robust approach. Nevertheless, the question about whether implementing this across all assets and all builds still stands.
Best Practice to manage all asset caching (images, css, js, everything)
Check the documentation at https://pipenv.readthedocs.io/en/latest/advanced/. You can use the environment variable PIPENV_CACHE_DIR to tell pipenv where to cache files, then include that directory in the cache.directories array. I do this in my gitlab-ci.yml configuration (very similar in syntax to Travis CI). I also cache the virtualenv itself, which speeds build time up quite a bit. My gitlab-ci.yml actually looks like this:

# WORKON_HOME sets where pipenv will place the virtualenv. We do this so that we can capture
# the environment in the cache for gitlab-ci.
# PIP_CACHE_DIR tells pip to cache our pip packages in the same path, so that we also
# cache the downloads.
variables:
  WORKON_HOME: .pipenv/venvs
  PIP_CACHE_DIR: .pipenv/pipcache

# Make sure gitlab-ci knows to always cache the .pipenv path
cache:
  key: pipenv
  paths:
    - .pipenv

(A follow-up comment notes that PIPENV_CACHE_DIR is mentioned but not actually set in this snippet; adding it to the variables list seemed to help.)
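Translated to Travis CI, a sketch along the same lines might look like this (untested; the cache paths are assumptions following the same pattern):

language: python
python:
  - "3.6"
env:
  global:
    - PIPENV_CACHE_DIR=$HOME/.cache/pipenv
install:
  - pip install pipenv
  - pipenv install --dev
cache:
  directories:
    - $HOME/.cache/pipenv
    - $HOME/.cache/pip
    - proj/static/node_modules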
The Travis documentation on caching does not specifically mention how to cache python dependencies installed from pipenv's Pipfile, rather than from pip's usual requirements.txt. I tried setting up pip caching per documentation anyway, but build times are not improved at all, and I see pipenv installing its deps on every run. This is the syntax I'm currently using - what is the correct syntax? (or is it even supported?) language: python python: - "3.6" cache: pip cache: directories: - proj/static/node_modules - $HOME/.cache/pip
Caching pipenv / Pipfile dependencies on TravisCI
Here's how I solved it: my /logout action, where the user's session is destroyed in the backend, redirects to /exit, which has an id attribute of exitPage. In my JavaScript I have asked jQuery Mobile to trigger when that page is about to be created. I then empty the DOM and redirect to the front page.

/exit:

<div data-role="page" id="exitPage"></div>

/my.js:

jQuery('#exitPage').live('pagebeforecreate', function(){
    jQuery(document).empty();
    window.location.replace('/');
});
When users log out from my mobile app, how can I make sure the cache is cleared? What I'm thinking of is redirecting /logout to a specific page that clears the cache and redirects to the front page, but how do I clear everything from the cache? I'm using jQuery Mobile 1.0b2pre.
jQuery clear cache on logout
Interesting problem; don't think I've tried to solve this before. I'm thinking you'll need to have a second request going from your front-facing PHP script to your server. This could be a simple call to http://localhost/test.php. If you use fopen-wrappers, you could use fread() to pull the output of test.php as it is rendered, and after each chunk is received, output it to the screen and append it to your test.html file. Here's how that might look (untested!):

<?php
$remote_fp = fopen("http://localhost/test.php", "r");
$local_fp = fopen("test.html", "w");
while ($buf = fread($remote_fp, 1024)) {
    echo $buf;
    fwrite($local_fp, $buf);
}
fclose($remote_fp);
fclose($local_fp);
?>
So, I'm looking for something more efficient than this: <?php ob_start(); include 'test.php'; $content = ob_get_contents(); file_put_contents('test.html', $content); echo $content; ?> The problems with the above: Client doesn't receive anything until the entire page is rendered File might be enormous, so I'd rather not have the whole thing in memory Any suggestions?
Streaming output to a file and the browser
I am not completely clear on the exact problem but another solution would be to have a Cache with softValues() instead of a maximum size or expiry time. Every time you access the cache value (in your example, start the computation), you should maintain state somewhere else with a strong reference to this value. This will prevent the value from being GCed. Whenever the use of this value drops to zero (in your example, the computation ends and its OK for the value to go away), you could remove all strong references. For example, you could use the AtomicLongMap with the Cache value as the AtomicLongMap key and periodically call removeAllZeros() on the map. Note that, as the Javadoc states, the use of softValues() does come with tradeoffs.
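A rough sketch of the AtomicLongMap pinning idea described above; Job and its methods are invented placeholders for the long-running computation:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.AtomicLongMap;

public class PinnedJobCache {
    private final LoadingCache<String, Job> cache = CacheBuilder.newBuilder()
        .softValues() // values become collectable only once no strong refs remain
        .build(new CacheLoader<String, Job>() {
            @Override public Job load(String key) { return startJob(key); }
        });

    // Strong references held here pin running jobs so the GC cannot reclaim them.
    private final AtomicLongMap<Job> pins = AtomicLongMap.create();

    public void run(String key) {
        Job job = cache.getUnchecked(key);
        pins.incrementAndGet(job);
        try {
            job.await(); // block until the long-running job completes
        } finally {
            pins.decrementAndGet(job);
            pins.removeAllZeros(); // periodically drop strong refs to idle jobs
        }
    }

    private Job startJob(String key) { return new Job(); }
    static class Job { void await() { } }
}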
I've got a Guava Cache (or rather, I am migrating from MapMaker to Cache) and the values represent long-running jobs. I'd like to add expireAfterAccess behavior to the cache, as it's the best way to clean it up; however, the job may still be running even though it hasn't been accessed via the cache in some time, and in that case I need to prevent it from being removed from the cache. I have three questions: Is it safe to reinsert the cache entry that's being removed during the RemovalListener callback? If so, is it threadsafe, such that there's no possible way the CacheLoader could produce a second value for that key while the RemovalListener callback is still happening in another thread? Is there a better way to achieve what I want? This isn't strictly/only a "cache" - it's critical that one and only one value is used for each key - but I also want to cache the entry for some time after the job it represents is complete. I was using MapMaker before and the behaviors I need are now deprecated in that class. Regularly pinging the map while the jobs are running is inelegant, and in my case, infeasible. Perhaps the right solution is to have two maps, one without eviction, and one with, and migrate them across as they complete. I'll make a feature request too - this would solve the problem: allow individual entries to be locked to prevent eviction (and then subsequently unlocked). [Edit to add some details]: The keys in this map refer to data files. The values are either a running write job, a completed write job, or - if no job is running - a read-only, produced-on-lookup object with information read from the file. It's important that there is exactly zero or one entry for each file. I could use separate maps for the two things, but there would have to be coordination on a per-key basis to make sure only one or the other is in existence at one time. Using a single map makes it simpler, in terms of getting the concurrency correct.
Is it safe to reinsert the entry from Guava RemovalListener?
There is a nice comparison of some commonly used backup utilities on the community wiki. I'd recommend Bacula.
I'm considering switching from Windows 7 to Ubuntu 13.04. On Windows, I'm using Acronis True Image 2012 for Backup, which has these three key features for me: Backup system partition and recover it via bootable CD (protects me from OS damage, I could be back and running with all my programs and settings in just one hour) Backup selected folders using differential/incremental backup (I back up every week and use 1 full backup + 7 chains. Chain backup usually takes just 10-15 mins and full backup every 2 months I could survive) Possibility to mount any backed-up version as read-only device, so I can naturally and transparently access and manipulate all my files in the backup. AES 256 Encryption would be also fine, but this could be solved another way - to store an entire backup of the encrypted partition. I have tried DejaDup which is preinstalled on Ubuntu, but I was not able to access backed up files another way than completely restore the backup, which is insufficient. What do you recommend me?
Ubuntu - alternative to Acronis True Image
There are ways to do this in straight SQL; if you're comfortable with that, go for it. This way is for devs comfortable in Rails: we pull the data out as JSON, and create users with new IDs in the new database from that JSON. Since you're pulling only one table, AND you want to reset the IDs in the new database, I recommend: bring a copy of the database locally with pgbackups, then:

File.open('yourfile.json', 'wb') {|file| file << User.all.to_json }

Connect to your new database, move yourfile.json up there, and then:

users_json = JSON.parse(File.read('yourfile.json'))
users_json.each do |json|
  json.delete("id")
  User.create(json)
end

Note (from the follow-up comments): whether password hashes and tokens survive this round trip depends on your authentication system, so you'll likely need to try it and see; for exporting/importing the raw database, see Heroku's import/export documentation.
We have two Heroku apps: the first one is production and the second is staging. I would like to pull data from one table in the production app (the users table, with all the users' data) and push it to the staging database. After a little research I found the addon called pgbackups, but I have a few concerns. Does this addon allow getting data from only one table, not the whole database? The second thing is this: let's say that in production there are users with IDs from 1 to 300, and in the staging version users with IDs from 1 to 10. How do we put those 300 users from production into staging such that they are renumbered starting from ID 11 (we would like to keep our staging users in the staging database as well)? Thank you
Heroku - how to pull data from one database and put it to another one?
Use COPY. With your ancient Postgres 8.1 you can't use a VIEW or a SELECT, but you can specify columns to export. COPY in modern Postgres can do a lot more. You really should be upgrading to a current version: Postgres 8.1 has been unsupported since 2010.
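For illustration, a sketch of the column-list form that 8.1 does support; the table and column names are placeholders:

-- Dump only selected columns of one table (server-side file path)
COPY my_table (col1, col2, col3) TO '/tmp/my_table_partial.txt';

-- In modern versions (8.2+) you could instead copy the result of a query:
-- COPY (SELECT col1, col2 FROM my_table WHERE ...) TO '/tmp/out.csv' CSV;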
How can I dump only some of the fields of a particular table under PostgreSQL 8.1? Can it be done with the pg_dump command, or is there another command? Could you please help with this? Thanks in advance!
how to dump part of fields of a special table under postgresql 8.1
It's to do with the way you pass the filepath to xp_dirtree, the only way I could get it working was with a temp table and dynamic SQL, like so: CREATE PROCEDURE [dbo].[spGetBackUpFiles] AS SET NOCOUNT ON BEGIN IF OBJECT_ID('tempdb..#table') IS NOT NULL DROP TABLE #table CREATE TABLE #table ( [filename] NVARCHAR(MAX) , depth INT , filefile INT ) DECLARE @backUpPath AS TABLE ( name NVARCHAR(MAX) , backuppath VARCHAR(256) ) DECLARE @SQL NVARCHAR(MAX) INSERT INTO @backUpPath EXECUTE [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory' DECLARE @backUpFilesPath AS NVARCHAR(MAX) = ( SELECT TOP 1 backuppath FROM @backUpPath ) SET @SQL = 'insert into #table EXEC xp_dirtree ''' + @backUpFilesPath + ''', 1, 1' EXEC(@SQL) SELECT * FROM #table WHERE [filename] like N'MASK[_]%' DROP TABLE #table END
I have a client - server desktop application (.NET) and client have to get list of available backup files stored in default back up folder (C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup) I decided to create a stored procedure which will return table with all needed files: CREATE PROCEDURE [dbo].[spGetBackUpFiles] AS SET NOCOUNT ON BEGIN declare @backUpPath as table ( name nvarchar(max), backuppath nvarchar(max) ) insert into @backUpPath EXECUTE [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory' declare @table as table ( filename nvarchar(max), depth int, filefile int ) declare @backUpFilesPath as nvarchar(max) = (select top 1 backuppath from @backUpPath) insert into @table EXEC xp_dirtree @backUpFilesPath, 1, 1 SELECT * FROM @table WHERE filename like N'MASK[_]%' END But I am getting following error: Msg 0, Level 11, State 0, Line 0 A severe error occurred on the current command. The results, if any, should be discarded. You can try this script on your machine... What can be the problem? Is there another way to get list of available backups (may be using SMO library) ?
SQL Server 2012 : getting a list of available backups
Don't do it this way. Use one of the existing well-maintained solutions for the purpose, like Bucardo, Londiste, Slony-I, etc.; see "replication" on the Pg wiki. Londiste at least can cope with being stopped and then resumed when you want it to catch up, so you can run it as a daily batch if you want. If all you're dealing with is an insert-only table, then you can avoid the need for full-fledged replication; all you need is something like:

psql -h host1 db1 -c \
  "\copy (SELECT * FROM the_table WHERE the_date = '2012-01-01') TO stdout" \
| psql -h host2 db2 -c \
  "\copy the_table FROM STDIN"

See the manual on COPY. You can do the same thing within your C# app by making two PostgreSQL connections, doing a COPY FROM on one and a COPY TO on the other, then copying the rows between them. You'll find Npgsql's support for COPY useful for this.
I have a PostgreSQL database with one table. Each day, I want to export the data WHERE date='whatever' so it dumps ONLY the data I've added today. Then I go to another database and import that dump file, but instead of overwriting what I already had, I want to append to it. I'm trying to do this in a C# console app. Any suggestion? Thank you.
incremental export and import postgresql C#
Directly from Blake, the creator of RestKit! Short answer: there is no ready-made solution for sync with RestKit (when offline), but Blake points at a small but very interesting starting point; if you have any other idea, please feel free to suggest it. I am still looking for the best way to do this. Blake Watters: "I've never done anything like this before. In theory you should be able to just upload the SQLite file, but I have no idea what kind of consistency guarantees you will have about the data. There is not any kind of built-in mechanism for implementing this, as it's very hard to generalize syncing behaviors to this kind of level. RestKit gives you the low-level components to implement something like this, and there are some interesting open source efforts underway trying to tackle the problem: https://github.com/eric-horacek/EHObjectSyncManager"
I am fairly new to RestKit and, in general, to syncing Core Data with a RESTful web service. To simplify this I have decided to use RestKit only for backing up the local store to our Rails backend. So here are two questions currently at the top of my list: 1) What is the best practice for using RestKit to back up Core Data? I was thinking of creating a local context that my app uses to do all the fetch/create/update/delete operations (locally, persisting them to the persistent store), then letting RestKit, via RKManagedObjectStore, do the backup in the background using its own MOC every 5 minutes. 2) Does RestKit provide some handy methods to retry HTTP requests when offline? How do I manage situations such as a local CREATE and a local EDIT on the same entity? I was thinking of using Create/Update/Delete flags as suggested by Blake in one of his comments. Thanks a lot for your help!
Backup core data with RestKit 0.20
You can read about Full Database Backup on MSDN. The primary difference is that a backup also captures the transaction log. This gives you options such as differential backups, which will eventually mean less drive space needed to store your data. In addition, backup schemes give you easier ways to organize where, when and how (that is, the strategies by which) you back up. There are also ways to implement a full database backup via scripts (see here) where you will lose nothing.
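For reference, a minimal sketch of the statements involved; the database name and paths are placeholders:

-- Full backup: the baseline for everything else
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_full.bak' WITH INIT;

-- Transaction log backup (requires the FULL or BULK_LOGGED recovery model)
BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb_log.trn';

-- Differential backup: only pages changed since the last full backup
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL;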
When I want to restore a database, I think the best option is to restore from a backup of the database. However, I can instead generate a script that saves the schema and data: primary keys, foreign keys, triggers, indexes... In that case, is the result of running the script the same as a restore? I ask because the script is about 1MB and the backup about 4MB. I also ask because I would like to change the collation of all the columns in all my tables; I tried some scripts but they did not work, so I am considering generating a script so that when the tables are created, they use the collation of the database, which I set in the script when I create the database. Is a script a good option for this, or could I lose some types of constraints or other design elements? Thanks.
what is the difference between backup database and script it saving schema and data?