The Facebook documentation says: when an action is published, or a Like button pointing to the object is clicked, Facebook will 'scrape' the HTML page of the object and read the meta tags. The object is also scraped when: the object URL is entered in the Object Debugger; every 7 days after the first scrape; when an app triggers a scrape using an API endpoint. That Graph API endpoint is simply a call to: POST /?id={object-instance-id or object-url}&scrape=true When you pass scrape=true, Facebook will go to the og:url parameter and scrape all og parameters on that page. So make sure that this page (og:url) has the appropriate og:image meta tag with the new image URL. If you update the image, you have to rename it (update the og:image URL); otherwise Facebook doesn't know that you have a new image.
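As a sketch, the rescrape call above can be issued from a short script. The endpoint and the scrape=true parameter come from the answer; the helper name and the use of urllib are my own illustration, and some objects may additionally require an access token.

```python
from urllib.parse import urlencode

GRAPH_ENDPOINT = "https://graph.facebook.com/"

def build_scrape_request(object_url, access_token=None):
    """Build the POST target and form body that ask Facebook to rescrape an object."""
    params = {"id": object_url, "scrape": "true"}
    if access_token:
        params["access_token"] = access_token  # assumed needed for some objects
    return GRAPH_ENDPOINT, urlencode(params)

# Actually sending it requires network access, e.g.:
# import urllib.request
# url, body = build_scrape_request("http://example.com/my-object")
# urllib.request.urlopen(url, data=body.encode("utf-8"))
```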
I have several objects hosted by Facebook that are posted to users' activity logs. The URL for the image points to our content delivery solution, which redirects to a versioned image. This means that the URL can stay constant while the image changes. As you can see in this image, when the image changes, Facebook can still load it correctly (see the og:image column), but the image used in the actual post (see the hover text, which comes from https://fbexternal-a.akamaihd.net/safe_image.php) seems to be cached. I have found that changing the URL by adding a dummy parameter works (and we can do that if necessary), but if there's a way to force Facebook to refresh its cached image, that would be better. Does such a call or method exist? In similar questions, I have heard that POST calls to https://graph.facebook.com/?id=[ID]&scrape=true can rescrape the data, but I have been unsuccessful with that call (it appears to only scrape the data if the object is self-hosted; is that right?). There was also something about adding fbrefresh=[ANYTHING] to the URL, but it sounds like that is equivalent to adding a dummy parameter. Using the debugger does not seem to do anything either. I have tried: - http://developers.facebook.com/tools/debug/og/object?q=[ID] - http://developers.facebook.com/tools/debug/og/object?q=[ID]&refresh=[ANYTHING] - http://developers.facebook.com/tools/debug/og/object?q=[Image URL] - http://developers.facebook.com/tools/debug/og/object?q=[Image URL]&refresh=[ANYTHING] Also - http://graph.facebook.com/?id=[ID]&scrape=true All to no avail. Lastly, does anyone know if/when the cache might be refreshed normally? Or would the image be expected to persist forever?
Can I force a refresh of a Facebook Object's image URL?
It's safe to assume that MSSQL has the caching worked out pretty well =) Don't bother trying to build anything yourself on top of it; simply make sure that the method you use to query for changes is efficient (e.g. don't query on non-indexed columns). PS: wouldn't caching locally defeat the whole purpose of checking for changes in the database?
I have Windows Server 2008 R2 with Microsoft SQL Server installed. In my application, I am currently designing a tool for my users that queries the database to see if a user has any notifications. Since my users can access the application multiple times in a short timespan, I was thinking about putting some kind of cache on my query logic. But then I thought that my MS SQL server probably already does that for me. Am I right? Or do I need to configure something to make it happen? If it does, for how long does it keep the cache?
Database caching
A better approach is to override the generateUniqueKey method and simply return $key instead of the hashed key, like this:

class RedisCache extends CRedisCache
{
    protected function generateUniqueKey($key)
    {
        return $key;
    }
}
I am working with PHP 5.3.10, Yii v1.1.14 and Redis 2.2.12. I am using CRedisCache for caching (not an extension). I tried the simple code below: Yii::app()->cache->set($name, $data, 0); This command stores data in Redis, but the key (in Redis) of the stored value is not equal to the first parameter of the set() function. (I can still use the same key to retrieve the cached value with Yii::app()->cache->get($name).) I want the exact Redis key in order to append a value; for appending I am using the following code: Yii::app()->cache->executeCommand("APPEND", array("KEY" => $name, "DATA" => $data)); Any idea how to get the Redis key?
Why doesn't the Yii Redis cache set() method create the key specified in its first parameter, which is needed for the APPEND command?
Guava's Cache does not copy objects put into it. If you're modifying an object obtained from the cache, you're modifying it for everyone (and there's no need to update the cache for that, since it references the same instance). It's better to cache objects that are immutable, or at least objects that properly support concurrent modification (and that doesn't usually mean just synchronizing the accessors, so it's easier said than done).
This is what I am doing:

Cache<String, MyClass> cache = CacheBuilder.newBuilder()
    .maximumSize(100)
    .expireAfterAccess(30, TimeUnit.MINUTES)
    .build();
String id = "myid";
MyClass obj = cache.getIfPresent(id);
System.out.println(obj.getMyVariable());
obj.setMyVariable("myNewString");
updateCache(id, obj);
MyClass obj2 = cache.getIfPresent(id);
System.out.println(obj2.getMyVariable());

Does the cache update the MyClass object itself? I guess not. If not, which is the better way?

void update(String id, MyClass obj) {
    cache.put(id, obj);
}

or

void update(String id, MyClass obj) {
    cache.invalidate(id);
    cache.put(id, obj);
}

Desired output: myOldString myNewString
Best way to update a cache: Google Guava
I installed an older version of the SDK (versions 1.8 and 2.0) and now it's working... go figure...
So, today I installed the Azure SDK 2.1 and created a new project with a single WorkerRole, got the Caching package using NuGet, set the caching for the role to "Collocated", set the host to "WorkerRole1" in the app.config, and commented out the security section since this is only a test. I then inserted the following line in Run: DataCache c = new DataCacheFactory().GetDefaultCache(); Hitting Debug, I got this error when executing the line above: There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.) I looked around a lot (for about 5 hours) and found no explanation for this... Can anyone help?
Can't get Azure Cache to work: "There is a temporary failure. Please retry later."
The Rails.cache.write method is what is sometimes known as a command method, which is called for its side effects, as opposed to a query method, called for its return value (for more info, check out command-query separation). Since the Rails docs make no guarantees about the return value, it is probably best not to depend on it, since it might (and apparently has) change without warning.
I am trying to expire a key in 10 seconds; it seems to work, but not with RSpec. In the process, I noticed that Rails.cache.write returns false in Rails 2.3.11, while it returns true in Rails 3.2.11. Is this a problem? Why the different values? Rails 2.3.11: irb(main):001:0> Rails.cache.write("test", "java", :expires_in => 10.seconds) => false Rails 3.2.11: irb(main):001:0> Rails.cache.write("test", "java", :expires_in => 10.seconds) => true I am using JRuby 1.6.5.1 with Rails 2.3.11 and JRuby 1.7.3 with Rails 3.2.11.
Why does the Rails.cache.write return a different value in Rails 2.3.11 and Rails 3.2.11 console?
You're trying to dump an Arel relation into your cache store, which unfortunately is not possible. You want to dump the resulting array, so do this instead: Rails.cache.fetch('time_reports') { reports.where{(time > my{range.begin}) & (time < my{range.end})}.all } ...or... Rails.cache.fetch('time_reports') { reports.where{(time > my{range.begin}) & (time < my{range.end})}.to_a } That will cause the relation to become a real array, which you can store in memcached as per normal.
I'm trying to cache an expensive query that is reused in several requests throughout the site in Rails 3. When a user clicks a table to view reports or when a user clicks to view a map or when a user clicks to print something, this query is performed: reports.where{(time > my{range.begin}) & (time < my{range.end})} It's an expensive query that can result in thousands of records. I want to cache it so after the first time it is called, it is stored in the cache until one of the records in the query is modified (e.g. updated). I replace my query with this: Rails.cache.fetch('time_reports') { reports.where{(time > my{range.begin}) & (time < my{range.end})} } But it raises an exception: TypeError (can't dump anonymous class #<Module:0x007f8f92fbd2f8>): As part of the question, I would like to know if using Rails.cache.fetch also requires me to add the following in config/environments/production.rb: config.cache_store = :mem_cache_store, "cache-1.example.com", "cache-2.example.com" //obviously I would use my ip
rails - using Rails.cache gives error
If you are worried about a hash collision (i.e. two queries with the same md5), simply use the query itself as the key:

if ($cache->has($query)) {
    return $cache->get($query);
}
$cache->set($query, $queryOutput, array(10));

Alternatively, you can use sha1. It returns a longer string, so the chance of a collision is lower. Don't worry about storing 32 or 40 bytes as a cache key; this won't noticeably influence the performance of your web application. MySQL also has its own query cache. If you run the same query again, MySQL will serve it from its cache. If you insert a user into the users table, MySQL will recognize that it can no longer use the cache, but this is not the case with your cache class.

Comments:
– Lisa Miskovsky: What about blank spaces and symbols like ( or ? in our queries? Do I need to replace spaces with _, for example? I want to use Memcached for caching everything. How would you do this if MySQL didn't have such a feature? Are there alternatives other than passing a secondary parameter?
– Sjoerd: Yes, memcached has limitations on the format of the key, so you cannot use the query directly as the key. I would not cache anything until it is clear that I have a performance problem, and then I would cache the specific parts that are slow, not all queries with their results.
– Lisa Miskovsky: I don't have performance problems yet, but I don't want to wait for the day when my MySQL server is overwhelmed with requests. I can easily make a basic caching solution; it's mostly for practice purposes.
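For illustration, a Python sketch of the same idea: hash the raw query to get a fixed-length key that is safe for memcached (no spaces or control characters). sha1 is used as the answer suggests; the function name and prefix are my own.

```python
import hashlib

def cache_key(query, prefix="sql"):
    """Derive a memcached-safe cache key from an arbitrary SQL string."""
    digest = hashlib.sha1(query.encode("utf-8")).hexdigest()  # 40 hex chars
    return f"{prefix}:{digest}"
```

Note that byte-identical queries map to the same key, so whitespace differences produce distinct entries; normalize the query first if that matters.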
I have a function like this:

function query($query) {
    /* If the query result is already in the cache, pull it */
    if ($cache->has(md5($query)))
        return $cache->get(md5($query));

    /* Do the actual query here: mysqli_query() etc., whatever you imagine. */

    /* Set the cache with a 10 minute expiration */
    $cache->set(md5($query), $queryOutput, array(10));
}

So basically, if I query SELECT * FROM USERS, it is cached automatically for 10 minutes. I don't know if md5 is safe to rely on. For one, it creates a 32 character string, which sounds a bit like overkill. Second, md5 is known to give the same output string for certain different inputs. Are there any alternatives to md5 for identifying a unique cache key? There is very little chance that two completely different SQL queries will get the same md5 output and break some pages of the website, but it is still a chance I should anticipate right now and code for accordingly. One more thing: I feel like this use of the function is considered bad practice. A new user may be inserted into the USERS table after my cache is set, but SELECT * FROM USERS will still get the same md5 output and hence ignore the newly inserted user. Users might register with the same nickname a few times within the 10 minute window. Should I pass a second parameter to my sensitive queries, like query($query, 'IGNORECACHE')? It doesn't sound logical to me; there would be too many things to keep in mind. How do you handle such issues? I would appreciate a reply to my first question about an md5 alternative for this case, and a short explanation of good SQL caching practice for my second question would be greatly appreciated. Thank you.
Giving a unique cache key identifier to different SQL queries. Alternatives to md5?
Actually, I discovered that I can listen to an event to add new behavior to ./symfony cc: $this->dispatcher->connect('task.cache.clear', array('ClearAPCCache', 'clearCache')) The remaining problem is that it is not possible to clear the APC cache from a command-line task, because the APC cache belongs to the web server process, while the symfony command runs as a command-line script.
I need to add a simple call to (new sfAPCCache())->clean(), or to apc_clear_cache(), when the command ./symfony cc is run. Does anyone know how to achieve this? At which point should I edit my symfony application, or how should I register this additional behavior?
How to add custom behaviour to symfony cache:clear?
cache_digests is not something that can be compared with pjax or Turbolinks: cache_digests enhances Rails caching to allow for Russian-doll caching. Turbolinks tends to be a bit more straightforward and doesn't require jQuery. Pjax is more configurable but requires jQuery.
I've done some reading about the various options for improving the speed of a rails site. The following libraries seem promising: pjax turbolinks cache_digests However, it seems like they try to do many similar things. Can/Should you use them in tandem? Are there problems that would arise in doing so? Are there cases where one or the other is better than the rest? (And what are they?) Is there something superior to all three I should check out instead?
pjax vs turbolinks vs cache_digests to improve rails speed?
A newer solution is to use Windows Server AppFabric Caching: https://github.com/geersch/AppFabric http://dotnet.dzone.com/articles/caching-wcf-services-part-2 http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AAP314 There were issues in .NET 1.0 and 1.1, but apparently not in 2.0 and later; see http://support.microsoft.com/kb/917411
I have 2 projects, one is an ASP.Net MVC application and the other is a WCF app. These 2 applications share a common business logic layer, which utilizes the ASP.Net Application cache to store some data for quick retrieval. I really don't want both apps to maintain the same set of data, but I'm wondering where, exactly, does the ASP.Net caching live? If I were to share the same app pool, could both processes access that cache? If not, is there any way to get 2 applications to share the cache?
Sharing ASP.Net Application cache in MVC and WCF applications
It looks like you are using Windows Azure Shared Caching, and you want to use a local cache when developing. It might be better to have an abstract cache layer in your system so that you can switch caches between cloud and local rather than changing the configuration. For example, have an interface like ICache with methods like GetItem, SetItem, etc. Then you can have classes implementing this interface: one using an in-memory cache for local development, and one using Azure Cache for production. There's a project named ServiceStack on GitHub that wraps several cache implementations; you can refer to https://github.com/ServiceStack/ServiceStack.Contrib/tree/master/src Alternatively, you can use the new Cloud Service Caching, which provides co-located/dedicated cache clusters along with your cloud services (web role and worker role). It has full local emulator support, which means you don't need to change any code or configuration between local development (using the local cache emulator) and production. For more information about Cloud Service Caching, see https://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/
I've got our site all setup like this in my web.config: <dataCacheClients> <dataCacheClient name="default"> <hosts> <host name="mysite.cache.windows.net" cachePort="22233" /> </hosts> <securityProperties mode="Message"> <messageSecurity authorizationInfo="{key}"> </messageSecurity> </securityProperties> </dataCacheClient> </dataCacheClients> I'd like to edit this so while I'm developing on my laptop I'm not hitting the production cache but instead a cache on my laptop. This is a hard problem to Google (at least for me) because "azure cache local" comes up with caching on web roles.
Setting up azure cache client for local development
I'm not too familiar with WebView, but on ordinary websites you can use the query string to emulate a unique address while still loading the same image; this trick is often used on web pages for CSS files. Example: http://www.webpage.com/image.jpg?cachekey=23456456754 By randomizing the cachekey every time the image is loaded, it is treated as a unique image and is not loaded from the cache. Put a randomized int or string at the end of your filename string:

Random r = new Random();
int randInt = r.nextInt(8000000 - 1000000) + 1000000;
String query = "?cachekey=" + randInt;
String html = "<html><head></head><body bgcolor=\"#000000\"><center><img src=\"" + filename + query + "\"></center></body></html>";

I have not tested this yet, but it's one idea for solving it.
I have a WebView that just displays an image from the external SD-card. This works well; however, if the image is overwritten with new data, the WebView does not update the image. This is even more curious because the WebView is created totally new in the onCreate method of the activity. Here is my code: @Override public void onCreate(Bundle savedInstanceState) { Log.d(TAG, "Creating Viewer"); super.onCreate(savedInstanceState); Intent intent = getIntent(); Bundle extras = intent.getExtras(); String imagePath = extras.getString("imgFile"); shareImage(Uri.fromFile(new File(imagePath))); setContentView(R.layout.viewer_test); WebView viewer = (WebView) findViewById(R.id.imageViewer); viewer.getSettings().setAllowFileAccess(true); viewer.getSettings().setBuiltInZoomControls(false); viewer.getSettings().setLoadWithOverviewMode(true); viewer.getSettings().setUseWideViewPort(true); viewer.getSettings().setCacheMode(WebSettings.LOAD_NO_CACHE); String filename = "file://"+ imagePath; String html = "<html><head></head><body bgcolor=\"#000000\"><center><img src=\""+ filename + "\"></center></body></html>"; viewer.loadDataWithBaseURL(null, html, "text/html","utf-8", null); viewer.reload(); } The image at the path saved in imagePath is overwritten by another activity before the path is sent to this activity. The WebView however still shows the old image data. As you can see, I already tried to set the cache mode, and I tried to manually reload the WebView, with no luck unfortunately. I also checked with a file manager, and the image on the SD-card is overwritten with new data. I tried to overwrite the file multiple times, but that doesn't work either. Somehow the image data gets cached somewhere. How can I avoid this and load the new data every time? (Obviously creating a new file with a different filename works fine, but I want to replace the old file.)
WebView loading outdated local images. How to update?
There are lots of cache implementations available: The Spring Cache abstraction Ehcache (which is one possible provider for Spring Cache) Guava's LoadingCache Infinispan as user1516873 suggests A plain ConcurrentHashMap from the JDK if you don't want more dependencies etc.
I need to load some data from the DB into a cache on server start-up, and once a request comes in, take the data from this cache. I also need to refresh the cache at frequent intervals. It would help if somebody could suggest a way of achieving this. I am using Spring 3.1. Thanks.
Caching in Spring Web service?
It's a very bad setting from a performance point of view, but what I do in my http.conf is set MaxRequestsPerChild to 1. This has the effect that each Apache process handles a single request before dying. It kills throughput (so don't run benchmarks with that setting, or use it on a production site), but it gives Python a clean environment for every request.

Comment from Graham Dumpleton: Because it is a very bad idea to do this, it is not something I would suggest unless you have no other choice. Even then, MaxRequestsPerChild only applies to mod_wsgi embedded mode and not mod_wsgi daemon mode. For a proper explanation, read code.google.com/p/modwsgi/wiki/ReloadingSourceCode
I am developing in Python using mod_python / mod_wsgi on Apache 2. Everything runs fine, but if I make any change to my .py file, the changes are not picked up until I restart Apache with /etc/init.d/apache2 restart. This is annoying, since I don't want to have to SSH in and restart the Apache service every time during development. Is there any way to disable this caching? Thank you.
Disable caching in Apache 2 for Python Development
The default CACHE_TYPE is null, which gives you a NullCache, so you get no caching at all; that is exactly what you observe. The documentation does not make this explicit, but this line in the source of Cache.init_app does:

self.config.setdefault('CACHE_TYPE', 'null')

To actually employ some caching, initialise your Cache instance to use a proper cache:

cache = Cache(config={'CACHE_TYPE': 'simple'})

Aside: SimpleCache is fine for development, testing, and this example, but you shouldn't use it in production; something like Memcached or Redis would be much better. Now, with an actual cache in place, you will run into the next problem: on the second call, the cached lxml.html object will be retrieved from the cache, but it comes back broken, because these objects cannot be pickled and therefore do not survive being stored. So instead of caching the lxml.html object, you should cache the simple string (the content of the website that you downloaded) and then reparse it to get a fresh lxml.html object every time. Your cache still helps, because you no longer hit the other website on every request: the cached download is served and only the parsing is redone.
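As a sketch of the fix (cache the string, reparse on every call), here is a self-contained toy with no Flask dependency; fetch_site and parse stand in for urllib2.urlopen and lxml.html.fromstring, and cached() is a toy version of what @cache.cached provides.

```python
import time

_cache = {}  # key -> (expiry_timestamp, value)
TIMEOUT = 86400

def cached(key, producer):
    """Return the cached string for key, calling producer() at most once per TIMEOUT."""
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]
    value = producer()
    _cache[key] = (now + TIMEOUT, value)
    return value

calls = []

def fetch_site():  # stand-in for urllib2.urlopen(...).read()
    calls.append(1)
    return "<html><body>hello</body></html>"

def parse(html):  # stand-in for lxml.html.fromstring
    return {"html": html}

def get_content():
    html = cached("content", fetch_site)  # the *string* is cached
    return parse(html)                    # reparsed fresh on every call
```

Every call to get_content() returns a fresh parsed object, but the expensive download happens only once per day.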
I'm attempting to make a Flask web application where I have to request the entirety of a non-local website, and I was wondering if it was possible to cache it to speed things up, because the website does not change that often but I still want the cache to update once a day or so. Anyway, I looked it up and found Flask-Cache, which seemed to do what I wanted, so I made the appropriate changes and came up with adding this: from flask.ext.cache import Cache [...] cache = Cache() [...] cache.init_app(app) [...] @cache.cached(timeout=86400, key_prefix='content') def get_content(): return lxml.html.fromstring(urllib2.urlopen('http://WEBSITE.com').read()) Then I make a call from the functions that need the content, like so: content = get_content() Now I'd expect it to reuse the cached lxml.html object every time a call is made, but that's not what I'm seeing: the id of the object changes on every call and there's no speed-up at all. So have I misunderstood what Flask-Cache does, or am I doing something wrong here? I've tried using the memoize decorator instead, and I've tried decreasing the timeout or removing it altogether, but nothing seems to make any difference. Thanks.
Using Flask-Cache to cache a lxml.html object
It is intended to prevent client-side (or reverse proxy) caching. Since the cache will be keyed on the exact request, by adding a random element to the request, the exact request URL should never be seen twice; so it won't be used more than once, and an intelligent cache won't bother keeping around something that's never been seen more than once, at least, not for long.
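The mechanism can be sketched in Python; the helper name is mine, and a random nonce is used where libraries like jQuery typically append a timestamp.

```python
import random
from urllib.parse import urlencode

def cache_busted(url, params=None):
    """Append the request parameters plus a random nonce so the full URL is unique."""
    params = dict(params or {})
    params["_"] = random.randrange(10**12)  # a value the cache has almost surely never seen
    return url + "?" + urlencode(params)
```

Because the full URL differs on every call, neither the browser nor a reverse proxy ever finds a matching cache entry.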
How exactly does adding a random number to the end of an AJAX server call prevent the database server or browser (not entirely sure which one is intended) from caching? why does this work?
How does adding a random number to the end of an AJAX server request prevent caching?
For your case, you could use DBNull.Value as the 'no data' marker: HttpContext.Current.Cache.Insert("name...", DBNull.Value);
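The pattern (sometimes called negative caching) is language-neutral; here is a Python sketch with a module-level sentinel standing in for DBNull.Value. The function names are hypothetical stand-ins for the real DB lookup.

```python
_NO_DISCOUNT = object()  # marker meaning "we looked, and there is no discount"
_cache = {}

lookups = []

def find_discount_in_db(product_id):  # hypothetical, stands in for the DB query
    lookups.append(product_id)
    return None  # no discount row exists for this product

def get_discount(product_id):
    value = _cache.get(product_id)
    if value is None:  # not cached yet
        value = find_discount_in_db(product_id) or _NO_DISCOUNT
        _cache[product_id] = value
    return None if value is _NO_DISCOUNT else value
```

The sentinel lets the cache distinguish "absence of a discount" (cached) from "never looked it up" (not cached), which a bare null cannot express.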
I actively use caching on my ASP.NET web site, though some objects requested from the db are genuinely absent. For instance, I check whether there is a Discount for a particular product by looking for a record with ProductId=@ProductId in the ProductsDiscount table; absence of the record means no discount. Do you think it is a good idea to put null discount objects into the Cache? Or should I invent something better (using the null-object pattern, for instance)? In fact, I don't really like the idea of starting to use the null-object pattern, as it would require a lot of redesign that I would like to avoid, at least right now. Thanks; any thoughts are welcome. P.S. In fact, I can't even put a null object into the Cache: when I try to call HttpContext.Current.Cache.Insert("name...", null); I receive: Value cannot be null. P.P.S. Why does MSDN say nothing about this behavior?
Can null be inserted into Cache?
The HTTP Runtime Cache does not serialize data at all, it just places it in a hash table: What is the default serialization used by the ASP.net HttpRuntime.Cache There are several methods to get or estimate the size of a .NET object in memory: How to get object size in memory? https://stackoverflow.com/a/1128674/141172 It's not possible to directly know the number of bytes consumed by a .NET object: http://blogs.msdn.com/b/cbrumme/archive/2003/04/15/51326.aspx
I am writing an application in .NET 4.0 C#. I am placing objects in the .net httpruntime cache and want to produce some stats on it. I would like to know the size of the object before it put in to cache and the size of it in cache. How can I measure this? Is it serialized when put in to cache and if so what type of serialization is used?
Measure size of .NET objects
Three. You ask: "I'm just curious about Objective-C potentially caching the value returned by myStringProperty on the stack. The value returned by myStringProperty could change between successive messages, so perhaps caching doesn't make sense." Nope, it's not cached. Every Objective-C message is sent, provided of course myObject is not nil. The compiler has no idea about (1) any side effects within the method's execution (e.g. does myObject, or anything it references, ever change during the execution of getting myStringProperty?) or (2) influence of global state (e.g. is the result affected by the current time?).
In the following example, how many messages are sent to myObject?

- (void)myMethod:(id)myObject {
    NSLog(@"%@", myObject.myStringProperty);
    NSLog(@"%@", myObject.myStringProperty);
    NSLog(@"%@", myObject.myStringProperty);
}

I'm just curious about Objective-C potentially caching the value returned by myStringProperty on the stack. The value returned by myStringProperty could change between successive messages, so perhaps caching doesn't make sense.
Basic message counting in Objective-c
It depends on the browser. The meta tag will have no effect on the scripts, just on the page itself. You'd have to modify your server settings to send a no-cache header for the JavaScript: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Avoiding_caching Also, the best way to prevent browsers from caching your JavaScript when you push out a new release is to version the scripts. Not sure if you're using some sort of automated build, but if so it is pretty easy to set up versioned JavaScript / CSS.

Comments:
– Evert: Specifically, Expires, Cache-Control, Pragma, Last-Modified and ETag may influence caching.
– uwe: Thanks. I'm getting: Cache-Control: no-cache, Pragma: no-cache, Expires: Thu, 01 Jan 1970 00:00:00 GMT. Does that mean it won't get cached outside the current session?
– SoWeLie: That SHOULD ensure that it doesn't ever get cached; however, I am not sure that all browsers respect the headers properly. As we all know, some browsers (IE) do some very odd things. That is why I recommended using version numbers on your JavaScript: since you change the name of the file each time you change its contents, there is no way any browser would return an outdated version.
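Versioning can be sketched as deriving the script URL from its content, so the URL changes exactly when the file does; the helper name and the short-hash length are my own choices.

```python
import hashlib

def versioned_url(path, content):
    """Append a short content hash so browsers fetch a new URL after each release."""
    digest = hashlib.md5(content).hexdigest()[:8]  # content is bytes
    return f"{path}?v={digest}"
```

Because unchanged files keep their old URL, browsers can cache them indefinitely, while any edit forces a fresh download.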
I'm including an external javascript file in my page using <script src="http://example.com/file.js" type="text/javascript"></script> How long will this script get cached by browsers? There is a <meta http-equiv="Content-Cache" content="no-cache" /> in my HTML. Will that make any difference?
How long will a javascript include get cached?
There's no sense in reinventing the wheel for this, simply set up your server responses with the proper headers so the web browser can cache them. It should be fairly easy to setup server side and will require no setup client side, simply load everything again as you did the first time and it will magically come from the cache instead of your server. Regarding cache header config, see this question or simply google a bit for something that suits your specific case better.
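As a sketch of what "proper headers" means here, a tiny helper computes the response headers for a cacheable image; the header names are standard HTTP, while the helper name and defaults are my own.

```python
import time
from email.utils import formatdate

def caching_headers(max_age=86400):
    """Headers that let browsers keep a response for max_age seconds."""
    return {
        "Cache-Control": f"public, max-age={max_age}",
        # Expires is the legacy equivalent: an absolute timestamp, here now + max_age
        "Expires": formatdate(time.time() + max_age, usegmt=True),
    }
```

Served with these headers, the Flex client's repeated requests for the same picture are answered from the browser cache without touching the server.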
I am loading files (or rather, pictures) in my Flex application from a server, technically from a database. I display some of them at a time, think of it like an image preview, and quite often I display the same image again. But I don't want to re-download the same file time and time again, so I would like to store it locally, and load it from there if it is available (or download it if necessary). I'm quite happy if the files can be stored in some temporary folder in AppData/iDontCare and get deleted on application restart. File.applicationStorageDirectory would fit the bill, but only exists in Air. What am I missing?
Storing files temporarily and loading them again in AS3 without Air
Services like Cloudflare cache your HTML and/or assets like images and CSS files in a CDN, so that your entire server is hit less often. This is great for semi-static sites but may not be the best fit for highly dynamic sites. Local caches like memcached just store any data in a way that's fast to access. You can use that to cache database queries and lower your database activity, but you can also use it to store pre-computed data that would be expensive to re-create all the time or whatever else you may want to store non-permanently in a fast-to-access way. Both solutions solve different problems. You may use both together, or either, or neither. It really depends on where exactly your bottleneck is and which solution fits your problem better.
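The "fast-to-access, non-permanent" role that memcached plays can be sketched with an in-process TTL cache; real memcached adds a network protocol and storage shared across servers, which this toy obviously lacks. The injectable clock is only there to make the behaviour easy to verify.

```python
import time

class TTLCache:
    """Tiny memcached-style store: set with a lifetime, get returns None when expired."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}

    def set(self, key, value, ttl=60):
        self._data[key] = (self._clock() + ttl, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires, value = entry
        if self._clock() >= expires:
            del self._data[key]  # lazily evict expired entries
            return None
        return value
```

Typical use is the look-aside pattern: try the cache first, and only on a miss run the database query (or the expensive computation) and store the result.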
If you need caching on your website to reduce database load, do you have to do it using memcache or memcached (in PHP, for example), or can you achieve it by using professional services like CloudFlare or Incapsula that do some caching for you?
Is memcache(d) necessary when using CloudFlare/Incapsula?
The data is as secure as the server and the web application are. If someone has physical access to the box (or through RDP), they can always cause a memory dump and read the values directly from memory. Depending on how it was written, the application might expose the full Cache. If neither of these is a concern, you can consider the Cache to be secure. Update: Seeing as you are talking about the client side - nothing on the client side can be considered completely safe. The client has physical access to their machine and therefore can do a memory dump and any number of other tricks (including direct memory inspection). If the client does not need all of the data, only share the minimum required. Do not persist it.
I cached a DataTable fetched from SQL Server 2005 in a C# ASP.NET 4.0 web app (around 50,000 rows, 32 columns). By cache I mean client-side caching. I want to know whether the DataTable is secure or insecure in the cache. If it is insecure, how could someone (not my web app, but a non-developer) read that data directly from the cache and view it, and how do I secure the data?
Security of cached data in .NET
You can use either:

- Memcache
- Ehcache

Both support GAE well.
I need cache for my web application, it stores some small/medium size objects to cache and possibly some max 1-2mb files to cache. What open source solution would be good for this usage? Cache should be easy as Google App Engine cache (example. cache.put("key", "value"); cache.get("key");). I use Google Guice and Servlet, nothing else (so I don't need any Spring etc. recommendations) and I'm using Jetty to run my application.
Open source cache for Servlet
You just need to give permissions to the daemon user in that folder:

sudo chown -R daemon.daemon /opt/bitnami/.tmp
sudo chmod -R 700 /opt/bitnami/.tmp

Also, if you are using a version of the BitNami Trac Stack that already uses mod_wsgi (0.12.2-1 or later), you can configure the PYTHON_EGG_CACHE to point to a different location. In the trac.wsgi file you just need to add:

os.environ['PYTHON_EGG_CACHE'] = '/path/to/python_egg_cache'

Just check that the daemon user has enough permissions on that directory.
I have 3 projects hosted on a BitNami Amazon EC2 instance, and none of them is running. When I check my logs, I see the following error: The Python egg cache directory is currently set to: /opt/bitnami/.tmp Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory. My projects are in /opt/bitnami/projects (all 3 projects within that directory). How can I solve this?
python egg cache
Files are on disk:

- Not especially fast; and concurrent access is not great at all if several processes try to read/write at the same time
- Local to one server (if you have several servers, you'll have to store the files on each one of them -- NFS being slow)
- But you have a lot of space

APC is in memory:

- Really fast
- But you have less space
- And it's local to each server too

memcached is in memory, on a network cluster:

- Quite fast (a bit less than APC, but still pretty fast)
- Shared between all your servers: each item has to be cached only once, even if you have several webservers
- You can have several servers in your memcached cluster (which means virtually no limit on the size of the cache)
CakePHP offers support for APC, XCache and Memcache in addition to its default caching engine. Having had some problems with my application sporadically caching broken pages for no known reason, I've decided to try another engine to see if that makes a difference. XCache and Memcache both seem as though they might take a little bit more setup, but APC appears to be literally a case of changing one line in the core.php. My question is, where can I find information about why I should choose APC over the default engine? What are the pros and cons? It can't really be a case of "just try them both and see if one feels better than the other" (can it?), but a basic snoop around hasn't revealed a simple breakdown of the differing merits of cache engines in Cake. Can anyone explain the mysterious workings of cache engines in Cake to me? Or point me to a resource that does? Bonus points if XCache and Memcache are also compared, because they might be my next port of call...
File or APC Cache Engine in CakePHP?
Try using pagebeforeshow but call page() when the page is shown to fix up all the formatting. Like this: $('#instrument').bind('pagebeforeshow', function() { // Do your content insertion }); $('#instrument').bind('pageshow', function() { $(this).page(); }); You may find that this only "half" works (doesn't update formatting when page is updated), in which case you might try this trick: wrapping up the page in a temporary element and calling page() on the wrapper. $('#instrument').bind('pageshow', function() { $(this).wrap('<div id="temporary-instrument-wrapper">'); $('#temporary-instrument-wrapper').page(); $(this).unwrap(); });
With jQuery mobile I'm using a dynamic 'page' template with custom content inserted depending on user input. It all works, but once the page is created once it's cached and won't display the new values if you go back and make a new selection. I've tried applying the following fix: $('#instrument').bind('pagehide', function(){ $(this).remove(); }); Which does remove the page, but if you try to navigate back to that page it won't re-initialize and I'll just keep getting pushed back to the beginning of my app. The dynamic content has to be added to the page using pagebeforecreate (the actual HTML doesn't seem important, so I won't include it here) otherwise it won't be formatted. If I use pagebeforeshow the content will not be formatted, but it WILL change if you go back and make a new selection. I realize that pagebeforecreate will cache the page, but it doesn't appear that I can use any other method due to the content not formatting :( I can't for the life of me figure out a fix!
stop jQuery mobile from caching dynamic page
A random number generator that returns a predictable value is effectively a hash - and predictable randomness is not cool at all in a random number generator :-) So, replace the call to rand with some hash function and you're all set. Use your imagination: the hash function could be something as simple as the CRC of the author id modulo 4.
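The suggestion above — replace the seeded RNG with a deterministic hash so every language computes the same server number — can be sketched like this. This is only an illustration of the idea: CRC-32 is used because standard implementations agree across languages, but any shared hash works as long as both the C# and classic ASP sides use the same one.

```python
import zlib

def server_for_author(author_id, server_count=4):
    # CRC-32 of the author id, modulo the number of static servers.
    # A standard CRC-32 gives the same answer in every language,
    # unlike two different seeded RNGs.
    crc = zlib.crc32(str(author_id).encode("ascii")) & 0xFFFFFFFF
    return (crc % server_count) + 1  # servers are numbered 1..4
```

Because the result depends only on the author id, the same user is always served from the same static host, which is what keeps the image cacheable on a single domain.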
Bit of a strange question. I have a website which has some pages in classic ASP, and others in ASP.net. I have a script that caches their gravatar image. This is hosted on a cookieless domain, in one of the following locations: http://static1.scirra.net http://static2.scirra.net http://static3.scirra.net http://static4.scirra.net When a page requests a gravatar on my ASP.net site, it passes through this function which distributes it randomly to a static server: /// <summary> /// Returns the static url for gravatar /// </summary> public static string GetGravatarURL(string Hash, int Size, int AuthorID) { Random rndNum = new Random(AuthorID); int ServerID = rndNum.Next(0, 4)+1; string R = "//static" + ServerID.ToString() + ".scirra.net/avatars/" + Size + "/" + Hash + ".png"; return R; } The function in my Classic ASP parts of the website is: function ShowGravatar(Hash, AuthorID) Dim ServerID Randomize(AuthorID) ServerID = Int((Rnd * 4) + 1) ShowGravatar = "//static" & ServerID & ".scirra.net/avatars/" & intGravatarSize & "/" & Hash & ".png" end function It works fine, it seeds on the users ID then assigns them a static server to server their avatars from. The only problem is, the C# and Classic ASP RNG's output different results! This is not optimum for caching as the same image is being served on up to 2 different domains. Any easy way around this?
Matching a classic ASP random number with a C# random number
(r'^menu/$', cache_page(60 * 15)(direct_to_template), { 'template': 'corp_menu.html' }), should work.
I would like to cache a template and I know that it possible to do this in the url. However, the specific template I would like to cache is also delviered with a direct to template: (r'^menu/$', direct_to_template, { 'template': 'corp_menu.html' }), Does anyone know how to convert my url to cache this using the django documentation: The django documentation shows urlpatterns = ('', (r'^foo/(\d{1,2})/$', cache_page(60 * 15)(my_view)), ) Thanks for any help
How to cache a view in urls.py when it is direct to template
Some differences between session and cache:

- The session is per user, while the cache is per application
- You can store the session data out-of-process (StateServer or SqlServer), e.g. when using a web farm
- What you put into the session stays there until the session is terminated/abandoned or times out
- With the cache, you can specify that items are automatically removed after some time (absolute expiration) or after they were not accessed for some time (sliding expiration)
- You can also use SqlCacheDependency to have items removed from the cache when some data changes in a database
- The ASP.NET runtime will also automatically remove items from the cache if available memory gets low

As for performance: as long as you use InProc session state, I guess there won't be any performance difference between session and cache. As soon as you use external session state, performance will obviously be lower than the InProc cache, but I don't have any numbers (it depends on the network, SQL Server power, etc). BTW: there are also distributed caching solutions which allow having an external, shared cache (similar to out-of-process session state).
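The absolute vs. sliding expiration behaviours mentioned above can be sketched with a toy cache. This is not the ASP.NET Cache API — just an illustration of the two policies; times are injected as plain numbers to keep the sketch self-contained.

```python
class ExpiringCache:
    """Toy cache illustrating absolute and sliding expiration."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at, sliding_seconds)

    def set(self, key, value, now, absolute=None, sliding=None):
        # Absolute: expires at a fixed point. Sliding: expires some
        # time after the *last access*.
        expires = now + (absolute if absolute is not None else sliding)
        self._items[key] = (value, expires, sliding)

    def get(self, key, now):
        item = self._items.get(key)
        if item is None:
            return None
        value, expires, sliding = item
        if now >= expires:            # expired: evict and report a miss
            del self._items[key]
            return None
        if sliding is not None:       # sliding: each hit pushes expiry out
            self._items[key] = (value, now + sliding, sliding)
        return value
```

With an absolute policy the item dies at a fixed time no matter how often it is read; with a sliding policy it survives as long as it keeps being accessed.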
I'd like to know when exactly I should use the Session and when exactly I should use the cache. Are there differences in performance? Can one of them handle a lot of data better? Should the Cache only be used for stuff that's associated with the Application whilst the Session should only be used for stuff that's associated with the current session/user? Is it wiser to save values which I received from a DB in the Session or the Cache - is there a difference at all assuming I make the cache-keys unique? E.g. Cache["MyKey"+UserId.ToString()]. Also, in general, is using the Session/Cache a lot wiser than retrieving Data from a DB or a Webservice or is there a limit of data that'll be retrieved quicker?
Saving Values in Session or Cache - Difference? ASP.NET
Disable caching of your page with the following code : http://php.net/manual/en/function.header.php <?php header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past ?>
For instance, if you exit your Yahoo mail and then click the back button, it will not load the last page, it will redirect you to the login page. I have to do this with my PHP code, I'm using CodeIgniter. Some friends told me to disable caching but that will be a bad thing because I have a lot of images in my system and it would be bad to download them every time. How do I do this with PHP?
How do I deactivate caching in browsers?
For those just Googling this issue (as I was), Cake 2.2 now supports this kind of functionality (without having to create separate cache configs for each 'group'). There is a little explanation here, although it lacks some details: http://book.cakephp.org/2.0/en/core-libraries/caching.html#using-groups But this is what I did in my app and it appears to work well. ;-) In /app/Config/core.php Cache::config('default', array( 'engine' => $engine, ... 'groups' => ['navigation'], )); Model afterSave hook: function afterSave($created) { // This deletes all keys beginning with 'navigation' Cache::clearGroup('navigation'); parent::afterSave($created); } Then in my controller/model that requires an expensive query... // We create a unique key based on parameters passed in $cacheKey = "navigation.$sitemapId.$levelsDeep.$rootPageId"; $nav = Cache::read($cacheKey); if (!$nav) { $nav = $this->recursiveFind( 'ChildPage', ['page_id' => $rootPageId], $levelsDeep ); Cache::write($cacheKey, $nav); }
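Where a cache backend lacks native group support like Cake 2.2's, the same effect can be approximated by prefixing keys with a group name and deleting by prefix. The sketch below works over a plain dict purely for illustration; real backends such as memcached cannot enumerate keys, so they need a version-number-in-the-key trick instead.

```python
cache = {}

def write(group, key, value):
    # The group name becomes part of the stored key.
    cache["%s.%s" % (group, key)] = value

def read(group, key):
    return cache.get("%s.%s" % (group, key))

def clear_group(group):
    # Delete every key belonging to the group, leaving others alone.
    prefix = group + "."
    for k in [k for k in cache if k.startswith(prefix)]:
        del cache[k]
```

Calling `clear_group('navigation')` from an afterSave hook then mirrors `Cache::clearGroup('navigation')` in the answer above.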
Consider the following; Cache::write('Model.key1' , 'stuff'); Cache::write('AnotherModel.key1' , 'stuff'); Cache::write('Model.key2' , 'stuff'); Can I delete a group of keys from the Cache? For instance, if I wanted to clear all cached data for "Model" but leave "AnotherModel" in the cache, I would like to use the following; Cache::delete('Model.*'); Can this kind of thing be achieved in CakePHP 1.3.x? Thanks!
CakePHP Cache::write() Can keys be grouped by model?
Database, because it will resolve any concurrency problem, which the file doesn't provide you with.
The question is fairly self-explanatory: in terms of performance, which is the better way to store small amounts of data, MySQL or a small file in a cache folder? Thanks in advance.
Which is Better? Cache on a Disk File or Cache in a Temporary Table in MySQL
The Cache is always using the machine's memory, the Session uses what has been configured: In a web farm the Session can be local (which works only if affinity is set), or remote (state server or database, or custom), but the cache is always local. So, storing a DataTable in the cache will consume memory, but it will not use serialization. PS: storing a DataSet instead of a DataTable will change almost nothing.
I've seen numerous examples of people storing DataTables into the Cache, but I am just wondering, do the same rules apply to the Cache that apply to the Session? The one rule I am most concerned with is: Do not store unserializable objects into the Session. Just because you can doesn't mean it is guaranteed to work (I learned this the hard way). So ultimately my question is: Can you store unserializable objects into the Cache? I reasearched this for a while, reading numerous posts and even reading the chapter about Cache in my ASP.NET 3.5 book and I cannot find it anywhere. Since I am in doubt, I am going to put my DataTable into a DataSet then into the Cache, but is this necessary? Thanks.
Is it wrong to store DataTable in Cache?
You're looking for AppFabric Cache. It's a Windows Server technology from Microsoft. It's free. I should also say that if you like memcached, you can use that on Windows as well, and in fact Microsoft Azure team members used to recommend it, before the AppFabric caching was available on Windows Azure.
I am trying to implement caching in .Net such that the cached data is accessible not just by an application that may run multiple times on the same machine but by other types of applications that may run on the machine. They could be windows services, web services, win forms etc. I have looked at System.Runtime.Caching (because Enterprise Application Blocks Caching is going to become obsolete) as a means to achieve this. The default MemoryCache is insufficient to achieve this as I don't believe that it work across app domains. Is there a way I can implement the kind of caching I am looking for or is there a caching dll of some sort (must be free) that I can use to achieve my goal? Is there a way to use System.Runtime.Caching with IsolatedStorage scoped to MachineLevel? I've looked at memcache too and can't use it because we need this to run on windows machines. I started looking at SharedCache (http://www.codeproject.com/KB/web-cache/AdvanceCaching.aspx) and am curious about the pitfalls it has as well. Thanks. -- Revision 1 -- I think the optimal solution for me would use the Caching object to a Memory Mapped File (http://msdn.microsoft.com/en-us/library/dd997372.aspx). So the question I have now is whether anyone has done that with the System.Runtime.Caching object. There must be a way to extend it if necessary...examples of how to do so would also be much appreciated.
Caching across Applications in .Net on a Windows Machine
The functions cache_set() and cache_get() are what you are looking for. cache_set() has an expire argument. You can use them basically like this: <?php if ($cached_data = cache_get('your_cache_key')) { // Return from cache. return $cached_data->data; } // No or outdated cache entry, refresh data. $data = _your_module_get_data_from_external_service(); // Save data in cache with 5min expiration time. cache_set('your_cache_key', $data, 'cache', time() + 60 * 5); return $data; ?> Note: You can also use a different cache bin (see documentation links) but you need to create a corresponding cache table yourself as part of your schema.
I am looking for more detailed information on how I can get the following caching behavior in Drupal 7. I want a block that renders information I'm retrieving from an external service. As the block is rendered for many users I do not want to continually request data from that service, but instead cache the result. However, this data is relatively frequent to change, so I'd like to retrieve the latest data every 5 or 10 minutes, then cache it again. Does anyone know how to achieve such caching behavior without writing too much of the code oneself? I also haven't found much in terms of good documentation on how to use caching in Drupal (7), so any pointers on that are appreciated as well.
Create a timed cache in Drupal
Well, it seems OS X can do this out of the box. The answer is quite simple: to make it appear that you're connected to a WiFi network, just create an ad-hoc (computer-to-computer) network from the Mac. You don't need to connect any other devices, but it means that your computer appears connected to the internet to the laptop and the iOS simulator. So now I can download a copy of the data I need, set up redirects in Charles Proxy (<- amazing software btw), create an ad-hoc network, and still develop my app while I'm not connected to the internet (even though the app requires the internet to function).
I'm using Charles Proxy to redirect web requests to a local folder for an app I'm making. However it seems that NSURLConnection checks for an internet connection before attempting to download. How does it perform this check and can I fake the connection being there, because I know it will be able to download the content from the local cache. Or is there a better way of doing this? EDIT: Actually all I need to do is connect to a fake Wifi hotspot on my laptop, I tried connecting to a BTFON hotspot, which redirects you to a login page for every request until you login (therefore appearing the laptop is connected to the internet, but effectively its not) and it worked when I had Charles proxy enabled. Now I just need to find some software that can connect to Fake Wifi Hotspot, or have a virtual second Wifi adapter, create a adhoc network and connect to that (in a kind of loop)
Fake internet connection on iOS simulator
Use the sysopen function with O_SYNC as one of the flags. Check in the system manpages for the supported flags (man 2 open). I know it's there on Solaris 10, not sure about AIX. For example: sysopen(FH, $path, O_SYNC | O_WRONLY | O_CREAT) See http://perldoc.perl.org/functions/sysopen.html for more information.
I know that modern *nix OSes allow to open file so that data are not cached in system/disk writecache, so any write operation will finish only when data is phisically written to disk. Could you suggest how can I do that in Perl? OS is AIX/Solaris.
Perl: Opening file without write cache
After experimenting a great deal with bitmap caching, we ended up turning it off in our application. It works well when you're wanting to use the GPU to execute transforms on a piece of your UI that isn't changing -- for instance, if you have a picture that you want to animate, squish, rotate, etc. But bitmap caching/GPU acceleration (in its current implementation) slows things down pretty dramatically if you're continuing to update the visual tree inside the part of your UI that you'd like to cache/manipulate. If you're just moving around a static bitmap, it makes sense to cache it and use the GPU to accelerate it. But quite often, you might be tweaking pieces somewhere down the visual tree from the piece of your UI that you flagged to cache, and if that's happening, you need to update the GPU's cache each frame, and that's slow, slow, slow. In other words, whether it makes sense for you to turn it on or not depends entirely on where you turn it on, and what your application is doing. Because of this, my strong recommendation, if you're using bitmap caching, or if you're experiencing performance problems with your Silverlight UI, is to (temporarily) enable cache visualization and redraw regions. Makes your app look funky as hell when they're on, but they're invaluable when it comes to seeing what your UI is doing that's chewing up all your CPU.
We were able to solve a high CPU usage problem by taking advantage of Silverlight's bitmap cache, as described here: Silverlight 3 and GPU Acceleration Discovering Silverlight 3 – Deep Dive into GPU Acceleration We added the EnableGPUAcceleration parameter to the <object> tag. To bring the CPU usage down to a reasonable level, we had to add CacheMode="BitmapCache" to the root visual grid for the whole app. So I'm wondering if there's any downside to relying so much on the bitmap cache. If it was always beneficial, I assume it would be enabled by default. I found this similar question with a good answer by AnthonyWJones: Any reason not to check “application library caching” and “GPU acceleration” in silverlight apps? So one downside is that it uses more video RAM. I guess this could make things worse for other graphics-intensive apps running at the same time. Are there any other downsides? If the graphics card doesn't have enough video RAM to cache everything, I assume Silverlight will degrade gracefully and will just use more CPU cycles to re-render the UI. Thanks for your help, Richard
What's the downside of bitmap caching in Silverlight 4?
I highly suggest you look at AppFabric Caching. I just implemented for my MVC app and it worked great. I used this blog to get started: http://www.hanselman.com/blog/InstallingConfiguringAndUsingWindowsServerAppFabricAndTheVelocityMemoryCacheIn10Minutes.aspx Let me know if you need some code samples.
I am looking for an efficient cache strategy for C#. I am constructing an MVC application however one of my queries targets a historical table with states, etc. Needless to say, the query is highly nested and complex, and I do not want to run it every time a person hits the site, so I decided to cache the data (either the results or the tables themselves). I dont want to store my cache in the Managed heap due to the stop-the-world garbage collection problem which is common with generational GC's and Caches. I was wondering, does the Cache Application Block (http://msdn.microsoft.com/en-us/library/ff650180.aspx) use Unmanaged memory (off the managed heap?). Is there a way to access memory directly via native IO? Any other cache tools worth looking into?
C# Application Cache Block
This will pre-load an image so that the browser can display it immediately when you actually set the src of an img tag. I speculate that pre-loading an image like this will ensure it's in the cache so it won't reload, though I haven't tested it. var myImg = new Image(25, 25); myImg.src = "/foobar.png"; In other words, this should now hopefully only download two images function reassignImage(newSource) { var myImg = new Image(25, 25); myImg.src = newSource; img.src = newSource; } reassignImage("first.png"); reassignImage("second.png"); reassignImage("first.png"); Edit I was doing it wrong. Try creating a new Image() for every new file the user loads. Swap these image elements in and out of the dom. <html> <head> <script> var imageElements = {}; function reassignImage(newSource) { if (typeof imageElements[newSource] == "undefined") { imageElements[newSource] = new Image(); imageElements[newSource].src = newSource; } var container = document.getElementById("imageContainer"); container.innerHTML = ''; container.appendChild(imageElements[newSource]); } </script> </head> <body> <div id="imageContainer"></div> </body> </html>
I'm using javascript to dynamically load any of a series of images into a single img tag, based on user interaction: function reassignImage(newSource) { img.src = newSource; } This works great, except that I when I inspect it with Chrome developer tools, I see that even if I reload an image I've already loaded, it makes another http call AND grows the total Images Size graph. This seems like the worst of both worlds. I would want either: To load from cache if the image were the same. To reload each image everytime, but then not grow the cache. How would I achieve either scenario? Thanks! Yarin
Caching dynamically loaded images
Gregg Pollack of RailsEnvy did a series of "Scaling Rails" screencasts a while back, which are now free (thanks to sponsorship by NewRelic). You might want to start with episode 1, but episode 8 covers memcached specifically: http://railslab.newrelic.com/2009/02/19/episode-8-memcached
I read a few tutorials on getting memcached set up with Rails (2.3.5), and I'm a bit lost. Here's what I need to cache: I have user-specific settings that are stored in the db. The settings are queried in the ApplicationController meaning that a query is running per-request. I understand that Rails has built-in support for SQL cacheing, however the cacheing only lasts for the duration of an Action. I want an easy way to persist the settings (which are also ActiveRecord models) for an arbitrary amount of time. Bonus points if I can also easily reset the cache anytime a setting changes. thanks
Rails memcached: Where to begin?
Try passing in the Rails environment:

task :cache => :environment do
  ...
end

It seems like you would get a different error, but I would try this.
I want to call a rake task from a cron job that stores remote weather data in the rails cache. However, I must be doing something pretty wrong here because I cannot find any solution through countless fruitless searches. Say I define and call this task namespace :weather do desc "Store weather from remote source to cache" task :cache do Rails.cache.write('weather_data', Date.today) end end I get the error Anonymous modules have no name to be referenced by Which leads me to believe the rails cache isn't available. Outputting Rails.class from the rake file gives me Module but Rails.cache.class again returns the above error. Do I need to include something here? Am I just hopeless at internet? :) Thanks in advance.
rake task can't access rails.cache
There are commercial distributed caches available for .NET other than Microsoft Velocity: NCache, Coherence, etc.
I have a situation where information about a user is stored in the web application cache and when that information is updated in one application - I want to notify the other applications (running on the same machine) that the data should be removed from it's cache so it can be refreshed. Basically I need to keep cached data in sync across multiple asp.net applications. I have started down the path of using a central web service to help coordinate the notifcations but it is turning out to be more complex than I think it needs to be. Is there a way that one asp.net application can easily reach across to another on the same box to clear an item from the cache? Is there a better way to achieve shared cached information than using the application cache? I really want to create a way for apps to communicate in a loosely coupled way - I looked at nservice bus but the dependency on MSMQ scared me away - my client has had bad experiences with MSMQ and does not want to support an app that requires it. Suggestions? Michael
Communicating between ASP.NET applications on the same machine
I think the code you would have to write would be like this: (EDIT)

def get_value(param1, param2):
    return "value %s - %s" % (str(param1), str(param2))

def fetch(key, val_function, **kwargs):
    val = cache.get(key)
    if not val:
        val = val_function(**kwargs)
        cache.set(key, val)
    return val

and you would call it like this:

fetch('key', get_value, param1='first', param2='second')
Does Django caching have a method similar to Rails' cache.fetch? (http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#M001023) The rails cache fetch works like: cache.fetch("my_key") { // return what I want to put in my_key if it is empty "some_value" } It's useful because it checks the cache, and returns the value of the cache if it is there. If not, it will store "some_value" in the cache, and then return "some_value". Is there an equivalent of this in Django? If not, what would the Python syntax for this look like if I were to implement such a function?
cache.fetch in Django?
I've a jsp page which loads many images. I'd like to cache the images for faster loading. This is a Good ThingTM. I'll explain my idea, please correct it if it's wrong. I'm calling the picture loading servlet for each image and return as a BLOB. My idea is to add a modified date with the image and the other values like Last-Modified, expires, Cache-control and max age. And thereby make the browser understand if the image changes. For that you actually need the ETag, Last-Modified and optionally also the Expires header. With the ETag header, both the server and client can identify the unique file; if necessary, you can use the database key for this. With the Last-Modified header, both the server and client know whether they have the same version of the file. With the Expires header, you can instruct the client when to re-request the file the next time (that is, once the date specified in Expires has passed). The Cache-Control header is not so relevant here, as you just want to allow caching and the average client already does that by default. For more information and a servlet example, you may find this article useful, and maybe also this article in case you'd be interested in tuning the performance of a JSP/Servlet web application. But how can i append a modified date to a BLOB? Or is there some better ideas to make them cachable? Just add one more column to the database table in question which represents the insertion date. In most DBs you can just use the now() function for this, or even create an auto-trigger so that it gets set automatically on every insert/update.
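The Last-Modified handshake described above boils down to a single decision in the image servlet: if the client sent If-Modified-Since and the image has not changed since then, answer 304 with no body; otherwise send the bytes with a fresh Last-Modified. A minimal sketch of that decision, with timestamps reduced to plain epoch seconds for illustration (the linked servlet article covers the full header set):

```python
def respond(last_modified, if_modified_since):
    """Return (status, send_body) for a conditional GET.

    last_modified: epoch seconds when the image row last changed.
    if_modified_since: the client's header value, or None if absent.
    """
    if if_modified_since is not None and last_modified <= if_modified_since:
        return 304, False   # client copy is still fresh: headers only
    return 200, True        # send the image and a new Last-Modified
```

The 304 branch is what saves you re-streaming the BLOB from the database on every page view.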
I've a jsp page which loads many images. I'd like to cache the images for faster loading. I'll explain my idea, please correct it if it's wrong. I'm calling the picture loading servlet for each image and return as a BLOB. My idea is to add a modified date with the image and the other values like Last-Modified, expires, Cache-control and max age. And thereby make the browser understand if the image changes. But how can i append a modified date to a BLOB? Or is there some better ideas to make them cachable? Thanks...
enable caching of images specifying a modified date
By default, the Application Cache stores data in server memory; depending on your website's navigation pattern, maybe you won't get many cache hits. You could preprocess all images, generating their thumbnails at once and storing them alongside the original images. This way you don't need to deal with that cache layer, and it probably won't take much more disk space.
Im building a image gallery which reads file from disk, create thumbnails on the fly and present them to the user. This works good, except the processing takes a bit time. I then decided to cache the processed images using the ASP .NET Application Cache. When a image is processed I add the byte[] stream to the cache. As far as I know this is beeing saved into the system memory. And this is working perfect, the loading of the page is much faster. My question is if there are thousands of images which gets cached in the Application Cache, will that affect the server performance in any way? Are there other, better ways to do this image caching?
.NET Application Cache vs Database Cache
It all depends on the kind of data store you're using to back the cache. If you're using the filesystem, you could write a cron job to invalidate the cache at a specific interval, or encode the datetime the cache is valid until in the cache key, check that on every request, and invalidate when necessary. Alternatively, if your backend is memcached, you can use an expiring cache, which is probably the best solution.
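The "encode the valid-until datetime and check it on every request" idea above can be sketched in a language-agnostic way: a close variant stores the valid-until timestamp alongside the cached fragment and treats anything past that deadline as a miss, which is exactly what you want for a post whose published_at lies in the future. All names here are invented for illustration.

```python
cache = {}

def write(base_key, value, valid_until):
    # Store the deadline with the payload, separated by a delimiter.
    cache[base_key] = "%d|%s" % (valid_until, value)

def read(base_key, now):
    entry = cache.get(base_key)
    if entry is None:
        return None
    valid_until, value = entry.split("|", 1)
    if now >= int(valid_until):   # past the deadline: evict and miss
        del cache[base_key]
        return None
    return value
```

When caching a blog page, `valid_until` would be the earliest future `published_at` among its posts, so the cached fragment dies on its own the moment a scheduled post goes live.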
I've never really delved into the amazing cache techniques Rails provides until now. One thing that I really can't wrap my head around is how to resolve a this particular problem. Given that I have a Blog model with many Posts: class Blog < ActiveRecord::Base has_many :posts end class Post < ActiveRecord::Base named_scope :published, :conditions => ["published_at < ?", Time.now] end And the show action in the BlogsController shows a list of Posts that have been published: // BlogsController def show @blog = Blog.find(params[:id) end // View <% @blog.posts.published.each do |p| %> <h2><%=h p.title %></h2> <%= simple_format(p.content) %> <% end %> The cache has to expire when any change is done to the published_at attribute BUT it also need to do it when published_at is put in the future and that time is reached. Can you guys throw me some ideas and pointers of how to best solve this? Cron-job or am I loosing my mind?
Page Caching, best practice on automatic expires/sweepers
Technically you can assign a disconnected ADODB recordset into the Application object. However I wouldn't recommend it. In order to assign an object into the Application object it needs to be free-threaded. An ADODB recordset is single-threaded, but it will provide a free-threaded proxy when assigned to the Application object. What this means is that whenever a request needs to access the recordset, the thread the request is running on will block, as the proxy marshals the call to the thread where the recordset was originally created. If another request happens to be using the recordset, the marshalled call is queued. The problem is that in effect all uses of the recordset are serialised, creating a scalability issue: only one request can access the recordset at a time. But it gets worse. The thread that originally created the recordset could itself be processing a request, in which case nothing (except that thread) can access the recordset, regardless of whether the recordset is actually in use or not. In addition, the ASP dispatcher may still hand a request to that thread even though there is an existing queue of recordset accesses on it.

A solution: you have got close to it already. Convert the contents of the recordset into XML, but instead of saving it to a file, load it into a FreeThreadedDOMDocument. This object, as its name suggests, is a truly free-threaded object, not a proxy to a single-threaded one. Now you can place this XML document in the Application object and access it from multiple requests at the same time.
How can you best cache disconnected ADODB recordsets application wide in a classic asp scenario? It does not seem like it is possible to store disconnected recordsets in application state, or am I missing something? I found an example of how you can persist recordsets to xml and load them again, but I was hoping to stay in memory - but it might be the best option.
Caching recordsets in ASP Classic?
This isn't a question of whether browsers are smart enough. The HTTP standard states that different URLs must be cached separately, so browsers are correct in treating the full URL, including path parameters or query strings such as a session ID, as the cache key. You should not be appending the session ID to anything static (such as your stylesheet).
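To see why the two URLs are distinct cache entries, and what the server-side fix amounts to, here is a small Python sketch; the helper names are invented for illustration:

```python
from urllib.parse import urlsplit


def browser_cache_key(url):
    # Browsers key their cache on the full URL; a ";jsessionid=..." path
    # parameter is just part of the path as far as caching is concerned.
    return urlsplit(url).geturl()


def strip_session_id(url):
    # What the server should do instead: emit session-free URLs for
    # static resources by dropping the ";jsessionid=..." path parameter.
    parts = urlsplit(url)
    return parts._replace(path=parts.path.split(";", 1)[0]).geturl()


plain = "http://localhost:8080/jquery-ui-1.7.2.custom.css"
tracked = plain + ";jsessionid=A8483FBF3BB6DDA499E06210BE0D612C"
```

The two keys differ, which is exactly why the browser re-downloads after every session rotation; stripping the session ID before the URL is rendered is the real fix.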
I have a big question. Please see the example link below. My application currently appends a session ID to all resources/links; I more or less stumbled upon this by accident while looking in the Firefox cache: http://localhost:8080/jquery-ui-1.7.2.custom.css;jsessionid=A8483FBF3BB6DDA499E06210BE0D612C My big question is: will a URL like the one above mean that any caching header (I use Cache-Control with several years) becomes more or less useless, since the session ID makes every request unique? (What I mean is that a new session ID is assigned after 30 minutes, and caching will most likely only be effective within this period. After that, a new session ID is generated, indirectly invalidating all the cached content on the client side that has the old session ID in its URL, because the URL changes as it now carries the new session ID.) Are browsers intelligent enough to figure out that the resource to cache is http://localhost:8080/jquery-ui-1.7.2.custom.css and not http://localhost:8080/jquery-ui-1.7.2.custom.css;jsessionid=A8483FBF3BB6DDA499E06210BE0D612C? Or will a session ID in the URL effectively disable caching in the browser? Thank you very much! jan
Session-ID and Browser-Caching => Are Browsers intelligent enough to strip off the Session-ID?
Yes, they are included in source code available at http://pecl.php.net/package/APC. Note that you have to choose this at compilation time, more precisely: at ./configure time. Here are the relevant options of ./configure: --enable-apc-sem Enable semaphore locks instead of fcntl --disable-apc-pthreadmutex Disable pthread mutex locking --enable-apc-spinlocks Enable spin locks EXPERIMENTAL As you see, pthread mutex locking is already the default now.
I recently read in a presentation on Scribd that Facebook had benchmarked a variety of locking mechanisms for APC including file locks (default), IPC semaphore locks, linux Futex locks, pthread mutex locks, and spin locks. You can view this presentation by clicking the following link: APC@Facebook I was wondering if anybody knew off hand if any of this source code had been released, perhaps in a git or SVN repository somewhere? The speed benefits of switching from the default file locking to one of the other choices appears to be significant.
How to change the locking mechanism in Alternative PHP Cache (APC)?
HttpResponse.RemoveOutputCacheItem? It is a static method that takes the virtual path of the cached response, so from your Create action you could call HttpResponse.RemoveOutputCacheItem("/Widget/Index").
Is it possible to clear one action's cache from another action? Let's say my Index action lists all my Widgets. There are lots of Widgets but new ones are not created very often. So I want to cache my Index action indefinitely but force it to render after a successful Create. public class WidgetController : Controller { [OutputCache(Duration = int.MaxValue, VaryByParam = "none")] public ActionResult Index() { return View(Widget.AllWidgets); } public ActionResult Create() { return View(); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(string name) { Widget widget = new Widget(name); // Can I clear Index's cache at this point? // ClearCache("Index"); return View(widget); } }
ASP.NET MVC: Clear an action's cache from another action
It is pretty straightforward, but you may want to keep an eye on what is actually cached. Rick Strahl has an interesting post on how the cache turned out to be empty due to memory pressure.
I'm working on an ASP.NET application that has the following requirements: Once every 15 minutes, perform a fairly expensive query of around 20,000 items (not from a database). It has around 10 columns, all short strings, except for one which stores up to 3000 chars (usually a lot less). Process the resulting DataTable with various sorting and filtering, then store the top 100 items in additional DataTables. Display this information as a form of aggregation to potentially tens of thousands of people. It seems to me that this is an excellent candidate for caching (System.Web.Caching), especially given we may wish to support additional "on the fly" filtering, for example filtering the table down to rows only relevant to a specific user. However, before getting started I would like to understand: are there any best practices around storing such a large DataTable in the cache? Does anyone have experience they can share, e.g. large tables you have stored, or any traps to be careful of along the way? Thanks in advance.
Caching Large Datasets
Yes, you can cache it until then. There are many ways of doing this. If you have a server-side call that retrieves the data, I would simply add the data to the cache when you first get it and set the expiration to 3am the following day. Then on each page call, check the cache for this data object and, if it returns null, initiate another fetch. You can use page output caching too, but it does not give you such fine-grained control. Something like this:

if (HttpContext.Current.Cache["MyData"] != null)
    return HttpContext.Current.Cache["MyData"] as DataObjectClass;

// Get data into dataObject
HttpContext.Current.Cache.Add(
    "MyData",
    dataObject,
    null,                                       // no cache dependencies
    DateTime.Today.AddDays(1).AddHours(3),      // absolute expiration: 3am tomorrow
    System.Web.Caching.Cache.NoSlidingExpiration,
    System.Web.Caching.CacheItemPriority.Normal,
    null);
return dataObject;
I've got an application that downloads data from a 3rd party at 3am every morning. Nothing changes in terms of content until then. Is it possible to cache the "product info" page until then? Or is this something I should set in global.asax?
ASP.Net Caching
I created my own solution with a Dictionary/Hashtable in memory as a duplicate of the actual cache. When a method call came in requesting the object from the cache and it wasn't there but was present in memory, the memory-stored object was returned, and a new thread was fired off to update the object in both memory and the cache using a delegate method.
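The solution above is C#, but the pattern (serve the possibly stale copy instantly, refresh on a background thread) can be sketched in a few lines of Python. The class and loader names here are invented, and real code would also need persistence to survive an application restart:

```python
import threading
import time


class StaleWhileRevalidate:
    """Serve the last known value instantly; rebuild it on a background
    thread so no request ever blocks on the expensive fetch."""

    def __init__(self, loader):
        self._loader = loader        # the slow data-retrieval function
        self._value = None
        self._lock = threading.Lock()
        self._refreshing = False

    def get(self):
        if self._value is None:          # very first call: must block once
            self._value = self._loader()
            return self._value
        snapshot = self._value           # possibly stale, returned instantly
        with self._lock:
            if not self._refreshing:     # at most one refresh in flight
                self._refreshing = True
                threading.Thread(target=self._refresh, daemon=True).start()
        return snapshot

    def _refresh(self):
        try:
            self._value = self._loader()
        finally:
            with self._lock:
                self._refreshing = False


# Demo with a counting loader standing in for the slow query.
counter = [0]


def rebuild():
    counter[0] += 1
    return counter[0]


cache = StaleWhileRevalidate(rebuild)
first = cache.get()      # blocks once and returns 1
second = cache.get()     # returns the stale 1 and kicks off a refresh
time.sleep(0.5)          # give the background refresh time to finish
third = cache.get()      # now sees the refreshed value, 2
```

Only the very first caller ever takes the "hit"; everyone else gets whatever was last computed.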
I'm having some trouble getting my cache to work the way I want. The problem: The process of retrieving the requested data is very time consuming. If using standard ASP.NET caching some users will take the "hit" of retrieving the data. This is not acceptable. The solution?: It is not super important that the data is 100% current. I would like to serve old invalidated data while updating the cached data in another thread making the new data available for future requests. I reckon that the data needs to be persisted in some way in order to be able to serve the first user after application restart without that user taking the "hit". I've made a solution which does somewhat of the above, but I'm wondering if there is a "best practice" way or of there is a caching framework out there already supporting this behaviour?
ASP.NET Persistent Caching ("Lazy loading"-style?)
You should definitely start working on a cache system. There are at least three approaches, and you can combine them for more efficiency.

Database caching: create a relationship table to hold the ID relationships between the user and the subscribers, so that you don't need to calculate them on the fly.

Model caching: expensive queries can be cached.

def find_subscribers
  Rails.cache.fetch("find_subscribers_#{current_user.id}") do
    # run the query
  end
end

View caching: you can also cache view fragments to prevent expensive rendering.

You might want to start with: Rockstar Memcaching, Scaling Rails Videos.

EDIT: Your query can be optimized.

ActiveRecord::Base.connection.execute("SELECT count(*) as c FROM subscribers_users WHERE user_id = #{other_user.id} AND subscriber_id = #{self.id}")

can become

counters = SubscribersUser.count(:conditions => { :subscriber_id => self.id }, :group => "user_id")

The query will return a Hash where the key is the user_id and the value is the result of count. You can then iterate the hash instead of running a query for every record in the view.
This question is maybe a little specific, but I think it's interesting from a general point of view too. In a Rails app, users can subscribe to other users. When I show a list of users I have to check whether the current user has subscribed to the users in the list. If he has subscribed, I show the unsubscribe button, and the other way around. Because the whole thing depends on the current user I can't use eager loading. So when I show 20 users in the list, I generate 20 additional hits on the DB, which appears to me to be bad practice. I'm thinking about a good way to solve this problem. The best solution I came up with so far is to load the IDs of the users the current_user has subscribed to into the session during login, and then just check every user.id against the IDs in the session. But maybe this could lead to other issues when the user has subscribed to a lot of people. Also, I'm not sure it's the best way to load all subscriptions even though the user might never look at the user list during the session. The next best thing that came to my mind was to do the same thing, but not during login, instead when a user list is loaded. What do you think?
Rails: How do I minimize DB hits here? Eager loading isn't applicable
The performance benefit comes down to subsequent use, so whether your object is cached once and used by all visitors, or cached once per visitor, it is the subsequent use that counts. If your cached object is used 10,000,000 times a day then you will save. If the cached object is used once or not at all, the gain is negligible.
Is there a performance difference between caching PHP objects on disk and not caching them at all? If cached, objects would only be created once for ALL the site's visitors; if not, they would be created once for every visitor. Is there a performance difference, or would I be wasting my time doing this? Thank you :)
Performance difference of caching PHP Objects on file
Not really an answer, but use getaddrinfo(3) instead :) As far as nscd is concerned, here's from the nscd.conf(5) manual page:

enable-cache service <yes|no>
    Enables or disables the specified service cache.

The service that covers DNS lookups is hosts.
Is there any way to prevent the gethostbyname() function from reading the nscd cache on Linux?
Forcing non-cached gethostbyname()
It depends on the type of caching you are talking about. Opcode caching does exactly what you describe: it takes the compiled opcode and caches it, so that when a user visits a particular page, that page does not need to be recompiled if its opcode is already in the cache. If you modify a PHP file, the caching mechanism detects this, recompiles the code and puts it back in the cache. If you're talking about caching the data on the page itself, that's something different altogether. Take a look at the Alternative PHP Cache for more info on opcode caching.
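The freshness rule involved (recompile only when the source file's mtime changes) is easy to demonstrate in Python, which is also essentially what the interpreter does with .pyc files. The class below is an invented illustration of the idea, not APC's actual implementation:

```python
import os
import tempfile


class CompileCache:
    # Keep compiled code keyed by (path, mtime): a changed mtime
    # invalidates the entry, exactly like the .py vs .pyc freshness check.
    def __init__(self):
        self._store = {}      # path -> (mtime, code object)
        self.compiles = 0     # how many real compilations happened

    def load(self, path):
        mtime = os.path.getmtime(path)
        entry = self._store.get(path)
        if entry is None or entry[0] != mtime:
            self.compiles += 1
            with open(path) as f:
                entry = (mtime, compile(f.read(), path, "exec"))
            self._store[path] = entry
        return entry[1]


# Demo: two loads compile once; touching the file forces a recompile.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1 + 1")
    path = f.name

cache = CompileCache()
cache.load(path)
cache.load(path)              # cached: no recompilation
os.utime(path, (0, 0))        # pretend the file was edited (mtime changes)
code = cache.load(path)       # recompiled
os.remove(path)
```

The cached item is the compiled form of the script, not its output, which is the key difference from page/data caching.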
I was wondering about caching dynamic PHP pages. Is it really about pre-compiling the PHP code and storing it in byte-code? Something similar to Python's .pyc which is a more compiled and ready to execute version and so that if the system sees that the .pyc file is newer than the .py file, then it won't bother to re-compile to .py file. So is PHP caching mainly about this? Can someone offer a little bit more information on this?
Can someone explain a little bit about caching dynamic PHP pages?
Although they both refer to keeping objects around, they are quite different, and I wouldn't say they're interchangeable.

Cache: stores frequently used values, typically because the lookup and/or creation is non-trivial. E.g. if a lookup table from a database is frequently used, or values are read from a file on disk, it's more efficient to keep them in memory and refresh them periodically. A cache only manages object lifetime within the cache; it does not impose semantics on what is held there, and it does not create the items, it just stores objects.

Pool: a group of resources that are managed by the pool itself. E.g. a (database) connection pool: when a connection is needed it is obtained from the pool, and when finished with it is returned to the pool. The pool itself handles creation and destruction of the pooled objects, and manages how many objects can exist at any one time.

Cache pool: mostly seems to describe the number of (independent?) caches that exist. E.g. an ASP.NET application has one cache per application domain (the cache isn't shared between ASP.NET applications). Literally a pool of caches, although this term seems to be used rarely.
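The distinction can be made concrete with two tiny Python sketches (illustrative names only): a cache stores things it did not create and may simply miss, while a pool creates and owns its resources and hands the same objects out repeatedly.

```python
import queue


class Cache:
    # Stores values it did not create; a lookup may simply miss.
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)        # None on a miss

    def put(self, key, value):
        self._data[key] = value


class Pool:
    # Creates and owns a fixed set of resources; callers borrow and return.
    def __init__(self, factory, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())     # the pool itself creates them

    def acquire(self):
        return self._free.get()           # would block if exhausted

    def release(self, resource):
        self._free.put(resource)


pool = Pool(factory=object, size=1)
conn = pool.acquire()
pool.release(conn)
again = pool.acquire()                    # the very same object comes back

cache = Cache()
miss = cache.get("lookup-table")          # not stored yet: a miss
cache.put("lookup-table", [1, 2, 3])
```

Note the asymmetry: the pool's acquire always succeeds (or blocks) because the pool made the objects itself, whereas the cache's get can legitimately return nothing.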
I'm doing some design (initially for Java) (but may extend to .NET in the future?) and I'm getting confused between the terms "cache" and "pool". As far as I can see, the main point of difference is what is contained in them? Any guidelines as to which should be used when? And what then is a "cache pool" which is referred to in a number of articles?
Guidelines for / differences between cache / pool / cache pool
URLs are by definition case-sensitive (strictly, the scheme and host are case-insensitive, but the path is not). The problem is that by default, Windows filesystems are not case-sensitive. This is why IIS added that hack, but normally you should not rely on it. There's probably nothing you can do about the caching issues, because browsers follow the standard and assume that a different case means a different file. My suggestion would be to fix your website so that it always uses the same case when requesting resources.
Is there an HTTP header I can set in IIS that will essentially tell the user's browser that the URL "/something/img.gif" and the URL "/SomeThing/IMG.gif" are in fact the same thing, and that the browser should NOT re-download that resource? I'm running into cache issues where some URLs are cased differently, so users' browsers re-download the resource.
URL Case Sensitivity causing Caching problems! Is there a quick fix header?
If you want a quick and lazy solution, just make gzipped copies of your most used files and turn MultiViews on for them. This still has CPU overhead to calculate the right file to send but it's less than a gzip every time. If you want to take it further you can create static type-map files. Also you could consider using Lighttpd if possible instead of Apache. It has a mod_compress which does exactly what you want.
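The "compress once at modification time" idea looks like this in Python (file names invented; in practice you would run something like this from a deploy script or cron and let Apache's MultiViews or a type map pick the .gz variant):

```python
import gzip
import os
import shutil
import tempfile


def precompress(path):
    # Write path.gz next to the original so the web server can send the
    # compressed copy directly instead of deflating on every request.
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb", compresslevel=9) as dst:
        shutil.copyfileobj(src, dst)
    return gz_path


# Demo on a throwaway stylesheet-like file.
with tempfile.NamedTemporaryFile("w", suffix=".css", delete=False) as f:
    f.write("body { margin: 0; }\n" * 500)
    css = f.name

gz = precompress(css)
smaller = os.path.getsize(gz) < os.path.getsize(css)
with gzip.open(gz, "rb") as g, open(css, "rb") as orig:
    round_trip = g.read() == orig.read()
```

The CPU cost is paid once per file modification rather than once per request, which is exactly the trade-off the question is after.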
It seems redundant to have zlib compress a web page during every request. It is also the bottleneck of my files' response times. Is there a way to cache the zlib'd file so that it is compressed only once at each modification? Or should I just keep wishing?
Apache: Caching a DEFLATE'd file
Yes you can.

Option a) Split your Dockerfile and introduce a changing value into the instruction you don't want cached. Note that Docker keys its layer cache on the instruction text (plus the parent layer), not on the command's output, so the changing value has to come from outside the build, e.g. a build argument:

ARG CACHEBUST=1
RUN apt-get update && apt-get -V -y dist-upgrade
RUN echo "$CACHEBUST" && apt-get -V -y --allow-unauthenticated --no-install-recommends --allow-downgrades install -f business=1.1-0

Passing a new value (docker build --build-arg CACHEBUST=$(date +%s) ...) invalidates the cache from that instruction onward.

Option b) Use multi-stage builds, but generate the second image with the --no-cache option of docker-compose and docker build: e.g. do the upgrades in a first pipeline, push as someimage:baseimage, then use FROM someimage:baseimage in the next stage.

Option c) Use a cache-busting ENV variable, same idea as the build argument above.
I have the following Dockerfile:

FROM debian:buster
#Configure apt to look at my Debian repository
COPY ./apt /etc/apt
#Install the software into the image
RUN apt-get update && apt-get -V -y dist-upgrade && apt-get -V -y --allow-unauthenticated --no-install-recommends --allow-downgrades install -f business=1.1-0
ENTRYPOINT ["/usr/sbin/main.sh"]
CMD []

So basically it installs the package "business" at version 1.1-0. I have a problem with the Docker cache: I'm pushing a new code change of package "business" with the same version (1.1-0) [yes, I'm overriding versions...] and the Docker cache is not smart enough to pull the new changed .deb. It uses the cached layer without my code change. As a workaround, I build with --no-cache, but I don't like this solution because I lose the caching mechanism entirely. Any way to solve this? Can I build with no cache only from a specific layer onward?
Docker cache causes false positive release
After a bit of research I found the way!

Basically, the files containing the NVD db are called nvdcve-1.1-[YYYY].json.gz, e.g. nvdcve-1.1-2022.json.gz, and they are later added to a Lucene index.

When running Dependency-Check with the Gradle plugin the files are created in $GRADLE_USER_HOME/.gradle/dependency-check-data/7.0/nvdcache/. When running it with Maven they are created in $MAVEN_HOME/.m2/repository/org/owasp/dependency-check-data/7.0/nvdcache/.

So to cache this DB on GitLab CI you just have to add the following to your .gitlab-ci.yaml (Gradle):

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

cache:
  key: "$CI_PROJECT_NAME"
  paths:
    - .gradle/dependency-check-data

The first CI job run will create the cache and the consecutive runs (from the same or different pipelines) will fetch it!

Note: if you run Dependency-Check as a standalone app, the files are created in [JAR]/data/7.0/nvdcache/, where [JAR] is the location of the dependency-check-core JAR file.
OWASP Dependency-Check is a great way of automating vulnerability discovery in our projects, though when running it as part of a CI pipeline per project it adds 3-4 minutes just to download the NVD database. How can we cache this DB when running it with Maven / Gradle on a CI pipeline?
How to cache the OWASP Dependency-Check NVD database on CI
All records in Aerospike belong to a namespace; the set is just metadata, like a tag, on the record. default-ttl is the record's remaining life if the client does not specify one at create or update time. You cannot assign default-ttl per set in the server configuration. (There are some other set-specific config parameters, e.g. disable-eviction and enable-index, that are implemented; default-ttl is not one of them.) But you can achieve the same thing in your client application: for each set you can use a different write policy, and in that write policy define the TTL for creating or updating records in that specific set. In the Java client, for example, it is called WritePolicy.expiration, in seconds. In your specific case, you could set a 2-hour default-ttl (7200) in the server config, so records in the order set get that by default, and for the name set override the server default in the client by creating or updating those records with WritePolicy.expiration = 21600 (6 hours).
I am new to Aerospike. My namespace has multiple sets. I am trying to set different TTLs for different sets in my Aerospike DB namespace. I do not want to use the default-ttl assigned to the namespace; instead I want to set it per set. My config:

namespace test {
  replication-factor 1
  memory-size 1G
  default-ttl 0
}

I referred to this link https://docs.aerospike.com/server/operations/configure, which states that set-specific record policies can be set:

namespace <name> {
  # Define namespace record policies and storage engine
  storage {} # Configure persistence or lack of persistence
  set {}     # (Optional) Set specific record policies
}

but I am not sure which field to use to set the TTL for each set. Say I have two sets in this 'test' namespace named order and name, and I want their TTLs to be 2 hours and 6 hours respectively. Any help would be appreciated. Thanks in advance.
Aerospike Config specific to setName
How is "least recently used" parameter determined? I hope that a dataframe, without any reference or evaluation strategy attached to it, qualifies as unused - am I correct?

Results are cached on the Spark executors. A single executor runs multiple tasks and can hold multiple caches in its memory at a given point in time. Within an executor, caches are ranked by when they were last asked for: a cache just used in some computation always has rank 1, and the others are pushed down. Eventually, when the available space is full, the cache with the last rank is dropped to make room for a new one.

Does a spark dataframe, having no reference and evaluation strategy attached to it, get selected for garbage collection as well? Or does a spark dataframe never get garbage collected?

A dataframe is an execution expression, and no computation is materialised unless an action is called. Moreover, everything is cleared once the executor is done with the computation for that task. Only when the dataframe is cached (before calling an action) are the results kept aside in executor memory for further use, and those result caches are cleared based on LRU as described above.

Based on the answer to the above two queries, is the above strategy correct?

Your example looks like the transformations are done in sequence and the reference to the previous dataframe is not used further (no idea why you are caching). If multiple executions are done by the same executor, it is possible that some results are dropped, and when asked for again they will be recomputed.

N.B. Nothing is executed unless a Spark action is called; transformations are chained and optimised by the Spark engine when an action is called.
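The ranking described above is plain LRU, which can be modelled in a few lines of Python. This is an illustration of the policy only, not Spark's actual block-manager code, and all names are invented:

```python
from collections import OrderedDict


class LRUCacheStore:
    # Minimal model of LRU ranking: the most recently asked-for entry
    # moves to rank 1; when space runs out, the lowest-ranked is dropped.
    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()   # oldest (lowest rank) first

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = value
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)   # evict least recently used

    def get(self, key):
        if key not in self._entries:
            return None                          # dropped: must recompute
        self._entries.move_to_end(key)           # becomes rank 1 again
        return self._entries[key]


store = LRUCacheStore(capacity=2)
store.put("df1", "partition-A")
store.put("df2", "partition-B")
store.get("df1")                 # df1 is now rank 1, df2 is last
store.put("df3", "partition-C")  # over capacity: df2 gets evicted
```

A dropped entry is not an error: just as in Spark, asking for it again simply triggers recomputation.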
I have the following strategy to change a dataframe df. df = T1(df) df.cache() df = T2(df) df.cache() . . . df = Tn(df) df.cache() Here T1, T2, ..., Tn are n transformations that return spark dataframes. Repeated caching is used because df has to pass through a lot of transformations and is used multiple times in between; without caching, lazy evaluation of the transformations might make using df in between very slow. What I am worried about is that the n dataframes that are cached one by one will gradually consume the RAM. I read that spark automatically un-caches "least recently used" items. Based on this I have the following queries - How is "least recently used" parameter determined? I hope that a dataframe, without any reference or evaluation strategy attached to it, qualifies as unused - am I correct? Does a spark dataframe, having no reference and evaluation strategy attached to it, get selected for garbage collection as well? Or does a spark dataframe never get garbage collected? Based on the answer to the above two queries, is the above strategy correct?
Does spark automatically un-cache and delete unused dataframes?
Add the below code to the next.config.js file of your project to cache images for 60 seconds:

module.exports = {
  images: {
    minimumCacheTTL: 60,
  },
}

You can change the cache time by using a different number of seconds instead of 60. Here is the Next.js official documentation: https://nextjs.org/docs/api-reference/next/image#style
I'm getting response from api that is a list of objects. The object contains property imageUrl, which is a link to to the image. How can I set cache-control to that image?
How can I cache dynamic images in Next app?
Your assumption is close, except that in practice it is slightly more optimized. Cache reads and writes are performed on the underlying hash table and appended to internal ring buffers. When the buffers reach thresholds, a task is submitted to Caffeine.executor to call Cache.cleanUp. When this maintenance cycle runs (under a lock): 1. The buffers are drained and the events replayed against the eviction policies (e.g. LRU reordering). 2. Any evictable entry is discarded and a task is submitted to Caffeine.executor to call RemovalListener.onRemoval. 3. The duration until the next entry expires is calculated and submitted to the scheduler. This is guarded by a pacer to avoid excessive scheduling, by ensuring that ~1s passes between scheduled tasks. 4. When the scheduler runs, a task is submitted to Caffeine.executor to call Cache.cleanUp (see #3). The scheduler does the minimal amount of work and any processing is deferred to the executor. The maintenance work is cheap thanks to O(1) algorithms, so it may occur often depending on cache activity. It is optimized for small batches of work, so the enforced ~1s delay between scheduled calls helps capture more work per invocation. If the next expiration event is in the distant future then the scheduler won't run until then, though calling threads may trigger a maintenance cycle due to their activity on the cache (see #1, #2).
I am using Caffeine in the following configuration: Cache<String, String> cache = Caffeine.newBuilder() .executor(newWorkStealingPool(15)) .scheduler(createScheduler()) .expireAfterWrite(10, TimeUnit.SECONDS) .maximumSize(MAXIMUM_CACHE_SIZE) .removalListener(this::onRemoval) .build(); private Scheduler createScheduler() { return forScheduledExecutorService(newSingleThreadScheduledExecutor()); } Will I be correct to assume that the onRemoval method will be executed on the newWorkStealingPool(15) ForkJoinPool, and that the scheduler will be invoked only to find the expired entries that need to be evicted? Meaning it will go something like this: the single-thread scheduler is invoked (every ~1 second), finds all the expired entries to be evicted, and executes onRemoval for each of the evicted entries on the newWorkStealingPool(15) defined in the cache builder? I haven't found documentation that explains this behavior, so I am asking here. Tnx
Caffeine combining both scheduler and executor service
In the progressIndicatorBuilder of CachedNetworkImage there's a parameter called downloadProgress; pass its value to CircularProgressIndicator to display loading progress.

Note: replace NetworkUrl with the actual URL.

CachedNetworkImage(
  height: 400.h,
  imageUrl: "NetworkUrl",
  progressIndicatorBuilder: (context, url, downloadProgress) => Container(
    margin: EdgeInsets.only(top: 100.h, bottom: 100.h),
    child: CircularProgressIndicator(
      value: downloadProgress.progress,
      color: AppColors.lightBlack,
    ),
  ),
  errorWidget: (context, url, error) => Icon(Icons.error), // replace with your own error widget
);
I am displaying an image using CachedNetworkImage, and when I click to view an image, it first shows the placeholder image and then loads the image. I don't understand why it does not show a progress indicator while the image loads. I want to show a progress indicator when the image loads. In case the image is not available, the placeholder image should be shown. What am I doing wrong here? Container( height: size.height * 0.35, width: double.infinity, child: _imageUrl != null ? CachedNetworkImage( imageUrl: _imageUrl, progressIndicatorBuilder: (context, url, downloadProgress) => Center(child: Loading()), errorWidget: (context, url, error) => Icon(Icons.error), ) : Image.asset('assets/images/placeholder.png'), ),
Cached Network Image does not show progress indicator when image is loading
"We thought it could be due to the fact that vector2 has better spatial locality, since the array is accessed by the inner loop constantly, while in vector1, only one element is accessed at a time." Well, both codes have the same access pattern, iterating over the array v with a stride of 1; in terms of spatial locality in the cache, both codes are the same. However, the second code: void incrementVector1(INT4* v, int n) { for (int i = 0; i < n; ++i) { for (int k = 0; k < 100; ++k) { v[i] = v[i] + 1; } } } has better temporal locality, because you access the same element 100 times in a row, whereas in: void incrementVector2(INT4* v, int n) { for (int k = 0; k < 100; ++k) { for (int i = 0; i < n; ++i) { v[i] = v[i] + 1; } } } each element's accesses are n iterations apart. So either you made a mistake, your teacher is playing some kind of strange game, or I am missing something obvious.
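A toy fully-associative LRU cache simulator (sizes invented and far smaller than the real 12 MB cache, so the effect is visible) makes the temporal-locality argument concrete once the array no longer fits in the cache:

```python
from collections import OrderedDict


def count_misses(trace, cache_lines, line_size=16):
    # Fully-associative LRU cache; line_size is in array elements
    # (16 four-byte INT4 values per 64-byte line).
    cache = OrderedDict()
    misses = 0
    for index in trace:
        line = index // line_size
        if line in cache:
            cache.move_to_end(line)              # refresh LRU position
        else:
            misses += 1
            cache[line] = None
            if len(cache) > cache_lines:
                cache.popitem(last=False)        # evict least recently used
    return misses


n = 1024  # 64 lines of data vs. a toy cache of only 8 lines
trace_v1 = [i for i in range(n) for _ in range(100)]   # i outer, k inner
trace_v2 = [i for _ in range(100) for i in range(n)]   # k outer, i inner

misses_v1 = count_misses(trace_v1, cache_lines=8)      # only cold misses
misses_v2 = count_misses(trace_v2, cache_lines=8)      # misses on every pass
```

With these sizes, vector1's element is reused while its line is still resident, so it pays only the 64 cold misses; vector2 evicts every line before returning to it and pays 64 misses on each of the 100 passes. If the whole array fit in the cache, both traces would show the same 64 misses, which supports the answer's point that vector1, not vector2, should have fewer misses.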
So I have this question from my professor, and I cannot figure out why vector2 is faster and has fewer cache misses than vector1. Assume that the code below is valid, compilable C code. Vector2: void incrementVector2(INT4* v, int n) { for (int k = 0; k < 100; ++k) { for (int i = 0; i < n; ++i) { v[i] = v[i] + 1; } } } Vector1: void incrementVector1(INT4* v, int n) { for (int i = 0; i < n; ++i) { for (int k = 0; k < 100; ++k) { v[i] = v[i] + 1; } } } NOTE: INT4 means the integer is 4 bytes in size. In terms of CPU specs: cache size = 12288KB, line size = 64B, and only consider this single cache interacting with main memory. Question: why does vector2 have a faster runtime than vector1? And why does vector1 have more cache misses than vector2? A few classmates and I worked on this for a while and couldn't figure it out. We thought it could be because vector2 has better spatial locality, since the array is accessed by the inner loop constantly, while in vector1 only one element is accessed at a time? We are not sure if this is correct, and also not sure how to bring cache lines into this either.
Cache misses when accessing an array in nested loop
Migration and replication of shared data are two ways in which caches can be beneficial. Migration: Let's say processing unit A wants to do a lot of operations on a shared data item. This data item might be located in memory attached to processing unit B, or in some kind of faraway shared memory. Accessing this item would therefore be slow. To solve this problem, we migrate/move said item to a local cache attached to A, where we can access it much more quickly, and without load on the shared memory. This is a benefit of caching we have in single-processor systems as well. Replication: This is only relevant in a multiprocessor context. Let's say processing units A and B simultaneously want to access data in shared memory. Since both requests can't be served at once, one of the two has to wait for the other request to complete, introducing additional latency. If instead this data was replicated/copied to a local cache at A and/or B, they would not have to contend for the shared memory resources, because one or both could instead access their own local copy. In short: migration means moving data to a local cache because it's faster than the shared or remote memory; replication means having local copies of data for multiple processing units, so that they don't have to contend for access to shared memory.
In "Computer Organization and Design, RISC-V ed.", in the part on "Basic Schemes for Enforcing Coherence", I'm confused by two concepts, migration and replication. The given definitions are: In a cache-coherent multiprocessor, the caches provide both migration and replication of shared data items: Migration: A data item can be moved to a local cache and used there in a transparent fashion. Migration reduces both the latency to access a shared data item that is allocated remotely and the bandwidth demand on the shared memory. Replication: When shared data are being simultaneously read, the caches make a copy of the data item in the local cache. Replication reduces both latency of access and contention for a read shared data item. Supporting migration and replication is critical to performance in accessing shared data, so many multiprocessors introduce a hardware protocol to maintain coherent caches. I think replication is quite familiar from ordinary cache systems, but I cannot figure out how migration works.
What is the difference between migration and replication in cache coherence?
Clearing the database once per test class: add the following code to your Android test class (note: with JUnit 4, a @BeforeClass method inside a companion object must also be annotated @JvmStatic): companion object { @BeforeClass @JvmStatic fun clearDatabase() { InstrumentationRegistry.getInstrumentation().uiAutomation.executeShellCommand("pm clear PACKAGE_NAME").close() } } Clearing the database before every test: an alternative way to have the database cleared before each test run is to set the clearPackageData flag while using Android Test Orchestrator. This will "remove all shared state from your device's CPU and memory after each test." Add the following statements to your project's build.gradle file: android { defaultConfig { ... testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" // The following argument makes the Android Test Orchestrator run its // "pm clear" command after each test invocation. This command ensures // that the app's state is completely cleared between tests. testInstrumentationRunnerArguments clearPackageData: 'true' } testOptions { execution 'ANDROIDX_TEST_ORCHESTRATOR' } } dependencies { androidTestImplementation 'androidx.test:runner:1.1.0' androidTestUtil 'androidx.test:orchestrator:1.1.0' } A commenter reports that the first code block did not work for them: the app would not boot afterwards and subsequent tests failed, almost as if the app itself had been wiped. Keep in mind that pm clear removes all app data, not just the cache.
I wrote tests in Android Studio using Espresso. There are a number of tests for which I have to delete the app's cache before running them. I tried a lot of options that I know, but nothing worked out. I searched the site for the problem and tried the results, but none of them worked either. For example, there is a stage in the app that causes a change in gendered addressing (my app is in a foreign language), and I test a number of things in this section. I log in with 3 different test users, and each one has a different view that can't change unless the cache is deleted. Without deleting the cache I can't run them together, though I can run each one of them separately. The app identifies itself the moment the user logs in, so to switch users I need to delete the app cache. I attached some links here of what I tried that should have worked but didn't. They may be able to help and explain: Clear database before testcase espresso Reset app state between InstrumentationTestCase runs https://github.com/chiuki/espresso-samples/issues/3 https://discuss.appium.io/t/android-how-to-clear-app-data-before-test/7166/10
Clear/delete cache before every test case using Espresso in Android Studio
It seems like the best solution I can find is to just leave the HTTP cache layer alone and use a separate cache layer: const { RESTDataSource } = require('apollo-datasource-rest'); const { PrefixingKeyValueCache } = require('apollo-server-caching'); class MoviesAPI extends RESTDataSource { initialize(config) { super.initialize(config); this.movieCache = new PrefixingKeyValueCache(config.cache, 'movies:'); } async getMovie(id) { const cached = await this.movieCache.get(id); if (cached) return cached; const movie = await this.get(`movies/${id}`); await this.movieCache.set(id, movie); return movie; } async updateMovie(id, data) { const movie = await this.put(`movies/${id}`, data); await this.movieCache.delete(id); return movie; } } It's still using the application cache, but with a different prefix than the HTTP cache.
Using the simple "Movies API" example from the documentation, I added a ttl to the getMovie function so that the result is cached for 10 minutes. How can I invalidate the cache in the updateMovie function? const { RESTDataSource } = require('apollo-datasource-rest'); class MoviesAPI extends RESTDataSource { async getMovie(id) { return this.get(`movies/${id}`, {}, { cacheOptions: { ttl: 600 } }); } async updateMovie(id, data) { const movie = await this.put(`movies/${id}`, data); // invalidate cache here?! return movie; } } I know that the KeyValueCache interface that is passed to ApolloServer provides a delete function. However, this object doesn't seem to be exposed in data sources. It's wrapped inside HTTPCache, which only exposes a fetch function. The KeyValueCache is also wrapped inside a PrefixingKeyValueCache, so it seems almost impossible to invalidate something in the cache without some nasty hacks that assume the internal implementation of getMovie.
How to invalidate cache in Apollo Server RESTDataSource
The ORDER clause in a sequence is only meaningful in RAC. It guarantees that sequence values are generated in order of request, no matter which instance received the request. If you don't use ORDER, then to illustrate, assume a sequence defined with cache=20. Instance 1 has sequence values 1 through 20 in its cache. Instance 2 has sequence values 21 through 40 in its cache. Normally, concurrent sessions might generate sequence values in this order: 1, 2, 21, 3, 22, 4, 23, and 24. But with the ORDER clause these values will be 1, 2, 3, 4, 5, 6, 7, ... Hence, it is mentioned in the documentation that if the purpose of the sequence is to generate unique values then ORDER is not needed, but it is needed if the sequence is used to establish chronological order in RAC. Cache: If you specify CACHE 20 on a sequence, then Oracle takes 20 values in a batch, puts them in the SGA, and updates the data dictionary only once. So if you want to use 35 sequence values, the data dictionary will be updated only 2 times, improving performance compared to 35 updates in the case of NOCACHE. The cache exists to improve the performance of the sequence, but on database shutdown you will lose any unused buffered sequence values. Hope it is useful. Cheers!!
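The data-dictionary arithmetic in this answer can be checked with a one-line toy calculation (a sketch of the counting argument only, not of Oracle internals; the function name is mine):

```python
import math

def dictionary_updates(values_used, cache_size):
    """Each exhausted batch of cached sequence values costs one update."""
    return math.ceil(values_used / cache_size)

print(dictionary_updates(35, 20))  # 2 updates with CACHE 20
print(dictionary_updates(35, 1))   # 35 updates, as with NOCACHE
```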
I am new to databases. I have been reading about sequences in Oracle, and I came to know about the ORDER clause. I should quote the paragraph: Specify ORDER to guarantee that sequence numbers are generated in order of request. This clause is useful if you are using the sequence numbers as timestamps. Guaranteeing order is usually not important for sequences used to generate primary keys. ORDER is necessary only to guarantee ordered generation if you are using Oracle Database with Real Application Clusters. If you are using exclusive mode, sequence numbers are always generated in order. I did not understand any of this. Link to the site is a link! Can anybody help me out? Pardon any grammatical mistakes.
Why should I use CACHE and ORDER in a sequence in Oracle?
As pointed out, it takes some time for Ehcache to set up, and it is not working completely with @PostConstruct. In that case, make use of ApplicationStartedEvent to load the cache. GitHub repo: spring-ehcache-demo @Service class CodeCategoryService { @EventListener(classes = ApplicationStartedEvent.class) public void listenToStart(ApplicationStartedEvent event) { this.repo.findByCodeValue("100"); } } interface CodeCategoryRepository extends JpaRepository<CodeCategory, Long> { @Cacheable(value = "codeValues") List<CodeCategory> findByCodeValue(String code); } Note: There are multiple ways, as pointed out by others. You can choose as per your needs.
I have a Spring Boot application connecting to a SQL Server database, and I need some help with caching. I have a table CodeCategory which holds lists of codes for many code types. This table is loaded every month, and the data changes only once a month. I want to cache this entire table when the application starts; any subsequent calls to the table should get values from this cache instead of hitting the database. For example, List<CodeCategory> findAll(); I want to cache the above DB query result during application startup. If there is a DB call like List<CodeCategory> findByCodeValue(String code), it should fetch the result from the already-cached data instead of calling the database. Please let me know how this can be achieved using Spring Boot and Ehcache.
How to cache data during application startup in Spring boot application
By changing the parameter of precacheAndRoute as below, it worked for me: workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);
I am quite new to React and Workbox. I am trying to make my Electron React app able to cache all images and data so they are available while it is offline. This is exactly what I am trying to accomplish, as in this YouTube video, from 14:00 to 21:00: Building PWAs with React and Workbox, /watch?v=Ok2r1M1jM_M But this command: "start-sw":"workbox injectManifest workbox-config.js && workbox copylibraries build/ && http-server build/ -c 0" is giving this error: C:\Users\rajesh.ram\Desktop\Day\K\demok\client>npm run start-sw > [email protected] start-sw C:\Users\rajesh.ram\Desktop\Day\K\demok\client > workbox injectManifest workbox-config.js && workbox copylibraries build/ && http-server build/ -c 0 Using configuration from C:\Users\rajesh.ram\Desktop\Day\K\demok\client\workbox-config.js. Service worker generation failed: Unable to find a place to inject the manifest. Please ensure that your service worker file contains the following:/(const precacheManifest =)\[\](;)/ Please help me fix this or suggest alternative packages/repositories/videos that make it possible.
Please ensure that your service worker file contains the following:/(const precacheManifest =)\[\](;)/
Install Redis: https://redis.io/download, or if the OS is Windows, https://github.com/MicrosoftArchive/redis/releases Run Redis: $ src/redis-server (on Windows, run redis-server.exe) Set the .env variables: REDIS_HOST=127.0.0.1 REDIS_PASSWORD=null REDIS_PORT=6379 Add the Redis config to database.php: 'redis' => [ 'client' => 'predis', 'default' => [ 'host' => env('REDIS_HOST', 'localhost'), 'password' => env('REDIS_PASSWORD', null), 'port' => env('REDIS_PORT', 6379), 'database' => 0, ], ],
I just set up the GeneaLabs/laravel-model-caching package. When running serve I got an error about a missing Redis class, so I ran composer require predis/predis. After that I got this error: No connection could be made because the target machine actively refused it. [tcp://127.0.0.1:6379] I am still working on it but have not made any progress yet. Any idea? PS: I am working on localhost with MySQL, not Homestead.
Redis Connection refuse [tcp://127.0.0.1:6379]
Infinispan doesn't require your values to be Serializable. This is only needed for clustered caches, but for your use case it looks like a local-only cache could be better suited. Obviously, if you need the caches to replicate data across servers and/or data centers, then Infinispan will need some way to marshall your objects across the wire. If you want to use those features too, you can plug in custom Externalizer implementations for your types. Plugging in custom Externalizer implementations is possibly a good idea even for your Serializable types, as the custom Externalizer framework will typically perform better than Java's standard serialization. From the comments: java.io.Externalizable is reasonably fast (roughly 10 times faster than Serializable at last check), and besides speed, the primary benefit of Infinispan's custom externalizers is the more compact output size, which helps with memory consumption, reduces GC work, and speeds up data transfer over the network.
I am trying to implement a cache with prefetch functionality. The plan is that if the cache entry is new, it will be returned as is. If the cache entry is older than a set age, the "old" entry will be returned while a new thread is spawned to refresh it. If it is even older than that, the entry will be updated and then returned. The goal is to avoid a cache miss where the user needs to wait for the cache to be refreshed. I have sort of gotten a working model using a hashmap as a cache store, but this seems kind of dirty. So I want to use the javax.cache.cache-api package for this, and I chose org.infinispan.infinispan.jcache as an implementation. The problem is that the objects I want to save in the cache are not serializable, and I can't figure out how to make Infinispan allow them. The reason they aren't serializable is that they store the functions used to update the cache entry. Question is: Can you store non-serializable objects like this with Infinispan, and if so, how? Or is there any out-of-the-box solution that already does what I am after?
Caching non-serializable objects
This is quite strange, it's not happening to me locally. Anyway, you can use uncached Track.uncached { Track.order("RANDOM()").first } Note that you don't need to limit(1) as first already takes a single item. You also don't need that loop. To take a different track use current_track = Track.find(10) random_track = Track.where("id <> ?", current_track.id).order("RANDOM()").first You will save a bunch of unneeded code.
I'm trying to choose a random track but want it to avoid matching a specific record. This is my code, but it keeps returning the CACHE of the query so that the while loop never ends. current_track = Track.find(10) random_track = Track.limit(1).order("RANDOM()").first while random_track == current_track random_track = Track.limit(1).order("RANDOM()").first Rails.logger.debug "getting another random one..." random_track end What is the best way to prevent this?
Preventing caching a random SQL query in Rails
I think the answer is pretty simple. Your server does not automatically transfer you to https://abo-deg.surge.sh when you type abo-deg.surge.sh on your mobile. I was able to run it in Chrome for Android by typing the full URL with https://. The Service Worker API is available only for websites running over HTTPS, because "having modified network requests wide open to man-in-the-middle attacks would be really bad" (https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API). Please keep in mind that your code will only work on browsers that support service workers, so don't expect it to work on iOS.
I set up a service worker for a static web application that needs to work on mobile phones. I'm using React/webpack 2 for this application. The service worker installs and works fine when I open the application on desktop, but when I try to visit the application on mobile, it does not install. When the SW finishes installing, it shows an alert message ("onInstalled"). https://abo-deg.surge.sh (here is an example) https://abo-deg.surge.sh/survey/background https://github.com/strongharris/sample (sw.js located inside src, main entrypoint: src/index.js, webpack.config) The alert message shows up on desktop, but not in mobile browsers. Am I missing something? Is there a different way to set up a service worker for mobile web applications? Any resources, tips, or guesses would be greatly appreciated.
Service Worker not working for Mobile Web (Chrome, Firefox, IE)
Per-user data that does not need to roam should be stored under CSIDL_LOCAL_APPDATA. You can get this path by calling SHGetFolderPath (or Environment.GetFolderPath in .NET). Use CSIDL_APPDATA instead if you need the data to roam in domain environments but it is not a good idea for large files...
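On modern Windows, CSIDL_LOCAL_APPDATA corresponds to the %LOCALAPPDATA% environment variable, so the layout can be sketched in a few lines (the helper name and directory layout are my own assumptions; a real application should prefer SHGetFolderPath / Environment.GetFolderPath as the answer says):

```python
import ntpath  # Windows path semantics, so the sketch runs anywhere

def app_cache_dir(app_name, env):
    """Per-user, non-roaming cache location under %LOCALAPPDATA%."""
    base = env.get("LOCALAPPDATA")
    if not base:
        raise RuntimeError("LOCALAPPDATA unset; fall back to SHGetFolderPath")
    return ntpath.join(base, app_name, "Cache")

fake_env = {"LOCALAPPDATA": r"C:\Users\alice\AppData\Local"}
print(app_cache_dir("MyApp", fake_env))
# C:\Users\alice\AppData\Local\MyApp\Cache
```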
I have a Windows application that downloads various files. I would like to cache this files but I am unsure where to put the cache. It should live in the user's home folder, so that they always have access, but I don't think that folders such as Documents are appropriate. The equivalent of what I am looking for on macOS and Linux are: ~/Library/Caches/MyApp ~/.cache/MyApp Where should application caches be stored on Windows? Note that unlike in this question, I am not asking about temp files. My cache should be re-used across sessions. Note that I am not building a Windows Store app.
Where should I store application caches on Windows?
You can implement this manually by writing a pair of entries for each item you want to cache: item: the actual item, with a TTL set to the hard eviction; item/fresh: a boolean with a TTL set to the soft eviction (i.e. mark as stale). This TTL should be less than the hard TTL. When querying, always ask for both keys. Scenario 1, cache miss (item does not exist): fetch the item from somewhere, populate the cache with both keys, and return the item. Scenario 2, cache hit (both keys exist): return the item. Scenario 3, stale (item/fresh does not exist): create a job in the background that refreshes both keys, and return the cached (stale) item. From the comments: if many servers hit scenario 3 at once and recomputing the value is expensive, they may all run the same refresh job simultaneously; one way around this is to have the first server that wants to revalidate do a GETSET with a small expiration time for safety, and only perform the background update if the value is missing. Separately, the same effect can be achieved with a single key by requesting its TTL and refreshing when less than X time is left, which roughly halves the memory footprint and bandwidth (though it has the same stampede issue).
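Here is a runnable sketch of the two-key pattern, written against a minimal key-value store so it runs without a server (the Store class, helper names, and the omitted TTLs are my own assumptions; in production the store would be Redis and each set would carry the hard or soft expiry):

```python
import threading

class Store:
    """In-memory stand-in for Redis; TTL handling is omitted for brevity."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value
    def delete(self, key):
        self.data.pop(key, None)

def get_with_refresh(store, key, fetch_fn):
    item = store.get(key)
    fresh = store.get(key + "/fresh")
    if item is None:                      # scenario 1: miss, fetch inline
        item = fetch_fn()
        store.set(key, item)
        store.set(key + "/fresh", True)   # would get the *soft* TTL in Redis
        return item
    if fresh is None:                     # scenario 3: stale, refresh off-path
        def refresh():
            store.set(key, fetch_fn())
            store.set(key + "/fresh", True)
        threading.Thread(target=refresh).start()
    return item                           # scenario 2, or the stale copy
```

Deleting only the key/fresh entry simulates the soft TTL expiring: the stale item is still returned immediately while the refresh happens off the request path.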
I have a Java application that is responsible for serving up various (sometimes large) json responses to client applications. At the app layer, it uses Redis (AWS ElastiCache) to cache the json with TTLs from 30s to 5min depending on the particular request. Some of the requests a rather long-running (5-15s), reaching out to several external services and returning a large amount of data. If the TTL is 60s, that still means that some users are getting unacceptable response times every minute. Instead of immediately evicting the data from the cache when the TTL is up, I'd like to kick off a background task to fetch the data and refresh the cache, while continuing to serve a stale copy of the data if it exists. Is this possible with Redis?
Is it possible to serve stale data from Redis by design?
Create a custom key generator like this: @Component("myKeyGenerator") public class MyKeyGenerator implements KeyGenerator { public Object generate(Object target, Method method, Object... params) { String[] value = new String[1]; long key; CachePut cachePut = method.getAnnotation(CachePut.class); if (cachePut != null) { value = cachePut.value(); } key = (long) params[0]; return value[0] + "-" + key; } } And use it like below: @CachePut(value = "DATA1", keyGenerator = "myKeyGenerator") I haven't tested this, but it should work; at least you will get a basic idea of how to do it.
I am using spring cache with Redis for caching I have the following methods: @CachePut(value ="DATA1", key = "#key1") public Object saveData1(long key1, Object obj) { return obj; } @CachePut(value ="DATA2", key = "#key1") public Object saveData2(long key1, Object obj) { return obj; } This is causing collisions in keys and the data is being overridden. I want to generate the key with the cache name appended to it. Like: DATA1-key1, DATA2-key1. Is it possible? I have seen a few examples which use class name and method name. But I want to use the cache name. Thank you.
Spring KeyGenerator For Appending Cache name to the key
Well... I would say the answer is in the question! If you carefully read the Oracle documentation about cross-session functions, then you'd know: The cross-session PL/SQL function result cache provides a simple way to boost the performance of PL/SQL functions by saving the results of function calls for specific combinations of input parameters in the SGA. These results can be reused by any session calling the same function with the same parameters. This is exactly what you're using when creating your function: FUNCTION get_param(p_parameter IN VARCHAR2) RETURN VARCHAR2 RESULT_CACHE relies_on(nls_session_parameters) IS Indeed, the NLS_SESSION_PARAMETERS view doesn't change between your calls! It is a fixed system view; what changes is what your user sees in it. So you have solutions: simpler and inefficient (sorry): remove the RESULT_CACHE statement from your function declaration, or find a way to refresh the cache between the calls; or add a parameter that will change between your calls: FUNCTION get_param(p_parameter IN VARCHAR2, p_dummy_session_id IN NUMBER) RETURN VARCHAR2 RESULT_CACHE relies_on(nls_session_parameters) IS ... (you might need to actually do something with the "dummy" parameter for it to be taken into account)
Why is the function below not returning a fresh parameter value every time I alter the session to set a new NLS_DATE_FORMAT? FUNCTION get_param(p_parameter IN VARCHAR2) RETURN VARCHAR2 RESULT_CACHE relies_on(nls_session_parameters) IS l_value nls_session_parameters.value%TYPE; BEGIN dbg('Entered Fn_Get_nls_session_Parameter_frc to cache details for .. ' || p_parameter); SELECT SYS_CONTEXT('USERENV', p_parameter) INTO l_value FROM dual; RETURN l_value; EXCEPTION WHEN NO_DATA_FOUND THEN dbg('In NDF : Gng to return value as null.. '); l_value := NULL; RETURN l_value; END get_param;
RESULT_CACHE RELIES_ON (NLS_SESSION_PARAMETERS)
Vary: Authorization is unnecessary; responses to requests with Authorization are automatically private, and won't be cached by shared caches. You can send Cache-Control: public to override this; responses with that can be cached using the normal rules. However, if you want those responses to remain authenticated, you need to impose authentication. You can do that by also sending Cache-Control: no-cache, which will force the cache to check with the origin before serving a stored response. If you just want to have your reverse proxy (e.g., Varnish, nginx) do the caching, it's likely that it has a way of being configured to impose authentication on the "edge", serving the responses from cache when the request has the proper authentication. Check its documentation for details.
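As a sketch of the header logic described in this answer (the helper function is made up; only the Cache-Control values themselves come from the answer):

```python
# Pick a Cache-Control value for a response to a request that carried
# an Authorization header. `cache_control_for` is a made-up helper,
# not part of any framework.

def cache_control_for(shared_cacheable, must_revalidate_auth):
    if not shared_cacheable:
        # default: responses to Authorization'd requests stay private
        return "private"
    if must_revalidate_auth:
        # cacheable by shared caches, but they must check back with the
        # origin each time, letting it impose authentication on every use
        return "public, no-cache"
    return "public"

print(cache_control_for(False, False))  # private
print(cache_control_for(True, True))    # public, no-cache
```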
I'm currently building a REST API. Many of the resources I'm creating will always be identical regardless of who's accessing the resource. The few that aren't will have a Vary: Authorization header. There's two exceptions: You will get a 401 response if you're not authenticated. You might get a 403 response for some resources that you don't have access to. My question is, in this scenario would it still be possible to setup Caching correctly. In particular, I would like to use a reverse proxy such as nginx, varnish or haproxy to offload the main service. Are there elegant solutions to this problem?
HTTP Caching for authenticated REST apis
Hitting refresh has semantics that are dependent upon the browser you're using, but often it will make a conditional request to make sure the user is seeing a fresh response (because they wanted to refresh). If you want to check cache operation, try navigating to the page rather than hitting refresh. On the other hand, if you don't want refresh to behave like this, and you really mean it, Mozilla is prototyping Cache-Control: immutable to do this (but it's early days, and it's Mozilla-only for the moment).
I'm trying to implement cache control in my application. I've set up the Tomcat filter for all fonts, giving a max-age=120. When I request a font for the first time with the cache cleared, the call/response is the following, and as you can see I get the max-age in the response. Now I expect that if I hit refresh the browser won't send the HTTP request again; instead, this is what happens: as you can see, the second request has a cache-control: max-age=0 value, and the response is returned from the server cache. What I'm trying to achieve is to stop the browser from making the request at all. Am I doing something wrong? Thanks
Cache control not working when hit refresh in the browser
Have a look at the StringGet overloaded method. You can pass an array of keys to get the array of values in a single call. It will execute a Redis MGET call under the hood.
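The same batching idea, sketched in Python against a tiny stand-in client since StackExchange.Redis isn't runnable here (FakeRedis and get_plans are made-up names; with redis-py, the real client's mget has the same shape):

```python
# Batch the per-id lookups into one round trip. `client` is anything
# with an mget() that returns values (or None) in key order.

def get_plans(client, plan_ids):
    values = client.mget(plan_ids)        # one MGET instead of N GETs
    return [v for v in values if v is not None]

class FakeRedis:
    """Tiny stand-in so the sketch runs without a server."""
    def __init__(self, data):
        self.data = data
    def mget(self, keys):
        return [self.data.get(k) for k in keys]

client = FakeRedis({"plan:1": "basic", "plan:3": "pro"})
print(get_plans(client, ["plan:1", "plan:2", "plan:3"]))  # ['basic', 'pro']
```

With ~200 ids, collapsing the loop of StringGet calls into one MGET trades many network round trips for a single larger one, which is usually the dominant cost against a remote cache.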
I'm using the Azure Redis cache with a .NET implementation. I have a list of keys that I need to get from the cache. Right now, this is my implementation: List<string> planIds = ...; // already initialized. List<customObj> plans = new List<customObj>(); foreach (string currentId in planIds) { var plan = Database.StringGet(currentId); if (plan != null) plans.Add(plan); } I've simplified it a bit for my explanation, but it works just fine. However, I was wondering if I could do a batch download, similar to the batch set, by passing it a list of keys that I want to retrieve. It's usually around 200+ ids. Is that doable?
Can I query the redis cache to get values by a list of keys?
Check the documentation: http://apidock.com/rails/ActiveSupport/Cache/Store/fetch Fetches data from the cache, using the given key. If there is data in the cache with the given key, then that data is returned. But you can use the force option to rewrite the cached value: Rails.cache.fetch("development_test", force: true) do {'x' => 3} end A commenter reports that this did not work for them, raising NoMethodError: undefined method `call' for nil:NilClass, so your mileage may vary.
I am using Rails 4.2.1 and memcached. I can't seem to cache a hash. How do I cache a hash? irb(main):039:0* irb(main):040:0* Rails.cache.fetch("development_test") do irb(main):041:1* 'hi' irb(main):042:1> end Cache read: development_test Cache fetch_hit: development_test => "hi" irb(main):043:0> Rails.cache.fetch("development_test") Cache read: development_test => "hi" irb(main):044:0> Rails.cache.fetch("development_test") do irb(main):045:1* {'x' => 3} irb(main):046:1> end Cache read: development_test Cache fetch_hit: development_test => "hi" irb(main):047:0> Rails.cache.fetch("development_test") Cache read: development_test => "hi" irb(main):048:0>
How do you cache a hash in Rails?
Is this code correct? As a matter of fact, no. That's some bogus non-existent system register encoding - cache maintenance operations live in the c7 space, not c12. What's more incorrect, though, is the assumption that you can do this. Prior to ARMv8, all cache maintenance operations can only be executed in privileged modes. From userspace, you'd need support from the OS to allow you to request it; Linux, for example, has an ARM-specific syscall which GCC provides an interface to via __clear_cache() - there might be some permission-related caveats, although I don't see any reference to VMA permissions in the current mainline kernel code, so maybe it was a quirk of older kernels. Either way, the only cache maintenance concern which really applies to userspace code is coherency between the instruction and data caches, to cater for JITs or self-modifying code. Things like data cache coherency with main memory should never be relevant to userspace code (which would normally be calling into driver code within the OS in situations where such things did matter), and on many systems require separate outer cache maintenance which only the OS is in a position to manage anyway.
I am using the following code to flush a cache line on a Raspberry Pi 2: static inline void flush(void *addr) { asm volatile("mcr p15, 0, %0, c7, c6, 1"::"r"(addr)); } I am getting an error that this is a privileged instruction when I run it. Is this code correct? Is there any way to flush the cache line from user space on this machine? On x86, clflush works without any modification.
Flush a cache line from user mode on ARMv7(rpi2)
That is called N+1 problem. You should learn how to use Eager loading and load relations first. Then iterate over collection to display data to the user. An example of eager loading of multiple relations from documentation: $books = App\Book::with('author', 'publisher')->get(); An example and tutorial suggested by @Achraf Khouadja: $blogs = blog::with('lastCommenter', 'user')->get();
I am in a situation where I have a lot of relationships, and it is becoming a problem because it slows down the site when I run a lot of queries. For instance, I am doing a foreach loop over all blogs and getting the user who made each blog:

@foreach ($blogs as $blog)
    <a href="{{ route('blog.view', str_slug($blog->title)) }}">{{ $blog->title }}</a>
    {{ $blog->created_at }}
    <a href="{{ viewProfile($blog->user) }}">{{ $blog->user->username }}</a>
    Last Commenter:
    <a href="{{ viewProfile($blog->lastCommenter()->user) }}">
        {{ $blog->lastCommenter()->user->username }}
    </a>
@endforeach

That alone is more than 50+ queries, and if there are 100 blogs the number of queries is way off. I have stored the result in a variable in this view, but I don't really want to put any PHP code in a Blade file. I have also tried caching in the database, but that still runs a few queries against the cache table. I am also using eager loading, which has helped a lot. How can I best avoid all these repeated queries? Thank you very much for your response in advance.
Laravel - Repetitive query in blade view - Reducing number of queries
From this question, it looks like you can do this (C# version) for iOS 9, and it will print out which records are deleted:

var websiteDataTypes = new NSSet<NSString>(new [] {
    // Choose which ones you want to remove
    WKWebsiteDataType.Cookies,
    WKWebsiteDataType.DiskCache,
    WKWebsiteDataType.IndexedDBDatabases,
    WKWebsiteDataType.LocalStorage,
    WKWebsiteDataType.MemoryCache,
    WKWebsiteDataType.OfflineWebApplicationCache,
    WKWebsiteDataType.SessionStorage,
    WKWebsiteDataType.WebSQLDatabases
});

WKWebsiteDataStore.DefaultDataStore.FetchDataRecordsOfTypes(websiteDataTypes, (NSArray records) => {
    for (nuint i = 0; i < records.Count; i++) {
        var record = records.GetItem<WKWebsiteDataRecord>(i);
        WKWebsiteDataStore.DefaultDataStore.RemoveDataOfTypes(
            record.DataTypes,
            new[] { record },
            () => { Console.Write($"deleted: {record.DisplayName}"); });
    }
});

For iOS 8, following ShingoFukuyama/WKWebViewTips, you can check that the Cookies, Caches, and WebKit subdirectories in the Library directory are removed. For iOS 8, after much trial and error, I've reached the following conclusion:
- Use NSURLCache and NSHTTPCookie to delete cookies and caches in the same way as you used to do on UIWebView.
- If you use WKProcessPool, re-initialize it.
- Delete the Cookies, Caches, and WebKit subdirectories in the Library directory.
- Delete all WKWebViews.
I am doing the following to clear the cache from the WKWebView. I would like to know how I can confirm that the cache is actually cleared:

var request = new NSUrlRequest(webURL, NSUrlRequestCachePolicy.ReloadIgnoringLocalAndRemoteCacheData, 0);
NSUrlCache.SharedCache.RemoveAllCachedResponses();
NSUrlCache.SharedCache.MemoryCapacity = 0;
NSUrlCache.SharedCache.DiskCapacity = 0;

Is there a way to print the cache when making the request?
Xamarin iOS clear cache from WKWebView
You have not shown your Dockerfile, so instead I can give you an example of a PHP-FPM container where this problem is fixed. This line fixes the permission error:

usermod -u 1000 www-data

The full Dockerfile:

FROM debian:jessie

RUN apt-get update
RUN apt-get install -y curl \
    mcrypt \
    && apt-get install -y php5 \
    php5-fpm \
    php5-cli \
    php-pear \
    php5-common \
    php5-igbinary \
    php5-json \
    php5-mysql \
    php5-mysqlnd \
    php5-gd \
    php5-curl \
    php5-dev \
    php5-sqlite \
    php5-memcached \
    php5-memcache \
    && usermod -u 1000 www-data

EXPOSE 9000
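The reason the usermod trick works is that Linux stores file ownership as a numeric UID, not a user name: the cache directory is owned by the host user (uid 1000 in the ls output below), so PHP-FPM's www-data (uid 33 on Debian) cannot write to it until it is remapped to uid 1000. A toy Python model of that permission check (the UIDs are the ones from this question; the function is illustrative, not a kernel API):

```python
# Model of owner-only write permission (rwxr-xr-x): only the owning
# UID may write. Remapping www-data to the host UID makes writes work.
FILE_OWNER_UID = 1000            # owner of var/cache, per "drwxr-xr-x 1 1000 staff"

def can_write(process_uid):
    # rwxr-xr-x: group and others lack the write bit
    return process_uid == FILE_OWNER_UID

www_data_uid = 33                # Debian default UID for www-data
print(can_write(www_data_uid))   # False - hence the Symfony cache error

www_data_uid = 1000              # effect of "usermod -u 1000 www-data"
print(can_write(www_data_uid))   # True
```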
I am running Docker containers on OSX. The containers consist of:
- Symfony
- Nginx
- PHP-FPM
- Redis

This is a pretty common setup for running Symfony apps. I am running into some weird folder permission issues and getting an error (see the error screenshot). Symfony can create the folder /var/www/var/cache but then can't write into it. Once the cache folder is created, its permissions are set to this:

10344 drwxr-xr-x 1 1000 staff 68 Apr 15 00:33 cache

The owner of the folder is my local OSX user, which Docker is running under. I've tried to change the folder permissions and owner from Symfony's CLI inside Docker, with no effect. I tried chmod -R 777 from my local console; the permissions are changed, but then Symfony creates a folder inside the cache folder and can't write into it again. I've also tried to disable caching in app_dev.php:

$kernel = new AppKernel('dev', true);
// $kernel->loadClassCache();
$request = Request::createFromGlobals();

And in config.yml:

twig:
    cache: false

Nothing had any effect, so I'm lost here. Any ideas how to solve this issue?
Folder permissions when running Symfony in Docker environment
The Nginx sources show that the cache size enforcement is a small function called from the cache manager process. It checks whether the current cache size is larger than max_size and, if it is, tries to delete the least recently used cache node from both the internal queue and the disk. Unless you redefine it in the configuration file, max_size is simply set to a large platform-dependent constant value.

So the cache manager does not do anything fancy, like trying to determine the disk size. It merely uses a large constant value and leaves it to you to worry about the consequences. If you don't set max_size explicitly, you might eventually end up with tons of errors and a misbehaving cache.

Also note that the cache manager lives in a separate process and may not notice in time that the cache size has gone beyond the limit. For this reason, it would be wise to set max_size to a value at least 100 MB less than the actual HDD size.
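What the cache manager does conceptually is a plain LRU eviction loop: while the cache is over max_size, drop the least recently used node. Here is a minimal Python sketch of that behavior (illustrative only, not nginx's actual implementation; sizes are arbitrary units):

```python
# Toy disk cache with LRU eviction, mimicking the cache manager's
# "evict until under max_size" loop.
from collections import OrderedDict

class DiskCache:
    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()   # key -> size; least recently used first
        self.size = 0

    def touch(self, key):
        # A cache hit moves the entry to the most-recently-used end.
        self.entries.move_to_end(key)

    def store(self, key, size):
        self.entries[key] = size
        self.size += size
        self.manage()

    def manage(self):
        # The cache manager's loop: evict LRU entries until under the limit.
        while self.size > self.max_size and self.entries:
            _, evicted_size = self.entries.popitem(last=False)
            self.size -= evicted_size

cache = DiskCache(max_size=100)
cache.store("a", 60)
cache.store("b", 30)
cache.touch("a")               # "a" becomes most recently used
cache.store("c", 40)           # over the limit -> "b" (LRU) is evicted
print(sorted(cache.entries))   # ['a', 'c']
```

Note how the check runs only when the manager gets a chance to, which mirrors why nginx can temporarily exceed max_size between manager passes.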
We're planning on updating our Nginx cache duration, and I'm going through the documentation to make sure nothing catches us out, when I came across the following:

    max_size sets the upper limit of the size of the cache (to 10 gigabytes in this example). It is optional; not specifying a value allows the cache to grow to use all available disk space. When the cache size reaches the limit, a process called the cache manager removes the files that were least recently used to bring the cache size back under the limit.

Source: https://www.nginx.com/blog/nginx-caching-guide/

We've mounted a separate HDD which will be used solely for the cache, and I'm tempted not to specify the max_size property so that it will just use the available space (as per the above), but the documentation doesn't say whether the cache manager will handle the cleanup when the "limit" is the implied size of the disk. Does anyone know whether the cache manager will handle this, or should we be setting max_size (to the size of the HDD) just to be on the safe side?
Will the Nginx cache manager clean out files when approaching the limit of the disk?
Some additional considerations:
- Can your servers handle whatever additional load would come from serving these assets (e.g. for one page view on your system, might there be 10-20 assets being served)?
- What Cache-Control headers are being set, and do you want to ignore or override them for every asset?
- There is no guarantee that the resources will work correctly if served from another domain, as they may make assumptions about their relative paths or what domain they are served on.

That said, there is no technical reason you couldn't set up a 'backend' for each and then proxy them, using a URL pattern to detect which one to serve. For instance, let's say you have a resource:

http://someparty.com/assets/js/stuff.js

You could set up a backend:

backend thirdparty_someparty {
    .host = "someparty.com";
}

Then you might reference it in some form like:

<script src="//3p/someparty/assets/js/stuff.js"></script>

Then in your VCL:

sub vcl_recv {
    if (req.url ~ "^/3p/someparty") {
        set req.backend_hint = thirdparty_someparty;
        set req.url = regsub(req.url, "^/3p/someparty", "");
        // This way we don't override the Host for logging
        set req.http.HostOverride = "someparty.com";
    }
}

sub vcl_backend_fetch {
    if (bereq.http.HostOverride) {
        set bereq.http.Host = bereq.http.HostOverride;
    }
    unset bereq.http.HostOverride;
}

You mentioned that the content doesn't change often. If the origin is already sending back headers corresponding to hours or days, you won't need to do anything; otherwise you will need to override the TTL in the response:

sub vcl_backend_response {
    if (bereq.http.Host == "someparty.com") {
        if (! beresp.uncacheable && beresp.ttl < 1h) {
            // Use your judgement here
            set beresp.ttl = 1h;
        }
    }
}

Hopefully that gets you started and helps a bit.
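The VCL routing above boils down to a small URL-rewrite function: match a /3p/<name>/ prefix, pick the matching backend host, and strip the prefix before forwarding. A Python sketch of that mapping (backend names and URLs are the illustrative ones from this answer):

```python
# Map a proxy-prefix URL to (backend Host header, rewritten path),
# or None when the request should go to our own backend.
import re

BACKENDS = {"someparty": "someparty.com"}

def route(url):
    m = re.match(r"^/3p/([^/]+)(/.*)$", url)
    if not m or m.group(1) not in BACKENDS:
        return None                      # serve from our own backend
    name, rest = m.group(1), m.group(2)
    return BACKENDS[name], rest          # forward with prefix stripped

print(route("/3p/someparty/assets/js/stuff.js"))
# -> ('someparty.com', '/assets/js/stuff.js')
print(route("/local/page"))
# -> None
```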
I have a website that uses some third-party scripts and images, and they are key to having a fully functional website. But site performance is taking a hit because these third-party resources have poor caching and compression and no CDN, and they don't even change for over a month. I would like to use my Varnish instance to cache these third-party resources too (JS, CSS, and images) for at least a few hours, and serve them from my own server, optimized through my Cloudflare. Is it possible to do this with Varnish?
Can varnish be used as a proxy to cache and serve third party resources?
There are two relevant effects at play. The first and major effect is that all data transfers from the three locations mentioned happen in blocks bigger than a single integer value. From HDD or SSD to main memory, block sizes are typically 4 kB or bigger (the file system cluster size). From main memory to cache, transfers are typically 64-256 bytes (the cache line size).

The second effect is that because most access is sequential, storage is optimized for it. Hard disk file systems store files consecutively, so the read head doesn't need to move to get the next cluster; the disk just rotates. Only after one full rotation does the head move, by a single step. A random seek, in comparison, takes milliseconds. Even SSDs have to wait for a new address, whereas for a sequential read the next address can be predicted.
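The block-transfer effect can be quantified: since data moves in fixed-size blocks, what matters is how many distinct blocks an access pattern touches. A Python sketch with illustrative sizes (a 4 kB block, 4-byte integers):

```python
# Count how many distinct blocks each access pattern pulls in.
import random

BLOCK_SIZE = 4096   # e.g. a 4 kB filesystem cluster
ITEM_SIZE = 4       # a 4-byte integer
N_ITEMS = 100_000

def blocks_touched(indices):
    # Each access fetches the whole block containing the item.
    return len({(i * ITEM_SIZE) // BLOCK_SIZE for i in indices})

sequential = range(N_ITEMS)                                   # adjacent items
random.seed(42)
scattered = random.sample(range(N_ITEMS * 10_000), N_ITEMS)   # spread over a huge file

print(blocks_touched(sequential))  # 98 blocks: every fetched block is fully used
print(blocks_touched(scattered))   # tens of thousands: nearly one block per item
```

Reading the same 100,000 integers thus costs roughly a thousand times more block transfers when the accesses are scattered, before seek latency is even considered.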
I was shown this table in the context of data processing and big data. What are the bars actually measuring? The mechanism of a device for reading always has the same speed: it's not as if a hard drive thinks "OK, this data is sequential, so I'll increase the amount the head reads." I was told it happens because of the cache, though it's somewhat misleading to say the actual read speed is faster if it's the cache that's responsible. Does this happen because an entire page is loaded from secondary storage into primary storage, and if access is sequential a larger portion of the page gets used than if it were random?

I'm also not sure exactly what is being measured: 1) how long a device takes to read something, 2) how long it takes for a device to read something and pass it to the next level in the memory hierarchy, or 3) how long a device takes to read something and pass it to the processor? Come to think of it, I'm not sure there's a difference between the first two: say you have an SSD with read speed x and RAM with read speed y. For something to be loaded into RAM, would it take (x + y) * size_of_page time, or just x * size_of_page?

Obviously there are many different caches along the way: hard drives have a buffer (I don't know if SSDs do), and CPUs can have L1, L2, or any number of caches. It really seems like this table needs more of an explanation.
What is this table trying to convey? Why do sequential reads happen faster than random reads?
I found the answer in this blog post. In order to exclude some entities, you need to create a caching policy by deriving a class from CachingPolicy. By overriding the CanBeCached method, you can return false to prevent caching. This is my working code:

class CacheConfiguration : DbConfiguration
{
    public CacheConfiguration()
    {
        var transactionHandler = new CacheTransactionHandler(new InMemoryCache());
        AddInterceptor(transactionHandler);

        //var cachingPolicy = new CachingPolicy();
        var cachingPolicy = new myCachingPolicy();

        Loaded += (sender, e) => e.ReplaceService<DbProviderServices>(
            (s, _) => new CachingProviderServices(s, transactionHandler, cachingPolicy));
    }
}

public class myCachingPolicy : CachingPolicy
{
    protected override bool CanBeCached(
        System.Collections.ObjectModel.ReadOnlyCollection<System.Data.Entity.Core.Metadata.Edm.EntitySetBase> affectedEntitySets,
        string sql,
        IEnumerable<KeyValuePair<string, object>> parameters)
    {
        string[] excludedEntities = { "permView1", "permView2", "permView3" };

        if (affectedEntitySets.Where(x => excludedEntities.Contains(x.Table)).Any())
        {
            return false;
        }
        else
        {
            return base.CanBeCached(affectedEntitySets, sql, parameters);
        }
    }
}
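Stripped of the EF plumbing, the pattern is a predicate the cache consults before storing a query result: refuse to cache anything that touched an excluded table. A language-neutral Python sketch of that predicate (table names are the illustrative ones from this answer):

```python
# Decide whether a query result may be cached, based on which tables
# the query touched. Row-level-security views are never cacheable.
EXCLUDED_TABLES = {"permView1", "permView2", "permView3"}

def can_be_cached(affected_tables):
    # Any overlap with the excluded set disqualifies the result.
    return not (set(affected_tables) & EXCLUDED_TABLES)

print(can_be_cached(["Orders", "Customers"]))   # True
print(can_be_cached(["Orders", "permView1"]))   # False
```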
I'm using EFCache to provide second-level caching in my EF context. I ran into a problem where one of my entities is connected to a view that provides row-level security: the view filters rows based on some parameters. With second-level caching enabled, all users get the same result! I'm looking for a way to exclude certain entities from caching; any help is welcome. This is my caching configuration:

class CacheConfiguration : DbConfiguration
{
    public CacheConfiguration()
    {
        var transactionHandler = new CacheTransactionHandler(new InMemoryCache());
        AddInterceptor(transactionHandler);

        var cachingPolicy = new CachingPolicy();

        Loaded += (sender, e) => e.ReplaceService<DbProviderServices>(
            (s, _) => new CachingProviderServices(s, transactionHandler, cachingPolicy));
    }
}
Exclude certain entities from second-level caching
The solution is to open Chrome's dev tools (right click, Inspect Element), click the Network tab, and check "Disable cache". Reload the first URL, then try the second; with caching disabled, the redirect should no longer occur. Chrome only redirects from cache if the page was initially loaded with caching enabled.
I'm setting up a local development server on my Mac using nginx in place of Apache. I'm basically there, but having one issue. I have multiple web apps, and each is set up using sites-available and sites-enabled - no issues here. The issue is that my browser of choice is Chrome, and some weird caching is causing the first-visited app to load every time. For example, I have:

site1.dev
site2.dev

If I load site1.dev, it loads without issue. If I then load site2.dev, it's automatically redirected to site1.dev. I see this as a caching issue because if I use Chrome's Incognito mode I don't have the same problem (nor do I have it in Firefox). Does anyone know what could be going on here, or what the solution could be? Thanks in advance!
Nginx Server Caching in Chrome
Your answer is good, but it requires changes to both inbound and outbound DTOs, and it exposes a parameter to the end user of the service that they might not care about (because it's more of an implementation detail). I was hoping for something that was entirely internal.

I ended up making sure my AssemblyVersion attribute was set to change revision/build numbers on every build:

[assembly: AssemblyVersion("1.0.*")]

Then I created the following helper class to pull that version number out as a string:

internal class AssemblyVersion
{
    public static string Version
    {
        get { return Assembly.GetExecutingAssembly().GetName().Version.ToString(); }
    }
}

Finally, I add the version number to every inbound DTO's CacheKey property. Now the DLL version number is stored as part of the cache key, and new builds of the DLL are guaranteed not to use old-version cache entries:

[DataContract]
[Route("/cachedhello/{Name}")]
public class CachedHello : IReturn<string>
{
    [DataMember]
    public string Name { get; set; }

    public string CacheKey
    {
        get { return string.Format("urn:cachedhello:nm={0}:ver={1}", Name, AssemblyVersion.Version); }
    }
}
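The core trick - baking the build version into every cache key so a new deployment automatically misses all old entries - can be sketched in a few lines of Python. The version strings here are made up to stand in for the auto-incremented assembly version:

```python
# Versioned cache keys: a new build produces different keys, so stale
# entries written by the previous build are simply never looked up.
ASSEMBLY_VERSION = "1.0.6244.1"   # stand-in for the auto-incremented build number

def cache_key(name):
    return "urn:cachedhello:nm=%s:ver=%s" % (name, ASSEMBLY_VERSION)

old_key = "urn:cachedhello:nm=World:ver=1.0.6243.9"  # written by the previous build

print(cache_key("World"))
print(cache_key("World") == old_key)  # False: old entries are never hit
```

Orphaned entries from old builds linger until evicted, but they can never be served, which is exactly the property wanted here.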
I continually burn myself when I'm testing a change to my ServiceStack service, say using the browser interface: I don't see my new changes, and it turns out it's because the data is cached. So I clear the cache and all is well. I'm wondering if anyone has somehow included a build/version number in their cache keys, OR perhaps done something to clear the cache as part of the deploy process.
ServiceStack cache to include object/software version number
In regards to lifespan, accessing an entry in the cache doesn't affect it; only maxIdle is affected by an access. cache.containsKey will count as an access, so max idle will be refreshed. The only way to avoid updating idleness is by iterating over the entries, or by accessing the entry through the DataContainer directly using peek (shown here):

DataContainer<K, V> container = cache.getAdvancedCache().getDataContainer();
InternalCacheEntry<K, V> entry = container.peek(key);

Note that this may not work properly with a distributed cache, since accessing the data container only reads local contents. It is also mentioned here that max idle shouldn't be used in a clustered cache, as it is not guaranteed to refresh idleness across the cluster.
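The get-vs-peek distinction comes down to whether the read updates the entry's last-access timestamp. This is a toy Python model of a max-idle cache illustrating that difference - not Infinispan's API, and time is passed explicitly to keep it deterministic:

```python
# Toy max-idle cache: get() refreshes the last-access time (resetting
# the idle clock), peek() reads without touching any metadata.
class IdleCache:
    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.data = {}        # key -> (value, last_access)

    def put(self, key, value, now):
        self.data[key] = (value, now)

    def get(self, key, now):
        value, _ = self.data[key]
        self.data[key] = (value, now)   # the access refreshes idleness
        return value

    def peek(self, key):
        return self.data[key][0]        # no metadata update

    def is_expired(self, key, now):
        _, last = self.data[key]
        return now - last > self.max_idle

c = IdleCache(max_idle=10)
c.put("k", "v", now=0)
c.peek("k")                         # quiet read: idle clock untouched
print(c.is_expired("k", now=11))    # True: idle since t=0

c.put("k", "v", now=0)
c.get("k", now=5)                   # normal read: idle clock reset to t=5
print(c.is_expired("k", now=11))    # False: idle for only 6 time units
```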
Is there a way to examine an entry in a cache (an org.infinispan.Cache implementation) without affecting the expiration policy? Meaning, if I have configured maxIdle, the read operation won't affect the time at which the entry is going to be evicted - something like Ehcache's Cache.getQuiet(Object key). Also, if I call cache.containsKey, does that have an effect on idleness?
get an entry from Infinispan cache without affecting last modify timestamp