After working with temp files a bit, I don't believe this is a bug; it is a feature that temporary files are deleted when they have not been opened for a period of time. I believe you can manage this more directly by differentiating between normal and temporary files. As described in a much more detailed article, you can use a ManagedURL protocol to fix your issue:

```swift
public protocol ManagedURL {
    var contentURL: URL { get }
    func keepAlive()
}

public extension ManagedURL {
    func keepAlive() { }
}

extension URL: ManagedURL {
    public var contentURL: URL { return self }
}
```

To keep the background operation alive:

```swift
URLSession.shared.uploadTask(with: request, fromFile: fileToUpload.contentURL) { _, _, _ in
    temporaryFile.keepAlive()
}
```
Surprisingly, and after some time debugging this, I've found that my app is purging files from the tmp directory after a minute or so, without any other user action, even while the app is running in the foreground. According to the docs, the temporary directory should keep the files during app execution and only remove them afterwards, or at most on its startup. I don't want to move/copy to the Caches directory as a workaround here; I'm more surprised about why this is actually happening. If that helps, I'm picking from the Files app (using UIDocumentPickerViewController). To make it even better, sometimes there are files that persist there for a long time, even though they are older than others that are removed instantly after picking. This can result in bad access: your user selects a file, goes to grab a coffee while the app is still running, then presses the button to upload it somewhere, and the file is gone. I can replicate it every time with a simple app.
Temporary files are being removed during execution on iOS
I am not familiar with Python, but I checked the documentation and the method names are the same, so you may apply the following structure to your code. Here is the redis-cli version for both sets and lists.

Sets:

```
127.0.0.1:6379> sadd company_name name1 name2 name3 name4 nameN
(integer) 5
127.0.0.1:6379> sadd news news1 news2 news3 news4 newsN
(integer) 5
127.0.0.1:6379> expire company_name 600
(integer) 1
127.0.0.1:6379> expire news 600
(integer) 1
127.0.0.1:6379> ttl company_name
(integer) 588
127.0.0.1:6379> ttl news
(integer) 592
127.0.0.1:6379> smembers company_name
1) "name4"
2) "name3"
3) "name2"
4) "name1"
5) "nameN"
127.0.0.1:6379> smembers news
1) "news4"
2) "news1"
3) "news3"
4) "news2"
5) "newsN"
```

Lists:

```
127.0.0.1:6379> lpush company_name name1 name2 name3 nameN
(integer) 4
127.0.0.1:6379> lrange company_name 0 -1
1) "nameN"
2) "name3"
3) "name2"
4) "name1"
127.0.0.1:6379> lpush news news1 news2 news3 newsN
(integer) 4
127.0.0.1:6379> lrange news 0 -1
1) "newsN"
2) "news3"
3) "news2"
4) "news1"
127.0.0.1:6379> expire news 15
(integer) 1
127.0.0.1:6379> expire company_name 15
(integer) 1
127.0.0.1:6379> ttl news
(integer) 6
127.0.0.1:6379> ttl company_name
(integer) 8
127.0.0.1:6379> ttl news
(integer) -2
127.0.0.1:6379> lrange news 0 -1
(empty list or set)
```
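If it helps to see the same idea in Python with the redis-py client, here is a minimal sketch; it assumes `results` is the BigQuery row iterator from the question and reuses the question's key names:

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379)

# Collect one value per row instead of overwriting a single string key.
for row in results:
    r.sadd("company_name", row.company_name)  # set: unordered, unique members
    r.rpush("news", row.news)                 # list: keeps insertion order

# Expire both keys after 10 minutes.
r.expire("company_name", 600)
r.expire("news", 600)

# Retrieve everything that was stored.
companies = r.smembers("company_name")  # all set members
news_items = r.lrange("news", 0, -1)    # all list elements
```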
I am using Google Cloud's Memorystore for Redis. My objective is to store the query results coming out of BigQuery (SQL-like) and to retrieve them from Redis for subsequent usage. The query returns about 1000 rows with two columns, which I try to set using client.set. Once the values along with the key are set, when I try to retrieve them I only get the last value of the key. I am guessing that is because, while setting values in a loop, each iteration overwrites the previous value of the key. How do I overcome this problem of storing multiple values under the same key and retrieving them? The code that I have used is below:

```python
import redis
import os
from google.cloud import bigquery

client = bigquery.Client()
redis_host = os.environ.get('REDISHOST', 'IP address')
redis_port = int(os.environ.get('REDISPORT', 6379))
r = redis.StrictRedis(host=redis_host, port=redis_port)

def hello_world():
    if r.get('company_name'):
        print(r.get("company_name"))
    else:
        query = """SELECT company_name, news
                   FROM `some-project.news`
                   LIMIT 1000
                """
        query_job = client.query(query)
        results = query_job.result()
        for row in results:
            r.set("company_name", row.company_name)
            r.set("news", row.news)
            r.expire("company_name", 15)
            r.expire("news", 15)
            print(row.company_name)

if __name__ == '__main__':
    hello_world()
```

Please let me know how I can print all the keys and values that are set.
How to set values in Redis cache, consisting of multiple rows under the same column name using Python
This will work but I think you want max-age and not max-stale. A cached response written at time a will be served until time b, a time that is derived from the response’s headers. The value you specify in max-stale is added to b to extend the lifetime of the cached response. The value you specify in max-age is added to a to constrain how long the cached response is valid. https://square.github.io/okhttp/4.x/okhttp/okhttp3/-cache-control/-builder/
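For illustration, here is how the question's two requests would look with maxAge instead of maxStale; this is a sketch reusing the question's URLs, not code from the OkHttp docs:

```java
import java.util.concurrent.TimeUnit;
import okhttp3.CacheControl;
import okhttp3.Request;

// Countries: treat the cached copy as fresh for 365 days.
Request countries = new Request.Builder()
        .cacheControl(new CacheControl.Builder()
                .maxAge(365, TimeUnit.DAYS)
                .build())
        .url("http://www.example.com/getcountries.php")
        .build();

// News: treat the cached copy as fresh for only 3 minutes.
Request news = new Request.Builder()
        .cacheControl(new CacheControl.Builder()
                .maxAge(3, TimeUnit.MINUTES)
                .build())
        .url("http://www.example.com/getnews.php")
        .build();
```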
For Android apps, is it possible to set different caching times for different URLs using OkHttpClient? For example, I have two URLs:

http://www.example.com/getcountries.php
http://www.example.com/getnews.php

For the first URL, I would like to set caching for 365 days:

```java
Request request = new Request.Builder()
        .cacheControl(new CacheControl.Builder()
                .maxStale(365, TimeUnit.DAYS)
                .build())
        .url("http://www.example.com/getcountries.php")
        .build();
```

For the second URL, I would like to set caching for 3 minutes:

```java
Request request = new Request.Builder()
        .cacheControl(new CacheControl.Builder()
                .maxStale(3, TimeUnit.MINUTES)
                .build())
        .url("http://www.example.com/getnews.php")
        .build();
```

Will it work? (With caching in place, debugging is difficult.) Thanks for your support.
(Android) OkHttpClient caching based on URLs (different caching for different urls)
IMemoryCache (as in its documentation) doesn't have any method to get an item's expiration. The same is true of its default implementation, MemoryCache.
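A common workaround is to store the expiration next to the value in a small wrapper class; the following is only a sketch of that idea (the class and method names are made up), not an IMemoryCache feature:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public sealed class CacheEntry<T>
{
    public T Value { get; set; }
    public DateTimeOffset AbsoluteExpiration { get; set; }
}

public class ExpirationAwareCache
{
    private readonly IMemoryCache _cache;

    public ExpirationAwareCache(IMemoryCache cache) => _cache = cache;

    public void Set<T>(string key, T value, DateTimeOffset expiration)
    {
        var entry = new CacheEntry<T> { Value = value, AbsoluteExpiration = expiration };
        _cache.Set(key, entry, expiration);
    }

    // Update the value while keeping the original absolute expiration.
    public void Update<T>(string key, T newValue)
    {
        if (_cache.TryGetValue(key, out CacheEntry<T> entry))
        {
            entry.Value = newValue;
            _cache.Set(key, entry, entry.AbsoluteExpiration);
        }
    }

    // Read the expiration recorded for a key, if the key is still cached.
    public DateTimeOffset? GetExpiration<T>(string key) =>
        _cache.TryGetValue(key, out CacheEntry<T> entry)
            ? entry.AbsoluteExpiration
            : (DateTimeOffset?)null;
}
```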
I add a value to IMemoryCache:

```csharp
_cache.Set("Key", "Value", DateTimeOffset.UtcNow);
```

I want to get the absolute expiration value of this key in other methods. Or I want to update the value of this key without changing its expiration. How can I do that? The IMemoryCache documentation doesn't offer any way to get it (at least I didn't see one), so I handle it by caching the expiration separately. Is there any better solution for handling it?
How can I get AbsoluteExpiration DateTimeOffset of key in IMemoryCache
I had the same challenge, and solved it using headers. In detail: we indicate our tenants by a subdomain id: <id>.domain.com. We wanted to store a different cached value for each tenant; for example, 123.domain.com/get-config and 456.domain.com/get-config need to return different cached values. Since CloudFront doesn't provide a cache key component based on subdomains, we based the cache key on headers. In your case, you can pass a header named appName and give it the values demo1, demo2, etc. CloudFront will hold different cached values based on that header. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-selecting
I have a CloudFront distribution set up to serve various subdomains under my domain, e.g. http://demo1.mydomain.com/test.html and http://demo2.mydomain.com/index.html can be two requests served by CloudFront. Now the issue is with CloudFront caching. It caches the content based on path, i.e. in the above examples "/test.html" and "/index.html". This creates a problem: if two subdomains share the same path, content cached for a given path on one subdomain will also be served from cache (same path) on the other subdomain, e.g.:

http://demo1.mydomain.com/example.html
http://demo2.mydomain.com/example.html

The second request here will be served the cached content of the first one. Can I configure CloudFront to include the subdomain when caching? This way I can avoid same-path conflicts across subdomains. Thanks
AWS CloudFront Caching based on subdomain
OPcache improves PHP performance by storing precompiled script bytecode in shared memory, thereby removing the need for PHP to load and parse scripts on each request. phpinfo() won't show any status for OPcache if the Zend OPcache extension is not loaded. To check whether Zend OPcache is loaded, you can use:

```php
print_r(get_loaded_extensions());
```

If Zend OPcache is not listed in the array, you can configure it in the php.ini file. Just add:

```ini
[opcache]
zend_extension="D:\xampp\php\ext\php_opcache.dll"
opcache.enable=1
```

Other OPcache configuration options are listed here: https://www.php.net/manual/en/opcache.configuration.php

Also note that opcache.enable=1 can only be set via php.ini; if you use ini_set() it will generate an error. Restart your XAMPP PHP service and you can then see the full configuration via phpinfo(). Finally, you can use:

```php
print_r(opcache_get_status());
```

opcache_get_status() will show you all your OPcache statistics: cached files, memory consumption, etc.
As I am new to PHP, I have some questions about its internals that I was not able to find answers to on the internet. I have read the statement: "PHP recompiles your program every time it is run into a machine-readable language, called opcodes. An opcode cache stores the compilation in memory and just re-executes it when called a second time." So some questions arise in my mind. I read somewhere that PHP caches the opcodes so that there is no need to recompile. How can I find out whether any opcode caching technique is enabled on my server? I am using XAMPP with the default configuration on my local Windows machine. Does PHP use opcode caching by default, or do we have to enable it by installing an external library?
PHP internal opcode cache
I think you can try one of the following ways to overcome the issue. I faced a similar issue and resolved it in the following ways:

- Try deleting the cached view files manually from storage/framework/views
- Upload the code for the affected module directly to AWS, bypassing the pipeline
- Restart your server

This will surely resolve your issue!
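If you would rather clear the caches with artisan than delete files by hand, the standard commands cover the same ground, assuming you can SSH into the AWS server:

```bash
php artisan view:clear    # compiled Blade templates (storage/framework/views)
php artisan cache:clear   # application cache
php artisan config:clear  # cached configuration
```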
I am facing a critical issue in my application; it is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on the local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and then a Bitbucket Pipeline pushes the code directly to the AWS server. I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or has the knowledge to resolve it, please guide!
Critical Caching issue in Laravel Application (AWS Server)
The problem was within the service worker: I forgot to add the service worker file to the static assets. Found the solution by reading the answers to this question: https://stackoverflow.com/a/44482764/7350000.
I wrote a simple PWA (current version) based on this tutorial by Vaadin. It works fine, tested in Chrome, also in offline mode. When using it on a mobile device, issues occur: after saving the PWA and starting it once, it runs fine. Then, after closing it, turning on flight mode and restarting the PWA, I get a system message saying I have no internet connection -> no problem, I can ignore that. After ignoring it, the app does not load the static assets as I expected; it shows a blank page saying the browser could not load the page since I don't have an internet connection. I thought that is what the PWA is good for? Why does it not load the static assets? I think my service worker is just fine:

```javascript
const staticAssets = [
    './',
    './styles.css',
    './app.js',
    './images',
    './fallback.json',
    './images/system/offline.png'
];

self.addEventListener('install', async event => {
    const cache = await caches.open('static-assets');
    cache.addAll(staticAssets);
});

self.addEventListener('fetch', event => {
    const req = event.request;
    const url = new URL(req.url);
    if (url.origin === location.origin) {
        event.respondWith(cacheFirst(req));
    } else {
        event.respondWith(networkFirst(req));
    }
});

async function cacheFirst(req) {
    const cachedResponse = await caches.match(req);
    return cachedResponse || fetch(req);
}

async function networkFirst(req) {
    const cache = await caches.open('dynamic-content');
    try {
        const res = await fetch(req);
        cache.put(req, res.clone());
        return res;
    } catch (err) {
        const cachedResponse = await cache.match(req);
        return cachedResponse || caches.match('./fallback.json');
    }
}
```

I'm happy to share more code if you think the problem is somewhere else!
PWA offline mode not loading from cache on mobile browsers
System.Threading.Tasks.Task implements IAsyncResult. If the data is found in the cache, you can return a completed Task with the result via Task.FromResult. Otherwise, you make the call to the service.

```csharp
public override IAsyncResult Start(object sender, EventArgs e, AsyncCallback cb, object extraData)
{
    Object cachedData = cache.Get("key");
    if (cachedData != null)
    {
        // Return cached data.
        return Task.FromResult<object>(cachedData);
    }

    // Make call to the service.
    svc = new service.GetData(m_url);
    if (m_debug_mode) // not thread safe
    {
        return ((service.GetData)svc).BeginCallDataDebug(request, cb, extraData);
    }
    return ((service.GetData)svc).BeginCallData(request, cb, extraData);
}
```

In the End method, you can check the IAsyncResult type to access the result value. (Or you set a state flag/field in the Start method recording whether you called the service or not; you could also check the svc field, which will be null when cached data is being used.)

```csharp
public override void End(IAsyncResult ar)
{
    try
    {
        Task<object> task = ar as Task<object>;
        if (task != null)
        {
            data = task.Result;
        }
        else
        {
            data = ((service.GetData)m_svc).EndCallData(ar);
            if (data != null)
                cache.Add("key", data, null, absoluteExpiration,
                          Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
        }
    }
    catch (Exception ex)
    {
        Log(ex.Message);
    }
}
```
We are trying to cache the data of a WCF service, so when the data is available in cache memory we need to return the cached data as an IAsyncResult, because the data is of object type and the Start method returns IAsyncResult. I can't change the return type because it's an abstract member in the helper class. I also can't check from the parent page whether cached data is available, because this needs to be changed globally so that everyone consuming this service can make use of it.

```csharp
public override IAsyncResult Start(object sender, EventArgs e, AsyncCallback cb, object extraData)
{
    if (cache.Get("key") != null)
    {
        // Needs to return the result in async format, which is there as an object in cache.
    }
    svc = new service.GetData(m_url);
    if (m_debug_mode) // not thread safe
    {
        return ((service.GetData)svc).BeginCallDataDebug(request, cb, extraData);
    }
    return ((service.GetData)svc).BeginCallData(request, cb, extraData);
}

public override void End(IAsyncResult ar)
{
    try
    {
        data = ((service.GetData)m_svc).EndCallData(ar);
        if (data != null)
            cache.Add("key", data, null, absoluteExpiration,
                      Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
    }
    catch (Exception ex)
    {
        Log(ex.Message);
    }
}
```
Caching WCF service and Convert data as Async
It seems that it's necessary for the server to add an Expires header to the response, in addition to (or instead of?) the Cache-Control header. The poster of that answer deserves the bounty. When an Expires header is present and RequestCacheLevel.Default is used, expiry behaves as expected in a .NET Framework application: the request is served from the local cache until the expiry time is reached. However, in a .NET Core application using the same code, all requests are sent to the server and the local cache isn't used :( This is separate from the subject of my original question, so I've asked a new question.
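For reference, this is roughly what attaching both headers looks like on an ASP.NET Web API action; a sketch with an assumed 60-second lifetime to match the original max-age, and a made-up controller:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class DataController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new { value = 42 });

        // Cache-Control: public, max-age=60
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromSeconds(60)
        };

        // Expires: the absolute time the HttpWebRequest cache needs for expiry.
        response.Content.Headers.Expires = DateTimeOffset.UtcNow.AddSeconds(60);

        return response;
    }
}
```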
I am confused about how time-based cache policies work when using HttpWebRequest. I am calling a GET method of a Web API that returns JSON content together with a Cache-Control header similar to:

Cache-Control: public, max-age=60

The intention is that the content should be considered stale after max-age seconds. I am calling this API using HttpWebRequest, and want subsequent requests to be served from the local cache for max-age seconds, then retrieved from the server when the content expires. I've tried different combinations, with the results listed below.

Do not specify a RequestCachePolicy. In this case, as expected, all requests go to the server.

Specify a default cache policy:

```csharp
var policy = new RequestCachePolicy(RequestCacheLevel.Default);
HttpWebRequest.DefaultCachePolicy = policy;
```

In this case, all requests still go to the server. I was expecting requests to be served from the cache for the next max-age seconds.

Specify a cache policy of CacheIfAvailable:

```csharp
var policy = new RequestCachePolicy(RequestCacheLevel.CacheIfAvailable);
HttpWebRequest.DefaultCachePolicy = policy;
```

In this case, all requests are served from the local cache, even if the content is stale (i.e. max-age seconds have passed).

Is it possible to achieve what I want, and if so, how?
How to get RequestCachePolicy to respect max-age
It looks like an incorrect setting of vm.vfs_cache_pressure = 1000 was causing this misbehaviour. Setting it to 70 fixed the problem and restored good cache performance. The documentation explicitly recommends against increasing the value beyond 100; unfortunately, the internet is full of examples with insane values like 1000.
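For anyone hitting the same problem, the usual sysctl workflow to inspect and change the setting looks like this (70 being the value that worked here):

```bash
# Check the current value
sysctl vm.vfs_cache_pressure

# Change it at runtime
sudo sysctl -w vm.vfs_cache_pressure=70

# Make it persistent across reboots
echo 'vm.vfs_cache_pressure = 70' | sudo tee -a /etc/sysctl.conf
```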
When linking executables (more than 200) in a large project, I get a link rate of 0.5 executables per second, even if I ran the link stage a minute before. vmstat shows more than 20 MB/s disk read rate. But if I pre-cache the build directory once using "tar cf /dev/null build-dir", I get a consistent link rate of 4.8 executables per second and the disk read rate is basically zero. Why doesn't Linux cache the object files and/or ".so" files when they are read by the GNU linker, but does so when they are read by tar? There is plenty of RAM (16 GB). Kernel version is 4.4.146, CentOS 7.5.
Why doesn't Linux cache object and/or ".so" files when using GNU Linker?
Yes, last year Bitbucket Pipelines introduced cache keys.

Announcement:

"Today we are announcing the option to add cache keys to your Pipelines cache definitions. Cache keys provide a way to uniquely identify versions of a cache based on a set of files in your repository. The typical use case would be to define a cache key based on files that define a project's dependencies. When dependencies are updated, the hash of the key files also updates and Pipelines will be able to generate a unique cache version for subsequent builds. As multiple cache versions are able to be retained, future builds using either the old or new dependency set will have a unique cache version to reuse."

See the documentation, and the sketch below for how this could map onto your assets directory.
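A minimal bitbucket-pipelines.yml sketch of how this could map onto the question's webpack scenario; the cache name, glob pattern and guard command are illustrative assumptions, so check them against the current Pipelines docs:

```yaml
definitions:
  caches:
    webpack-assets:
      key:
        files:
          - "assets/**"      # hash of the asset sources defines the cache version
      path: public/build     # directory saved to / restored from the cache

pipelines:
  default:
    - step:
        caches:
          - webpack-assets
        script:
          # If the cache was restored, webpack can be skipped; a simple
          # guard on the output directory approximates that:
          - '[ -d public/build ] || npx webpack --mode production'
```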
Does the Pipelines cache system support cache keys? In my project's build process, I have many assets to be built by webpack. The resulting assets are copied to the public/build directory. That process can take several minutes to complete. So I would like to generate a hash of the assets directory content (the one webpack builds from) and store a cache with, for example, "assets-{assets-directory-hash}" as the key and the public/build directory as the cache contents. Then in the next build, if the assets directory content hash didn't change, it would mean the assets didn't change, and I could skip the webpack build step, restore the public/build directory from the cache, and save a few minutes of the entire build as a result. If the hash did change, I would run webpack as usual and store a cache with the new key. Thanks.
Bitbucket pipelines cache keys?
Checking "Disable Cache" in Developer Tools works. On Firefox, use Ctrl+Shift+I; on Chrome, right-click and choose Inspect.
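The DevTools toggle only helps on your own machine during development, though. To keep every visitor's browser from caching a page, the response itself has to say so; a minimal Django sketch using the cache_control decorator, with a made-up view name:

```python
from django.http import JsonResponse
from django.views.decorators.cache import cache_control

@cache_control(no_cache=True, no_store=True, must_revalidate=True, max_age=0)
def get_data(request):
    # The response is sent with "Cache-Control: no-cache, no-store,
    # must-revalidate, max-age=0", so browsers won't reuse a stale copy.
    return JsonResponse({"status": "ok"})
```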
The site I am building is being cached by the browser, so when I make any changes in the back-end or front-end I need to hard-refresh the respective page to see them. To stop this I used @never_cache as described in the Django documentation, but it seems it's not preventing the browser from caching the page. Then I tried disabling the ajax cache:

```javascript
$.ajax({
    url: "/getData",
    method: "GET",
    cache: false
}).done(function (res) {
    alert("Success");
});
```

Still not effective! Can anybody help me solve this problem?
How to stop browser caching in Django?
There are several advanced options for image caching. You can get direct access to the memory cache (by default, Nuke's ImagePipeline has two cache layers):

```swift
// Configure cache
ImageCache.shared.costLimit = 1024 * 1024 * 100 // Size in MB
ImageCache.shared.countLimit = 20 // You may use this option for your needs
ImageCache.shared.ttl = 90 // Invalidate image after 90 sec

// Read and write images
let request = ImageRequest(url: url)
ImageCache.shared[request] = image
let image = ImageCache.shared[request]

// Clear cache
ImageCache.shared.removeAll()
```

Also, use the ImagePreheater class. Preheating (a.k.a. prefetching) means loading images ahead of time in anticipation of their use. Nuke provides an ImagePreheater class that does just that:

```swift
let preheater = ImagePreheater(pipeline: ImagePipeline.shared)

let requests = urls.map {
    var request = ImageRequest(url: $0)
    request.priority = .low
    return request
}

// User enters the screen:
preheater.startPreheating(for: requests)

// User leaves the screen:
preheater.stopPreheating(for: requests)
```

You can use Nuke in combination with the Preheat library, which automates preheating of content in UICollectionView and UITableView. On iOS 10.0 and above you might want to use the new prefetching APIs provided by iOS instead.
I need to cache images on disk but to limit them to, say, 20 images. I am trying the Nuke library. When I run Nuke.loadImage(with: url, options: options, into: imageView), the image is cached as long as I am inside my view controller. When I leave the view, the next time the images are fetched again. So how do I make Nuke (or another lib) save those images for a specific image count or time? Before trying Nuke I was just saving images to the Documents folder of the app every time I fetched an image. I am sure there is a better way.

Update: I was able to do it with Kingfisher.

```swift
func getFromCache(id: String) {
    ImageCache.default.retrieveImage(forKey: id, options: nil) { image, cacheType in
        if let image = image {
            self.galleryImageView.image = image
        } else {
            print("Not exist in cache.")
            self.loadImage()
        }
    }
}

private func loadImage() {
    ImageDownloader.default.downloadImage(with: url, options: [], progressBlock: nil) { (image, error, url, data) in
        if let image = image {
            self.galleryImageView.image = image
            ImageCache.default.store(image, forKey: id, toDisk: true)
        }
    }
}
```

If I understand correctly, retrieveImage first fetches images from the disk cache, later from the memory cache, and it frees them only when a memory warning is received. I hope it does. God help us.
Image Caching – How to control when images are disposed?
To reset a service, you can expose a function that invokes the service constructor again (PLUNKER).

Service:

```javascript
app.service('MyService', MyService);

function MyService() {
    console.log('init service #' + Date.now());
    this.reset = () => {
        // Re-run the constructor logic on this same instance.
        MyService.call(this);
    };
}
```

Then you can use it everywhere, like:

```javascript
var myService = $injector.get('MyService');
myService.reset();
```

With ES6 classes you can do something like this:

```javascript
class MyService {
    constructor() {
        console.log('init service #' + Date.now());
        this.reset = () => {
            // Copy a freshly constructed instance's state onto this one.
            Object.assign(this, new MyService());
        };
    }
}
```
We all know that in AngularJS services are singletons. So when you do $injector.get('foo'), you get an instance of the service: its constructor is invoked and the object is added into the injector's cache, precisely because it is a singleton. Anyway, are there any methods I can use to remove the service from that cache, so it gets re-created again? Using a factory solves this, but anyway.
How to destroy injected service in angularjs cache?
This error is due to a wrong configuration of Celery. Different versions of Celery use slightly different sets of configuration keys, so changing them accordingly solved this error for me. Check this link to find the Celery settings for version 4.0: http://docs.celeryproject.org/en/4.0/userguide/configuration.html
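For illustration, Celery 4.0 renamed the old uppercase settings to lowercase ones; a sketch of the mapping for the result backend and broker (the broker database number here is an assumption, the result backend URL is the one from the .yml below):

```python
# Old style (pre-4.0): uppercase, CELERY_ prefix
CELERY_RESULT_BACKEND = "redis://cache:6379/2"
BROKER_URL = "redis://cache:6379/3"

# New style (4.0+): lowercase keys
result_backend = "redis://cache:6379/2"
broker_url = "redis://cache:6379/3"
```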
It is configured as given below in the .yml file for Docker:

```
ACCOUNTS_SESSION_REDIS_URL=redis://cache:6379/1
CACHE_REDIS_URL=redis://cache:6379/0
CACHE_TYPE=redis
CELERY_RESULT_BACKEND=redis://cache:6379/2
```

The configuration in the Flask app is given below:

```python
CACHE_KEY_PREFIX = "cache::"
#: Host
CACHE_REDIS_HOST = "localhost"
#: Port
CACHE_REDIS_PORT = 6379
#: DB
CACHE_REDIS_DB = 0
#: URL of Redis db.
CACHE_REDIS_URL = "redis://{0}:{1}/{2}".format(
    CACHE_REDIS_HOST, CACHE_REDIS_PORT, CACHE_REDIS_DB)
#: Default cache type.
CACHE_TYPE = "redis"
```

I am unable to find out what is causing this error.
Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address
As it stands right now, the only solution is to modify Magento's core code, which is not ideal for patching. This has been something that has annoyed me for quite some time. I also use the maintenance flag in conjunction with an IP whitelist to show a "Coming Soon" or "Under Construction" page while doing work. In developer mode, as you pointed out, it fails to function.
I have a custom 503 "site down for maintenance" page configured, and I see this served publicly when I switch to maintenance mode as described here: http://devdocs.magento.com/guides/v2.2/install-gde/install/cli/install-cli-subcommands-maint.html

I have run the command magento maintenance:enable --ip=<ip address>, which creates the var/.maintenance.flag file containing my office IP address in the whitelist. I thus have M2 running in maintenance mode while I still have full access to the site. But wait! I now wish to do some work on the site in developer mode, i.e. make code changes in XML and PHP files. Switching to developer mode disturbs the way maintenance mode works: it prevents the custom 503 status page from showing and instead renders a stack trace to the public! This issue is raised here, but there are no real answers to this conundrum. What on earth is the point of having a maintenance mode that does not allow a developer to switch into "developer mode", where the caches are bypassed and we can actually do some work?! This whole setup makes no sense to me. If I leave it in production mode, I will need to manually nuke caches/static files after every change is made, which is massively impractical! What are people doing to work on live Magento 2 sites? What is the workflow for temporarily switching a live site into maintenance mode and running it in developer mode concurrently? The official Magento 2 docs seem to make no attempt to address this. Can anyone point me in the direction of some resources that explain how to put a live site into maintenance mode, then set it to developer mode while still showing the custom 503 page to the public? Note: I have custom modules and theme modules that only permit changes to be made via the admin panel when the site is switched to developer mode, so I MUST be able to go into developer mode. Many thanks.
Magento2 - run a site in maintenance mode showing public 503 page AND switch to developer mode
Redis provides support for pipelining, which involves sending multiple commands to the server without waiting for the replies, and then reading the replies in a single step. Pipelining can improve performance when you need to send several commands in a row, such as adding many elements to the same list. Spring Data Redis provides several RedisTemplate methods for executing commands in a pipeline. One example:

```java
// pop a specified number of items from a queue
List<Object> results = stringRedisTemplate.executePipelined(
    new RedisCallback<Object>() {
        public Object doInRedis(RedisConnection connection) throws DataAccessException {
            StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
            for (int i = 0; i < batchSize; i++) {
                stringRedisConn.rPop("myqueue");
            }
            return null;
        }
    });
```

You can follow this link, or you can use Redis's mass insertion facility instead.
We are planning to use a Redis cache in our app. Our requirement is that we first need to load about 6 months of data into the Redis cache before the actual app starts. I am thinking that if we execute one command at a time in a loop to insert key-value pairs into Redis, it will take too much time. Is there a way we can retrieve the data from the database and insert all of it into Redis in one shot? Can anyone please suggest?
how to load data into redis cache from database?
That's a good question. And the answer is: fetch inside the SW works just like fetch in the browser context. This means that the browser's HTTP cache is checked, and only after that is the network consulted. Fetch from the SW doesn't bypass the HTTP cache. This comes with a potential race condition if you're not careful with how you name your static assets. Example:

1. asset.css is served from the server with max-age: 1y
2. after the first request for it, the browser's HTTP cache has it
3. now the file's contents are updated; the file is different but the name is still the same (asset.css)
4. any fetch for asset.css is now served from the HTTP cache, and any logic the SW implements to check the file against the server actually ends up getting the initial file from step 2 out of the HTTP cache
5. at this point the file on the server could be incompatible with some other files that are cached, and something breaks

Mitigations:

1. Always change the name of the static asset when the content changes
2. Include a query string (do not ask for asset.css, but for asset.css?timestamporsomething)

(See also the sketch below for forcing a network fetch that bypasses the HTTP cache.) Required very good reading: https://jakearchibald.com/2016/caching-best-practices/
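If renaming assets is not an option, the Fetch API's cache mode can make the SW skip the HTTP cache; a sketch, not part of the original answer, so verify support for the cache option in your target browsers:

```javascript
self.addEventListener('fetch', event => {
  event.respondWith(
    // 'reload' bypasses the HTTP cache on the way out, but the fresh
    // response is still stored in it for future requests.
    fetch(event.request.url, { cache: 'reload' })
      .catch(() => caches.match(event.request)) // offline: fall back to the SW cache
  );
});
```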
I'm wondering if a fetch call in a ServiceWorker uses the normal browser cache or bypasses it and always sends the request to the server. For example, does the fetch call on line 5 first look in the browser cache or not?

```javascript
self.addEventListener('fetch', function(event) {
    event.respondWith(
        caches.open('mysite-dynamic').then(function(cache) {
            return cache.match(event.request).then(function (response) {
                return response || fetch(event.request).then(function(response) {
                    cache.put(event.request, response.clone());
                    return response;
                });
            });
        })
    );
});
```
Does a fetch call from a ServiceWorker use the regular browser cache
I have an alternative for that. I don't know your project structure, but you can generate the ahead-of-time QML cache yourself: for the QML files you want to create a cache for, just place them as components in the main.qml file. For example:

```qml
ApplicationWindow {
    .........
    Item {
        .......
        Component { Page1 {} }
        Component { Page2 {} }
    }
}
```

By doing that it pre-compiles the given QML files and generates an ahead-of-time cache for them, so you get a warm start for those components. It has no negative impact on the app, only a positive one, of course. If you don't know how to access QML files in sub-directories from main.qml, use:

```qml
import "dir_name"
```
I have found the documentation for using the new feature in Qt 5.9 for generating QML cache files (.qmlc/.jsc) ahead of time with a QMake project, but what's the procedure for doing so with a CMake-based project?
How do I use ahead-of-time QML cache generation with a CMake project?
While waiting for ngsw to be ready, you could use the workbox-build npm package in your Angular project.

For precaching assets:

```javascript
const workbox: WorkboxBuild = require('workbox-build');

workbox.injectManifest({
    globDirectory: './dist/',
    globPatterns: ['**\/*.{html,js,css,png,jpg,json}'],
    globIgnores: ['build/*', 'sw-default.js', 'workbox-sw.js', 'assets/icons/**/*'],
    swSrc: './src/sw-template.js',
    swDest: './dist/sw-default.js',
});
```

For dynamic caching:

```javascript
const workboxSW = new self.WorkboxSW();

// work-images-cache
workboxSW.router.registerRoute('https://storage.googleapis.com/xxx.appspot.com/(.*)',
    workboxSW.strategies.cacheFirst({
        cacheName: 'work-images-cache',
        cacheExpiration: { maxEntries: 60 },
        cacheableResponse: { statuses: [0, 200] }
    })
);
```

You could cache web fonts etc. by calling registerRoute again. A realistic usage example is here.
I am working on a progressive web app in Angular 4 which seems to be working fine in online mode. It works in offline mode as well, unless I involve dynamic caching. There is this ngsw-manifest.json in which I have done some configuration:

```json
{
  "routing": {
    "index": "/index.html",
    "routes": {
      "/": { "match": "exact" },
      "/coffee": { "match": "prefix" }
    }
  },
  "static.ignore": [
    "^\/icons\/.*$"
  ],
  "external": {
    "urls": [
      { "url": "https://fonts.googleapis.com/icon?family=Material+Icons" },
      { "url": "https://fonts.gstatic.com/s/materialicons/v29/2fcrYFNaTjcS6g4U3t-Y5ZjZjT5FdEJ140U2DJYC3mY.woff2" }
    ]
  },
  "dynamic": {
    "group": [
      {
        "name": "api",
        "urls": {
          "http://localhost:3000/coffees": { "match": "prefix" }
        },
        "cache": {
          "optimizeFor": "freshness",
          "networkTimeoutMs": 1000,
          "maxEntries": 30,
          "strategy": "lru",
          "maxAgeMs": 360000000
        }
      }
    ]
  }
}
```

The dynamic key in the above JSON caches the content on the page for offline use, and I can see the content being cached in the browser's storage. However, when I try to reload the page in offline mode after caching, the content is not shown. Is there some configuration that I missed?
Dynamic cache angular 4 not working PWA
In order to load custom tiles in MKMapView you need to subclass MKTileOverlay and override the method url(forTilePath path: MKTileOverlayPath) -> URL. MKTileOverlay contains x, y and z properties for the tile, so the implementation may look like this:

```swift
override func url(forTilePath path: MKTileOverlayPath) -> URL {
    let tilePath = Bundle.main.url(
        forResource: "\(path.y)",
        withExtension: "png",
        subdirectory: "tiles/\(path.z)/\(path.x)",
        localization: nil)!
    return tilePath
}
```

In your mapView setup function add the following:

```swift
let overlay = CustomTileOverlay()
overlay.canReplaceMapContent = true
mapView.add(overlay, level: .aboveLabels)
```

Also don't forget to return the renderer in:

```swift
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    return MKTileOverlayRenderer(tileOverlay: overlay)
}
```

P.S. There is a great tutorial on raywenderlich.com on that subject.
I have found a way to cache tiles with MapKit, but I haven't found any solution for loading all tiles inside an area, from the top-level tiles down to the bottom-level tiles. I want to cache all tiles for a rectangular area in my map view. Is there any way to do this in MapKit?
Load all tiles for area in mapView
EDIT: Some time ago, a guy on GitHub helped me figure this out. It's because there's a Workbox config property called maximumFileSizeToCacheInBytes. By default it is set to 2 MB, which means it will only cache files that are smaller than (or equal to) 2 MB, and some of my built files are larger than that. I only needed to increase maximumFileSizeToCacheInBytes to solve the issue and include those larger files in the generated service worker. My updated Workbox config looks like this:

```javascript
plugins: [
    new workboxPlugin({
        globDirectory: DIST_DIR,
        globPatterns: ['js/*.{html,js,css}'],
        maximumFileSizeToCacheInBytes: 5000000, // <-- Added (5 MB)
        swDest: path.join(DIST_DIR, 'sw.js'),
    }),
]
```
I use Vue.js with workbox-webpack-plugin in order to make my site work offline. I want to cache a folder which contains all my files (3 files), but when I build my project (using Laravel Mix), the main.js (which contains everything of Vue) cannot be cached in service-worker.js. I tried some ways to fix that, but they didn't work. Has anybody faced this issue, and do you have any solution? Many thanks!
Workbox: cannot cache all files when build project
You seem to be experiencing some sort of critical section problem. But here's the thing: Redis operations are atomic, however Laravel does its own checks before calling Redis. The major issue here is that all concurrent requests will cause a request to be made, and then all of them will write the results to the cache (which is definitely not good). I would suggest implementing a simple mutual exclusion lock in your code. Replace your current method body with the following:

```php
public function getData($cacheKey)
{
    $mutexKey = "getDataMutex";
    if (!Redis::setnx($mutexKey, true)) {
        // Already running; you can either busy-wait until the cache key is
        // ready, or fail this request and assume another one will succeed.
        // Definitely don't trust what the cache says at this point.
    }

    $value = Cache::rememberForever($cacheKey, function () {
        // This part is just the convenience method; it doesn't change anything.
        $dataFromService = $this->makeRequest($cacheKey);
        $dataMapped = array_map([$this->transformer, 'transformData'], $dataFromService);
        return $dataMapped;
    });

    Redis::del($mutexKey);
    return $value;
}
```

setnx is a native Redis command that sets a value only if it doesn't exist already. This is done atomically, so it can be used to implement a simple locking mechanism, but (as mentioned in the manual) it will not work if you're using a Redis cluster. For that case the Redis manual describes a method to implement distributed locks.
I have an API written in Laravel with the following code in it:

```php
public function getData($cacheKey)
{
    if (Cache::has($cacheKey)) {
        return Cache::get($cacheKey);
    }

    // if cache is empty for the key, get data from external service
    $dataFromService = $this->makeRequest($cacheKey);
    $dataMapped = array_map([$this->transformer, 'transformData'], $dataFromService);
    Cache::put($cacheKey, $dataMapped);
    return $dataMapped;
}
```

In getData(), if the cache contains the key, data is returned from the cache. If the cache does not have the key, data is fetched from the external API, processed, placed into the cache, and then returned. The problem is: when there are many concurrent requests to the method, the data is corrupted. I guess the data is written to the cache incorrectly because of race conditions.
Laravel cache returns corrupt data (redis driver)
As Jeff Posnick said, Chrome DevTools displays 50 entries at a time (0-49). At the bottom, it should state the total number of entries, like: Total entries: xxx
I am currently working on a project that involves the creation of an offline application with a relatively big number of files stored in the cache (432, to be exact). As the set of files needed is dynamic for each user, I have a method that creates the array and passes it into the service worker. This all works fine, and if I simply print the array it contains the full list of files. The problem arises when I check the cache storage after everything is loaded: for some reason a total of 49 files is stored, with no prompt as to what happened to the rest of the files. I am using Firefox, and am aware that the cache has unlimited storage with prompts after 50 MB. The total memory used after storing these 49 files is just under 19 MB, so I do not believe it to be a memory issue. I have searched through service worker questions and haven't found anyone experiencing the same issue, so I hope someone out here can help! Cheers :)
Service Worker Cache not storing all files in Array
I do not think prefetch is quite as simple as just loading the image from cache because it happens to be in the cache, if that makes sense. Check out this thread https://github.com/facebook/react-native/issues/2314 and this question: React Native Image prefetch. If you want to save images and have them always available offline, look at https://www.npmjs.com/package/react-native-preload-images.
I am trying to understand how to cache an image URL so that it does not need to be re-downloaded. I have taken a look at https://docs.expo.io/versions/v19.0.0/guides/preloading-and-caching-assets.html and have been using Image.prefetch like so:

```javascript
const prefetchedImages = images.map(url => {
    console.log('url', url); // this is correctly logging the url
    return Image.prefetch(url);
});

Promise.all(prefetchedImages)
    .then(() => {
        this.setState({ loaded: true });
    });
```

This ultimately does set the state to true. I then render my images in a different component, but I make sure the component that does the prefetching does not unmount. I load the URL like so:

```javascript
<Image source={{uri: myImageUrl}} style={{width: 100, height: 100}} />
```

When I load images into my grid view, only the local images appear right away, and the ones with URLs are white for a moment before rendering. When using cache: 'force-cache' on iOS, the images are in fact loaded from cache and there is no lag. I thought I did not need to do that if I used prefetch. Am I missing something here? I thought I could reference my Image source as usual and the system would know how to grab the cached image for that URL.
Image.prefetch not automatically loading from cache [React-Native / Expo]
At this step you haven't opened a connection to Redis yet; you have only created an instance of the RedisCache class. The connection will be opened later, when you use the public methods of the class, such as the RedisCache.GetAsync method in this example:

```csharp
// IDistributedCache _cache;
// ...
// This is the place where you may get the exception
var value = await _cache.GetAsync(key);
```

Here the connection is created internally (if needed) using the private Connect methods.
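That means connection failures have to be caught around the cache calls themselves. A sketch: the RedisConnectionException type comes from the underlying StackExchange.Redis client, and treating a failure as a cache miss is an assumption about what your app should do:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using StackExchange.Redis;

public class SafeCacheReader
{
    private readonly IDistributedCache _cache;

    public SafeCacheReader(IDistributedCache cache) => _cache = cache;

    public async Task<byte[]> TryGetAsync(string key)
    {
        try
        {
            // The connection is established lazily here, so a down or
            // unreachable server surfaces as an exception on this call.
            return await _cache.GetAsync(key);
        }
        catch (RedisConnectionException)
        {
            // Redis is unavailable: treat it as a cache miss.
            return null;
        }
    }
}
```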
I need to use Microsoft.Extensions.Caching.Redis in my ASP.NET Core project. I put this code into ConfigureServices (Startup.cs):

```csharp
IDistributedCache cache = new RedisCache(new RedisCacheOptions
{
    Configuration = Configuration.GetConnectionString("Redis"),
    InstanceName = "Master"
});
services.AddSingleton<IDistributedCache>(cache);
```

What I need is to catch a connection exception (in case the Redis server is down or not reachable), but I haven't found a method to check the connection, and if I wrap the code in a try/catch nothing happens. Is there a way to accomplish my task? Thanks
Catch Microsoft.Extensions.Caching.Redis connection fail?
Is it possible to load multiple web pages (including HTML, images, JS, CSS) into the WebView cache with a real progress bar? Not possible. Visit the Android documentation for reference.
Is it possible to load multiple web pages (including HTML, images, JS, CSS) into the WebView cache with a real progress bar? I know it is possible to load one page this way:

```java
String appCachePath = getApplicationContext().getCacheDir().getAbsolutePath();
URL = "https://stackoverflow.com";

webView.getSettings().setJavaScriptEnabled(true);
webView.getSettings().setLoadsImagesAutomatically(true);
webView.getSettings().setAllowFileAccess(true);
webView.getSettings().setAppCacheEnabled(true);
webView.getSettings().setAppCacheMaxSize(1024 * 1024 * 8);
webView.getSettings().setCacheMode(WebSettings.LOAD_DEFAULT);
webView.getSettings().setAppCachePath(appCachePath);

if (isNetworkAvailable()) {
    appCachePath = getApplicationContext().getCacheDir().getAbsolutePath();
    webView.getSettings().setCacheMode(WebSettings.LOAD_DEFAULT);
} else {
    Log.e(TAG, "Load page from cache");
    webView.getSettings().setCacheMode(WebSettings.LOAD_CACHE_ONLY);
}
```

and it makes offline loading of that page possible. But is there any way to create a custom WebView where I could load multiple URLs from an array or list, like:

```java
List<String> urls = new ArrayList<>();
urls.add("https://stackoverflow.com");
urls.add("https://google.com");
urls.add("https://facebook.com");
webView.loadURL(urls);
```

I also need a total progress bar for the loading of those web pages. I know I can set a WebChromeClient for the WebView like:

```java
webView.setWebChromeClient(new WebChromeClient() {
    public void onProgressChanged(WebView view, int progress) {
        Log.d(TAG, "onProgressChanged" + progress);
    }
});
```

but it will show the loading progress of only one web page, when I need the total progress of several pages. I do not want to write my own web page grabber and store pages in internal storage, because it would take a lot of time and probably cause a lot of different bugs and mistakes. Please, can you give me some ideas?
WebView multiple pre-cache with progressBar Android
You should remove wifi.svg from the NETWORK section of your manifest for the fallback to work:

```
CACHE MANIFEST
# Version 0.1.3

index.html

CACHE:
images/nowifi.svg

FALLBACK:
images/wifi.svg images/nowifi.svg
```

This might feel a bit counter-intuitive at first, but explicit NETWORK entries take precedence over the fallback entries, which is why your fallback is never applied and the image is missing. The browser is also smart enough to recognize that the left side of the FALLBACK entry is to be re-checked with the server, and will properly replace it with the fallback image (instead of just using a cached copy) when it is offline. It will also normally cache the right-hand side of the FALLBACK entry (i.e. nowifi.svg) automatically, so you may omit it from the CACHE section as well (though it won't affect anything). Also note that in my experience the "Work Offline" functions of Google Chrome Developer Tools and Firefox sometimes produce all kinds of weird results when it comes to the cache and offline apps, so you'd better just switch your web server or connection on and off instead when testing this.
By providing FALLBACK, I expect wifi.svg to be replaced with nowifi.svg when it is loaded from cache. It is not working as expected. Here is my cache manifest file:

```
CACHE MANIFEST
# Version 0.1.3

index.html

CACHE:
images/nowifi.svg

NETWORK:
images/wifi.svg

FALLBACK:
images/wifi.svg images/nowifi.svg
```

When I'm offline, I only see a missing image in place of the cached nowifi.svg. I thought the fact that I never request nowifi.svg could be the problem, so I just added a hidden <img src="images/nowifi.svg" />; still no luck. I could not figure out what the issue is. For the complete project: https://github.com/palaniraja/kmusic/blob/master/src
Appcache - fallback not working as expected
I've run into the same question when profiling my forms. One of the problems I faced is that adding second-level caching is very easy when using the QueryBuilder, but the EntityRepository methods don't use that cache out of the box. The solution was actually pretty simple: just add some cache settings to your query_builder. Here is an example from the Symfony documentation:

```php
$builder->add('users', EntityType::class, array(
    'class' => User::class,
    'query_builder' => function (EntityRepository $er) {
        return $er->createQueryBuilder('u')
            // add something like this
            ->setCacheable(true)
            ->setCacheMode(Cache::MODE_NORMAL)
            ->setCacheRegion('default')
            ->orderBy('u.username', 'ASC');
    },
    'choice_label' => 'username',
));
```

Don't forget to add the second-level cache to your entity:

```php
/**
 * @ORM\Entity
 * @ORM\Cache(region="default", usage="NONSTRICT_READ_WRITE")
 */
class User
{
}
```
In Symfony v3.2, I'm using a form with several EntityType fields which have hundreds of options, and each option is a relatively big object. Since they don't change often, I'd like to use some cache in Symfony to load them once and just keep feeding the EntityType with it. I've already cut down the size of the data feeding it by pulling just the fields that I need, and then saving that into a cache. When I pull the data from the cache, I cannot feed it directly to EntityType with a choice_list, because it gets detached from the ObjectManager and I get an error ("Entities passed to the choice field must be managed"). To re-attach it, I could use ObjectManager->merge(), but that means a call to the DB for each item being re-merged and re-attached to the manager, which defeats the purpose of caching. What is the best way to proceed in this scenario? Just drop EntityType completely from the form (for speed-sensitive pages) and go with ChoiceType (which would also mean changing the logic in many parts of the code)? Anything nicer than that? So far I haven't found anything near a solution on SO or elsewhere.
Symfony Form EntityType caching
If your images are coming from external links (like FB avatars), they will not be cached that way: "The appcache package is only designed to cache static resources. As an "application" cache, it caches the resources needed by the application, including the HTML, CSS, Javascript and files published in the public/ directory." For this situation you can use Cloudinary. I use it in a mobile app and I think it does miracles.
We're building a Meteor app. One page of this app is a dashboard which shows all of your clients, and all of these clients have images. The page loads perfectly, but on refresh of the app the images aren't loaded from the browser cache; they are loaded from the (external) image server again. We want the images to be loaded from the browser cache. The headers of an image are:

```
accept-ranges: bytes
cache-control: public
content-length: 8613
content-type: image/jpeg
date: Fri, 17 Mar 2017 15:48:15 GMT
etag: W/"37533ce4359fd21:0"
expires: Sat, 18 Mar 2017 15:48:14 GMT
last-modified: Fri, 17 Mar 2017 15:48:15 GMT
server: Microsoft-IIS/10.0
status: 200
x-powered-by: ASP.NET
```

On page refresh the images are still loaded from the server, while JS files are loaded from the browser cache (see the request overview screenshot).
Meteor images ignoring cache headers
This is unrelated to a server cache. The routes.properties file is loaded once, during servlet context initialization, and used from then on. Only a destroy of the running context (i.e. a restart of the web server) will cause FrontServlet.init() to be called again.
Jetty caches static resources like property files by default, for performance reasons. For instance, some code like this:

```java
public class FrontServlet extends HttpServlet {

    private final Properties routes = new Properties();

    @Override
    public void init() throws ServletException {
        try {
            this.routes.load(this.getClass().getClassLoader()
                .getResourceAsStream("routes.properties"));
        } catch (IOException | NullPointerException e) {
            throw new ServletException(e);
        }
    }
}
```

would continue working even after I delete the routes.properties file, because it would still be available from the cache rather than from disk. The Eclipse Jetty plugin documentation also mentions this: look for "Disable Server Cache". Now, I'd like to disable this feature in development environments to avoid false positives. The Jetty documentation mentions an init parameter called maxCacheSize that, if set to 0, disables the cache. However, I tried it both as a context parameter:

```xml
<context-param>
    <param-name>org.eclipse.jetty.servlet.maxCacheSize</param-name>
    <param-value>0</param-value>
</context-param>
```

and as a servlet init parameter:

```xml
<servlet>
    ...
    <init-param>
        <param-name>org.eclipse.jetty.servlet.maxCacheSize</param-name>
        <param-value>0</param-value>
    </init-param>
</servlet>
```

to no avail. Does anyone know how to do this?

EDIT: The routes.properties file is still found even after I restart the web server, and even the Vagrant virtual machine it's running on. I should also mention that I'm using the Maven Jetty plugin, thus launching the server with mvn jetty:run.
How to disable server cache in Jetty
@CommonsWare: That's not as practical as I wanted, but we finally decided not to encrypt it. The data is not sensitive enough to be worth the time on this workaround. Thank you for adding it to the wishlist; I hope they'll include it in the next release.
Because the Cache class is final, I cannot write my own implementation, and I need to encrypt the cache because it holds sensitive data. I tried with an interceptor, but there is nothing like a CacheResponseInterceptor to encrypt and decrypt it. How can I do this using OkHttp?
Encrypt cache OkHttp android
Ignite does introduce some overhead on top of your data, and half a gigabyte doesn't sound too bad to me. I would recommend you refer to this guide for more details: https://apacheignite.readme.io/docs/capacity-planning
I am using Ignite to build a framework for data calculation. One big problem is that the memory usage is a little higher than expected: data using 1 GB of memory outside Ignite uses more than 1.5 GB in the Ignite cache. I have already turned off backups and copyOnRead. I don't use the query feature, so there is no extra index space. I also counted the extra space used for each cache and cache entry, and the total memory usage still doesn't add up. The data value for each cache entry is a big map containing lists of primitive arrays; each entry is about 120 MB. What can be the problem: the data structure or the configuration?
Ignite uses more memory than expected
You can pass a Cache-Control header to cache the GET request.
I make the same request at different points in my app:

Document doc = Jsoup.connect("https://www.sampleURL.com")
        .get();

Is there a way to cache the GET request?
Is there a way to cache Jsoup get request?
You can combine $cache with the browser cache simply by comparing the ETag from the header in your code. You can't catch the 304 status, as the browser always simulates a 200 status code. There is a library that handles this kind of problem: https://www.npmjs.com/package/angular-http-etag. But you can't avoid the cost of parsing the JSON, because localStorage also serializes JSON into a string, so you will have to parse it either way. My suggestion is to split the JSON into smaller chunks and request them as needed.
– Kliment, Nov 22, 2016
Comment: Sounds good! And I can avoid serialization, as I have to check the same data rather often and plan to use a non-persistent cache (just a plain JS object). Actually, localStorage is too limited (5 MB or so). Splitting is currently not an option. – maaartinus, Nov 22, 2016
Comment: You can try using a service worker: mobiforge.com/design-development/…. The limit is 50 MB. Also, if you can't split the file, try to compress it; JSON files are very compressible. You can halve the size of the file just by compressing it with tar.gz: npmjs.com/package/tar.gz. Also try to find repeated data in the file and compress it manually by keeping only one copy of each repeated string and replacing the other occurrences with references to it. – Kliment, Nov 22, 2016
I know that when the server returns 304 NOT MODIFIED, the browser handles it transparently, and there's no way for client code to make direct use of it. My problem is that the list is really huge (>4 MB uncompressed), and converting it to JSON takes quite a while (70 ms on my desktop, and much longer on Android, where it matters). I don't want to use the AngularJS cache here, as the HTTP request must be made. I don't want to work with partial lists. I guess using the ETag header and hacking into defaultHttpResponseTransform would help, but I wonder if there's a standard way of avoiding this overhead.
Efficiently working with a 304 response status code in angular
Yes, the Templates object will contain some kind of internal/compiled in-memory representation of the entire stylesheet (that is, all modules of the stylesheet). Though exactly what happens depends of course on the implementation (JAXP is an interface and JAXP implementations can implement it in different ways.)
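As a side note, here is a hedged sketch of the compile-once pattern (an editorial illustration, not part of the original answer): a File-backed source carries a systemId, so relative xsl:include/xsl:import hrefs resolve against the stylesheet's location, and the resulting Templates object is documented to be thread-safe.

import java.io.File;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public final class XsltCache {
    private final Templates templates;

    public XsltCache(String xslPath) throws TransformerConfigurationException {
        // Compilation happens once here; includes/imports are pulled in now,
        // resolved relative to the file's location (its systemId).
        TransformerFactory factory = TransformerFactory.newInstance();
        this.templates = factory.newTemplates(new StreamSource(new File(xslPath)));
    }

    public Transformer newTransformer() throws TransformerConfigurationException {
        // Cheap compared to recompiling; call once per transformation/thread.
        return templates.newTransformer();
    }
}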
I am using the following code to cache the XSL (which is the same across all requests) so that the file is not read from disk every single time. My question: if the XSL refers to 100 other XSL files (<xsl:include href="file1"/>), will they be loaded into the cache as well, or will they still be read from disk? If not, how do we make all dependent XSLs get read into memory and cached?

private static Templates cachedXslt = null;

// Transformer
if (cachedXslt == null) {
    Source xsltSrc = new StreamSource(xslPath);
    TransformerFactory transformerFactory = TransformerFactory.newInstance();
    cachedXslt = transformerFactory.newTemplates(xsltSrc);
}
Transformer transformer = cachedXslt.newTransformer();
XSL caching in java - dependent xsl files
HttpContext.Current.Cache caches objects (reference types), so you need to wrap the int (a value type) in a reference type, as you suggested. But why not just have a class with a static method and a static member for the count, if the cached item will never expire? Also, you should use an interlocked increment to ensure the count is correct, and have some way of knowing when a user is no longer active so that the count can be decremented. As pointed out in a comment, this will only give you the count for a single process. If you have multiple web processes on the same machine, or multiple servers, the count will be wrong - maybe that's what the expensive operation is that returns 5 :)
– Alex Peck, Jun 17, 2020
Problem: Given a function that returns the total number of active users on the website:

private static readonly object Lock = new object();

public static int GetTotalActiveUsers()
{
    var cache = HttpContext.Current.Cache;
    if (cache["ActiveUsers"] == null)
    {
        lock (Lock)
        {
            if (cache["ActiveUsers"] == null)
            {
                var activeUsers = 5; // This would actually be an expensive operation
                cache.Add("ActiveUsers", activeUsers, null, Cache.NoAbsoluteExpiration,
                    Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
            }
        }
    }
    return (int) cache["ActiveUsers"];
}

The problem with storing a value type in the cache this way is that it's not updatable. For example:

public static void OnNewActiveUser()
{
    var total = GetTotalActiveUsers();
    total++;
}

doesn't update the cached value. (This is expected behaviour.) I'm looking for a thread-safe way to update the active user count.

Solution 1: Use a lock

public static void OnNewActiveUser()
{
    lock (UpdateActiveUsersLock)
    {
        var cache = HttpContext.Current.Cache;
        var newTotal = GetTotalActiveUsers() + 1;
        cache.Insert("ActiveUsers", newTotal, null, Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
    }
}

Solution 2: Create a thin class around the int to turn it into a reference type:

public class CachedInt
{
    public int Int { get; set; }

    public CachedInt(int value)
    {
        Int = value;
    }
}

Then:

public static void OnNewActiveUser()
{
    var activeUsers = GetTotalActiveUsers();
    activeUsers.Int++;
}

Question: I'd prefer to avoid Solution 1 if possible (it doesn't fit neatly into my design). Is wrapping value types in a thin class a code smell, or is it a legitimate way to solve the problem?
C# storing value types in cache
In your case the tricky line is "the cache is physically addressed", meaning that before hitting the cache we must perform the address translation (since programs use virtual addresses). I built the following probability tree to compute the average. We reduce it from the leaves up to calculate the overall average. The rules are simple: we compute branch costs and multiply them by their probabilities, pretty much like you did in your calculation. The value I get is 2.7225.

[Figure: the full probability tree]
[Figure: first leaf reduction - the page-fault branch]
[Figure: the tree after reduction, showing the cost of the cache-hit scenario]

Note: we pay 1 cycle for the cache access anyway before the last reduction.
Note: we pay 1 cycle for the TLB anyway.

1 + 0.95*1.5 + 0.05*5.95 = 1 + 1.425 + 0.2975 = 2.7225
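Spelled out in LaTeX (an editorial restatement of the final reduction; the 1.5 decomposes as shown, while the 5.95 for the TLB-miss leg is read off the answer's tree, bundling the page-table walk, the cache access, and the page-fault branch):

\mathrm{AMAT}
  = \underbrace{1}_{\text{TLB access}}
  + 0.95 \cdot \underbrace{(1 + 0.10 \cdot 5)}_{\text{TLB hit: cache access, plus memory on a cache miss}}
  + 0.05 \cdot \underbrace{5.95}_{\text{TLB-miss leg from the tree}}
  = 2.7225 \ \text{cycles}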
Consider the following information:

Assume the cache is physically addressed.
TLB hit rate is 95%, with an access time of 1 cycle.
Cache hit rate is 90%, with an access time of again 1 cycle.
Page fault rate is 1%; a page fault occurs only when a miss occurs in both the TLB and the cache.
TLB access and cache access are sequential.
Main memory access time is 5 cycles.
Disk access time is 100 cycles.
Page tables are always kept in main memory.

What will be the average memory access time based on this information?

My approach (please check it):

Average memory access time
= Probability of NO page fault * (memory access time) + Probability of page fault * (page fault service time)
= 0.99 * ( TLB hit * (TLB access time + cache hit + cache miss) + TLB miss * (TLB access time + page table access time + cache hit + cache miss) ) + 0.01 * (TLB access time + page table access time + disk access time)
= 0.99 * ( 0.95 * (1 + 0.90*(1) + 0.10*(1 + 5)) + 0.05 * (1 + 5 + 0.90*(1) + 0.10*(1 + 5)) ) + 0.01 * (1 + 5 + 100)

Is this expression correct? Please let me know whether my approach is right or I have made mistakes somewhere. Can anyone help me? PS: I have my midterm next week and need to practice such questions.
Calculation of the average memory access time based on the following data?
Have you modified your base_url fields in the core_config_data table? I suggest you set up your local website using a fake domain name rather than 127.0.0.1/site/xyz. You can do this by configuring a vhost in Apache (don't forget to modify your hosts file). Then, since your admin is working, set the Secure and Unsecure URLs under System / General / Web to the domain you have created (or change the values manually in the core_config_data table). Drop the var/cache folder and try to display the homepage. Details over here: https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site
For the last couple of days the Magento frontend (localhost) has been showing a blank page. I tried all the usual solutions: cleared the cache; renamed the file local.xml.sample to local.xml; uncommented ini_set('display_errors', 1); in index.php; turned on developer mode; disabled and removed custom modules. The backend works fine, but the frontend is totally blank. None of the above worked. Any solutions, please?
Blank frontend at magento
In the same chapter you are quoting, the specification states: "Note that this does not guarantee that all appropriate responses are invalidated." Invalidation is a very difficult task in a distributed environment. There might be other caches, or other resources that rely on the same data (as in your case). That means it should not be attempted; it is cheaper to just plan for it in the system. One "workaround" is to make the client force an update on the resource which it knows must have changed because of the PUT. So you could make a request for yourself (and for the cache) to update the representation of the "parent" resource with this header:

Cache-Control: max-age=0

Again, other caches might still hold out-of-date but still valid cached responses, but this solves the problem of receiving conflicting information for the same process on the same machine. So I would not "normalize" the representations to return just URIs without any data; I would rather design the workflows in a way that avoids such problems if possible. If not, force a refresh (as described), set a sufficiently small caching time, or do not cache if all else fails.
I'm currently working on an SPA with an ASP MVC API. We have recently added client-side caching via HTTP headers on our API responses, with appropriate max-age values depending on the expected frequency of changes. While this has improved performance, we now have the issue that when users make a change themselves and then reload the page, they get a cache hit with the old data. To resolve this, I've added a version parameter to the GET request that is incremented each time a change is made. However, I've now found that RFC 7234 Sec 4.4 states that POST, PUT, or DELETE requests should invalidate the GET request's cache for the same URI. Given that, I'm wondering how I should better design my APIs so that the version parameter is not necessary and the browser handles this automatically. For example, I have:

GET /resource - Returns a collection of all resources
POST /resource - Creates a new resource
GET /resource/{id} - Gets the resource with the id specified
PUT /resource/{id} - Updates the resource with the id specified

Request 2 will invalidate 1, and 4 will invalidate 3; however, 4 should also invalidate 1. Is this correct behaviour? Or should request 1 just return a collection of the IDs of all resources, with a separate request 3 made for each ID? That doesn't seem viable, as it would result in hundreds of requests rather than one. Is there an easy solution to this?
Managing API Resource Caching With HTTP
Cache removal means deleting tokens from the keychain; it has no impact on the cookies in the web view. You should use the promptBehavior "always" flag to ensure that the user is prompted for credentials at login time.
I have integrated the ADAL library in my iOS application to authenticate with WAAD accounts. After the access token is received, the ADAL login screen automatically disappears. Now, if I call the same login code to log in with a different user, it does not show a fresh login screen; it automatically logs in with the previous user and gives me a new access token. How can I erase the cache of the previously logged-in user? I used the code below to clear the cache, but to no effect:

[authContext.tokenCacheStore removeAll]; // This method is no longer available; I get an error for removeAll.

[authContext.tokenCacheStore removeAllWithError:&error]; // It does not clear the cache.

Is there any way to clear the cache or to display the login screen? Please help me. Thanks in advance.
How to clear cache in iOS ADAL library
Thanks for the feedback. At this point we don't support caching per partition. I have logged an internal feature request for this.
A customer of mine is using Google BigQuery, but he is a little worried about the costs involved. After closer inspection, the many insert queries were often incurring high costs because the cache was constantly being invalidated. I recommended using a date-partitioned table for this. This ensures queries only run against distinct parts of the table, reducing the costs. But my customer actually often runs queries against old partitions, and these are rarely updated. It would make sense if each partition also maintained its own cache, but I have been unable to find any official documentation stating this. As I want to give the customer an estimate of the new cost projection, it would be nice if I could confirm that separate caches are maintained for each partition.
Does each partition of a date partitioned table in Google Big Query have its own cache?
Actually, the OutputCache directive is used for both client- and server-side caching. When you set the Location of that directive to Any, Client, Downstream, or ServerAndClient, the proper cache response headers are set such that browsers or proxies won't request the same page again and will serve the cached version of your page. But keep in mind that those clients are free to request those pages again. The Location options with their resulting Cache-Control headers, after setting the directive

<%@ OutputCache Location="XXX" Duration="60" VaryByParam="none" %>

are:

Client: private, max-age=60
Downstream: public, max-age=60
Any: public
ServerAndClient: private, max-age=60
Server: no-cache
No OutputCache directive: private

– Niels V, Aug 18, 2016
Comment: Please read up on max-age. I guess all the approaches you mentioned tell the client to consider the data stale after a time period. They don't validate whether the data actually changed by sending a request to the server, using a mechanism like ETag. – LCJ, Aug 18, 2016
Comment: No, it doesn't do ETag validation out of the box. IIS does it for static content, but ASP.NET Web Forms does not. But please rephrase your question to say what you are actually looking for. – Niels V, Aug 21, 2016
I have an ASP.NET Web Forms application. The blog post "CacheCow Series - Part 0: Getting started and caching basics" mentions that output caching uses HttpRuntime.Cache behind the scenes - hence it is not HTTP caching. The request reaches the server, and the cached response is sent from the server (when valid cached output is available there). So the entire content is sent across the wire. Is there any HTTP caching available for ASP.NET Web Forms (where the response content is not sent from the server if the cache is valid, but the client takes it from its HTTP cache after getting only validity information from the server)?

REFERENCES
Is page output cache stored in ASP.NET cache object?
Things Caches Do - Ryan Tomayko - 2ndscale.com/
Is there any Http Caching for ASP.Net Web Forms?
If you want to synchronize a++; and if (a % 2 == 0) { /* do something */ }, you should try this:

std::mutex mtx;

// Thread 1:
while (1) {
    mtx.lock();
    a++;
    mtx.unlock();
}

// Thread 2:
while (1) {
    mtx.lock();
    bool even = (a % 2 == 0); // read the shared variable under the lock
    mtx.unlock();
    if (even) {
        // do something.
    }
}

Locking a specific mutex every time before you use a specific resource, and unlocking it when you are done, ensures that the operations on that resource are synchronized.
– Johnny, Aug 11, 2016
Comment: That would be a solution for sure, but I want to know whether there are any ways other than the conventional lock/semaphore solution to this problem, because it is not that tough a problem to need locks. If both threads were writing to the variable, we would definitely need locks. But one thread is just doing an atomic read operation, and hence I feel locks can be avoided. – adisticated, Aug 16, 2016
How can one ensure that the caches on different cores/processors are in sync with each other at any instant? For example:

Thread 1:
while (1) {
    a++;
}

Thread 2:
while (1) {
    if (a % 2 == 0) {
        // do something
    }
}

In thread 2, when a is accessed, it would not reflect the latest value. Would this implementation using locks be a correct solution?

Thread 1:
while (1) {
    a++;
}

Thread 2:
while (1) {
    lock();
    unlock();
    if (a % 2 == 0) {
        // do something.
    }
}

The desired behaviour is that the two threads be as synchronized as possible.
Synchronizing caches in a multithreaded program on multi-core system
If you want something easy, with less work, and proven to work well, do the following:

1. Set up a Moxi client on each application server.
2. Point Moxi to a Couchbase bucket on the Couchbase cluster.
3. Change your web application servers to point at the local Moxi install.
4. For your next code revision, start converting your code to use the Couchbase SDK instead of memcached.

Yes, there will be a period where things are not hot in the cache, but it will not take long for Couchbase to get populated. This method is used all of the time to switch over. It is easy and nearly foolproof. One thing I have seen people do is try to copy things from their existing memcached servers over to Couchbase before cutting over, but what I am not sure of is how they knew the key of each value in memcached. Also note that Moxi is an interim step to get off of regular memcached easily, and it is great, but in the long run it is much better to switch to the SDK. The SDK has many more features than pure memcached. Do not use the memcached buckets, as they have none of the HA, persistence, or other features of Couchbase.
We have a memcached cluster running in production. Now we are replacing memcached with a Couchbase cluster as a persistent cache layer. The question is how to implement this cut-over and how to warm up the Couchbase bucket. Obviously we can't simply switch over to cold Couchbase, since starting with an empty cache would bring the whole site down. One option I was considering is to warm up Couchbase as a memcached node first. That means Couchbase uses the (non-persistent) memcached bucket and gets the cache set/get traffic like any other memcached node. The good thing about this is that it requires minimal code changes (all that's needed is configuring the Moxi proxy to take memcached traffic and registering that node as a memcached node). Later we would convert all memcached buckets to Couchbase buckets, but I am not sure Couchbase supports conversion between these two bucket types. The second option is to set up the persistent Couchbase bucket (as opposed to a non-persistent memcached bucket) from the beginning. We change the production cache client to replicate all traffic to both the memcached and Couchbase clusters, monitor the Couchbase bucket, and once the cached items reach a certain size, complete the cut-over. A small drawback is the extra complexity of changing the cache client. Thoughts?

EDIT (Aug 9, 2016): As I later found out, converting a memcached bucket to a Couchbase bucket is not supported in Couchbase, so the first option is not feasible. In the end we decided to set up the client-side (standalone) proxy on each application host, rolling it out host by host to ramp up the cache traffic. That way the change to the site is small enough.
Couchbase warmup strategies
You can try this:

Picasso.with(context)
    .load(your_image)
    .resize(4000, 2000)
    .onlyScaleDown() // the image will only be resized if it's bigger than 4000x2000 pixels
    .into(your_image_view);

– Yoni, Jul 20, 2016
I am using Picasso to show images in my application. In some cases the received images are really huge, and in those cases I want to resize the image and use the resized version from then on. I'm using the following Picasso target to receive the bitmap:

articleVH.mTarget = new Target() {
    @Override
    public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
        if (bitmap != null) {
            articleVH.mCoverImage.setImageBitmap(Utils.getResizedBitmapIfNeeded(bitmap));
        } else {
            mTimelineViewCallback.getArticleImage(articleCoverData);
        }
    }

    @Override
    public void onBitmapFailed(Drawable errorDrawable) {
        mTimelineViewCallback.getArticleImage(articleCoverData);
    }

    @Override
    public void onPrepareLoad(Drawable placeHolderDrawable) {}
};

Now, in case Utils.getResizedBitmapIfNeeded(bitmap) resizes the image, I want the resized image to be used the next time it is requested. How can this be done without using the resize(int, int) option, which I know caches the images but which I can't use?
How to force Picasso to cache and use a resized image next time the same image URL was requested
While you're right that NCache is "pricey" compared with Redis or Memcached, it is so with good reason. The Entity Framework provider model that you're after comes with the NCache Enterprise edition. It will save you a lot of development time. For more information, let me just point you to the right article: Entity Framework Caching with NCache. And since you'll be paying for the product, you can enhance your application response times in many more ways and with many more features. To get more information, try the Memcached Comparison and the Redis Comparison. If you're still set on Memcached, Julie Lerman has a good article about implementing a second-level cache in Entity Framework. Or you could use any other open-source Entity Framework module from GitHub and embrace the bugs as well, e.g. EFRedis.

Full disclosure: I work for Alachisoft ©
– Basit Anwer, Jun 28, 2016
I have an application written in ASP.NET MVC 5 that generates lots of database transactions. I don't have a slave server to dedicate to reporting, so I need to use second-level caching in Entity Framework to cache all the queries generated by the reports. The idea is to reduce the number of queries that have to hit the database, especially when multiple users are trying to view the same report: if 5 people want to view a dashboard, only one will hit the database and the rest will read the data set stored in the cache. This should improve report performance, reduce database locks, and improve my application's performance overall. Is it possible to use Memcached or Redis with Entity Framework such that the data set is cached for a set amount of time after the query is executed? And of course, before a query is executed, the cache would be checked for an existing data set before a hard query hits the server. I came across NCache, which seems to be exactly what I am trying to accomplish, but unfortunately it is pricey.
Can Memcached be used as second layer caching for Entity Framework 6?
This is the default behaviour: the cache holds compiled pieces of your code (partials, etc.), which speeds up compilation of a large number of files. I don't think you can reproduce your original CSS/SCSS from the cache.
I'm wondering what exactly the .scssc files in the .sass-cache folder do, and whether they can be used to generate CSS or SCSS if the original SCSS file has been wiped. When I open a file in .sass-cache, I get thousands of what I think are hexadecimal values. Can this be parsed back into CSS or SCSS in any way? And if not, what does this cache file do?
Generate CSS from scssc file in .sass-cache
You might want to look at Tape, Square's persistent queue library.
– Jesse Wilson, May 31, 2016
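For context, here is a minimal sketch of the idea with Tape's QueueFile (assuming Tape 2.x; the file name and the send helper are hypothetical): requests are appended to a file-backed queue while offline and drained oldest-first once connectivity returns.

import java.io.File;
import java.io.IOException;

import com.squareup.tape2.QueueFile;

public final class PendingRequests {
    private final QueueFile queue;

    public PendingRequests(File dir) throws IOException {
        // File-backed, so queued requests survive process death.
        this.queue = new QueueFile.Builder(new File(dir, "pending.queue")).build();
    }

    // Called while offline: persist the serialized request.
    public void enqueue(byte[] serializedRequest) throws IOException {
        queue.add(serializedRequest);
    }

    // Called once connectivity returns: replay requests oldest-first.
    public void drain() throws IOException {
        while (!queue.isEmpty()) {
            byte[] data = queue.peek();
            send(data);     // hypothetical: performs the actual HTTP call
            queue.remove(); // drop the entry only after a successful send
        }
    }

    private void send(byte[] data) {
        // hypothetical network call
    }
}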
I use OkHttp 3 on Android 4.4. Most answers I found are about caching the content of the HTTP responses of some request. I want to cache the requests themselves, so that when I am offline for some time and the connection then improves, all pending requests are sent. How do I do this with OkHttp? Is there a built-in request queue? I read about silent retry but am not sure how exactly it behaves. What if my connection is bad for, say, 6 hours? What if I have 30 pending requests?
How to cache the request queue (not responses) with OkHttp?
Use this code:

[YOUR_CELL_NAME.YOUR_IMAGE_NAME setImageWithURL:[NSURL URLWithString:@"IMAGE URL"]
                               placeholderImage:[UIImage imageNamed:@"PLACEHOLDER IMAGE NAME"]];

– user6503043, Jun 23, 2016
I am using a collection view to load images. These images are loaded from URLs on a background thread, and the scrolling is not smooth. The image URLs expire after a few seconds, so I want to update each image with a fresh one. How can I use SDWebImage to load the images and keep scrolling smooth? Can anyone please help me solve this?
How to use SDWebImage in my project
php artisan config:cache doesn't clear the cache; it creates the config cache. If you run the first four commands from your list, all application cache will be cleared. You may also want to clear caches created by packages, Composer, or the web server; it depends on what you are trying to achieve. If you want to clear the nginx cache, you can do so by deleting the nginx cache directory, for example /var/nginx/cache/.
– Alexey Mezenin, May 21, 2016
Comment: Pages are cached until you reboot nginx; I want to clear all cache. – bleggleb, May 21, 2016
Comment: @bleggleb, you can clear the nginx cache by deleting the nginx cache directory, for example /var/nginx/cache/. Also, you can temporarily turn off caching if you want. – Alexey Mezenin, May 21, 2016
I run these commands:

php artisan view:clear
php artisan route:clear // or (s)
php artisan cache:clear
php artisan config:clear
php artisan config:cache

But the cache is fully cleared only after rebooting nginx and php5-fpm.
How to completely clear the cache in Laravel?
Please see the example below; the created file will be somewhere under the directories used by the NodeManager for containers. This is the configuration property yarn.nodemanager.local-dirs in yarn-site.xml, or the default inherited from yarn-default.xml, which is under /tmp. Please also see Chris Nauroth's answer, which says that this is just for debugging purposes and is not recommended as a permanent production configuration; it clearly describes why it is not recommended.

public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
    // do some hadoop stuff, like counting words
    String path = "newFile.txt";
    try {
        File f = new File(path);
        f.createNewFile();
    } catch (IOException e) {
        System.out.println("Message easy to look up in the logs.");
        System.err.println("Error easy to look up in the logs.");
        e.printStackTrace();
        throw e;
    }
}
I want to store some values from each map task onto the local disk of the data node, for example:

public void map(...) {
    // Process
    List<Object> cache = new ArrayList<Object>();
    // Add values to cache
    // Serialize cache to a local file on this data node
}

How can I store this cache object on the local disk of each data node? If I store the cache inside the map function as above, the performance will be terrible because of the I/O. Is there a way to wait until the map tasks on a data node have run to completion and only then store the cache to local disk? Or does Hadoop have a feature that solves this issue?
Write data to local disk in each datanode
Fragment caching

{% load cache %}
{% cache 86400 fragment_name %}
    <img src="{% static 'img/large_image.png' %}" alt="" />
{% endcache %}

What you have here is Django fragment caching. It has no influence on whether the image will be cached by the browser or not. Since the file is hosted on Amazon S3, browser caching is dictated solely by the metadata you set for the file object on S3. Despite the marketing, template fragment caching provides very little benefit: it takes about as much time to connect to the cache and retrieve the data over the network as it does to render the template.

Cache control headers

Looking at the headers for the image, the Expires header in the metadata specifies that the file ought to be cached until 2099! However, there is a conflicting Cache-Control header which dictates that the file should be cached for only one week. Since you want only one day of caching, that wouldn't really matter anyway.

Cache-Control: max-age=604800, s-maxage=604800, must-revalidate
Expires: Thu, 31 Dec 2099 20:00:00 GMT
Last-Modified: Wed, 27 Apr 2016 11:22:07 GMT
Server: AmazonS3

The headers you have shown do not provide conclusive evidence as to whether or not the file has been cached. That can be concluded by looking at the HTTP status: if it is 200, the file was not served from the cache. However, Chrome developer tools sometimes provide wrong information on this; in that case the Size field will show "from cache".
OK, I have a (simple) question. I use django-storages and boto to serve static files directly from S3 (all src links point to S3). I work on Apache on WebFaction, using memcached. Now, in one of my templates I want to cache a large image. I do:

{% load cache %}
{% cache 86400 fragment_name %}
    <img src="{% static 'img/large_image.png' %}" alt="" />
{% endcache %}

Every time I visit the page, the image is re-downloaded with the following response headers:

Cache-Control: max-age=604800, s-maxage=604800, must-revalidate
Expires: Thu, 31 Dec 2099 20:00:00 GMT
Last-Modified: Wed, 27 Apr 2016 11:22:07 GMT
Server: AmazonS3

Shouldn't the image be cached for 86400 seconds the first time and loaded from the cache the next time? Am I doing something wrong, or have I misunderstood something?
Django cache image from S3
Werkzeug's SimpleCache isn't thread safe. It's not intended to be used by other threads or processes as it doesn't implement locking. Also, the documentation seems to allude to the cache being stored in process memory, which would make it quite difficult to alter the main process's cache from a secondary one.
I'd like to set a cache variable from a background process in Flask using its SimpleCache framework. That is:

from rq import Queue
from worker import conn
from werkzeug.contrib.cache import SimpleCache

cache = SimpleCache()
app = Flask(__name__)
q = Queue(connection=conn)

# background process to be run; located in a separate file
def test():
    for i in range(10):
        cache.set("value", i, 3600)
        time.sleep(1)

@app.route('/')
def home():
    cache.clear()
    q.empty()
    q.enqueue(test, timeout=1000)
    return jsonify({'state': "running"})

@app.route('/current_value')
def get_value():
    return jsonify({'value': cache.get("value")})

However, this always returns null. I've done this before using Redis; is setting the cache from a background process simply not possible with SimpleCache? Or am I just doing something wrong?
Setting Cache from background process in Flask
Node.js itself doesn't do any caching by default, although the OS and other lower-layer elements (e.g., the HDD) may do so, speeding up consecutive reads significantly. If you want to cache HTTP responses in Node.js, there is an http-cache library (https://www.npmjs.com/package/http-cache) and a request-caching library (https://www.npmjs.com/package/node-request-caching). For caching files you can use filecache (https://www.npmjs.com/package/filecache), and for serving static files, serve-static (https://www.npmjs.com/package/serve-static). If you're using a framework such as Express, it's not that simple anymore - for example, running Express in production mode causes it to cache some things (like templates or CSS). Also note that res.sendFile streams the file directly to the client (possibly via a proxy server such as nginx). However, even Express's website (http://expressjs.com/en/advanced/best-practice-performance.html) advises using a separate proxy:

"Cache request results. Another strategy to improve the performance in production is to cache the result of requests, so that your app does not repeat the operation to serve the same request repeatedly. Use a caching server like Varnish or Nginx (see also Nginx Caching) to greatly improve the speed and performance of your app."

For other recommendations on speeding up Node.js, you can take a look at https://engineering.gosquared.com/making-dashboard-faster or http://www.sitepoint.com/10-tips-make-node-js-web-app-faster/
I'm wondering whether Node.js uses a cache for the following scenario, or whether a module for this exists. Say you have a web portal that shows 20 products with images on its start page. Every time, the server has to fetch the images from the HDD, or in the best case from an SSD; just locating a single image takes the server about 5-7 ms. When 50 users visit the start page at the same time, it would take 20 images * 5 ms * 50 users = 5000 ms just to find the images on the HDD. So it would be nice if there were a way to keep often-used files such as images, CSS, and HTML in memory: you just define a cache size, for example 50 MB, and the module/Node.js keeps the most frequently used files in the cache.
Does Node.js/nginx use an internal cache for often-used files?
Hi, I faced a similar problem recently. Are you sure you aren't getting any error before this? Whenever I imported librosa, the initial error was "DLL failed to load" in some SciPy module. If I ignored the error and imported librosa again, I'd get ImportError: cannot import name cache. So I uninstalled NumPy and SciPy, downloaded them from http://www.lfd.uci.edu/~gohlke/pythonlibs/, and reinstalled them. That fixed all the import issues.
– orchidas, Jun 23, 2016
I was working with the librosa library in IPython and hadn't encountered any problems until yesterday, when it failed to import. Specifically, when I try to import librosa it gives me the following error message:

import librosa

ImportError                               Traceback (most recent call last)
<ipython-input-3-6ce83e78f094> in <module>()
----> 1 import librosa

c:\python27\lib\site-packages\librosa\__init__.py in <module>()
     12
     13 # And all the librosa sub-modules
---> 14 from . import cache
     15 from . import core
     16 from . import beat

ImportError: cannot import name cache

Can somebody tell me what this message is about and how I can solve the problem?
Error with librosa library
There is no such feature in Guava, as Louis already pointed out. You can use, for example, EHCache or cache2k. For cache2k I can give you quick directions, since this is a core feature we use regularly. You can either implement the interface ValueWithExpiryTime on your value object, which is:

interface ValueWithExpiryTime {
    long getCacheExpiryTime();
}

Or you can register an EntryExpiryCalculator to extract the time value. The cache is built as follows:

Cache<Key, Value> cache =
    CacheBuilder.newCache(Key.class, Value.class)
        .expiryCalculator(new EntryExpiryCalculator<Key, Value>() {
            @Override
            public long calculateExpiryTime(
                    final Key key, final Value value,
                    final long loadTime, final CacheEntry<Key, Value> oldEntry) {
                return value.getExpiry();
            }
        })
        .build();

The time is the standard long type, represented in milliseconds since the epoch. By default the expiry will happen not exactly at the specified time, but zero to a few milliseconds later, depending on your machine load. This is the most efficient mode. If this is a problem, add sharpExpiry(true).

Disclaimer: I am the author of cache2k...
– cruftex, Mar 25, 2016
I am trying to create a cache using the Guava cache library. One of my main requirements is that I want to set the cache expiry inside the CacheLoader.load(...) function, instead of the way most examples on the web do it, like the one below:

LoadingCache<String, MyClass> myCache = CacheBuilder.newBuilder()
        .maximumSize(MAX_SIZE)
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .build(cacheLoader);

The reason is that the object retrieved from the database by the CacheLoader.load(...) function contains the expiration data, so I want to use this information instead of some "random" static value. I want something like this:

LoadingCache<String, MyClass> myCache = CacheBuilder.newBuilder()
        .maximumSize(MAX_SIZE)
        .build(cacheLoader);
...
CacheLoader myCacheLoader = new CacheLoader<String, MyClass>() {
    @Override
    public MyClass load(String key) throws Exception {
        // Retrieve the MyClass object from the database using 'key'
        MyClass myObj = getMyObjectFromDb(key);
        int expiry = myObj.getExpiry();
        // Now somehow set this 'expiry' value on the cache entry ????
        return myObj;
    }
};

OR: Is there a better option than Guava cache for this purpose?
Guava cache: how to set expiration on 'CacheLoader.load()' instead of during CacheBuilder.newBuilder()?
Since this happens when you change the headers, you are most probably not setting the Cache-Control header. According to Jake Wharton (one of the developers of Picasso): "Picasso doesn't have a disk cache. It delegates to whatever HTTP client you are using for that functionality (relying on HTTP cache semantics for cache control). Because of this, the behavior you seek comes for free." (Taken from Jake Wharton's answer here.) Also, if you never see a blue indicator, it's likely that your remote images do not include the proper cache headers to enable caching to disk.
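If the server's headers can't be changed, one common workaround is a sketch like the following (an editorial illustration for OkHttp 2.x as used in the question, not part of the original answer; the cache size, max-age value, and AuthInterceptor are assumptions): install a disk cache explicitly and add a network interceptor that rewrites the response's Cache-Control header so the HTTP layer is allowed to cache to disk.

import java.io.File;
import java.io.IOException;

import com.squareup.okhttp.Cache;
import com.squareup.okhttp.Interceptor;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Response;

public final class CachingClientFactory {

    public static OkHttpClient build(File cacheDir) {
        OkHttpClient client = new OkHttpClient();

        // A bare OkHttpClient has no disk cache at all, so install one.
        client.setCache(new Cache(new File(cacheDir, "http-cache"), 50L * 1024 * 1024));

        client.interceptors().add(new AuthInterceptor()); // from the question

        // Rewrite whatever the server sent so responses become cacheable.
        client.networkInterceptors().add(new Interceptor() {
            @Override
            public Response intercept(Chain chain) throws IOException {
                Response response = chain.proceed(chain.request());
                return response.newBuilder()
                        .header("Cache-Control", "public, max-age=86400")
                        .build();
            }
        });
        return client;
    }
}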
I have to use a custom OkHttpClient so I can add headers to the image requests. The problem is that Picasso won't cache any images on disk because of this. I've used setIndicatorsEnabled(true) to check caching, and I see only red indicators. When I use the default OkHttpDownloader, all is fine. Below is my Picasso initialization code. Has anyone encountered the same problem?

public static void init(Context context) {
    Picasso.Builder builder = new Picasso.Builder(context);
    OkHttpClient client = new OkHttpClient();
    client.interceptors().add(new AuthInterceptor());
    Downloader downloader = new OkHttpDownloader(client);
    Picasso.setSingletonInstance(builder.downloader(downloader).build());
    Picasso.with(context).setIndicatorsEnabled(true);
}

And here is my image loading code:

public static void load(final ImageView imageView, final Image image) {
    Picasso.with(imageView.getContext())
            .load(image.getUrl())
            .resize(400, 0)
            .memoryPolicy(MemoryPolicy.NO_CACHE)
            .into(imageView);
}
Picasso doesn't cache image on disk
It seems that the way to go is to use TagDependency. With it, you can invalidate the cached query whenever you see fit. You create the cached query, giving it a unique tag, like this:

$object = $db->cache(function ($db) use ($id) {
    return self::findOne($id);
}, 0, new TagDependency(['tags' => 'myquerytag']));

Then, when you want to invalidate it, you can use the static invalidate method of TagDependency like this:

TagDependency::invalidate(Yii::$app->cache, 'myquerytag');

Keep in mind that in this case I gave the query a cache expiration time of 0, as in the documentation example, but you can give it any time you see fit.
– lgomezma, May 4, 2016
I am using the latest (March 2016) Yii2 query caching mechanism in models, with Redis, in the following form:

$object = $db->cache(function ($db) use ($id) {
    return self::findOne($id);
});

As a result, an entry with a GUID-like ID (e.g. "bb83d06878206d758eda3e29082dda4f") is set that holds the result of the query. Is there a way to invalidate just that record (based on its ID), or the whole model's table, every time the model's save method is invoked? E.g. if a User record is saved, we want to dirty that user's record (or the "user" table), so the next time we fetch that user, the cache is no longer valid and the record is retrieved from the DB. If possible, I would like to avoid DbDependency (e.g. on a "last_updated" field on the record), since that is another DB query, if I am not mistaken.
Is there a way to invalidate / dirty Yii2 query cached record?
I would recommend embedding an Ignite node into each service instance and creating a REPLICATED cache to hold the data (you can configure read/write-through [1] and evictions [2] if needed). With such a deployment, each service instance will have all the cached data locally, so reads will be very fast. To start an embedded node, simply call Ignition.Start() with the proper configuration at startup. Here is a small example: https://apacheignite-net.readme.io/docs/getting-started-1#first-ignite-data-grid-application

[1] https://apacheignite-net.readme.io/docs/persistent-store
[2] https://apacheignite.readme.io/docs/evictions

– Valentin Kulichenko, Feb 28, 2016
I used Ignite in a Java project before, but was never exposed to the infrastructure/architecture setup. Now I have a .NET project where I see a perfect need for a distributed in-memory cache, and I am turning to apacheignite-net. The .NET project is a set of backend services (WCF & AMQP). These services can scale horizontally: I can add additional servers running these services for higher throughput. But I need advice/pointers on the Ignite deployment/infrastructure alongside .NET:

- I can't add additional/dedicated servers for caching, so I am thinking about having both my .NET services and apacheignite-net on the same box.
- The objects I would need to cache (right now fetched from the DB on every request) are not that large, but I do not need all of them cached - so perhaps a combination of read-through and an eviction policy?

My question is: is it a normal/safe thing to have both the JVM for Ignite and these .NET services on the same box? I read through the performance tips, but still seek input from the wiser/more experienced. I can always add more memory to these servers, but not many more cores; the .NET services do not consume all the available CPU, they are backed by an RDBMS, and I can ask for a bit more RAM if needed.
.net ignite distributed cache on the same machine as the .net runtime?
I'm aware of how to add cache headers. We have a production issue affecting many clients, and for a large site it's no small thing to migrate the fix to production. I am trying to find out how long the browser caches by default, because that is how long the issue will persist for our clients before the browser naturally refetches the newer version of the JavaScript, if we do not migrate the fix. For instance, if it were 2 days, the issue would already have cleared itself up without us taking action. It seems the answer was right in front of me, though: there are no cache headers, but the tag below seems to say "cache forever", regardless of the no-cache meta tags right before it.

<meta http-equiv="expires" content="0">

So we have to do a production migration and add cache busters to all our JavaScript to fix the issue, or keep telling every client to refresh the page.
– George, Feb 25, 2016
Comment: The first paragraph of your answer sounds like it should go in the question. – Bergi, Feb 25, 2016
Comment: That <meta http-equiv="expires" …> tag is not in your JavaScript files, I presume? It only affects the HTML resource it is included in. – Bergi, Feb 25, 2016
We had a production release, and there were issues due to JavaScript that we updated. The clients have an old cached version of the JavaScript file, which is causing the issue. The workaround is to have them refresh their browser cache (hit F5), but I'm trying to understand how long the JavaScript gets cached by default. I do not see Cache-Control or Expires headers being set on the server side, and I do not see those headers in the developer console in Chrome. I DO see that the meta tags below are set on the page. These meta tags would lead me to believe the JS is not cached - however, we know it is, because we have many users reporting issues and hitting F5 fixes it.

<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="cache-control" content="no-cache">
<meta http-equiv="expires" content="0">

So back to my original question: how long is the JavaScript going to be cached by default if we set no cache headers? I apologize, since I'm sure this question is asked elsewhere; I just can't seem to find a concrete answer. Here are all the headers I can see being set:

Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 157271
Content-Type: application/x-javascript
Date: Tue, 23 Feb 2016 15:37:38 GMT
Keep-Alive: timeout=15, max=500
Last-Modified: Sat, 20 Feb 2016 10:57:45 GMT
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
how long is javascript cached by default
nginx sets the response headers for the static file; included in the headers are:

Cache-Control
Expires
Last-Modified

Cache-Control tells the client (at least) how to cache the content. Expires and Last-Modified allow the client to determine when to fetch new content. What you must do is ensure that PHP sends the same headers, or sensible headers if not exactly the same. Now that you know which headers are important, inspecting the requests in your browser will tell you how to achieve this.
I'm using nginx as the web server and Firefox to view response headers. For testing, I had two files with the same content on the server: test.html and test.php. In the nginx configuration file, the expires directive is set to 30d in the server context. When accessing test.html in a web browser multiple times, the browser first obtains a 304 Not Modified response and then serves a copy cached in the browser. However, when accessing test.php, the browser always makes a full request to the server (200 OK) without using the browser cache. The questions are: Is this behaviour (i.e. the different treatment of HTML and PHP-generated files) normal? What can be done to make web browsers cache HTML and PHP-generated files in the same way?
Do web browsers cache HTML files and PHP generated files differently?
You can add multiple counter_cache columns to your database, and it looks like you're on the right track as far as naming them. To keep them updated, you'll need to modify your ForumThread and ForumPost models to look a bit like this:

class ForumThread < ActiveRecord::Base
  ...
  belongs_to :forum, counter_cache: true
  ...
end

There is more information on counter_cache available in the Rails Guides. There's also a RailsCast about counter caches.
– Kyle Balderson, Feb 10, 2016
Comment: Yes, I was familiar with how to set the counter_cache in the model - I was just curious about the migration file. Thank you, though, for adding that part. I actually followed the RailsCast, and it gives an error when trying to run the migration. Getting errors when trying to run migrations is a huge headache for a newbie; that's why I wanted to make sure that the code in the migration file was correct. – BB123, Feb 10, 2016
I've added forum_threads_count and forum_posts_count columns to the forums table. forum_threads_count works just fine. forum_posts_count, however, has been reset to 0 instead of counting all of the forum posts created before I added the counter cache columns. The relationships are: Forum has_many :forum_threads, ForumThread has_many :forum_posts, and Forum has_many :forum_posts, through: :forum_threads. I later found out that I can't use counter_cache with a has_many through: relationship, so I wrote some private methods with after_create/after_destroy callbacks to increment/decrement the counter. The counter works; it's just that it still doesn't account for all the forum posts created before these columns were added to the forums table. I feel like something is wrong with how I wrote the migration. Please help, and thank you in advance. I appreciate everyone on this site helping people out.

"...add_counters_to_forums_table.rb" (migration file):

class AddCountersToForumsTableAgain < ActiveRecord::Migration
  def self.up
    change_table :forums do |t|
      t.integer :forum_threads_count, :forum_posts_count, default: 0
    end

    Forum.reset_column_information
    Forum.all.pluck(:id).each do |id|
      Forum.reset_counters(id, :forum_posts)
      Forum.reset_counters(id, :forum_threads)
    end
  end

  def self.down
    change_table :forums do |t|
      t.remove :forum_threads_count, :forum_posts_count
    end
  end
end

models/forum.rb
models/forum_thread.rb
models/forum_post.rb
views/forums/index.html.erb
Rails: how can I add two counter_cache columns in one migration?
Use these commands to find out which directory is getting huge:

du -h --max-depth=1 [project_root_path]/cache/
du -h --max-depth=1 [project_root_path]/cache/smarty
du -h --max-depth=1 [project_root_path]/cache/smarty/cache

You might have installed a module which is handling the cache inappropriately. This thread on the Prestashop forum might be related.
– Florian Lemaitre, Jan 25, 2016
Comment: Thanks for such a quick response. Will disabling the cache fix this issue, or will it result in slow page loading? – Zmax Gera, Jan 25, 2016
Comment: It's preferable not to disable the cache. Did you find which cache directory is too big? – Florian Lemaitre, Jan 25, 2016
I have a shop with Prestashop installed on it. The cache folder incrementally grows to 21 GB+ every week and consumes a lot of disk space. What could be the reason behind this? In my settings, Smarty cache is enabled and "Recompile templates if the files have been updated" is ticked. Do I need to clear the cache at particular intervals? Is there any setting I need to tweak in the Prestashop back end? Thanks.
prestashop huge cache folder size
I'm not one of the developers, but I believe their train of thought was this: our keys are URLs, and a lot of the time, different URLs (typically from the same site) share a good number of characters. That's why the hashing is performed on the first half of the key and on the second half separately - to create more variance in the file names. Java's String.hashCode() is only a 32-bit value, so on its own it isn't very collision-resistant.
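For comparison, here is a sketch of a more collision-resistant naming scheme (this is not Volley's actual code, just an illustration): hash the whole key with SHA-256 and use the hex digest as the file name.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class CacheFileNames {

    private CacheFileNames() {}

    // One 256-bit digest instead of two concatenated 32-bit hash codes.
    public static String filenameForKey(String key) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandatory for conforming JVMs, so this should not happen.
            throw new AssertionError(e);
        }
    }
}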
/**
 * Creates a pseudo-unique filename for the specified cache key.
 *
 * @param key The key to generate a file name for.
 * @return A pseudo-unique filename.
 */
private String getFilenameForKey(String key) {
    int firstHalfLength = key.length() / 2;
    String localFilename = String.valueOf(key.substring(0, firstHalfLength).hashCode());
    localFilename += String.valueOf(key.substring(firstHalfLength).hashCode());
    return localFilename;
}

This code is from Google Volley's DiskBasedCache. Why split the key instead of hashing it directly, e.g.:

return String.valueOf(key.hashCode());
Why does Volley's DiskBasedCache split the key instead of hashing it directly for the cache file name?
The way I see it, since the L1 instruction cache and the L1 data cache are accessed in parallel, you should compute the AMAT for instructions and the AMAT for data, and then take the larger value as the final AMAT. In your example, since the data miss rate is higher than the instruction miss rate, you can consider that during the time the CPU waits for data, it resolves all the misses in the instruction cache. If the unit of measure is cycles, you do the same as if it were nanoseconds; if you know the frequency of your processor, you can convert the AMAT back to nanoseconds.
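In symbols (an editorial restatement using the numbers from the question; the miss penalties are left as unknowns since the question does not pin them down):

\mathrm{AMAT}_{\text{inst}} = 2 + 0.01 \cdot p_{\text{inst}},
\qquad
\mathrm{AMAT}_{\text{data}} = 2 + 0.03 \cdot p_{\text{data}},
\qquad
\mathrm{AMAT} = \max\!\left(\mathrm{AMAT}_{\text{inst}},\, \mathrm{AMAT}_{\text{data}}\right)

Here 2 is the L1 hit time in cycles, 0.01 and 0.03 are the instruction and data miss rates, and p_inst and p_data are the respective miss penalties in cycles.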
I can calculate the miss penalty when I have a single cache, but I'm unsure what to do when presented with two L1 caches (one for data and one for instructions) that are accessed in parallel. I'm also unsure what to do when given clock cycles instead of actual time such as nanoseconds. How do I calculate the average miss penalty using these new parameters? Do I just use the formula twice and then average the miss penalties, or is there more to it?

AMAT = hit time + miss rate * miss penalty

For example, I have the following values:

AMAT = 4 clock cycles
L1 data access = 2 clock cycles (also the hit time)
L1 instruction access = 2 clock cycles (also the hit time)
60% of instructions are loads and stores
L1 instruction miss rate = 1%
L1 data miss rate = 3%

How would these values fit into the AMAT formula?
Finding Average Penalty from AMAT
You could try using HTTP response headers instead of HTML meta tags: see "Disabling browser caching for all browsers from ASP.NET". If you're asking about disabling the caching of data in input elements, see "How to prevent google chrome from caching my inputs, esp hidden ones when user click back?"
Is it possible to prevent Google Chrome from caching files programmatically? I'd like to achieve the same effect as the "Disable cache" option in the Chrome developer tools. The main problem is that I'm using an external script (which I can't change) that loads another script; appending additional (randomly generated) parameters to that script's source URL won't help. So far I've tried using meta tags:

<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="cache-control" content="no-store" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1990 12:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />

After testing different combinations of these tags, I can only say that Google Chrome ignores them entirely. Is there any way to do this? Thanks in advance.
How to prevent caching in Google Chrome with meta tags
Use the Volley library for your requests. If you call setShouldCache(true), the last response received from the specified URL is kept, and in your request's error callback you can fall back to the data in Volley's cache.
– Mohammad Ranjbar Z, Oct 11, 2015
Comment: Can that cache more than one call at a time? Like if I want the user to fill out multiple forms before they get Internet. – Chris, Oct 11, 2015
Comment: This cache system saves the last response received from a URL. If you send requests to multiple URLs, yes, Volley can cache all of them; but if you send multiple requests to one URL, you can normally access only the last response. – Mohammad Ranjbar Z, Oct 11, 2015
Comment: I'll have to look into Volley some more and figure out if it fits my needs, thanks. :) – Chris, Oct 11, 2015
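A minimal sketch of the fallback idea (an editorial illustration, assuming the standard Volley API; the render helper is hypothetical, and in classic Volley the cache key for a GET request is simply the URL):

import com.android.volley.Cache;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.StringRequest;

public class CacheFallbackFetcher {

    private final RequestQueue queue;

    public CacheFallbackFetcher(RequestQueue queue) {
        this.queue = queue;
    }

    public void fetch(final String url) {
        StringRequest request = new StringRequest(Request.Method.GET, url,
                response -> render(response), // fresh data from the network
                error -> {
                    // Offline or failed: fall back to the last cached response.
                    Cache.Entry entry = queue.getCache().get(url);
                    if (entry != null) {
                        render(new String(entry.data));
                    }
                });
        request.setShouldCache(true); // keep the response in Volley's cache
        queue.add(request);
    }

    private void render(String body) {
        // hypothetical UI update
    }
}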
I have an application that uses HttpPost, DefaultHttpClient and a ResponseHandler for all my network calls. A problem with the application is that when the user doesn't have Internet, the application pretty much has to block them from doing anything. I've heard there may be a library that could cache my network calls, and then when the user goes back to having Internet, it would make those calls. Is there a good library that would handle this? Example with Internet: User fills out form. User has Internet. App calls networking, and all is good. Example without Internet: User fills out form. User does NOT have Internet. App caches the call, and notifies the user it will be posted once they have Internet. Sometime later, the user has Internet and the call is made. Thanks for your time.
Android - Library to cache/recall network calls when no Internet
"I need to know how I can tell the production environment to properly use that index in the first statement." - Use the FORCE INDEX syntax. If you paste the CREATE TABLE and EXPLAIN output for the tables and queries involved (respectively) on dev and production, we can narrow it down to just a couple of things. It might also help to know the RAM available on each machine.

"PS: On my development environment I always get an index merge which makes the query fast even though it makes a filesort for some reason that I don't really care about." - The filesort comes from sorting by d, which isn't in your index.

– gfunk, Oct 1, 2015
I've looked everywhere for my issue but found no definite answer. Database: MySQL. Given three numerical fields a, b, c and one datetime field d, all indexed separately. The involved table holds 10 million records. Given two numbers n, m, I have a basic query: select * where (a=n or b=n) and c IN(m) Order by d DESC (n can be any number, m can be any number from 1-9). I also have a separate index on each of them. I've tried composite indexes on ac and bc, with no success. On my development environment I always get an index merge, which makes the query fast even though it does a filesort for some reason that I don't really care about. But on production (a different server with the same schema/data) that doesn't happen no matter what I do. My workaround for this weird issue was turning the query into the following statement: From: select * where (a=n or b=n) and c IN(m) ORDER BY d desc To: select * where (a=n or b=n) and c IN(m,'m') ORDER BY d desc That resulted in an index-merge query on the production environment as well, which suggests to me that there's an execution plan cache somewhere, and I can't figure out for the life of me where to clear that cache (if indeed there is one). I need to know how I can tell the production environment to properly use that index in the first statement. As a note: for some reason, EXPLAIN tells me that d is the index used on production when explaining the query.
Mysql query execution plan has caching?
I've never used it, but maybe the Cache component can help you; see http://camel.apache.org/cache.html. So in your case, call the CHECK operation; if the data exists, end the route, and if not, call the ADD operation and do the further processing in the route.
Thanks for your response. But that is still mixed with routing logic; I don't see the Cache component providing a way to be decoupled from routes, and I am looking to decouple them. – Ram Kumar Sep 21, 2015 at 10:06
OK, but the Camel interceptor, see camel.apache.org/intercept.html, should fit your needs then. – soilworker Sep 21, 2015 at 15:06
I think I already said in my question that the interceptor is still coupled with routing logic. – Ram Kumar Sep 21, 2015 at 16:16
Yes, but according to the documentation you can set/configure the interceptor globally for all routes in the Camel context. – soilworker Sep 22, 2015 at 7:42
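A minimal sketch of that CHECK/ADD flow with the Camel 2.x camel-cache component; the cache name responseCache, the requestId header, and the direct:expensiveProcessing endpoint are made-up names for illustration:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.cache.CacheConstants;

public class CachingRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // 1. Ask the cache whether a result for this key exists.
            .setHeader(CacheConstants.CACHE_OPERATION, constant(CacheConstants.CACHE_OPERATION_CHECK))
            .setHeader(CacheConstants.CACHE_KEY, header("requestId"))
            .to("cache://responseCache")
            .choice()
                .when(header(CacheConstants.CACHE_ELEMENT_WAS_FOUND).isNotNull())
                    // 2a. Cache hit: fetch the cached body and stop.
                    .setHeader(CacheConstants.CACHE_OPERATION, constant(CacheConstants.CACHE_OPERATION_GET))
                    .to("cache://responseCache")
                    .stop()
            .end()
            // 2b. Cache miss: do the real work, then store the result.
            .to("direct:expensiveProcessing")
            .setHeader(CacheConstants.CACHE_OPERATION, constant(CacheConstants.CACHE_OPERATION_ADD))
            .to("cache://responseCache");
    }
}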
I am working on an application that uses Apache Camel routes to process requests. I want to add caching to each of the routes, so that if the requested data is already in the cache we don't need to execute the processing in the route; otherwise the route logic is executed. I want to know how we can transparently add caching to each of the routes. I initially thought of adding a check for cached content at the start of each route and proceeding based on the result, plus a step to add the route's response to the cache at the end of the route. But I think this approach couples the caching logic with the route logic; still, we only know what to cache, and whether to cache it, from within the routes. I am looking for a way to add this as an aspect, as in AOP. Is this possible in Camel? I have seen that there is an interceptor in Camel, but it is still part of the route, so there is no separation from the route logic. Is there a way to transparently add caching to each of the routes?
Add Interceptors to Camel Route
You can either install a Cordova plugin to disable the cache for Android and iOS:
ionic cordova plugin add cordova-disable-http-cache
Or you can check this solution at the Ionic forum (note that the dot in the regex must be escaped):
myApp.factory('httpInterceptor', ['$q', function($q) {
    // match asset files so templates/scripts/styles are not cache-busted
    var regex = new RegExp('\\.(html|js|css)$', 'i');
    var isAsset = function(url) {
        return regex.test(url);
    };
    return {
        // optional method
        'request': function(config) {
            if (!isAsset(config.url)) { // if the call is not for an asset file
                config.url += "?ts=" + Date.now(); // append a timestamp to defeat caching
            }
            return config;
        },
        // optional method
        'requestError': function(rejection) {
            return $q.reject(rejection);
        },
        // optional method
        'response': function(response) {
            return response;
        },
        // optional method
        'responseError': function(rejection) {
            return $q.reject(rejection);
        }
    };
}]);
// append the interceptor in the config block using $httpProvider
$httpProvider.interceptors.push('httpInterceptor');
The app checks a server message every time it is opened. On Android this message is always cached, and when the server is offline a cached message is returned. That is also a problem, because in that case a message should certainly be shown (a default error message or a timeout). I have tried the following:
$http.get(url, { cache: false }) ...
And before that:
$cacheFactory.get('$http').removeAll();
Even localStorage.clear();
Also in the module's config:
.config(['$httpProvider', function($httpProvider) {
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }
}])
On iOS the value does not seem to be cached, except for when the server is offline.
Cordova/AngularJS $http.get always cached
Well, first of all, if the user has visited a website which uses the jQuery CDN, his browser will already have the library. Look at these statistics: they mean that out of the top 1 million websites, 30,000 are using the jQuery CDN, so there is a quite high probability that the user has already visited one of them. The other question is why you need to rewrite this for mobile at all. jQuery's file size is 84KB, downloaded and cached once; your HTML probably costs even more, and it is downloaded every time. Don't create problems out of nothing: go on using jQuery and spend more time on improving the really important things.
Thanks for the answer, but assuming this is not important, I can only rely on jQuery CDN caching for mobile, which would have to cover the great majority of users, otherwise performance can be affected. If that is the case then you are right. – Omer Greenwald Jul 28, 2015 at 7:34
I'm considering dropping jQuery and replacing it with vanilla JS code, mostly for mobile browsers. However, I have also read that most users have the jQuery CDN version already cached in their browser, so they don't actually need to download it. If that is indeed the case for mobile browsers as well, it could save me a lot of code-rewriting time. Since my site is not live yet and I can't get an indication myself, I am looking for general statistics on the estimated percentage of users whose browsers have the jQuery library cached versus those who don't. Any help will be appreciated.
jQuery CDN caching statistics
Each cache concurrency strategy has an associated cache synchronization mechanism:
NONSTRICT_READ_WRITE is a read-through cache, because entities are stored in the cache when they are fetched from the database (not when they are persisted). Updating an entity causes its cache entry to be invalidated.
READ_WRITE is an asynchronous write-through cache strategy, because the database and the cache are not updated transactionally. Soft locks are used to guarantee consistency.
TRANSACTIONAL is a synchronous cache strategy, as both the database and the cache are updated atomically.
Hibernate favours strong consistency, so both READ_WRITE and TRANSACTIONAL offer cache coherency similar to the READ_COMMITTED isolation level. With NONSTRICT_READ_WRITE, stale records can still occur.
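For reference, the strategy is selected per entity (or collection) with the @Cache annotation; a minimal sketch with a made-up Product entity:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // soft-locked, write-through
public class Product {
    @Id
    private Long id;

    private String name;

    // getters and setters omitted
}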
I have a question regarding Hibernate caching. What I have understood is that Hibernate caching is used to avoid frequently hitting the database; we therefore use Hibernate's caching mechanism to gain performance. If a new record is added to the database, and caching means we don't hit the database, how is the newly added record fetched? Does the cache still return the old records? Can someone explain how this works?
Hibernate caching and database consistency
Normally caching means that you save a rendered page so you don't have to compute it on every request, which saves performance. A good solution for that is Varnish with ESI. You can exclude parts of your page and replace them with ESI tags (e.g. <esi:include src="/views-fragment" />); Varnish fetches those URLs, assembles the page, and delivers the whole site. Varnish can also fetch fragments lazily: when a user requests the page they get the old version, while in the background Varnish loads a new one for the next request.
https://www.varnish-cache.org/trac/wiki/ESIfeatures
https://www.varnish-cache.org/docs/4.0/users-guide/esi.html
Some frameworks have plugins for Varnish that implement these ESI functions and automatically replace parts of your page.
I was thinking more of Redis or ElastiCache, so that the database is used as a second-layer cache (behind Redis or ElastiCache) when the number of active users/requests per second increases. – Pez Jun 17, 2015 at 7:46
The problem is that you have bottlenecks, in most cases your web server or database, so you should try to have as many requests as possible answered in front of them. – René Höhle Jun 17, 2015 at 10:05
I have a section on my site called Views where a user can see who has viewed their profile. Can this be cached, considering it kind of needs to be real-time, so that if the user checks this page or refreshes every 30 seconds they can see the new visitor? Another section I have is Messages, where users message each other. This also needs to be real-time; can it be cached? The other section I'd like to cache is the new-users section, where users can see newly registered users. Have you had experience with something similar, and how did you go about solving it? The purpose of this is to reduce the number of calls to the database. I want to look at this option and fine-tune everything before increasing my database's limits. Thank you.
How to cache a very dynamic website in PHP?
Add a Microsoft.VisualBasic.Devices assembly reference to your project; then you can use the following:
using Microsoft.VisualBasic.Devices; // for ComputerInfo

var info = new ComputerInfo();
var available = info.AvailablePhysicalMemory;
var total = info.TotalPhysicalMemory;
var cached = total - available;
Edit: The following code works for me. Note that the Available amount includes the Free amount and also most of the Cached amount.
using System;
using System.Management;             // add a reference to System.Management
using Microsoft.VisualBasic.Devices; // add a reference to Microsoft.VisualBasic

var info = new ComputerInfo();
// total amount of free physical memory in bytes (free + most of cached)
var available = info.AvailablePhysicalMemory;
// total amount of physical memory in bytes
var total = info.TotalPhysicalMemory;
var physicalMemoryInUse = total - available;

var wql = new ObjectQuery("SELECT * FROM Win32_OperatingSystem");
var searcher = new ManagementObjectSearcher(wql);
ulong freeBytes = 0;
foreach (ManagementObject result in searcher.Get())
{
    // WMI reports FreePhysicalMemory in kilobytes
    freeBytes = ulong.Parse(result["FreePhysicalMemory"].ToString()) * 1024;
}

var cachedBytes = total - physicalMemoryInUse - freeBytes;

Console.WriteLine("Available: " + available);
Console.WriteLine("Total: " + total);
Console.WriteLine("PhysicalMemoryInUse: " + physicalMemoryInUse);
Console.WriteLine("Free: " + freeBytes);
Console.WriteLine("Cached: " + cachedBytes);
I couldn't find a way to get the cached and free memory of a system using C#. Can anyone help?
How to find a system's cached and free memory using C#
I eventually solved this specific problem by piping my file list to another Python process, which called the cachedel command from the nocache package via the shell. This is not a pure Python solution, but it worked in my case.
import glob
import subprocess

filelist = glob.glob('/path/to/file/*.fileending')
for dat in filelist:
    # ask the kernel to drop this file from the page cache
    subprocess.call("cachedel " + dat, shell=True)
Nevertheless, if caching itself is the timing bottleneck, it's a better idea to disable caching for the Python process itself by running nocache python, with the nocache package installed.
I want to test and compare the speed of two different file-opening approaches over a network using multiprocessing. To really see whether the network is the bottleneck, I want to disable caching of the relevant data for the Python process, or find another method to force the Python process to get its data via the network and explicitly not from the cache. Clearing the whole cache is not an option, as I am working in a multi-user environment where caching itself is essential.
Disable Caching for Python
You probably need to take a different approach. Try creating a custom confirmation box to catch/cancel the refresh action: http://devzone.co.in/show-confirmation-box-page-refresh-page-unload-using-javascript/ See also "Execute function before refresh" (although that solution has a comment saying it doesn't work in Firefox).
Nice approach. Unfortunately I couldn't really get it working, but I'll keep trying. Or maybe I'll simply have to tell everyone not to use the fu##### IE ;) – Bautzi89 May 11, 2015 at 8:53
I am developing a web application in ASP.NET, and on some pages I want to prevent caching of the data using the following code:
protected override void OnInit(EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();
    Response.Cache.SetExpires(DateTime.MinValue);
    base.OnInit(e);
}
This works like it should, but now when the user tries to refresh the page after a postback (e.g. after saving the data), the browser informs the user that the action will be performed twice if he proceeds and asks whether to continue. If you cancel this in Chrome or Firefox, the page keeps its current state, which is exactly what I want, but in IE the page is invalidated. Is there any way to achieve the same behaviour in Internet Explorer?
How do I prevent caching in Internet Explorer?
You're going to have to build two different queries, one for local data and one for network data, and then figure out which one you want to display on screen. There are a few scenarios you have to account for: server-side deletion, addition and update, and client-side deletion, addition and update. Not all of these need to be handled, only those that make sense for your application. Keep in mind that when an object is pinned (and not saved to the server), it does not have an objectId, but it does have something called a localId (that's private API, but you can see it in the debugger). You can check for the existence of objectId to determine whether the object was created locally and has never been saved to the server.
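As a sketch of that objectId check (the "Message" class name is hypothetical; fromLocalDatastore() restricts the query to pinned objects):

import com.parse.FindCallback;
import com.parse.ParseException;
import com.parse.ParseObject;
import com.parse.ParseQuery;
import java.util.List;

ParseQuery<ParseObject> localQuery = ParseQuery.getQuery("Message");
localQuery.fromLocalDatastore(); // read pinned objects instead of hitting the network
localQuery.findInBackground(new FindCallback<ParseObject>() {
    @Override
    public void done(List<ParseObject> messages, ParseException e) {
        if (e != null) return;
        for (ParseObject message : messages) {
            // Objects created locally and never saved to the server
            // have no objectId yet.
            boolean neverSynced = (message.getObjectId() == null);
        }
    }
});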
I am using Parse for a chat application in Android. For storing the data I use both the server and the local datastore (Parse.enableLocalDatastore()), and that works fine. The problem is with fetching data: if the network is not available the data is returned from the local datastore, and if it is available it comes directly from Parse. How can I differentiate between the two? Should I use the query.fromLocalDatastore() method while querying the data or not?
how to synchronize between local database and Parse in android?
After updating Xcode, the images do get cached now.
I'm trying to use SDWebImage to cache images retrieved from the web. The images only ever get cached to memory. I've seen in the documentation that saving to disk is the default: "Everything will be handled for you, from async downloads to caching management. By default SDImageCache will lookup the disk cache if an image can't be found in the memory cache. You can prevent this from happening by calling the alternative method imageFromMemoryCacheForKey:." However, every time I relaunch my application, SDWebImage downloads the file again from the web instead of retrieving it from disk. I receive an SDImageCacheType of SDImageCacheTypeNone when my application reloads, and SDImageCacheTypeMemory when I keep scrolling without exiting the application. Any ideas? I've tried caching manually, but the image always appears black; not sure what went wrong. Any help is appreciated! I'm using this method to download the images:
- (void)sd_setImageWithURL:(NSURL *)url placeholderImage:(UIImage *)placeholder options:(SDWebImageOptions)options progress:(SDWebImageDownloaderProgressBlock)progressBlock completed:(SDWebImageCompletionBlock)completedBlock {
SDWebImage not caching to disk
If DMA (the preferred method of copying data between I/O devices and memory for large transfers) is cache coherent (i.e., the DMA agent checks the cache), then hardware will ensure that any dirty cache lines are read when paging out and any old cache lines are invalidated (or overwritten; some systems support I/O devices storing to cache). For non-coherent DMA, the OS would have to flush the page being paged out to memory. Whether DMA is coherent is ISA- and implementation-dependent. For example, this blog post mentions that x86 guarantees coherence, but Itanium and ARM do not. (Note that an implementation of an ISA can provide stronger guarantees.) Whether the cache is virtually indexed or not does not change the operations required, because the OS would flush based on the virtual address, and aliasing issues are already handled in hardware or by software (e.g., by page coloring).
When a page is swapped out to disk, some of its content might still be present in a cache. (I believe this would be a very rare scenario, because if the page has not been accessed for a long time, the cache lines containing its content would most likely have been evicted by then.) What happens to these cache lines when the page is swapped out? Do they need to be invalidated immediately? Does it make any difference whether a line is dirty or clean? Who controls this process: the OS, the hardware, or both? To explain why this needs to be taken care of, let's assume there are processes A and B, and A is accessing physical page P1 at starting physical address X. Some of the contents of P1 must have been cached in different levels of caches. Now page P1 is swapped out and page P2 of process B is brought in at the same address X. If the cache lines belonging to page P1 are not invalidated, process B might get cache hits on lines which originally belonged to process A's page P1 (because the physical tags would match). Is this scenario valid? Does it make any difference whether the cache is VIPT or PIPT? It would be great if you could cite how this is handled in modern OSes/processors.
What happens to the cache-lines for a page when the page is swapped out to the disk?
You should use a library for loading and caching images, because handling bitmaps correctly by hand is error-prone. Libraries to consider:
Picasso
Universal Image Loader
Volley
Each library has its own strengths, so pick the one that fulfills your goal. You can keep images in a cache; see the link below:
http://www.androidhive.info/2014/05/android-working-with-volley-library-1/
I am using the Volley library to fetch data and images from the network; please provide a reference for storing images with Volley. I have loaded images using Volley's NetworkImageView. – Arth Tilva Mar 10, 2015 at 7:17
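If you go with Volley, a rough sketch of wiring an in-memory bitmap cache into its ImageLoader (which backs NetworkImageView) could look like this; context and the cache size of 20 entries are placeholders:

import android.graphics.Bitmap;
import android.util.LruCache;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.ImageLoader;
import com.android.volley.toolbox.Volley;

RequestQueue queue = Volley.newRequestQueue(context);

ImageLoader imageLoader = new ImageLoader(queue, new ImageLoader.ImageCache() {
    // In-memory cache for decoded bitmaps; Volley's HTTP cache
    // additionally keeps the raw responses on disk.
    private final LruCache<String, Bitmap> cache = new LruCache<>(20);

    @Override
    public Bitmap getBitmap(String url) {
        return cache.get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        cache.put(url, bitmap);
    }
});

// In the layout code, a NetworkImageView is then bound to the loader:
// networkImageView.setImageUrl(imageUrl, imageLoader);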
I want to store some of the data from a web service when it is loaded for the first time in Android. I am storing data in shared preferences and a database. My data also includes images. Please suggest options for storing images other than internal storage, if possible; which would be efficient and lightweight? Facebook, for example, displays some of its images even when the app is offline.
Which are the different option to store image in android app from webservice?
1- Have you tried the -F option, used to specify the average sampling rate in samples per second (https://perf.wiki.kernel.org/index.php/Tutorial#Sampling_with_perf_record)?
2- According to the wiki (link above), "The perf tool defaults to the average rate. It is set to 1000Hz, or 1000 samples/sec."
3- I think perf mem provides all the information you need.
Thanks Manuel! I tried the -c option of perf record [to control the sampling period]. However, I didn't find a difference in the number of samples collected even as I varied the -c parameter a lot. For example: (i) perf record -e instructions:u -c 2000 ./a.out # To sample every 2000 ins; perf report -D -i perf.data | fgrep RECORD_SAMPLE | wc -l # No. of samples collected = 17506 (ii) perf record -e instructions:u -c 10000 ./a.out # To sample every 10000 ins; perf report -D -i perf.data | fgrep RECORD_SAMPLE | wc -l # No. of samples collected = 17401 – jithinpt Mar 4, 2015 at 19:26
jithinpt, do check the output of dmesg and sysctl -a | grep perf. There is some auto-tuning in perf: it will not allow you to generate too many profiling events (sometimes it is limited to just several kHz, meaning that on a 3GHz CPU you can't profile more often than around every 1 million cycles or 1 million instructions), and profiling should use no more than 25% of CPU by default. The superuser root can change the limits. – osgx Mar 5, 2015 at 18:54
I've been trying to use the Linux perf tool to sample the memory accesses in my program. Specifically, I'm using the perf mem command to instrument the loads in the program: perf mem -t load rec myprogram; perf mem -t load rep. However, I would like to increase the sampling frequency and collect more samples, but I did not find any option for the perf mem command that controls the sampling frequency. Questions: Is there an option that would let me control the sampling frequency when running perf mem? What's the default sampling frequency? Is there a better option than perf mem to instrument the memory accesses in the program? I'm specifically looking for the following bits of data for each sampled load operation: (i) the target data address, and (ii) whether the load resulted in an L1/L2/LLC cache hit or not.
Using linux perf and PEBS to sample memory accesses in a program
Suppose someBean is an autowired bean in your class. You can reach it through the SpEL root object target, which refers to the object the method is being invoked on. Try this:
@Cacheable(condition="target.someBean.isSomeBoolean()")
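A fuller sketch of what that looks like in context; CacheSettings and its isCachingEnabled() method are invented names standing in for whatever bean knows whether global caching is on:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical settings bean.
interface CacheSettings {
    boolean isCachingEnabled();
}

@Service
public class ReportService {

    @Autowired
    private CacheSettings cacheSettings;

    // SpEL reads this via the 'target' root object, so a public
    // getter is required.
    public CacheSettings getCacheSettings() {
        return cacheSettings;
    }

    // 'target' refers to the object the cached method is invoked on,
    // so no BeanResolver needs to be registered.
    @Cacheable(value = "reports", condition = "target.cacheSettings.isCachingEnabled()")
    public Object buildReport(String id) {
        return expensiveBuild(id); // stand-in for the real work
    }

    private Object expensiveBuild(String id) {
        return id; // placeholder
    }
}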
I've got a method that I want to conditionally cache based on the result of a method call on another bean (which says whether or not global caching is turned on). Using SpEL, I've tried something along the lines of @Cacheable(condition="@someBean.isSomeBoolean()"), which requires a BeanResolver, which I don't have configured. I'm OK with creating one of these programmatically, but how do I configure the class containing my cacheable methods to reference it? The error I am currently getting is: "No bean resolver registered in the context to resolve access to bean". There's a similar post here talking about keys, not conditions. Has anyone successfully managed to reference other beans in caching annotations?
Spring caching - How to reference a bean in SPEL to enable conditional caching via @Cacheable
Actually I gave a talk on this just today at the FOSDEM conference in Brussels. See the slides here: http://www.slideshare.net/cruftex/cache2k-java-caching-turbo-charged-fosdem-2015
Basically you can use Google Guava; however, since Guava is a cache that uses LRU, a synchronized block is still needed. Something I am exploring in cache2k is an advanced eviction algorithm that needs no list manipulation on cache access, so no locks whatsoever are needed for reads. cache2k is on Maven Central; add cache2k-api and cache2k-core as dependencies and initialize the cache with:
cache = CacheBuilder.newCache(String.class, Object.class)
    .implementation(ClockProPlusCache.class)
    .build();
If you have only cache hits, cache2k is about 5x faster than Guava and 10x faster than EHCache. For your usage pattern, e.g. with the Currency type, you can run the cache in a read-through configuration and add a cache source which is responsible for constructing the Currency instances. So you don't necessarily need a cache at all: for the currency example the space of currency instances is limited, so you could do without one. If you want to do the same with a potentially unlimited space, the cache is the more universal solution, since you have to limit resource consumption. One example I explored is using this for formatted dates. See: https://github.com/headissue/cache2k-benchmark/blob/master/zoo/src/test/java/org/cache2k/benchmark/DateFormattingBenchmark.java
For general questions on cache2k, feel free to post them on Stack Overflow.
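If you want to stay with plain JDK primitives instead of a library, the read-write-lock pattern from the tutorial linked in the question below gives non-blocking concurrent reads; a minimal sketch (get() calls run in parallel and wait only while an add() holds the write lock):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwLockCache {
    private static final Map<String, Object> cache = new HashMap<>();
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public static Object get(String id) {
        lock.readLock().lock();
        try {
            return cache.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static Object add(String id, Object element) {
        lock.writeLock().lock();
        try {
            // keep the existing instance if one was added first
            return cache.computeIfAbsent(id, k -> element);
        } finally {
            lock.writeLock().unlock();
        }
    }
}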
Suppose we want to implement a cache for a particular entity:
class Cache {
    private static Map<String, Object> cache = new HashMap<>();

    public static Object get(String id) {
        assert notNullOrEmpty(id);
        return cache.get(id);
    }

    public static Object add(String id, Object element) {
        assert notNullOrEmpty(id) && notNull(element);
        if(cache.containsKey(id)) return cache.get(id);
        cache.put(id, element);
        return element;
    }
}
Now we want to ensure this is thread-safe and, most importantly, optimal when it comes to data access and performance (we don't want to block when it's not necessary). For example, if we mark both methods as synchronized, we will uselessly block two concurrent get() calls that could perfectly well run without blocking. So we want get() to block only if add() is in progress, and add() to block only if at least one get() or another add() is in progress. Multiple concurrent get() executions should not block each other... How do we do this?
UPDATE
In fact this is not a cache, just a use case I came up with to describe the problem; the actual purpose is to create a singleton instance store... For example, there is a Currency type which is only instantiated through its builder and is immutable. The builder, after verifying that the parameters passed in are valid, checks this so-called global cache in a static context to see if an instance has already been created... well, you got me... This is not an enum use case, because the system will dynamically add new Currency, Market or even Exchange instances, which should all be loosely coupled and instantiated only once... (also to prevent heavy GC). So to clarify the question: think of the general problem of concurrency, not the particular example. I've found this link quite helpful: http://tutorials.jenkov.com/java-concurrency/read-write-locks.html - I guess there are some lock types already in the JDK for this purpose, but I'm not sure yet.
least blocking java cache
The grunt-cache-breaker library (version <= 2.0.1) does not support file renaming. It only updates references to files. I'd suggest you use the grunt-cache-bust library instead.
I'm using grunt-cache-breaker to add an md5 hash to my filenames. When I run grunt, it runs like normal, with no error messages. But while the filename inside the markup has the added md5 hash, the actual file does not. Here's what the cache-breaker task looks like in my Gruntfile.js:
cachebreaker: {
    dev: {
        options: {
            match: ['idm-ui-vendor.min.js'],
            replacement: 'md5',
            src: {
                path: 'tmp/dev/common/scripts/idm-ui-vendor.min.js'
            }
        },
        files: {
            src: ['tmp/dev/login/views/view.jsp']
        }
    }
}
grunt-cache-breaker not renaming revved files
I have the exact same problem. Did you find the cause of all this? As I was trying to solve the issue I saw this in my response headers:
Age 0
Connection keep-alive
Date Tue, 21 Apr 2015 08:47:21 GMT
Server ATS/3.2.4
All the files with a 304 Not Modified status had Apache Traffic Server (ATS) (from my LAN) in front of them, which led me to think it was the one causing all this behaviour; I forced non-caching as explained here and I have no more problems of this kind.
I'm using CakePHP 2.6. I'm having a problem when I redirect back to the same view from which the request was made: the view seems to be cached, so whatever changes were made during the request are not shown until the page is refreshed again. This means the user can't see the changes they just made, and flash messages are shown on the following view (which is bad). Why is this happening? Things I've checked: my PHP environment has caching turned off; my CakePHP configuration uses the defaults (see below), and caching should be disabled because I'm in debug mode: Configure::write('debug', 2); I'm testing in multiple browsers, with and without browser caching enabled.
Configure::write('Session', array(
    'defaults' => 'php'
));
Representative example:
//Inside ListingsController...
$this->Listing->id = $id;
if ($this->Listing->save($listing)) {
    $this->Flash->success(__('"%s" is now active.', $listing['Listing']['title']));
} else {
    $this->Flash->error(__('Problem activating'));
}
//this is the original view...
$this->redirect( array('controller'=>'listings', 'action'=>'mylistings') );
CakePHP caching issue when redirecting back to same page
You want to store or cache the input for 15 frames? I can tell you how to gather input; you can cache it from there if you'd like by storing it in a global KeyCode[] array. This code will print the key being pressed to your console:
void OnGUI()
{
    Event e = Event.current;
    if (e.isKey) {
        string key = e.keyCode.ToString();
        Debug.Log(key);
    }
}
I've just started with Unity, so please excuse any lack of knowledge. I began programming in Microsoft's XNA environment and have now switched to Unity, but I'm having trouble. XNA had a "KeyboardState" feature that checked which buttons/keys were being pressed. I've heard Unity doesn't have the same feature, so I was wondering how I can store/cache input for the past 15 frames. I've heard Event.KeyboardEvent and KeyCode might help, but I'm lost. Can anyone please help?
Unity C#: Caching keyboard/controller state?
The issue is that the entire HTML page is being cached, including the script tags which contain your evaluated EL. If you serve the page with the following meta tags, the browser shouldn't cache it:
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
For more details about how these directives disable caching, refer to this answer: Using <meta> tags to turn off caching in all browsers?
Problem: I am calling a JS function inside the HTML body section; the function's parameters come from evaluating EL expressions inside the call. E.g.:
<script type="text/javascript">
jQuery(window).load(function() {
    loadImage("${expression_language_var_1}", "${expression_language_var2}");
});
</script>
But it seems that sometimes both parameters are cached and I receive stale information. Question: Are the script tags inside the HTML structure cached in the same way as the external JavaScript files included in the header? Best regards,
Javascript: Is Javascript cached if invoked from the html's body?
I think it will often be the case that you have to sacrifice performance in order to use polymorphism, but in this case perhaps you can maintain separate vectors of Bar1 and Bar2. You could consider these "pools" of Bar1 and Bar2 objects, and then fill the vector of Foo object pointers with pointers into those pools:
template<typename Bar>
void populateFoos(std::vector<Foo*>& foos, std::vector<Bar>& bars) {
    for (auto& bar : bars)
        foos.emplace_back(&bar);
}

std::vector<Bar1> bar1s;
std::vector<Bar2> bar2s;
std::vector<Foo*> foos;

// Populate Bar1s
bar1s.emplace_back();
bar1s.emplace_back();

// Populate Bar2s
bar2s.emplace_back();

// Populate 'foos' with Bar1 and Bar2 objects
populateFoos(foos, bar1s);
populateFoos(foos, bar2s);

// Iterate through foos
for(auto foo : foos)
    foo->doSomething();
Live demo
You will need to be careful that you don't invalidate the Foo object pointers by reallocating your Bar1 and Bar2 pools.
(The title might be "not so optimal".) Suppose there's code like this:
class Foo {/*stuff*/};
class Bar1 : Foo {/*stuff*/};
class Bar2 : Foo {/*stuff*/};

std::vector<Foo*> foos;
// Populate 'foos' with Foo, Bar1 and Bar2 objects

// Iterate through foos
for(Foo* foo : foos)
    foo->doSomething();
Basically, foos is a vector of Foo object pointers. However, looping through this vector is likely to cause cache misses. A theoretical remedy would be to store actual objects instead of pointers, but this isn't allowed in C++ (no polymorphism with arrays). That said: how can one improve data locality (and minimize cache misses) when lots of polymorphic objects are required? I'm interested in this because everyone tells me that cache hits/misses are of great importance in performance-critical software, and thus one should avoid the use of pointers as in the code sample above. However, this would essentially mean throwing away polymorphism.
How to achieve data locality with polymorphism?
I don't know why -rdynamic would result in a speedup. But regarding your second question, check Agner Fog's guide "Optimizing software in C++": http://www.agner.org/optimize/optimizing_cpp.pdf. Have a look at section 9.2, where he talks about the critical stride; it might be applicable in this situation.
I have a simple array operation in a for loop, which is performed for different sizes (from 16 to really big) of an array of doubles. I run it several times:
for(int i = 1; i < n-1; i++){
    target[i] = (source[i-1]+source[i]+source[i+1])*0.5;
}
I compiled it with "-O3 -march=native" and measured the speed. I then, for reasons not relevant here, tried adding "-rdynamic", and got a significant speedup, as you can see in the plot ("cmake" in the legend refers to the "-rdynamic" addition). This only happens on an i7-4790 CPU; I couldn't reproduce it at all on an AMD Phenom II X6 1045T. I certainly don't understand why -rdynamic would produce a speedup that big. (GLOPS = number of updates of the array cells per second, in billions.) Why do I gain a speedup? Why not on the AMD CPU? Note that these measurements are the mean of ten runs for each case. Another interesting observation is that, at least in the beginning while the array fits in the L1 cache, I see these drops in performance, and interestingly they happen when the size of my array is a power of 2. I guess this has something to do with the L2 cache, but I absolutely don't know what or why; maybe cache conflicts, or alignment? EDIT: I have now plotted properly with just: g++ -O3 -march=native programm.cpp -rdynamic. The curve labelled "cmake" is the same as adding "-rdynamic". EDIT 2: Removed the cmake narrative from the question entirely. [Peter]
Why is an array whose size is a power of 2 slower? Why do I gain performance with -rdynamic?
"Is there any way around creating individual caches based on current_user.id and prevent having gobs of versions of (almost) identical cached content?" Instead of including the user's ID in the cache key, you could include the user's permissions. This still stores duplicate copies of the content, but it scales with your permission model rather than with the number of users. So instead of the typical:
<% cache("posts/all-#{Post.maximum(:updated_at).try(:to_i)}") do %>
...
<% end %>
you can create a cache key like the following (assuming current_user returns the authenticated user and you only care about edit vs. read permission):
<% cache("posts/all-#{Post.maximum(:updated_at).try(:to_i)}-#{current_user.can?(:edit, Post) ? :edit : :read}") do %>
...
<% end %>
Note that the cache key generation should probably be extracted to a separate class/helper method.
I've got a view that uses Russian doll caching, where the whole collection of items is cached and each item in the collection is cached individually within that cache. However, each item in the collection should show edit/delete links based on the current user's permissions granted through CanCan. So User A should only see the edit/delete links next to her own posts, not next to User B's posts. Well, whenever a post is created by User A, it is cached with the appropriate edit/delete links, since they should be visible based on her permissions. But when User B views the collection, he is served User A's cached post, along with the edit/delete links that he shouldn't see. Granted, CanCan prevents the edit/delete actions from actually being performed, but the links are still present. Is there any way around creating individual caches based on current_user.id, to prevent having gobs of versions of (almost) identical cached content?
Russian doll caching and permission-based links in view fragment
OK, so I had in fact read the answer to those questions, not in the Dive Into HTML5 article but in the links it referenced: this linked article describes how you can check in JS whether you're currently online or working from the cached page. To utilize this fully you should have two versions of your API code (what you are using to GET server resources): one for online use, with HTTP GET/POST, which then stores data locally using IndexedDB; and one for offline use, which goes directly to your IndexedDB or falls back to "resource not available". In the offline version you should probably also check whether the user is back online and switch to the other method. Once the user has visited a page with a cache manifest, and the browser knows how to deal with it, the next time they try to access it while offline the browser does it all for you: https://html.spec.whatwg.org/multipage/browsers.html#offline, http://googlecode.blogspot.ie/2009/05/gmail-for-mobile-html5-series-part-3.html
I have to build a web app that is fully usable in remote/field/offline environments. It looks like HTML5 supports an "offline mode" with a pretty sophisticated caching mechanism. A few questions: Beyond caching JS/CSS/image resources, I also need a way to stub/mock/re-route async/Ajax calls made after the pages "download". For instance, in "online mode", if the user clicks a button, this would normally send an HTTP POST to the server. But in "offline mode", I need it to somehow store a stubbed/mocked version of that HTTP POST locally; then, when the user resumes "online mode", the app would be intelligent enough to query that store and fire off the request. Is this possible? If so, what is this mechanism called, and where is it documented? And how are lay end users expected to find offline HTML5 apps? They will know to go to http://www.myapp.example.com in "online mode", but while offline would they normally need to go to some browser URL beginning with file:///some/path/on/their/system/to/the/cached/offline/app? Does HTML5 have anything to make this more user-friendly? For instance, in offline mode could a user still go to the normal online URL (myapp.example.com) and have the browser automatically detect the network outage and serve back whatever it has in its cache? Or something like that?
Providing users access to offline HTML5 web applications
phpFastCache can't tell whether your data has changed or not; you must do something after the specific data changes in your database. First, the cache code on your home page should look like this:
$lastnews = phpFastCache::get('index_lastnews');
$bestnews = phpFastCache::get('index_bestnews');
$worldnews = phpFastCache::get('index_worldnews');

if($lastnews == null) {
    $lastnews = YOUR DB QUERIES || GET_DATA_FUNCTION;
    phpFastCache::set('index_lastnews',$lastnews,600);
}
if($bestnews == null) {
    $bestnews = YOUR DB QUERIES || GET_DATA_FUNCTION;
    phpFastCache::set('index_bestnews',$bestnews,600);
}
.
.
.
Then in your admin page, after the specific data changes (after a database insert or update), you can replace the old cache in one of two ways:
1) Delete the cache (after deletion, the cache is automatically rebuilt on the first visit):
phpFastCache::delete('index_lastnews');
2) Update the cache:
$lastnews = YOUR DB QUERIES || GET_DATA_FUNCTION;
phpFastCache::set("index_lastnews",$lastnews,600);
I found the phpFastCache class for caching MySQL results. Its details say it supports WinCache, Memcache, Files, XCache and APC Cache, and:
PHP Caching Class For Database: Your website has 10,000 visitors who are online, and your dynamic page has to send 10,000 identical queries to the database on every page load. With phpFastCache, your page sends only 1 query to the DB and uses the cache to serve the 9,999 other visitors.
Sample code:
<?php
// In your config file
include("php_fast_cache.php");

// This is Optional Config only. You can skip these lines.
// phpFastCache support "apc", "memcache", "memcached", "wincache" ,"files", "pdo", "mpdo" and "xcache"
// You don't need to change your code when you change your caching system. Or simple keep it auto
phpFastCache::$storage = "auto";
// End Optionals

// In your Class, Functions, PHP Pages
// try to get from Cache first.
$products = phpFastCache::get("products_page");

if($products == null) {
    $products = YOUR DB QUERIES || GET_PRODUCTS_FUNCTION;
    // set products in to cache in 600 seconds = 10 minutes
    phpFastCache::set("products_page",$products,600);
}

foreach($products as $product) {
    // Output Your Contents HERE
}
?>
Now, on my website's index page I have blocks showing the latest news, best news, world news, and so on. To cache my index, must I cache the MySQL result for each block (latest news, best news, world news, ...) using phpFastCache, and in the admin page remove all the caches whenever I edit existing news or add new news? Is this the right way? What's the best way to cache the MySQL results of my index page using phpFastCache (any method)?
php cache dynamic index page
I just set up Redis on Windows: one master and one slave, one sentinel per server, with the servers in a cluster environment behind the MS Network Load Balancer. So basically what I did was:
Server A (local IP: 1.10.10.1, cluster IP: 1.10.11.1):
Run Redis as master
Run a Redis sentinel watching Server A
Server B (local IP: 1.10.10.2, cluster IP: 1.10.11.1):
Run Redis as slave of Server A
Run a Redis sentinel watching Server B
Now the first problem I had was that when I connected to the cluster address I didn't know which server was responding, and I needed to connect to the master, because the slave is read-only (it exists only for failover). Of course HAProxy and Twemproxy exist for exactly that purpose, but there is no Windows implementation, so I decided to create a proxy myself: https://bitbucket.org/israelito3000/redis
So basically I installed the redis-proxy on both servers, and now when I connect from my client library the proxy always forwards the packets to the master, like a tunnel. When the master Redis fails, the sentinels automatically promote the slave to master, and the proxy redirects the traffic to the new master, so from the client side I don't need to do anything. It is important to note that I could not access the servers directly (only through the cluster IP).
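For sentinel-aware clients the proxy step can be unnecessary. As an illustration of the idea (shown here with the Java Jedis client, since it has built-in sentinel support; the host names and the "mymaster" name come from your sentinel configuration, not from this answer):

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

Set<String> sentinels = new HashSet<>();
sentinels.add("my.RedisServer.com:26379");
sentinels.add("mybackup.RedisServer.com:26379");

// The pool asks the sentinels who the current master is and
// reconnects there after a failover.
JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);

try (Jedis jedis = pool.getResource()) {
    jedis.set("greeting", "hello"); // always goes to the current master
}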
I am trying to set up a high-availability configuration so that if the server hosting my main Redis cache goes down, a different master is chosen, but I am a little confused after reading through all the documentation about sentinels. For instance, if I have a URL that I am pointing my Redis client to, say http://my.RedisServer.com:6379, how does the sentinel help fail over to another server, say http://mybackup.RedisServer.com:6379? I am using the ServiceStack.Redis client for .NET and have my Redis installation on a Windows server, but I am thinking that in order to get high availability I have to switch to Linux and use a Twemproxy setup or something. I am guessing I can't just store http://my.RedisServer.com:6379 in my web.config and have it somehow work, right? I imagine somewhere there has to be a DNS entry that maps to the 2+ IPs and is load-balanced like any H.A. web application... I think I saw something about a PooledRedisClientManager that might be my answer? Thanks for the clarification.
How does Redis Cache work with High Availability and Sentinel?
Nope, you can't use wildcards in the CACHE section. The example above will simply attempt to download the file named * in the folder s/ on cdn.example.com on page load.
I want to use the appcache for offline viewing of my app, and I want to use it like this:
CACHE MANIFEST
CACHE:
http://cdn.example.com/s/*
NETWORK:
*
Is there any way to make the browser cache all the files in the s folder? If not, is there any way I can specify that all the files of a particular folder or URL prefix should be included in the cache?
use wildcard in cache area in appcache
Unless your application has only one post, I really don't think you would want to cache the first call - fragment caching will cache a given post, but it still needs to know which post is being accessed. If you did cache this, every time you loaded the post#show page, regardless of which post you asked for, you would see one post: the post that was cached.
I get the point that we need to know which post we are trying to access to get the right cache. But if I do model caching (Rails.cache.fetch() {Myquery}) instead of fragment caching, I can check whether the cache exists and trigger the query only if it doesn't. That's more complexity than fragment caching with auto expiration, though. Every post I read about fragment caching uses this kind of example with cache @the-object-you're-caching but doesn't talk about the query. Is your point that the query is unavoidable, but caching is still useful because of the time saved on rendering? – coding addicted Aug 27, 2014 at 15:24
It probably depends. Your db call is only 1.2 ms; your view takes 4.8 ms. I wouldn't bother with caching for something like this, but it could be you're using a pared-down example. The database call isn't going to change (unless more is required from it) - the view could become more complex, in which case you might find that the time you save by caching becomes worth your while. – dax Aug 27, 2014 at 15:51
I'm playing with fragment caching; I have read the guides and watched the RailsCast. I'm trying to apply fragment caching to a basic show action. Controller:
class PostsController < ApplicationController
  before_action :set_post, only: [:show]

  def show
  end

  private
    # Use callbacks to share common setup or constraints between actions.
    def set_post
      @post = Post.friendly.find(params[:id])
      # @post = Post.find(params[:id])
    end
end
View:
<% cache @post do %>
  <h1><%= @post.title %></h1>
  <%= @post.content %>
<% end %>
Problem: although the fragment is built and read (see the logs below), the database is still hit. Is this normal behaviour? I suspect the before_action filter triggers the query before the cache is read. I also suspected the friendly_id system, but the query happens with the classic find too. How should I cache this to avoid the query? Logs:
Started GET "/articles/article-3" for 127.0.0.1 at 2014-08-27 10:05:14 -0400
Processing by PostsController#show as HTML
  Parameters: {"id"=>"article-3"}
  Post Load (1.2ms) SELECT "posts".* FROM "posts" WHERE "posts"."slug" = 'article-3' ORDER BY "posts"."id" ASC LIMIT 1
  Cache digest for app/views/posts/show.html.erb: 18a5c19e6efef2fd1ac4711102048e1c
Read fragment views/posts/3-20140730194235000000000/18a5c19e6efef2fd1ac4711102048e1c (0.5ms)
  Rendered posts/show.html.erb within layouts/application (4.8ms)
Rails avoiding queries with fragment caching on a basic show action
Why do you want to use Portable at all? (Identified)DataSerializable is less of a pain. Portable does not deal with null values, so you need to convert to a string and then e.g. write 'null' for a null value or '1234' for a number. When reading, you read the string back; if it contains 'null' you set the field to null, otherwise you e.g. do Long.parseLong. So unless you have a very good reason, I would stay far away from it.
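If you do stay with Portable, a sketch of that string-encoding workaround for the TestPojo from the question below, using the literal "null" as the marker (the id field is assumed to be non-null here):

import java.io.IOException;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

@Override
public void writePortable(PortableWriter writer) throws IOException {
    writer.writeLong("id", id); // assumed non-null
    writer.writeUTF("description", description);
    // encode the nullable Long as a string
    writer.writeUTF("value", value == null ? "null" : Long.toString(value));
}

@Override
public void readPortable(PortableReader reader) throws IOException {
    id = reader.readLong("id");
    description = reader.readUTF("description");
    String raw = reader.readUTF("value");
    value = "null".equals(raw) ? null : Long.parseLong(raw);
}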
I'm using Hazelcast to store some data in an IMap, but currently I'm facing a "problem" with null properties in the POJOs I'm storing in the cache. Imagine I have the following POJO:
public class TestPojo implements Portable {
    public static final int CLASS_ID = 1;

    private Long id;
    private String description;
    private Long value;

    @Override
    public int getFactoryId() {
        return 1;
    }

    @Override
    public int getClassId() {
        return CLASS_ID;
    }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeLong("id", id);
        writer.writeUTF("description", description);
        writer.writeLong("value", value);
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        id = reader.readLong("id");
        description = reader.readUTF("description");
        value = reader.readLong("value");
    }
}
The problem is that serializing an instance whose 'value' property is null throws a NullPointerException, because the writeLong method takes a primitive long:
[3.3-EA2] Exception while load all task: com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.NullPointerException
My question is: what is the right way to handle these null values? I'm using Hazelcast 3.3-EA2. Thank you.
Hazelcast portable serialization - handling null properties
In case anyone else runs into the same problem: I ended up adding an HTTP handler and setting the "Cache-Control" and "Expires" response headers there.
public void ProcessRequest(HttpContext context)
{
    var partial = context.Request.FilePath;
    var filepath = context.Server.MapPath("~/" + partial);
    context.Response.AddHeader("Content-Disposition", "filename=\"" + Path.GetFileName(filepath) + "\"");
    // To make sure partial html pages are not cached by browser(s)
    context.Response.AddHeader("Cache-Control", "No-Cache");
    context.Response.AddHeader("Expires", "0");
    context.Response.ContentType = MimeTypesHelper.GetMimeType(filepath);
    context.Response.WriteFile(filepath);
}
The problem I am having is that partial files (*.html) are getting cached by the browser. While developing it's not a big problem, but once I deploy the application, clients see the old page(s) until they clear their cache or hit Ctrl+F5. I have tried specifying meta tags (CACHE-CONTROL, EXPIRES) but still see those pages being served from cache in Chrome's developer tools (maybe I am missing something here?). I was going to try adding a random number to the URL, like <div ng-include src="'views/test.html?i=1000'"></div>, but came across https://groups.google.com/forum/#!topic/angular/9gORRowzP2M, where James Cook rightly states that this would fill the cache with partials over and over. I read somewhere that it's better to set the cache headers on the server, but I don't know how to do that. I was thinking of somehow doing it in an HTTP interceptor, maybe by adding headers to the request or response: https://gist.github.com/gnomeontherun/5678505 Any ideas how to do that? Or whether it's a good or bad idea? Or any other way to prevent partial pages from being cached by the browser?
Prevent html pages from browser caching
When you decide where to cache the results of complex queries, you should consider throughput as well as latency. If you put them in the database, you get a simpler solution, although it is unlikely to handle as many requests per second as caching the data in memcached (or some other NoSQL store) instead.
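As a sketch of the trade-off, this is the cache-aside lookup a memcached client performs (shown with the spymemcached Java client; fetchFromDatabase() is a stand-in for the two-column select):

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// The constructor throws IOException; handle or declare it in real code.
MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));

String key = "user:42:profile";
Object value = client.get(key); // single round trip to an in-memory store
if (value == null) {
    value = fetchFromDatabase(key); // slower path: SQL query
    client.set(key, 300, value);    // cache for 300 seconds
}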
Which is faster: a two-column select from a traditional db, or a query to memcached? If the db query is roughly as fast, why bother adding another layer to your stack (assuming you don't care about expiring entries)? Wouldn't it be easier to add a two-column table (key varchar, value text) which can be used for all caching purposes?
memcached vs a db based key value table?