Response (string, 8-2k) | Instruction (string, 18-2k) | Prompt (string, 14-160)
---|---|---|
php artisan cache:clear - Flush the application cache
php artisan config:cache - Create a cache file for faster configuration loading.
This is for the configuration cache. The command clears the existing configuration cache before creating a new one.
php artisan config:clear - Remove the configuration cache file
|
In Laravel I have noticed there are two ways to clear the cache:
php artisan cache:clear
and
php artisan config:cache
However, I realized only the second one works properly when changing the localization, adding the laravel/passport package, etc.
What is the difference between them?
|
Difference between "php artisan config:cache" and "php artisan cache:clear" in Laravel
|
Try:
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
This attribute, placed on a controller class, disables caching. Since I don't need caching in my application, I placed it on my BaseController class:
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public abstract class BaseController : Controller
{
Here is a nice description of OutputCacheAttribute: Improving Performance with Output Caching
You can place it on an action too.
|
I'm having an issue with a page in Internet Explorer.
I have an ajax call that loads a form. In other browsers, when I click the link the request reaches the controller and the data loads correctly, but in IE, once it has been loaded, it always brings back the same old results without reaching the controller.
|
Internet Explorer caching ASP.NET MVC ajax results
|
There are many ways to achieve this; however, probably the easiest way is to use the built-in methods for writing and reading Python pickles. You can use pandas.DataFrame.to_pickle to store the DataFrame to disk and pandas.read_pickle to read the stored DataFrame from disk.
An example for a pandas.DataFrame:
# Store your DataFrame
df.to_pickle('cached_dataframe.pkl') # will be stored in current directory
# Read your DataFrame
df = pandas.read_pickle('cached_dataframe.pkl') # read from current directory
The same methods also work for pandas.Series:
# Store your Series
series.to_pickle('cached_series.pkl') # will be stored in current directory
# Read your Series
series = pandas.read_pickle('cached_series.pkl') # read from current directory
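The question also asks about folding newly downloaded data into a previously persisted series. pandas has no built-in helper for that, but a small wrapper along these lines could do it (the function name and cache path are illustrative, not pandas API):

```python
import os

import pandas as pd

def get_series_cached(fetch, cache_path='cached_series.pkl'):
    """Fetch a Series and merge it with any previously pickled version.

    `fetch` is a callable that downloads the fresh data; `cache_path`
    is where the pickle lives on disk.
    """
    fresh = fetch()
    if os.path.exists(cache_path):
        cached = pd.read_pickle(cache_path)
        # keep fresh values where present, fill the rest from the old cache
        fresh = fresh.combine_first(cached)
    fresh.to_pickle(cache_path)
    return fresh
```

On the second and later calls the persisted series is extended rather than overwritten, which covers the "integrate new data for the same source" case.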
|
Is there an implementation for Python pandas that caches the data on disk, so I can avoid reproducing it every time?
In particular, is there a caching method for get_yahoo_data for financial data?
A very big plus would be:
very few lines of code to write
the possibility to integrate the persisted series when new data is downloaded for the same source
|
Python pandas persistent cache
|
It looks like the cache-manager does all the "check it exists, if not run the lambda then store". If so, the only way to make that async is to have a GetAsync method that returns a Task<Store> rather than a Store, i.e.
public virtual Task<Store> GetStoreByUsernameAsync(string username)
{
return _cacheManager.GetAsync(string.Format("Cache_Key_{0}", username), () =>
{
return _storeRepository.GetSingleAsync(x => x.UserName == username);
});
}
Note that this doesn't need to be marked async as we aren't using await. The cache-manager would then do something like:
public async Task<Store> GetAsync(string key, Func<Task<Store>> func)
{
var val = cache.Get(key);
if(val == null)
{
val = await func().ConfigureAwait(false);
cache.Set(key, val);
}
return val;
}
|
My service layer is caching a lot of DB requests to memcached; does this make it impossible to use async/await? For example, how could I await this?
public virtual Store GetStoreByUsername(string username)
{
return _cacheManager.Get(string.Format("Cache_Key_{0}", username), () =>
{
return _storeRepository.GetSingle(x => x.UserName == username);
});
}
Note: if the key exists in the cache it will return a Store (not a Task<Store>); if the key does not exist in the cache it will execute the lambda. If I change the Func to
return await _storeRepository.GetSingleAsync(x => x.UserName == username);
And the method signature to
public virtual async Task<Store> GetStoreByUsername(string username)
This will not work obviously because of the cache return type.
|
Async/Await and Caching
|
The purpose of your cache layer should be pretty much that: reflecting the corresponding data in your database, but providing it faster than the database would, or at least providing it without keeping the database busy.
To achieve this, you have two solutions:
know the exact lifespan of everything you store in cache
keep your cache up-to-date with your database
The first is pretty rare, but pretty easy to deal with: just update your cache on a regular basis.
The second point is what you'll most likely deal with in your projects: just update your cache when your database is updated. It's simpler than you'd think:
Add a new object to your cache right after you successfully added it to your database.
Update an object in your cache right after you successfully updated it in your database.
Delete an object from your cache right after you successfully deleted it in your database.
If your code is clean enough, it should be easy to implement an efficient cache policy on top of it. There's a little more about caching and how to do it well in an answer I posted some time ago. Hopefully this all helps you :)
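The three write-through rules above can be sketched in a few lines. This is an illustrative sketch only, with plain dicts standing in for the database and the cache (all names are invented for the example):

```python
class WriteThroughStore:
    """Keep a cache in lock-step with its backing store."""

    def __init__(self, db, cache):
        self.db = db        # dict standing in for the database
        self.cache = cache  # dict standing in for memcached/Redis

    def save(self, key, value):
        self.db[key] = value     # write the database first...
        self.cache[key] = value  # ...then mirror the change in the cache

    def delete(self, key):
        del self.db[key]
        self.cache.pop(key, None)

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.db[key]     # cache miss: fall back to the database
        self.cache[key] = value  # repopulate so the next read is fast
        return value
```

Because every mutation goes through save/delete, the cache can never serve an entry the database no longer agrees with, which is exactly the invalidation guarantee described above.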
|
Just wondering how you guys manage your cache invalidation. Given that there might be objects (hundreds or thousands) in the cache that might be triggered by different algorithms or rules, how do you keep track of it all?
Is there any way you can reference the relationships from a table in the database and enforce them somehow?
Do bear with me, as I've never done any caching before.
|
Managing Cache Invalidation
|
I got this to work using a .manifest file in a UIWebView. I discovered the trick on the Apple developer forums.
you must deliver the proper MIME type for the manifest file: it must be of type "text/cache-manifest"; if it is anything else, you won't get your files cached.
You can use Web-Sniffer at http://web-sniffer.net/ to look at the headers returned by your server.
if you are using .htaccess files on your web server, then add the following line to the .htaccess file:
AddType text/cache-manifest .manifest
make sure to clear your browser cache after making this change to see the effect.
HTH
Mark
edit: this is only supported on iOS 4.0 and later. You will need to implement your own caching mechanism for iOS 2.x and 3.x :-(
|
I'd like to be able to use the html5 cache manifest to store images locally on an iPhone that is visiting the page via a UIWebView within an app.
I've set up a sample that I think conforms to the specs, and appears to work in safari 4 and mobile safari, but not in my app's UIWebView.
The sample html is set up at http://bynomial.com/html5/clock3.html.
This is very similar to the sample provided in the HTML5 draft standard.
Here is the entire (non-template) code of the sample app I'm using for testing:
- (void)applicationDidFinishLaunching:(UIApplication *)application {
// I thought this might help - I don't see any difference, though.
NSURLCache* cache = [NSURLCache sharedURLCache];
[cache setDiskCapacity:512*1024];
CGRect frame = [[UIScreen mainScreen] applicationFrame];
UIWebView* webView = [[UIWebView alloc] initWithFrame:frame];
[window addSubview:webView];
NSString* urlString = @"http://bynomial.com/html5/clock3.html";
NSURL* url = [NSURL URLWithString:urlString];
NSURLRequest* request = [NSURLRequest requestWithURL:url];
[webView loadRequest:request];
[window makeKeyAndVisible];
}
I've reviewed a few related questions on stackoverflow, but they don't seem to provide info to solve this. For example, I'm pretty sure the files I'm trying to cache are not too large, since they are just a couple small text files (way < 25k).
Any ideas for how to get this to work?
|
Html5 cache manifest in a UIWebView?
|
The operating system can clear this folder if needed.
Documentation
Put data cache files in the Library/Caches/ directory. Cache data can be used for any data that needs to persist longer than temporary data, but not as long as a support file. Generally speaking, the application does not require cache data to operate properly, but it can use cache data to improve performance. Examples of cache data include (but are not limited to) database cache files and transient, downloadable content. Note that the system may delete the Caches/ directory to free up disk space, so your app must be able to re-create or download these files as needed. (emphasis mine)
|
I'm saving media files at this path, and I wonder: does iOS auto-clean the cache, or do I have to do it manually?
let documentsUrl = self.fileManager.urls(for: .cachesDirectory, in: .userDomainMask).first!
I searched, and there is no particular answer for it.
|
Does iOS clean cache directory automatically?
|
ActiveSupport::Cache::MemoryStore is what you want to use. Rails.cache uses either MemoryStore, FileStore, or in my case DalliStore :-)
You can have a global instance of ActiveSupport::Cache::MemoryStore and use it, or create a class with a singleton pattern that holds this object (cleaner). Set Rails.cache to the other cache store and use this singleton for the MemoryStore.
Below is this class:
module Caching
class MemoryCache
include Singleton
# create a private instance of MemoryStore
def initialize
@memory_store = ActiveSupport::Cache::MemoryStore.new
end
# this will allow our MemoryCache to be called just like Rails.cache
# every method passed to it will be passed to our MemoryStore
def method_missing(m, *args, &block)
@memory_store.send(m, *args, &block)
end
end
end
This is how to use it:
Caching::MemoryCache.instance.write("foo", "bar")
=> true
Caching::MemoryCache.instance.read("foo")
=> "bar"
Caching::MemoryCache.instance.clear
=> 0
Caching::MemoryCache.instance.read("foo")
=> nil
Caching::MemoryCache.instance.write("foo1", "bar1")
=> true
Caching::MemoryCache.instance.write("foo2", "bar2")
=> true
Caching::MemoryCache.instance.read_multi("foo1", "foo2")
=> {"foo1"=>"bar1", "foo2"=>"bar2"}
|
I'd like to use 2 caches -- the in memory default one and a memcache one, though abstractly it shouldn't matter (I think) which two.
The in memory default one is where I want to load small and rarely changing data. I've been using the memory one to date. I keep a bunch of 'domain data' type stuff from the database in there, I also have some small data from external sources that I refresh every 15 min - 1 hour.
I recently added memcache because I'm now serving up some larger assets. It's sort of complex how I got into this, but these are larger (kilobytes each), relatively small in quantity (hundreds), and highly cacheable; they change, but a refresh once per hour is probably too much. This set might grow, but it's shared across all hosts. Refreshes are expensive.
The first set of data has been using the default memory cache for a while now, and has been well-behaved. Memcache is perfect for the second set of data.
I've tuned memcache, and it's working great for the second set of data. The problem is that because of my existing code that was done 'thinking' it was in local memory, I'm doing several trips to memcache per request, which is increasing my latency.
So, I want to use 2 caches. Thoughts?
(note: memcache is running on different machine(s) than my server. Even if I ran it locally, I have a fleet of hosts so it wouldn't be local to all. Also, I want to avoid needing to just get bigger machines. Even though I probably could solve this problem by making the memory bigger and just using the in memory (the data really isn't that big), this doesn't solve the problem as I scale, so it will just be kicking the can.)
|
How to use multiple caches in rails? (for real)
|
Please read the description in the official Glide caching documentation: https://futurestud.io/tutorials/glide-caching-basics
What does the 4.x after Glide mean?
– Katona Tamas
Version 4.x refers to the library version, e.g. 4.5.0.
– Saurabh Mistry
Does it matter which one I use if I just want Glide to use the cache when needed?
– MikkelT
|
When I use Glide, some images don't load. How can I store them in the cache, or anywhere, so that all my images load whenever I use the app?
Example picture of my problem:
My code:
.java
home_ib7_monster_truck =
(ImageButton)findViewById(R.id.home_ib7_monster_truck);
Glide.with(Home.this)
.load(R.drawable.moviep_monster_truck)
.centerCrop()
.fitCenter()
.diskCacheStrategy(DiskCacheStrategy.ALL)
.into(home_ib7_monster_truck);
.xml
<ImageButton
android:id="@+id/home_ib7_monster_truck"
android:layout_width="98dp"
android:layout_height="155dp"
android:layout_alignStart="@+id/home_ib4_keplers_dream"
android:layout_below="@+id/home_ib4_keplers_dream"
android:layout_marginTop="14dp"
android:background="@android:color/transparent"
android:scaleType="fitCenter" />
I use Glide because I saw it's easy to use and I can load images from drawables.
The main problem is that this happens randomly: sometimes none of my images show up, other times they all do, and I don't know why.
I use 980x1550 images because I don't want my images to be full of pixels, and when I use another method, like src in the .xml, I get a memory error.
So does anyone know a solution for how I can cache the images?
EDIT:
With these (.diskCacheStrategy(DiskCacheStrategy.RESULT); .diskCacheStrategy(DiskCacheStrategy.SOURCE)) I have the same problem.
I get this from LogCat: Throwing OutOfMemoryError "Failed to allocate a 1519012 byte allocation with 230112 free bytes and 224KB until OOM"
and I don't know how, when it should be cached.
|
How to cache images in Glide
|
After long hours searching on SO and GitHub, I found imgCache.js, a JS library that handles file caching for Chrome, Android and iOS (through Cordova).
https://github.com/chrisben/imgcache.js/
Then, basically:
var target = $('.cached-img');
ImgCache.isCached(target.attr('src'), function(path, success){
if(success){
// already cached
ImgCache.useCachedFile(target);
} else {
// not there, need to cache the image
ImgCache.cacheFile(target.attr('src'), function(){
ImgCache.useCachedFile(target);
});
}
});
|
Here's my problem:
I'm making a web/mobile app using AngularJS and Cordova. For offline use, I use localStorage to store all the data of the app (JSON, parameters, and so on).
The thing is: I need to store/cache images locally (again, for offline use). As the localStorage size limit is around 5 MB, I can't use it; I need more.
I thought I could use a cache manifest, but it doesn't work, as I need to update it regularly without recompiling the app (I thought I could put the cache manifest on an external server, but it seems I can't use a cache manifest from another domain).
So I'm thinking of using the Cordova/PhoneGap File API, but I have no idea how to achieve that...
Any help or ideas?
|
How can I cache image files locally with PhoneGap / Cordova?
|
Speculation:
Here is what appears to be going on.
knitr quite sensibly caches objects as soon as they are created. It then updates their cached value whenever it detects that they have been altered.
data.table, though, bypasses R's normal copy-by-value assignment and replacement mechanisms, and uses a := operator rather than =, <<-, or <-. As a result, knitr isn't picking up the signals that DT has been changed by DT[, c:=5].
Solution:
Just add this block to your code wherever you'd like the current value of DT to be re-cached. It won't cost you anything memory- or time-wise (since nothing except a reference is copied by DT <- DT), but it does effectively send a (fake) signal to knitr that DT has been updated:
```{r, cache=TRUE, echo=FALSE}
DT <- DT
```
Working version of example doc:
Check that it works by running this edited version of your doc:
|
This is related in spirit to this question, but must be different in mechanism.
If you try to cache a knitr chunk that contains a data.table := assignment, then it acts as though that chunk has not been run, and later chunks do not see the effect of the :=.
Any idea why this is? How does knitr detect that objects have been updated, and what is data.table doing that confuses it?
It appears you can work around this by doing DT = DT[, LHS:=RHS].
Example:
```{r}
library(data.table)
```
Data.Table Markdown
========================================================
Suppose we make a `data.table` in **R Markdown**
```{r, cache=TRUE}
DT = data.table(a = rnorm(10))
```
Then add a column using `:=`
```{r, cache=TRUE}
DT[, c:=5]
```
Then we display that in a non-cached block
```{r, cache=FALSE}
DT
```
The first time you run this, the above will show a `c` column,
from the second time onwards it will not.
Output on second run
|
why does knitr caching fail for data.table `:=`?
|
11
The method you use is actually the correct way to clear your cache; there is just one minor "error" in your code. The enumerator is only valid as long as the original collection remains unchanged, so while the code might work most of the time, there might be subtle errors in certain situations. Best is to use the following code, which does essentially the same thing but does not use the enumerator directly while removing:
List<string> keys = new List<string>();
IDictionaryEnumerator enumerator = Cache.GetEnumerator();
while (enumerator.MoveNext())
keys.Add(enumerator.Key.ToString());
for (int i = 0; i < keys.Count; i++)
Cache.Remove(keys[i]);
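The same snapshot-the-keys-first pattern applies outside .NET as well. As a cross-language illustration (not the answer's C# code), in Python deleting from a plain dict while iterating it raises a RuntimeError, and the fix is the same copy-then-remove approach:

```python
def clear_cache(cache):
    """Remove every entry from a mapping without mutating it mid-iteration.

    list() takes a snapshot of the keys first, so the loop never touches
    a live iterator over a collection that is being modified.
    """
    for key in list(cache.keys()):
        del cache[key]
```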
Your code does not actually fix the problem of enumerating a collection that might change. You may have shortened the window in which it can happen, but keys.Add has the same problem as calling Remove directly: the collection can change while you're enumerating.
– Matt
Yes it does: neither loop changes the collection that it is enumerating. You first collect all keys in one collection (and make a copy of them), then use those keys to remove values from the other collection.
– KilZone
Ahh, see, I imagined the problem to be the enumeration changing on another thread that was using the cache, so you are still susceptible to those types of enumeration changes. MSDN on the HTTP cache says the class is thread-safe, so it may not be an issue. It doesn't mention specifically whether the enumerator is thread-safe (like it does for ConcurrentDictionary).
– Matt
|
I am building an ASP.NET/Umbraco-powered website which is very custom-data-driven via Entity Framework. We are having to cache quite a lot of the data queries (for example, searches by keyword) as it's a busy site.
But when a user creates a new data entry, I need to clear all the cached queries (searches etc.) so the new entry is available in the results.
So in my create, delete and update methods I am calling the following method:
public static void ClearCacheItems()
{
var enumerator = HttpContext.Current.Cache.GetEnumerator();
while (enumerator.MoveNext())
{
HttpContext.Current.Cache.Remove(enumerator.Key.ToString());
}
}
Is this really bad? I can't see how else I am supposed to clear the cached items?
|
Most Efficient Way Of Clearing Cache Using ASP.NET
|
Even though this is a bit hacky, maybe have a go at this workaround (I'm assuming this is for development).
As per the notes on disabling ajax caching in How to disable Ajax caching in Safari browser?, you could set no-cache headers while developing, or in plain HTML like this:
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="-1">
Obviously it won't fix your initial issue of being stuck, but you can break the cycle by adding an arbitrary query parameter (?something=3164) so the URL is effectively unique. Then, the next time it loads, hopefully it will hold onto the no-cache params.
If even that doesn't work, you could set up a bookmark which redirects you to a different random=14361 number each time, so they are all effectively unique calls; but then we're getting into silly territory!
I'd like to have a proper solution, but when I'm developing JS webapps I've found that sometimes everything refreshes properly for a while, then sometimes it doesn't... no real pattern I can tell, apart from the fact that I think it seems to do it less when the debugger is enabled (but that's totally unsubstantiated ;-)
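The arbitrary-query-parameter trick above can be automated. A small sketch (the parameter name `_cb` is arbitrary, and any value that changes between calls would do):

```python
import time
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def bust_cache(url):
    """Append a throwaway query parameter so caches treat the URL as new."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    # current timestamp makes the URL effectively unique per call
    query.append(('_cb', str(int(time.time()))))
    return urlunparse(parts._replace(query=urlencode(query)))
```

Existing query parameters are preserved, so the server sees the same request apart from the extra ignored parameter.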
|
I'm trying to debug a site on my iPhone4 (iOS4) iPad1 (iOS3.3) and desktop.
My problem is I cannot clear the iPhone cache at all.
If I add alerts/consoles to the js files I'm debugging, they show up on iPad and desktop, but the iPhone just keeps reloading from the cache.
If I clear the cache through Settings > Safari (delete browser history, cache, cookies), delete all bookmarks in Safari, and remove the files from the server, the iPad and desktop break (missing files), but the iPhone still loads the page as if nothing happened.
Not sure this is the right place to ask, but maybe someone else has a similar experience and an idea how to workaround?
Thanks!
EDIT:
I played around with this some more. If I start an application through the icon, the cache seems cleared. Only when I open the page in Mobile Safari does it still use the wrong file from cache. Pointers still welcome!
EDIT:
I'm starting a bounty on this. I'm using RequireJS and JqueryMobile on the site, so these may also be reasons for the cache not clearing. Still, I don't understand why it clears in app-mode and why it doesn't clear in Mobile Safari.
I have tried the following:
1. Clicking reload page in the URL bar does not clear the cache. Clicking on the link and then loading the page via go does seem to clear the cache once in a while
|
Why my Mobile Safari cache won't clear?
|
If you're not willing to use your own block class inheriting from Mage_Catalog_Block_Navigation, in which you could set your own cache information (to make it, e.g., depend on the current product), you can get rid of the cache for your block by using this in its layout definition:
<block type="catalog/navigation" name="left.navigation.block" as="left.navigation.block" template="catalog/navigation/page_left.phtml">
<action method="unsetData"><key>cache_lifetime</key></action>
</block>
|
I have a sidebar block in my layout that is being displayed on different pages.
In this block I have a list of products, and I want to select the current product when I'm on a product page.
I'm using :
$current_product = Mage::registry('current_product');
to get the current product, but this works only for the first time I'm loading the product page. When I select a different product the code above returns the same value (the first product).
I'm assuming this happens because I'm using Magento cache. What can I do to get the right value?
The same thing happens when I use:
$currentCategory = Mage::registry('current_category');
The sidebar block is a navigation template I've added here:
..\app\design\frontend\default\mytheme\template\catalog\navigation\page_left.phtml.
I'm adding it to the layout with this XML :
<block type="catalog/navigation" name="left.navigation.block" as="left.navigation.block" template="catalog/navigation/page_left.phtml"/>
|
Magento - get current product
|
It's possible to use the same Redis instance for multiple microservices; just make sure to prefix your cache keys to avoid conflicts between microservices.
You can use multiple DBs in the same Redis instance (i.e. one per microservice), but it's discouraged because Redis is single-threaded.
The best way is to use one Redis instance per microservice; then you can easily flush one of them without touching the others.
From my personal experience with a Redis cache in production (with 2 million keys), there is no problem using EXPIRE. I encourage you to use it.
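A sketch of the key-prefixing idea, with a plain dict standing in for the shared Redis server (with a real client such as redis-py, the same prefixing would simply be applied to the keys passed to `set`/`get`; the class and method names here are illustrative):

```python
class PrefixedCache:
    """One shared store, per-service key prefixes to avoid clashes."""

    def __init__(self, store, service):
        self.store = store      # dict standing in for a shared Redis
        self.service = service  # e.g. 'users', 'orders'

    def _key(self, key):
        return f"{self.service}:{key}"

    def set(self, key, value):
        self.store[self._key(key)] = value

    def get(self, key):
        return self.store.get(self._key(key))

    def flush_service(self):
        # delete only this service's keys, leaving other services untouched
        prefix = f"{self.service}:"
        for k in [k for k in self.store if k.startswith(prefix)]:
            del self.store[k]
```

Because every key carries its service prefix, two services can use the same logical key ("42") without colliding, and one service can be flushed in isolation.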
|
I am very new to Redis and have been investigating it for the past few days. I read the documentation on cache management (LRU cache), commands, etc. I want to know how to implement caching for the data of multiple microservices.
I have a few questions:
Can all microservices' (cached) data be kept under a single instance of a Redis server?
Should every microservice have its own cache database in Redis?
How can I refresh cache data without setting EXPIRE? Since it would consume more memory.
Some more information on best practices for Redis with microservices would be helpful.
|
How to use redis for number of micro-services?
|
It depends on what Cache-Control is used. Check in Firebug or the Chrome inspector and see what expiration dates are set.
If you've set Cache-Control to public, you can't affect the caching, since the files are cached on various proxies and servers along the way.
If you use Cache-Control: private, you should be able to reset your browser cache and be fine, but as you say, sometimes you get the wrong files from Google's production environment. I've had the same problem. The fastest solution is to add a query param to the files loaded.
|
I am running into a known AppEngine issue where the wrong static content is cached if I go to a particular URL for my app, but the right static content shows up if I append a ?foo parameter to bust the cache, and VERSION.myapp.appspot.com works too.
Is there any way to get the correct content showing up at the unmodified URL?
I would be happy to delete the app and restore it or anything drastic. The app isn't live, but I need it to be in a couple of hours. Anything to get those URLS working so the mobile app talking to the AppEngine app gets the right data.
EDIT
cURLing the headers, I see:
HTTP/1.1 200 OK
ETag: "ZN9VxQ"
Date: Tue, 14 Aug 2012 02:00:58 GMT
Expires: Wed, 15 Aug 2012 02:00:58 GMT
Content-Type: text/html
Server: Google Frontend
Cache-Control: public, max-age=86400
Age: 34623
Transfer-Encoding: chunked
Am I hosed for another 50,000 seconds? Any way to shorten that?
EDIT FOR COMMENTS:
In app.yaml, I have this handler:
- url: /static
static_dir: static
expiration: 1s
I have now tried removing the expiration:
- url: /static
static_dir: static
And I added this to the top of app.yaml, based on the docs:
default_expiration: "1m"
Also, deleting files doesn't make them disappear when I deploy.
|
Any way to force reset of all cached static files on AppEngine?
|
This problem drove me nuts for about a month a while back. You have to disable IIS caching in the registry; as far as I know this isn't documented anywhere for IIS 7, but it is an old IIS 5 trick that still works. You can either turn the below into a .reg file and import it, or navigate to the section and add it manually. I recommend rebooting after changing this parameter; I'm not sure if IIS picks it up after just an iisreset.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters]
"DisableMemoryCache"=dword:1
|
I have set up IIS 7.5 to statically serve some files, and some of these files are actually symbolic links (created by mklink).
Even though I disabled both kernel and user caching, these files seem to be cached somehow by IIS, and IIS is still serving old versions after the files are modified.
To be sure that it is not caused by ASP.NET, I've created a dedicated unmanaged AppPool. I have also checked that these file are not cached by browsers.
My web.config is following:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<directoryBrowse enabled="true" />
<caching enabled="false" enableKernelCache="false" />
<urlCompression doStaticCompression="false" doDynamicCompression="false" />
<staticContent>
<clientCache cacheControlMode="DisableCache" />
</staticContent>
</system.webServer>
</configuration>
There are several people mentioning this problem:
http://forums.iis.net/t/1166077.aspx
http://forums.iis.net/t/1171204.aspx
Any hints how to solve this problem?
|
How do I prevent IIS 7.5 from caching symlink content?
|
I think I figured it out. It looks like the issue is that VaryByParam, when the input parameter is an object, uses ToString() on that object to determine its uniqueness. So this leaves two options:
Overriding ToString() to provide a unique identifier.
Passing a unique identifier as an additional parameter:
<% Html.RenderAction("RenderContent", Model, Model.Id); %>
[Authorize]
[OutputCache(Duration = 6000, VaryByParam = "id", VaryByCustom = "browser")]
public ActionResult RenderContent(Content content, string id)
{
return PartialView(content);
}
|
I'm attempting to use the new partial page caching available in ASP.NET MVC 3. In my view, I'm using:
<% Html.RenderAction("RenderContent", Model); %>
Which calls the controller method:
[Authorize]
[OutputCache(Duration = 6000, VaryByParam = "*", VaryByCustom = "browser")]
public ActionResult RenderContent(Content content)
{
return PartialView(content);
}
Note that both the original view and the partial view are using the same view model.
The problem is that VaryByParam doesn't work - RenderContent() always returns the same cached HTML no matter what view model is passed to it. Is there something about VaryByParam that I don't understand?
|
Partial Page Caching and VaryByParam in ASP.NET MVC 3
|
I have set up integration tests that confirm all of the main areas of the site are available (a few hundred pages in total). They don't do anything that changes data; they just pull back the pages and forms.
I don't currently run them when I deploy my production instance, but now that you mention it, it may actually be a good idea.
Another alternative would be to pull every page that appears in your sitemap (if you have one, which you probably should). It should be really easy to write a gem or rake script that does that.
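The sitemap approach can be scripted in a few lines in any language (the answer suggests a Ruby gem or rake task; the sketch below uses Python purely for illustration). It just extracts the URLs; fetching each one with any HTTP client is then enough to warm the page cache:

```python
import xml.etree.ElementTree as ET

# namespace defined by the sitemap protocol
NS = '{http://www.sitemaps.org/schemas/sitemap/0.9}'

def sitemap_urls(xml_text):
    """Return every <loc> URL listed in a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(NS + 'loc')]
```

Looping over the returned list and issuing a GET to each URL right after deployment would pre-build the page cache before the first real visitor arrives.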
|
I am wondering if anyone has any plugins or Capistrano recipes that will "pre-heat" the page cache for a Rails app by building all of the page-cached HTML at deployment time, or locally before deployment happens.
I have some mostly static sites that do not change much, and they would run faster if the HTML were already written, instead of requiring one visitor to hit the site.
Rather than create this myself (it seems easy, but it's lowwwww priority), does it already exist?
|
"Warm Up Cache" on deployment
|
There are at least 3 options to store an object per-request in ASP.NET Core:
1. Dependency Injection
You could totally re-design that old code: use the built-in DI and register a Database instance as scoped (per web-request) with the following factory method:
public void ConfigureServices(IServiceCollection services)
{
services.AddScoped<Database>((provider) =>
{
return new DatabaseWithMVCMiniProfiler("MainConnectionString");
});
}
Introduction to Dependency Injection in ASP.NET Core
.NET Core Dependency Injection Lifetimes Explained
2. HttpContext.Items
This collection is available from the start of an HttpRequest and is discarded at the end of each request.
Working with HttpContext.Items
3. AsyncLocal<T>
Store a value per current async context (a kind of [ThreadStatic] with async support). This is how HttpContext itself is actually stored: see HttpContextAccessor.
What's the effect of AsyncLocal<T> in non async/await code?
ThreadStatic in asynchronous ASP.NET Web API
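As a cross-language aside (not part of the ASP.NET answer itself), Python's `contextvars.ContextVar` implements the same per-async-context storage idea as `AsyncLocal<T>`: each concurrent task sees its own value without any explicit passing. A runnable illustration:

```python
import asyncio
import contextvars

# per-async-context slot, analogous to AsyncLocal<T> in .NET
current_db = contextvars.ContextVar('current_db', default=None)

async def handle_request(name):
    current_db.set(name)    # set once at the start of "the request"
    await asyncio.sleep(0)  # yield so the tasks interleave
    return current_db.get() # each task still sees its own value

async def main():
    # two concurrent "requests" keep independent values
    return await asyncio.gather(handle_request('db-a'),
                                handle_request('db-b'))

results = asyncio.run(main())
```

Even though both tasks run interleaved on one thread, neither observes the other's value, which is exactly the isolation `AsyncLocal<T>` provides.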
|
My old code looks like this:
public static class DbHelper {
// One conection per request
public static Database CurrentDb() {
if (HttpContext.Current.Items["CurrentDb"] == null) {
var retval = new DatabaseWithMVCMiniProfiler("MainConnectionString");
HttpContext.Current.Items["CurrentDb"] = retval;
return retval;
}
return (Database)HttpContext.Current.Items["CurrentDb"];
}
}
Since we no longer have HttpContext easily accessible in Core, how can I achieve the same thing?
I need to access CurrentDb() easily from everywhere.
I would like to use something like MemoryCache, but with request lifetime. DI is not an option for this project.
|
How to do per-request caching in ASP.NET Core
|
Try this (changing the values as appropriate):
<mvc:resources mapping="/static/**" location="/public-resources/"
cache-period="31556926"/>
<mvc:annotation-driven/>
You can also use an interceptor:
<mvc:interceptors>
<mvc:interceptor>
<mvc:mapping path="/static/*"/>
<bean id="webContentInterceptor"
class="org.springframework.web.servlet.mvc.WebContentInterceptor">
<property name="cacheSeconds" value="31556926"/>
<property name="useExpiresHeader" value="true"/>
<property name="useCacheControlHeader" value="true"/>
<property name="useCacheControlNoStore" value="true"/>
</bean>
</mvc:interceptor>
</mvc:interceptors>
See the MVC docs
|
How to enable browser caching of static content(images, css, js) with Tomcat?
The preferable solution would involve editing Spring MVC config files or web.xml.
|
How to enable browser caching of static content(images, css, js) with Tomcat?
|
I ended up modifying the built-in lru_cache to use psutil.
The modified decorator takes an additional optional argument use_memory_up_to. If set, the cache will be considered full if there are fewer than use_memory_up_to bytes of memory available (according to psutil.virtual_memory().available). For example:
from .lru_cache import lru_cache
GB = 1024**3
@lru_cache(use_memory_up_to=(1 * GB))
def expensive_func(args):
...
Note: setting use_memory_up_to will cause maxsize to have no effect.
Here's the code: lru_cache.py
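The linked lru_cache.py patches the stdlib implementation; the idea can also be sketched from scratch. The decorator below is my own simplified illustration (the names `memory_aware_lru` and the injectable `mem_available` hook are not from the answer): it evicts least-recently-used entries whenever available system memory drops below the threshold, and by default it reads `psutil.virtual_memory().available` as the answer does.

```python
from collections import OrderedDict
from functools import wraps

def memory_aware_lru(use_memory_up_to, mem_available=None):
    """LRU cache that treats itself as 'full' when available system memory
    drops below `use_memory_up_to` bytes. `mem_available` is injectable for
    testing; by default it reads psutil.virtual_memory().available."""
    if mem_available is None:
        import psutil  # third-party dependency, as in the answer above
        mem_available = lambda: psutil.virtual_memory().available

    def decorator(func):
        cache = OrderedDict()

        @wraps(func)
        def wrapper(*args):  # keys on positional, hashable args only, for brevity
            if args in cache:
                cache.move_to_end(args)  # mark as most recently used
                return cache[args]
            result = func(*args)
            # Evict least-recently-used entries while free memory is scarce.
            while cache and mem_available() < use_memory_up_to:
                cache.popitem(last=False)
            if mem_available() >= use_memory_up_to:
                cache[args] = result
            return result
        return wrapper
    return decorator
```

Note the caveat from the question still applies: several processes caching independently will each see the same "available memory" number, so they share the headroom rather than splitting it.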
|
I'm using Python 3's builtin functools.lru_cache decorator to memoize some expensive functions. I would like to memoize as many calls as possible without using too much memory, since caching too many values causes thrashing.
Is there a preferred technique or library for accomplishing this in Python?
For example, this question lead me to a Go library for system memory aware LRU caching. Something similar for Python would be ideal.
Note: I can't just estimate the memory used per value and set maxsize accordingly, since several processes will be calling the decorated function in parallel; a solution would need to actually dynamically check how much memory is free.
|
Memory-aware LRU caching in Python?
|
The short answer: there is no way to tell users' browsers to "forget" a 301 redirect. 301 means permanent; it can only be undone by user action or when the cache expires.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.2
Similar Q and A on Stackoverflow:
Apache - how to disable browser caching while debugging htaccess,
Cannot remove 301 redirect
Try to avoid 301 redirects and use 302 (temporary) instead. Here is an article on how to set no-cache for 301 redirects (didn't try it):
https://github.com/markkolich/blog/blob/master/content/entries/set-cache-control-and-expires-headers-on-a-redirect-with-mod-rewrite.md
What you could do in your scenario: you could add a header redirect to the file index.shtml, which sends users back to the original file where they should usually go.
Redirecting index.shtml back to the root creates a redirect loop in the affected browsers. I made it [302] redirect back to example.com/?home so the URL is at least different, but I'm not sure if that works or not!
– DMack
May 2, 2014 at 16:07
The instruction in the last link to disable cache works by me. Thanks!
– ostmond
Aug 14, 2019 at 12:29
Last link is no longer valid, do you remember the solution for this?
– Steven Combs
Oct 19, 2020 at 18:28
Even with 302 you have the issue that a browsers keep using a cached redirect although the site is not redirecting anymore. Is there a solution for 302 redirects?
– basZero
Nov 23, 2020 at 8:47
|
I inherited a domain that previously had a 301 redirect from the root ("/") to "/index.shtml"
I've removed the redirect and a different site on the domain, but people who visited the site in the past will have the redirect behavior cached in their browsers... for a terribly long time, unless they manually clear their caches.
Anyone trying to go to example.com in these browsers will be sent to example.com/index.shtml before they even make any HTTP requests. Right now this is a huge problem because there is no index.shtml, but is there something I can do with headers to tell browsers to "forget about that redirect you just did!"?
|
Force browsers to forget cached redirects?
|
You don't need to do it in all of your components. As soon as an image is downloaded it gets cached by the browser and will be accessible in all components, so you can do this only once somewhere in a high-level component.
I don't know exactly what UX you are trying to create by caching images; however, your code only initiates downloading the images but doesn't know whether an image is still downloading, has downloaded successfully, or has failed. So if, for example, you want to show a button to change images, or add a class to a component only once the images have been downloaded (to make it smooth), your current code may let you down.
You may want to resolve this with Promises.
// create an utility function somewhere
const checkImage = path =>
new Promise((resolve, reject) => {
const img = new Image()
img.onload = () => resolve(path)
img.onerror = () => reject()
img.src = path
})
...
// then in your component
class YourComponent extends Component {
  // class field, not `this.state =` at class-body level
  state = { imagesLoaded: false }
  componentDidMount = () =>
    Promise.all(
      R.take(limit, imgUrls).map(checkImage)
    ).then(() => this.setState(() => ({ imagesLoaded: true })),
           () => console.error('could not load images'))
  render = () =>
    this.state.imagesLoaded
      ? <BeautifulComponent />
      : <Skeleton />
}
Regarding memory consumption, I don't think anything bad will happen. Browsers normally limit the number of parallel XHR requests, so you won't be able to create a gigantic heap-usage spike that crashes anything, and unused images will be garbage collected (yet preserved in the browser cache).
Redux store is a place to store the app state, not the app assets, but anyway you won't be able to store any actual images there.
|
Suppose I have a list of url's like so :
[ '/images/1', '/images/2', ... ]
And I want to prefetch n of those so that transitioning between images is faster. What I am doing now in componentWillMount is the following:
componentWillMount() {
const { props } = this;
const { prefetchLimit = 1, document = dummyDocument, imgNodes } = props;
const { images } = document;
const toPrefecth = take(prefetchLimit, images);
const merged = zip(toPrefecth, imgNodes);
merged.forEach(([url, node]) => {
node.src = url;
});
}
with imgNodes being defined like so:
imgNodes: times(_ => new window.Image(), props.prefetchLimit),
and times, zip, and take coming from ramda.
Now when I use those urls inside of react like so:
<img src={url} />
it hits the browser cache according to the n0 and n1 tags regardless of where the url is used. I also plan on using this to prefetch the next n2 images whenever we hit n3 inside of the view, reusing n4 in the same manner.
My question are:
Is this even a valid idea give 100+ components that will use this idea but only 1 will be visible at a time?
Will I run into memory issues by doing this? I am assuming that n5 will be garbage collected when the component is unmounted.
We are using n6 so I could save these images in the store but that seems like I am handling the caching instead of leveraging the browser's natural cache.
How bad of an idea is this?
|
How To Cache Images in React?
|
The original poster wanted to prevent static assets from getting into the general Rails cache, which led them to want to disable the Rack::Cache. Rather than doing this, the better solution is to configure Rack::Cache to use a separate cache than the general Rails cache.
Rack::Cache should be configured differently for entity storage vs meta storage. Rack::Cache has two different storage areas: meta and entity stores. The metastore keeps high level information about each cache entry including HTTP request and response headers. This area stores small chunks of data that is accessed at a high frequency. The entitystore caches the response body content which can be a relatively large amount of data though it is accessed less frequently than the metastore.
The configuration below caches the metastore info in memcached but writes the actual body of the assets to the file system.
Using memcached gem:
config.action_dispatch.rack_cache = {
:metastore => 'memcached://localhost:11211/meta',
:entitystore => 'file:tmp/cache/rack/body',
:allow_reload => false
}
Using dalli gem
config.action_dispatch.rack_cache = {
:metastore => Dalli::Client.new,
:entitystore => 'file:tmp/cache/rack/body',
:allow_reload => false
}
By the way this configuration is the recommendation for Heroku:
https://devcenter.heroku.com/articles/rack-cache-memcached-static-assets-rails31
|
I'm using CloudFlare CDN on my Rails 3.1 application. Cloudflare is a CDN that works at the DNS level. On the first hit to a static asset, CloudFlare loads it from your app then caches it in their CDN. Future requests for that asset load from the CDN instead of your app.
The problem I'm having is that if you set controller caching to true:
config.action_controller.perform_caching = true
it enables the Rack::Cache middleware. Since Rails sets a default cache control setting for static assets, those assets get written to the Rails.cache store. As a result my cache store (in my case redis) is being filled up with static assets with the url as the hash key.
Unfortunately, I can't turn off the static asset cache control headers without affecting how Cloudflare and my users' browsers cache the assets. I can't turn off controller caching or I lose page/action/fragment caching. Same result if I delete the Rack::Cache middleware.
Does anyone have any other ideas?
Update: I've opened a ticket on GitHub here.
|
How do I prevent Rails 3.1 from caching static assets to Rails.cache?
|
Only sessions are unique to every client, not necessarily cookies.
What you want makes sense and is possible with Varnish, it is just a matter of carefully crafting your own vcl. Please pay attention to the following parts of the default.vcl:
sub vcl_recv {
...
if (req.http.Authorization || req.http.Cookie) {
/* Not cacheable by default */
return (pass);
}
}
sub vcl_hit {
if (!obj.cacheable) {
return (pass);
}
...
}
sub vcl_fetch {
if (!beresp.cacheable) {
return (pass);
}
if (beresp.http.Set-Cookie) {
return (pass);
}
...
}
You have to replace these parts with your own logic, i.e. define your own vcl_ functions. By default, requests (vcl_recv) and responses (vcl_fetch) with cookies are not cacheable. You know your back-end application best, so you should rewrite the generic caching logic for this specific case. That is, you should define in which cases Varnish does a lookup, pass, or deliver.
In your case, you will have pages (cases 1 and 2) without a vary-by cookie, which will be cached and shared by everyone (requests with or without cookies); just don't mind req.http.Cookie in vcl_recv. I wouldn't cache pages (case 3) with a vary-by cookie (or at least not for a long time), as they cannot be shared at all; do a 'pass' in vcl_fetch.
|
I want to use Varnish to cache certain pages even in the presence of cookies. There are 3 possibilities that I need to take care of:
An anonymous user is viewing some page
A logged in user is viewing some page with light customization. These customizations are all stored in a signed-cookie and are dynamically populated by Javascript. The vary-cookie http header is not set.
A logged in user is viewing some page with customized data from the database. The vary-cookie http header is set.
The expected behaviors would be:
Cache the page. This is the most basic scenario for Varnish to handle.
Cache the page and do not delete the cookie because some Javascript logic needs it.
Never cache this page because vary-cookie is signalling the cookie contents will affect the output of this page.
I have read some docs on Varnish and I cannot tell if this is the default behavior or if there is some setup I have to do in VCL to make it happen.
|
How to make Varnish ignore, not delete cookies [closed]
|
I have the same kind of issue: one db context, two or more different db models (differing only by table names).
My solution for EF6: one can still use Entity Framework's internal caching of the db model but differentiate between DbModels on the same DbContext by implementing the IDbModelCacheKeyProvider interface on the derived DbContext.
MSDN doc is here: https://msdn.microsoft.com/en-us/library/system.data.entity.infrastructure.idbmodelcachekeyprovider(v=vs.113).aspx
And it says:
Implement this interface on your context to use custom logic to calculate the key used to lookup an already created model in the cache. This interface allows you to have a single context type that can be used with different models in the same AppDomain, or multiple context types that use the same model.
Hope it helps someone.
This works perfectly. All you have to do is return a different key for each instance of the model.
– Kramii
Dec 3, 2018 at 12:26
@Kramii I suspect there might be a caveat in your approach if we use a random guid as key: If the underlying cache is boundless (that is to say it's not capped in some way, p.e. 1000 elements or something) then the more you spawn db-contexts during the application lifetime the more db-models get cached and you end up with a memory leak. Just a word of caution.
– XDS
Aug 14, 2019 at 15:20
This is actually far better than simply disabling the cache
– Zar Shardan
Mar 16, 2021 at 10:38
|
Following MSDN documentation we can read:
The model for that context is then cached and is for all further instances of the context in the app domain. This caching can be disabled by setting the ModelCaching property on the given ModelBuidler, but note that this can seriously degrade performance.
The problem is the model builder does not contain any property named ModelCaching.
How is it possible to disable the model caching (e.g. for changing the model configuration at run time)?
|
How to disable model caching in Entity Framework 6 (Code First approach)
|
EDIT: See this thread on asktom, which describes how and why not to do this.
If you are in a test environment, you can put your tablespace offline and online again:
ALTER TABLESPACE <tablespace_name> OFFLINE;
ALTER TABLESPACE <tablespace_name> ONLINE;
Or you can try
ALTER SYSTEM FLUSH BUFFER_CACHE;
but again only on test environment.
When you test on your "real" system, the times you get after the first call (those using cached data) might be more interesting, since in production the cache will usually be warm. Call the procedure twice, and only consider the performance results from subsequent executions.
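That advice (discard the first, cold-cache execution and time only the rest) can be sketched with a small measurement harness. This is illustrative only; `timed_runs` is a made-up helper name, not part of any Oracle tooling:

```python
import time

def timed_runs(func, repeats=10):
    """Time `func` `repeats` times and return durations in seconds,
    discarding the first (cold-cache) run as the answer suggests."""
    durations = []
    for _ in range(repeats + 1):
        start = time.perf_counter()
        func()
        durations.append(time.perf_counter() - start)
    return durations[1:]  # drop the cold first execution
```

In the question's sequence (1199, 84, 81, ...) this would drop the 1199 ms outlier and keep the warm-cache measurements.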
|
I'm trying to test the utility of a new summary table for my data.
So I've created two procedures to fetch the data for a certain interval, each one using a different table source. In my C# console application I just call one or the other. The problem starts when I want to repeat this several times to get a good pattern of response times.
I get something like this: 1199, 84, 81, 81, 81, 81, 82, 80, 80, 81, 81, 80, 81, 91, 80, 80, 81, 80
Probably my Oracle 10g instance is caching data, which skews the test.
How can I solve this?
|
How to disable oracle cache for performance tests
|
It really is the same cache in the end; only HttpContext.Current can sometimes be null (when not in a web context, or in a web context where it has not yet been constructed). You'd be safe to always use HttpRuntime.Cache.
|
I know there is a very similar question here but I was hoping to get a better explination. Why would I ever use HttpContext.Cache instead of HttpRuntime.Cache if the HttpContext really uses the HttpRuntime.Cache behind the scenes?
In the article Simulate a Windows Service using ASP.NET to run scheduled jobs, Omar uses the HttpContext to store his cache items, but when Jeff Atwood implemented it here he chose to use the HttpRuntime instead. Obviously in this particular situation it makes sense, since you don't have to do a web request to add the cache item back into the HttpContext.
However I'm looking for some good pointers as to when to use one versus the other.
|
What's the difference between the HttpRuntime Cache and the HttpContext Cache?
|
I've had a similar experience, but I don't believe it was with an actual helper class, it was with anything I wrote under the lib/ directory. If you've had to use a require 'some_class' statement, then you should switch it to:
require_dependency 'some_class'
Worked like a charm for me.
|
I have a Rails 3 app in dev mode that won't load any changes I make when it's running under WEBrick. I triple-checked the settings in my development.rb and made sure I am running in development mode.
config.cache_classes = false
config.action_controller.perform_caching = false
I also checked my tmp directory to make sure the cache folder is empty. I have yet to do any caching on the site and have never turned caching on. I'm guessing it's a problem with loading the files.
Also, I was running on WEBrick, then installed Mongrel, and the problem still persists.
I'm guessing I've run into a config problem, because I don't see anyone else posting such a problem. Anything else I'm missing?
EDIT: It looks like my view helpers aren't being autoloaded. Aren't helpers supposed to be reloadable by default in Rails 3?
|
Rails 3 development environment keeps caching, even without caching on?
|
An AJAX request is no different from a normal request - it's a GET/POST/HEAD/whatever request being sent by the browser, and it is handled as such. This is confirmed here:
The HTTP and Cache sub-systems of modern browsers are at a much lower level than Ajax’s XMLHttpRequest object. At this level, the browser doesn’t know or care about Ajax requests. It simply obeys the normal HTTP caching rules based on the response headers returned from the server.
As per the jQuery documentation, caches can also be invalidated in at least one usual way (appending a query string):
cache (default: true, false for dataType 'script' and 'jsonp')
Type: Boolean
If set to false, it will force requested pages not to be cached by the browser. Note: Setting cache to false will only work correctly with HEAD and GET requests. It works by appending "_={timestamp}" to the GET parameters. The parameter is not needed for other types of requests, except in IE8 when a POST is made to a URL that has already been requested by a GET.
So in short, given the same headers, AJAX responses are cached the same way as other requests.
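The `_={timestamp}` trick jQuery uses can be reproduced in any language; below is a Python sketch of the same idea (`bust_cache` is a hypothetical helper name). Each call yields a URL the cache has never seen, forcing a fresh request:

```python
import time
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def bust_cache(url):
    """Append a `_={timestamp}` query parameter, the same trick jQuery's
    `cache: false` option uses to make each GET URL unique to the cache."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("_", str(int(time.time() * 1000))))  # milliseconds since epoch
    return urlunparse(parts._replace(query=urlencode(query)))
```

Note this defeats caching entirely for that URL; use it only when you actually want to bypass the cache.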
|
I have a JavaScript app that sends requests to REST API, the responses from server have cache headers (like ETag, cache-control, expires). Is caching of responses in browser automatic, or the app must implement some sort of mechanism to save the data?
|
Is caching in browser automatic?
|
I suspect that you have enabled output caching; this would exhibit the behaviour that you are describing, where recycling the app pool or restarting IIS clears the caches and allows you to see the new content.
This page gives more information, http://www.iis.net/learn/manage/managing-performance-settings/walkthrough-iis-output-caching
If you are using IIS Express then it is likely that the caching is set at the application level in your web.config, or on individual pages.
You need to set
<caching>
<outputCache enableOutputCache="false" />
</caching>
or if its IIS 7+ (Which IIS Express will be)
<system.webServer>
<caching enabled="false" />
</system.webServer>
|
I do some web development for work, and the biggest hit to my productivity comes from IIS. It caches files that will not update even when they are changed, unless I restart IIS. Specifically, I am referring to HTML, JS, and CSS files. This problem forces me to stop and start my web application constantly. Since I have Windows 7 I believe I have IIS 7.5. I am using IIS Express. This is so frustrating that I'd prefer IIS to never cache anything, ever. I am fine with a solution that stops all forms of caching, or just caching for the project I am working on.
IIS Manager is not available to me. It is not located in System and Security -> Administrative Tools -> IIS Manager as suggested by https://technet.microsoft.com/en-us/library/cc770472%28v=ws.10%29.aspx. Also, searching for inetmgr in the Start Search box gets me no results. Because of this, I am looking for a fix in the IIS applicationhost.config file.
I have tried putting the following in my applicationhost.config which doesn't work:
<location path="ProjectName">
<system.webServer>
<staticContent>
<clientCache cacheControlCustom="public" cacheControlMode="DisableCache" />
</staticContent>
<caching enabled="false" enableKernelCache="false"></caching>
</system.webServer>
</location>
The closest question on StackOverflow to my problem was IIS cached files never replaced. However, Fiddler shows me that the old files are being sent to the browser even after they have been changed.
How can I get IIS to send my browser the updated files without having to restart it?
|
How do I stop IIS from caching any files, ever, under any circumstances?
|
The nginx documentation is quite exhaustive — there's no variable with the direct relative age of the cached file.
The best way would be to use the $upstream_http_ variable class to get the absolute age of the resource by picking up its Date header through $upstream_http_date.
add_header X-Cache-Date $upstream_http_date;
For the semantic meaning of the Date header field in HTTP/1.1, refer to rfc7231#section-7.1.1.2, which describes it as the time of the HTTP response generation, so, basically, this should accomplish exactly what you want (especially if the backend runs with the same timecounter).
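On the consuming side, the age implied by that header is simply "now minus Date". A small Python sketch of the computation (`cache_age_seconds` is an illustrative name, not an nginx feature):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def cache_age_seconds(date_header, now=None):
    """Age of a cached response, derived from its HTTP/1.1 Date header
    (e.g. the X-Cache-Date value emitted by the config above)."""
    generated = parsedate_to_datetime(date_header)  # parses RFC 7231 dates
    now = now or datetime.now(timezone.utc)
    return (now - generated).total_seconds()
```

Checking that the age never exceeds 3600 seconds would confirm the proxy_cache_valid 200 60m; setting is respected.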
|
I've set up a caching server for a site through nginx 1.6.3 on CentOS 7, and it's configured to add HTTP headers to served files to show whether said files came from the caching server (HIT, MISS, or BYPASS), like so:
add_header X-Cached $upstream_cache_status;
However, I'd like to see if there's a way to add a header displaying the age of the cached file, as my setup has proxy_cache_valid 200 60m; set, and I'd like to check that it's respecting that setting.
So what I'm looking for would be something like:
add_header Cache-Age $upstream_cache_age;
I'm unable to find anything of the sort, though. Can you help?
Thanks
|
How to display the age of an nginx cached file in headers
|
See the Regular Expression Engine Comparison Chart maintained by Roger Qui, which is a copy of the information available in the original answer. (Credit to Uberhumus for the new link.)
[Original Answer]
See Flavor Comparison at Regular-Expressions.info.
|
We are currently in the process of upgrading our Varnish Cache servers.
As part of the process, we upgraded only one of them to see how it behaves compared to the older versions.
One of the major changes made in this new version is the switch of the regex engine from POSIX to PCRE.
I was wondering if anyone can list/point me to a list of actual syntax differences between POSIX and PCRE. Or maybe a function that converts a POSIX regex to PCRE regex.
This is so that I can convert only the purges going to the newer server - without affecting the current regex syntax that is implemented in the system for the other servers.
|
Regex Syntax changes between POSIX and PCRE
|
According to the Firebase documentation:
Transactions are not persisted across app restarts
Even with persistence enabled, transactions are not persisted across
app restarts. So you cannot rely on transactions done offline being
committed to your Firebase Realtime Database. To provide the best user
experience, your app should show that a transaction has not been saved
into your Firebase Realtime Database yet, or make sure your app
remembers them manually and executes them again after an app restart.
|
I am setting offline persistence
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
as described in an earlier post, but the following use case fails:
Turn internet connectivity OFF on handset
Attempt writing to the DB
Kill app from the memory using the users' multitasking menu in the OS
Turn internet connectivity back ON
Relaunch the app. At this point I expect the new record from step 2 to be sent to the DB via the restored network connectivity, but this does not happen. (Are my expectations correct?)
Sample code:
static{
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
}
void updateValue(){
DatabaseReference dbRef = FirebaseDatabase.getInstance().getReference("mydb");
dbRef.keepSynced(true);
dbRef.setValue("123");
}
Note that, if I don't kill the app from memory the caching works:
Turn internet connectivity OFF on handset
Attempt writing to the DB
Turn internet connectivity back ON
The new record is sent to the DB once the network connectivity is restored.
|
Firebase does not sync offline cache if the app is killed
|
See http://greenbytes.de/tech/webdav/rfc7234.html#response.cacheability:
"A cache MUST NOT store a response to any request, unless:
The request method is understood by the cache and defined as being cacheable, and
... the Authorization header field (see Section 4.2 of [RFC7235]) does not appear in the request, if the cache is shared, unless the response explicitly allows it, ..."
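That rule can be expressed as a small predicate. The sketch below is a deliberately simplified illustration of just the Authorization clause of RFC 7234: a shared cache must not store the response when the request carried Authorization, unless the response explicitly allows it via public, s-maxage, or must-revalidate. It ignores the method, status code, and the many other conditions in the section:

```python
def shared_cache_may_store(request_headers, response_cache_control):
    """Simplified RFC 7234 Authorization rule for a *shared* cache.
    `request_headers` is a dict; `response_cache_control` is the raw
    Cache-Control header value of the response."""
    if "authorization" not in {k.lower() for k in request_headers}:
        return True  # the Authorization clause does not apply
    # Directives that explicitly permit storing despite Authorization.
    allowing = {"public", "s-maxage", "must-revalidate"}
    directives = {d.split("=")[0].strip().lower()
                  for d in response_cache_control.split(",") if d.strip()}
    return bool(allowing & directives)
```

This also shows why the question's generic content can still be cached: serve it with public (or s-maxage) and the shared cache may store it even though the request was authorized.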
I'm confused. See w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.8 point 1. Using the cache settings s-max-age is set along with max-age.
– Finglas
Mar 3, 2015 at 16:58
RFC 2616 is obsolete. Please read all of Section 3, in particular 3.2.
– Julian Reschke
Mar 3, 2015 at 17:37
Thanks for that @Julian. I've updated my question to be more specific as I'm still not 100% clear on how this works.
– Finglas
Mar 3, 2015 at 19:50
Caching of authorized content seems to be the most confusing affiar to me in caching. I have a specific question here based on a scenario which received response adding more confusion to it. Could you please clarify how caching can be leveraged in such a scenario?
– LCJ
Aug 26, 2016 at 0:43
Has anyone found a solution for this?
– sammygadd
Feb 6, 2020 at 0:23
|
Given requests to a web server that contain an Authorization header as per the OAuth spec, does HTTP caching fail to be useful?
Request1 Authorization : AUTHTOKEN
Request2 Authorization : ANOTHERAUTOTOKEN
In this case, given HTTP caching, the second request would return the cached response for the first user. This isn't a problem for content that is generic across users, but it feels wrong for a shared cache to be providing one user's responses to other users.
Likewise if we were to use a Vary header and vary by Authorization, this means our cache would store a cached copy per token which surely defeats the purpose of HTTP caching. The browsers local cache (private) would work fine, but this would still mean an origin request from each user at least once per session.
Edit
The service in question requires Authorization for all requests, however based on what I've read, serving responses from a Shared cache that include Authorization headers shouldn't be done unless must-revalidate, public, and s-maxage are present.
My question therefore is, given an API that has both generic (responses the same across all users) and user specific responses, is caching even possible? Having s-maxage and public headers but an authorization header would mean that the cache would resolve UserA's response to UserB, UserC and so on if I'm following the RFC correctly.
|
HTTP Caching with Authorization
|
I saw your thread on the Node.js GitHub, and your answer is there: the cache is designed to be immutable. There's also a good suggestion to use workers, with each instance having its own fresh imports. Short of that, if you need this behaviour (which doesn't exist for a reason), is there perhaps a better way to design your application so it doesn't need it?
EDIT: The issue on GH: https://github.com/nodejs/help/issues/1399
|
ES Modules docs states:
require.cache is not used by import. It has a separate cache.
So where's this separate cache? Is it accessible after all?
I'm looking for invalidating module caching as it can be done in CommonJS modules (node.js require() cache - possible to invalidate?)
|
require.cache equivalent in ES modules
|
You can use query-params in the URL to avoid caching.
No need to change the filename.
this.http.get(`configs/config.json?t=${new Date().getTime()}`).subscribe(...);
new Date().getTime() will create a unique number for every millisecond.
In case of ngx-translate, you can define your httpLoader factory as
export function HttpLoaderFactory(httpClient: HttpClient) {
return new TranslateHttpLoader(httpClient, '/assets/i18n/',`.json?v=${new Date().getTime()}`);
}
I hope that helps.
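An alternative to the timestamp parameter is content hashing, which is what Webpack's [contenthash] substitution does: the URL only changes when the file's bytes change, so the browser can cache aggressively between releases. A Python sketch of deriving such a name (`hashed_name` is an illustrative helper, not part of Webpack):

```python
import hashlib
from pathlib import Path

def hashed_name(path):
    """Derive a content-addressed filename, e.g. config.json -> config.3ac1b2d4.json.
    The name changes only when the file content changes, so the browser can
    cache it indefinitely, unlike the timestamp trick, which disables caching
    for the file entirely."""
    p = Path(path)
    digest = hashlib.md5(p.read_bytes()).hexdigest()[:8]  # short content hash
    return p.with_name(f"{p.stem}.{digest}{p.suffix}").name
```

The drawback the question already identifies remains: every reference to the file must be rewritten to the hashed name at build time.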
This works as cache busting, but it will prevent caching of this file completely (each request will fetch the file from the server). With a hash, possibly injected by Webpack, you get both cache busting when the file changes and caching when it is unchanged.
– kamilz
Sep 26, 2023 at 9:39
|
I have the following code (written in typescript, but could be any JS variant):
this.http.get('configs/config.json').subscribe(...);
Basically, I'm loading a configuration from a local json file. I would like to have cache busting implemented on the file.
Although I can set up my Webpack config to modify JSON filenames by adding a hash suffix, I would also need to modify all the source files which have references to those files. string-replace-loader might do the job, but doing this feels a bit odd.
Additionally, in some cases I don't have access to the code that makes the HTTP call to the resource (e.g. a third-party translation plugin that loads something like i18n/[lang].json), so I can't directly modify the code, and/or the name (and thus the content hash) is only known at run time.
Is there something like URL rewrite for webpack that could solve this?
|
Cache busting of JSON files in webpack
|
Cache fragment entries are created with a slightly different key than what you access with Rails.cache.
Use expire_fragment instead (you can send it to a controller): http://api.rubyonrails.org/classes/ActionController/Caching/Fragments.html#M000438
|
Something like
Rails.cache.delete('site_search_form')
doesn't seem to work. Is this possible? Thanks.
|
How do I expire a view cached fragment from console?
|
Have you read documentation for touch method?
Saves the record with the updated_at/on attributes set to the current
time. Please note that no validation is performed and only the
after_touch, after_commit and after_rollback callbacks are executed.
If an attribute name is passed, that attribute is updated along with
updated_at/on attributes.
If all you need is to trigger after_save callbacks, use the save method instead. – Oleg Afanasyev
|
|
In recent days I have been trying to cache a Rails app using a Redis store.
I have two models:
class Category < ActiveRecord::Base
has_many :products
after_save :clear_redis_cache
private
def clear_redis_cache
puts "heelllooooo"
$redis.del 'products'
end
end
and
class Product < ActiveRecord::Base
belongs_to :category, touch: true
end
in controller
def index
@products = $redis.get('products')
if @products.nil?
@products = Product.joins(:category).pluck("products.id", "products.name", "categories.name")
$redis.set('products', @products)
$redis.expire('products', 3.hour.to_i)
end
@products = JSON.load(@products) if @products.is_a?(String)
end
With this code , the cache worked fine.
But when I update or create a new product (I have used the touch option on the relationship), the after_save callback in the Category model is not triggered.
Can you explain me why ?
|
Why does after_save not trigger when using touch?
|
This is included in MVC 2 Futures. See http://blogs.msdn.com/rickandy/archive/2009/12/17/session-less-mvc-controller.aspx for more info.
|
We are building an ASP.NET MVC application which will be deployed behind a hardware load balancer that supports, among other things, caching.
Our proposal is to manually define which URL patterns should be cached by the load balancer. This will be quite an easy process for us as we have 'catalogue' pages which are relatively static, then 'order' pages which are not.
We must avoid using session state on cached pages, as the entire response is cached by the load balancer - this includes any cookies that are sent.
Ideally there would be an attribute which can be applied to controllers or action methods which allows selective use of session state, but there doesn't appear to be one. I realise that an approach like this would result in lost sessions if the use leaves the 'session zone' - that's fine.
Other than re-implementing the entire ASP.NET MVC HTTP controller... any suggestions?
Thanks in advance.
|
Enable / disable session state per controller / action method
|
I know this thread is old, but I think I know what the other answer meant about prepend a "reset" subject to push new values. Check this example:
private _refreshProfile$ = new BehaviorSubject<void>(undefined);
public profile$: Observable<Customer> = this._refreshProfile$
.pipe(
switchMapTo(this.callWS()),
shareReplay(1),
);
public refreshProfile() {
this._refreshProfile$.next();
}
In the above snippet, all new profile$ subscribers receive the latest emitted value (callWS() is called once up front). If you wish to refresh the Customer being shared, call refreshProfile(): it emits a new value through switchMapTo, replacing the replayed value and notifying every open profile$ subscriber.
Have a nice one
|
I use shareReplay to call only once (like a cache) a webservice to retrieve some informations :
In my service :
getProfile(): Observable<Customer> {
return this.callWS().pipe(shareReplay(1));
}
In multiple components :
this.myService.getProfile().subscribe(customer => {
console.log('customer informations has been retrieved from WS :', customer);
});
Now I want to add a method to force a refresh of the information (bypassing the shareReplay(1) cache). I tried storing my observable in a variable and setting it to null before re-initializing it, but that seems to break the components' subscriptions.
Any help ?
Thanks
|
RxjS shareReplay : how to reset its value?
|
Well, in my case, when I checked my app, it didn't have the /tmp folder structure. Once I created it (/tmp/cache/models, /tmp/cache/persistent) everything worked well. This probably happened because git ignores empty folders, so they weren't created on checkout.
Indeed, version control systems often ignore empty folders. – Alfabravo
|
|
I'm using CakePHP 2.0 RC-1. After checking out the project from SVN, the application is starting to complain that it can't write cache files to the tmp/cache directory. Since this is local, I know the directory is writeable and I can CLEARLY see that the directories are even filled with files, so the error is a bit strange.
Here are some of the errors I've encountered:
_cake_core_ cache was unable to write 'cake_dev_nb' to File cache
fopen(c:\cake\app\tmp\cache\models\cake_model_default_media) [function.fopen]: failed to open stream: No error [CORE\Cake\Cache\Engine\FileEngine.php, line 127]
No error?! Wth?
Now, if I look in the FileEngine file, at line 127 it reads:
if (!$handle = fopen($this->_File->getPathName(), 'c')) {
return false;
}
By replacing the "c" with "w", no error is encountered and everything works as it should. But, it should not be necessary to modify the core Cake libraries to work around this problem. Let me repeat that on my other computer this works as intended, without editing the core library. Both use the Windows OS and the read/write rights to the tmp/cache-folder is exactly the same.
Edit: Here's a site that experiences the error outputs I'm having locally
Example site found by Googling. Not my site: http://www.12h30.net/credit/
Any suggestions?
Update: here is why: this happens if your PHP version is too old (the 'c' fopen mode was only added in PHP 5.2.6), as outlined by "api55" in the comments. Thanks for the reply. Hope this helps you too.
|
CakePHP 2.0 - Cake was unable to write to File cache
|
For future reference, this worked for me (I could not use an additional query parameter due to project requirements):
HttpWebRequest request = HttpWebRequest.CreateHttp(url);
if (request.Headers == null)
{
request.Headers = new WebHeaderCollection();
}
request.Headers[HttpRequestHeader.IfModifiedSince] = DateTime.UtcNow.ToString();
@Agent_L the request looks different for the WP7 HTTP client
– SandRock
Mar 30, 2012 at 9:12
|
|
It seems that HttpWebRequest caching in WP7 is enabled by default, how do I turn it off?
Adding a random parameter, url + "?param=" + RND.Next(10000), works, but it's quite hacky and I'm not sure it will work with all servers.
|
WP7 HttpWebRequest without caching
|
This is specified in the docs:
If expireAfterWrite or expireAfterAccess is requested entries may be evicted on each cache modification, on occasional cache accesses, or on calls to Cache.cleanUp(). Expired entries may be counted by Cache.size(), but will never be visible to read or write operations.
And there's more detail on the wiki:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
The reason for this is as follows: if we wanted to perform Cache
maintenance continuously, we would need to create a thread, and its
operations would be competing with user operations for shared locks.
Additionally, some environments restrict the creation of threads,
which would make CacheBuilder unusable in that environment.
Instead, we put the choice in your hands. If your cache is
high-throughput, then you don't have to worry about performing cache
maintenance to clean up expired entries and the like. If your cache
does writes only rarely and you don't want cleanup to block cache
reads, you may wish to create your own maintenance thread that calls
Cache.cleanUp() at regular intervals.
If you want to schedule regular cache maintenance for a cache which
only rarely has writes, just schedule the maintenance using
ScheduledExecutorService.
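The lazy-cleanup behavior described above is easy to see in miniature. The sketch below is not Guava; it is an illustrative stand-in (TinyCache is a made-up name, and the injectable clock exists only for testing) that keeps expired entries in its backing map until cleanUp() runs, while never returning them from get():

```javascript
// Minimal expire-after-write cache mirroring Guava's approach:
// expired entries linger in the backing map (and inflate size())
// until cleanUp() is called, but get() never returns them.
class TinyCache {
  constructor(ttlMs, clock = Date.now) {
    this.ttlMs = ttlMs;
    this.clock = clock;          // injectable for testing
    this.store = new Map();      // key -> { value, writtenAt }
  }
  put(key, value) {
    this.store.set(key, { value, writtenAt: this.clock() });
  }
  isExpired(entry) {
    return this.clock() - entry.writtenAt >= this.ttlMs;
  }
  get(key) {
    const entry = this.store.get(key);
    return entry && !this.isExpired(entry) ? entry.value : undefined;
  }
  size() {
    return this.store.size;      // may still count expired entries
  }
  cleanUp() {
    for (const [key, entry] of this.store) {
      if (this.isExpired(entry)) this.store.delete(key);
    }
  }
}
```

A maintenance thread calling Cache.cleanUp() on a schedule corresponds to wiring cleanUp() to setInterval here.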
|
private Cache<Long, Response> responseCache = CacheBuilder.newBuilder()
.maximumSize(10000)
.expireAfterWrite(10, TimeUnit.MINUTES)
.build();
I am expecting that response objects that are not sent to the client within 10 minutes expire and are removed from the cache automatically, but I notice that Response objects do not always expire, even after 10, 15, or 20 minutes. They do expire while the cache is being populated in large numbers, but when the system turns idle (something like the last 500 response objects), it stops removing them.
Can someone help to understand this behavior? Thank you
|
Guava cache 'expireAfterWrite' does not seem to always work
|
I've run across this problem a few times and usually overcome it on production sites by referencing my CSS like this:
<link rel="stylesheet" type="text/css" href="style.css?v=1" />
When you roll out an update just change the v=1 to v=2 and it will force all of your users browsers to grab the new style sheets. This will work for script files as well. If you view source on Google you will notice they use this approach as well.
|
I am developing a simple website using PHP.
Development Configuration : WAMP
Production Configuration : LAMP
While testing, I changed my CSS file, but when I reload the page my browser(not sure) still uses the old cached css.
I did some googling and found different solutions that I have already tried
Appending a query at the end of css css/main.css?78923
Using Ctrl + R (in Firefox) to force fetching of the resource
Disabling Firefox caching as well as using the Clear Cache Firefox add-on.
When none of this worked, I did some more googling, where I came across a stack page (here) where someone suggested that Apache caches the resources. So, the problem is not with the Firefox, but the server. The guy also suggested a solution that I did not understand (me being a newbie)
My question has two parts:
Is it true that Apache caches resources? (How do I check if mine does?)
How to prevent it from caching?
PS: copying and pasting the solution in stack question (the one that I have above as a link) did not work :(
|
Preventing Caching of CSS Files
|
Try placing the following in /config/environments/development.rb:
# Temporarily enable caching in development (COMMENT OUT WHEN DONE!)
config.action_controller.perform_caching = true
Additionally, if your cache store configuration is in /config/environments/production.rb, then you will need to copy the appropriate line into development.rb as well. For example, if your cache store is the Dalli memcache gem:
# copied from production.rb into development.rb for caching in development
config.cache_store = :dalli_store, '127.0.0.1'
Hope that helps.
|
In development, the following (simplified) statement always logs a cache miss, in production it works as expected:
@categories = Rails.cache.fetch("categories", :expires_in => 5.minutes) do
Rails.logger.info "+++ Cache missed +++"
Category.all
end
If I change config.cache_classes from false to true in config/development.rb, it works as well in development mode, however, this makes development rather painful. Is there any configuration setting that is like config.cache_classes = false except that Rails.cache.fetch is fetching from cache if possible?
|
Rails3 - Caching in development mode with Rails.cache.fetch
|
I did some digging on a related question, and looking at the MVC 3 source, child-action caching definitely doesn't support any attribute other than Duration and VaryByParam. The main bug in the current implementation is that if you don't supply either one, you get an exception telling you to supply it, instead of an exception saying that what you tried to use is not supported. The other major issue is that it will cache even if you turn off caching in the web.config, which seems really lame and not right.
The biggest issue I had with it all is that they use the same attribute for both views and partial views, but it should probably be two different attributes, since partial view caching is so limited and behaves quite differently, at least in its current implementation.
|
I'm trying to use cache profiles for caching child actions in my mvc application, but I get an exception: Duration must be a positive number.
My web.config looks like this:
<caching>
<outputCache enableOutputCache="true" />
<outputCacheSettings>
<outputCacheProfiles>
<add name="TopCategories" duration="3600" enabled="true" varyByParam="none" />
</outputCacheProfiles>
</outputCacheSettings>
</caching>
And my child action something like this:
[ChildActionOnly]
[OutputCache(CacheProfile = "TopCategories")]
//[OutputCache(Duration = 60)]
public PartialViewResult TopCategories()
{
//...
return PartialView();
}
Inside a view I just call @Html.RenderAction("TopCategories", "Category")
But I get an error:
Exception Details: System.InvalidOperationException: Duration must be a positive number.
If I don't use cache profile it works. Have an idea what's the problem?
|
Caching ChildActions using cache profiles won't work?
|
AFAIK, it is not possible to configure Redis to consistently evict the older data first.
When the *-ttl or *-lru options are chosen in maxmemory-policy, Redis does not use an exact algorithm to pick the keys to be removed. An exact algorithm would require an extra list (for *-lru) or an extra heap (for *-ttl) in memory, and cross-reference it with the normal Redis dictionary data structure. It would be expensive in term of memory consumption.
With the current mechanism, evictions occur in the main event loop (i.e. potential evictions are checked at each loop iteration before each command is executed). Until memory is back under the maxmemory limit, Redis randomly picks a sample of n keys, and selects for expiration the most idle one (for *-lru) or the one which is the closest to its expiration limit (for *-ttl). By default only 3 samples are considered. The result is non deterministic.
One way to increase the accuracy of this algorithm and mitigate the problem is to increase the number of considered samples (maxmemory-samples parameter in the configuration file).
Do not set it too high, since it will consume some CPU. It is a tradeoff between eviction accuracy and CPU consumption.
Now if you really require a consistent behavior, one solution is to implement your own eviction mechanism on top of Redis. For instance, you could add a list (for non updatable keys) or a sorted set (for updatable keys) in order to track the keys that should be evicted first. Then, you add a daemon whose purpose is to periodically check (using INFO) the memory consumption and query the items of the list/sorted set to remove the relevant keys.
Please note other caching systems have their own way to deal with this problem. For instance with memcached, there is one LRU structure per slab (which depends on the object size), so the eviction order is also not accurate (although more deterministic than with Redis in practice).
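For the "implement your own eviction mechanism" route, here is one possible shape in miniature (an illustrative sketch, not Redis itself; the class name and the capacity-based trigger are assumptions, where a real version would watch memory via INFO instead): track keys in insertion order and always evict the oldest first.

```javascript
// Deterministic oldest-first eviction, which Redis's sampled
// LRU/TTL policies cannot guarantee: keys are tracked in insertion
// order, and the oldest is always evicted first when over capacity.
class OldestFirstCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.order = [];          // insertion order, oldest key first
    this.data = new Map();
  }
  set(key, value) {
    if (!this.data.has(key)) this.order.push(key);
    this.data.set(key, value);
    while (this.data.size > this.maxEntries) {
      this.data.delete(this.order.shift()); // evict oldest
    }
  }
  get(key) {
    return this.data.get(key);
  }
}
```

The same bookkeeping idea carries over to Redis: a LIST of keys plus a daemon that LPOPs and DELs until memory is back under the limit.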
|
I'm storing a bunch of realtime data in redis. I'm setting a TTL of 14400 seconds (4 hours) on all of the keys. I've set maxmemory to 10G, which currently is not enough space to fit 4 hours of data in memory, and I'm not using virtual memory, so redis is evicting data before it expires.
I'm okay with redis evicting the data, but I would like it to evict the oldest data first. So even if I don't have a full 4 hours of data, at least I can have some range of data (3 hours, 2 hours, etc) with no gaps in it. I tried to accomplish this by setting maxmemory-policy=volatile-ttl, thinking that the oldest keys would be evicted first since they all have the same TTL, but it's not working that way. It appears that redis is evicting data somewhat arbitrarily, so I end up with gaps in my data. For example, today the data from 2012-01-25T13:00 was evicted before the data from 2012-01-25T12:00.
Is it possible to configure redis to consistently evict the older data first?
Here are the relevant lines from my redis.conf file. Let me know if you want to see any more of the configuration:
maxmemory 10gb
maxmemory-policy volatile-ttl
vm-enabled no
|
Configuring redis to consistently evict older data first
|
When you use Firebase Hosting on top of Cloud Functions for Firebase, Hosting can act as an edge-cached layer on top of the responses from your HTTPS functions. You can read about that integration in the documentation. In particular, read the section managing cache behavior:
The main tool you'll use to manage cache is the Cache-Control header.
By setting it, you can communicate both to the browser and the CDN how
long your content should be cached. In your function, you set
Cache-Control like so:
res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
|
Let's say I have a database of 100,000 pieces of content inside of firestore. Each piece of content is unlikely to change more than once per month. My single page app, using firebase hosting, uses a function to retrieve the content from firestore, render it to HTML, and return it to the browser.
It's a waste of my firestore quotas and starts to add up to a lot of money if I'm routinely going through this process for content that is not that dynamic.
How can that piece of content be saved as a static .com/path/path/contentpage.html file to be served whenever that exact path and query are requested, rather than going through the firestore / functions process every time?
My goal is to improve speed, and reduce unnecessary firestore requests, knowing each read costs money.
Thanks!
|
Can Firebase Hosting Serve Cached Data from Cloud Functions?
|
Adding my bit to the great answers given by the community:
1. Why is the HTTP caching header Cache-Control: private added by default by IIS/ASP.NET?
Cache request directives
Standard Cache-Control directives that can be used by the client in an HTTP request.
Cache-Control: max-age=<seconds>
Cache-Control: max-stale[=<seconds>]
Cache-Control: min-fresh=<seconds>
Cache-Control: no-cache
Cache-Control: no-store
Cache-Control: no-transform
Cache-Control: only-if-cached
Cache response directives
Standard Cache-Control directives that can be used by the server in an HTTP response.
Cache-Control: must-revalidate
Cache-Control: no-cache
Cache-Control: no-store
Cache-Control: no-transform
Cache-Control: public
Cache-Control: private
Cache-Control: proxy-revalidate
Cache-Control: max-age=<seconds>
Cache-Control: s-maxage=<seconds>
IIS uses the more secure and sensible option as the default, which happens to be private.
2. How to prevent it from being added by default?
IIS/ASP.NET has allowed this to be configured since the day it was introduced, like this: ref1, ref2, ref3, ref4 and
System.Web Namespace
The System.Web namespace supplies classes and interfaces that enable browser-server communication. This namespace includes the System.Web.HttpRequest class, which provides extensive information about the current HTTP request; the System.Web.HttpResponse class, which manages HTTP output to the client; and the System.Web.HttpServerUtility class, which provides access to server-side utilities and processes. System.Web also includes classes for cookie manipulation, file transfer, exception information, and output cache control.
protected void Application_BeginRequest()
{
Context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
}
|
Why does all responses from ASP.NET contain Cache-Control: private? Even a 404 response? Is there something in IIS that sets this default value, and is there a way to configure it? Or is there something in ASP.NET that sets this?
For dynamic content (that is, all MVC results) I would not like it to be cached by the browser, since it is dynamic and can change at any time. Static content is hosted on a CDN, so is not served by IIS.
Edit:
To clarify, I understand very well what Cache-Control: private is, the difference between private, public, no-store, etc and how/when to use them. The question I have is why Cache-Control: private is added by default by IIS/ASP.NET and how to prevent it from being added by default. I understand that it can be useful to cache dynamic pages, but in my application I don't want to cache dynamic pages/responses. For example, I don't want XHR JSON responses to be cached, since they contain dynamic content. Unfortunately the server adds Cache-Control: private to all responses automatically, so I have to manually override it everywhere.
How to reproduce: Open visual studio and create a new ASP.NET Framework (yes, framework, no not Core. We are not able to migrate our system to core yet) solution with an MVC project. Now start the project in IIS Express (just press the play button), and use F12 devtools in the browser to look at the http response. You will see that it contains Cache-Control: private. My question is, what adds this header, and how can I prevent it from being added by default?
|
IIS/ASP.NET responds with cache-control: private for all requests
|
Yes.
If you only set proxy_cache_bypass on pages you don't want cached (e.g. for logged-in users), their responses will still be saved into the cache and served to people who should get cached pages (e.g. non-logged-in users).
But setting both proxy_cache_bypass and proxy_no_cache to true means that those users neither receive cached responses nor contribute to the cache.
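Put together, a typical "keep logged-in users out of the cache entirely" block looks something like this (a sketch; backend, my_cache, and the $cookie_sessionid condition are assumptions about your setup, not values from the question):

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_cache;

    # Serve these users a fresh response from the backend...
    proxy_cache_bypass $cookie_sessionid;
    # ...and also keep their responses out of the shared cache.
    proxy_no_cache $cookie_sessionid;
}
```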
|
Here's the documentation:
proxy_cache_bypass
Defines conditions under which the response will not be taken from a cache. If at least one value of the string parameters is not empty and is not equal to “0” then the response will not be taken from the cache:
proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;
Can be used along with the proxy_no_cache directive.
proxy_no_cache
Defines conditions under which the response will not be saved to a cache. If at least one value of the string parameters is not empty and is not equal to “0” then the response will not be saved:
proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
proxy_no_cache $http_pragma $http_authorization;
Can be used along with the proxy_cache_bypass directive.
Does that mean if I want to totally exclude something in cache, I should set both proxy_no_cache and proxy_cache_bypass? Is it OK if I only set proxy_cache_bypass?
|
Nginx proxy_no_cache and proxy_cache_bypass
|
I'm using this way in a project and it's working:
let mutableURLRequest = NSMutableURLRequest(URL: SERVICEURL)
mutableURLRequest.HTTPMethod = "POST"
mutableURLRequest.HTTPBody = self.createJson()
mutableURLRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")
mutableURLRequest.cachePolicy = NSURLRequestCachePolicy.ReloadIgnoringCacheData
request(mutableURLRequest).validate().responseJSON{ response in...
Hope it helps.
|
I set cache policy to request in Alamofire to ignore local cache.
Then I load a viewcontroller with network connection, then I disconnect network connection, kill the app and run it again.
Now the "no network available" error is not shown (i.e. Alamofire doesn't create an NSError object); instead the app runs as if the request succeeded, obviously getting the data from the cache. And the odd thing is that when I tried to inspect the cached data using
NSURLCache.sharedURLCache().cachedResponseForRequest(request)
nil is returned, even though the data came from the cache.
The only way I could prevent cached responses is to call NSURLCache.sharedURLCache().removeAllCachedResponses()
let request = NSURLRequest(URL: NSURL(string: url)!, cachePolicy: NSURLRequestCachePolicy.ReloadIgnoringLocalAndRemoteCacheData, timeoutInterval: 100)
Alamofire.manager.request(method, request, parameters:params)
.responseJSON { (request, response, data, error) in
if let anError = error {
if anError.code == NSURLErrorNotConnectedToInternet {
UIAlertView(title: "Alert", message: "No Network Connection Available", delegate: nil, cancelButtonTitle: "ok").show()
}
} else if let data: AnyObject = data {
println(NSURLCache.sharedURLCache().cachedResponseForRequest(request))
//prints nil
}
}
}
What I want to do is load data from the cache only if no network connection is available, something like a limited offline mode. How can I do this?
|
Alamofire loading from cache even when cache policy set to ReloadIgnoringLocalAndRemoteCacheData
|
As Matchu posted, you could implement point two from this post (same link he posted, but found via my Googling as well). This adds a dependency on JavaScript, which may or may not be something you want.
Alternatively, you could look into Fragment Caching. This allows you to cache certain portions of a page, but still generate the dynamic portions (such as forms with authenticity tokens). Using this technique, you could cache the rest of the page, but generate a new form for every request.
One final solution (but the least favourable), is to disable the authenticity token for that specific action. You can do this by adding the following to the beginning of the controller generating that form:
protect_from_forgery :except => [:your_action]
You can also turn off protect_from_forgery for the entire controller by adding the following to the beginning:
skip_before_filter :verify_authenticity_token
|
I have a simple Ruby on Rails form which includes an authenticity_token. Unfortunately, I missed that when you page-cache this page, the authenticity token becomes invalid. I'm glad I figured it out, however.
How do you solve caching in such a case?
|
Ruby on Rails form page caching including authenticity_token
|
Sounds like you want MapMaker.makeComputingMap, but you mention softKeys so I assume you are already familiar with that class.
You are right about softKeys - it will not work if you compose keys on-the-fly, because softKeys causes the map to use == instead of equals for key comparison. But you should be fine with softValues and expiration, as long as there is no side-effect from recreating an evicted entry.
|
What Guava classes are suitable for thread-safe caching? I use a composed key, which gets constructed on the fly, so softKeys() makes no sense, right? I saw ConcurrentLinkedHashMap mentioned somewhere; is it the way to go? Is it already in the recent release? Sorry for the chaotic way of asking...
Update
This question is pretty old and looking through he answers could possible be a waste of time. Since long there's a CacheBuilder which is the way to go.
|
Caching with Guava
|
Some of the details it would be nice to have answers for:
http://code.google.com/p/googleappengine/issues/detail?id=2258#c3
|
Google App Engine must have some sort of reverse caching proxy because when I set the response header Cache-Control public, max-age=300 from one of my servlets, subsequent requests to the app engine show up in the logs like this: /testcaching 204 1ms 0cpu_ms 49kb, whereas non-cached requests show up in the logs as: /testcaching 200 61ms 77cpu_ms 49kb.
Anyways, my question is: Does anyone have any more details about this reverse caching proxy?
|
Details on Google App Engine's caching proxy?
|
The Chrome cache is located at:
$HOME/.cache/google-chrome/Default/
To delete the web browsing cache:
rm -rf $HOME/.cache/google-chrome/Default/Cache/
To delete the video and music (media) cache:
rm -rf $HOME/.cache/google-chrome/Default/Media\ Cache/
There is also another cache folder under the Profile 2 folder:
rm -rf $HOME/.cache/google-chrome/Profile\ 2/Cache/
Note: do not remove the whole google-chrome folder (all folders and files) like this:
rm -rf $HOME/.cache/google-chrome/
Only remove the files inside the cache folders, keeping the folders themselves empty.
|
|
I want to be able to clear cache (both browser's own cache and possible offline cache manifests) through the command line.
|
what is the path to Chrome cache on Ubuntu?
|
@Philippe, Ensure you commented out OPcache in
/Applications/MAMP/bin/php/php5.5.3/conf/php.ini
not the one in
/Applications/MAMP/conf/php5.5.3/php.ini
|
I'm trying to turn off caching in MAMP for development; waiting for the cache to expire after making small changes is killing my productivity.
(The problem started when I changed to PHP 5.5.3; changing back doesn't fix it.)
After researching I've taken the following steps to (unsuccessfully) disable cache:
Commented out OPcache lines in php.ini and reset mamp. (and tried zero values shown)
;zend_extension="/Applications/MAMP/bin/php/php5.5.3/lib/php/extensions/no-debug-non-zts-20121212/opcache.so"
; opcache.memory_consumption=0
; opcache.interned_strings_buffer=0
; opcache.max_accelerated_files=0
; opcache.revalidate_freq=0
; opcache.fast_shutdown=1
; opcache.enable_cli=0
added PHP headers
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");
added html headers
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
I'm also using the option in google chrome to turn off caching when dev tools are open.
I'm lost here, can't think of anything else I can do to disable cache.
After changing PHP - OR - HTML code, I have to wait about 2 minutes for it to take effect. However, database changes seem to take effect immediately, so I think it's a server-side opcache.
Is there another cache that MAMP uses that I need to disable? (or a different step?)
|
Turn off Caching in MAMP
|
I don't think I'd say it's "cached" as such - but it's just stored in a field, so it's fast enough to call frequently.
The Sun JDK implementation of size() is just:
public int size() {
return size;
}
|
I was wondering, is the size() method that you can call on an existing ArrayList<T> cached?
Or is it preferable in performance critical code that I just store the size() in a local int?
I would expect that it is indeed cached, when you don't add/remove items between calls to size().
Am I right?
update
I am not talking about inlining or such things. I just want to know if the method size() itself caches the value internally, or that it dynamically computes every time when called.
|
Is ArrayList.size() method cached?
|
There's a property on the CacheItemPolicy called RemovedCallback which is of type: CacheEntryRemovedCallback. Not sure why they didn't go the standard event route, but that should do what you need.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.cacheitempolicy.removedcallback.aspx
|
I've got a simple object being cached like this:
_myCache.Add(someKey, someObj, policy);
Where _myCache is declared as ObjectCache (but injected via DI as MemoryCache.Default), someObj is the object i'm adding, and policy is a CacheItemPolicy.
If i have a CacheItemPolicy like this:
var policy = new CacheItemPolicy
{
Priority = CacheItemPriority.Default,
SlidingExpiration = TimeSpan.FromHours(1)
};
It means it will expire in 1 hour. Cool.
But what will happen is that unlucky first user after the hour will have to wait for the hit.
Is there any way i can hook into an "expired" event/delegate and manually refresh the cache?
I see there is a mention of CacheEntryChangeMonitor but can't find any meaninful doco/examples on how to utilize it in my example.
PS. I know i can use _myCache0 and expire it manually, but i can't do that in my current example because the cached data is a bit too complicated (e.g i would need to "invalidate" in like 10 different places in my code).
Any ideas?
|
.NET 4 ObjectCache - Can We Hook Into a "Cache Expired" Event?
|
I had a similar issue and decided to use django to write the sitemap files to disk in the static media and have the webserver serve them. I made the call to regenerate the sitemap every couple of hours since my content wasn't changing more often than that. But it will depend on your content how often you need to write the files.
I used a django custom command with a cron job, but curl with a cron job is easier.
Here's how I use curl, and I have apache send /sitemap.xml as a static file, not through django:
curl -o /path/sitemap.xml http://example.com/generate/sitemap.xml
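A rough Python sketch of the same write-to-disk idea, regenerating lazily on request instead of on a cron timer (the cache directory, the file naming and the render callable are made up for illustration):

```python
import os
import time
import tempfile

CACHE_DIR = tempfile.gettempdir()   # stand-in for your static media root
MAX_AGE = 2 * 60 * 60               # regenerate every couple of hours

def write_sitemap_page(page, render):
    """Write sitemap page N to disk only if the cached copy is stale.
    `render` is whatever produces the XML for that page."""
    path = os.path.join(CACHE_DIR, "sitemap-%d.xml" % page)
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
        return path                  # fresh enough; the webserver keeps serving it
    xml = render(page)
    with open(path, "w") as f:
        f.write(xml)
    return path
```

Because only the latest sitemap page ever changes, most calls hit the mtime check and return immediately.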
|
I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching it, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1]
I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site...however, this is so many sitemaps that it would completely fill memcached....so that doesn't work.
What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which as a result of the sitemap index means only changing the latest couple of sitemap pages, since the rest are always the same.)[2] But, as near as I can tell, I can only use one cache backend with django.
How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached?
Any thoughts?
[1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening.
[2] for example, if I have sitemap.xml?page=1, page=2...sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
EDIT, 2012-05-12: This has continued to be a problem, and I finally ditched Django's sitemap framework after using it with a file cache for about a year. Instead I'm now using Solr to generate the links I need in a really simple view, and I'm then passing them off to the Django template. This greatly simplified my sitemaps, made them perform just fine, and I'm up to about 2,250,000 links as of now. If you want to do that, just check out the sitemap template - it's all really obvious from there. You can see the code for this here: https://bitbucket.org/mlissner/search-and-awareness-platform-courtlistener/src/tip/alert/casepage/sitemap.py
|
How to efficiently serve massive sitemaps in django
|
The Volley library, released at Google I/O 2013, helps with all the usual problems of calling a REST API:
Volley is a library from the Android dev team that makes networking for Android apps easier and, most importantly, faster. It manages the processing and caching of network requests, and it saves developers valuable time otherwise spent writing the same network-call/cache code again and again. A further benefit of having less code is fewer bugs, which is all developers want and aim for.
Example for volley: technotalkative
– Sam003: Great! You could also find some good samples of using Volley here: github.com/stormzhang/AndroidVolley
|
My Android app gets its data using a REST API. I want to have client-side caching implemented. Do we have any in-built classes for this?
If not, is there any code that I can reuse? I remember coming across such code some time back, but I can't find it.
If nothing else works, I will write my own; the following is the basic structure:
public class MyCacheManager {
    static Map<String, Object> mycache = new HashMap<>();

    public static Object getData(String cacheid) {
        return mycache.get(cacheid);
    }

    public static void putData(String cacheid, Object obj, int time) {
        mycache.put(cacheid, obj); // TODO: honor the time-to-live
    }
}
How do I attach an expiry time to cached objects? Also, what's the best way to serialize? The cache should stay intact even if the app is closed and reopened later (if the time has not expired).
Thanks
Ajay
|
How to implement caching in android app for REST API results?
|
Maybe something more difficult to set up than Webdis, but you can do this directly in the nginx daemon with some extra modules like redis2-nginx-module. You will have to recompile nginx.
There is some good examples of configuration on the home page.
For instance :
# GET /get?key=some_key
location /get {
    set_unescape_uri $key $arg_key;  # this requires ngx_set_misc
    redis2_query get $key;
    redis2_pass foo.com:6379;
}
Of course, with a little more nginx configuration, you can get another URL pattern.
Note that for this example, you will have to compile ngx_set_misc module too.
|
I am using nginx to pass requests to a Node app. The app basically acts as a remote cache for html (checks to see if what the user is requesting is in the redis db, if it is just show that, if not grab it and store it in the redis cache and serve it up.)
I was curious if there was any way to bypass hitting the Node app by having nginx serve up the content directly from redis? I have been fooling around with the http_redis module but I can't really get it to work.
A simple example would be: http://mywebsite.com/a where nginx would serve the content up in the 'a' key or pass it on to the node app if the key did not exist. Is this even possible?
|
Using nginx to serve content directly out of a redis cache
|
I can't offer you a comprehensive set of best practices, but I can offer what I've learned so far:
Managing your cache is a good idea. My app's cache is such that I know that I'll never need more than a certain number of cached files, so whenever I insert a new file into the cache, I delete the oldest files until I'm under the limit I have set. You could do something similar based on size, or simply age.
Caching to the SD card, if it's available, is a good idea if your cache needs to take up a lot of space. You'll need to manage that space just as carefully, since it won't automatically clear that space for you. If you're caching image files, be sure to put them in a directory that begins with a dot, like "/yourAppHere/.cache". This will keep the images from showing up in the gallery, which is really annoying.
Letting the user choose the location of the cache seems like overkill to me, but if your audience is very geeky, it might be appreciated.
I haven't noticed much of a penalty when caching to the SD, but I don't know how your app uses that information.
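The age-based pruning described in the first point can be sketched like this; Python is used here just to show the idea (on Android you would sort by File.lastModified() instead):

```python
import os

def prune_cache(cache_dir, max_files):
    """Delete the oldest files until at most max_files remain.
    A sketch of the "delete oldest until under the limit" policy above."""
    files = [os.path.join(cache_dir, f) for f in os.listdir(cache_dir)]
    files = [f for f in files if os.path.isfile(f)]
    files.sort(key=os.path.getmtime)                     # oldest first
    for path in files[: max(0, len(files) - max_files)]:
        os.remove(path)
```

Call it right after inserting a new file into the cache, so the cache can never grow past the limit by more than one entry.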
|
I currently have my app caching image files in the cache sub-directory for the application. The images are used in a ListView and stored in a HashMap of SoftReferences to Bitmaps.
So my question is this: what is the best way to cache these image files without inflating the space my application uses, while remaining responsive from a user standpoint?
Things I am concerned about:
I know the user can clear the cache and that it is done automatically when space is low on internal memory, but I feel most users will see a several MB app and uninstall it. Also if the space is constantly low, my app will just keep downloading the images, making it appear slower.
Most devices have an SD card pre-installed, but what should I do when it is not inserted? The SD card may also be slower compared to internal storage, affecting my app's performance.
Should I include an option to choose the location of the cache?
Should I attempt to manage the size of my cache (be it in the /cache or /sdcard) or just forget about it?
Thank you for your time (its a long one I know) and please post any relevant experience.
|
Best Practice when Caching files in Android
|
For Nehalem: rolfed.com/nehalem/nehalemPaper.pdf
Each core in the architecture has a 128-bit write port and a
128-bit read port to the L1 cache.
128 bit = 16 bytes / clock read
AND
128 bit = 16 bytes / clock write
(can I combine read and write in a single cycle?)
The L2 and L3 caches each have a 256-bit port for reading or writing,
but the L3 cache must share its port with three other cores on the chip.
Can the L2 and L3 read and write ports be used in a single clock?
Each integrated memory controller has a theoretical bandwidth
peak of 32 Gbps.
Latency (clock ticks), some measured by CPU-Z's latency tool or by lmbench's lat_mem_rd (both use a long linked-list walk to correctly measure modern out-of-order cores like the Intel Core i7):
L1 L2 L3, cycles; mem link
Core 2 3 15 -- 66 ns http://www.anandtech.com/show/2542/5
Core i7-xxx 4 11 39 40c+67ns http://www.anandtech.com/show/2542/5
Itanium 1 5-6 12-17 130-1000 (cycles)
Itanium2 2 6-10 20 35c+160ns http://www.7-cpu.com/cpu/Itanium2.html
AMD K8 12 40-70c +64ns http://www.anandtech.com/show/2139/3
Intel P4 2 19 43 200-210 (cycles) http://www.arsc.edu/files/arsc/phys693_lectures/Performance_I_Arch.pdf
AthlonXP 3k 3 20 180 (cycles) --//--
AthlonFX-51 3 13 125 (cycles) --//--
POWER4 4 12-20 ?? hundreds cycles --//--
Haswell 4 11-12 36 36c+57ns http://www.realworldtech.com/haswell-cpu/5/
And good source on latency data is 7cpu web-site, e.g. for Haswell: http://www.7-cpu.com/cpu/Haswell.html
More about lat_mem_rd program is in its man page or here on SO.
|
What is the speed of cache access for modern CPUs? How many bytes can be read from or written to memory per processor clock tick by an Intel P4, Core2, Core i7, or AMD chip?
Please answer with both theoretical numbers (width of the load/store unit with its throughput in uOPs/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any.
PS: This is a question about the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (all instructions per tick being the widest loads), but the processor can deliver only part of that, a practical limit on loading.
|
Cache bandwidth per tick for modern CPUs
|
Sure, it's fine to cache the hash value. In fact, Python does so for strings itself. The trade-off is between the speed of the hash calculation and the space it takes to save the hash value. That trade-off is for example why tuples don't cache their hash value, but strings do (see request for enhancement #1462796).
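A minimal sketch of the caching the question describes (the class and field names are made up; the only requirement is that the fields feeding __hash__ never change after __init__):

```python
class Point:
    """Immutable 2D point that computes its hash lazily, then reuses it,
    much like str does internally."""
    __slots__ = ("x", "y", "_hash")

    def __init__(self, x, y):
        # Bypass our own __setattr__ guard to initialise the slots.
        object.__setattr__(self, "x", x)
        object.__setattr__(self, "y", y)
        object.__setattr__(self, "_hash", None)

    def __setattr__(self, name, value):
        raise AttributeError("Point is immutable")

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        if self._hash is None:  # first call: compute and cache
            object.__setattr__(self, "_hash", hash((self.x, self.y)))
        return self._hash
```

If the real hash computation is expensive, this turns every call after the first into a cheap attribute read, at the cost of one extra slot per instance.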
|
I've written a class whose .__hash__() implementation takes a long time to execute. I've been thinking of caching its hash, storing it in a variable like ._hash so the .__hash__() method would simply return ._hash. (Which will be computed either at the end of .__init__() or the first time .__hash__() is called.)
My reasoning was: "This object is immutable -> Its hash will never change -> I can cache the hash."
But now that got me thinking: You can say the same thing about any hashable object. (With the exception of objects whose hash is their id.)
So is there ever a reason not to cache an object's hash, except for small objects whose hash computation is very fast?
|
Is there any reason *not* to cache an object's hash?
|
Hopefully this will help: http://www.iis.net/ConfigReference/system.webServer/staticContent/clientCache
The <clientCache> element of the <staticContent> element specifies cache-related HTTP headers that IIS 7 and later sends to Web clients, which control how Web clients and proxy servers will cache the content that IIS 7 and later returns...
|
When I refresh my website in less than 2-3 minutes, Firebug shows these nice requests:
1. /core.css 304 Not modified
2. /core.js 304 Not modified
3. /background.jpg 304 Not modified
BUT when I refresh after >3 minutes, I get:
1. /core.css 200 OK
2. /core.js 200 OK
3. /background.jpg 304 Not modified
Why are my CSS and JS files downloaded again while the images aren't?
I'm using ASP.NET MVC 3, I DON'T use [OutputCache], and in my /Content folder (where all css, js and img files live in subfolders) I have this Web.config:
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
which sets HTTP header Cache-Control: max-age=86400 ONLY. So basically CSS, JS and images are treated the same way, but somehow CSS and JS don't get cached for a longer period... why is that?
|
Cache CSS and JS files
|
I'll share my point of view on your questions.
Two addresses that are separated by more bytes than the block size won't reside on the exact same cache line. Thus, if a core has the first address in its cache, and another core requests the second address, the first won't be removed from the cache because of that request. So a false-sharing miss won't occur.
I can't imagine how false sharing would occur when there's no concurrency at all, as there won't be anyone else but the single thread to compete for the cache line.
Taken from here, using OpenMP, a simple example to reproduce false sharing would be:
double sum = 0.0, sum_local[NUM_THREADS];
#pragma omp parallel num_threads(NUM_THREADS)
{
    int i;
    int me = omp_get_thread_num();
    sum_local[me] = 0.0;   /* entries for different threads share cache lines */
    #pragma omp for
    for (i = 0; i < N; i++)
        sum_local[me] += x[i] * y[i];
    #pragma omp atomic
    sum += sum_local[me];
}
Some general notes that I can think of to avoid false sharing would be:
a. Use private data as much as possible.
b. Sometimes you can use padding to align data, making sure that no other variables reside in the same cache line that the shared data resides in.
Any correction or addition is welcome.
|
Today I had a disagreement with my professor in the Parallel Programming class about what "false sharing" is. What my professor said makes little sense, so I pointed it out immediately. She thought "false sharing" will cause a mistake in the program's result.
I said "false sharing" happens when different memory addresses are assigned to the same cache line, and writing data to one of them causes the other to be kicked out of the cache. If the processors write to the two falsely shared addresses in alternation, neither of them can stay in the cache, so all operations result in accesses to DRAM.
That's my opinion so far. In fact I'm not definitely sure about what I said either... If I have a misunderstanding, please point it out.
So there are some questions. The cache is assumed 64 bytes aligned, 4-way set-associative.
Is it possible that two address separated by more than 64 bytes are “false sharing”?
Is it possible that a single threaded program encountered a "false sharing" issue?
What's the best code example to reproduce the "false sharing"?
In general, what should be noted to avoid "false sharing" for programmers?
|
What is "false sharing"? How to reproduce / avoid it?
|
I would go with your second option and refactor things a little bit. I would create an Interface and two Providers (which are your adapters):
public interface ICachingProvider
{
    void AddItem(string key, object value);
    object GetItem(string key);
}

public class AspNetCacheProvider : ICachingProvider
{
    // Adapt System.Web.Caching.Cache to match the interface
}

public class MemoryCacheProvider : ICachingProvider
{
    // Adapt System.Runtime.Caching.MemoryCache to match the interface
}
– Nadav: This will probably be what I will do. It's just irritating that there are two cache implementations in the BCL and they don't have a common interface.
– Seph: Where do you think that common interface would live? I suspect Microsoft didn't want to make it that you had to include some System.Runtime.*.dll as a requirement of using System.Web.Caching, and likewise they made System.Runtime.Caching because they didn't want people to have to reference System.Web.* from their applications.
– Nadav: Well, an interface is not such a big thing that you can't put it in one of the base dlls. Right now, if you want to cache data in a client you don't have anything (System.Runtime.Caching is in the Server profile).
|
I've got a class that needs to store data in a cache.
Originally I used it in an asp.net application so I used System.Web.Caching.Cache.
Now I need to use it in a Windows Service.
Now, as I understand it, I should not use the ASP.NET cache in a non-ASP.NET application, so I was looking into MemoryCache.
The problem is that they are not compatible, so either I change to use MemoryCache in the ASP.NET application, or I will need to create an adapter to ensure that the two cache implementations have the same interface (maybe derive from ObjectCache and use the ASP.NET cache internally?)
What are the implications of using MemoryCache in an asp.net?
Nadav
|
dotnet System.Web.Caching.Cache vs System.Runtime.Caching.MemoryCache
|
I'd check and see that:
MIME type really is text/cache-manifest.
Your cache-manifest starts with CACHE MANIFEST, your urls thereafter are either relative to the manifest or absolute URLs.
You don't have any broken links in your manifest, or a forced NETWORK: tag.
– Luthfur: Thanks Ben, turns out we are using a forced NETWORK tag. I will try it out without the tag and see how it goes.
|
This is regarding HTML5 offline apps on Android devices.
We are running into an issue where an offline-capable HTML5 app (with a complete cache manifest file), bookmarked on the Android browser, fails to load under the following conditions:
Bookmark the app on the browser
Switch off all wireless connectivity
Close the browser completely
Attempt to launch the bookmark from the homescreen
We end up with an "Unable to connect to the internet" message. The app works perfectly fine on iOS devices when saved to homescreen and on airplane mode.
Is there a specific way the app should be saved, or is this an Android specific quirk?
|
HTML5 Offline app on Android devices
|
A cache is there to increase performance. So defeating a cache means finding a pattern of memory accesses that decreases performance (in the presence of the cache) rather than increases it.
Bear in mind that the cache is limited in size (smaller than main memory, for instance) so typically defeating the cache involves filling it up so that it throws away the data you're just about to access, just before you access it.
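Even from Python you can glimpse the effect the answer describes (the interpreter's overhead hides most of it; the assignment's assembly version shows it much more sharply). The work is identical, only the visit order changes:

```python
import random
import time

def walk(data, order):
    # Sum the same elements; only the visit order differs.
    total = 0
    for i in order:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))

sequential = list(range(n))   # cache-friendly: neighbours in memory
shuffled = list(range(n))
random.shuffle(shuffled)      # cache-hostile: almost every step jumps far away

t0 = time.perf_counter()
s1 = walk(data, sequential)
t1 = time.perf_counter()
s2 = walk(data, shuffled)
t2 = time.perf_counter()

# The two sums are identical; on most machines the shuffled walk takes
# noticeably longer because it defeats the locality the sequential walk exploits.
print("sequential:", t1 - t0, "shuffled:", t2 - t1, "equal:", s1 == s2)
```

In assembly the same trick applies directly: stride through memory at the cache's block size (or at the set-conflict distance for a direct-mapped cache) so that each access evicts data you are about to need again.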
|
I have this question on my assignment this week, and I don't understand how the caches can be defeated, or how I can show it with an assembly program. Can someone point me in the right direction?
Show, with assembly program examples, how the two different caches (Associative and Direct Mapping) can be defeated. Explain why this occurs and how it can be fixed. Are the same programs used to defeat the caches the same?
Note: This is homework. Don't just answer the question for me, it won't help me to understand the material.
|
How can caches be defeated?
|
This topic is old, but I thought I would share my solution. To get Firefox, Chrome and Safari to behave consistently, you have to set an unload handler on the page that needs to be reloaded when going back, and also use cache-busting headers.
Example
In HTTP Headers
Cache-Control: must-revalidate, no-store, no-cache, private
And in the javascript for the page
$(window).unload(function(){}); // Does nothing but break the bfcache
Read here for more info: http://madhatted.com/2013/6/16/you-do-not-understand-browser-history
|
I have two pages, A and B. The flow is as follows:
Go to A
javascript Ajaxes a bunch of content to add to A, forming A'
go to B
pressing [Back] goes back to A, not A', without all the Ajaxed content
Has anyone else noticed this, and if so, how do you fix it?
If Chrome was caching the A' state just before going to B, and reproduced A' upon back, that would be acceptable. If Chrome simply reloaded the entirety of A (including the Ajax requests that transformed it into A'), that would work too. The current behaviour, loading an old, incomplete version of A, is not what I want.
EDIT: I know it's loading a cached version because the server isn't receiving any new requests when I hit [Back].
|
Chrome back button: only giving cached version of initial page, without any Ajaxed content
|
Clear the Linux file cache (requires root):
sync && echo 1 > /proc/sys/vm/drop_caches
Create a large file that uses all your RAM
dd if=/dev/zero of=dummyfile bs=1024 count=LARGE_NUMBER
(don't forget to remove dummyfile when done).
|
My Java program spends most of its time reading some files, and I want to optimize it, e.g., by using concurrency, prefetching, memory-mapped files, or whatever.
Optimizing without benchmarking is nonsense, so I benchmark. However, during the benchmark the whole file content gets cached in RAM, unlike in the real run. Thus the run times of the benchmark are much smaller and most probably unrelated to reality.
I'd need to somehow tell the OS (Linux) not to cache the file content, or better, to wipe out the cache before each benchmark run. Or maybe consume most of the available RAM (32 GB), so that only a tiny fraction of the file content fits in. How do I do that?
I'm using Caliper for benchmarking, but in this case I don't think it's necessary (it's by no means a microbenchmark) and I'm not sure it's a good idea.
|
How to measure file read speed without caching?
|
It seems that making a change to your manifest file (like adding a version number) should make the app reload content: Mobile Web App not clearing cache properly
You should add a version number to your CSS and JS URLs to solve caching issues.
e.g. file.css?v=2
Cheers,
Z.
– Develoger: Thanks for the answer, but I have already done this. I tried adding a version to the manifest file via a # comment that I change. I am also dynamically creating both CSS and JS files on every page load and adding a timestamp parameter at the end of their sources, like file.css?timestamp=1380270300000.
|
I have a JavaScript application that I run in standalone mode (added to home screen) on an iPad.
I have upgraded from iOS 6 to iOS 7, and now my app always loads the same content; it keeps caching.
This happens even though I load my JS and CSS files dynamically on every app load with a unique timestamp as a parameter. I needed that to be sure the JS and CSS files are not cached, and it worked in iOS 6.
I tried the following:
removing app from home screen
deleting cookies and website data
restarting iPad
I had manifest="manifest.appcache"; I removed that (then tried 1 to 3 again)
I have added following meta tags
<meta http-equiv="cache-control" content="no-cache"> and
<meta http-equiv="pragma" content="no-cache">
Since I develop this locally and that page is served from my desktop PC I have tried to change the IP of my PC and then tried 1 to 3 but it did not solve my problem.
So none of my changes appear, no matter whether they are in the HTML file or in JS.
There are obviously some changes in how iOS 7 handles standalone apps. What am I missing?
|
iOS standalone web app cant load new content after upgrade to iOS7
|
We faced a similar situation some time back, but we recently found a GitHub Action that helps in caching the Docker layers and images between subsequent runs.
I am sure that your problem can also be solved with it. Here is the link to the action: https://github.com/satackey/action-docker-layer-caching.
Configuration Example
You can add the following lines above the docker run step to enable caching in GitHub Actions:
- uses: satackey/[email protected]
  continue-on-error: true
– Joshua: This turned out to be slower than my current solution (pull from cache). Nice idea though.
|
On Github Actions, I'd like to avoid having to pull my newly built Docker image from the registry when I have it in a cache (and this is the slowest part of my jobs)
My workflow is something like
Build an image (with all my dependencies baked in)
Run a command within the above image
As per the Docker Build Push Action docs, setting up the cache-to and cache-from to point to gha has helped speed up step 1 a lot.
However, when I run docker run ghcr.io/org/image:new-tag command, it always starts with
Unable to find image 'ghcr.io/org/image:new-tag' locally
new-tag: Pulling from org/image
...
5402d6c1eb0a: Pulling fs layer
...
which takes around 50 seconds (of a total job time of ~75 seconds).
This seems unnecessary when there's a cache sat within reach that contains this information, however I don't know how to tell my docker run command how to make use of this cache as, as far as I can see, there's no --cache-from=gha equivalent option for docker run.
How can I tell docker to look in the gha cache for an image when I call docker run on Github Actions?
|
How can I use my "gha" Docker cache to speed up Docker pull, as well as Docker build on Github Actions?
|
I'm not sure if you've solved this problem yet (several months later...), but this should be possible.
SetMaxAge should set the amount of "guaranteed" fresh time. If you additionally send an ETag, you'll have satisfied 3 & 4. Requirements 1 & 2 can be solved orthogonally with whatever server-side caching mechanism you use: I've never used the ASP.NET server-side cache like this, but it's almost certainly possible.
I'd remove extraneous headers from your responses such as SetRevalidation - why would this be necessary?
|
I have an ASP.Net site (happens to be MVC, but that's not relevant here) with a few pages I'd like cached really well.
Specifically I'd like to achieve:
output cached on the server for 2 hours.
if the file content on the server changes, that output cache should be flushed for that page
cached in the browser for 10 minutes (i.e. don't even ask the server if it's that fresh)
when the browser does make an actual subsequent request, I'd like it to use etags, so that the server can return a 304 if not modified.
(note - time values above are indicative examples only)
1) and 2) I can achieve by Response.Cache.SetCacheability(HttpCacheability.Server)
I know 3) can be achieved by using max-age and cache-control:private
I can emit etags with Response.Cache.SetETagFromFileDependencies();
but I can't seem to get all of these things to work together. Here's what I have:
Response.Cache.SetCacheability(HttpCacheability.ServerAndPrivate);
Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
Response.Cache.SetETagFromFileDependencies();
Response.Cache.SetValidUntilExpires(true);
Response.Cache.SetMaxAge(TimeSpan.FromSeconds(60 * 10));
Is the scenario I want possible? In particular:
can browsers do both 3) and 4) like that? When Firefox issues a new request after it expires in the local cache, it does indeed send the etag the server responded with before, but I get a 200 response.
setting the variables like above, where would I set the duration of the output caching?
Thanks for any tips!
|
Setting optimum http caching headers and server params in ASP.Net MVC and IIS 7.5
|
To make a long story short, just define your AFNetworking manager like this:
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
[manager.requestSerializer setCachePolicy:NSURLRequestReloadIgnoringLocalCacheData];
Enjoy!
|
I'm using this code to pull a simple JSON feed from a server:
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
manager.responseSerializer = [AFJSONResponseSerializer serializer];
[manager GET:kDataUrl parameters:nil
    success:^(AFHTTPRequestOperation *operation, id responseObject) {
        NSLog(@"response: %@", responseObject);
    }
    failure:^(AFHTTPRequestOperation *operation, NSError *error) {
        NSLog(@"JSON DataError: %@", error);
    }];
It works. However, after I change the JSON file at kDataUrl, and verify that the change is made in a browser, when I run the app again, I still get the previous response.
It seems that AFNetworking is somehow caching the old response. I do not want this behavior. I want to download the current feed. Is there some kind of setting or parameter I need to set to turn off caching?
|
AFNetworking - do not cache response
|
Volley's Request class deals with all network requests. I have not yet found any class that loads resources from disk.
|
I'm looking for an open source image loading/caching solution.
I am looking in to:
Google's Volley,
Square's Picasso
Universal Image Loader
I want to be able to handle async image loads from disk as well as from the network; however, I'm not sure if Google's Volley handles loading from disk.
Does Volley allow resource loading from disk?
An example of what I would like to do is available with AQuery.
|
Which provides better Image Loading/Caching - Volley or Picasso?
|
Returning a real, actual file object from a view sounds like something is wrong. I can see returning the contents of a file, feeding those contents into an HttpResponse object. If I understand you correctly, you're caching the results of this view into a file. Something like this:
def myview(request):
    file = open('somefile.txt', 'r')
    return file  # This isn't gonna work. You need to return an HttpResponse object.
I'm guessing that if you turned caching off entirely in settings.py, your "can't pickle a file object" would turn into a "view must return an http response object."
If I'm on the right track with what's going on, then here are a couple of ideas.
You mentioned you're making a file-based cache for this one view. You sure you want to do that instead of just using memcached?
If you really do want a file, then do something like:
def myview(request):
    file = open('somefile.txt', 'r')
    contents = file.read()
    resp = HttpResponse()
    resp.write(contents)
    file.close()
    return resp
That will solve your "cannot pickle a file" problem.
|
In Django, I wrote a view that simply returns a file, and now I am having problems because memcache is trying to cache that view and, in its words, "TypeError: can't pickle file objects".
Since I actually do need to return files with this view (I've essentially made a file-based cache for this view), what I need to do is somehow make it so memcache can't or won't try to cache the view.
I figure this can be done in two ways. First, block the view from being cached (a decorator would make sense here), and second, block the URL from being cached.
Neither seems to be possible, and nobody else seems to have run into this problem, at least not on the public interwebs. Help?
Update: I've tried the @never_cache decorator, and even thought it was working, but while that sets the headers so other people won't cache things, my local machine still does.
|
Disable caching for a view or url in django
|
Here is a simple function that adds caching to fetching some URL contents:
function getJson($url) {
// cache files are created like cache/abcdef123456...
$cacheFile = 'cache' . DIRECTORY_SEPARATOR . md5($url);
    if (file_exists($cacheFile)) {
        $fh = fopen($cacheFile, 'r');
        $size = filesize($cacheFile);
        $cacheTime = trim(fgets($fh));
        // if data was cached recently, return cached data
        if ($cacheTime > strtotime('-60 minutes')) {
            $json = fread($fh, $size);
            fclose($fh);
            return $json;
        }
        // else delete the stale cache file
        fclose($fh);
        unlink($cacheFile);
    }
$json = file_get_contents($url); // fetch from Twitter as usual
$fh = fopen($cacheFile, 'w');
fwrite($fh, time() . "\n");
fwrite($fh, $json);
fclose($fh);
return $json;
}
It uses the URL to identify cache files, a repeated request to the identical URL will be read from the cache the next time. It writes the timestamp into the first line of the cache file, and cached data older than an hour is discarded. It's just a simple example and you'll probably want to customize it.
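For readers working outside PHP, the same scheme (URL hashed into a file name, timestamp stored on the first line, entries older than an hour discarded) can be sketched in Python; everything here is illustrative and not part of the original answer:

```python
import hashlib
import os
import time

CACHE_DIR = "cache"   # hypothetical cache directory
MAX_AGE = 60 * 60     # one hour, matching the PHP example

def cached_fetch(url, fetch):
    """Return cached content for url if fresh, else call fetch(url) and cache it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, hashlib.md5(url.encode()).hexdigest())
    if os.path.exists(path):
        with open(path) as fh:
            stamp = float(fh.readline())
            if time.time() - stamp < MAX_AGE:
                return fh.read()      # still fresh: serve from disk
        os.remove(path)               # stale: discard before re-fetching
    body = fetch(url)
    with open(path, "w") as fh:
        fh.write("%d\n%s" % (time.time(), body))
    return body
```

A second call with the same URL inside the hour is served from disk without invoking `fetch` again.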
|
Got a slight bit of an issue. Been playing with the facebook and twitter API's and getting the JSON output of status search queries no problem, however I've read up further and realised that I could end up being "rate limited" as quoted from the documentation.
I was wondering is it easy to cache the JSON output each hour so that I can at least try and prevent this from happening? If so how is it done? As I tried a youtube video but that didn't really give much information only how to write the contents of a directory listing to a cache.php file, but it didn't really point out whether this can be done with JSON output and certainly didn't say how to use the time interval of 60 minutes or how to get the information then back out of the cache file.
Any help or code would be very much appreciated as there seems to be very little in tutorials on this sorta thing.
|
Caching JSON output in PHP
|
We do a lot of component caching and not all of them are updated at the same time. So we set host and timestamp values in a universally included context processor. At the top of each template fragment we stick in:
<!-- component_name {{host}} {{timestamp}} -->
The component_name just makes it easy to do a View Source and search for that string.
All of our views that are object-detail pages define a context variable "page_object" and we have this at the top of the base.html template master:
<!-- {{page_object.class_id}} @ {{timestamp}} -->
class_id() is a method from a super class used by all of our primary content classes. It is just:
def class_id(self):
"%s.%s.%s" % (self.__class__._meta.app_label,
self.__class__.__name__, self.id)
If you load a page and any of the timestamps are more than few seconds old, it's a pretty good bet that the component was cached.
|
Is there a way to be sure that a page is coming from cache on a production server and on the development server as well?
The solution shouldn't involve caching middleware because not every project uses them. Though the solution itself might be a middleware.
Just checking if the data is stale is not a very safe testing method IMO.
|
How to test django caching?
|
System.Web.Caching.Cache: this is the implementation of .NET caching.
System.Web.HttpContext.Current.Cache: this is the instance of that implementation that lives in the application domain.
I think you want to use the second one if you are not in the code-behind of an aspx page. Use Cache if you are in the code-behind of an aspx page.
You can also use Page.Cache.Insert directly, which has a reference to the System.Web.Caching.Cache through the page object. All of these point to the same application cache, which is global for all users.
edited Oct 24, 2013 at 10:17 by Michał Powaga; answered Jul 22, 2011 at 14:53 by coder net
if you use HttpContext.Current.Cache["Item"] = object, its not working?
– coder net
Jul 22, 2011 at 15:02
I think you are not in the page class (code behind of an asp.net page) and thats why the Cache does not work directly. In the code behind, you can use the syntax you mentioned in the links above.
– coder net
Jul 22, 2011 at 15:04
Oh yes .. nice catch. I am making a class library to interact with cache. How can I do that in a separate class?
– Riz
Jul 22, 2011 at 15:19
instead of using "Cache" directly, just use "System.Web.HttpContext.Current.Cache". You may need to add a reference to system.web assembly depending on what your library is.
– coder net
Jul 22, 2011 at 15:23
If your application structure is a WEB FARM, will the same .NET implementation work or you have to tweak it?
– Zeeshan Ajmal
Mar 30, 2020 at 20:28
|
|
I am trying to use the Cache, but get the error below. How can I properly use the Cache?
protected void Page_Load(object sender, EventArgs e) {
x = System.DateTime.Now.ToString();
if (Cache["ModifiedOn"] == null) { // first time so no key/value in Cache
Cache.Insert("ModifiedOn", x); // inserts the key/value pair "Modified On", x
}
else { // Key/value pair already exists in the cache
x = Cache["ModifiedOn"].ToString();
} }
'System.Web.Caching.Cache' is a 'type' but is used like a 'variable'
|
Using System.Web.Caching.Cache
|
Yes, definitely use $this.
A new jQuery object must be constructed each time you use $(this), while $this keeps the same object for reuse.
A performance test shows that $(this) is significantly slower than $this. However, as both are performing millions of operations a second, it is unlikely either will have any real impact, but it is better practice to reuse jQuery objects anyway. Where real performance impacts arise is when a selector, rather than a DOM object, is repeatedly passed to the jQuery constructor - e.g. $('p').
As for the use of var, again always use var to declare new variables. By doing so, the variable will only be accessible in the function it is declared in, and will not conflict with other functions.
Even better, jQuery is designed to be used with chaining, so take advantage of this where possible. Instead of declaring a variable and calling functions on it multiple times:
var $this = $(this);
$this.addClass('aClass');
$this.text('Hello');
...chain the functions together to make the use of an additional variable unnecessary:
$(this).addClass('aClass').text('Hello');
|
Assume I have the following example:
Example One
$('.my_Selector_Selected_More_Than_One_Element').each(function() {
$(this).stuff();
$(this).moreStuff();
$(this).otherStuff();
$(this).herStuff();
$(this).myStuff();
$(this).theirStuff();
$(this).children().each(function(){
howMuchStuff();
});
$(this).tooMuchStuff();
// Plus just some regular stuff
$(this).css('display','none');
$(this).css('font-weight','bold');
$(this).has('.hisBabiesStuff').css('color','light blue');
$(this).has('.herBabiesStuff').css('color','pink');
});
Now, it could be:
Example Two
$('.my_Selector_Selected_More_Than_One_Element').each(function() {
$this = $(this);
$this.stuff();
$this.moreStuff();
$this.otherStuff();
$this.herStuff();
$this.myStuff();
$this.theirStuff();
$this.children().each(function(){
howMuchStuff();
});
$this.tooMuchStuff();
// Plus just some regular stuff
$this.css('display','none');
$this.css('font-weight','bold');
$this.has('.hisBabiesStuff').css('color','light blue');
$this.has('.herBabiesStuff').css('color','pink');
});
The point isn't the actual code, but the use of $(this) when it is used more than once/twice/three times or more.
Am I better off performance-wise using example two than example one (maybe with an explanation why or why not)?
EDIT/NOTE
I suspect that two is better than one; what I was a little fearful of was peppering my code with $this and then inadvertently introducing a potentially difficult-to-diagnose bug when I inevitably forget to add the $this to an event handler. So should I use var $this = $(this), or $this = $(this) for this?
Thanks!
EDIT
As Scott points out below, this is considered caching in jQuery.
http://jquery-howto.blogspot.com/2008/12/caching-in-jquery.html
Jared
|
Does using $this instead of $(this) provide a performance enhancement?
|
APC caching apparently doesn't play nicely with Magento; disabling it exposed a PHP error that an outdated theme was producing.
|
Magento isn't displaying anything but a white homepage, in the error_log the error given is:
client denied by server configuration: /var/www/httpdocs/app/etc/local.xml
I can access the admin area fine, does anyone know why this might happen?
|
magento client denied by server configuration
|
This will delete the cache:
public static void deleteCache(Context context) {
try {
File dir = context.getCacheDir();
if (dir != null && dir.isDirectory()) {
deleteDir(dir);
}
} catch (Exception e) {}
}
public static boolean deleteDir(File dir) {
if (dir != null && dir.isDirectory()) {
String[] children = dir.list();
for (int i = 0; i < children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
if (!success) {
return false;
}
}
}
return dir.delete();
}
|
I need to find a way how to clear the data which my application stores in cache. Basically I am using Fedor's ( Lazy load of images in ListView ) lazy list implementation and I want to clear the cache automatically when I have for example 100 images loaded. Any ideas how to do that?
EDIT:
Code :
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
list=(ListView)findViewById(R.id.list);
adapter=new LazyAdapter(this, mStrings);
list.setAdapter(adapter);
deleteCache(this);
adapter.notifyDataSetChanged();
}
public static void deleteCache(Context context) {
try {
File dir = context.getCacheDir();
if (dir != null && dir.isDirectory()) {
deleteDir(dir);
}
} catch (Exception e) {}
}
public static boolean deleteDir(File dir) {
if (dir != null && dir.isDirectory()) {
String[] children = dir.list();
for (int i = 0; i < children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
if (!success) {
return false;
}
}
}
return dir.delete();
}
|
How to clear cache Android
|
The System.Web.Caching.Cache object itself is thread-safe.
The issue is how to obtain a reference to it in a way that works throughout your application. HttpContext.Current returns null unless it is called on a thread that is handling an ASP.NET request. An alternative way to get the Cache is through the static property System.Web.HttpRuntime.Cache. This will avoid the problems with the HttpContext.
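The requirement itself, a single process-wide cache that any thread can reach without going through request context, is not .NET-specific; here is a minimal Python sketch of the same shape (illustrative only, not an ASP.NET API):

```python
import threading

class AppCache:
    """Process-wide cache safe to use from any thread (illustrative sketch)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}

    def get(self, key, default=None):
        with self._lock:
            return self._store.get(key, default)

    def insert(self, key, value):
        with self._lock:
            self._store[key] = value

# a single module-level instance, analogous in spirit to HttpRuntime.Cache
cache = AppCache()
```

Worker threads can call `cache.insert` and `cache.get` without any per-request state.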
answered Mar 13, 2010 at 1:21 by Jason Kresowaty
Oh cool, I don't know how I never came across HttpRuntime... +1, deleting my answer. :)
– Tanzelax
Mar 13, 2010 at 1:29
+1: This is the right answer. Here's some details: weblogs.asp.net/pjohnson/archive/2006/02/06/437559.aspx
– Brian MacKay
May 25, 2011 at 20:31
The pjohnson page has moved to weblogs.asp.net/pjohnson/…. The redirect doesn't work; took a while to find it with wayback machine.
– goodeye
Aug 28, 2015 at 15:55
|
|
Normally i have a static class that reads and writes to HttpContext.Current.Cache
However since adding threading to my project, the threads all get null reference exceptions when trying to retrieve this object.
Is there any other way i can access it, workarounds or another cache i can use?
|
Accessing the ASP.NET Cache from a Separate Thread?
|
There is a query plan cache in Hibernate. So the HQL is not parsed every time the DAO is called (so #1 really occurs only once in your application life-time). It's QueryPlanCache. It's not heavily documented, as it "just works". But you can find more info here.
|
I'm using JPA to load and persist entities in my Java EE-based web application. Hibernate is used as an implementation of JPA, but I don't use Hibernate-specific features and only work with pure JPA.
Here is some DAO class, notice getOrders method:
class OrderDao {
EntityManager em;
List getOrders(Long customerId) {
Query q = em.createQuery(
"SELECT o FROM Order o WHERE o.customerId = :customerId");
q.setParameter("customerId", customerId);
return q.getResultList();
}
}
Method is pretty simple but it has a big drawback. Each time the method is called following actions are performed somewhere within JPA implementation:
JPQL expression is parsed and compiled to SQL.
Either Statement or PreparedStatement instance is created and initialized.
Statement instance is filled with parameters and executed.
I believe that steps 1 and 2 of above should be implemented once per application lifetime. But how to make it? In other words, I need that Query instances to be cached.
Of course I can implement such a cache on my side. But wait, I am using modern powerful ORM's! Didn't they already made this for me?
Notice that I'm not mentioning something like Hibernate query cache which caches result of queries. Here I'd like to execute my queries a bit more quickly.
|
JPA: caching queries
|
It's an old one but...
set this in your web.config under system.web
<caching>
<outputCache enableOutputCache="false" />
</caching>
edited Feb 17, 2012 at 1:09; answered Jan 14, 2011 at 16:56 by Tony Basallo
Shouldn't this be false instead of true?
– ashes999
Feb 13, 2012 at 19:16
@TonyBasallo This really does not work on IIS Express 8.Why so ?
– Freshblood
Sep 15, 2013 at 4:05
|
|
I use OutputCache in an ASP.net MVC application. As developing with an active OutputCache is not very pleasant I want to disable the OutputCache on the Development Systems (local machines and development server).
What is the best way to do this?
|
Disable OutputCache on Development System
|
Caching + in memory computation is definitely a big thing for spark, However there are other things.
RDD (Resilient Distributed Dataset): an RDD is the main abstraction of Spark. It allows recovery of failed nodes by re-computation of the DAG, while also supporting a recovery style more similar to Hadoop by way of checkpointing, which reduces the dependencies of an RDD. Storing a Spark job in a DAG allows for lazy computation of RDDs and also lets Spark's optimization engine schedule the flow in ways that make a big difference in performance.
Spark API: Hadoop MapReduce has a very strict API that doesn't allow for as much versatility. Since spark abstracts away many of the low level details it allows for more productivity. Also things like broadcast variables and accumulators are much more versatile than DistributedCache and counters IMO.
Spark Streaming: spark streaming is based on a paper Discretized Streams, which proposes a new model for doing windowed computations on streams using micro batches. Hadoop doesn't support anything like this.
As a product of in-memory computation, Spark sort of acts as its own flow scheduler, whereas with standard MR you need an external job scheduler like Azkaban or Oozie to schedule complex flows.
The Hadoop project is made up of MapReduce, YARN, commons and HDFS; Spark, however, is attempting to create one unified big data platform with libraries (in the same repo) for machine learning, graph processing, streaming, multiple SQL-type libraries, and I believe a deep learning library is in the beginning stages. While none of this is strictly a feature of Spark, it is a product of Spark's computing model. Tachyon and BlinkDB are two other technologies that are built around Spark.
|
I have started to learn about Apache Spark and am very impressed by the framework. Although one thing which keeps bothering me is that in all Spark presentations they talk about how Spark caches the RDDs and therefore multiple operations which need the same data are faster than other approaches like Map Reduce.
So the question I had is that if this is the case, then just add a caching engine inside of MR frameworks like Yarn/Hadoop.
Why to create a new framework altogether?
I am sure I am missing something here and you will be able to point me to some documentation which educates me more on spark.
|
Is caching the only advantage of spark over map-reduce?
|
Dirty tracking is the normal way to handle this, I think. Something like:
class MyObject {
    private string _someValue;
    public string SomeValue {
        get { return _someValue; }
        set {
            if (value != SomeValue) {
                IsDirty = true;
                _someValue = value;
            }
        }
    }
    public bool IsDirty {
        get;
        private set;
    }
    public void SaveToDatabase() {
        // persist the object here, then reset the flag
        IsDirty = false;
    }
}

myoldObject = new MyObject { SomeValue = "old value" };
cache.Insert("myObjectKey", myoldObject);
myNewObject = cache.Get("myObjectKey");
myNewObject.SomeValue = "new value";
if (myNewObject.IsDirty)
    myNewObject.SaveToDatabase();
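The dirty-flag pattern above is language-neutral; for illustration, here is the same idea in a short Python sketch (all names invented here):

```python
class Cached:
    """Tracks whether a field changed since the last save (illustrative)."""
    def __init__(self, value):
        self._value = value
        self.is_dirty = False

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        # only a real change flips the flag; re-assigning the same value is a no-op
        if new != self._value:
            self._value = new
            self.is_dirty = True

    def save(self):
        if self.is_dirty:
            # write to the database here (omitted), then clear the flag
            self.is_dirty = False
            return True    # a save happened
        return False       # nothing to do
```

The comparison happens against the object's own last-saved state, so no second cached copy is needed.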
|
So I just fixed a bug in a framework I'm developing. The pseudo-pseudocode looks like this:
myoldObject = new MyObject { someValue = "old value" };
cache.Insert("myObjectKey", myoldObject);
myNewObject = cache.Get("myObjectKey");
myNewObject.someValue = "new value";
if(myObject.someValue != cache.Get("myObjectKey").someValue)
myObject.SaveToDatabase();
So, essentially, I was getting an object from the cache, and then later on comparing the original object to the cached object to see if I need to save it to the database in case it's changed. The problem arose because the original object is a reference...so changing someValue also changed the referenced cached object, so it'd never save back to the database. I fixed it by cloning the object off of the cached version, severing the reference and allowing me to compare the new object against the cached one.
My question is: is there a better way to do this, some pattern, that you could recommend? I can't be the only person that's done this before :)
|
Caching Patterns in ASP.NET
|
Turns out I needed not only to add the __typename, but the ID also needed to be the one resolved by default (explained here).
So I needed to do the following in order to make it work:
client.writeFragment({
id: `Thing:${id}`,
fragment: gql`
fragment my_thing on Thing {
status
}
`,
data: {
__typename: 'Thing',
status
}
})
|
In react-apollo 2.0.1 I have a graphql type that looks like this:
type PagedThing {
data: [Thing]
total: Int
}
When doing the following writeFragment
client.writeFragment({
id,
fragment: gql`
fragment my_thing on Thing {
status
}
`,
data: {
status
}
})
The cache is not update and the new data is not shown on the UI. Is there something else that need to be done?
PS: To be safe I used fragment matching
Edit 1:
I receive an error of:
Cannot match fragment because __typename property is missing: {"status":"online"}
So I changed the code to:
client.writeFragment({
id,
fragment: gql`
fragment my_thing on Thing {
status
}
`,
data: {
__typename: 'Thing',
status
}
})
And no error is thrown but the updated still do not happen
|
Apollo writeFragment not updating data
|
This is not a legal way, but it works:
expires_at = Rails.cache.send(:read_entry, 'my_key', {})&.expires_at
expires_at - Time.now.to_f if expires_at
read_entry is protected method that is used by fetch, read, exists? and other methods under the hood, that's why we use send.
It may return nil if there is no entry at all, so use try with Rails, or &. for safe navigation, or .tap{ |e| e.expires_at unless e.nil? } for old rubies.
As a result you will get something like 1587122943.7092931. That's why you need Time.now.to_f. With Rails you can use Time.current.to_f as well.
answered Apr 17, 2020 at 11:40 by Nick Roz
Have a look at these 2 methods github.com/rails/rails/blob/master/activesupport/lib/…
– Nick Roz
Apr 17, 2020 at 11:44
read_entry0 might also work, read_entry1 holds the expiration value so it can be compared without further operations.
– Sebastián Palma
Sep 18, 2020 at 12:14
@SebastianPalma but that won't give you what the original question was, which is read_entry2.
– courtsimas
Jan 6, 2021 at 4:27
Just as a follow-up to @NickRoz, here's what read_entry3 looks like: read_entry4. So it stores both the read_entry5 and read_entry6 values in the cache, which it uses to determine if the cache entry is expired or not. Neat!
– Joshua Pinter
Apr 6, 2021 at 15:00
NOTE: You need to normalize the key now so see this answer for how: stackoverflow.com/a/69159992/293280
– Joshua Pinter
Nov 26, 2022 at 6:04
|
|
Somewhere in my app I use
Rails.cache.write 'some_key', 'some_value', expires_in: 1.week
In another part of my app I want to figure out how much time it is left for that cache item.
How do I do that?
|
Get expiration time of Rails cached item
|
To ensure optimal flexibility and code reuse, Symfony2 applications leverage a variety of classes and 3rd party components. But loading all of these classes from separate files on each request can result in some overhead. To reduce this overhead, the Symfony2 Standard Edition provides a script to generate a so-called bootstrap file, consisting of multiple class definitions in a single file. By including this file (which contains a copy of many of the core classes), Symfony no longer needs to include any of the source files containing those classes. This will reduce disk IO quite a bit.
Source: Use Bootstrap Files.
|
I'm using SF2 in one of our legacy project, not the entire framework but by pulling in bundles and components I need. And I have always wondered about these lines of code:
$loader = require_once __DIR__.'/../app/bootstrap.php.cache';
require_once __DIR__.'/../app/AppKernel.php';
//require_once __DIR__.'/../app/AppCache.php';
$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
I wonder what this bootstrap.php.cache file is for, what it is for, how it is generated (if I'm not using the SF2 whole framework). I didn't use it before, and there was no problem, but I wonder if this can give me some performance boost etc that I should look into. I tried to find all around but couldn't find a document dedicated to this subject.
|
What's the purpose of the Symfony2 bootstrap.php.cache file?
|
You can actually save the output of the page before you end the script, then load the cache at the start of the script.
example code:
<?php
$cachefile = 'cache/'.basename($_SERVER['PHP_SELF']).'.cache'; // e.g. cache/index.php.cache
$cachetime = 3600; // time to cache in seconds
if (file_exists($cachefile) && time() - $cachetime <= filemtime($cachefile)) {
$c = @file_get_contents($cachefile);
echo $c;
exit;
} elseif (file_exists($cachefile)) {
    unlink($cachefile); // stale cache: remove it before regenerating
}
ob_start();
// all the coding goes here
$c = ob_get_contents();
file_put_contents($cachefile, $c);
?>
If you have a lot of pages needing this caching you can do this:
in cachestart.php:
<?php
$cachefile = 'cache/' . basename($_SERVER['PHP_SELF']) . '.cache'; // e.g. cache/index.php.cache
$cachetime = 3600; // time to cache in seconds
if (file_exists($cachefile) && time() - $cachetime <= filemtime($cachefile)) {
$c = @file_get_contents($cachefile);
echo $c;
exit;
} elseif (file_exists($cachefile)) {
    unlink($cachefile); // stale cache: remove it before regenerating
}
ob_start();
?>
in cacheend.php:
<?php
$c = ob_get_contents();
file_put_contents($cachefile, $c);
?>
Then just simply add
include('cachestart.php');
at the start of your scripts. and add
include('cacheend.php');
at the end of your scripts. Remember to have a folder named cache and allow PHP to access it.
Also do remember that if you're doing a full page cache, your page should not have SESSION specific display (e.g. display members' bar or what) because they will be cached as well. Look at a framework for specific-caching (variable or part of the page).
|
how do I cache a web page in php so that if a page has not been updated viewers should get a cached copy?
Thanks for your help.
PS: I am beginner in php.
|
How do I cache a web page in PHP?
|
Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode, MMCache will cache the scripts in memory and reuse the precompiled code.
I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results:
Zend Optimizer alone didn't help at all. Actually my scripts were slower than without optimizer.
When it comes to caches:
* fastest: eAccelerator
* XCache
* APC
And: You DO want to install a opcode cache!
For example:
(Benchmark chart: http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png)
This is the duration it took to call the WordPress homepage 10,000 times.
Edit: BTW, eAccelerator contains an optimizer itself.
|
Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web technologies. Also, are there trade-offs for using these?
|
Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster?
|
So, it turns out that OutputCaching was working, it was just that my method of testing it was flawed. The result of an action will only be cached if the response doesn't include a cookie. Of course the first response always includes a cookie if you have ASP.NET Session enabled which we do. Therefore the first response headers look like this:
HTTP/1.1 200 OK
Cache-Control: private, max-age=600
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Expires: Tue, 26 Nov 2013 03:48:44 GMT
Last-Modified: Tue, 26 Nov 2013 03:38:44 GMT
Vary: *
Set-Cookie: ASP.NET_SessionId=kbnhk4lphdlcpozcumpxilcd; path=/; HttpOnly
X-UA-Compatible: IE=Edge
Date: Tue, 26 Nov 2013 03:38:44 GMT
Content-Length: 9558
Assuming your browser or test tool can accept cookies and include those in the subsequent requests, the next request to the same page would result in HTTP response headers like so:
HTTP/1.1 200 OK
Cache-Control: private, max-age=598
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Expires: Tue, 26 Nov 2013 03:48:45 GMT
Last-Modified: Tue, 26 Nov 2013 03:38:45 GMT
Vary: *
X-UA-Compatible: IE=Edge
Date: Tue, 26 Nov 2013 03:38:45 GMT
Content-Length: 9558
As there is no client specific information in the response the output can now be cached as expected.
So, the lesson is: when testing output caching, use a testing tool that can accept and return cookies in subsequent requests.
We ended up using Jmeter rather than tinyget and everything now works as expected.
|
I am having an issue where output caching doesn't appear to be working for my ASP.NET MVC 4 (EPiServer 7) website.
I have the following output cache profile in my web.config:
<caching>
<outputCacheSettings>
<outputCacheProfiles>
<add name="PageOutput" enabled="true" duration="300" varyByParam="*" location="ServerAndClient" />
</outputCacheProfiles>
</outputCacheSettings>
</caching>
And here is my output caching configuration for static resources:
<caching>
<profiles>
<add extension=".gif" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="0.00:01:00" location="Any" />
<add extension=".png" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="0.00:01:00" location="Any" />
<add extension=".js" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="0.00:01:00" location="Any" />
<add extension=".css" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="00:01:00" location="Any" />
<add extension=".jpg" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="0.00:01:00" location="Any" />
<add extension=".jpeg" policy="DontCache" kernelCachePolicy="CacheUntilChange" duration="00:01:00" location="Any" />
</profiles>
</caching>
And my controller is decorated with an output cache attribute like so:
[OutputCache(CacheProfile = "PageOutput")]
public class HomePageController : BasePageController<HomePage>
{ ...}
I'm watching the following counters in perfmon but not seeing them increment as expected when I visit the home page:
\ASP.NET Apps v4.0.30319(__Total__)\Output Cache Entries
\ASP.NET Apps v4.0.30319(__Total__)\Output Cache Hits
I've also been testing using tinyget like so:
tinyget -srv:mywebsite -uri:/ -threads:1 -loop:20
Any advice would be greatly appreciated!
|
Why is output caching not working for my ASP.NET MVC 4 app?
|
The code has changed.
In Python 2.7, the cache is a simple dictionary; if more than _MAXCACHE items are stored in it, the whole cache is cleared before storing a new item. A cache lookup only takes building a simple key and testing the dictionary, see the 2.7 implementation of _compile()
In Python 3.x, the cache has been replaced by the @functools.lru_cache(maxsize=500, typed=True) decorator. This decorator does much more work and includes a thread-lock, adjusting the cache LRU queue and maintaining the cache statistics (accessible via re._compile.cache_info()). See the 3.3.0 implementation of _compile() and of functools.lru_cache().
Others have noticed the same slowdown, and filed issue 16389 in the Python bugtracker. I'd expect 3.4 to be a lot faster again; either the lru_cache implementation is improved or the re module will move to a custom cache again.
Update: With revision 4b4dddd670d0 (hg) / 0f606a6 (git) the cache change has been reverted back to the simple version found in 3.1. Python versions 3.2.4 and 3.3.1 include that revision.
Since then, in Python 3.7 the pattern cache was updated to a custom FIFO cache implementation based on a regular dict (relying on insertion order; unlike an LRU, it does not take into account how recently items already in the cache were used when evicting).
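The `functools.lru_cache` decorator mentioned above can be observed directly, since it exposes hit/miss counters; here is a small standalone demonstration with a stand-in for the compile step (not the real `re` internals):

```python
import functools

@functools.lru_cache(maxsize=500)
def compile_pattern(pattern):
    # stand-in for the expensive regex-compilation step
    return "<compiled %s>" % pattern

for _ in range(10):
    compile_pattern(r"\w+")   # compiled on the first call, served from cache afterwards

info = compile_pattern.cache_info()   # CacheInfo(hits=9, misses=1, ...)
```

This mirrors the 10-application scenario in the question: one miss for the initial compile, nine cache hits.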
|
When answering this question (and having read this answer to a similar question), I thought that I knew how Python caches regexes.
But then I thought I'd test it, comparing two scenarios:
a single compilation of a simple regex, then 10 applications of that compiled regex.
10 applications of an uncompiled regex (where I would have expected slightly worse performance because the regex would have to be compiled once, then cached, and then looked up in the cache 9 times).
However, the results were staggering (in Python 3.3):
>>> import timeit
>>> timeit.timeit(setup="import re",
... stmt='r=re.compile(r"\w+")\nfor i in range(10):\n r.search(" jkdhf ")')
18.547793477671938
>>> timeit.timeit(setup="import re",
... stmt='for i in range(10):\n re.search(r"\w+"," jkdhf ")')
106.47892003890324
That's over 5.7 times slower! In Python 2.7, there is still an increase by a factor of 2.5, which is also more than I would have expected.
Has caching of regexes changed between Python 2 and 3? The docs don't seem to suggest that.
|
Why are uncompiled, repeatedly used regexes so much slower in Python 3?
|
In Python 3.x, you can use os.makedirs(path, exist_ok=True), which will not raise any exception if such directory exists. It will raise FileExistsError: [Errno 17] if a file exists with the same name as the requested directory (path).
Verify it with:
import os
parent = os.path.dirname(__file__)
target = os.path.join(parent, 'target')
os.makedirs(target, exist_ok=True)
os.makedirs(target, exist_ok=True)
os.rmdir(target)
with open(target, 'w'):
pass
os.makedirs(target, exist_ok=True)
|
I have a urllib2 caching module, which sporadically crashes because of the following code:
if not os.path.exists(self.cache_location):
os.mkdir(self.cache_location)
The problem is, by the time the second line is being executed, the folder may exist, and will error:
File ".../cache.py", line 103, in __init__
os.mkdir(self.cache_location)
OSError: [Errno 17] File exists: '/tmp/examplecachedir/'
This is because the script is simultaneously launched numerous times, by third-party code I have no control over.
The code (before I attempted to fix the bug) can be found here, on github
I can't use the tempfile.mkstemp, as it solves the race condition by using a randomly named directory (tempfile.py source here), which would defeat the purpose of the cache.
I don't want to simply discard the error, as the same error Errno 17 error is raised if the folder name exists as a file (a different error), for example:
$ touch blah
$ python
>>> import os
>>> os.mkdir("blah")
Traceback (most recent call last):
File "", line 1, in
OSError: [Errno 17] File exists: 'blah'
>>>
I cannot using threading.RLock as the code is called from multiple processes.
So, I tried writing a simple file-based lock (that version can be found here), but this has a problem: it creates the lockfile one level up, so /tmp/example.lock for /tmp/example/, which breaks if you use /tmp/ as a cache dir (as it tries to make /tmp.lock)..
In short, I need to cache urllib2 responses to disc. To do this, I need to access a known directory (creating it, if required), in a multiprocess safe way. It needs to work on OS X, Linux and Windows.
Thoughts? The only alternative solution I can think of is to rewrite the cache module using SQLite3 storage, rather than files.
|
Race-condition creating folder in Python
|
Redis has no idea whether the data in DB has been updated.
Normally, we use Redis to cache data as follows:
Client checks if the data, e.g. key-value pair, exists in Redis.
If the key exists, client gets the corresponding value from Redis.
Otherwise, it gets data from DB, and sets it to Redis. Also client sets an expiration, say 5 minutes, for the key-value pair in Redis.
Then any subsequent requests for the same key will be served by Redis. Although the data in Redis might be out-of-date.
However, after 5 minutes, this key will be removed from Redis automatically.
Go to step 1.
So in order to keep the data in Redis up-to-date, you can set a short expiration time. However, your DB then has to serve many more requests.
If you want to greatly reduce the load on the DB, you can set a long expiration time, so that most of the time Redis can serve requests with possibly stale data.
You should consider carefully the trade-off between performance and stale data.
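The expiration-based cache-aside pattern described in the steps above can be sketched in Python. This is an illustration only: TTLCache is a tiny in-memory stand-in for Redis's GET/SETEX commands, and get_record/db_lookup are hypothetical names, not part of any real client library.

```python
import json
import time

class TTLCache:
    """A tiny stand-in for Redis GET/SETEX, to illustrate cache-aside."""
    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:  # expired: behave like Redis
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl, value):
        self._store[key] = (time.monotonic() + ttl, value)

cache = TTLCache()
TTL_SECONDS = 300  # the 5-minute expiration from the steps above

def get_record(record_id, db_lookup):
    key = "record:%s" % record_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # may be up to TTL_SECONDS stale
    value = db_lookup(record_id)             # cache miss: query the database
    cache.setex(key, TTL_SECONDS, json.dumps(value))
    return value
```

With a real Redis client the structure is the same: try GET first, and on a miss query the DB and SETEX the result with your chosen TTL.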
|
Say I'm fetching thousands of records from the DB using some long-running task and caching them in Redis. The next day somebody changes a few records in the DB.
Next time, how would Redis know whether it has to return the cached data or revisit all those thousands of records in the DB?
How is this synchronisation achieved?
|
How would Redis get to know if it has to return cached data or fresh data from DB
|
Internally, Dictionary computes the hash code of the key you give it and uses that integer to find the bucket, so lookups stay effectively constant-time regardless of key length (the string is hashed once per operation, and full comparison only happens on hash collisions).
You have nothing to worry about.
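The same principle can be demonstrated in Python (used here only because the document's other snippets are Python; the key names below are made up for the demo):

```python
# A short key and a long, query-style key like the ones in the question.
key_short = "k"
key_long = "table=table1;" + ";".join(
    "param%d=value%d" % (i, i) for i in range(1, 101)
)

cache = {key_short: "a", key_long: "b"}

# Hashing a long string is O(length), but it happens once per lookup;
# the bucket probe itself does not depend on key length.
print(cache[key_long])  # -> b
```

The practical cost of long keys is the memory to store the strings and the one-time hashing work, not the lookup itself.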
|
We are using HttpRuntime.Cache API in an ASP.NET to cache data retrieved from a database.
For this particular application, our database queries feature a LOT of parameters, so our cache keys look something like this:
table=table1;param1=somevalue1;param2=somevalue2;param3=somevalue3;param4=somevalue4;param5=somevalue5;param6=somevalue6... etc...
For some queries, we have so many parameters that the cache key is several hundred characters long.
My question: is there a limit to the length of these cache keys? Internally, it uses a dictionary, so theoretically the lookup time should be constant. However, I wonder whether we could run into performance or memory problems.
|
Maximum length of cache keys in HttpRuntime.Cache object?
|