Response | Instruction | Prompt
---|---|---|
What you are talking about is lifetime dependency chaining: one thing depends on another which can be modified outside of its control.
If you have an idempotent function from a, b to c where, if a and b are the same then c is the same, but the cost of checking b is high, then you either:
1. accept that you sometimes operate with out-of-date information and do not always check b, or
2. do your level best to make checking b as fast as possible.
You cannot have your cake and eat it...
If you can layer an additional cache based on a over the top, then this affects the initial problem not one bit. If you chose 1, then you have whatever freedom you gave yourself and can thus cache more, but must remember to consider the validity of the cached value of b. If you chose 2, you must still check b every time, but can fall back on the cache for c if b checks out.
If you layer caches you must consider whether you have violated the 'rules' of the system as a result of the combined behaviour.
If you know that c always has validity if b does, then you can arrange your cache like so (pseudocode):
if (b expired or not present)   // important
    c = compute(a, b)
    endCache[a] = c
return endCache[a]
Obviously successive layering (say adding a further input d) is trivial so long as, at each stage, the validity of the newly added input implies the validity of the values cached beneath it.
However it is quite possible that you could get three inputs whose validity was entirely independent (or was cyclic), so no layering would be possible. This would mean the line marked // important would have to change to
if (endCache[a] expired or not present)
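To make this concrete for the getData()/transformData() example in the question below, here is a minimal Python sketch of layering the transform cache on the file's modification time; the function names and the .upper() transform are illustrative assumptions:

```python
import os

_file_cache = {}       # path -> (mtime, contents): the getData() layer
_transform_cache = {}  # path -> (mtime, transformed): the layered cache

def get_data(path):
    """Re-read the file only when its modification time changes."""
    mtime = os.path.getmtime(path)
    cached = _file_cache.get(path)
    if cached is None or cached[0] != mtime:
        with open(path) as f:
            _file_cache[path] = (mtime, f.read())
    return _file_cache[path][1]

def transform_data(path):
    """Layered cache: valid only while the underlying file is unchanged."""
    mtime = os.path.getmtime(path)  # the cheap validity check; no re-read
    cached = _transform_cache.get(path)
    if cached is None or cached[0] != mtime:
        # Stale or missing: recompute from fresh data (placeholder transform)
        _transform_cache[path] = (mtime, get_data(path).upper())
    return _transform_cache[path][1]
```

Both layers share the same validity test (the file's mtime), which is what makes the stacking safe; if the layers had independent validity rules, the outer cache would need its own expiry check instead.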
|
"There are only two hard problems in Computer Science: cache invalidation and naming things."
Phil Karlton
Is there a general solution or method to invalidating a cache; to know when an entry is stale, so you are guaranteed to always get fresh data?
For example, consider a function getData() that gets data from a file.
It caches it based on the last modified time of the file, which it checks every time it's called.
Then you add a second function transformData() which transforms the data, and caches its result for next time the function is called. It has no knowledge of the file - how do you add the dependency that if the file is changed, this cache becomes invalid?
You could call getData() every time transformData() is called and compare it with the value that was used to build the cache, but that could end up being very costly.
| Cache Invalidation — Is there a General Solution? |
Disable OPcache
MAMP now turns on OPcache by default; you can disable it by editing your php.ini file. Make sure you edit the correct php.ini.
I was running into the same problem myself. MAMP with PHP version 5.5.3 runs OPcache by default, but you can't turn it off in the GUI like you can with the older PHP version 5.2.17. You have to manually comment out all the OPcache lines at the end of the php.ini file (MAMP/bin/php/[version]/conf/php.ini) and make sure to stop and start the servers for the changes to take effect.
I updated the URI. The changes can also be made to the php.ini under /conf/ in the php folder, but it seems MAMP will ignore those after a restart.
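For reference, the OPcache block at the end of php.ini looks roughly like this (the exact extension path varies by MAMP/PHP version, so the one below is only illustrative); commenting every line out, or setting opcache.enable=0, disables it:

```ini
;zend_extension="/Applications/MAMP/bin/php/php5.5.3/lib/php/extensions/no-debug-non-zts-20121212/opcache.so"
;opcache.enable=1
;opcache.enable_cli=1
;opcache.memory_consumption=128
;opcache.revalidate_freq=60
```

Remember to stop and start the servers afterwards for the change to take effect.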
|
Installed MAMP on a new Macbook with PHP 5.5.3.
Reload and refresh do nothing. Still nothing. Google around for a few minutes trying to find out what is wrong, come back and refresh. It works. What the heck?
I went into php.ini and disabled all the new OPcache and set the default cache time to 0. Added headers to the document to force no caching. Still same problem. What the heck is going on here?
The network tab is showing an HTTP 200 request, so any new HTML in the index.php file renders fine, but new PHP that needs to be rendered by the server is delayed and not rendered until some predetermined amount of time passes that I don't know how to change. What's going on?
I checked this in Safari too so it is definitely a server thing that is keeping the file from rendering.
Interesting fact though, if I go into MAMP and change the PHP version to the old one (PHP 5.2 or something) it will render normally, with no "caching issues". Switch to PHP 5.5 and it hangs up. In the MAMP preferences caching options for 5.5 don't even exist and are automatically disabled.
| Stop caching for PHP 5.5.3 in MAMP |
Easiest solution:
app.disable('etag');
Alternate solution here if you want more control:
http://vlasenko.org/2011/10/12/expressconnect-static-set-last-modified-to-now-to-avoid-304-not-modified/
edited May 17, 2014 at 0:04
answered May 16, 2014 at 23:48
blented
Could you explain the "easiest solution" or give a reference on how this affects?
– Samuel Méndez
Nov 20, 2018 at 8:53
@SamuelMéndez it disables caching basically, the wiki on etag has a lot of good info en.wikipedia.org/wiki/HTTP_ETag
– blented
Dec 12, 2018 at 4:19
Worked for me :)
– Naveen Kumar V
Jun 15, 2020 at 19:07
This makes sense. I believe the etag was not being recomputed properly because I wasn't setting the the last modified date, as mentioned in vlasenko's link. My problem went away when I updated my code as follows: const headers = { 'Last-Modified': (new Date()).toUTCString() }; app.get('/*', (req, res) => { res.sendFile(join(DIST_FOLDER + '/index.html'), { headers } ); });
– Robert Patterson
Dec 27, 2020 at 5:03
|
|
When I reload a website made with express, I get a blank page with Safari (not with Chrome) because the NodeJS server sends me a 304 status code.
How can I solve this?
Of course, this could also just be a problem with Safari, but it works fine on all other websites, so it has to be a problem with my NodeJS server, too.
To generate the pages, I'm using Jade with res.render.
Update: It seems like this problem occurs because Safari sends 'cache-control': 'max-age=0' on reload.
Update 2: I now have a workaround, but is there a better solution?
Workaround:
app.get('/:language(' + content.languageSelector + ')/:page', function (req, res)
{
    // Disable caching for content files
    res.header("Cache-Control", "no-cache, no-store, must-revalidate");
    res.header("Pragma", "no-cache");
    res.header("Expires", 0);
    // rendering stuff here…
});
Update 3:
So the complete code part is currently:
app.get('/:language(' + content.languageSelector + ')/:page', pageHandle);

function pageHandle (req, res)
{
    var language = req.params.language;
    var thisPage = content.getPage(req.params.page, language);
    if (thisPage)
    {
        // Disable caching for content files
        res.header("Cache-Control", "no-cache, no-store, must-revalidate");
        res.header("Pragma", "no-cache");
        res.header("Expires", 0);
        res.render(thisPage.file + '_' + language, {
            thisPage: thisPage,
            language: language,
            languages: content.languages,
            navigation: content.navigation,
            footerNavigation: content.footerNavigation,
            currentYear: new Date().getFullYear()
        });
    }
    else
    {
        error404Handling(req, res);
    }
}
| NodeJS/express: Cache and 304 status code |
Dispose the existing MemoryCache and create a new MemoryCache object.
|
I have created a cache using the MemoryCache class. I add some items to it but when I need to reload the cache I want to clear it first. What is the quickest way to do this? Should I loop through all the items and remove them one at a time or is there a better way?
| How to clear MemoryCache? |
I know this is an older question, but I wanted to post an answer for users with the same question:
curl -H 'Cache-Control: no-cache' http://www.example.com
This curl command asks the web server, via its request header, to return non-cached data.
|
Is there a way to tell the cURL command not to use the server-side cache?
e.g., I have this curl command:
curl -v www.example.com
How can I ask curl to send a fresh request and not use the cache?
Note: I am looking for an executable command in the terminal.
| How to call cURL without using server-side cache? |
Since this question was originally asked, Google's Guava library now includes a powerful and flexible cache. I would recommend using this.
|
Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I'm looking for a simple Java in-memory cache that has good concurrency (so LinkedHashMap isn't good enough), and which can be serialized to disk periodically.
One feature I need, but which has proved hard to find, is a way to "peek" at an object. By this I mean retrieve an object from the cache without causing the cache to hold on to the object any longer than it otherwise would have.
Update: An additional requirement I neglected to mention is that I need to be able to modify the cached objects (they contain float arrays) in-place.
Can anyone provide any recommendations?
| Looking for simple Java in-memory cache [closed] |
Use Sysinternals' RAMMap app.
The Empty / Empty Standby List menu option will clear the Windows file cache.
|
What tools or techniques can I use to remove cached file contents to prevent my performance results from being skewed? I believe I need to either completely clear, or selectively remove cached information about file and directory contents.
The application that I'm developing is a specialised compression utility, and is expected to do a lot of work reading and writing files that the operating system hasn't touched recently, and whose disk blocks are unlikely to be cached.
I wish to remove the variability I see in IO time when I repeat the task of profiling different strategies for doing the file processing work.
I'm primarily interested in solutions for Windows XP, as that is my main development machine, but I can also test using linux, and so am interested in answers for that environment too.
I tried SysInternals CacheSet, but clicking "Clear" doesn't result in a measurable increase (restoration to timing after a cold-boot) in the time to re-read files I've just read a few times.
| Clear file cache to repeat performance testing |
Without resorting to cut'n'paste of the link that @MYYN posted, I suspect this is because the optimisations that the JVM performs are not static, but rather dynamic, based on the data patterns as well as code patterns. It's likely that these data patterns will change during the application's lifetime, rendering the cached optimisations less than optimal.
So you'd need a mechanism to establish whether the saved optimisations were still optimal, at which point you might as well just re-optimise on the fly.
|
The canonical JVM implementation from Sun applies some pretty sophisticated optimization to bytecode to obtain near-native execution speeds after the code has been run a few times.
The question is, why isn't this compiled code cached to disk for use during subsequent uses of the same function/class?
As it stands, every time a program is executed, the JIT compiler kicks in afresh, rather than using a pre-compiled version of the code. Wouldn't adding this feature add a significant boost to the initial run time of the program, when the bytecode is essentially being interpreted?
| Why doesn't the JVM cache JIT compiled code? |
Update (July 11, 2023)
As already described in VonC's answer, the GitHub CLI now has a dedicated cache top-level command:
List of caches for the current repository:
$ gh cache list
Delete a cache for the current repository by cache ID:
$ gh cache delete <CACHE_ID>
Update (October 20, 2022)
You can now manage caches via the UI:
https://github.com/<OWNER>/<REPO>/actions/caches
Update (June 27, 2022)
You can now manage caches via the GitHub Actions Cache API:
GET list of caches for a repository:
$ curl \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/<OWNER>/<REPO>/actions/caches
DELETE a cache for a repository by cache ID:
$ curl \
-X DELETE \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/<OWNER>/<REPO>/actions/caches/<CACHE_ID>
Alternatively, you can also use the GitHub CLI to interact with the API, using the gh-actions-cache extension.
Original Post (November 13, 2020)
As pointed out in the corresponding issue, two practical workarounds can be used to force the use of a new cache. This is not exactly the same as clearing the current cache (with regards to the cache usage limits), but it does the job.
In order to do so, you have to change the cache key (and any restore-keys). Because if the key(s) is/are different, this is considered a cache miss and you start with a new one.
You can change the cache key either by modifying the workflow file directly, e.g., by adding a version number:
key: ${{ runner.os }}-${{ hashFiles('.github/R-version') }}-1-${{ hashFiles('.github/depends.Rds') }}
If you now want to use a new cache, all you have to do is to commit a different version number:
key: ${{ runner.os }}-${{ hashFiles('.github/R-version') }}-2-${{ hashFiles('.github/depends.Rds') }}
If you don't want to modify the workflow file and prefer using the UI, you can abuse secrets:
key: ${{ runner.os }}-${{ secrets.CACHE_VERSION }}-${{ hashFiles('.github/depends.Rds') }}
Whenever the secret changes, a new cache will be used.
⚠️ WARNING: Secrets used for cache keys are "revealed" in the UI.
|
I am working on an R package and using GitHub Actions (GHA) as a Continuous Integration (CI) provider. I cache R packages (dependencies) using actions/cache. Now I want to clear all of the cache. How can I do that?
A part of GHA Workflow I use:
on: push

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ${{ matrix.config.os }}
    name: ${{ matrix.config.os }} (${{ matrix.config.r }})
    strategy:
      fail-fast: false
      matrix:
        config:
          # - {os: windows-latest, r: 'devel'}
          - {os: macOS-latest, r: 'release'}
    env:
      R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
      RSPM: ${{ matrix.config.rspm }}
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@master
      - name: Query dependencies
        run: |
          repos <- c("https://r-hyperspec.github.io/hySpc.pkgs/", getOption("repos"))
          saveRDS(remotes::dev_package_deps(dependencies = TRUE), ".github/depends.Rds", version = 2)
          writeLines(sprintf("R-%i.%i", getRversion()$major, getRversion()$minor), ".github/R-version")
        shell: Rscript {0}
      - name: Cache R packages
        if: runner.os != 'Windows'
        uses: actions/cache@v1
        with:
          path: ${{ env.R_LIBS_USER }}
          key: ${{ runner.os }}-${{ hashFiles('.github/R-version') }}-1-${{ hashFiles('.github/depends.Rds') }}
          restore-keys: ${{ runner.os }}-${{ hashFiles('.github/R-version') }}-1-
      - name: Install dependencies
        run: remotes::install_deps(dependencies = TRUE)
        shell: Rscript {0}
      - name: Session info
        run: |
          options(width = 100)
          pkgs <- installed.packages()[, "Package"]
          sessioninfo::session_info(pkgs, include_base = TRUE)
        shell: Rscript {0}
| Clear cache in GitHub Actions |
Use invalidations to clear the cache. You can put the paths to the files you want to clear, or simply use wildcards to clear everything.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidating-objects-api
This can also be done using the API!
http://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateInvalidation.html
The AWS PHP SDK now has the methods but if you want to use something lighter check out this library:
http://www.subchild.com/2010/09/17/amazon-cloudfront-php-invalidator/
user3305600's solution doesn't work, as setting it to zero is the equivalent of using the Origin Cache Headers setting.
edited Apr 20, 2017 at 8:13
Roy Shmuli
answered Dec 2, 2014 at 3:22
Neo
This is indeed a better answer
– Bob van Luijt
Apr 20, 2015 at 23:00
That library looks to be outdated.
– Phil Tune
Dec 3, 2019 at 20:13
Hello, I keep seeing comments on threads re: AWS CloudFront invalidations take minutes to complete. Is this still true in 2023?
– Patrick Steil
Mar 6, 2023 at 15:34
|
|
I have a cron job that runs every 10 minutes and updates the content-type and x-amz-meta. But since yesterday it seems like, after the cron job runs, Amazon is not picking up the changes made (refreshing its cache).
I even went and made the changes manually, but no change...
When a video is uploaded it has an application/x-mp4 content-type and the cron job changes it to video/mp4.
Although S3 has the right content type, video/mp4, CloudFront shows application/x-mp4 (the old content-type)...
The cron job has been working for the last 6 months without a problem.
What is wrong with Amazon's caching? How can I synchronize their caching?
| Amazon S3 and Cloudfront cache, how to clear cache or synchronize their cache |
The functools source code is available here: https://github.com/python/cpython/blob/master/Lib/functools.py
lru_cache uses the _lru_cache_wrapper decorator (python decorator with arguments pattern) which has a cache dictionary in context in which it saves the return value of the function called (every decorated function will have its own cache dict). The dictionary key is generated with the _make_key function from the arguments. Added some bold comments below:
# ACCORDING TO PASSED maxsize ARGUMENT _lru_cache_wrapper
# DEFINES AND RETURNS ONE OF wrapper DECORATORS

def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
    # Constants shared by all lru cache instances:
    sentinel = object()    # unique object used to signal cache misses
    cache = {}             # RESULTS ARE SAVED HERE
    cache_get = cache.get  # bound method to look up a key or return None

    # ... maxsize is None:

    def wrapper(*args, **kwds):
        # Simple caching without ordering or size limit
        nonlocal hits, misses
        key = make_key(args, kwds, typed)  # BUILD A KEY FROM THE ARGUMENTS
        result = cache_get(key, sentinel)  # TRY TO GET A PREVIOUS CALL'S RESULT
        if result is not sentinel:         # ALREADY CALLED WITH THESE ARGS
            hits += 1
            return result                  # RETURN THE SAVED RESULT
                                           # WITHOUT ACTUALLY CALLING THE FUNCTION
        misses += 1
        result = user_function(*args, **kwds)  # FUNCTION CALL - if cache[key] empty
        cache[key] = result                    # SAVE THE RESULT
        return result

    # ...
    return wrapper
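A small demonstration of the behaviour described above; cache_info() exposes the hits and misses counters maintained by the wrapper:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once (a miss); repeats come from the dict (hits)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))           # 55
print(fib.cache_info())  # CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)
```

The key for each call is built from the arguments (here just n), so only the return value is stored, exactly as the source above shows.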
|
Especially when using recursive code there are massive improvements with lru_cache. I do understand that a cache is a space that stores data that has to be served fast and saves the computer from recomputing.
How does the Python lru_cache from functools work internally?
I'm looking for a specific answer: does it use dictionaries like the rest of Python? Does it only store the return value?
I know that Python is heavily built on top of dictionaries, however, I couldn't find a specific answer to this question.
| How does Lru_cache (from functools) Work? |
You can see what's in the PostgreSQL buffer cache using the pg_buffercache module. I've done a presentation called "Inside the PostgreSQL Buffer Cache" that explains what you're seeing, and it includes some more complicated queries to help interpret that information.
It's also possible to look at the operating system's cache on some systems; see pg_osmem.py for one somewhat rough example.
There's no way to clear the caches easily. On Linux you can stop the database server and use the drop_caches facility to clear the OS cache; be sure to heed the warning there to run sync first.
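For reference, the Linux sequence mentioned above looks like this (must be run as root, with the database server stopped; these exact commands are the standard drop_caches usage, not something PostgreSQL-specific):

```shell
sync                               # write dirty pages out first, as the warning advises
echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes
```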
|
Sometimes I run a Postgres query and it takes 30 seconds. Then, I immediately run the same query and it takes 2 seconds. It appears that Postgres has some sort of caching. Can I somehow see what that cache is holding? Can I force all caches to be cleared for tuning purposes?
I'm basically looking for a Postgres version of the following SQL Server command:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But I would also like to know how to see what is actually contained in that buffer.
| See and clear Postgres caches/buffers? |
Generally one year is advised as a standard max value. See RFC 2616:
To mark a response as "never expires," an origin server sends an
Expires date approximately one year from the time the response is
sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
year in the future.
Although that applies to the older expires standard, it makes sense to apply to cache-control too in the absence of any explicit standards guidance. It's as long as you should generally need anyway and picking any arbitrarily longer value could break some user-agents. So:
Cache-Control: max-age=31536000
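The value is just one year expressed in seconds, which you can sanity-check quickly:

```python
ONE_YEAR = 365 * 24 * 60 * 60  # days * hours * minutes * seconds
print(ONE_YEAR)  # 31536000, the max-age used above
```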
|
I'm using Amazon S3 to serve static assets for my website. I want browsers to cache these assets for as long as possible. What metadata headers should I include with my assets?
Cache-Control: max-age=???
| Max value for cache control header in HTTP |
Your problem is caused by the back-forward cache (bfcache). It is supposed to save the complete state of a page when the user navigates away, so that when the user navigates back with the back button the page can be loaded from the cache very quickly. This is different from the normal cache, which only caches the HTML code.
When a page is loaded from the bfcache, the onload event won't be triggered. Instead you can check the persisted property of the pageshow event. It is set to false on initial page load; when the page is loaded from the bfcache it is set to true.
A kludgish solution is to force a reload when the page is loaded from the bfcache.
window.onpageshow = function(event) {
    if (event.persisted) {
        window.location.reload();
    }
};
If you are using jQuery then do:
$(window).bind("pageshow", function(event) {
    if (event.originalEvent.persisted) {
        window.location.reload();
    }
});
|
Got an issue with Safari loading old YouTube videos when the back button is clicked. I have tried adding onunload="" (mentioned here: Preventing cache on back-button in Safari 5) to the body tag, but it doesn't work in this case.
Is there any way to prevent Safari loading from cache on a certain page?
| Prevent safari loading from cache when back button is clicked |
1. Go to Manage Workspaces (either through the File/Source Control menu or the workspace drop-down in Source Control Explorer) and select Edit for your workspace.
2. Under working folders you should see a mapping from the source control directory to the old/wrong project directory. Select it and click Remove.
3. Close VS and delete the .suo file; it still references the wrong directory. Maybe rebinding might work at this point, but I didn't try that.
4. Reload your project and you should be good to go.
|
Visual Studio (and possibly TFS) has somehow (I think perhaps during a source control merge) become confused about the path of a project within my solution.
It thinks it is here (example paths for simplicity):
C:\My Projects\ExampleSolution\ExampleProjectWrong\ExampleProjectCorrect.csproj
whereas actually, the project file is located here:
C:\My Projects\ExampleSolution\ExampleProjectCorrect\ExampleProjectCorrect.csproj
I cannot for the life of me get it to recognize the correct location. I have tried:
- Removing and re-adding the project from the correct location. An error message comes up saying the project file at C:\My Projects\ExampleSolution\ExampleProjectWrong\ExampleProjectCorrect.csproj could not be found.
- Manually editing the .sln file to ensure all references to ExampleProjectCorrect.csproj have the correct paths.
- Doing a find-in-files on the solution directory for both the correct and incorrect paths, to try and track down where Studio is hiding the incorrect path.
- Deleting the cache directories for VS and TFS.
I'm tearing my hair out because I can't recreate the solution: it has, near as makes no difference, 100 projects in it and is tied in to source control with several other developers working on it.
Can anyone point me in the right direction as to where it is storing this incorrect path and/or how to reset it so the damn thing will load correctly?
| Visual Studio retrieving an incorrect path to a project from somewhere |
If you want to set the Cache-Control header, there's nothing in the IIS7 UI to do this, sadly.
You can however drop this web.config in the root of the folder or site where you want to set it:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
That will inform the client to cache content for 7 days in that folder and all subfolders.
You can also do this by editing the IIS7 metabase via appcmd.exe, like so:
\Windows\system32\inetsrv\appcmd.exe
set config "Default Web Site/folder"
-section:system.webServer/staticContent
-clientCache.cacheControlMode:UseMaxAge
\Windows\system32\inetsrv\appcmd.exe
set config "Default Web Site/folder"
-section:system.webServer/staticContent
-clientCache.cacheControlMaxAge:"7.00:00:00"
|
I'm trying to do something which I thought would be fairly simple. Get IIS 7 to tell clients they can cache all images on my site for a certain amount of time, let's say 24 hours.
I have tried the step on http://www.galcho.com/Blog/post/2008/02/27/IIS7-How-to-set-cache-control-for-static-content.aspx but to no avail. I still get requests going to the server with 304s being returned.
Does anyone have a way of doing this? I have a graphically intensive site and my users are being hammered (so is my server) every time they request a page. Weirdly, the images seem to have "Cache-Control: private, max-age=3600" showing up in Firebug, but the browser still requests them when I press F5.
| IIS7 Cache-Control |
EHCache is very nice. You can create an in-memory cache. Check out their code samples for an example of creating an in-memory cache. You can specify a max size and a time to live.
EHCache does offer some advanced features, but if you're not interested in using them, don't. It's nice to know they are there if your requirements ever change.
Here is an in memory cache. Created in code, with no configuration files.
CacheManager cacheManager = CacheManager.getInstance();
int oneDay = 24 * 60 * 60;
Cache memoryOnlyCache = new Cache("name", 200, false, false, oneDay, oneDay);
cacheManager.addCache(memoryOnlyCache);
This creates a cache that will hold 200 elements, with a TTL of 24 hours.
|
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Question
I'm looking for a Java in-memory object caching API. Any recommendations? What solutions have you used in the past?
Current
Right now, I'm just using a Map:
Map cache = new HashMap<String, Object>();
cache.put("key", value);
Requirements
I need to extend the cache to include basic features like:
Max size
Time to live
However, I don't need more sophisticated features like:
Access from multiple processes (caching server)
Persistence (to disk)
Suggestions
In-Memory caching:
Guava CacheBuilder - active development. See this presentation.
LRUMap - Config via API. No TTL. Not purpose built for caching.
whirlycache - XML config. Mailing list. Last updated 2006.
cache4j - XML config. Documentation in Russian. Last updated 2006.
Enterprise caching:
JCS - Properties config. Extensive documentation.
Ehcache - XML config. Extensive documentation. By far the most popular according to Google hits.
| Lightweight Java Object cache API [closed] |
This is how I solved this problem.
Method 1: When the URL changes whenever the image changes
Glide.with(DemoActivity.this)
.load(Uri.parse("file://" + imagePath))
.diskCacheStrategy(DiskCacheStrategy.NONE)
.skipMemoryCache(true)
.into(mImage);
diskCacheStrategy() can be used to handle the disk cache and you can skip the memory cache using the skipMemoryCache() method.
Method 2: When the URL doesn't change, however, the image changes
If your URL remains constant, then you need to use a signature for the image cache.
Glide.with(yourFragment)
.load(yourFileDataModel)
.signature(new StringSignature(yourVersionMetadata))
.into(yourImageView);
Glide signature() offers you the capability to mix additional data with the cache key.
You can use MediaStoreSignature if you are fetching content from a media store. MediaStoreSignature allows you to mix the date modified time, mime type, and orientation of a media store item into the cache key. These three attributes reliably catch edits and updates allowing you to cache media store thumbs.
You may use StringSignature as well, for content saved as files, to mix in the file's modified date.
|
I am using Glide in one of my projects to show images from files.
Below is my code of how I am showing the image:
Glide.with(DemoActivity.this)
.load(Uri.parse("file://" + imagePath))
.into(mImage);
The image at this location (imagePath) keeps changing. By default, Glide caches the image it shows in the ImageView. Because of this, Glide was showing the first image from the cache for new images at that location.
If I change the image at location imagePath to some other image having the same name, then Glide shows the first image instead of the new one.
My two queries are:
Is it possible to always load the image from the file and not the cache? This way the problem will be solved.
Is it possible to clear the image from the cache before getting the newly replaced image? This will also solve the problem.
| Remove image from cache in Glide library |
You can safely delete the WSDL cache files. If you wish to prevent future caching, use:
ini_set("soap.wsdl_cache_enabled", 0);
or dynamically:
$client = new SoapClient('http://somewhere.com/?wsdl', array('cache_wsdl' => WSDL_CACHE_NONE) );
|
I can see through phpinfo() where the WSDL cache is held (/tmp), but I don't necessarily know if it is safe to delete all files starting with WSDL.
Yes, I should be able to just delete everything from /tmp, but I don't know what else this could affect if I delete all WSDL files.
| In PHP how can you clear a WSDL cache? |
This is not the cleanest solution, but it's entirely transparent to the programmer:
import functools
import weakref

def memoized_method(*lru_args, **lru_kwargs):
    def decorator(func):
        @functools.wraps(func)
        def wrapped_func(self, *args, **kwargs):
            # We're storing the wrapped method inside the instance. If we had
            # a strong reference to self the instance would never die.
            self_weak = weakref.ref(self)
            @functools.wraps(func)
            @functools.lru_cache(*lru_args, **lru_kwargs)
            def cached_method(*args, **kwargs):
                return func(self_weak(), *args, **kwargs)
            setattr(self, func.__name__, cached_method)
            return cached_method(*args, **kwargs)
        return wrapped_func
    return decorator
It takes the exact same parameters as lru_cache, and works exactly the same. However it never passes self to lru_cache and instead uses a per-instance lru_cache.
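A quick usage sketch (restating the decorator so the snippet is self-contained): each instance gets its own cache, and instances can be garbage-collected normally:

```python
import functools
import gc
import weakref

def memoized_method(*lru_args, **lru_kwargs):
    def decorator(func):
        @functools.wraps(func)
        def wrapped_func(self, *args, **kwargs):
            self_weak = weakref.ref(self)  # avoid a strong reference to self
            @functools.wraps(func)
            @functools.lru_cache(*lru_args, **lru_kwargs)
            def cached_method(*args, **kwargs):
                return func(self_weak(), *args, **kwargs)
            setattr(self, func.__name__, cached_method)
            return cached_method(*args, **kwargs)
        return wrapped_func
    return decorator

class Foo:
    @memoized_method(maxsize=16)
    def cached_method(self, x):
        return x + 5

foo = Foo()
print(foo.cached_method(10))  # 15 (computed)
print(foo.cached_method(10))  # 15 (served from the per-instance cache)
ref = weakref.ref(foo)
del foo
gc.collect()
print(ref() is None)          # True: the instance was released
```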
|
How can I use functools.lru_cache inside classes without leaking memory?
In the following minimal example the foo instance won't be released despite going out of scope and having no referrer (other than the lru_cache).
from functools import lru_cache

class BigClass:
    pass

class Foo:
    def __init__(self):
        self.big = BigClass()

    @lru_cache(maxsize=16)
    def cached_method(self, x):
        return x + 5

def fun():
    foo = Foo()
    print(foo.cached_method(10))
    print(foo.cached_method(10))  # use cache
    return 'something'

fun()
But foo and hence foo.big (a BigClass) are still alive
import gc; gc.collect() # collect garbage
len([obj for obj in gc.get_objects() if isinstance(obj, Foo)]) # is 1
That means that Foo/BigClass instances are still residing in memory. Even deleting Foo (del Foo) will not release them.
Why is the lru_cache holding on to the instance at all? Doesn't the cache use some hash and not the actual object?
What is the recommended way to use lru_caches inside classes?
I know of two workarounds:
- Use per-instance caches
- Make the cache ignore the object (which might lead to wrong results, though)
| Python functools lru_cache with instance methods: release object |
In Core i7 the line sizes in L1, L2 and L3 are the same: that is, 64 bytes.
I guess this simplifies maintaining the inclusive property, and coherence.
See page 10 of: https://www.aristeia.com/TalkNotes/ACCU2011_CPUCaches.pdf
|
From a previous question on this forum, I learned that in most of the memory systems, L1 cache is a subset of the L2 cache means any entry removed from L2 is also removed from L1.
So now my question is how do I determine a corresponding entry in L1 cache for an entry in the L2 cache. The only information stored in the L2 entry is the tag information. Based on this tag information, if I re-create the addr it may span multiple lines in the L1 cache if the line-sizes of L1 and L2 cache are not same.
Does the architecture really bother about flushing both the lines or it just maintains L1 and L2 cache with the same line-size.
I understand that this is a policy decision but I want to know the commonly used technique.
| Line size of L1 and L2 caches |
Hooray, as of Spring 3.2 the framework allows for this using Spring SpEL and unless. Note from the java doc for Cacheable element unless:
Spring Expression Language (SpEL) expression used to veto method caching. Veto caching the result if the condition evaluates to true.
Unlike condition(), this expression is evaluated after the method has been called and can therefore refer to the result. Default is "", meaning that caching is never vetoed.
The important aspect is that unless is evaluated after the method has been called. This makes perfect sense because the method will never get executed if the key is already in the cache.
So in the above example you would simply annotate as follows (#result is available to test the return value of a method):
@Cacheable(value="defaultCache", key="#pk", unless="#result == null")
public Person findPerson(int pk) {
    return getSession.getPerson(pk);
}
I would imagine this condition arises from the use of pluggable cache implementations such as Ehcache which allows caching of nulls. Depending on your use case scenario this may or may not be desirable.
|
Is there a way to specify that if the method returns null value, then don't cache the result in @Cacheable annotation for a method like this?
@Cacheable(value="defaultCache", key="#pk")
public Person findPerson(int pk) {
    return getSession.getPerson(pk);
}
Update:
here is the JIRA issue submitted regarding caching null value last November, which hasn't resolved yet:
[#SPR-8871] @Cachable condition should allow referencing return value - Spring Projects Issue Tracker
| How do I tell Spring cache not to cache null value in @Cacheable annotation |
Regarding the differences between Last-Modified/If-Modified-Since and ETag/If-None-Match:
Both can be used interchangeably. However depending on the type of resource, and how it is generated on the server, one or the other question ("has this been modified since ...?" / "does this still match this ETag?") may be easier to answer.
Examples:
If you're serving files, using the file's mtime as the Last-Modified date is the simplest solution.
If you're serving a dynamic web page built from a number of SQL queries, checking whether the data returned by any of those queries has changed may be impractical (unless all of them have some sort of "last modified" column). In this case, using e.g. an md5 hash of the page content as the ETag will be a lot easier.
OTOH, this means that you still have to generate the whole page on the server, even for a conditional GET. Figuring out what exactly has to go into the ETag (primary keys, revision numbers, ... etc.) can save you a lot of time here.
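This hash-based ETag approach can be sketched in a few lines of Python (the (status, headers, body) tuple shape here is invented for illustration); note that the full body still has to be generated just to compute the hash:

```python
import hashlib

def make_etag(body):
    # ETag values are quoted strings.
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body, if_none_match=None):
    """Answer a (possibly conditional) GET, simplified."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""  # Not Modified: skip the body
    return 200, {"ETag": etag}, body

status, headers, _ = respond(b"<html>page</html>")
assert status == 200
status, _, body = respond(b"<html>page</html>", headers["ETag"])
assert status == 304 and body == b""  # unchanged content: 304
```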
See these links for more details on the topic:
http://www.tbray.org/ongoing/When/200x/2008/08/14/Rails-ETags
http://bitworking.org/news/150/REST-Tip-Deep-etags-give-you-more-benefits
|
What could be the difference between if-modified-since and if-none-match? I have a feeling that if-none-match is used for files whereas if-modified-since is used for pages?
| if-modified-since vs if-none-match |
HttpRuntime.Cache gets the Cache for the current application.
The MemoryCache class is similar to the ASP.NET Cache class.
The MemoryCache class has many properties and methods for accessing the cache that will be familiar to you if you have used the ASP.NET Cache class.
The main difference between HttpRuntime.Cache and MemoryCache is that the latter has been changed to make it usable by .NET Framework applications that are not ASP.NET applications.
For additional reading:
Justin Mathew Blog - Caching in .Net 4.0
Jon Davis Blog - Four Methods Of Simple Caching In .NET
Update:
According to user feedback, Jon Davis's blog is sometimes unavailable, so I have included the whole article as an image. Please see that.
Note: if it's not clear, click on the image to open it in a browser, then click it again to zoom. :)
|
I'm wondering if there are any differences between MemoryCache and HttpRuntime.Cache, which one is preferred in ASP.NET MVC projects?
As far as I understand, both are thread safe, API is from first sight more or less the same, so is there any difference when to use which?
| System.Runtime.Caching.MemoryCache vs HttpRuntime.Cache - are there any differences? |
A quick example:
You have some memory shared by all of the processors in your system.
One of your processors restricts access to a page of that shared memory.
Now, all of the processors have to flush their TLBs, so that the ones that were allowed to access that page can't do so any more.
The actions of one processor causing the TLBs to be flushed on other processors is what is called a TLB shootdown.
|
What is a TLB shootdown in SMPs?
I am unable to find much information regarding this concept. Any good example would be very much appreciated.
| What is TLB shootdown? |
It's used when you have some API that only takes files, but you need to use a string. For example, to compress a string using the gzip module in Python 2:
import gzip
import StringIO
stringio = StringIO.StringIO()
gzip_file = gzip.GzipFile(fileobj=stringio, mode='w')
gzip_file.write('Hello World')
gzip_file.close()
stringio.getvalue()
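The snippet above is Python 2; in Python 3, where text and bytes are distinct, the same trick uses io.BytesIO:

```python
import gzip
import io

buf = io.BytesIO()
# gzip only writes to file objects; BytesIO stands in for a real file.
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    gz.write(b"Hello World")

compressed = buf.getvalue()
assert gzip.decompress(compressed) == b"Hello World"
```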
|
What exactly is StringIO used for?
I have been looking around the internet for some examples. However, almost all of the examples are very abstract. And they just show "how" to use it. But none of them show "why" and "in which circumstances" one should/will use it?
p.s. not to be confused with this question on stackoverflow: StringIO Usage which compares string and StringIo.
| What is StringIO in python used for in reality? |
Usually it's easier to create the request like this
NSURLRequest *request = [NSURLRequest requestWithURL:url
cachePolicy:NSURLRequestReloadIgnoringCacheData
timeoutInterval:60.0];
Then create the connection
NSURLConnection *conn = [NSURLConnection connectionWithRequest:request
delegate:self];
and implement the connection:willCacheResponse: method on the delegate. Just returning nil should do it.
- (NSCachedURLResponse *)connection:(NSURLConnection *)connection willCacheResponse:(NSCachedURLResponse *)cachedResponse {
    return nil;
}
|
On iPhone, I perform a HTTP request using NSURLRequest for a chunk of data. Object allocation spikes and I assign the data accordingly. When I finish with the data, I free it up accordingly - however instruments doesn't show any data to have been freed!
My theory is that by default HTTP requests are cached, however - I don't want my iPhone app to cache this data.
Is there a way to clear this cache after a request or prevent any data from being cached in the first place?
I've tried using all the cache policies documented a little like below:
NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:url]];
theRequest.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData;
but nothing seems to free up the memory!
| Is it possible to prevent an NSURLRequest from caching data or remove cached data following a request? |
Just add autocomplete="off" to your inputs and you will solve the problem.
<input type="text" autocomplete="off">
jQuery to solve this on all inputs and textareas
$('input,textarea').attr('autocomplete', 'off');
|
I have a big problem with the functionality in Firefox that keeps the data the user has filled in when the page is reloaded with F5. If I use Ctrl+F5 the forms are cleared, and this is great. My problem is that not all my users know that this is what they have to do to force the input cleanup. Is there a way, in the HTML or the response headers, to tell Firefox not to keep the data in the forms?
| Firefox keeps form data on reload |
RFC 7234 details what browsers and proxies should do by default:
Although caching is an entirely OPTIONAL feature of HTTP, it can be
assumed that reusing a cached response is desirable and that such
reuse is the default behavior when no requirement or local
configuration prevents it. Therefore, HTTP cache requirements are
focused on preventing a cache from either storing a non-reusable
response or reusing a stored response inappropriately, rather than
mandating that caches always store and reuse particular responses.
|
My problem is: sometimes the browser over-caches some resources even though I've already modified them. But after F5, everything is fine.
I studied this case the whole afternoon. Now I completely understand the point of "Last-Modified" and "Cache-Control", and I know how to solve my issue (just .js?version or an explicit max-age=xxxx). But the problem is still unsolved: how does the browser handle a response header without "Cache-Control", like this:
Content-Length: 49675
Content-Type: text/html
Last-Modified: Thu, 27 Dec 2012 03:03:50 GMT
Accept-Ranges: bytes
Etag: "0af7fcbdee3cd1:972"
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Thu, 24 Jan 2013 07:46:16 GMT
The browser clearly caches them when I press Enter in the address bar.
| What's default value of cache-control? |
This is what we use in ASP.NET:
// Stop Caching in IE
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
// Stop Caching in Firefox
Response.Cache.SetNoStore();
It stops caching in Firefox and IE, but we haven't tried other browsers. The following response headers are added by these statements:
Cache-Control: no-cache, no-store
Pragma: no-cache
|
I'm after a definitive reference to what ASP.NET code is required to disabled browsers from caching the page. There are many ways to affect the HTTP headers and meta tags and I get the impression different settings are required to get different browsers to behave correctly. It would be really great to get a reference bit of code commented to indicate which works for all browsers and which is required for particular browser, including versions.
There is a huge amount of information about this issue there but I have yet to find a good reference that describes the benefits of each method and whether a particular technique has been superseded by a higher level API.
I'm particularly interested in ASP.NET 3.5 SP1 but it would be good to get answers for earlier version as well.
This blog entry Two Important Differences between Firefox and IE Caching describes some HTTP protocol behaviour differences.
The following sample code illustrates the kind of thing I am interested in
public abstract class NoCacheBasePage : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        DisableClientCaching();
    }

    private void DisableClientCaching()
    {
        // Do any of these result in META tags e.g. <META HTTP-EQUIV="Expire" CONTENT="-1">
        // HTTP Headers or both?

        // Does this only work for IE?
        Response.Cache.SetCacheability(HttpCacheability.NoCache);

        // Is this required for FireFox? Would be good to do this without magic strings.
        // Won't it overwrite the previous setting
        Response.Headers.Add("Cache-Control", "no-cache, no-store");

        // Why is it necessary to explicitly call SetExpires. Presume it is still better than calling
        // Response.Headers.Add( directly
        Response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1));
    }
}
| Disabling browser caching for all browsers from ASP.NET |
Instead of doing file_put_contents(***WebSiteURL***...) you need to use the server path to /cache/lang/file.php (e.g. /home/content/site/folders/filename.php).
You cannot open a file over HTTP and expect it to be written. Instead you need to open it using the local path.
|
I have uploaded my localhost files to my website but it is showing me this error:-
: [2] file_put_contents( ***WebsiteURL*** /cache/lang/ ***FileName*** .php)
[function.file-put-contents]: failed to open stream: HTTP wrapper does
not support writeable connections | LINE: 127 | FILE: /home/content/
***Folders\FileName*** .php
What I personally feel is that the contents get saved in a file in the cache folder, and when I uploaded the files to my web server it tried to access the cached localhost folder.
| failed to open stream: HTTP wrapper does not support writeable connections |
But what is that check about?
Exactly that: checking Last-Modified or ETag. The client asks the server whether it has a new version of the data using those headers, and if the answer is no, the cached data is served.
Update
From RFC
no-cache
If the no-cache directive does not specify a field-name, then a cache MUST NOT use
the response to satisfy a subsequent request without successful revalidation with the
origin server. This allows an origin server to prevent caching even by caches that
have been configured to return stale responses to client requests.
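The revalidation described above can be sketched server-side like this (simplified: the header value is passed as a plain argument, and only If-Modified-Since is shown):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

last_modified = datetime(2024, 1, 1, tzinfo=timezone.utc)

def handle_get(if_modified_since=None):
    """Return 304 if the client's cached copy is still fresh."""
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_time:
            return 304  # successful revalidation: don't resend the body
    return 200          # send the full response

assert handle_get() == 200
assert handle_get(format_datetime(last_modified)) == 304
```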
|
I don't find get the practical difference between Cache-Control:no-store and Cache-Control:no-cache.
As far as I know, no-store means that no cache device is allowed to cache that response. In the other hand, no-cache means that no cache device is allowed to serve a cached response without validate it first with the source. But what is that validation about? Conditional get?
What if a response has no-cache, but it has no Last-Modified or ETag?
Regards.
| What is the difference between no-cache and no-store in Cache-control? |
Let's consider a constant stream of cache requests with a cache capacity of 3, see below:
A, B, C, A, A, A, A, A, A, A, A, A, A, A, B, C, D
If we just consider a Least Recently Used (LRU) cache with a HashMap + doubly linked list implementation with O(1) eviction time and O(1) load time, we would have the following elements cached while processing the caching requests as mentioned above.
[A]
[A, B]
[A, B, C]
[B, C, A] <- a stream of As keeps A at the head of the list.
[C, A, B]
[A, B, C]
[B, C, D] <- here, we evict A, we can do better!
When you look at this example, you can easily see that we can do better - given the higher expected chance of requesting an A in the future, we should not evict it even if it was least recently used.
A - 12
B - 2
C - 2
D - 1
Least Frequently Used (LFU) cache takes advantage of this information by keeping track of how many times the cache request has been used in its eviction algorithm.
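A toy LFU implementation makes the eviction rule concrete (this is an O(n)-eviction sketch for illustration, not the O(1) linked-list variant):

```python
from collections import Counter

class LFUCache:
    """Toy LFU cache: evicts the least frequently used key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = Counter()

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the least frequently used entry (O(n) scan).
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1

# Replay the request stream from the example above.
cache = LFUCache(3)
for key in "ABC" + "A" * 11 + "BC":
    if cache.get(key) is None:
        cache.put(key, key)
cache.put("D", "D")           # evicts B or C (each used twice), not the hot A
assert cache.get("A") == "A"  # A survives, unlike in the LRU trace
```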
|
What is the difference between LRU and LFU cache implementations?
I know that LRU can be implemented using LinkedHashMap.
But how to implement LFU cache?
| What is the difference between LRU and LFU |
This line in development.rb ensures that caching is not happening.
config.action_controller.perform_caching = false
You can clear the Rails cache with
Rails.cache.clear
That said - I am not convinced this is a caching issue. Are you making changes to the page and not seeing them reflected? You aren't perhaps looking at the live version of that page? I have done that once (blush).
Update:
You can call that command from in the console.
Are you sure you are running the application in development?
The only alternative is that the page that you are trying to render isn't the page that is being rendered.
If you watch the server output you should be able to see the render command when the page is rendered similar to this:
Rendered shared_partials/_latest_featured_video (31.9ms)
Rendered shared_partials/_s_invite_friends (2.9ms)
Rendered layouts/_sidebar (2002.1ms)
Rendered layouts/_footer (2.8ms)
Rendered layouts/_busy_indicator (0.6ms)
|
I have a RoR application (ruby v1.8.7; rails v2.3.5) that is caching a page in the development environment. This wouldn't be so much of an issue, but the cached page's a elements are incorrect.
I haven't made any changes to the development.rb file and I haven't knowingly added any caching commands to the controllers.
I've tried clearing the browser's (Firefox 3.5 on OSX) cookie and page caches for this site (localhost). I've also restarted Mongrel. Nothing seems to help.
What am I missing?
| Ruby on Rails: Clear a cached page |
The build cache process is explained fairly thoroughly in the Best practices for writing Dockerfiles: Leverage build cache section.
Starting with a parent image that is already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation.
For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
Aside from the ADD and COPY commands, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command the files updated in the container are not examined to determine if a cache hit exists. In that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache is not used.
You will run into situations where OS packages, NPM packages or a Git repo are updated to newer versions (say a ~2.3 semver in package.json) but, as your Dockerfile or package.json hasn't changed, docker will continue using the cache.
It's possible to programmatically generate a Dockerfile that busts the cache by modifying lines based on certain smarter checks (e.g. retrieving the latest git branch shasum from a repo to use in the clone instruction). You can also periodically run the build with --no-cache to enforce updates.
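The content-checksum behaviour described above for COPY/ADD can be modelled in a few lines (a toy model, not Docker's actual code):

```python
import hashlib

def copy_cache_key(files):
    """Hash file names and contents, ignoring mtimes, like COPY/ADD cache checks."""
    h = hashlib.sha256()
    for name, content in sorted(files.items()):
        h.update(name.encode())
        h.update(content)
    return h.hexdigest()

a = copy_cache_key({"package.json": b'{"dep": "~2.3"}'})
b = copy_cache_key({"package.json": b'{"dep": "~2.3"}'})
c = copy_cache_key({"package.json": b'{"dep": "~2.4"}'})
assert a == b  # identical contents: cache hit
assert a != c  # changed file: cache invalidated
```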
|
I'm amazed at how well Docker's layer caching works, but I'm also wondering how it determines whether it may use a cached layer or not.
Let's take these build steps for example:
Step 4 : RUN npm install -g node-gyp
---> Using cache
---> 3fc59f47f6aa
Step 5 : WORKDIR /src
---> Using cache
---> 5c6956ba5856
Step 6 : COPY package.json .
---> d82099966d6a
Removing intermediate container eb7ecb8d3ec7
Step 7 : RUN npm install
---> Running in b960cf0fdd0a
For example how does it know it can use the cached layer for npm install -g node-gyp but creates a fresh layer for npm install ?
| How does Docker know when to use the cache during a build and when not? |
I have had a similar problem. What I find is that I need to open up the Chrome developer tools and then hit Ctrl + F5. Only then is the cache refreshed.
Update
Also, I would recommend that you select "Disable Cache" in the developers tools ("Network" tab).
|
I'm working on a web project but I have this really annoying issue with my browser, Google Chrome...
Every time I make changes on my website, my browser won't refresh and clear the cache. It works totally fine in my friend's Chrome browser, but not for me apparently.
As mentioned ctrl + F5 does not work for me. I tried to press F12 (for developer console) and right-click on the refresh icon, and then click "Empty Cache and Hard Reload". Still doesn't work... Actually not true, it worked once - but now it stays the same again... I tried reinstalling chrome too, still didn't work... I tried to clear my whole history including all passwords, cache and so on - but nothing has fixed the issue.
Edited on 02-05-2020:
Some of your answers worked for me, but some time ago I found a Chrome extension that works really well for me and I wanted to share with the community. It is called "Clear Cache" and you can find it here:
https://chrome.google.com/webstore/detail/clear-cache/cppjkneekbjaeellbfkmgnhonkkjfpdn/RK%3D2/RS%3DzwqaryCReNAACSfd_oYYPpX0_tw-
| Chrome WON'T clear cache... ctrl + F5 doesn't seem to work either |
You can force browsers to cache something, but
You can't force browsers to clear their cache.
Thus the only way (AFAIK) is to use a new URL for your resources. Something like versioning.
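Versioning can be as simple as deriving the URL from the content, so a changed file automatically gets a new URL (illustrative sketch):

```python
import hashlib

def versioned_url(path, content):
    # A changed file produces a changed URL, bypassing stale caches.
    digest = hashlib.md5(content).hexdigest()[:8]
    return "%s?v=%s" % (path, digest)

old = versioned_url("/css/site.css", b"body{color:red}")
new = versioned_url("/css/site.css", b"body{color:blue}")
assert old != new  # clients are forced to fetch the updated file
```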
|
For my site I have the following htaccess rules:
# BEGIN Gzip
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
</IfModule>
# END Gzip
# BEGIN EXPIRES
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 10 days"
ExpiresByType text/css "access plus 1 month"
ExpiresByType text/plain "access plus 1 month"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType application/x-javascript "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType application/x-icon "access plus 1 year"
</IfModule>
# END EXPIRES
I've just updated my site and it looked all screwy until I cleared my cache. How can I force the client's browser to clear the cache after an update so that the user can see the changes?
| htaccess - How to force the client's browser to clear the cache? |
Just wondering, why between -128 and 127?
A larger range of integers may be cached, but at least those between -128 and 127 must be cached because it is mandated by the Java Language Specification (emphasis mine):
If the value p being boxed is true, false, a byte, or a char in the range \u0000 to \u007f, or an int or short number between -128 and 127 (inclusive), then let r1 and r2 be the results of any two boxing conversions of p. It is always the case that r1 == r2.
The rationale for this requirement is explained in the same paragraph:
Ideally, boxing a given primitive value p, would always yield an identical reference. In practice, this may not be feasible using existing implementation techniques. The rules above are a pragmatic compromise. The final clause above requires that certain common values always be boxed into indistinguishable objects. [...]
This ensures that in most common cases, the behavior will be the desired one, without imposing an undue performance penalty, especially on small devices. Less memory-limited implementations might, for example, cache all char and short values, as well as int and long values in the range of -32K to +32K.
How can I cache other values outside of this range.?
You can use the -XX:AutoBoxCacheMax JVM option, which is not really documented in the list of available Hotspot JVM Options. However it is mentioned in the comments inside the Integer class around line 590:
The size of the cache may be controlled by the -XX:AutoBoxCacheMax=<size> option.
Note that this is implementation specific and may or may not be available on other JVMs.
|
Regarding my previous Question, Why do == comparisons with Integer.valueOf(String) give different results for 127 and 128? , we know that Integer class has a cache which stores values between -128 and 127.
Just wondering, why between -128 and 127?
Integer.valueOf()'s documentation states that it is "caching frequently requested values". But are values between -128 and 127 really frequently requested? I thought which values are frequently requested is very subjective.
Is there any possible reason behind this?
The documentation also states: "..and may cache other values outside of this range."
How can this be achieved?
| Why Integer class caching values in the range -128 to 127? |
I ran into a similar problem. I copied the request as fetch in the Network tab in devtools.
Then I ran it in the browser dev console. There I could read a description of the error, about CORS. After setting up CORS on the API server, it worked.
You have to paste the fetch command into the dev console of the same origin and NOT accidentally run it from e.g. a stackoverflow tab.
|
I'm building a web server and trying to test things. The server is running on localhost:888, and the first time I load the web app, everything works. But if I try to reload the page, a bunch of XmlHttpRequest requests fail with net::ERR_FAILED. By putting breakpoints in the server code, I can verify that the requests are never actually coming in.
This isn't a connection failure, as the connection succeeds the first time. The fact that it succeeds once and then fails later implies that it might be caching-related, but there's nothing in the server code that sets the cache-control header. So I tested it by putting the server up on an actual web server. The first time, everything had to take its time loading; the second time, it all loaded instantly, so this is definitely cache-related
This is a custom server running on top of http.sys (no IIS), and it appears that things are getting cached by default and then failing to load from it on subsequent runs, but only when my server is running on localhost; on the Web, it works fine. As near as I can tell, net::ERR_FAILED is a generic "something went wrong and we've got no useful information for you" message in Chrome, so I'm kind of stuck here. Does anyone know what could be causing this?
| What can cause Chrome to give an net::ERR_FAILED on cached content against a server on localhost? |
With cachetools you can write:
from cachetools import cached
from cachetools.keys import hashkey
from random import randint
@cached(cache={}, key=lambda db_handle, query: hashkey(query))
def find_object(db_handle, query):
    print("processing {0}".format(query))
    return query
queries = list(range(5))
queries.extend(range(5))
for q in queries:
    print("result: {0}".format(find_object(randint(0, 1000), q)))
You will need to install cachetools (pip install cachetools).
The syntax is:
@cached(
    cache={},
    key=lambda <all-function-args>: hashkey(<relevant-args>)
)
Here is another example that includes keyword args:
@cached(
    cache={},
    key=lambda a, b, c=1, d=2: hashkey(a, c)
)
def my_func(a, b, c=1, d=2):
    return a + c
In the example above note that the lambda function input args match the my_func args. You don't have to exactly match the argspec if you don't need to. For example, you can use kwargs to squash out things that aren't needed in the hashkey:
@cached(
    cache={},
    key=lambda a, b, c=1, **kwargs: hashkey(a, c)
)
def my_func(a, b, c=1, d=2, e=3, f=4):
    return a + c
In the above example we don't care about d=, e= and f= args when looking up a cache value, so we can squash them all out with **kwargs.
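If you'd rather stay in the standard library, the same effect can be approximated with a hand-rolled decorator that simply excludes the first argument from the key (a sketch: the cache here is an unbounded dict, with no LRU eviction):

```python
import functools

def cache_ignoring_first(func):
    """Cache on all arguments except the first (e.g. a db handle)."""
    cache = {}

    @functools.wraps(func)
    def wrapper(first, *args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(first, *args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@cache_ignoring_first
def find_object(db_handle, query):
    calls.append((db_handle, query))  # track real executions
    return query * 2

assert find_object("conn-A", 21) == 42
assert find_object("conn-B", 21) == 42  # different handle, still a cache hit
assert len(calls) == 1                  # the body ran only once
```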
|
How can I make @functools.lru_cache decorator ignore some of the function arguments with regard to caching key?
For example, I have a function that looks like this:
def find_object(db_handle, query):
    # (omitted code)
    return result
If I apply lru_cache decorator just like that, db_handle will be included in the cache key. As a result, if I try to call the function with the same query, but different db_handle, it will be executed again, which I'd like to avoid. I want lru_cache to consider query argument only.
| Make @lru_cache ignore some of the function arguments |
I like to cache in the model or data layer as well. This isolates everything to do with retrieving data from the controller/presentation. You can access the ASP.NET cache from System.Web.HttpContext.Current.Cache or use the Caching Application Block from the Enterprise Library. Create your key for the cached data from the parameters for the query. Be sure to invalidate the cache when you update the data.
|
I would like to cache my most database heavy actions in my asp.net-mvc site.
In my research I have found
donut caching on Phil's blog
Caching/compressing filters on Kazi's blog
Scott Hansleman's podcast about how they cached things in SO.
But I don't feel I get it yet.
I want to be able to cache my POST request depending on several pars. These pars are in an object. So I would like to cache the result of the following request:
public ActionResult AdvancedSearch(SearchBag searchBag)
Where searchBag is an object that holds (a bunch) of optional search parameters.
My views themselves are light (as they should be), but the data access can be rather time consuming, depending on what fields are filled in in the search bag.
I have the feeling I should be caching on my datalayer, rather then on my actions.
How am I supposed to use the VaryByParam in the OutputCache attribute?
| Caching in asp.net-mvc |
I would probably use these settings:
Cache-Control: max-age=31556926 – Representations may be cached by any cache. The cached representation is to be considered fresh for 1 year:
To mark a response as "never expires," an origin server sends an
Expires date approximately one year from the time the response is
sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
year in the future.
Cache-Control: no-cache – Representations are allowed to be cached by any cache. But caches must submit the request to the origin server for validation before releasing a cached copy.
Cache-Control: no-store – Caches must not cache the representation under any condition.
See Mark Nottingham’s Caching Tutorial for further information.
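The three profiles above could be collected into a small lookup table (names are illustrative):

```python
# Map resource categories to the suggested Cache-Control values.
CACHE_PROFILES = {
    "immutable": {"Cache-Control": "public, max-age=31556926"},  # ~1 year
    "revalidate": {"Cache-Control": "no-cache"},
    "private": {"Cache-Control": "no-store"},
}

def headers_for(kind):
    return dict(CACHE_PROFILES[kind])

assert headers_for("private")["Cache-Control"] == "no-store"
```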
|
I want to find a minimal set of headers, that work with "all" caches and browsers (also when using HTTPS!)
On my web site, I'll have three kinds of resources:
(1) Forever cacheable (public / equal for all users)
Example: 0A470E87CC58EE133616F402B5DDFE1C.cache.html (auto generated by GWT)
These files are automatically assigned a new name, when they change content (based on the MD5).
They should get cached as much as possible, even when using HTTPS (so I assume, I should set Cache-Control: public, especially for Firefox?)
They shouldn't require the client to make a round-trip to the server to validate, if the content has changed.
(2) Changing occasionally (public / equal for all users)
Examples: index.html, mymodule.nocache.js
These files change their content without changing the URL, when a new version of the site is deployed.
They can be cached, but probably need a round-trip to be revalidated every time.
(3) Individual for each request (private / user specific)
Example: JSON responses
These resources should never be cached unencrypted to disk under no circumstances. (Except maybe I'll have a few specific requests that could be cached.)
I have a general idea on which headers I would probably use for each type, but there's always something I could be missing.
| Ideal HTTP cache control headers for different types of resources |
Here's the basic pattern:
Check the cache for the value, return if its available
If the value is not in the cache, then acquire a lock
Inside the lock, check the cache again, you might have been blocked
Perform the value look up and cache it
Release the lock
In code, it looks like this:
private static object ThisLock = new object();

public string GetFoo()
{
    // try to pull from cache here

    lock (ThisLock)
    {
        // cache was empty before we got the lock, check again inside the lock

        // cache is still empty, so retrieve the value here

        // store the value in the cache here
    }

    // return the cached value here
}
|
I know in certain circumstances, such as long running processes, it is important to lock ASP.NET cache in order to avoid subsequent requests by another user for that resource from executing the long process again instead of hitting the cache.
What is the best way in C# to implement cache locking in ASP.NET?
| What is the best way to lock cache in asp.net? |
To answer my own question...it seems that given I'm using memcached, I actually can't use delete_if or delete_matched because memcached does not support enumerating or querying keys by pattern (1).
|
Is it possible to somehow run Rails.cache.clear and only clear keys with a certain name/string?
I don't want to clear the entire cache...just keys with the string blog/post in the name (ie. blog/post/1, blog/post/2).
I'm using dalli with memcached for my cache and running Rails 3.0.6.
| Rails.cache.clear certain key names? |
The Chrome browser mostly looks at the Cache-Control header in the response; if that header includes immutable, Chrome will store the response on disk. The memory cache is served from RAM, so it is much faster than the disk cache, but it is not persistent: closing the tab clears the memory cache, while the disk cache is retained until it expires. This is one of the primary reasons browsers have so much memory allotted to them.
You may also need to understand this Memory Cache vs Disk Cache
Refer HTTP Cache chart so that you can easily understand cache flows
|
I knew that Google Chrome supports from memory cache and from disk cache when I request resources. However, I didn't see from memory cache before.
How does chrome determine which resources should be cached in memory?
| How does the chrome browser determine memory cache and disk cache? |
To me, the correct ways of doing it would be the ones listed: either ajax or ajaxSetup. If you really want to use get and not ajaxSetup, then you could create your own parameter and give it the value of the current date/time.
I would however question your motives in not using one of the other methods.
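jQuery's cache: false works by appending a timestamp parameter (_) to the URL, and the idea itself is language-agnostic. A hypothetical Python helper showing the mechanism (the helper name is made up for illustration):

```python
import time
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def bust_cache(url):
    """Append a timestamp parameter so every request URL is unique.

    Mirrors what jQuery does for cache: false (it adds a `_` parameter).
    """
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("_", str(int(time.time() * 1000))))
    return urlunparse(parts._replace(query=urlencode(query)))
```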
|
jQuery.get() is a shorthand for jQuery.ajax() with a get call. But when I set cache:false in the data of the .get() call, what is sent to the server is a parameter called cache with a value of false. While my intention is to send a timestamp with the data to the server to prevent caching which is what happens if I use cache: false in jQuery.ajax data. How do I accomplish this without rewriting my jQuery.get calls to jQuery.ajax calls or using
$.ajaxSetup({
// Disable caching of AJAX responses
cache: false
});
update: Thanks everyone for the answers. You are all correct. However, I was hoping that there was a way to let the get call know that you do not want to cache, or send that value to the underlying .ajax() so it would know what to do with it.
I am looking for a fourth way other than the three ways that have been identified so far:
Doing it globally via ajaxSetup
Using a .ajax call instead of a .get call
Doing it manually by adding a new parameter holding a timestamp to your .get call.
I just thought that this capability should be built into the .get call.
| How to set cache: false in jQuery.get call |
You can achieve this using the never_cache decorator. Example from the documentation:
from django.views.decorators.cache import never_cache
@never_cache
def myview(request):
# ...
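For reference, the headers being asked for can also be attached without any framework at all. A framework-free sketch (the dictionary stands in for a response's header mapping; Django's decorators set a similar, though not identical, set for you):

```python
# The no-cache headers the question asks for.
NO_CACHE_HEADERS = {
    "Pragma": "no-cache",
    "Cache-Control": "no-cache, must-revalidate",
}

def add_no_cache_headers(headers):
    headers.update(NO_CACHE_HEADERS)
    return headers
```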
|
I'm using the render_to_response shortcut and don't want to craft a specific Response object to add additional headers to prevent client-side caching.
I'd like to have a response that contains:
Pragma: no-cache
Cache-control : no-cache
Cache-control: must-revalidate
And all the other nifty ways that browsers will hopefully interpret as directives to avoid caching.
Is there a no-cache middleware or something similar that can do the trick with minimal code intrusion?
| Fighting client-side caching in Django |
You can use a LinkedHashMap (Java 1.4+) :
// Create cache
final int MAX_ENTRIES = 100;
Map cache = new LinkedHashMap(MAX_ENTRIES+1, .75F, true) {
// This method is called just after a new entry has been added
public boolean removeEldestEntry(Map.Entry eldest) {
return size() > MAX_ENTRIES;
}
};
// Add to cache
Object key = "key";
cache.put(key, object);
// Get object
Object o = cache.get(key);
if (o == null && !cache.containsKey(key)) {
// Object not in cache. If null is not a possible value in the cache,
// the call to cache.containsKey(key) is not needed
}
// If the cache is to be used by multiple threads,
// the cache must be wrapped with code to synchronize the methods
cache = (Map)Collections.synchronizedMap(cache);
|
I know it's simple to implement, but I want to reuse something that already exist.
Problem I want to solve is that I load configuration (from XML so I want to cache them) for different pages, roles, ... so the combination of inputs can grow quite much (but in 99% will not). To handle this 1%, I want to have some max number of items in cache...
Till know I have found org.apache.commons.collections.map.LRUMap in apache commons and it looks fine but want to check also something else. Any recommendations?
| Easy, simple to use LRU cache in java |
Sounds like you want the sync command, or the sync() function.
If you want disk cache flushing: echo 3 | sudo tee /proc/sys/vm/drop_caches
|
I need to do it for more predictable benchmarking.
| How to purge disk I/O caches on Linux? |
Yes, that is the correct way. You have to set the Cache-Control header to let the browsers know that they don't have to cache any content for that request.
<meta http-equiv="Cache-control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
(Pragma and Cache-Control are one and the same thing, but come from different HTTP specifications. See the answer here: Difference between Pragma and Cache-control headers?)
See one of the related answer here: How to burst yeoman index.html cache
|
Normally for .js and .css file we will append a version during build like
xx.js?v=123; then after the website deploys, we get the new versions of the JS and CSS. But I don't see anything about how to make the index.html file itself upgrade when a deployment happens. And we do see in IE that, even though the HTML content has changed, it still uses the old HTML content.
One solution I find from google is to
<meta http-equiv="Cache-control" content="no-cache">
However, I am not sure whether this is the best solution?
| How to make index.html not to cache when the site contents are changes in AngularJS website? |
In the recent versions of Picasso there is a new method to invalidate the cache, without any workarounds, so I think that the custom PicassoTools class mentioned earlier is now obsolete in this case:
Picasso.with(getActivity()).invalidate(file);
|
I load an image from disk using Picasso, e.g., Picasso.with(ctx).load(new File("/path/to/image")).into(imageView), but whenever I save a new image in that file, and refresh my ImageView, Picasso still has the bitmap cached.
Is it possible to invalidate the cache in Picasso?
| Invalidate cache in Picasso |
Cache misses. When N int objects are allocated back-to-back, the memory reserved to hold them tends to be in a contiguous chunk. So crawling over the list in allocation order tends to access the memory holding the ints' values in sequential, contiguous, increasing order too.
Shuffle it, and the access pattern when crawling over the list is randomized too. Cache misses abound, provided there are enough different int objects that they don't all fit in cache.
At x==1 and x==2 (r==10 and r==100), CPython happens to treat such small ints as singletons, so, e.g., despite that you have 10 million elements in the list, at r==100 it contains only (at most) 100 distinct int objects. All the data for those fit in cache simultaneously.
Beyond that, though, you're likely to get more, and more, and more distinct int objects. Hardware caches become increasingly useless then when the access pattern is random.
Illustrating:
>>> from random import randint, seed
>>> seed(987987987)
>>> for x in range(1, 9):
... r = 10 ** x
... js = [randint(1, r) for _ in range(10_000_000)]
... unique = set(map(id, js))
... print(f"{r:12,} {len(unique):12,}")
...
10 10
100 100
1,000 7,440,909
10,000 9,744,400
100,000 9,974,838
1,000,000 9,997,739
10,000,000 9,999,908
100,000,000 9,999,998
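The small-int singleton behaviour described above can also be seen directly (a CPython implementation detail, not a language guarantee):

```python
# CPython caches ints in a small range (-5..256) as singletons;
# int(...) of a string avoids compile-time constant folding.
small_a = int("100")
small_b = int("100")
big_a = int("10000")
big_b = int("10000")

print(small_a is small_b)  # True in CPython: one shared 100 object
print(big_a is big_b)      # typically False: two distinct objects
```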
|
In the following code, I create two lists with the same values: one list unsorted (s_not), the other sorted (s_yes). The values are created by randint(). I run some loop for each list and time it.
import random
import time
for x in range(1,9):
r = 10**x # do different val for the bound in randint()
m = int(r/2)
print("For rand", r)
# s_not is non sorted list
s_not = [random.randint(1,r) for i in range(10**7)]
# s_yes is sorted
s_yes = sorted(s_not)
# do some loop over the sorted list
start = time.time()
for i in s_yes:
if i > m:
_ = 1
else:
_ = 1
end = time.time()
print("yes", end-start)
# do the same to the unsorted list
start = time.time()
for i in s_not:
if i > m:
_ = 1
else:
_ = 1
end = time.time()
print("not", end-start)
print()
With output:
For rand 10
yes 1.0437555313110352
not 1.1074268817901611
For rand 100
yes 1.0802974700927734
not 1.1524150371551514
For rand 1000
yes 2.5082249641418457
not 1.129960298538208
For rand 10000
yes 3.145440101623535
not 1.1366300582885742
For rand 100000
yes 3.313387393951416
not 1.1393756866455078
For rand 1000000
yes 3.3180911540985107
not 1.1336982250213623
For rand 10000000
yes 3.3231537342071533
not 1.13503098487854
For rand 100000000
yes 3.311596393585205
not 1.1345293521881104
So, when increasing the bound in the randint(), the loop over the sorted list gets slower. Why?
| Why is Python list slower when sorted? |
As of SQL Server 2012 you no longer have to go through the hassle of deleting the bin file (which causes other side effects). You should be able to press the Delete key within the MRU list of the Server Name dropdown in the Connect to Server dialog. This is documented in this Connect item and this blog post.
To be clear, since a couple of people seemed to have trouble with this for months: You need to click on the Server name: dropdown, and down-arrow or hover with your mouse until the server you want to remove is selected, and then press Delete. In this screen shot, I'm going to press Delete now, and it will remove the server ADMIN:SHELDON\SQL2014 from my MRU list. Note that because I merely hovered with my mouse, this is not even the server that is showing in the Server name: text box.
Note that if you have multiple entries for a single server name (e.g. one with Windows and one with SQL Auth), you won't be able to tell which one you're deleting.
|
Or, to put it another way, where is SqlStudio.bin for SQL Server 2012? It doesn't seem to be in the place that would be expected by looking at this other SO question.
| How to remove cached server names from the Connect to Server dialog? |
Insert will overwrite an existing cached value with the same Key; Add fails (does nothing) if there is an existing cached value with the same key. So there's a case for saying you should always use Insert since the first time the code runs it will put your object into the cache and when it runs subsequently it will update the cached value.
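In dictionary terms, the difference can be sketched like this (hypothetical helpers; the real Cache methods also take expiration and dependency arguments):

```python
def cache_insert(cache, key, value):
    cache[key] = value            # overwrites any existing entry

def cache_add(cache, key, value):
    cache.setdefault(key, value)  # leaves an existing entry untouched
```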
|
What is the difference between the Cache.Add() and Cache.Insert() methods?
In which situations should I use each one?
| ASP.NET cache add vs insert |
Caching is normally controlled through setting headers on the content when it is returned by the server. If you're already doing that and IE is ignoring them and caching anyway, the only way to get around it would be to use one of the cache busting techniques mentioned in your question. In the case of an API, it would likely be better to make sure you are using proper cache headers before attempting any of the cache busting techniques.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ
Cache-control: no-cache
Cache-control: no-store
Pragma: no-cache
Expires: 0
As per your question, you mention "modern REST API". While I do agree that using the "wrong" HTTP verb or a useless parameter are hacky solutions, and should not work for clean and nice implementations, the actual designed (thus, preferred) way of setting cache controls is server-side with the HTTP headers mentioned in this answer. (comment by Alex Mazzariol)
|
I realize this question has been asked, but in modern REST practice none of the previous iterations of this question nor their answers are accurate or sufficient. A definitive answer to this question is needed.
The problem is well known, IE (even 11) caches AJAX requests, which is really really dumb. Everyone understands this.
What is not well understood is that none of the previous answers are sufficient. Every previous instance of this question on SO is marked as sufficiently answered by either:
1) Using a unique query string parameter (such as a unix timestamp) on each request, so as to make each request URL unique, thereby preventing caching.
-- or --
2) using POST instead of GET, as IE does not cache POST requests except in certain unique circumstances.
-- or --
3) using 'cache-control' headers passed by the server.
IMO in many situations involving modern REST API practice, none of these answers are sufficient or practical. A REST API will have completely different handlers for POST and GET requests, with completely different behavior, so POST is typically not an appropriate or correct alternative to GET. As well, many APIs have strict validation around them, and for numerous reasons, will generate 500 or 400 errors when fed query string parameters that they aren't expecting. Lastly, often we are interfacing with 3rd-party or otherwise inflexible REST APIs where we do not have control over the headers provided by the server response, and adding cache control headers is not within our power.
So, the question is:
Is there really nothing that can be done on the client-side in this situation to prevent I.E. from caching the results of an AJAX GET request?
| How to avoid AJAX caching in Internet Explorer 11 when additional query string parameters or using POST are not an option |
You can use the OutputCacheAttribute to control server and/or browser caching for specific actions or all actions in a controller.
Disable for all actions in a controller
[OutputCacheAttribute(VaryByParam = "*", Duration = 0, NoStore = true)] // will be applied to all actions in MyController, unless those actions override with their own decoration
public class MyController : Controller
{
// ...
}
Disable for a specific action:
public class MyController : Controller
{
[OutputCacheAttribute(VaryByParam = "*", Duration = 0, NoStore = true)] // will disable caching for Index only
public ActionResult Index()
{
return View();
}
}
If you want to apply a default caching strategy to all actions in all controllers, you can add a global action filter by editing your global.asax.cs and looking for the RegisterGlobalFilters method. This method is added in the default MVC application project template.
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new OutputCacheAttribute
{
VaryByParam = "*",
Duration = 0,
NoStore = true,
});
// the rest of your global filters here
}
This will cause it to apply the OutputCacheAttribute specified above to every action, which will disable server and browser caching. You should still be able to override this no-cache by adding OutputCacheAttribute to specific actions and controllers.
|
How to disable automatic browser caching from asp.Net mvc application?
I am having a problem with caching: it caches all links. Sometimes it redirects to the DEFAULT INDEX PAGE automatically,
which gets stored in the cache, and then every time I click that link it redirects me to the DEFAULT INDEX PAGE.
So does anyone know how to manually disable the caching option in ASP.NET MVC 4?
| ASP.NET MVC how to disable automatic caching option? |
WeakHashMap isn't useful as a cache, at least the way most people think of it. As you say, it uses weak keys, not weak values, so it's not designed for what most people want to use it for (and, in fact, I've seen people use it for, incorrectly).
WeakHashMap is mostly useful to keep metadata about objects whose lifecycle you don't control. For example, if you have a bunch of objects passing through your class, and you want to keep track of extra data about them without needing to be notified when they go out of scope, and without your reference to them keeping them alive.
A simple example (and one I've used before) might be something like:
WeakHashMap<Thread, SomeMetaData>
where you might keep track of what various threads in your system are doing; when the thread dies, the entry will be removed silently from your map, and you won't keep the Thread from being garbage collected if you're the last reference to it. You can then iterate over the entries in that map to find out what metadata you have about active threads in your system.
See WeakHashMap in not a cache! for more information.
For the type of cache you're after, either use a dedicated cache system (e.g. EHCache) or look at Guava's MapMaker class; something like
new MapMaker().weakValues().makeMap();
will do what you're after, or if you want to get fancy you can add timed expiration:
new MapMaker().weakValues().expiration(5, TimeUnit.MINUTES).makeMap();
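Python's standard library has exactly the weak-values shape being asked about, which makes the contrast easy to demonstrate:

```python
import gc
import weakref

class ExpensiveObject:
    pass

cache = weakref.WeakValueDictionary()
obj = ExpensiveObject()
cache["some_key"] = obj

present_while_referenced = "some_key" in cache  # strong ref still alive
del obj
gc.collect()  # drop the last strong reference (immediate in CPython anyway)
present_after_collect = "some_key" in cache     # entry vanished with value
```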
|
Java's WeakHashMap is often cited as being useful for caching. It seems odd though that its weak references are defined in terms of the map's keys, not its values. I mean, it's the values I want to cache, and which I want to get garbage collected once no-one else besides the cache is strongly referencing them, no?
In which way does it help to hold weak references to the keys? If you do a ExpensiveObject o = weakHashMap.get("some_key"), then I want the cache to hold on to 'o' until the caller doesn't hold the strong reference anymore, and I don't care at all about the string object "some_key".
Am I missing something?
| Java's WeakHashMap and caching: Why is it referencing the keys, not the values? |
One important difference is, that items in the cache can expire (will be removed from cache) after a specified amount of time. Items put into a session will stay there, until the session ends.
ASP.NET can also remove items from cache when the amount of available memory gets small.
Another difference: the session state can be kept external (state server, SQL server) and shared between several instances of your web app (for load balancing). This is not the case with the cache.
Besides these differences (as others have noted): session is per user/session while cache is per application.
|
What is the difference between storing a datatable in Session vs Cache? What are the advantages and disadvantages?
So, if it is a simple search page which returns result in a datatable and binds it to a gridview. If user 'a' searches and user 'b' searches, is it better to store it in Session since each user would most likely have different results or can I still store each of their searches in Cache or does that not make sense since there is only one cache. I guess basically what I am trying to say is that would the Cache be overwritten.
| Advantages of Cache vs Session |
NPM cache is located in ~/.npm but in most CIs you can only cache things inside your working directory.
What you can do to circumvent this is change the cache directory to your current directory with npm set cache .npm. The NPM cache will now be located in ./.npm and you can cache this folder between CI jobs.
Example with GitLab CI:
my-super-job:
image: node:13-alpine
script:
- npm set cache .npm
- npm ci
cache:
paths:
- .npm
EDIT: Just discovered that you can set the config as a command line flag so npm ci --cache .npm should do the same
|
For now, npm ci is the most common way to install node modules when using CI.
But it is honestly really slow.
Is there a way to speedup npm ci using cache or do not fully remove existing packages (whole node_modules folder)?
| Is there a way to speedup npm ci using cache? |
Your file will probably be cached - but it depends...
Different browsers have slightly different behaviors - most noticeably when dealing with ambiguous/limited caching headers emanating from the server. If you send a clear signal, the browsers obey, virtually all of the time.
The greatest variance by far, is in the default caching configuration of different web servers and application servers.
Some (e.g. Apache) are likely to serve known static file types with HTTP headers encouraging the browser to cache them, while other servers may send no-cache commands with every response - regardless of filetype.
...
So, first off, read some of the excellent HTTP caching tutorials out there. HTTP Caching & Cache-Busting
for Content Publishers was a real eye opener for me :-)
Next, install and fiddle around with Firebug and the Live HTTP Headers add-on, to find out which headers your server is actually sending.
Then read your web server docs to find out how to tweak them to perfection (or talk your sysadmin into doing it for you).
...
As to what happens when the browser is restarted, it depends on the browser and the user configuration.
As a rule of thumb, expect the browser to be more likely to check in with the server after each restart, to see if anything has changed (see If-Modified-Since and If-None-Match).
If you configure your server correctly, it should be able to return a super-short 304 Not Modified (costing very little bandwidth) and after that the browser will use the cache as normal.
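The conditional-request handshake described above boils down to a simple server-side decision. A sketch (the header names are real HTTP; the function itself is made up for illustration):

```python
def conditional_status(request_headers, etag, last_modified):
    """Return 304 if the client's cached copy is still valid, else 200."""
    if etag is not None and request_headers.get("If-None-Match") == etag:
        return 304
    if last_modified is not None and \
            request_headers.get("If-Modified-Since") == last_modified:
        return 304
    return 200
```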
|
Quick question regarding CSS and the browser. I tried searching SO and found some similar posts, but nothing definitive.
I use one or two CSS files in my web projects. These are referenced in the HEAD of my web pages. Once I hit one of my pages, does the CSS get cached so that it's not re-downloaded with each request? I hope so. Do IE, Firefox and Safari handle this differently? If the browser is closed, is the CSS refreshed on the first visit when a new browser instance is opened?
| Browser Caching of CSS files |
It's done this way so that different cores modifying different fields won't have to bounce the cache line containing both of them between their caches. In general, for a processor to access some data in memory, the entire cache line containing it must be in that processor's local cache. If it's modifying that data, that cache entry usually must be the only copy in any cache in the system (Exclusive mode in the MESI/MOESI-style cache coherence protocols). When separate cores try to modify different data that happens to live on the same cache line, and thus waste time moving that whole line back and forth, that's known as false sharing.
In the particular example you give, one core can be enqueueing an entry (reading (shared) buffer_ and writing (exclusive) only enqueue_pos_) while another dequeues (shared buffer_ and exclusive dequeue_pos_) without either core stalling on a cache line owned by the other.
The padding at the beginning means that buffer_ and buffer_mask_ end up on the same cache line, rather than split across two lines and thus requiring double the memory traffic to access.
I'm unsure whether the technique is entirely portable. The assumption is that each cacheline_pad_t will itself be aligned to a 64 byte (its size) cache line boundary, and hence whatever follows it will be on the next cache line. So far as I know, the C and C++ language standards only require this of whole structures, so that they can live in arrays nicely, without violating alignment requirements of any of their members. (see comments)
The attribute approach would be more compiler specific, but might cut the size of this structure in half, since the padding would be limited to rounding up each element to a full cache line. That could be quite beneficial if one had a lot of these.
The same concept applies in C as well as C++.
|
In Dmitry Vyukov's excellent bounded mpmc queue written in C++
See: http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
He adds some padding variables. I presume this is to make it align to a cache line for performance.
I have some questions.
Why is it done in this way?
Is it a portable method that will always work?
In what cases would it be best to use __attribute__ ((aligned (64))) instead?
Why would padding before a buffer pointer help with performance? Isn't just the pointer loaded into the cache, so it's really only the size of a pointer?
static size_t const cacheline_size = 64;
typedef char cacheline_pad_t [cacheline_size];
cacheline_pad_t pad0_;
cell_t* const buffer_;
size_t const buffer_mask_;
cacheline_pad_t pad1_;
std::atomic<size_t> enqueue_pos_;
cacheline_pad_t pad2_;
std::atomic<size_t> dequeue_pos_;
cacheline_pad_t pad3_;
Would this concept work under gcc for c code?
| How and when to align to cache line size? |
The issue is that if the endpoint is authenticated, then by definition the output varies by the user. So basically all external output cache providers are no longer an option.
Your options are either:
Unprotect the endpoints if they could allow anonymous safely
Use local caching that can vary by user
Split up your endpoints so that you use child actions, and/or AJAX calls for protected data. This can allow you to make most things public, but keep the actual data un-cached and protected
Cache at a different tier than output. Is your app server request/response and view rendering really your scale pain point? Or is it more likely the DB, and any service tier calculations? Caching in those layers are easy, and can vary by user easily as needed.
|
After switching an ASP.NET MVC 5 application to Azure Redis (Microsoft.Web.RedisOutputCacheProvider Nuget package) I was surprised to see that OutputCacheAttribute when set to use either OutputCacheLocation.Any or OutputCacheLocation.ServerAndClient
[Route("Views/Orders")]
[OutputCache(Duration = 600, Location = OutputCacheLocation.Any)]
public ActionResult Orders()
{
}
randomly generates the following error:
When using a custom output cache provider like 'RedisOutputCache',
only the following expiration policies and cache features are
supported: file dependencies, absolute expirations, static
validation callbacks and static substitution callbacks.
which is weird as the declaration above clearly defines just absolute expiration without any advanced stuff like varybyparam. After some searching it looks like there is no fix to this issue which is extremely frustrating. Are there any external cache providers compatible with ASP.NET caching mechanics? If not, how do you implement server side HTTP output caching in cluster scenarios in MVC/WebApi apps?
| ASP.NET MVC OutputCacheAttribute with external cache providers |
To know the sizes, you need to look it up in the documentation for the processor; afaik there is no programmatic way to do it. On the plus side, most cache lines are of a standard size, based on Intel's standards. On x86 cache lines are 64 bytes; however, to prevent false sharing, you need to follow the guidelines of the processor you are targeting (Intel has some special notes on its NetBurst-based processors). Generally you need to align to 64 bytes for this (Intel states that you should also avoid crossing 16-byte boundaries).
To do this in C or C++ requires that you use the standard aligned_alloc function or one of the compiler-specific specifiers such as __attribute__((aligned(64))) or __declspec(align(64)). To pad between members in a struct to split them onto different cache lines, you need to insert a member big enough to align it to the next 64-byte boundary.
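On Linux the line size can also be queried at runtime via sysconf. Here is a Python sketch of the same query C code would make with sysconf(_SC_LEVEL1_DCACHE_LINESIZE); the 64-byte fallback is an assumption based on common x86 hardware:

```python
import os

def cache_line_size(default=64):
    """Return the L1 data cache line size in bytes, or a fallback."""
    try:
        size = os.sysconf("SC_LEVEL1_DCACHE_LINESIZE")
    except (ValueError, OSError):
        return default  # name unknown on this platform
    return size if size > 0 else default

print(cache_line_size())
```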
|
To prevent false sharing, I want to align each element of an array to a cache line. So first I need to know the size of a cache line, so I assign each element that amount of bytes. Secondly I want the start of the array to be aligned to a cache line.
I am using Linux and an 8-core x86 platform. First, how do I find the cache line size? Secondly, how do I align to a cache line in C? I am using the gcc compiler.
So the structure would be following for example, assuming a cache line size of 64.
element[0] occupies bytes 0-63
element[1] occupies bytes 64-127
element[2] occupies bytes 128-191
and so on, assuming of-course that 0-63 is aligned to a cache line.
| Aligning to cache line and knowing the cache line size |
Make sure you have the disable cache checkbox unchecked/disabled in the Developer Tools.
In case you're stuck: in the developer tools, click the gear icon at the bottom-right to find it. (comment by Kayla)
|
When Chrome loads my website, it checks the server for updated versions of files before it shows them. (Images/Javascript/CSS) It gets a 304 from the server because I never edit external javascript, css or images.
What I want it to do, is display the images without even checking the server.
Here are the headers:
Connection:keep-alive
Date:Tue, 03 Aug 2010 21:39:32 GMT
ETag:"2792c73-b1-48cd0909d96ed"
Expires:Thu, 02 Sep 2010 21:39:32 GMT
Server:Apache/Nginx/Varnish
How do I make it not check the server?
| Chrome doesn't cache images/js/css |
You should add an is_file() check, because sub-directories could reside in the directory you're checking.
Also, as this answer suggests, you should replace the pre-calculated seconds with a more expressive notation.
$files = glob(cacheme_directory() . '*');
$threshold = strtotime('-2 day');
foreach ($files as $file) {
if (is_file($file)) {
if ($threshold >= filemtime($file)) {
unlink($file);
}
}
}
Alternatively you could also use the DirectoryIterator, as shown in this answer. In this simple case it doesn't really offer any advantages, but it would be OOP way.
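For comparison, the same sweep in Python (the directory argument and the two-day threshold mirror the PHP version):

```python
import time
from pathlib import Path

def purge_old_files(directory, max_age_seconds=2 * 24 * 3600):
    """Delete regular files older than the threshold; skip sub-directories."""
    threshold = time.time() - max_age_seconds
    for entry in Path(directory).iterdir():
        if entry.is_file() and entry.stat().st_mtime <= threshold:
            entry.unlink()
```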
|
Just curious:
$files = glob(cacheme_directory() . '*');
foreach ($files as $file) {
$filemtime = filemtime($file);
if (time() - $filemtime >= 172800) {
unlink($file);
}
}
I just want to make sure if the code is correct or not. Thanks.
| The correct way to delete all files older than 2 days in PHP |
You might want to use private_no_expire instead of private, but set a long expiration for content you know is not going to change and make sure you process if-modified-since and if-none-match requests similar to Emil's post.
$tsstring = gmdate('D, d M Y H:i:s ', $timestamp) . 'GMT';
$etag = $language . $timestamp;
$if_modified_since = isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) ? $_SERVER['HTTP_IF_MODIFIED_SINCE'] : false;
$if_none_match = isset($_SERVER['HTTP_IF_NONE_MATCH']) ? $_SERVER['HTTP_IF_NONE_MATCH'] : false;
if ((($if_none_match && $if_none_match == $etag) || (!$if_none_match)) &&
($if_modified_since && $if_modified_since == $tsstring))
{
header('HTTP/1.1 304 Not Modified');
exit();
}
else
{
header("Last-Modified: $tsstring");
header("ETag: \"{$etag}\"");
}
Where $etag could be a checksum based on the content or the user ID, language, and timestamp, e.g.
$etag = md5($language . $timestamp);
|
I have a PHP 5.1.0 website (actually it's 5.2.9 but it must also run on 5.1.0+).
Pages are generated dynamically but many of them are mostly static. By static I mean the content don't change but the "template" around the content can change over time.
I know they are several cache systems and PHP frameworks already out there, but my host don't have APC or Memcached installed and I'm not using any framework for this particular project.
I want the pages to be cached (I think by default PHP "disallow" cache). So far I'm using:
session_cache_limiter('private'); //Aim at 'public'
session_cache_expire(180);
header("Content-type: $documentMimeType; charset=$documentCharset");
header('Vary: Accept');
header("Content-language: $currentLanguage");
I read many tutorials but I can't find something simple (I know cache is something complex, but I only need some basic stuff).
What are "must" have headers to send to help caching?
| How to use HTTP cache headers with PHP |
In Chrome Developer Tools switch to the Network tab and on the Size column it will either give you the size of the downloaded content or say (from disk/memory cache).
|
In Google Chrome, how you can check which files are served from the browser cache, and which comes from the server?
| Check whether network response is coming from server or Chrome cache |
The time the browser considers a cached response fresh is usually relative to when it was last modified:
Since origin servers do not always provide explicit expiration times, a cache MAY assign a heuristic expiration time when an explicit time is not specified, employing algorithms that use other header field values (such as the Last-Modified time)... If the response has a Last-Modified header field (Section 2.2 of [RFC7232]), caches are encouraged to use a heuristic expiration value that is no more than some fraction of the interval since that time. A typical setting of this fraction might be 10%. [https://www.rfc-editor.org/rfc/rfc7234#section-4.2.2]
The details of how Chrome (and other browsers) calculate that value, can be found in the source code (An example from Chrome v49). It would appear that Chrome also calculates the value relative to the Last-Modified header.
(Credit to this post)
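The 10% heuristic can be sketched numerically; the following Python is illustrative only (the function name and the fraction default are my own, and browsers clamp and cap the computed lifetime in implementation-specific ways):

```python
from datetime import datetime, timedelta

def heuristic_freshness(last_modified, now, fraction=0.10):
    """Heuristic freshness lifetime: a fraction (typically 10%) of the
    interval since Last-Modified, per RFC 7234 section 4.2.2."""
    return (now - last_modified) * fraction

# A resource last modified 10 days ago is treated as fresh for ~1 day.
lifetime = heuristic_freshness(datetime(2016, 8, 16), datetime(2016, 8, 26))
print(lifetime)  # → 1 day, 0:00:00
```

So with only a Last-Modified header, the older a resource already is, the longer a browser may serve it from cache without revalidating.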
What if there is no Last-Modified header?
– Phil
Jan 25, 2017 at 10:22
It'll depend on the browser I think. The code sample I noted from Chrome falls back on 'max_age_value' and Expires header, I think...
– Jon
Jan 26, 2017 at 23:39
|
|
We have been having a problem with Chrome caching a resource on our Glassfish server. The expires and no-cache headers are not being sent and the resource (an approximately 4 MB SWF file) is being cached by Chrome -- despite the presence of the Last-Modified header.
Sometimes Chrome will get a 304 code, and other times it simply does a 200 (from cache). I understand the 304 -- Chrome is likely checking the most recent Last-Modified date with the cached version to decide. But other times it does the 200 (from cache), which does not return any header information and appears that Chrome is simply assuming the file hasn't been modified instead of checking.
Google's own site states the following:
HTTP/S supports local caching of static resources by the browser. Some
of the newest browsers (e.g. IE 7, Chrome) use a heuristic to decide
how long to cache all resources that don't have explicit caching
headers.
But this does not provide a definitive answer. Is this heuristic published anywhere? I realize there may not be a fixed answer (like 30 days), but some general guidelines would be useful. Furthermore, if Last-Modified is being set, I don't understand why Chrome isn't bothering to check that first.
| How long does Google Chrome cache a resource if expires and/or no-cache headers are not set? |
org.springframework.web.servlet.support.WebContentGenerator, which is the base class for all Spring controllers has quite a few methods dealing with cache headers:
/* Set whether to use the HTTP 1.1 cache-control header. Default is "true".
* <p>Note: Cache headers will only get applied if caching is enabled
* (or explicitly prevented) for the current request. */
public final void setUseCacheControlHeader();
/* Return whether the HTTP 1.1 cache-control header is used. */
public final boolean isUseCacheControlHeader();
/* Set whether to use the HTTP 1.1 cache-control header value "no-store"
* when preventing caching. Default is "true". */
public final void setUseCacheControlNoStore(boolean useCacheControlNoStore);
/* Cache content for the given number of seconds. Default is -1,
* indicating no generation of cache-related headers.
* Only if this is set to 0 (no cache) or a positive value (cache for
* this many seconds) will this class generate cache headers.
* The headers can be overwritten by subclasses, before content is generated. */
public final void setCacheSeconds(int seconds);
They can either be invoked within your controller prior to content generation or specified as bean properties in Spring context.
|
In an annotation-based Spring MVC controller, what is the preferred way to set cache headers for a specific path?
| How do you set cache headers in Spring MVC? |
Yes, half the data on server A and half on server B would be a distributed cache. There are many methods of distributing the data, though some sort of hashing of the keys seems to be the most popular.
The terms server and node are generally interchangeable. A node is generally a single unit of some collection, often called a cluster. A server is generally a single piece of hardware. In Erlang, you can run multiple instances of the Erlang runtime on a single server, and thus you'd have multiple Erlang nodes... but generally you'd want one node per server for better scheduling. (For non-distributed languages and platforms you have to manage your processes based on your needs.)
If a server goes down, and it is a cache server, then the data would have to come from its original source. E.g. a cache is usually a memory-based database designed for quick retrieval. The data in the cache sticks around only so long as it's being used regularly, and eventually will be purged. But for distributed systems where you need persistence, a common technique is to have multiple copies. E.g. you have servers A, B, C, D, E, and F. Data 1 would go on A, with copies on B and C. Couchbase and Riak do this. Data 2 could go on B, with copies on C and D. This way if any one server goes down you still have two copies.
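A minimal sketch of that placement scheme (Python; the server names, the use of md5, and the replica count of 3 are illustrative assumptions — production systems typically use consistent hashing so adding or removing a server only remaps a fraction of the keys):

```python
import hashlib

SERVERS = ["A", "B", "C", "D", "E", "F"]
REPLICAS = 3  # one primary copy plus two backups

def servers_for(key):
    """Hash the key to pick a primary server, then place copies on the
    next servers around the ring, so losing any one server loses no data."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(REPLICAS)]

print(servers_for("data-1"))  # three distinct servers, always the same ones
```

Any client can compute the same mapping locally, which is why no central directory is needed to find a key.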
|
I am confused about the concept of Distributed Cache. I kinda know what it is from google search. A distributed cache may span multiple servers so that it can grow in size and in transactional capacity. However, I do not really understand how it works or how it distribute the data.
For example, let's say we have Data 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 2 cache servers A and B. If we use distributed cache, then one of possible solution is that Data 1, 3, 5, 7, 9 are stored in Cache Server A, and 2, 4, 6, 8, 10 are stored in cache Server B.
So is this correct or did I misunderstand it?
Second question is that I usually heard the word server node. What is it? In the above example, Server A is a server node, right?
Third question, if a server (let's say Server A) goes down, what can we do about that? I mean if my example above is correct, we cannot get the data 1, 3, 5, 7, 9 from cache when Server A is down, then what could Cache Server do in this case?
| What is a distributed cache? |
I'm assuming we're talking about the production environment.
When you change any of your javascripts or stylesheets in the production environment, you need to run rake assets:precompile; this task compiles and compresses the various .js and .css files and creates the application.js and application.css files that is loaded by your views.
It's possible that if you replaced jquery.autoresize.js with a version with an older timestamp, the precompile step might skip it, thinking the compiled version is up-to-date. You can avoid that by running rake assets:clean first, forcing it to rebuild everything in the public/assets directory from scratch.
|
I'm starting a new project in Rails, and it looks like the application.js manifest file is doing something funny with the javascripts that I reference - does it cache those files as part of the asset pipeline?
Here's what happened. I added a javascript file named jquery.autoresize.js to the vendor/assets/javascripts folder, and then referenced the file in the application.js manifest like this:
//= require jquery.autoresize.js
Then I started up the rails server. But after navigating around in my app, I realized that I had accidentally added the wrong version of the jquery.autoresize.js file. So, I deleted that file and then added the correct version to the vendor/assets/javascripts folder. But, to my horror, when I reloaded the page, it is still loading the old javascript file.
I tried emptying my browser cache, then exiting and restarting the Rails server, but to no avail. I hacked a solution together by simply renaming my javascript file and referencing the new name, which worked fine. But there has got to be a better solution to this.
Does the new asset pipeline cache the files you reference somehow? If so, how can I clear that cache? Thanks for any help!
| Clear the cache from the Rails asset pipeline |
There are two types of cache policies you can use:
CacheItemPolicy.AbsoluteExpiration will expire the entry after a set amount of time.
CacheItemPolicy.SlidingExpiration will expire the entry if it hasn't been accessed in a set amount of time.
The ObjectCache Add() overload you're using treats it as an absolute expiration, which means it'll expire after 1 day, regardless of how many times you access it. You'll need to use one of the other overloads. Here's how you'd set a sliding expiration (it's a bit more complicated):
CacheItem item = cache.GetCacheItem("item");
if (item == null) {
CacheItemPolicy policy = new CacheItemPolicy {
SlidingExpiration = TimeSpan.FromDays(1)
};
item = new CacheItem("item", someData);
cache.Set(item, policy);
}
You change the TimeSpan to the appropriate cache time that you want.
|
If I use an ObjectCache and add an item like so:
ObjectCache cache = MemoryCache.Default;
string o = "mydata";
cache.Add("mykey", o, DateTime.Now.AddDays(1));
I understand the object will expire in 1 day. But if the object is accessed 1/2 a day later using:
object mystuff = cache["mykey"];
Does this reset the timer so it's now 1 day since the last access of the entry with the key "mykey", or it still 1/2 a day until expiry?
If the answer is no is there is a way to do this I would love to know.
| .NET Caching how does Sliding Expiration work? |
Clear Cache in Safari version 7 on Mac OSX
You have two options to clear the cache in Safari version 7:
Use the "Empty Caches" option
Select which items you want to clear by using the "Reset" option
|
|
In Safari 7, the main html file with a manifest is loadable when offline, but none of the external resources are loaded, even if they're listed in the manifest file as cached. Safari's resource pane lists the files as in the application cache, but it will not load them. I've tried an extremely simple test, checked MIME type of the manifest file, renamed the manifest file, and tried other demos. Here's an example that works fine on Chrome, but on Safari it will not load the sticky image when offline: http://htmlfive.appspot.com/static/stickies.html
This is the same problem described in AppCache misbehaving in Safari, firefox, but I think that question doesn't make the problem as clear, and I wanted to provide a question with a concrete demo. Is there a work-around, or does Safari 7 totally not support application cache beyond the primary html file? Thanks!
| Safari 7 application cache does not work |
Open DevTools
Open Settings (bottom right or use F1 shortcut)
Check Disable cache (while DevTools is open)
https://developers.google.com/chrome-developer-tools/docs/settings#general
|
I'm trying to work in my local server but I have to clear my cache every time if I want to see changes on the css rules.
Is there any way to control the Google Chrome cache?
| Google chrome css doesn't update unless clear cache |
Python 2.7 and 3.1 have OrderedDict and there are pure-Python implementations for earlier Pythons.
from collections import OrderedDict
class LimitedSizeDict(OrderedDict):
def __init__(self, *args, **kwds):
self.size_limit = kwds.pop("size_limit", None)
OrderedDict.__init__(self, *args, **kwds)
self._check_size_limit()
def __setitem__(self, key, value):
OrderedDict.__setitem__(self, key, value)
self._check_size_limit()
def _check_size_limit(self):
if self.size_limit is not None:
while len(self) > self.size_limit:
self.popitem(last=False)
You would also have to override other methods that can insert items, such as update. The primary use of OrderedDict is so you can control what gets popped easily, otherwise a normal dict would work.
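The eviction behaviour can be demonstrated with a plain OrderedDict, since popitem(last=False) removing the oldest insertion is all the size check does:

```python
from collections import OrderedDict

d = OrderedDict()
size_limit = 2
for key, value in [("a", 1), ("b", 2), ("c", 3)]:
    d[key] = value
    while len(d) > size_limit:   # the same check _check_size_limit runs
        d.popitem(last=False)    # FIFO eviction: drop the oldest key

print(list(d))  # → ['b', 'c']
```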
|
I'd like to work with a dict in python, but limit the number of key/value pairs to X. In other words, if the dict is currently storing X key/value pairs and I perform an insertion, I would like one of the existing pairs to be dropped. It would be nice if it was the least recently inserted/accesses key but that's not completely necessary.
If this exists in the standard library please save me some time and point it out!
| How to limit the size of a dictionary? |
I found that if you append the last modified timestamp of the file onto the end of the URL the browser will request the files when it is modified. For example in PHP:
function urlmtime($url) {
$parsed_url = parse_url($url);
$path = $parsed_url['path'];
if ($path[0] == "/") {
$filename = $_SERVER['DOCUMENT_ROOT'] . "/" . $path;
} else {
$filename = $path;
}
if (!file_exists($filename)) {
// If not a file then use the current time
$lastModified = date('YmdHis');
} else {
$lastModified = date('YmdHis', filemtime($filename));
}
if (strpos($url, '?') === false) {
$url .= '?ts=' . $lastModified;
} else {
$url .= '&ts=' . $lastModified;
}
return $url;
}
function include_css($css_url, $media='all') {
// According to Yahoo, using link allows for progressive
// rendering in IE whereas @import url($css_url) does not
echo '<link rel="stylesheet" type="text/css" media="' .
$media . '" href="' . urlmtime($css_url) . '">'."\n";
}
function include_javascript($javascript_url) {
echo '<script type="text/javascript" src="' . urlmtime($javascript_url) .
'"></script>'."\n";
}
|
CSS and Javascript files don't change very often, so I want them to be cached by the web browser. But I also want the web browser to see changes made to these files without requiring the user to clear their browser cache. Also want a solution that works well with a version control system such as Subversion.
Some solutions I have seen involve adding a version number to the end of the file in the form of a query string.
Could use the SVN revision number to automate this for you: ASP.NET Display SVN Revision Number
Can you specify how you include the Revision variable of another file? That is, in the HTML file I can include the Revision number in the URL to the CSS or Javascript file.
In the Subversion book it says about Revision: "This keyword describes the last known revision in which this file changed in the repository".
Firefox also allows pressing CTRL+R to reload everything on a particular page.
To clarify I am looking for solutions that don't require the user to do anything on their part.
| How can I make the browser see CSS and Javascript changes? |
According to "What every programmer should know about memory", by Ulrich Drepper you can do the following on Linux:
Once we have a formula for the memory requirement we can compare it with the cache size. As mentioned before, the cache might be shared with multiple other cores. Currently {There definitely will sometime soon be a better way!} the only way to get correct information without hardcoding knowledge is through the /sys filesystem. In Table 5.2 we have seen what the kernel publishes about the hardware. A program has to find the directory:
/sys/devices/system/cpu/cpu*/cache
This is listed in Section 6: What Programmers Can Do.
He also describes a short test right under Figure 6.5 which can be used to determine L1D cache size if you can't get it from the OS.
There is one more thing I ran across in his paper: sysconf(_SC_LEVEL2_CACHE_SIZE) is a system call on Linux which is supposed to return the L2 cache size although it doesn't seem to be well documented.
|
|
Is there a way in C++ to determine the CPU's cache size? I have an algorithm that processes a lot of data and I'd like to break this data down into chunks such that they fit into the cache. Is this possible?
Can you give me any other hints on programming with cache-size in mind (especially in regard to multithreaded/multicore data processing)?
Thanks!
| C++ cache aware programming |
You should be able to use BitmapFactory:
File mSaveBit; // Your image file
String filePath = mSaveBit.getPath();
Bitmap bitmap = BitmapFactory.decodeFile(filePath);
mImageView.setImageBitmap(bitmap);
|
I am using Universal-Image-Loader and there is this functionality that access the file cache of the image from sd card. But I don't know how to convert the returned file cache into bitmap. Basically I just wanted to assign the bitmap to an ImageView.
File mSaveBit = imageLoader.getDiscCache().get(easyPuzzle);
Log.d("#ImageValue: ", ""+mSaveBit.toString());
mImageView.setImageBitmap(mSaveBit);
Error: "The method setImageBitmap(Bitmap) in the type ImageView is not applicable for the arguments (File)"
| Convert a File Object to Bitmap |
This is because of the way proxies are created for handling caching, transaction related functionality in Spring. This is a very good reference of how Spring handles it - Transactions, Caching and AOP: understanding proxy usage in Spring
In short, a self call bypasses the dynamic proxy and any cross cutting concern like caching, transaction etc which is part of the dynamic proxies logic is also bypassed.
The fix is to use AspectJ compile time or load time weaving.
|
I'm trying to call a @Cacheable method from within the same class:
@Cacheable(value = "defaultCache", key = "#id")
public Person findPerson(int id) {
return getSession().getPerson(id);
}
public List<Person> findPersons(int[] ids) {
List<Person> list = new ArrayList<Person>();
for (int id : ids) {
list.add(findPerson(id));
}
return list;
}
and hoping that the results from findPersons are cached as well, but the @Cacheable annotation is ignored, and findPerson method got executed everytime.
Am I doing something wrong here, or this is intended?
| Spring cache @Cacheable method ignored when called from within the same class |
EDIT: create-react-app v2 now have the service worker disabled by default
This answer only apply for CRA v1
This is probably because of your service worker.
If you look into your index.js file you can see
registerServiceWorker();
Never wondered what it did? If we take a look at the file it got imported from we can see
// In production, we register a service worker to serve assets from local cache.
// This lets the app load faster on subsequent visits in production, and gives
// it offline capabilities. However, it also means that developers (and users)
// will only see deployed updates on the "N+1" visit to a page, since previously
// cached resources are updated in the background.
// To learn more about the benefits of this model, read {URL}
// This link also includes instructions on opting out of this behavior.
If you want to remove the service worker, don't just delete the line. Import unregister and call it in your file instead of register.
import { unregister } from './registerServiceWorker';
and then call
unregister()
P.S. When you unregister, it will take at least one refresh to make it work
|
When I update my site, run npm run build, and upload the new files to the server, I still see the old version of my site.
Without React, I can see the new version of my site with cache-busting. I do this:
Previous file
<link rel="stylesheet" href="/css/styles.css">
New file
<link rel="stylesheet" href="/css/styles.css?abcde">
How can I do something like this or to achieve cache busting with create react app?
There are many threads in the GitHub of create react app about this but no one has a proper/simple answer.
| Cache busting with CRA React |
As far as I know, a forced update like this is not directly possible. You might be able to reduce the DNS downtime by reducing the TTL (Time-To-Live) value of the entries before changing them, if your name server service provider allows that.
Here's a guide for less painful DNS changes.
A fair warning, though - not all name servers between your client and the authoritative (origin) name server will enforce your TTL, they might have their own caching time.
|
I'm moving my web application to another server and in the next few days I'll refresh the DNS to point to the new IP location.
Unfortunately some browsers and SOs keep a DNS cache that will make users point to the old IP location. Some users are rookies and they'll not refresh the DNS cache manually and I know we'll lose a lot of them in the first weeks after this change.
Is there anyway to force this DNS cache to refresh so it'll be transparent for our final users?
| How to force DNS refresh for a website? |
If you read the links from @runmad you can see in the flow chart that if the HEAD of the file is unchanged it will still use the cached version when you set the cachePolicy.
In Swift 3 I had to do this to get it to work:
let config = URLSessionConfiguration.default
config.requestCachePolicy = .reloadIgnoringLocalCacheData
config.urlCache = nil
let session = URLSession(configuration: config)
That got a truly non-cached version of the file, which I needed for bandwidth estimation calculations.
|
In my iOS app, I am using NSURLSessionTask to download json data to my app. I discovered that when I call the url directly from the browser, I get an up to date json and when it's called from within the app, I get an older version of the json.
Is this due to caching? How can I tell NSURLSessionTask to not use caching.
This is the call I use:
NSURLSessionTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
Thanks!
| How to disable caching from NSURLSessionTask |
header("Cache-Control: no-cache, must-revalidate");
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
header("Content-Type: application/xml; charset=utf-8");
|
How to clear browser cache with php?
| How to clear browser cache with php? |
As binarygiant requested I am posting my comment as an answer. I have solved this problem by adding No-Cache headers to the response on server side. Note that you have to do this for GET requests only, other requests seems to work fine.
binarygiant posted how you can do this on node/express. You can do it in ASP.NET MVC like this:
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
public ActionResult Get()
{
// return your response
}
|
I currently use service/$resource to make ajax calls (GET in this case), and IE caches the calls so that fresh data cannot be retrieved from the server. I have used a technique I found by googling to create a random number and append it to the request, so that IE will not go to cache for the data.
Is there a better way than adding the cacheKill to every request?
factory code
.factory('UserDeviceService', function ($resource) {
return $resource('/users/:dest', {}, {
query: {method: 'GET', params: {dest: "getDevicesByUserID"}, isArray: true }
});
Call from the controller
$scope.getUserDevices = function () {
UserDeviceService.query({cacheKill: new Date().getTime()},function (data) {
//logic
});
}
| Better Way to Prevent IE Cache in AngularJS? |
Just throw an exception if the user is not found and catch it in client code when using the get(key) method.
new CacheLoader<ObjectId, User>() {
@Override
public User load(ObjectId k) throws Exception {
User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
if (u != null) {
return u;
} else {
throw new UserNotFoundException();
}
}
}
From CacheLoader.load(K) Javadoc:
Returns:
the value associated with key; must not be null
Throws:
Exception - if unable to load the result
Answering your doubts about caching null values:
Returns the value associated with key in this cache, first loading
that value if necessary. No observable state associated with this
cache is modified until loading completes.
(from LoadingCache.get(K) Javadoc)
If you throw an exception, load is not considered as complete, so no new value is cached.
EDIT:
Note that in Caffeine, which is sort of Guava cache 2.0 and "provides an in-memory cache using a Google Guava inspired API", you can return null from the load method:
Returns:
the value associated with key or null if not found
If you consider migrating, your data loader could freely return null when the user is not found.
|
I am using Guava to cache hot data. When the data does not exist in the cache, I have to get it from database:
public final static LoadingCache<ObjectId, User> UID2UCache = CacheBuilder.newBuilder()
//.maximumSize(2000)
.weakKeys()
.weakValues()
.expireAfterAccess(10, TimeUnit.MINUTES)
.build(
new CacheLoader<ObjectId, User>() {
@Override
public User load(ObjectId k) throws Exception {
User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
return u;
}
});
My problem is when the data does not exists in database, I want it to return null and to not do any caching. But Guava saves null with the key in the cache and throws an exception when I get it:
com.google.common.cache.CacheLoader$InvalidCacheLoadException:
CacheLoader returned null for key shisoft.
How do we avoid caching null values?
| How to avoid caching when values are null? |
By adding this meta tag:
<meta http-equiv="expires" content="0">
Setting the content to "0" tells the browsers to always load the page from the web server.
|
I have a webpage index.html hosted on a particular server. I have pointed example.com to example.com/index.html. So when I make changes in index.html and save it, and then try to open example.com, the changes are not reflected, because the webpages are being cached.
Then I manually refresh the page, and since it loads fresh copies and not from cache, it works fine. But I cannot ask my client to do so, and they want everything to be perfect. So my question is: is there a trick or technique as to how I can make the file load every time from the server and not from cache?
P.S: I know the trick for CSS, JS and images files, i.e. appending ?v=1 but don't know how to do it for index.html.
Any help would be appreciated. Thanks!
| Load index.html every time from the server and NOT from cache |
Try updating Cordova to the latest version:
cordova -v
npm install -g cordova
The current Cordova version is 8.x.
Also check which Cordova plugins are installed.
|
|
I have an Android HTC Amaze and an Android HTC Desire. My Sencha Touch 2 apps wrapped by PhoneGap work excellent for the Desire but they refuse to load on the HTC Amaze 4.0.3.
I'm getting this kind of errors in log -
08-24 17:08:37.577: E/chromium(16106): external/chromium/net/disk_cache/stat_hub.cc:190: [0824/170837:ERROR:stat_hub.cc(190)] StatHub::Init - App "appname" isn't supported.
| Sencha Touch 2 PhoneGap issue for 4.0.x |
This function will save an image in the documents folder:
func saveImage(image: UIImage) -> Bool {
guard let data = UIImageJPEGRepresentation(image, 1) ?? UIImagePNGRepresentation(image) else {
return false
}
guard let directory = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) as NSURL else {
return false
}
do {
try data.write(to: directory.appendingPathComponent("fileName.png")!)
return true
} catch {
print(error.localizedDescription)
return false
}
}
To use:
let success = saveImage(image: UIImage(named: "image.png")!)
This function will get that image:
func getSavedImage(named: String) -> UIImage? {
if let dir = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) {
return UIImage(contentsOfFile: URL(fileURLWithPath: dir.absoluteString).appendingPathComponent(named).path)
}
return nil
}
To use:
if let image = getSavedImage(named: "fileName") {
// do something with image
}
|
|
I am saving an image using saveImage.
func saveImage (image: UIImage, path: String ) -> Bool{
let pngImageData = UIImagePNGRepresentation(image)
//let jpgImageData = UIImageJPEGRepresentation(image, 1.0) // if you want to save as JPEG
print("!!!saving image at: \(path)")
let result = pngImageData!.writeToFile(path, atomically: true)
return result
}
New info:
Saving file does not work properly ("[-] ERROR SAVING FILE" is printed)--
// save your image here into Document Directory
let res = saveImage(tempImage, path: fileInDocumentsDirectory("abc.png"))
if(res == true){
print ("[+] FILE SAVED")
}else{
print ("[-] ERROR SAVING FILE")
}
Why doesn't the saveImage function save the image? Access rights?
Older info:
The debug info says:
!!!saving image at: file:///var/mobile/Applications/BDB992FB-E378-4719-B7B7-E9A364EEE54B/Documents/tempImage
Then I retrieve this location using
fileInDocumentsDirectory("tempImage")
The result is correct.
Then I am loading the file using this path
let image = UIImage(contentsOfFile: path)
if image == nil {
print("missing image at: \(path)")
}else{
print("!!!IMAGE FOUND at: \(path)")
}
The path is correct, but the message is "missing image at..". Is the file somehow inaccessible or not stored? What can be a reason for this behavior?
I am testing this code on iphone 4 with ios 7 and iphone 5 with ios 7 simulator.
Edit:
1. The fileInDocumentsDirectory function
func fileInDocumentsDirectory(filename: String) -> String {
    let documentsURL = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0]
    let fileURL = documentsURL.URLByAppendingPathComponent(filename).absoluteString
    return fileURL
}
| Saving image and then loading it in Swift (iOS) |
Pop your cached query into Django's cache:
from django.core.cache import cache
cache.set('key', queryset)
Then create a context processor to add the value of the cache to all templates:
# myproject/myapp/context_processors.py
from django.core.cache import cache
def cached_queries(request):
    return {'cache': cache.get('key')}
Then add your context processor in your Django settings file:
TEMPLATE_CONTEXT_PROCESSORS += (
'myproject.myapp.context_processors.cached_queries'
)
Now you will be able to access the cache variable in all generic templates and all templates which have a requests context, which a template is given if this is done in the view:
return render_to_response('my_template.html',
                          my_data_dictionary,
                          context_instance=RequestContext(request))
When to Set the Cache
It depends on what is contained in the cache. However a common problem is that Django only really gets to execute Python whenever a page request is sent, and this is often not where you want to do this kind of work.
An alternative is to create a custom management command for a particular app. You can then either run this manually when necessary, or more commonly set this to run as a cron job.
To create a management command you must create a class descended from Command inside of a management/commands directory located inside of an app:
# myproject/myapp/management/commands/update_cache.py
from django.core.management.base import NoArgsCommand
from django.core.cache import cache

class Command(NoArgsCommand):
    help = 'Refreshes my cache'

    def handle_noargs(self, **options):
        queryset = ...  # build the queryset you want cached here
        cache.set('key', queryset)
The name of this file is important as this will be the name of the command. In this case you can now call this on the command line:
python manage.py update_cache
|
I'm trying to find a way to cache the results of a query that changes infrequently. For example, categories of products from an e-commerce site (cellphones, TVs, etc.).
I'm thinking of using template fragment caching, but in this fragment I will iterate over a list of these categories. This list is available in any part of the site, so it's in my base.html file. Do I always have to send the list of categories when rendering the templates? Or is there a more dynamic way to do this, making the list always available in the template?
| Caching query results in django |
Advantages of Java memory over memcache:
Java memory is faster (no network).
Java memory won't require serialization, you have Java objects available to you.
Advantages of memcache over Java memory:
It can be accessed by more than one application server, so your cache will be shared among all your app servers.
It can be accessed by a variety of different servers, so long as they all agree on the key scheme and the serialization.
It will discard expired cache values, so you get time-based invalidation.
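To make the trade-off concrete, here is a minimal sketch (not production code; all names are illustrative) of the in-process approach with lazy TTL expiry on read, roughly the time-based invalidation that memcached gives you for free, but scoped to a single JVM and with no serialization or network hop:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A tiny in-process cache with per-entry TTL. Entries are checked for
// expiry lazily, on read, and removed when found stale.
public class LocalTtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAtMillis) {
            map.remove(key); // lazy expiry on read
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        LocalTtlCache<String, String> cache = new LocalTtlCache<>();
        cache.put("greeting", "hello", 50); // expires after 50 ms
        System.out.println(cache.get("greeting")); // hello
        Thread.sleep(80);
        System.out.println(cache.get("greeting")); // null (expired)
    }
}
```

What this sketch cannot give you is the first memcached advantage above: a second JVM has no way to see these entries, because they live in this process's heap.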
|
Simple, probably dumb question: Suppose I have a Java server that stores in memory commonly used keys and values which I can query (let's say in a HashMap)
What's the difference between that and using Memcache (or even Redis)? They both store things in memory. Is there a benefit to one or the other? Does Memcache leaves less of a memory footprint? Can store more in less memory? Faster to query? No difference?
| Memcache vs Java Memory |
One reason could be that the part of the code inserting the object uses a different classloader than the code retrieving it.
An instance of a class cannot be cast to the same class loaded by a different classloader.
Response to the edit:
What would you do if this happened in production?
This generally happens when the reading and inserting modules each include the same jar containing C1.
Since most containers try the parent classloader first, and then the local classloader (the Parent first strategy), the common solution to the problem is to instead load the class in the closest common parent to the inserting and reading modules.
If you move the module containing the C1 class to the parent module, you force both submodules to get the class from the parent, removing any classloader differences.
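A self-contained demonstration of the effect (assuming a JDK is available, since it compiles a throwaway C1 class at runtime; class and variable names are illustrative): the same class file loaded through two sibling classloaders yields two distinct runtime classes, so the cast fails exactly as in the interview scenario.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClassLoaderCastDemo {
    public static void main(String[] args) throws Exception {
        // Compile a throwaway C1 class into a temp directory
        // (requires a JDK, not just a JRE).
        Path dir = Files.createTempDirectory("cldemo");
        Path src = dir.resolve("C1.java");
        Files.write(src, "public class C1 {}".getBytes(StandardCharsets.UTF_8));
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        javac.run(null, null, null, src.toString());

        // Two sibling classloaders, both reading the same C1.class.
        // A null parent keeps the application classloader out of the delegation chain.
        URL[] urls = { dir.toUri().toURL() };
        ClassLoader loaderA = new URLClassLoader(urls, null);
        ClassLoader loaderB = new URLClassLoader(urls, null);

        Class<?> c1FromA = loaderA.loadClass("C1");
        Class<?> c1FromB = loaderB.loadClass("C1");
        Object instanceFromA = c1FromA.getDeclaredConstructor().newInstance();

        // Same name, same bytes -- but two distinct runtime classes:
        System.out.println(c1FromA == c1FromB);                // false
        System.out.println(c1FromB.isInstance(instanceFromA)); // false

        try {
            c1FromB.cast(instanceFromA); // the cache scenario: (C1) cache.get("a")
        } catch (ClassCastException expected) {
            System.out.println("ClassCastException as expected");
        }
    }
}
```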
|
This is an interview question.
The interview is over, but this question is still on my mind.
I can't ask the interviewer, as I did not get the job.
Scenario:
put object of class C1 in to a cache with key "a"
Later code:
C1 c1FromCache = (C1) cache.get("a");
This code throws a ClassCastException.
What can the reasons be?
I said because someone else put another object with the same key and so overwrote it. I was told no, think of other possibilities.
I said maybe the jar defining class C1 was not available on this node (not sure if this would result in a ClassCastException or a ClassNotFoundException, but I was grasping for any lead now). Then I said maybe it was the wrong version of the class? They said the same jar of class C1 is there on all nodes.
Edit/Add: I asked if the get was throwing the ClassCastException, but was told no. After that I told him my action to resolve such an issue would be to drop in a test JSP that would mimic the actions and put better logging (stack trace) after the exception. That was the 2nd part of the question (why, and what would you do if this happened in production).
Does anyone else have any ideas about why a cache get would result in a cast issue?
| What else can throw a ClassCastException in java? |
Have a look at Yahoo!'s performance rules on Expires headers: https://developer.yahoo.com/performance/rules.html#expires.
There are also tips from Google: https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
|
Which is the best method to make the browser use cached versions of js files (from the serverside)?
| caching JavaScript files |
I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator.
In order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and proved that eAccelerator on its own gave the greatest performance.
If you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results.
|
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 10 years ago.
Improve this question
I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use?
APC - Installation Guide
eAccelerator - Installation Guide
XCache - Installation Guide
I'm also open to any other alternatives that have slipped under my radar.
Currently running on a stock Debian Etch with Apache 2 and PHP 5.2
[Update 1]
HowtoForge installation links added
[Update 2]
Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application:
Login
Access Home Page
With 50 concurrent connections, the results are as follows:
[Result tables for No Opcode Caching, APC, eAccelerator, and XCache, and the performance graph (smaller is better), omitted.]
From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance.
I have decided to use APC due to the following 2 reasons:
Package is available in official Debian repository
More functional control panel
To summarize my experience:
Ease of Installation: APC > eAccelerator > XCache
Performance: eAccelerator > APC, XCache
Control Panel: APC > XCache > eAccelerator
| Which PHP opcode cacher should I use to improve performance? [closed] |
Adding the following into web.config worked across Chrome, IE, Firefox, and Safari:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<location path="index.html">
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Cache-Control" value="no-cache" />
</customHeaders>
</httpProtocol>
</system.webServer>
</location>
</configuration>
This will ensure that the Cache-Control header is set to no-cache when requesting index.html.
|
I have a single page application (angular-js) which is served through IIS. How do I prevent caching of HTML files? The solution needs to be achieved by changing content within either index.html or the web.config, as access to IIS through a management console is not possible.
Some options I am currently investigating are:
web.config caching profiles - http://www.iis.net/configreference/system.webserver/caching
web.config client cache - http://www.iis.net/configreference/system.webserver/staticcontent/clientcache
meta tags - Using <meta> tags to turn off caching in all browsers?
IIS is version 7.5 with .NET framework 4
| How to disable caching of single page application HTML file served through IIS? |
I use a life hack like this:
@Configuration
@EnableCaching
@EnableScheduling
public class CachingConfig {

    public static final String GAMES = "GAMES";

    private static final DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");

    @Bean
    public CacheManager cacheManager() {
        ConcurrentMapCacheManager cacheManager = new ConcurrentMapCacheManager(GAMES);
        return cacheManager;
    }

    @CacheEvict(allEntries = true, value = {GAMES})
    @Scheduled(fixedDelay = 10 * 60 * 1000, initialDelay = 500)
    public void reportCacheEvict() {
        System.out.println("Flush Cache " + dateFormat.format(new Date()));
    }
}
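The scheduled eviction above flushes the whole cache on a fixed timer. If what you actually want is per-entry expiry, the built-in ConcurrentMapCacheManager cannot do it; one common alternative (assuming Spring Boot with the Caffeine dependency on the classpath — these are the standard spring.cache.* properties, and the 10m value is only illustrative) is to let Caffeine handle the TTL:

```properties
# application.properties
spring.cache.type=caffeine
spring.cache.caffeine.spec=expireAfterWrite=10m
```

With this in place, each @Cacheable("forecast") entry expires a fixed time after it was written, independently of the others.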
edited Aug 15, 2022 at 18:49
ℛɑƒæĿᴿᴹᴿ
answered Jul 8, 2016 at 7:55
Atum
Nice but this will evict all entries, old or new ones.
– Adrian
Sep 9, 2019 at 9:01
I have implemented a cache and now I want to add an expiry time.
How can I set an expiry time in spring boot with @Cacheable?
This is a code snippet:
@Cacheable(value="forecast",unless="#result == null")
| Expiry time @cacheable spring boot |
From MSDN:
The main differences between the Cache and MemoryCache classes are
that the MemoryCache class has been changed to make it usable by .NET
Framework applications that are not ASP.NET applications. For example,
the MemoryCache class has no dependencies on the System.Web assembly.
Another difference is that you can create multiple instances of the
MemoryCache class for use in the same application and in the same
AppDomain instance.
Reading that and doing some investigation in reflected code it is obvious that MemoryCache is just a simple class. You can use MemoryCache.Default property to (re)use same instance or you can construct as many instances as you want (though recommended is as few as possible).
So basically the answer lies in your code.
If you use MemoryCache.Default then your cache lives as long as your application pool lives. (Just to remind you that default application pool idle time-out is 20 minutes which is less than 1 hour.)
If you create it using new MemoryCache(string, NameValueCollection) then the above mentioned considerations apply plus the context you create your instance in, that is if you create your instance inside controller (which I hope is not the case) then your cache lives for one request
It's a pity I can't find any references, but ... MemoryCache does not guarantee to hold data according to the cache policy you specify. In particular, if the machine your app runs on comes under memory pressure, your cache entries may be discarded.
If you still have no luck figuring out what's the reason for early cache item invalidation you could take advantage of RemoveCallback and investigate what is the reason of item invalidation.
|
I'm using MemoryCache in ASP.NET and it is working well. I have an object that is cached for an hour to prevent fresh pulls of data from the repository.
I can see the caching working in debug, and also once deployed to the server: after the 1st call is made and the object is cached, subsequent calls take about 1/5 of the time.
However I'm noticing that each new client call (still inside that 1 hour window - in fact just a minute or 2 later) seems to have the 1st call to my service (that is doing the caching) taking almost as long as the original call before the data was cached.
This made me start to wonder - is MemoryCache session specific, with each new client making the call storing its own cache, or is something else going on to cause the 1st call to take so long even after I know the data has been cached?
| Is MemoryCache scope session or application wide? |
<?php Header("Cache-Control: max-age=3000, must-revalidate"); ?>
You can add this header in a PHP script; it must be the first output in your index file. It is an HTTP header typically issued by web servers. You can also rename the resource that is considered "stale". This tutorial will give you more details: https://www.mnot.net/cache_docs/
answered Apr 15, 2016 at 11:59
Eli Duhon
Even though we're many versions of Chrome later, this answer still fixed a problem we were having in development. Also, just to clarify, this header can be added in any web server - it's not PHP specific.
– Randall
Sep 28, 2018 at 16:31
I am experiencing this weird issue where my Chrome browser keeps loading a old version of my website whose code doesn't even exist on my server any more. I assume it's a typical cache issue.
I tried cleaning the browser cache, using incognito mode, and cleaning the DNS cache. The old cached page is still being loaded.
This issue seems to have been discussed on this Google group for three years, but there is still no solution. https://productforums.google.com/forum/#!topic/chrome/xR-6YAkcASQ
Using firefox or any other web browsers works perfectly.
It doesn't just happen to me. All my coworkers experience the same issue on my website.
| Chrome keeps loading a old cache of my website |
You can create a custom IBundleTransform class to do this. Here's an example that will append a v=[filehash] parameter using a hash of the file contents.
public class FileHashVersionBundleTransform : IBundleTransform
{
    public void Process(BundleContext context, BundleResponse response)
    {
        foreach (var file in response.Files)
        {
            using (FileStream fs = File.OpenRead(HostingEnvironment.MapPath(file.IncludedVirtualPath)))
            {
                // get hash of file contents
                byte[] fileHash = new SHA256Managed().ComputeHash(fs);
                // encode file hash as a query string param
                string version = HttpServerUtility.UrlTokenEncode(fileHash);
                file.IncludedVirtualPath = string.Concat(file.IncludedVirtualPath, "?v=", version);
            }
        }
    }
}
You can then register the class by adding it to the Transforms collection of your bundles.
new StyleBundle("...").Transforms.Add(new FileHashVersionBundleTransform());
Now the version number will only change if the file contents change.
|
I've got an MVC application and I'm using the StyleBundle class for rendering out CSS files like this:
bundles.Add(new StyleBundle("~/bundles/css").Include("~/Content/*.css"));
The problem I have is that in Debug mode, the CSS urls are rendered out individually, and I have a web proxy that aggressively caches these urls. In Release mode, I know a query string is added to the final url to invalidate any caches for each release.
Is it possible to configure StyleBundle to add a random querystring in Debug mode as well to produce the following output to get around the caching issue?
<link href="/stylesheet.css?random=some_random_string" rel="stylesheet"/>
| MVC4 StyleBundle: Can you add a cache-busting query string in Debug mode? |
Swift 3, Alamofire 4
My solution was:
creating extension for Alamofire:
extension Alamofire.SessionManager {
    @discardableResult
    open func requestWithoutCache(
        _ url: URLConvertible,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        encoding: ParameterEncoding = URLEncoding.default,
        headers: HTTPHeaders? = nil) // you could also add a URLRequest.CachePolicy parameter here
        -> DataRequest
    {
        do {
            var urlRequest = try URLRequest(url: url, method: method, headers: headers)
            urlRequest.cachePolicy = .reloadIgnoringCacheData // <<== Cache disabled
            let encodedURLRequest = try encoding.encode(urlRequest, with: parameters)
            return request(encodedURLRequest)
        } catch {
            // TODO: find a better way to handle the error
            print(error)
            return request(URLRequest(url: URL(string: "http://example.com/wrong_request")!))
        }
    }
}
and using it:
Alamofire.SessionManager.default
    .requestWithoutCache("https://google.com/").response { response in
        print("Request: \(response.request)")
        print("Response: \(response.response)")
        print("Error: \(response.error)")
    }
|
When I send a GET request twice with Alamofire I get the same response but I'm expecting a different one. I was wondering if it was because of the cache, and if so I'd like to know how to disable it.
| How to disable caching in Alamofire |
OK, finally it worked with this:
@app.after_request
def add_header(r):
    """
    Add headers to disable caching of the rendered page.
    """
    r.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    r.headers["Pragma"] = "no-cache"
    r.headers["Expires"] = "0"
    r.headers['Cache-Control'] = 'public, max-age=0'  # note: this overwrites the Cache-Control value set above
    return r
If you add this, the function will be called after each request is done. Please see here.
I would be happy if anyone could explain why overwriting these headers from the page handler did not work.
Thank you.
|
This question already has answers here:
Using Flask, how do I modify the Cache-Control header for ALL output?
(3 answers)
Closed 2 years ago.
I have some caching issues. I'm running a very small web application which reads one frame, saves it to disk and then shows it in the browser window.
I know it is probably not the best solution, but every time I save this frame with the same name, and therefore any browser will cache it.
I tried to use html meta-tags - no success:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
Also, I have tried this one (flask-specific):
resp.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
resp.headers["Pragma"] = "no-cache"
resp.headers["Expires"] = "0"
This is how I tried to modify resp headers:
r = make_response(render_template('video.html', video_info=video_info))
r.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
r.headers["Pragma"] = "no-cache"
r.headers["Expires"] = "0"
Still both Google Chrome and Safari do caching.
What might be the problem here?
| Disabling caching in Flask [duplicate] |