In my experience, you should use multiple levels of cache. Implement both of your solutions (provided that this isn't the only code that uses "SELECT title, body FROM posts WHERE id=%%"; if it is, use only the first one).
In the second version of the code, you memcache.get(query.hash()) but memcache.put("feed_%s" % id, query_result). This might not work as you want it to (unless you have an unusual version of hash() ;) ).
I would avoid query.hash(). It's better to use an explicit key like posts-title-body-%id. Try deleting a video when it's stored in the cache under query.hash(): with no way to reconstruct the key, the stale entry can hang around for months as a zombie video.
By the way:
id = GET_PARMS['id']
query = query("SELECT title, body FROM posts WHERE id=%%", id)
You take something from GET and put it right into the SQL query? That's bad (it will open you up to SQL injection attacks).
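A minimal sketch of what that advice could look like, reusing the memcache client and query() helper assumed by the pseudocode under discussion (all names come from that pseudocode, not a specific library):
def get_feed(get_params):
    post_id = int(get_params['id'])  # sanitise: reject anything non-numeric
    key = "posts-title-body-%d" % post_id  # explicit key, easy to delete when the post changes
    result = memcache.get(key)
    if result is None:
        # bound parameter instead of string interpolation, so no SQL injection
        result = query("SELECT title, body FROM posts WHERE id=%%", post_id).execute()
        memcache.put(key, result)
    return generate_feed(result)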
|
Something I'm curious about: what would be "most efficient" to cache for the generation of, say, an RSS feed? Or an API response (like the response to /api/films/info/a12345)?
For example, should I cache the entire feed and try to return that, as pseudocode:
id = GET_PARAMS['id']
cached = memcache.get("feed_%s" % id)
if cached is not None:
    return cached
else:
    feed = generate_feed(id)
    memcache.put("feed_%s" % id, feed)
    return feed
Or cache the query's result, and generate the document each time?
id = sanitise(GET_PARMS['id'])
query = query("SELECT title, body FROM posts WHERE id=%%", id)
cached_query_result = memcache.get(query.hash())
if cached_query_result:
    feed = generate_feed(cached_query_result)
    return feed
else:
    query_result = query.execute()
    memcache.put("feed_%s" % id, query_result)
    feed = generate_feed(query_result)
    return feed
(Or, some other way I'm missing?)
|
How granular should data in memcached be?
|
The simplest answer is to set an HTTP Expires header with a far-future value (say 10 years). This way (in theory) the browser should never request that image again, taking a huge load off your server if you have a lot of repeat visits. Of course, you'll need to come up with a mechanism for changing the image URL when you want to update the image. Most people approach this by adding a build number into the path somewhere (or by appending a build number to a query string).
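As a rough illustration (not tied to any particular web server), here is what a far-future Expires header could look like when generated in Python; the ten-year horizon and max-age value mirror the suggestion above:
import time
from wsgiref.handlers import format_date_time

ten_years = 10 * 365 * 24 * 60 * 60
headers = {
    "Expires": format_date_time(time.time() + ten_years),  # the RFC 1123 date HTTP expects
    "Cache-Control": "public, max-age=%d" % ten_years,
}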
There's some good information on performance (including use of the Expires header) here:
http://developer.yahoo.com/performance/rules.html
I personally wouldn't worry about caching these on the server side (I presume they aren't being generated dynamically). They are being served directly from disk, so the overhead should be small anyway. Make sure you have a separate machine for serving static content and use a very lightweight web server, such as Lighttpd or nginx, which should help increase throughput.
http://www.lighttpd.net/
http://nginx.net/
|
I have a website
www.somesite1.com which gets all its image content from www.somesite2.com
At the moment each time an image is to be displayed we simply use an absolute URL to get it like this
<img src="http://www.somesite2.com/images/myimage.jpg" />
So each time a user goes to www.somesite1.com for content www.somesite2.com gets hammered.
I'm looking for clever ways of caching images, perhaps using http headers or such like. Thanks
|
What's the best way of caching images on my website?
|
All you are doing is storing a pointer in the cache... The actual "table" is still on the heap, where all .NET reference types are stored... You are not making a copy of it...
The entry in the cache just stops the garbage collector from collecting the object on the heap...
And no, you don't want to call Dispose until the actual object is no longer required.
|
Simple case:
i put a DataTable in Cache
DataTable table = SomeClass.GetTable();
Cache["test"] = table;
Then in later calls I use
DataTable table = (DataTable)Cache["test"];
Now the question is: should I call table.Dispose() on each call, even though it's stored in the Cache? Meaning, is the object always the same? Or will the Cache create a copy each time?
thx :)
|
Do I have to call dispose after each use though datatable is stored in cache?
|
The amount of memory you are using is going to be roughly the same as the size of the file. The XML will have some overhead in the tags that the Dictionary will not, so it's a safe estimate of the memory requirements. So are you talking about 10-50 MB or 100-500 MB? I wouldn't necessarily worry about 10 to 50 MB.
If you are concerned, then you need to think about whether you really need to do the replacements every time the page is loaded. Can you take the hit of going to a database or the XML file once per page and then caching the output of the ASP.NET page and holding it for an hour? If so, consider using Page Caching.
|
We're building a text templating engine out of a custom HttpModule that replaces tags in our HTML with whole sections of text from an XML file.
Currently the XML file is loaded into memory as a string/string Dictionary so that the lookup/replace done by the HttpModule can be performed very quickly using a regex.
We're looking to expand the use of this, though, to incorporate larger and larger sections of replaced text, and I'm concerned about keeping more verbose text in memory at one time as part of the Dictionary, especially as we use ASP.NET caching for many other purposes as well.
Does anyone have a suggestion for a more efficient and scalable data structure/management strategy that we could use?
UPDATE: In response to Corbin March's great suggestion below, I don't think this is a case of us going down a 'dark road' (although I appreciate the concern). Our application is designed to be reskinned completely for different clients, right down to text anywhere on the page - including the ability to have multiple languages. The technique I've described has proven to be the most flexible way to handle this.
|
ASP.NET Templating
|
From the Freemarker manual, it seems caching is on by default.
Template caching
FreeMarker caches templates (assuming you use the Configuration [which the Spring MBean does...] methods to create Template objects). This means that when you call getTemplate, FreeMarker not only returns the resulting Template object, but stores it in a cache, so when next time you call getTemplate with the same (or equivalent) path, it just returns the cached Template instance, and will not load and parse the template file again.
|
I'm using the Spring class FreeMarkerConfigurationFactoryBean to retrieve FreeMarker templates. I would like these templates to be cached, but there doesn't appear to be any way to indicate that this behaviour is required.
In contrast, Spring modules provides a CachingTemplateResolver which does provide template caching, but is it possible to achieve this without using Spring modules, i.e. with FreeMarkerConfigurationFactoryBean?
Cheers,
Don
|
cache FreeMarker templates
|
Safari always reloads (Ctrl+R) a page, ignoring whatever might be in the cache.
As Athena points out, iframes are cached. It's actually not the iframe content, but the request that's cached.
In those cases, Safari caches the page, and then no matter which link you click, shows the iframe from the last click BEFORE the refresh (or back/forward). It's then stuck on that content, and shows it for all links.
This is overcome by assigning a different iframe id on each load:
iframe.id = new Date().getTime();
|
Something that would really reload the page or resource, ignoring whatever might be in cache.
|
Is there an equivalent to Ctrl-Shift-R in Safari/WebKit?
|
Using a service worker to implement an offline cache is certainly possible. (See, for example, MDN's Making PWAs work offline with Service workers.)
Please note that Unpoly/htmx rely on standard web interfaces like HTML/HTTP, so this is not specific to these frameworks.
If you want to be able to automatically update the page for changed data, the service worker must be able to notify the web page, and the web page must listen to these notifications. (Unpoly supports polling, htmx also supports SSE.)
The service worker essentially implements your server API (or a read-only subset of the API, if you're only doing caching). So it would make sense (and be useful) for the actual web server to implement the same notification mechanism.
|
I'm a programmer who is quite frustrated with single-page applications, and I'm trying to find equally good alternatives that use fewer technologies.
One could be unpoly.
Unpoly has a very efficient caching system, but it has no way to persist that cache in the browser. So when I reopen the site after having browsed even just a few seconds before, I have no data in the cache and I have to wait (again) for the backend with its latencies, and hope that the internet doesn't go away in those situations.
Example:
the user navigates with the mobile phone, opens modals, makes changes and the item lists are updated, good
the user leaves work and takes the ferry to go home
while waiting to arrive, he reopens the website/app and looks for the data he had seen a minute before, and, since there is no internet on the ferry (or in the tunnel, or in the countryside) and there won't be for the next few minutes, he sees nothing, because that cache was not persisted in the browser he used a few minutes before
the user is angry
So I was thinking about integrating the service worker cache into this workflow, but maybe something is not clear in my mind.
Do you think the following steps will work?
And could they also work using htmx?
ONLINE: GET /list (v1)
SW CACHE EMPTY: /list cached (v1)
UNPOLY gets /list (v1)
CLOSE BROWSER
Someone updates /list (to v2)
ONLINE: GET /list (v1)
SW CACHE FULFILLED FROM BEFORE gives unpoly /list (v1)
UNPOLY gets /list (v1) from SW immediately, fulfilling its cache
UNPOLY asks server (through SW) for updates on /list (v1)
SW caches /list (v2)
UNPOLY re-renders for /list (v2)
CLOSE BROWSER
OFFLINE: GET /list (v2) comes from SW cache (maybe with some indicator that is stale content)
What do you think?
|
Is it possible to use service worker cache with unpoly or htmx?
|
You can compile your own RediSearch module and load it into a Redis server through the configuration file. See the repository on GitHub: https://github.com/RediSearch/RediSearch/releases
Once you have built it, you'd amend your redis.conf to include something like this. The user that redis-server runs as will need permission to see and execute this file:
loadmodule /full/path/to/redisearch.so
Note that if you also want to use the JSON document type with Search, you'll need the RedisJSON module: https://github.com/RedisJSON/RedisJSON/releases
|
I want to use Redisearch, but the problem is that there is no Redis-stack Docker image in my organization's private registry. There are Redis versions 5 and 6, though.
Can I use Redisearch directly on Redis instead of Redis-stack, or is there any other way to solve this problem?
I tried going through the Redis and Redis-stack documentation, but nowhere is it mentioned whether Redisearch is compatible with Redis directly.
|
Redisearch on Redis instead of Redis-Stack?
|
This isn't a perfect approach, but you could use a custom decorator instead of @functools.cache which would then wrap your function with functools.cache and gather the cache stats before and after the call to determine if the lookup resulted in a cache hit.
This was hastily thrown together but seems to work:
import functools

def cache_notify(func):
    func = functools.cache(func)
    def notify_wrapper(*args, **kwargs):
        stats = func.cache_info()
        hits = stats.hits
        results = func(*args, **kwargs)
        stats = func.cache_info()
        if stats.hits > hits:
            print(f"NOTE: {func.__name__}() results were cached")
        return results
    return notify_wrapper
As an example of a simple function:
@cache_notify
def f(x):
    return -x
print("calling f")
print(f"f(1) returned {f(1)}")
print("calling f again")
print(f"f(1) returned {f(1)}")
Results:
calling f
f(1) returned -1
calling f again
NOTE: f() results were cached
f(1) returned -1
How to "notify the user" can be tailored as needed.
Also note that it's possible for the cache stats to be a bit misleading in a multi-threaded environment; see Python lru_cache: how can currsize < misses < maxsize? for details.
|
import functools

@functools.cache
def get_some_results():
    return results
Is there a way to notify the user of the function that the results they are getting are a cached version of the original, any subsequent time they call the function?
|
functools.cache - notify that the result is cached
|
I think the best way to do this would be to leverage background operations. I just tried the following code in the Aerospike sandbox, and it did update all the 'shape' entries in the 'report' map to "updated shape". (Disclaimer: I am not a developer, so there may be better ways of doing things in Java in general, but this hopefully shows how to run a background operation scan in Aerospike.)
Run the 'setup' tab and then copy paste this in the create tab (or in the blank tab) and run:
import com.aerospike.client.task.ExecuteTask;
import com.aerospike.client.task.Task;
import com.aerospike.client.query.Statement;
import com.aerospike.client.query.Filter;
import com.aerospike.client.Operation;
import com.aerospike.client.exp.Exp;
// additional imports the snippet below needs to compile
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.AerospikeException;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.MapOperation;
import com.aerospike.client.cdt.MapPolicy;
import com.aerospike.client.policy.WritePolicy;
AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
try {
WritePolicy writePolicy = new WritePolicy();
MapPolicy mapPolicy = new MapPolicy();
String mapBinName = "report";
String mapKeyName = "shape";
Statement stmt = new Statement();
stmt.setNamespace("sandbox");
stmt.setSetName("ufodata");
ExecuteTask task = client.execute(
writePolicy, stmt, MapOperation.put(mapPolicy, mapBinName,
Value.get(mapKeyName), Value.get("updated shape")));
if(task != null) {
System.out.format("Task ID: %s\nTask Status: %s",
task.getTaskId(), task.queryStatus());
}
else {
System.out.format("Task seems null!");
}
}
catch (AerospikeException ae) {
System.out.format("Error: %s", ae.getMessage());
}
You can change this to a secondary index query (if defined) or add further filters on the map bin itself...
Here are a couple of tutorials that may help:
Working with maps.
Expressions in Aerospike. (for fancier things on what / how to update inside a map bin)
|
I have an Aerospike cache consisting of a list of records whose values have a JSON-like structure.
Example value: {"name": "John", "count": 10}
I want to scan all records and reset count to zero. What would be a good way to handle this problem?
|
Best way to update single field in Aerospike
|
The plots are always written out to a file. You can see that for the cached block, the image is not modified when you re-knit the document, but the image in the non-cached block is rewritten (check the modified dates). R isn't re-running the code that generates the image for the cached block. If you don't have any caching enabled, rmarkdown will "clean up" after it's run and delete all images. But because rmarkdown doesn't track side effects on a per-block level, when caching is enabled it can't clean up after itself anymore, because it doesn't know which images came from which block. So it keeps them all to be safe.
|
When knitting an R markdown file, the plots outputted from any chunk with cache=TRUE are saved independently from the HTML output. This makes sense to me. However, if even a single chunk has the cache=TRUE option set, all chunks, including those with cache=FALSE, have their plots saved independently. For example, the following code saves image files for both chunks:
---
title: "Cache Plot Test"
output:
html_document:
df_print: paged
---
```{r test_plot1, cache = FALSE}
library(ggplot2)
ggplot(airquality, aes(x = Temp, y = Wind)) +
geom_point()
```
```{r test_plot2, cache = TRUE}
library(ggplot2)
ggplot(airquality, aes(x = Month, y = Ozone)) +
geom_point()
```
Is there any way to prevent this if someone wants to implement caching on particular chunks but doesn't want to independently save every single plot in the output? If there isn't such an option and this is by design, what's the rationale? Why would it be necessary to save the plots from chunks that don't implement caching?
|
In R markdown, how do I prevent plots from non-cached chunks from being saved separately?
|
As I understand it, the problem is that Vercel doesn't include the request's origin in the cache key, and you get accidental Web cache poisoning. Unfortunately, Vercel doesn't seem to allow custom cache keys yet.
A long-term solution would be to put pressure on Vercel to add the origin to their cache key; this is a sensible default that other CDNs, such as Cloudflare, have adopted. An alternative, short-term solution would be to make your responses to CORS requests non-cacheable according to Vercel's caching rules:
{
  "name": "foo",
  "version": 2,
  "routes": [
    {
      "src": "/whatever",
      "headers": [
        { "key": "Cache-Control", "value": "no-cache" },
        ...
      ]
    }
  ],
  ...
}
|
I have a Next.js API deployed on Vercel.
The API is used by multiple other domains.
When the browser sends the If-None-Match header, Vercel can reply with a 304; however, the Access-Control-Allow-Origin header may contain a different origin, which causes a CORS error. I guess this is due to the fact that Vercel sends the headers from the cached response.
How can I make sure the correct origin value will be specified in the Access-Control-Allow-Origin header?
I think I could add a proxy for every domain consuming the API, but I'd prefer to avoid that.
|
Vercel cache CORS headers issue for multiple domains
|
If you're using hardcoded credentials:
You have a bigger security issue than "re-used" credentials and should remove them immediately.
From documentation:
Do NOT put literal access keys in your application files. If you do, you create a risk of accidentally exposing your credentials if, for example, you upload the project to a public repository.
Do NOT include files that contain credentials in your project area.
Replace them with an execution role.
If you're using an execution role:
You're not providing any credentials manually for any AWS SDK calls. The credentials for the SDK are coming automatically from the execution role of the Lambda function.
Even if Boto3 role credentials are shared across invocations under the hood for provisioned concurrency (nobody is sure), what would be the issue?
Let Amazon deal with role credentials - it's not your responsibility to manage that at all.
I would worry more about the application code having security flaws as opposed to Amazon's automatically authenticating SDK requests with execution role credentials.
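As a minimal sketch of the execution-role approach described above (the S3 call is purely illustrative): the client is created once per execution environment and reused across warm invocations, and boto3 resolves credentials from the role automatically:
import boto3

# Created at import time, so it is reused across warm invocations.
# Credentials come from the Lambda execution role, not from code.
s3 = boto3.client("s3")

def handler(event, context):
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]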
|
Following the best practice Take advantage of execution environment reuse to improve the performance of your function, I am investigating if caching the boto3 client has any negative effect while using Lambda Provisioned Concurrency. The boto3 client is cached through @lru_cache decorator and it is lazy-initialized. Now, the concern is that the underlying credentials of boto3 client are not refreshed because Provisioned Concurrency will keep the execution environment alive for an unknown amount of time. This lifetime might be longer than the duration of the temporary credentials that Lambda environment injected.
I couldn't find any doc explaining how this case is handled. Does anyone know how the Lambda environment handles the refreshing of credentials in the above case?
|
How are credentials refreshed with cached boto3 client and Lambda Provisioned Concurrency?
|
Add a fake query parameter to the request and increment the version on publish.
<script src="js/shared/shared.min.js?version=1.3"></script>
|
Currently, we are referencing JS and CSS assets in our Blazor Web Assembly application as following:
index.html
<link href="css/site.min.css" rel="stylesheet" />
<link href="CompnayName.ProjectName.Client.styles.css" rel="stylesheet" />
...
<script src="js/shared/shared.min.js"></script>
Every time we make any changes to these files, we need to clear the browser cache on the LIVE env to see the changes.
Is there any way to implement cache busting?
|
How to implement cache busting for Blazor Web Assembly
|
With buildkit, there's an additional setting to include cache metadata in the created image:
--build-arg BUILDKIT_INLINE_CACHE=1
That only handles the final image layers. If you want to include the cache for intermediate stages of a multi-stage build, you likely want to cache to a registry, which I believe needs buildx to access these buildkit options.
--cache-from type=registry,ref=localhost:5000/myrepo:buildcache
--cache-to type=registry,ref=localhost:5000/myrepo:buildcache,mode=max
Buildkit defines several other cache options in their readme.
|
I understand that Docker's --cache-from option will restore cache from pulled images when building a different one. Am I wrong? Whenever I create a new image, delete it and its dangling cache, pull it, and build it again, it will not use the newly downloaded image as cache. Below are the commands for my use case.
docker build --target base -t image:base .
docker push image:base
docker image rm image:base
docker builder prune
docker pull image:base
docker build --target base --cache-from image:base -t image:base .
If I don't prune the cache, it will use it regardless of the --cache-from command being present or not. Therefore, how am I supposed to use --cache-from, and is there any possibility of restoring cache from pulled images without using docker load (because it takes a while)?
|
Docker --cache-from for newly pulled images doesn't work
|
Create a method/sub that can be called when FormClosed.
C# Sample
private async void ClearBrowserCache()
{
await this.webView.CoreWebView2.Profile.ClearBrowsingDataAsync();
}
VB Sample
Private Async Sub ClearBrowserCache()
Await Me.wbMain.CoreWebView2.Profile.ClearBrowsingDataAsync()
End Sub
Check out the other options for Clearing browsing data from the user data folder.
https://docs.microsoft.com/en-us/microsoft-edge/webview2/concepts/clear-browsing-data?tabs=csharp
C# and C++ Examples are provided. VB.NET examples are not available, but the equivalents can be determined by navigating the CoreWebView2 object properties.
|
I'm about to release an app that shows a self-coded website with Google's model-viewer library. To prevent lurkers from stealing the 3D assets shown on that website, I started coding a C# application that launches WebView2 with some authentication parameters (so only that app is allowed to access the website).
Sadly I've been told that the application downloads the 3D models immediately into cache. This is a huge issue I must prevent.
I've added several no-cache headers to the website. Isn't there any option to disable the WebView2 cache, or to programmatically remove cached objects after a certain action happens? Or any way to encode/decode those files?
The WebView2 documentation doesn't cover any of this.
Thanks in advance.
|
WebView2 C# - Immediately clear cache
|
I think you are looking for refreshAfterWrite and overriding CacheLoader.reload(K, V).
Here is the post that explains the details: https://github.com/ben-manes/caffeine/wiki/Refresh
Implementation for your case would be something like:
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Predicate;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.CacheLoader;
import com.github.benmanes.caffeine.cache.Caffeine;
import lombok.extern.log4j.Log4j2;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Log4j2
@Configuration
public class CacheConfig {
@Bean
public Cache<String,Item> caffeineConfig() {
return Caffeine.newBuilder()
.refreshAfterWrite(10, TimeUnit.SECONDS)
.build(new ConditionalCacheLoader<>(this::shouldReload, this::load));
}
private Item load(String key){
//load the item
return null;
}
private boolean shouldReload(Item oldValue){
//your condition logic here
return true;
}
private static class Item{
// the item value can contain any data
}
private static class ConditionalCacheLoader<K,V> implements CacheLoader<K,V>{
private final Predicate<V> shouldReload;
private final Function<K,V> load;
protected ConditionalCacheLoader(Predicate<V> shouldReload, Function<K, V> load) {
this.shouldReload = shouldReload;
this.load = load;
}
@Override
public V load(K key) throws Exception{
return load.apply(key);
}
@Override
public V reload(K key, V oldValue) throws Exception {
if (shouldReload.test(oldValue)){
return load(key);
}else {
return oldValue;
}
}
}
}
|
I'm trying to use caffeine and spring-boot-starter-cache to implement the following caching logic:
If the expiration time has passed and a condition (that requires computation and I/O) evaluates to TRUE, then force-fetch the data and update the cache.
If the expiration time has passed and the condition evaluates to FALSE, then don't invalidate the cached data and retrieve the value from the cache.
If the expiration time has not passed, then retrieve the value from the cache.
I worked according to this guide:
https://www.baeldung.com/spring-boot-caffeine-cache
I tried all sorts of approaches using @CachePut, @CacheEvict and @Cacheable on the getter method of the object I'm caching, but the core issue is that I need to condition the eviction on both an expiration time and additional logic, and these annotations cannot control whether to evict or not... Perhaps this can be done using a Scheduler?
|
Cache With Complex Conditional Eviction
|
Well, I found the answer while looking for something else.
Instead of using
from django.core.cache import cache
cache.set('hello', 'bye')
cache.get('hello')
which stores the data in the default cache, use something like this:
from django.core.cache import caches
c = caches['static_page']
c.set('hello', 'bye')
c.get('hello')
It is such a small thing that most of the docs don't mention it separately, and you might miss it when going through the documentation.
|
I have a Django project using django-redis, where I want to implement different types of caching:
Caching search query
Caching static pages
Cache user Data (eg: online status)
I can add a different prefix for each kind of caching, but I want to use a different Redis server for each of them.
I couldn't find anything in the docs about how to do this.
My current settings
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://localhost:6379/1",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"PARSER_CLASS": "redis.connection.HiredisParser",
"IGNORE_EXCEPTIONS": True,
},
"KEY_PREFIX": "db_cache",
}
}
What I would want
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://localhost:6379/",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"PARSER_CLASS": "redis.connection.HiredisParser",
},
"KEY_PREFIX": "db_cache",
},
'static_page': {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://localhost:6378/",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"PARSER_CLASS": "redis.connection.HiredisParser",
"IGNORE_EXCEPTIONS": True,
},
"KEY_PREFIX": "db_cache",
},
'user_data': {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://localhost:6377/",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"PARSER_CLASS": "redis.connection.HiredisParser",
},
"KEY_PREFIX": "db_cache",
}
}
|
Django Use multiple redis for caching
|
The ResponseCacheFilter (source) is the action filter that actually sets the Cache-Control headers, but it doesn't take into account status codes, so it's not actually possible to do what I wanted.
Instead, I wrote my own action filter:
public class CacheControlAttribute : ActionFilterAttribute
{
public int Duration { get; set; } = 0;
public override void OnActionExecuted(ActionExecutedContext context)
{
if (ResultIsSuccess(context.Result))
{
SetCacheControlHeaders(context.HttpContext.Response);
}
}
private bool ResultIsSuccess(IActionResult result)
{
return result is IStatusCodeActionResult statusCodeActionResult && statusCodeActionResult.StatusCode is >= 200 and < 300;
}
private void SetCacheControlHeaders(HttpResponse response)
{
response.Headers[HeaderNames.CacheControl] = $"public,max-age={Duration}";
}
}
You can use it like this:
[ApiController]
public class TestController : ControllerBase
{
[CacheControl(Duration = 60)]
[HttpGet("/fail")]
public IActionResult Fail()
{
return BadRequest();
}
}
It will only set the Cache-Control headers on success status codes (>= 200 and < 300).
Also, no need for the app.UseResponseCaching() middleware in either case, as it doesn't control the Cache-Control headers; it just reads them (as might be set by the ResponseCache attribute), and caches cacheable responses to implement server-side caching.
|
startup.cs
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddDefaultPolicy(builder =>
{
builder
.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader();
});
});
services.AddResponseCaching();
services.AddControllers();
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
app.UseCors();
app.UseResponseCaching();
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
TestController.cs
[ApiController]
public class TestController : ControllerBase
{
[ResponseCache(Duration = 60)]
[HttpGet("/fail")]
public IActionResult Fail()
{
return BadRequest();
}
}
When I hit the /fail endpoint, it returns a 400 status as expected but it has a Cache-Control header of public,max-age=60 because of the ResponseCache attribute on the action method. According to the docs,
Response Caching Middleware only caches server responses that result in a 200 (OK) status code. Any other responses, including error pages, are ignored by the middleware.
How can I stop error responses (or any non-200 response) from being cached?
|
Why is ASP.NET Core setting cache-control headers on error responses?
|
It seems that the path to the Maven repository isn't correctly initialized. As this issue describes, the paths are written with \\ instead of the / that GNU tar expects. The fix was provided in Dec 2020, so it made it into version v2.1.4 (the previous version, v2.1.3, was released in November). But sadly there is a bug: the v2 tag doesn't point to the latest v2.1.4 (as GitHub Actions users would normally expect). Therefore, to solve this issue, we need to explicitly specify the full actions/cache version v2.1.4 like this:
steps:
- uses: actions/checkout@v2
- name: Set up JDK 1.8
uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Cache Maven packages
uses: actions/[email protected]
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
restore-keys: ${{ runner.os }}-m2
- name: Build with Maven
run: mvn --batch-mode --update-snapshots verify
Now it should work like a charm (see logs here).
|
As the docs state, in order to cache the Maven dependencies with GitHub Actions, all we have to use is the actions/cache action, like this:
steps:
- uses: actions/checkout@v2
- name: Set up JDK 1.8
uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Cache Maven packages
uses: actions/cache@v2
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
restore-keys: ${{ runner.os }}-m2
- name: Build with Maven
run: mvn --batch-mode --update-snapshots verify
However, using the windows-2016 GitHub Actions environment, this doesn't provide us with a working cache, as the log states:
Post job cleanup.
"C:\Program Files\Git\usr\bin\tar.exe" --posix --use-compress-program "zstd -T0" -cf cache.tzst -P -C D:/a/spring-boot-admin/spring-boot-admin --files-from manifest.txt --force-local
/usr/bin/tar: C\:\\Users\runneradmin\\.m2\repository: Cannot stat: No such file or directory
/usr/bin/tar: Exiting with failure status due to previous errors
Warning: Tar failed with error: The process 'C:\Program Files\Git\usr\bin\tar.exe' failed with exit code 2
How to fix this?
|
GitHub Actions: Cache Maven .m2 repository on Windows environment C\:\\Users\runneradmin\\.m2\repository: Cannot stat: No such file or directory
|
Redis databases are not completely independent, only logically separated.
You can think of a database as a namespace.
For the functionality you want, you will have to use another instance.
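For illustration, a sketch with redis-py and two hypothetical instances on different ports, each with its own memory budget and eviction policy (ports and values are made up):
import redis

# two separate redis-server processes; each gets its own limits
volatile = redis.Redis(port=6379)
durable = redis.Redis(port=6380)

volatile.config_set("maxmemory", "100mb")
volatile.config_set("maxmemory-policy", "allkeys-lfu")
durable.config_set("maxmemory", "500mb")
durable.config_set("maxmemory-policy", "noeviction")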
|
Is it possible to set maxmemory and maxmemory-policy per database in redis?
I've tried to switch to db 1 and set those properties
SELECT 1
CONFIG SET maxmemory 100mb
CONFIG SET maxmemory-policy allkeys-lfu
But it looks like they are being set per instance, not per database
SELECT 2
CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lfu"
So, are there any ways to set those properties on a database level or should I use different instances / nodes?
|
Redis set maxmemory per database
|
You can pass in your own cache_path to expire the key. You still need to fetch some record to calculate it though.
class MyController < ApplicationController
before_action :set_record
caches_action :show, expires_in: 1.hour, cache_path: ->(_) { show_cache_key }
def show; end
private
def set_record
@record = Record.find(params[:id])
end
def show_cache_key
@record.cache_key
end
end
Doing cache invalidation by hand is an incredibly frustrating and error-prone process, so I would avoid invalidating keys in an after_save callback and use key-based expiration instead.
|
I am using Action Caching which seems to work OK with Rails 6.
The issue is how to expire the cache when the underlying model is updated?
According to Rails Guides
See the actionpack-action_caching gem. See DHH's key-based cache expiration overview for the newly-preferred method.
According to the Action Caching gem issue, using a Rails Observer to sweep the cache would work.
https://github.com/rails/rails-observers#action-controller-sweeper
But Rails Observer does not seem to work with Rails 6.
So, how to expire the cache with an after_save callback?
|
How to expire Action Cache when underlying model is updated in Rails 6?
|
When using Vercel you can configure the headers in your vercel.json file.
{
"headers": [
{
"source": "/(.*)",
"headers" : [
{
"key" : "Cache-Control",
"value" : "public, max-age=31536000, immutable"
}
]
}
]
}
|
I need to set Cache-Control for my personal site, and I don't know how or where I should tell the browser to cache the site.
I wanna use
Cache-Control: public, max-age=31536000, immutable
But don't know how as I'm using Next.js app hosted with Vercel.
|
How can I implement Cache-Control: public in Vercel
|
Apollo or server state libraries like React-Query provide tools to fetch data from a server and store results in client memory, like you would do with Redux or Context (or simply in a component state).
But they also provide tools to do more sophisticated caching for a smooth user experience and bandwidth optimization:
allowing you to set caching strategies per request,
define caching durations,
invalidate cache entries when needed (for example after a mutation, when server data changed)
define a retry strategy on error,
manage periodic refreshes on background,
...
These tools are designed to handle server state in your UI app in an efficient manner. This involves storing data, but this is only the first (and easy) step for a decent HTTP caching tool.
EDIT from phry comment
More than a way to store data, Redux is a JavaScript Flux implementation, which is a design pattern for shared UI state management. Even though you can implement an HTTP cache with Redux, this is not its primary goal (and obviously you would have to implement the cache logic yourself, which is not a trivial task). On the other hand, React-Query, SWR and Apollo are caching tools.
|
I'm wondering what the difference is between redux or context or any kind of application state storage versus client side caching, with the specific example being Apollo's client side cache.
I generally understand from this answer that application state storage, such as redux or context, is a form of caching, and what it does is cache, or in this case, "store", information in RAM. What makes something like Apollo's client cache different? Is it the same and just storing the data like you would with redux, or is it doing something different? Thanks.
|
What is the difference between caching and redux
|
As I was typing this up and listing out the cache settings, I noticed the WP Rocket section for File Optimization, which led me to a solution. By unchecking Minify HTML and Minify JavaScript files I was able to fully eliminate the issue.
Further research uncovered that this may be caused by a dependency of WP Rocket called Minify.
Hopefully this will help anybody else who stumbles upon the same issue. My site is running very quickly now with WP Rocket despite not minifying the HTML and JavaScript.
|
I've recently implemented WP Rocket on my WordPress site and suddenly it's not rendering correctly. I'm getting this error message in the console:
Uncaught SyntaxError: Invalid regular expression: missing /
I can clear the cache and it loads the first time I load it, but then fails to fully render on all subsequent loads. I've disabled all other plugins and I still get the same error. I'm using a popular theme Divi which is not seeing the same issues on other sites I've got with Divi.
I created a blank site with Divi and loaded only WP Rocket with the same settings. Turns out the page fully loads, but I still get the same error in the console:
Uncaught SyntaxError: Invalid regular expression: missing /
I'm fairly certain it's related to the WP Rocket cache at this point. Here's the full WP Rocket cache settings:
[x] Enable caching for mobile devices
[x] Separate cache files for mobile devices
[ ] Enable caching for logged-in WordPress users
Cache Lifespan: 1 Day
I've seen others post about this same issue, but no solutions other than clearing the cache - which only works on the very next load, not subsequent ones.
|
Why is my site not fully rendering after implementing WP Rocket? I get "Uncaught SyntaxError: Invalid regular expression: missing /" in my console
|
Ersoy led me down the right path with the Redis MONITOR function. This is the final product, which works:
public function forgetByPattern($key)
{
foreach ($this->keys($key) as $item) {
$item = explode(':', $item);
$this->forget($item[1]);
}
return true;
}
Also, the prefix I was baffled with comes from database.php config file under redis.options.prefix key.
|
I am using Redis as my cache driver and I would like to extend the functionality to delete keys by pattern.
When I list out the cache within Redis CLI I get:
127.0.0.1:6379[1]> keys *
1) "workspace_database_workspace_cache:table_def_workspace_items"
2) "workspace_database_workspace_cache:table_def_workspaces"
However when I dump $this->prefix from Illuminate\Cache\RedisStore I get:
"workspace_cache:"
For some reason, my delete doesn't work. When I try to fetch keys using:
public function keys($pattern = '*')
{
return $this->connection()->keys($pattern);
}
I get the keys back as expected.
But if I try to delete them, I am failing to do so (when calling Cache::forgetByPattern('*items')):
public function forgetByPattern($key)
{
foreach ($this->keys($key) as $item) {
$this->connection()->del($item);
}
return true;
}
Item dump here shows exactly workspace_database_workspace_cache:table_def_workspace_items.
If I delete by providing exact key after a prefix (like the original forget() method functions):
$this->connection()->del($this->prefix.'table_def_workspace_items');
Surely it does delete the key.
I have also tried doing a:
$this->prefix0
and
$this->prefix1
EDIT: re-checking the documentation, Redis doesn't provide DEL by pattern.
But none of these work. Why is this failing, and why am I getting an additional prefix added?
|
Laravel Redis cache prefix missmatch
|
This is a good question. Kedro has CachedDataSet for caching datasets within the same run, which keeps the dataset in memory when it's used/loaded multiple times in that run. There isn't really the same thing that persists across runs; in general, Kedro doesn't do much persistent stuff.
That said, off the top of my head, I can think of two options that (mostly) replicates or gives this functionality:
Use the same catalog in the same config environment but with the TemplatedConfigLoader where your catalog datasets have their filepaths looking something like:
my_dataset:
filepath: ${base_data}/01_raw/blah.csv
and you set base_data to s3://bucket/blah when running in "production" mode and to local_filepath/data locally. You can decide exactly how you do this in your overridden context method (whether it's using local/globals.yml (see the linked documentation above), environment variables, or whatnot).
Use separate environments, likely local (it's kind of what it was made for!) where you keep a separate copy of your catalog where the filepaths are replaced with local ones.
Otherwise, your next best bet is to write a dataset wrapper similar to CachedDataSet which intercepts the loading/saving for the wrapped dataset and, on the first load, makes a local copy in a deterministic location that you look up on subsequent loads.
|
I'm trying to figure out how to store intermediate Kedro pipeline objects both locally AND on S3. In particular, say I have a dataset on S3:
my_big_dataset.hdf5:
type: kedro.extras.datasets.pandas.HDFDataSet
filepath: "s3://my_bucket/data/04_feature/my_big_dataset.hdf5"
I want to refer to these objects in the catalog by their S3 URI so that my team can use them. HOWEVER, I want to avoid re-downloading the datasets, model weights, etc. every time I run a pipeline by keeping a local copy in addition to the S3 copy. How do I mirror files with Kedro?
|
How to catalog datasets & models by S3 URI, but keep a local copy?
|
For example, if a user has already clicked on an inline button and your bot responded with a message, and that user clicks on the button again within cache_time, Telegram may show the same message without bothering your server for cache_time seconds.
In your last question about limiting user clicks on inline buttons, you can also set cache_time to avoid Telegram sending the callback_query to your server.
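For example, a hedged sketch using plain HTTP against the Bot API (no framework assumed; the token and text are placeholders):
import requests

TOKEN = "123456:your-bot-token"  # placeholder

def answer_callback(callback_query_id):
    # cache_time tells Telegram clients to reuse this answer for 300 seconds
    # instead of sending your server a new callback_query for every click
    requests.post(
        "https://api.telegram.org/bot%s/answerCallbackQuery" % TOKEN,
        json={
            "callback_query_id": callback_query_id,
            "text": "Already processed",
            "cache_time": 300,
        },
    )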
|
The explanation in the official docs is given as:
cache_time - Integer Optional. The maximum amount of time in seconds that the result of the callback query may be cached client-side. Telegram apps will support caching starting in version 3.14.
Defaults to 0.
So if someone clicks on the callback button, what does sending cache_time in answerCallbackQuery do? I didn't understand the official docs' explanation.
Official docs- https://core.telegram.org/bots/api#answercallbackquery
|
What does cache_time do in Telegram's answerCallbackQuery API?
|
clflush does what it says on the tin and flushes all caches: instruction, data, and unified. (And the decoded-uop cache). Margaret Bloom actually tested to confirm this.
Instruction caches can't ever be dirty so it makes sense that discussion of modified lines talks about data. Only write-back data caches can be dirty. Note that one of the intended purposes of clflush is to write-back dirty data to a non-volatile DIMM, so it's natural that the docs focus on data.
You're reading too much into the wording, perhaps based on a misconception that write-back is synonymous with flushing. A clean cache line (such as I-cache) can be flushed by simply dropping it, no write-back necessary. Note the difference between invd and wbinvd. The invd docs use "flush" as a synonym for "invalidate". (related: What use is the INVD instruction?)
From Intel's vol.2 ISA ref manual entry for clflush
The CLFLUSH instruction can be used at all privilege levels and is subject to all permission checking and faults associated with a byte load (and in addition, a CLFLUSH instruction is allowed to flush a linear address in an execute-only segment). Like a load, the CLFLUSH instruction sets the A bit but not the D bit in the page tables.
This doesn't explicitly say that it actually flushes I-cache in such regions, but it implies that flushing is relevant for such cases where data accesses aren't even possible.
Semi-related: Flush iCache in x86
|
This is the question: does clflush flush L1i? The Intel ISA manual is not clear on that:
Invalidates from every level of the cache hierarchy in the cache coherence domain the cache line that contains the linear address specified with the memory operand. If that cache line contains modified data at any level of the cache hierarchy, that data is written back to memory
From the wording
If that cache line contains modified data at any level of the cache hierarchy, that data is written back to memory.
I would speculate that L1i is left untouched. Is that the actual behavior of Intel CPUs?
|
Does clflush flush L1i?
|
You could use a cache (with some Map maybe) and Vertx::setTimer to invalidate it after 3 hours. Assuming you are using Router:
router.get("/things/:id").handler(rc -> {
    String id = rc.pathParam("id");
    List result = cache.getThing(id);
    if (result == null) {
        result = getThingFromDatabase(id);
        cache.saveThing(result);
        vertx.setTimer(10800000, t -> { // <-- 3 hours
            cache.invalidateThing(id);
        });
    }
    rc.response().end(Json.encode(result)); // send the (possibly cached) result
});
|
I've just started out with Vertx. I wonder if there's a way to store/cache response data for a period of time?
For example, the first time a user calls my API, it will query the database on the server and return a data. I want to save/cache this data to a local file (or memory) on the server for, for example, 3 hours. During these 3 hours, if any other user make another call to the API, it will use the cached data instead. After 3 hours, the cached data reset.
I tried searching on Google for solutions like Vertx Redis, or StaticHandler, but they seem over-complicated, and don't seem to match my needs?
Is there a simple way to achieve this?
|
Timed caching in Vertx
|
You can use NT stores like movntps on normal WB memory (i.e. the heap). See also Enhanced REP MOVSB for memcpy for more about NT stores vs. normal stores.
It treats it as WC for the purposes of those NT stores, despite the MTRR and/or PAT having it set to normal WB.
The Intel docs are telling you that NT stores "work" on WB, WT, and WC memory. (But not strongly-ordered UC uncacheable memory, and of course not on WP write-protected memory).
You are correct that normally only video RAM (or possibly other similar device-memory regions) are mapped WC. And no, you can't easily allocate WC memory in a user-space process under a normal OS like Linux, but you wouldn't normally want to.
You can only use SSE4 NT loads on WC memory (otherwise current CPUs ignore the NT hint), but some cache pollution for loads is a small price to pay for HW prefetch and caching working. You can use NT prefetch from WB memory to reduce pollution in some levels of cache, e.g. bypassing L2. But that's hard to tune.
IIRC, normal stores like mov on WC memory have the store-merging behaviour you get from NT stores. But you don't need to use WC memory for NT stores to work.
|
In Agner Fog's "Optimizing subroutines in assembly language - section 11.8 Cache control instructions," he says: "Memory writes are more expensive than reads when cache misses occur in a write-back cache. A whole cache line has to be read from memory, modified, and written back in case of a cache miss. This can be avoided by using the non-temporal write instructions MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPD, MOVNTPS. These instructions should be used when writing to a memory location that is unlikely to be cached and unlikely to be read from again before the would-be cache line is evicted. As a rule of thumb, it can be recommended to use non-temporal writes only when writing a memory block that is bigger than half the size of the largest-level cache."
From the "Intel 64 and IA-32 Architectures Software Developer's Manual Combined Volumes Oct 2019" - "These SSE and SSE2 non-temporal store instructions minimize cache pollution by treating the memory being accessed as the write combining (WC) type. If a program specifies a non-temporal store with one of these instructions and the memory type of the destination region is write back (WB), write through (WT), or write combining (WC), the processor will do the following . . . "
I thought that write-combining memory is only found in graphics cards but not in general-purpose heap memory -- and by extension that the instructions listed above would only be useful in such cases. If that's true, why would Agner Fog recommend those instructions? The Intel manual seems to suggest that it's only useful with WB, WT or WC memory, but then they say that the memory being accessed will be treated as WC.
If those instructions actually can be used in an ordinary write to heap memory, are there any limitations? How do I allocate write-combining memory?
|
Can we use non-temporal mov instructions on heap memory?
|
You can use the cache-magic package with a symbolic link to your Drive folder.
Make the following your first cell in Colab and always run it first (replace path/to/my/project/folder with your Drive project folder):
from google.colab import drive
drive.mount('/content/drive')
%cd '/content/drive/My Drive/path/to/my/project/folder'
!pip install cache-magic
import cache_magic
!mkdir .cache
!ln -s '/content/drive/My Drive/path/to/my/project/folder/.cache' /content/.cache
Now when you have some long computation:
bigVariable = longComputation()
you replace it with
%cache bigVariable = longComputation()
and it will be reloaded from cache if it was computed before!
Your cache lives in a folder named .cache inside your project folder.
|
After a period of inactivity, my Google Colab variables are lost and I have to recompute them. I know I can work around it with
from google.colab import drive
drive.mount('/content/drive/')
%cd '/content/drive/My Drive/path/to/my/project/folder'
and then use numpy.save, torch.utils.checkpoint or tf.train.Checkpoint which will save it to Google Drive if my variable is in one of these formats.
Is there a way to cache any Python variable on Colab (i.e. not be bound to a specific data science framework or format)?
|
How do I cache Python variables in Google Colab
|
I've managed to put together a working solution for this problem. Check the GitHub repo https://github.com/alexsomai/cache-static-assets.
This is an example of config that should work:
@Configuration
public class WebConfig implements WebMvcConfigurer {
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
Objects.requireNonNull(registry);
// could use either '/**/images/{filename:\w+\.png}' or '/**/images/*.png'
registry.addResourceHandler("/**/images/{filename:\\w+\\.png}")
.addResourceLocations("classpath:/static/")
.setCacheControl(CacheControl.maxAge(1, TimeUnit.DAYS));
registry.addResourceHandler("/**/images/*.jpg")
.addResourceLocations("classpath:/static/")
.setCacheControl(CacheControl.maxAge(2, TimeUnit.DAYS));
registry.addResourceHandler("/**/lib/*.js")
.addResourceLocations("classpath:/static/")
.setCacheControl(CacheControl.maxAge(3, TimeUnit.DAYS));
}
}
You could easily adjust it for your needs, based on file type and cache duration.
As key takeaways, make sure to add the addResourceLocations function (without this one, you get 404). Plus, if you are using Spring Boot, you don't need the @EnableWebMvc, as it was initially posted in this example https://stackoverflow.com/a/33216168/6908551.
|
I'm trying to set up caching headers on specific static file type in Spring Boot.
In directory src/main/resources/static there are few subdirectories with different file types:
src/main/resources/static/font --> *.otf
src/main/resources/static/lib --> *.js
src/main/resources/static/images --> *.png, *.jpg
Is there a way to put cache headers by file type inside Spring configuration?
*.otf 365 days
*.png 30 days
*.jpg 7 days
Spring version is 5.2.3 and Spring Boot 2.2.4 - is there a chance that Spring Boot deals with it and makes it not work?
Tried with
@Override
public void addResourceHandlers(final ResourceHandlerRegistry registry) {
final CacheControl oneYearPublic =
CacheControl.maxAge(365, TimeUnit.DAYS).cachePublic();
// it does not "work" with "/static/fonts/"
registry
.addResourceHandler("/fonts/{filename:\\w+\\.otf}")
.setCacheControl(oneYearPublic);
}
but I get weird results. When checking with the Network tab of DevTools, I get these headers:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
But when I go to the URL directly, I get 404
http://localhost/fonts/1952RHEINMETALL.otf
Without any configuration, I get a "no-store" Cache-Control header.
|
Cache static files by content type in Spring Boot
|
Here are three options for you:
Use a sorted set, using the timestamp as score, and the post-hash as value. The post-hash is also the key in a hash where the actual posts are stored. Commands involved: ZADD, HSET, ZREVRANGEBYSCORE, HGET.
Use a sorted set, using the timestamp as score, and the post with metadata as value. Make sure "post with metadata" is unique, you can include the timestamp and user to achieve this. This will have better performance, but makes it a bit harder if you have to find a specific post. Commands involved: ZADD, ZREVRANGEBYSCORE, ZRANGEBYSCORE.
Use Redis Streams. If you want a uniform insert order independent of client time, Redis can set the timestamp for you. However, stream entries cannot be modified, so either users cannot edit posts, or every edit brings the post back up as new. Commands involved: XADD, XREVRANGE, XDEL.
See:
Redis Commands
Introduction to Redis Streams
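A minimal sketch of the first option with redis-py (assumed client; the key names "feed" and "posts" are illustrative):
import json
import time
import redis

r = redis.Redis()

def add_post(post_hash, post):
    r.hset("posts", post_hash, json.dumps(post))  # HSET: hash -> post body
    r.zadd("feed", {post_hash: time.time()})      # ZADD: score by timestamp

def newest_posts(count=10):
    # ZREVRANGEBYSCORE: highest (newest) scores first
    hashes = r.zrevrangebyscore("feed", "+inf", "-inf", start=0, num=count)
    return [json.loads(r.hget("posts", h)) for h in hashes]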
|
I am implementing a feed for a social network in which the newest uploaded post should be served first, and so on. I use hashes of posts as keys and posts as values. I need the posts in "newest first" order. How do I do it?
My idea is:
Store the post and timestamp with hash as a key
Get all keys and timestamps
Sort the timestamps in descending order
Then use the respective keys to get th latest images
Question 1: But this approach is not good. How should I do it?
EDIT: Question 2: Please tell me what algorithm you use to serve a feed. If the feed is common to all users and based on "newest first", how do I implement it?
This is my first time doing backend work. Please bear with me if the question is dumb.
Thanks.
|
In redis,how to get the keys descending order based on insertion order?
|
Flask-Caching itself does not necessarily automatically delete cache items after they time out. Generally, it will only do so when performing certain operations. For example, in the FileSystemCache, if you look at the source code for the get() function you can see that if you try to get a cached item that has expired, it deletes the file and returns None.
Also, looking at the source code for the set() function, you can see the call to _prune() (source code). This is an internal function which (if the number of cached files exceeds the threshold set in the constructor) will go through the files in the cache directory and delete the expired files.
The reason you are not seeing any files deleted may be that, even if the timeout is low, the threshold is high enough that you aren't caching enough files for it to start deleting some. You can use the CACHE_THRESHOLD config variable to set the maximum number of files that can be cached.
I hope this helps!
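For example (a sketch based on the configuration from the question; the threshold value is arbitrary):
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
cache = Cache(app, config={
    'CACHE_TYPE': 'filesystem',
    'CACHE_DIR': '/tmp/flask-cache',  # assumption: any writable directory works
    'CACHE_DEFAULT_TIMEOUT': 15,
    'CACHE_THRESHOLD': 100,           # start pruning expired files once ~100 accumulate
})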
|
I have a Flask app, and I have implemented the "Flask-Caching" extension. I am using the "FileSystemCache" method. This is all new to me so it may be working properly and I'm unaware.
I have found that when I call cache.clear(), I will see items delete from the directory specified as my cache location. However, when I set the timeout to be something really short, I do not see the files delete when the timeout duration is reached.
I'm not sure if it is supposed to delete or if I should write a background task to delete all files older than the timeout setting. Each file is small but they accumulate very quickly.
I ask that someone advise me if this is working as intended. Creating a background task to clear the directory is no issue but it seems like this should be happening automatically.
In terms of relevant code, there isn't much:
cache = Cache(app,config={'CACHE_TYPE': 'filesystem',
'CACHE_DIR': r"<my cache directory>",
'CACHE_DEFAULT_TIMEOUT': 15})
The timeout is only 15 seconds to aid my testing here but it will be increased later. Throughout my code, I'm only really using @cache.memoize() and the occasional cache.delete_memoized().
|
Flask-Caching FileSystemCache method does not delete cache items upon timeout
|
You can control your cache strategy using this attribute on your controllers or methods:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class NoCacheAttribute : ResponseCacheAttribute
{
public NoCacheAttribute()
{
Duration = 0;
NoStore = true;
Location = ResponseCacheLocation.None;
}
}
And then use this attribute where needed:
[HttpGet]
[NoCache]
public async Task<IActionResult> Get()
{
... your code here
}
You can also add some parameters to your attribute to finetune your cache strategy.
|
I am having a tough time with caching. First, I am getting problems with Google Chrome caching the front-end reactjs code even though I added a hash to the javascript and css files (it seems like the index.html file is being cached?).
Now IE 11 seems to be caching my api calls. For instance, I load up IE 11, do a request, then hit F5 and look at the network tab: the requests show "Received From Cache". This is shown even if I know 100% the data from the api has changed.
No other browser has this issue.
|
Asp.net Core Caching + IE 11 Results in Cached Responses
|
Assuming you are looking to use the distribution id to run an invalidation and wait for it to complete (which i suspect would be a common use case), you can pass the distribution id through to other commands to facilitate this.
DISTRIBUTION_ID=`aws cloudformation describe-stacks --stack-name <name> | jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="ClientDistribution").OutputValue'` && \
INVALIDATION_ID=`aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths "/*" | jq -r .Invalidation.Id` && \
aws cloudfront wait invalidation-completed --distribution-id $DISTRIBUTION_ID --id $INVALIDATION_ID
|
Need to programmatically get the CDN_DISTRIBUTION_ID so that CloudFront caches can be invalidated post-build regardless of distribution.
I've got CloudFront to invalidate as intended, but it requires a distribution id. Since the same yaml code is going to be used for each developer's individual environment, CI, production, etc., it needs to be obtained programmatically.
I'm struggling to figure out how to get the distribution id without manually finding it in the AWS console, or using workarounds in other languages. We would like to be able to just pass it in as a variable like ${DISTRIBUTION_ID}, but I'm struggling to figure out how to set that through yaml.
- aws cloudfront create-invalidation --distribution-id CDN_DISTRIBUTION_ID --paths "/*"
I've got the ID as a stack export, but I'm unsure how to import it into the runtime build spec where it needs to be.
Outputs:
ClientDistribution:
Description: "ClientDistribution distribution id"
Value: !Ref ClientDistribution
Export:
Name: !Sub "${AWS::StackName}-ClientDistribution"
|
How to programmatically get 'distribution-id' to invalidate cloudfront cache?
|
It's an interesting intellectual exercise, but in my opinion this is classic premature optimization.
1) It's probably way too early to have even introduced redis, let alone be thinking about whether redis is fast enough. Your social network is almost certainly just fine up to about 1,000 users running off raw SQL queries against Mysql / Postgres / Random RDS. If it starts to slow down, get data on slow running queries and fix them with query optimizations and appropriate indexes. That'll get you past 10,000 users.
2) Now you can start introducing redis. In general, I'd encourage you to think about your redis as purely caching and not permanent storage; it shouldn't matter if it gets blown away, it just means your site is slower for the next few seconds because your users are getting their page loads from SQL queries instead of redis hits (each query re-populating that user's sorted list of posts in redis, of course).
Your strategy and example code for using redis seem fine to me, but until you have actual data on how users use your site (which may be drastically different than your current expectations), it's simply impossible to know what types of SQL indexes you will need, what keys and lists are ideal for caching in redis, etc.
|
I'm building a small social network (users have posts and posts have comments - very basic), using a clustered nodejs server and redis as a distributed cache.
My approach to caching users' posts is to have a sorted set that contains all the user's post ids ordered by rate (which should be updated every time someone adds a like or comment), with the actual objects stored as hash objects.
So the "get user's posts" flow should look like this:
1. use zrevrange to get a range of ids from the sorted set.
2. use multi/exec and hgetall to fetch all the objects at once.
I have a couple of questions:
1. regarding performance, will my approach scale as the cache size gets bigger, or should I maybe use lua or something?
2. if I want to continue with the current approach, how should I persist the sorted set in case of a redis crash? If I use redis persistence it will affect the overall performance, so I thought about using a dedicated redis server for the sets (I searched whether it is possible to back up only part of the redis data but didn't find anything about it).
My approach => getTopObjects({userID}, 0, 20):
self.zrevrange = function(setID, start, stop, multi)
{
    return execute(this, "zrevrange", [setID, start, stop], multi);
};
self.getObject = function(key, multi)
{
    return execute(this, "hgetall", key, multi);
};
self.getObjects = function(keys)
{
    // queue one hgetall per key on a single multi, then exec once
    let multi = this.client.multi();
    let promiseArray = [];
    for (var i = 0, len = keys.length; i < len; i++)
    {
        promiseArray.push(this.getObject(keys[i], multi));
    }
    return execute(this, "exec", [], multi).then(function(results)
    {
        //TODO: do something with the result.
        return Promise.all(promiseArray);
    });
};
self.getTopObjects = function(setID, start, stop)
{
    //TODO: validate the range
    let thisArg = this;
    return this.zrevrange(setID, start, stop).then(function(keys)
    {
        return thisArg.getObjects(keys);
    });
};
|
Caching relational data using redis
|
Sorry for taking so long to respond. I just found this is a bug in libcuda. I will submit a fix shortly.
For now, a workaround is to set CUDA_CACHE_MAXSIZE to 2Gb-1 (2147483647). Setting it to a value between 2Gb and 4Gb-1 could end up with a really high cache size, and setting it to 4Gb should result in a cache size of 256Mb, which is the default cache size since R334, instead of 32Mb, as said here.
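If you launch the application from a Python host process, the workaround could be applied like this (a sketch only; the variable must be set before the driver library is loaded):
import os

# 2 GB - 1, per the workaround above
os.environ["CUDA_CACHE_MAXSIZE"] = "2147483647"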
I hope this workaround will help you.
|
I have an OpenCL application which runs on CUDA v7.5.
The application has very many large kernels. I am setting CUDA_CACHE_MAXSIZE to the maximum possible value, 4294967296 i.e. 4GB. However, the total size of the files stored in the cache directory never grows above ~307MB. It does appear that cache entries are being added / evicted (I see small changes in the total file size, and my application is definitely hitting the cache when querying for recent kernels). It behaves as if there were some cache size limit lower than CUDA_CACHE_MAXSIZE being enforced, maybe by the opencl driver?
I would like to know what caused this, and if it is possible for me to access the full cache size of 4GB.
|
My CUDA JIT cache stays persistently far below CUDA_CACHE_MAXSIZE
|
On macOS, using Chrome, I found that pressing SHIFT + REFRESH hard-refreshes the page and forces re-downloading of all source files and images.
|
I'll upload my html, css, and javascript to my hosting; everything works great. But then say I make a change locally to my CSS: I'll upload the CSS file again, go back to my site, but my site is still using the old CSS.
I can't seem to fix it unless I change the filename of my CSS. Is there a way to stop my browser from thinking it can just reuse the old css?
|
When I upload new web files to my hosting, my browser is returning the old CSS file
|
If you want to use a solution that already exists for this functionality, you can check out the runtime-memcache library, which implements LRU and a few other caching schemes (MRU, timeout) in JavaScript.
It uses a modified doubly linked list to achieve O(1) get, set and remove.
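The idea itself is language-independent; as a rough sketch, the same O(1) LRU behaviour can be expressed in Python with an ordered map (illustrative only, not the library's implementation):
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used entry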
|
I'm looking for a way to do in the browser what Memcached offers, i.e. ability to configure a size limit (e.g. the localStorage quota) and it will evict old items automatically to keep the cache under limit.
The benefit is no explicit deletions are ever needed, as long as the cache keys are versioned/timestamped. I've seen some libraries that do similar with a "number of items" limit, but size limit would be more useful to stay under the browser's quota.
|
JavaScript localStorage cache with size limit and least-recently used (LRU) eviction
|
What do you mean by "this async is not allowed"? I see no particular issue with the putIfAbsent code, and I believe it should work.
The one problem I see is that the cache is not caching futures, but strings. Since your function is returning a future anyway, you might as well store the future in the cache.
I would write it as:
final Map<String, Future<String>> _cache = new Map();
Future<String> parse(final String name) =>
_cache.putIfAbsent(name, () => File(name).readAsString());
but apart from fixing the _cache map type, that is effectively the same, it's just avoiding creating and waiting for a couple of extra futures.
|
In my class I'm loading some files, and for efficiency I wanted to make a thread safe cache. I see in the map class that there is a putIfAbsent method, but it doesn't accept async types. Also not sure if this structure in general is safe to use.
This is the style of what I'm trying to do:
final Map<String, String> _cache = new Map();
Future<String> parse(final String name) async {
_cache.putIfAbsent(name, () async { // this async is not allowed
return await new File(name).readAsString();
});
return _cache[name];
}
Since I can't use async in the putIfAbsent callback, I've opted to use locks instead, but it makes the code far more verbose..
final Lock _lock = new Lock();
final Map<String, String> _cache = new Map();
Future<String> parse(final String name) async {
if (!_cache.containsKey(name)) {
await _lock.synchronized(() async {
if (!_cache.containsKey(name)) {
_cache[name] = await new File(name).readAsString();
}
});
}
return _cache[name];
}
Does anyone know how I can simplify this code, or if there are better libraries I can use for thread safe cache?
|
How to use putIfAbsent for when action returns Future
|
You need to use WebGet rather than WebInvoke
The difference is subtle but important:
WebInvoke
Represents an attribute indicating that a service operation is logically an invoke operation and that it can be called by the WCF REST programming model.
WebGet
Represents an attribute indicating that a service operation is logically a retrieval operation and that it can be called by the WCF REST programming model.
And further down, in the remarks section for WebInvoke:
The WebInvokeAttribute attribute is applied to a service operation … and associates the operation with a UriTemplate as well as an underlying transport verb that represents an invocation (for example, HTTP POST, PUT, or DELETE). … The WebInvokeAttribute determines what HTTP method that a service operation responds to. By default, all methods that have the WebInvokeAttribute applied respond to POST requests. The Method property allows you to specify a different HTTP method. If you want a service operation to respond to GET, use the WebGetAttribute instead.
|
I have a WCF service which does work but really needs the responses to be cached.
I've followed some guidance online but I receive the following error:
AspNetCacheProfileAttribute can only be used with GET operations.
I've checked this in Postman to be completely sure that my client side code is not making the request in the wrong way, same result there.
The interface looks like this:
public interface INavbar
{
[OperationContract]
[WebInvoke(Method = "GET", ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Bare)]
List<Navbar.simpleNav> getNavList();
}
The service returns a List as a JSON object, the 2 changes I made to try and add the caching were to add AspNetCacheProfile attribute to the class and add the caching profile details to my web.config.
Attribute for class:
[AspNetCacheProfile("CacheFor180Seconds")]
web.config:
<caching>
<outputCacheSettings>
<outputCacheProfiles>
<add name="CacheFor180Seconds" duration="180" varyByParam="none"/>
</outputCacheProfiles>
</outputCacheSettings>
</caching>
As far as I can tell it's all pretty standard stuff however it may be that I'm trying to do this the wrong way so any help is appreciated.
Many thanks
|
Issue with caching wcf service - "AspNetCacheProfileAttribute can only be used with GET operations"
|
There's nothing in the Standard Library built specifically for memory caching but it's easy enough to roll your own.
// memoize this function (arity 1)
def memo1[A,R](f: A=>R): (A=>R) =
new collection.mutable.WeakHashMap[A,R] {
override def apply(a: A) = getOrElseUpdate(a,f(a))
}
The reason for using WeakHashMap is that it is designed to drop (forget) seldom-accessed elements in a memory-challenged environment.
So this can be used to cache (memoize) existing methods/functions...
def s2l(s :String) :Long = ???
val s2lM = memo1(s2l) //memoize this String=>Long method
val bigNum :Long = s2lM(inputString) //common inputs won't be recalculated
...or you can define the function logic directly.
//memoized Long-to-Double calculation
val l2dM = memo1{ n:Long =>
//Long=>Double code goes here
}
For functions with larger arity, use a tuple as the Map key.
def memo3[A,B,C,R](f :(A,B,C)=>R) :(A,B,C)=>R = {
val cache = new collection.mutable.WeakHashMap[(A,B,C),R]
(a:A,b:B,c:C) => cache.getOrElseUpdate((a,b,c), f(a,b,c))
}
|
Is there a built-in way of doing in memory caching in Scala like a MemoryCache class that can be used without any additional dependencies for a simple LRU cache with a size limit? I've found many possibilities but they all require external dependencies.
|
Is there a Scala Built-In Cache Class
|
Python is a garbage-collected language. One consequence of this is that memory is automatically allocated and freed as needed. This creates memory fragmentation, which can break apart transfers to the CPU caches. It's also not possible to change the layout of a data structure directly in memory, which means that one transfer on the bus might not contain all the relevant information for a computation, even though it might all fit within the bus width. It essentially hurts any prospects for keeping the L1/L2 caches filled with the data relevant to the next computation.
Another problem comes from Python's dynamic types and the fact that the code is not compiled. Many C developers eventually realize that the compiler is usually smarter than they are. The compiler can perform many tricks to affect how things are laid out, how the CPU will run certain instructions, in what order, and the best way to optimize them. Python, however, is not compiled and, to make matters worse, has dynamic types, which means that inferring any optimization opportunities for an algorithm is exponentially more difficult, since code functionality can be changed at runtime.
As mentioned in the comments, there are ways to mitigate such problems, foremost being the Cython extension for Python, which allows Python code to be compiled.
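If you still want to read the cache size at runtime, on Linux you can get it from sysfs without third-party modules. This sketch relies on the standard sysfs cache layout and is Linux-only:
import glob

def l1_dcache_size():
    # each index* directory describes one cache; pick the level-1 data cache
    for idx in glob.glob("/sys/devices/system/cpu/cpu0/cache/index*"):
        with open(idx + "/level") as f:
            level = f.read().strip()
        with open(idx + "/type") as f:
            ctype = f.read().strip()
        if level == "1" and ctype == "Data":
            with open(idx + "/size") as f:
                return f.read().strip()  # e.g. "32K"

print(l1_dcache_size())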
|
It's my first time posting here and I hope you guys can answer my question.
I'm writing a Python script that finds all of the prime numbers up to some number n using the sieve of eratosthenes. In order to make the program more efficient, I want to sieve the range of numbers in "segments" that fit inside the CPU's L1 cache. I know my own L1 cache, it's 576 KB. But I want this program to work on any computer.
Is there any way I can get the CPU's L1 cache size? I want specifically the L1 cache, not L2 or L3.
|
Python: Get Processor's L1 cache
|
I don't think you can do that. How are these keys generated? If there is a pattern to it, then you can write a short python script to generate a text file, which has the delete commands. For example:
File: mydelete.txt
DELETE FROM ns1.set1 WHERE PK = 'k1'
DELETE FROM ns1.set1 WHERE PK = 'k2'
etc..
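A sketch of such a generator script (namespace, set and key values are placeholders):
keys = ["k1", "k2", "k3"]  # your list of Aerospike keys

with open("mydelete.txt", "w") as f:
    for k in keys:
        f.write("DELETE FROM ns1.set1 WHERE PK = '%s'\n" % k)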
Then, in AQL, use the RUN command.
aql> RUN 'mydelete.txt'
However, if you want to delete all the records in a set, you can use TRUNCATE in AQL.
|
I have a list of Aerospike keys with me. I need to delete all the records associated with these keys using AQL.
I am aware of the delete query for a single key. Like this:
DELETE FROM <ns>[.<set>] WHERE PK=<key>
However, I would like to delete them all using AQL in a single query. Is there any such query to delete in bulk?
|
How to delete multiple records in Aerospike given multiple keys through a single AQL query?
|
The key here is how the Spring context works in hybris. Since the cache region has to be accessible from all application contexts, it has to be set at the global level.
All application contexts have the global application context as their parent.
The cache region bean has to be defined in a "global" spring file. In hybris this is done by setting the following property (my_cache.xml has to be in the resources folder of project_name):
<project_name>.global-context=my_cache.xml
|
I need to create a new region for a specific set of models. I've followed the documentation about RegionCache but it doesn't work.
Here is the configuration :
<alias name="defaultTestCacheRegion" alias="testCacheRegion"/>
<bean name="defaultTestCacheRegion" class="de.hybris.platform.regioncache.region.impl.EHCacheRegion">
<constructor-arg name="name" value="testCacheRegion" />
<constructor-arg name="maxEntries" value="${regioncache.testcacheregion.maxentries}" />
<constructor-arg name="evictionPolicy" value="${regioncache.testcacheregion.evictionpolicy}" />
<constructor-arg name="statsEnabled" value="${regioncache.stats.enabled}" />
<constructor-arg name="exclusiveComputation" value="${regioncache.exclusivecomputation}" />
<property name="handledTypes">
<array>
<value>25049</value>
<value>25050</value>
<value>25051</value>
</array>
</property>
</bean>
<bean id="testCacheRegionRegistrar" class="de.hybris.platform.regioncache.region.CacheRegionRegistrar" c:region-ref="testCacheRegion" />
|
How to create a new cache region in SAP Commerce (hybris)
|
Creating the index twice with different expireAfterSeconds values, like this, is invalid:
collection.createIndex({createdAt: 1}, {expireAfterSeconds: 60, unique: true});
collection.createIndex({createdAt: 1}, {expireAfterSeconds: 300, unique: true});
You cannot use createIndex() to change the value of expireAfterSeconds of an existing index. Instead use the collMod database command in conjunction with the index collection flag. Otherwise, to change the value of the option of an existing index, you must drop the index first and recreate it.
https://docs.mongodb.com/v3.4/core/index-ttl/
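For illustration, the collMod route could look like this through pymongo (database, collection and field names are assumed from the question below):
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["cache"]
# change the TTL of the existing index without dropping it
db.command("collMod", "data",
           index={"keyPattern": {"createdAt": 1},
                  "expireAfterSeconds": 300})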
For expiry of individual documents, there are reports that it can only be done by calculating the expiry time per document and expiring it at that specific clock time (ref: groups.google.com/forum/#!topic/mongodb-dev/ZLb8KSrLyOo). The sample below stores an expireAt date on each document and indexes that field with expireAfterSeconds: 0:
var testItems = [{
"_id": "abc12345",
"dmac": "abc",
"expireAt": moment().add(3, 'minutes').toDate(),
"val": {
"0": 10,
"1": 15,
"2": 5
},
"res": "minutes",
"time": "2017-12-04T00:12:58Z"
}];
// index the expireAt field itself; expireAfterSeconds: 0 expires each document at its own expireAt time
collection.createIndex({expireAt: 1}, {expireAfterSeconds: 0}, function(error, indexName){
    if(error) console.log(error);
    console.log(indexName);
});
|
I'm using MongoDB (v3.4) as a cache and using TTL indexes to expire records. However, the TTL settings don't seem to work properly. Specifically, I tested using an endpoint to insert data (as below).
Documents inserted via the mongoInsert endpoint are supposed to expire in 5 min. However, it seems that after ~1 min, the document got removed. I've looked at other similar suggestions regarding the use of UTC time via moment().utc().toDate(), and the behaviour was the same. new Date() returns UTC time, so I guess it should have the same effect.
Not sure if there are other settings that should be included but are not detailed in the documentation. Anyone encountered this before?
function mongoInsert(mongoClient){
return function(req, res){
mongoClient.connect('mongodb://localhost:27017/cache', function(err, db) {
db.collection('data', function(err, collection){
var testItems = [{
"_id": "abc12345",
"dmac": "abc",
"createdAt": new Date(),
"val": {
"0": 10,
"1": 15,
"2": 5
},
"res": "minutes",
"time": "2017-12-04T00:12:58Z"
}];
let unix = new moment().valueOf();
collection.createIndex({createdAt: unix}, {expireAfterSeconds: 300});
collection.insertMany(testItems, function(err, items){
db.close();
res.json(items);
});
})
})
}
}
|
MongoDB TTL expiry doesn't work properly on NodeJS
|
Yes, see the documentation for LoadingCache.get(K) (and it sibling, Cache.get(K, Runnable)):
If another call to get(K) or getUnchecked(K) is currently loading the value for key, simply waits for that thread to finish and returns its loaded value.
So if a cache entry is currently being computed (or reloaded/recomputed), other threads that try to retrieve that entry will simply wait for the computation to finish - they will not kick off their own redundant refresh.
|
I implemented a non-blocking cache using Google Guava, there's only one key in the cache, and value for the key is only refreshed asynchronously (by overriding reload()).
My question is that does Guava cache handle de-duplication if the first reload() task hasn't finished, and a new get() request comes in.
//Cache is defined like below
this.cache = CacheBuilder
.newBuilder()
.maximumSize(1)
.refreshAfterWrite(10, TimeUnit.MINUTES)
.recordStats()
.build(loader);
//reload is overridden asynchronously
@Override
public ListenableFuture<Map<String, CertificateInfo>> reload(final String key, Map<String, CertificateInfo> prevMap) throws IOException {
LOGGER.info("Refreshing certificate cache.");
ListenableFutureTask<Map<String, CertificateInfo>> task = ListenableFutureTask.create(new Callable<Map<String, CertificateInfo>>() {
@Override
public Map<String, CertificateInfo> call() throws Exception {
return actuallyLoad();
}
});
executor.execute(task);
return task;
}
|
Does Google Guava Cache do deduplication when refreshing value of the same key
|
This is based on how often the website is changed. For example, Wikipedia may be updated several times a day, but 92spoons.com may be updated only every few days. (source)
It can also vary with popularity. You can visit this website, which should allow you to refresh the cache. (source)
|
If you type in cache:www.92spoons.com, for example, into the Google search engine, it shows you a snapshot of the page from a time when Google snapshotted the site. I was just wondering, how often does Google refresh its cached data? It looks like, as of now, it was refreshed about 3 days ago. Also, do all sites' cached data update at the same time?
|
How often does Google refresh its cached websites?
|
Caching and paging are separate issues. Say you have some REST API that serves as your primary data source and is slow, so you want to create a faster-reacting cache (no paging issue yet). Of course with your cache come all the relevant issues such as stale data, "hit and miss", and so on. Say you resolve those issues; now you have a layered data source with a slow primary data source and a cache in front of it. Now in your findAll method you rely on the pagination abilities that Spring Boot provides (regardless of what data source you use). BTW, your repository should extend PagingAndSortingRepository, which extends CrudRepository. See reference here
|
I'm using spring boot with maven to create a repository that stores results of searched articles and I'm displaying those results through pagination, the interface is as follows:
public interface HelpArticleSearchRepository extends CrudRepository<HelpArticleSearchResults, Integer> {
Page<HelpArticleSearchResults> findAll(Pageable pageable);
}
My problem is that: I'm getting those results, of searched articles, from a REST API query, and the response is quite slow.
So, I searched online for caching solutions, but the solution I found seems to only work for methods that return the same class instance that was used to store the data into the repository. Like:
spring-data-examples caching
I was hoping to get a solution that works with pagination, but I'm still new to spring boot, so it's a huge challenge.
Please point me in the right direction, for any additional information I'm happy to provide.
|
Caching Pageable objects (results) in interface that extends CrudRepository
|
You need to provide a customized CacheManager and Cache implementation for your caching provider (e.g. Ehcache, Redis or Hazelcast).
By default, OOTB, Spring's Cache Abstraction does not split up cached method array/collection type return values into separate entries in the targeted cache. You must handle this yourself.
See my last response for this nearly identical question.
|
I want to cache the response of the repository class which has the following methods:
@Cacheable(cacheNames = "books", key="#id")
Book findById(Long Id);
@CacheEvict(cacheNames = "books", key = "#id")
void deleteById(Long id);
@Cacheable(cacheNames = "books", key="#book.id")
Book save(Book book);
@Cacheable("books")
List<Book> findAll();
Except the findAll() method, others are working as expected.
How to make findAll() to populate the books cache with book.id as key?
|
How to cache the list of entires based on its primary key using spring caching
|
Have a look at Azure Durable Functions, e.g. Fan-In/Fan-Out scenario. They use Azure Storage Queues underneath, but provide higher-level abstractions.
Note that Durable Functions are still in early preview (as of August 2017), so not suitable for production use yet.
|
A really common pattern that I need in multi instance web applications is invalidating MemoryCaches over all instances - and waiting for a confirmation that this has been done. (Because a user might otherwise after a refresh suddenly see old data on another instance)
We can make this with a combination of:
AzureServicebus,
Sending message to a topic
other instances send message back with ReplyTo to the original instance
have a wait loop for waiting on the messages back,
be aware of how many other instances are there in the first place.
probably some timeout because what happens if an instance crashes in between?
I think working out all these little edge cases might be a lot of work - so before we reinvent the wheel - is there already a common pattern or library for this?
(of course one solution would be using a shared cache like Redis, but for some situations a memorycache is a lot faster)
|
Azure - Send message to all other Roles and wait for response
|
Yes, the rule
/0100
{
/glob "*.html"
/type "deny"
}
means that no files with the .html extension will be cached. See the documentation for more details.
I'm not sure what that would accomplish on a Publish instance. The only situation where it would seem apt would be if all HTML pages were rendered with user-specific data inline with the static parts (as in, user data rendered in JSP/HTL scripts responsible for displaying whole pages). Not caching HTML pages puts a significant strain on your Publisher farm. If the avoidance of caching dynamic data is the reason for this config, there are better ways to deal with serving user-specific data from AEM, each of which requires a change in your application and deployment architecture (AJAX calls, Server Side Includes, Sling Dynamic Includes, Edge Side Includes, Templating Engines, to name a few).
As pointed out in the other answer, this can be a valid rule when the dispatcher is set up in front of an Author environment.
|
I am looking to get an understanding on a part of AEM dispatcher configuration. This will go under /cache /rules section
It looks like something below
/rules
{
# initial blanket deny
/0000
{
/glob "*"
/type "deny"
}
/0100
{
/glob "*.html"
/type "deny"
}
}
Does rule 100 mean that the dispatcher is not caching any html pages?
|
AEM Dispatcher - cache rules
|
Thanks all, I have fixed it by making the cache method open:
@Component
@CacheConfig(cacheNames = arrayOf("longCacheManager"), cacheManager = "longCacheManager")
open class CacheService {
@Cacheable(key = "#id.toString()")
open fun save(id: Long): Long {
return id
}
}
Spring generates the proxy with cglib, which must be able to inherit the classes and override the methods involved.
But Kotlin classes and methods are final by default unless marked with the keyword open.
|
I'm trying to replace some java code with kotlin, such as jpa or cache code.
The Start class is:
@EnableAsync
@EnableCaching
@EnableSwagger2
@SpringBootApplication
open class Application
fun main(args: Array<String>) {
SpringApplication.run(Application::class.java)
}
The simple controller:
@RestController
class CacheController {
@Autowired
private lateinit var cache: CacheService
@PutMapping("{id}")
fun save(@PathVariable id: Long) {
cache.save(id)
}
}
CacheService:
@Component
@CacheConfig(cacheNames = arrayOf("longCacheManager"), cacheManager = "longCacheManager")
open class CacheService {
@Cacheable(key = "#id")
fun save(id: Long): Long {
return id
}
}
cacheManager:
@Configuration
open class CacheConfig {
@Autowired
private lateinit var redisConnectionFactory: RedisConnectionFactory
@Bean
@Qualifier("longCacheManager")
open fun longCacheManager(): CacheManager {
val redisTemplate = StringRedisTemplate(redisConnectionFactory)
redisTemplate.valueSerializer = GenericToStringSerializer(Long::class.java)
val cacheManager = RedisCacheManager(redisTemplate)
cacheManager.setUsePrefix(true)
return cacheManager
}
}
I can confirm the id parameter reaches CacheService's save method, but after I execute the PUT method there is nothing in redis.
When I write the cache service with java like this, redis saves what I want.
The Java cache service like this:
@Component
@CacheConfig(cacheNames = "longCacheManager", cacheManager = "longCacheManager")
public class JavaCacheService {
@Cacheable(key = "#id")
public Long save(Long id) {
return id;
}
}
I have also read some article like this:
https://pathtogeek.com/spring-boot-caching-with-kotlin
My SpringBootVersion is 1.5.3.RELEASE
and kotlinVersion is 1.1.3-2
|
spring boot cache not support kotlin?
|
firebaser here
The integration between Cloud Functions, Firebase Hosting and its CDN is currently a purely time-to-live based cache. When you set a cache-header in your Cloud Functions, the CDN puts your response in its edge caches for the time period you indicate. Once it expires, the CDN edges will stop serving the content from the cache and request a fresh copy from the server when a user on that edge requests it.
We know that having an API to tell the CDN to refresh this content would allow for many additional use-cases. But this is currently not in the scope of Firebase Hosting.
|
In my app, Users profiles are open to the public and only updated by the profile owner.
The profile URL is example.com/[email protected]
And based on the docs https://firebase.google.com/docs/hosting/functions, I can cache the response JSON of the function, in this case the public profile, which will save me a lot of Cloud Functions executions and Firebase database bandwidth.
And when the user updates his profile, I want to re-cache the profile in the CDN.
I think that can be done by making the user re-request his public profile, with Cache-Control: no-cache in the request header, after a successful update of his profile.
And when a user visits that same profile afterward he shall see the new version.
Is that possible? Or is that not how Cache-Control should be used?
|
example for caching cloud function for firebase requests and only re cache after a successful edit to the profile
|
Once the file is cached, it will not be downloaded again, unless the page specifies a different version of the file than is already present in the cache. As such, minification helps only when the file is downloaded and not thereafter.
The minification of the files certainly could (in theory) have an impact on parsing times of user agents, but given that we are only talking about characters, there would literally have to be tens of thousands of extraneous characters before any noticeable performance degradation would be seen. So, for practical purposes, a minified file won't be processed/parsed faster in any noticeable way than a non-minified one.
|
This question already has answers here:
Does minified JavaScript code improve performance?
(7 answers)
Closed 6 years ago.
I am well aware that a good practice in web development is to minify your JS and CSS files when you're done developing them to reduce the amount of data downloaded with each http fetch.
My question, however, is if JS and CSS file minification helps after the first page load if you're caching the files.
I have no doubt it helps the first time, but does minification also speed-up load and execution time if the file is being loaded from an on-disk cache?
Or at that point is its effect rather small. Please let me know your thoughts. Thanks.
|
Does JS/CSS minification help even if the files are cached? [duplicate]
|
It sounds like you are referencing the Apache Hadoop HDFS Architecture documentation, specifically the section titled Staging. Unfortunately, this information is out-of-date and no longer an accurate description of current HDFS behavior.
Instead, the client immediately issues a create RPC call to the NameNode. The NameNode tracks the new file in its metadata and replies with a set of candidate DataNode addresses that can receive writes of block data. Then, the client starts writing data to the file. As the client writes data, it is writing on a socket connection to a DataNode. If the written data becomes large enough to cross a block size boundary, then the client will interact with the NameNode again for an addBlock RPC to allocate a new block in NameNode metadata and get a new set of candidate DataNode locations. There is no point at which the client is writing to a local temporary file.
Note however that alternative file systems, such as S3AFileSystem which integrates with Amazon S3, may support options for buffering to disk. (See the Apache Hadoop documentation for Integration with Amazon Web Services if you're interested in more details on this.)
I have filed Apache JIRA HDFS-11995 to track correcting the documentation.
|
Why can't HDFS client send directly to DataNode?
What's the advantage of HDFS client cache?
An application request to create a file does not reach the NameNode immediately.
In fact, initially the HDFS client caches the file data into a temporary local file.
Application writes are transparently redirected to this temporary local file.
When the local file accumulates data worth at least one HDFS block size, the client contacts the NameNode to create a file.
The NameNode then proceeds as described in the section on Create. The client flushes the block of data from the local temporary file to the specified DataNodes.
When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode.
The client then tells the NameNode that the file is closed.
At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.
|
Why HDFS client caches the file data into a temporary local file?
|
I was stuck in a similar situation for as long as I can remember! Using getMetadata() is slow and causes delays. What I figured was that to keep my image up to date, there was no other option but to incorporate the Realtime Database. This can be done in one of the following ways:
First:
Whenever the image at the particular storage location is edited, you update the timestamp for that image's node in your realtime database. And when you want to display the image, just download its timestamp and write a Glide.signature(timestamp) method similar to what you mentioned.
Second:
Just obtain the download URL of the image whenever you upload/edit it, and save the URL in the realtime database. In this way, whenever the image is updated, a different URL is saved to the same location. This guarantees that your cache does not show outdated images (changing URL of source is the advocated method to invalidate caches for Glide).
I understand that there could be overhead involved with retrieving data from the realtime database first and then downloading the image. However, that's the only way to go when using Glide + Firebase. Plus, enabling persistence and other realtime database quirks can make it seamlessly fast!
|
I'm using Glide and Firebase for loading and caching images. Usually I use a Signature with the image's created time to determine cache time. But in Firebase I can get the created time only via a second request, getMetadata(). How do I cache correctly when I replace one image with another of the same name in my storage? Should I use getMetadata() or are there other ways?
Glide.with(getContext())
.using(new FirebaseImageLoader())
.load(storageReference.child(item.getImageUrl()))
.placeholder(R.drawable.category_image_not_found)
.signature(???)
.into(image);
|
Firebase+glide, caching strategy
|
The problem is that maxAge option of express.static use the Cache-Control header.
maxAge: Set the max-age property of the Cache-Control header in milliseconds or a string in ms format.
In the Expires header docs you can find the following:
If there is a Cache-Control header with the "max-age" or "s-maxage" directive in the response, the Expires header is ignored
Therefore, a possible solution is using only the Cache-Control header. Next snippets sets this header inside an express middleware:
app.use((req, res, next) => {
res.header('Cache-Control', 'max-age=2592000000');
next();
});
|
I'm trying to setup browser caching on my website without any success
There is my code on my server.js file
app.use(function(req, res, next){
res.set({
Expires: new Date(Date.now() + 2592000000).toUTCString()
});
next();
})
app.use(express.static(path.join(__dirname, '../build/public'),{maxAge:2592000000}));
What's wrong?
|
browser caching express nodejs
|
You can try something like this:
<img src="1000.jpg?<?php echo(time());?>">
|
I have some images in a folder that I rename (like 1000.jpg, 2000.jpg...) with a PHP script, but when PHP returns to the HTML the images haven't been reloaded in the right order to reflect what happened in the directory. How can I clear the browser cache for files and images, without using Chrome's standard function, in PHP or JavaScript?
something like this
header("Expires: Tue, 01 Jan 2000 00:00:00 GMT");
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");
or like
location.reload(true)
doesn't work
|
Clear browser cache for load renamed file
|
Just to expand on @BDS and @Jakub's comments, who have the right answers, if you only do one thing it should be to change your query to eager loading (note that we are selecting both entity types in our query so all records will be fetched in one go):
$refuel= $this->m_em
->getRepository('AppBundle:Refuel')
->createQueryBuilder('r')
->select('r,t')
->join('r.fuel_type', 't')
->getQuery()
->getResult();
Once you've done that, you can start to look at utilising the second level cache. For that, you need to specify the cache driver in your config[_prod].yml:
second_level_cache:
region_cache_driver:
type: memcache
enabled: true
log_enabled: false
regions:
my_region:
cache_driver: memcache
lifetime: 3600
And then add annotations to your cachable entities:
/**
* @ORM\Entity
* @ORM\Table(name="fos_user")
* @ORM\Cache(usage="NONSTRICT_READ_WRITE", region="my_region")
*/
|
I have a list of records, each of which has at least one related entity. On the record list I display the main list and data from related records, so if the table has ~100 records doctrine generates and executes ~150 queries: one for the list and one for each related entity, which is not a great thing. It could easily be reduced to just 2 queries: one for the list and one for all related entities at once.
As I found, second level cache in doctrine would be perfect for such case.
So, I have enabled cache in config:
doctrine:
orm:
second_level_cache:
enabled: true
And added the annotation @ORM\Cache(usage="READ_ONLY") to all entities. Then, before generating the list, I am fetching all records from both tables:
$this->getDoctrine()->getRepository('AppBundle:Refuel')->findAll();
$this->getDoctrine()->getRepository('AppBundle:FuelType')->findAll();
I hoped they would get cached and then be reused during the actual list render without generating ~150 queries. But that is not the case. The debug panel shows that there are still ~150 queries executed, and the cache stats are:
Hits: 0
Misses: 2
Puts: 319
So I guess entities are being cached but not reused. Why?
Everything I've found so far about the second level cache relates to redis. I do not want to use it. Is redis required to use that cache?
It is sad that there is so little documentation on how to configure this.
|
Symfony 2 second level cache enabled but not being used?
|
Here is a line from documentation for $http service:
cache – {boolean|Object} – A boolean value or object created with $cacheFactory to enable or disable caching of the HTTP response. See $http Caching for more information.
So by specifying {cache: $templateCache} you tell Angular to cache HTTP response in internal cache data map, accessible as $templateCache service. It means that if you request ./template.html again with $http.get or by using template as source in ngInclude directive, it will not be redownloaded but will be retrieved from cache.
|
Could someone explain to me the following code:
$http.get('./template.html', {
cache: $templateCache
}).then (function(response){
console.log(response.data);
});
I understand response.data will be equal to the whole content of template.html, however what about the object
{cache: $templateCache}
What does it do?
|
$http.get and $templateCache
|
In your .env file, change:
CACHE_DRIVER=file
TO
CACHE_DRIVER=array
After this change you might have to execute the following commands in the command line:
php artisan config:clear
php artisan config:cache
|
After searching all possible answers, I still don't know how to solve this problem. I'm using the zizaco/entrust package for laravel, and every time I save data into the database it returns an error:
BadMethodCallException in Repository.php line 294:
This cache store does not support tagging.
Some said I just have to use CACHE_DRIVER=array, and others said to use memcached or redis. Do I have to use array? Some say that it is the right way to solve the problem. Or can I use the cache without tagging?
T.I.A.
|
Entrust - This cache store does not support tagging in Laravel 5.1
|
Todd from Fabric here. It is not safe to delete these programmatically as they contain our crash report data. The folder Library/Caches/com.crashlytics.data/ is where crashes are uploaded from when your app relaunches. Thanks!
|
There're 2 folders in ~/Library/Caches in our iOS APP:
com.crashlytics.data
io.fabric.sdk.ios.data
It seems that they're used by Fabric?
I want to add a feature to delete all contents in the Caches folder, and I'm wondering if it's safe to delete these 2 folders?
If I delete the 2 folders when APP is running, what will happen if there're crashes in APP? Will the crash reports still be sent to Fabric?
Any advice would be appreciated.
|
Is it safe to delete Fabric contents in ~/Library/Caches in iOS APP
|
Bad news - you have a bug in the second one ;)
Original code
.L3:
mov ecx, DWORD PTR v[0+eax*4]
add eax, 1
cmp eax, 10000000
jl .L3
Second version
.L3:
mov ecx, DWORD PTR v[0+eax*4]
mov ecx, DWORD PTR v[0+eax*4 + 4]
mov ecx, DWORD PTR v[0+eax*4 + 8]
mov ecx, DWORD PTR v[0+eax*4 + 12]
add eax, 4
cmp eax, 2500000 <- here
jl .L3
In both cases you need to load 10 mln elements. Max element address accessed in both cases must be the same, right?
So in first case max address is:
(10.000.000-1)*4 = 39.999.996
and second:
(2.500.000-4)*4+12 = 9.999.996
exactly 4 times less.
Just fix second example to cmp eax, 10000000 too.
|
I have run the following assembly code (it iterates 1000 times through an array of 10 000 000 elements of 4 bytes each) on an Intel Core i7 CPU (with 32KB L1 data cache and 64B L1 cache line size):
main:
.LFB0:
.cfi_startproc
mov edx, 1000
jmp .L2
.L3:
mov ecx, DWORD PTR v[0+eax*4]
add eax, 1
cmp eax, 10000000
jl .L3
sub edx, 1
je .L4
.L2:
mov eax, 0
jmp .L3
.L4:
mov eax, 0
ret
.cfi_endproc
Perf gives the following stats:
10,135,716,950 L1-dcache-loads
601,544,266 L1-dcache-load-misses # 5.93% of all L1-dcache hits
4.747253821 seconds time elapsed
This makes totally sense because I am accessing 1 000 * 10 000 000 = 10 000 000 000 elements in memory, and the cache line being 64B (with an element in vector of 4 B) this means an L1 cache miss at every 16 elements (therefore about 625 000 000 L1 cache misses).
Now, I have "unrolled" a part of the loop and the code is:
.cfi_startproc
mov edx, 1000
jmp .L2
.L3:
mov ecx, DWORD PTR v[0+eax*4]
mov ecx, DWORD PTR v[0+eax*4 + 4]
mov ecx, DWORD PTR v[0+eax*4 + 8]
mov ecx, DWORD PTR v[0+eax*4 + 12]
add eax, 4
cmp eax, 2500000
jl .L3
sub edx, 1
je .L4
.L2:
mov eax, 0
jmp .L3
.L4:
mov eax, 0
ret
.cfi_endproc
Perf how gives the following stats:
2,503,436,639 L1-dcache-loads
123,835,666 L1-dcache-load-misses # 4.95% of all L1-dcache hits
0.629926637 seconds time elapsed
I cannot understand why?
1) There are fewer L1 cache loads, since I am accessing the same amount of data?
2) The code runs 6 times faster than the first version? I know that it has
to do with Out-of-order execution and superscalar execution, but I cannot explain this in detail (I want to understand exactly what causes this speed-up).
|
Intel Core i7 processor and cache behaviour
|
As per the Rails docs, add the following code block to your environments/development.rb
config.assets.configure do |env|
env.cache = ActiveSupport::Cache.lookup_store(:null_store)
end
|
I have tried a lot to disable the sprockets asset cache in rails, but in vain. I have tried to configure development.rb but it is not working at all.
I am using this code to disable cache generation:
config.assets.cache_store = :null_store # Disables the Asset cache
config.sass.cache = false # Disable the SASS compiler cache
ruby version=2.3.3
rails version=5.0.1
thanks in advance.
|
Disable the sprockets assets caching in rails 5+
|
In addition to being available in Workers, the Cache Storage API is also available as part of the window global scope, as window.caches.
Here's an excerpt from a full example of using that interface to get a list of all cache contents:
window.caches.keys().then(function(cacheNames) {
cacheNames.forEach(function(cacheName) {
window.caches.open(cacheName).then(function(cache) {
return cache.keys();
}).then(function(requests) {
requests.forEach(function(request) {
// Do something with request, like update your UI
// based on request.url.
});
});
});
});
|
My blog already has a working Service Worker that caches recent posts, and let users read even when they are offline.
On the offline page that is shown for content not available in the cache, I would like to list the recent posts that ARE in the cache, to give the user an opportunity to read them while offline.
Is there an easy way to list such content when in a standard window context, instead of a Service Worker one?
I can't find any tutorial for this. All the tutorials I find only deal with the Service Worker part.
Thanks.
|
How can I list my blog posts already stored in the browser cache by a Service Worker?
|
I've figured out the answer:
First I had to add cache config in config.yml:
framework:
cache:
pools:
my_cache_name:
adapter: cache.adapter.filesystem
default_lifetime: 0
Then instead of
$cache = new FilesystemAdapter();
I had to use the new service like:
$cache = $this->getContainer()->get('my_cache_name');
And it started working! Hope it helps others!
|
I'm using the new Symfony Cache Component. I generate my cache first with a command (Command/AppCacheGenerateCommand.php):
$cache = new FilesystemAdapter();
foreach ($domains as $domain){
if ($domain->getHost()){
$output->writeln('Generate cache for domain: ' . $domain->getHost());
$domainCache = $cache->getItem('domain.' . $domain->getHost());
$domainCache->set($domain->getId());
$cache->save($domainCache);
}
}
Then trying to get these cached elements in a onKernelRequest EventListener (EventListener/RequestListener.php)
$cache = new FilesystemAdapter();
$domainCache = $cache->getItem('domain.' . $host);
if (!$domainCache->isHit()){
die;
}
It always dies here, not going further. Can anyone give me an explanation? (I've checked whether the host isn't matching, but it does match...)
|
Symfony Cache Component (3.1) not saving cache
|
Yes, there is:
p <- ggplot(iris, (aes(x = Species, y = Sepal.Length))) +
geom_boxplot()
g <- ggplotGrob(p)
library(grid)
grid.newpage()
grid.draw(g)
system.time(print(p))
#user system elapsed
#0.11 0.00 0.11
system.time({
grid.newpage()
grid.draw(g)
})
#user system elapsed
#0.03 0.00 0.03
But also consider if you create the right kind of plot. E.g., if you plot hundreds of thousands of points, you are creating a plot that contains huge amounts of overplotting.
|
Working with ggplot and shiny, and plotting a lot of data to generate some interactive plots.
I have some performance problems, so I've checked my plotting time with benchplot(), and some of the big plots are slow. For example, this is the time it took to produce one of those plots:
step user.self sys.self elapsed
1 construct 0.093 0.005 0.101
2 build 1.528 0.044 1.583
3 render 3.292 0.070 3.446
4 draw 3.102 0.189 3.521
5 TOTAL 8.015 0.308 8.651
I can't plot with ggvis or ggbio, because they don't have faceting, which is essential.
Is there a way to cache the constructing, building and rendering of the plot, so I only need to draw it when asked, and can save half of the time?
(saving pictures is not a possibility, because the plots are interactive)
|
A way to cache a ggplot2 plot
|
Source: PHP: Worry about some magical added “Cache-Control” Header ?
These headers are automatically set by the PHP Session module to
prevent browser/proxy based caching of your pages. Depending on your
environment setup, it’s possible to control these headers by using the
session_cache_limiter() function or use the php.ini
To disable this behaviour, just pass an empty string to the
session_cache_limiter() function, as mentioned in the documentation:
session_cache_limiter('');
|
I have an application where a number of otherwise static javascript files are being generated via PHP to allow configuration options to alter the static files (path like: mystaticfile.js.php). Everything works fine EXCEPT that I can't seem to get the cache settings to work and these resources are being reloaded with every page load.
The PHP file uses the following headers to try to set the cache settings:
$expires= 60 * 60 * 24 * 60; //cache for 60 days
header('Pragma: public');
header('Cache-Control: max-age=' . $expires);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $expires) . ' GMT');
header("content-type: application/x-javascript");
However, when the files are served they're showing headers that look like:
HTTP/1.1 200 OK
Date: Sun, 06 Nov 2016 19:18:00 GMT
Server: Apache/2.2.15 (CentOS)
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 744
Keep-Alive: timeout=15, max=95
Connection: Keep-Alive
Content-Type: application/x-javascript
My first thought was that it might be because Apache has the ExpiresActive flag set on, but I don't see any ExpiresByType rules set for PHP files.
Reading online it sounds like ETag issues could be the problem, but I've added
Header unset Pragma
FileETag None
Header unset ETag
to the http.conf file (and restarted the service) and still no dice.
Any thoughts?
|
Cache Headers on PHP file not working
|
So, I think I've got it. First, I minified the files as much as possible and got them down to around 5 MB.
var cached = localStorage.getItem('data');
if (cached) {
    var data = JSON.parse(cached); /*draw the map*/
} else {
    d3.json("geo.json", function(data){
        localStorage.setItem('data', JSON.stringify(data));
        /*draw the map*/
    });
}
|
I'm using a really large geojson file to draw a map in d3. I've reduced the size of the geojson file as much as possible and it's still huge at 30 MB. I want to store it in localStorage to make the page more efficient, at least for those who are refreshing the page, but since d3 references the file directly, and localStorage returns a string, I'm not sure how to go about caching it.
d3.json("geo.json", function(data){/*draw the map*/});
|
How to cache a geojson file in d3?
|
The file read will fetch data from page cache without writing to disk. From Linux Kernel Development 3rd Edition by Robert Love:
Whenever the kernel begins a read operation—for example, when a
process issues the read() system call—it first checks if the requisite
data is in the page cache. If it is, the kernel can forgo accessing
the disk and read the data directly out of RAM. This is called a cache
hit. If the data is not in the cache, called a cache miss, the kernel
must schedule block I/O operations to read the data off the disk.
Writeback to disk happens periodically, separate from read:
The third strategy, employed by Linux, is called write-back. In a
write-back cache, processes perform write operations directly into the
page cache. The backing store is not immediately or directly updated.
Instead, the written-to pages in the page cache are marked as dirty
and are added to a dirty list. Periodically, pages in the dirty list
are written back to disk in a process called writeback, bringing the
on-disk copy in line with the in-memory cache.
|
When bytes are written to a file, the kernel does not immediately write those bytes to disk but stores the bytes in dirtied pages in the page cache (write-back caching).
The question is if a file read is issued before the dirty pages are flushed to disk, will the bytes be served from the dirtied pages in the cache or will the dirty pages first be flushed to disk followed by a disk read to serve the bytes (storing them in the page cache in the process)?
|
Are file reads served from dirtied pages in the page cache?
|
You should look up "resources" in "jar" files. That's what is commonly used for this job in the Java world. Normally a jar file is just a zip file, which is sequential, but many years ago the ability to have indexed jar files was added, which provides random access to their contents.
You can begin here: http://docs.oracle.com/javase/1.5.0/docs/tooldocs/windows/jar.html
(Look for the "i" option which adds an index to the jar file.)
|
I have seen a few programs and games that store their data in an indexed file, and they load their data from that file which they usually call cache.
I want to be able to load my data in this way:
final int SPRITES_INDEX = 3;
List<Sprite> sprites = (List<Sprite>) cache.loadIndex(SPRITES_INDEX);
Does any know how it's done and why its done this way? or is there a name for this method of storing data?
|
Java storing multiple images, config files and such in an indexed file?
|
Just now, I had a similar problem.
So, my solution:
In the nginx config, add this to the http block:
http {
...
map $request_uri $request_path {
~(?<captured_path>[^?]*) $captured_path;
}
...
}
Then you will have a variable $request_path, which contains $request_uri without the query string.
So use $request_path for the cache key:
fastcgi_cache_key "$scheme$request_method$host$request_path";
Important: the "map" directive can only be added inside "http {}". It will be evaluated for all requests on all hosts.
|
I am trying to integrate a WordPress plugin (Jetpack's Related Posts module) which adds query strings to the end of post URLs. I would like to cache URLs with FastCGI while completely ignoring the query strings/$args.
My current config is: fastcgi_cache_key "$scheme$request_method$host$request_uri";
I am aware of using the solution mentioned here to turn my $skip_cache variable off for URLs containing a certain $arg, which works. However, I want to cache the same result regardless of the value of the $args rather than using a unique cache key for each set of $args.
I am also aware of some suggestions to just use $uri in the fastcgi_cache_key rather than $request_uri; however, because $uri is not just the original requested URI minus the $args, something in the WordPress architecture (likely pretty links) forces all requested URIs to return the same cache result (rather than a different result for each page).
Is there any way to truly use the originally requested URI without including the $args in the cache key?
|
Set fastcgi_cache_key using original $request_uri without $args
|
Reading the source (as well as the docs), it looks like the mixin class is ONLY for use when you use the default list and retrieve functions. Check the source:
# -*- coding: utf-8 -*-
from rest_framework_extensions.cache.decorators import cache_response
from rest_framework_extensions.settings import extensions_api_settings
class BaseCacheResponseMixin(object):
# todo: test me. Create generic test like
# test_cache_reponse(view_instance, method, should_rebuild_after_method_evaluation)
object_cache_key_func = extensions_api_settings.DEFAULT_OBJECT_CACHE_KEY_FUNC
list_cache_key_func = extensions_api_settings.DEFAULT_LIST_CACHE_KEY_FUNC
class ListCacheResponseMixin(BaseCacheResponseMixin):
@cache_response(key_func='list_cache_key_func')
def list(self, request, *args, **kwargs):
return super(ListCacheResponseMixin, self).list(request, *args, **kwargs)
class RetrieveCacheResponseMixin(BaseCacheResponseMixin):
@cache_response(key_func='object_cache_key_func')
def retrieve(self, request, *args, **kwargs):
return super(RetrieveCacheResponseMixin, self).retrieve(request, *args, **kwargs)
class CacheResponseMixin(RetrieveCacheResponseMixin,
ListCacheResponseMixin):
pass
As you can see, it defines its own list and retrieve methods. When you write yours in your viewset class, it bypasses these ones completely.
So, the answer is to use the decorators when you need to write your own list and retrieve functions, or, if you can use the default list and retrieve functions built into the view/viewset, then use the mixin class.
|
I'm using Django Rest Framework and the DRF-Extensions for caching.
I have a viewset with custom list() and retrieve() methods. I can put @cache_response() decorators on the methods and it successfully gets and sets to the cache. However, if I try to use CacheResponseMixin nothing happens.
Works:
class SeriesViewSet(viewsets.ReadOnlyModelViewSet):
serializer_class = SeriesSerializer
def get_queryset(self):
series_type = EntityType.objects.get(name='series')
return Container.objects.filter(type=series_type)
@cache_response()
def list(self, request):
series = self.get_queryset()
serializer = SeriesSerializer(series, many=True)
return Response(serializer.data)
@cache_response()
def retrieve(self, request, pk=None):
name = pk
series = self.get_queryset()
show = series.get(data__title=name)
serializer = SeriesSerializer(show)
return Response(serializer.data)
Does NOT work:
class SeriesViewSet(CacheResponseMixin, viewsets.ReadOnlyModelViewSet):
serializer_class = SeriesSerializer
def get_queryset(self):
series_type = EntityType.objects.get(name='series')
return Container.objects.filter(type=series_type)
def list(self, request):
series = self.get_queryset()
serializer = SeriesSerializer(series, many=True)
return Response(serializer.data)
def retrieve(self, request, pk=None):
name = pk
series = self.get_queryset()
show = series.get(data__title=name)
serializer = SeriesSerializer(show)
return Response(serializer.data)
No errors are given, my cache entry simply doesn't get created.
|
Why isn't the drf-extensions CacheResponseMixin caching?
|
You could just use the request modifier and pass it as an option to Kingfisher's setting image method. Like this:
let modifier = AnyModifier { request in
var r = request
r.setValue("Bearer fbzi5u0f5kyajdcxrlnhl3zwl1t2wqaor", forHTTPHeaderField: "Authorization")
return r
}
imageView.kf.setImage(with: url, placeholder: nil, options: [.requestModifier(modifier)])
See the wiki page of Kingfisher for more.
|
I have a tableView with customCells and I need to load pictures from a REST API (access with auth token).
As I'm new to Swift, I came across some libraries, and it seems Kingfisher and AlamofireImage are good ones for async loading and caching of images retrieved from an API call.
But since my API here has an access token, how can that being passed into this request?
//image handling with kingfisher
if let imgView = cell.schoolCoverImage {
imgView.kf_setImageWithURL(
NSURL(string: "")!,
placeholderImage: nil,
optionsInfo: nil,
progressBlock: nil,
completionHandler: { (image, error, CacheType, imageURL) -> () in
self.tableView.reloadRowsAtIndexPaths([indexPath], withRowAnimation: UITableViewRowAnimation.Automatic) }
)
}
For example in Alamofire there is the field headers where the access token can be passed
//Sample API call with Alamofire
Alamofire.request(
.GET,
baseURL+schools+nonAcademicParameter,
headers: accessToken
)
.responseJSON { response in
switch response.result {
case .Success(let value):
completionHandler(value, nil)
case .Failure(let error):
completionHandler(nil, error)
}
}
But with AlamofireImage the field headers seems not to be available
//Image request with AlamofireImage
Alamofire.request(
.GET,
"https://httpbin.org/image/png"),
headers: ["Authorization" : "Bearer fbzi5u0f5kyajdcxrlnhl3zwl1t2wqaor"] //not available
.responseImage { response in
debugPrint(response)
print(response.request)
print(response.response)
debugPrint(response.result)
if let image = response.result.value {
print("image downloaded: \(image)")
}
}
|
How to use KingFisher or AlamofireImage library with auth token?
|
I assume that you are working on a queue, where you insert items in a single place and retrieve them in multiple places in the order in which they were inserted.
You can't achieve this with a single command, but you can do it with two commands and write a Lua script to make them atomic.
Lrange: http://redis.io/commands/lrange
lrange list -100 -1
This lists the first 100 elements inserted into the list; here the offset is -100.
Note that this returns the items in the opposite order from which they were inserted (LPUSH prepends), so you need to reverse the result to preserve the queue order.
Ltrim: http://redis.io/commands/ltrim
ltrim list 0 -101
This removes those same 100 elements from the list; the offset is -(n+1), hence -101 for n = 100.
Writing them inside a Lua block will ensure atomicity.
Let me give you a simple example.
You insert 100 elements in a single place.
lpush list 1 2 3 .. 100
You have multiple clients, each trying to run this Lua block. Say your n value is 5 here. The first
client gets in and receives the first 5 elements inserted:
127.0.0.1:6379> lrange list -5 -1
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
You keep them in your lua object and delete them.
127.0.0.1:6379> LTRIM list 0 -6
OK
Return them to your code. The result you want is 1 2 3 4 5, but what you got is 5 4 3 2 1, so you need to reverse the array before processing.
When the next client comes in it will get the next set of values.
127.0.0.1:6379> lrange list -5 -1
1) "10"
2) "9"
3) "8"
4) "7"
5) "6"
In this way you can achieve your requirement. Hope this helps.
EDIT:
Lua script:
local result = redis.call('lrange', 'list','-5','-1')
redis.call('ltrim','list','0','-6')
return result
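Since you are on StackExchange.Redis, a sketch of running that script atomically from C# (the method name ListLeftPopRange is hypothetical, Muxer is assumed to be your existing ConnectionMultiplexer, and deserialization mirrors your existing helper):
using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;
using StackExchange.Redis;

public static List<T> ListLeftPopRange<T>(string key, int count) where T : class
{
    // LRANGE + LTRIM run as one atomic Lua script on the server.
    const string script = @"
        local n = tonumber(ARGV[1])
        local result = redis.call('lrange', KEYS[1], -n, -1)
        redis.call('ltrim', KEYS[1], 0, -(n + 1))
        return result";

    IDatabase db = Muxer.GetDatabase();
    var values = (RedisValue[])db.ScriptEvaluate(
        script, new RedisKey[] { key }, new RedisValue[] { count });

    // LRANGE returns these newest-first, so reverse to restore insertion order.
    Array.Reverse(values);
    return values.Select(v => JsonConvert.DeserializeObject<T>((string)v)).ToList();
}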
|
I have a distributed system where, in one place, I insert around 10000 items into a Redis list and then call my multiple applications' hooks to process the items. What I need is some ListLeftPop type of method that takes the number of items; it should remove the items from the Redis list and return them to my calling application.
I am using StackExchange.Redis.Extensions
My current method just for get (not pop) is
public static List<T> GetListItemRange<T>(string key, int start, int chunksize) where T : class
{
List<T> obj = default(List<T>);
try
{
if (Muxer != null && Muxer.IsConnected && Muxer.GetDatabase() != null)
{
var cacheClient = new StackExchangeRedisCacheClient(Muxer, new NewtonsoftSerializer());
var redisValues = cacheClient.Database.ListRange(key, start, (start + chunksize - 1));
if (redisValues.Length > 0)
{
obj = Array.ConvertAll(redisValues, value => JsonConvert.DeserializeObject<T>(value)).ToList();
}
}
}
catch (Exception ex)
{
Logger.Fatal(ex.Message, ex);
}
return obj;
}
For pop-and-get I have found a snippet:
var cacheClient = new StackExchangeRedisCacheClient(Muxer, new NewtonsoftSerializer());
var redisValues = cacheClient.ListGetFromRight<T>(key);
But it will only do it for a single item.
|
Redis Pop list item By numbers of items
|
Answering for general JCache ExpiryPolicy, maybe there are additional options in Apache Ignite.
The TouchedExpiryPolicy uses the same duration for creation, access and update.
You can set individual times by implementing the ExpiryPolicy interface yourself (see the sketch below).
Be careful about the logic implications. Setting 10 seconds expiry after access and 30 seconds after creation means for example:
Item is created, stays 30 seconds in the cache, if not accessed
Item is created, is accessed 5 seconds after creation, item expires 15 seconds after creation (10 seconds after access)
Probably you want to achieve something different. So the answer is: Mixing TTL and TTI isn't possible the way it is designed.
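That said, if the combined semantics described above are what you actually want, a minimal sketch of such a custom JCache policy (using the durations from your example) could be:
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;

// Sketch: expire 30 seconds after creation, 10 seconds after each access.
public class CreatedAndTouchedExpiryPolicy implements ExpiryPolicy {

    @Override
    public Duration getExpiryForCreation() {
        return new Duration(TimeUnit.SECONDS, 30);
    }

    @Override
    public Duration getExpiryForAccess() {
        return new Duration(TimeUnit.SECONDS, 10);
    }

    @Override
    public Duration getExpiryForUpdate() {
        return null; // null leaves the current expiry time unchanged
    }
}
It would then be registered once via cacheCfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(CreatedAndTouchedExpiryPolicy.class)).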
|
For example, I would like to configure the cache with the following two expiry policies:
TouchedExpiryPolicy
CreatedExpiryPolicy
The sample code is as below (Apache Ignite version 1.5.0.final):
public IgniteCache<String, Object> getOrCreateCache(String cacheName) {
Ignite ignite = Ignition.ignite();
CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<String, Object>(cacheName);
cacheCfg.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));
cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 30)));
IgniteCache<String, Object> igniteCache = ignite.getOrCreateCache(cacheCfg);
return igniteCache;
}
The problem is, however, that the second expiry policy will replace the first one. Wonder if there is any way I can configure the Ignite cache so that the cache will honour both expiry policies? Thank you.
By the way, in EhCache, I can achieve the same thing by configuring the cache the following way:
<cache name="my-cache-name" ...
timeToIdleSeconds="10" timeToLiveSeconds="30"
...>
</cache>
References:
Apache Ignite Expiry Policies
Ehcache configuration
|
How to configure Apache Ignite cache with multiple expiry policies
|
If you are using the default cache store, i.e. the FileStore or the MemoryStore, deleting only a subset of keys should be possible with the delete_matched method, e.g.:
Rails.cache.delete_matched(/^google\.com/)
If you are using memcached, it is not possible and you would have to manually delete all keys exactly as used in the fetch calls. Or simply don't bother with expiration at all and use keys that change on every relevant change to the data they contain; see e.g. this blog post.
|
We have:
Rails.cache.fetch("google.com/videos", expires_in: 12.hours) { # some request }
Rails.cache.fetch("google.com/images", expires_in: 12.hours) { # some request }
Rails.cache.fetch("stackoverflow.com/questions", expires_in: 12.hours) { # some request }
How can I get the Rails cache to expire entries by the "google.com" or "stackoverflow.com" namespace?
|
How to manually expire rails low cache by namespace
|
There is no way to set a compound cache key as a concatenated string. The Django template engine treats the next token following the timeout as the cache key string.
In my question it was symbol "*":
{% cache 3600 * 24 tmpl_key %}
{% include 'comments/index.html' with obj=video %}
{% endcache %}
So Django treats '*' as the cache key and '24', 'tmpl_key' as vary_on parameters.
Correct code:
You need to pass cache params like this:
{% cache 3600 video_comments video.id %}
{% include 'comments/index.html' with obj=video %}
{% endcache %}
Then invalidate the cache by passing the list of vary_on parameters to make_template_fragment_key:
@receiver(comment_was_posted)
def comment_was_posted_handler(sender, **kwargs):
    comment = kwargs['comment']
    if str(comment.content_type) == "video":
        key = make_template_fragment_key('video_comments', [comment.object_pk])
        cache.delete(key)
|
In Django template I have compound cache key variable:
{% with video.id|to_str as video_id %}
{% with 'cm'|add:"video"|add:video_id as tmpl_key %}
{% cache 3600 * 24 tmpl_key %}
{% include 'comments/index.html' with obj=video %}
{% endcache %}
{% endwith %}
{% endwith %}
This block caches comments on the video page.
On signal (occures when new comment was added) I'm trying to invalidate cache like this:
@receiver(comment_was_posted)
def comment_was_posted_handler(sender, **kwargs):
comment = kwargs['comment']
cache_key = "cm" + str(comment.content_type) + str(comment.object_pk)
key = make_template_fragment_key(cache_key)
print(cache.get(key))
cache.delete(key)
print(cache.get(key)) always returns None (cache invalidation fails because there is no cache entry under the retrieved key), but I'm sure that caching works well and that tmpl_key and cache_key are equal.
Am I doing something wrong with make_template_fragment_key?
|
Django invalidate template cache by signal
|
A Maven archetype is an artifact after all, and as such it will be cached automatically by Maven on first use. Later usages will always resolve already-fetched artifacts from the local cache first. We can also force Maven to only use the cache (offline mode, as explained below).
So you could simply invoke the concerned archetypes once (i.e. for a dummy project) and have them offline for further invocations.
If you really want to cache it upfront, you could use the Maven Dependency Plugin and its get goal to add the archetype artifact to your local Maven cache.
For instance, let's cache the Maven Quickstart Archetype as following:
mvn dependency:get -DgroupId=org.apache.maven.archetypes \
-DartifactId=maven-archetype-quickstart -Dversion=1.0
It will hence store on your local Maven cache the maven-archetype-quickstart-1.0.jar artifact.
If you don't know where your local Maven cache is, you can use the Maven Help Plugin and run:
mvn help:evaluate -Dexpression=settings.localRepository
As part of the verbose output, you will get the full path to your local Maven cache.
Now that the Quickstart archetype is in our cache, we can run it using the -o flag (forced offline) for the Maven invocation:
mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes \
-DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.0 \
-DgroupId=com.sample -DartifactId=test -Dversion=1.0-SNAPSHOT -B -o
As such, Maven will run an off-line execution and only use its local cache.
Note that you could also use the archetypeCatalog option while invoking archetype:generate and set it to local to only check the local catalog, but a forced execution to offline mode (-o) would better suit your need (forcing local catalog AND local cached archetypes).
|
I'd like to be able to start a project from a Maven archetype while being offline. But I can't find clear instructions on how to cache Maven archetypes for offline use. Does anyone have any advice?
|
Offline Caching of Maven Archetypes
|
You can set expires headers for different file extensions like so:
<IfModule mod_expires.c>
ExpiresActive on
# Your document html
ExpiresByType text/html "access plus 0 seconds"
# Media: images, video, audio
ExpiresByType audio/ogg "access plus 1 month"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType video/mp4 "access plus 1 month"
ExpiresByType video/ogg "access plus 1 month"
ExpiresByType video/webm "access plus 1 month"
# CSS and JavaScript
ExpiresByType application/javascript "access plus 1 year"
ExpiresByType text/css "access plus 1 year"
</IfModule>
For specific files that you know will be changed on a more regular basis, you can add a query string such as ?v=[timestamp] to the URL, so that the browser recognises it as a new version.
For example, you could change the .js file to main.js?version=2 when you want the viewer to see a new version.
Alternatively you can change it to main.js?ts=<?php echo time() ?> for the file to be reloaded on every visit, because the timestamp will be different each time.
Edit Another solution would be to use the last-edited time of the file (using filemtime()) to append as a parameter, like so:
<script type='text/javascript' src='main.js?fmt=<?= filemtime("main.js") ?>'>
<link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); echo '?ver=' . filemtime( get_stylesheet_directory() . '/style.css'); ?>" type="text/css" media="all" />
This would mean that the first load after a change in the file's data would force a refresh.
|
I'm running WordPress. I added this code to the .htaccess file so that users see the latest version of the website and not a cached one.
<IfModule mod_rewrite.c>
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires 0
</IfModule>
But now it loads very slowly, and the reason is clear: every time I refresh, it loads all the files again. I was wondering whether there is a way to detect when something has changed and only then send these headers, or clear the cache in some other way. I don't want to use any plugin; I want to fix it via some PHP or JavaScript code. Any help is appreciated.
|
set cache headers only when something changed in the website (php, htaccess)
|
According to section-5.2.1.4, it appears that no-cache request directive best fits my need.
The "no-cache" request directive indicates that a cache MUST NOT use a
stored response to satisfy the request without successful validation
on the origin server.
Nothing is said about subsequent requests, which is exactly what I want. There is also a no-cache response directive in section-5.2.2.2, but that also applies to subsequent requests.
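For illustration, a client forcing revalidation for a single request would send something like this (the path and host are placeholders):
GET /items/42 HTTP/1.1
Host: api.example.com
Cache-Control: no-cache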
|
We're designing a REST service with server-side caching. We'd like to provide an option to the client to specifically ask for latest data even if the cached data has not expired. I'm looking into the HTTP 1.1 spec to see if there exists a standard way to do this, and the Cache Revalidation and Reload Controls appears to fit my need.
Questions:
Should we just use Cache Revalidation and Reload Controls?
If not, is it acceptable to include an If-Modified-Since header with epoch time, causing the server to always consider the resource as having changed? The spec doesn't preclude this, but I'm wondering if I'm abusing :) the intent of the header?
What'd be a good way to identify the resource to refresh? In our case, the URL path alone is not enough, and I'm not sure if query or matrix parameters are considered as part of a unique URL. What about using an ETag?
|
How can a HTTP client request the server for latest data/to refresh cache?
|
I was able to solve it like this:
create nginx.conf.erb file:
cp $(passenger-config about resourcesdir)/templates/standalone/config.erb nginx.conf.erb
Inside server block in nginx.conf.erb, instruct Nginx to generate appropriate headers when a file under our assets directory is requested:
server {
# ....
location ~* ^/assets/ {
# Per RFC2616 - 1 year maximum expiry
expires 1y;
add_header Cache-Control public;
# Some browsers still send conditional-GET requests if there's a
# Last-Modified header or an ETag header even if they haven't
# reached the expiry date sent in the Expires header.
add_header Last-Modified "";
add_header ETag "";
break;
}
}
Pass Nginx engine options to passenger in Procfile:
web: bundle exec passenger start -p $PORT --max-pool-size 3 --nginx-config-template nginx.conf.erb
|
I'm using Rails api with an angularjs front-end which is served simply as static files under public directory. I've chosen passenger as the app server, deployed to heroku and everything seems to be working fine except for caching.
Since static assets are served by passenger/nginx, I believe this has nothing to do with rails. But I have no idea how to get it working or where to add configurations.
Response headers when requesting a static file (application-a24e9c3607.js):
Connection: keep-alive
Content-Length: 0
Date: Thu, 14 Jan 2016 06:45:31 GMT
Etag: "5696ce02-43102"
Last-Modified: Wed, 13 Jan 2016 22:21:54 GMT
Server: nginx/1.8.0
Via: 1.1 vegur
|
Rails + Passenger on Heroku: How to set expiry date or a maximum age in the HTTP headers for static resources?
|
The below code is working for me,
for (String cacheName : getCacheManager().getCacheNames()) {
logger.info("Clearing cache: " + cacheName);
Cache cache = getCacheManager().getCache(cacheName);
Object obj = cache.getNativeCache();
if (obj instanceof net.sf.ehcache.Ehcache) {
Ehcache ehCa = (Ehcache)obj;
List<Object> keys = ehCa.getKeys();
for (Object key : keys) {
String keyString = (String)key;
if (keyString.equalsIgnoreCase("CACHE_LIST_COLUMNS_10000_2"))
{
cache.evict(key);
}
}
}
}
|
I want to remove particular cache keys. I am using java spring.
I have different keys under same cache name. I have to remove some particular keys, not the whole cache.
The cache code is as below,
@CacheEvict(value="MyCache", key="CACHE_LIST_COLUMNS + #accountId + '_' + #formType")
public void addListColumn(){.. my code..}
@CacheEvict(value="MyCache", key="CACHE_SUMMARY_FIELDS + #accountId + '_' + #formType")
public void addSummaryColumn(){.. my code..}
Now as you can see, under the name 'MyCache' I have two different cache entries with different keys. Can someone guide me on how to get a particular key from a particular cache and remove that entry?
I want to remove this cache list:
value="MyCache", key="CACHE_SUMMARY_FIELDS + #accountId + '_' + #formType"
This is what I am trying
StringBuffer cacheNames = new StringBuffer();
for (String cacheName : getCacheManager().getCacheNames()) {
Cache cache = getCacheManager().getCache(cacheName);
cache.clear();
}
What this code does is gets cache of name 'MyCache' and clears this whole MyCache. But I don't want to clear all the cache entries.
For example the keys are,
CacheEvict(value="MyCache", key="CACHE_LIST_COLUMNS + 10000 + '_' + 3")
CacheEvict(value="MyCache", key="CACHE_SUMMARY_FIELDS + 10000 + '_' + 4")
So the keys are Key1 = CACHE_LIST_COLUMNS10000_3
Key2 = CACHE_SUMMARY_FIELDS10000_4
Now I want to remove only key CACHE_LIST_COLUMNS10000_3.
Hence how can I get the cache 'MyCache' and Key CACHE_LIST_COLUMNS10000_3,
and remove only data related to this key(CACHE_LIST_COLUMNS10000_3).
I have to explicitly remove cache through java code. Not through annotations.
What my function will be is get cache keys as input and delete only those particular keys.
If still you don't understand my question please let me know.
|
How can I remove a particular key from Cache using Java Spring
|
I think the suggestion from Louis, using the keys for locking, is the most simple and practical one. Here is a code snippet that, without the help of Guava libraries, illustrates the idea:
static final Lock[] LOCKS = new Lock[64];
static {
    // initialize lock array
    for (int i = 0; i < LOCKS.length; i++) {
        LOCKS[i] = new ReentrantLock();
    }
}

int id;

void doSomething() {
    // floorMod avoids a negative index when id is negative
    final Lock lock = LOCKS[Math.floorMod(id, LOCKS.length)];
    lock.lock();
    try {
        /* protected code */
    } finally {
        lock.unlock();
    }
}
The size of the lock array limits the maximum amount of parallelism you get. If your code is only using CPU, you can initialize it with the number of available processors, and this is the perfect solution. If your code waits for I/O, you might need an arbitrarily big array of locks, or you limit the number of threads that can run the critical section. In that case another approach might be better.
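For completeness, Guava itself ships this pattern as Striped, so a sketch using it (the stripe count of 64 is an arbitrary choice) would be:
import java.util.concurrent.locks.Lock;
import com.google.common.util.concurrent.Striped;

class ItemWorker {
    // 64 stripes; ids that hash to the same stripe share a lock.
    private static final Striped<Lock> STRIPED = Striped.lock(64);

    void doSomething(int id) {
        Lock lock = STRIPED.get(id);
        lock.lock();
        try {
            /* protected code */
        } finally {
            lock.unlock();
        }
    }
}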
Comments on a more conceptual level:
If you want to prevent the item from being evicted, you need a mechanism called pinning. Internally this is used by most cache implementations, e.g. for blocking during I/O operations. Some caches may expose a way for applications to do it.
In a JCache compatible cache, there is the concept of an EntryProcessor. The EntryProcessor allows you to run a piece of code on an entry in an atomic way. This means the cache does all the locking for you. Depending on the scope of the problem, this may have an advantage, since it also works in clustered scenarios, which means the locking is cluster-wide.
Another idea that comes to mind is vetoable eviction, a concept EHCache 3 implements. By specifying a vetoable eviction policy you can implement a pinning mechanism of your own.
|
I'm using something like
Cache<Integer, Item> cache;
where the Items are independent of each other and look like
private static class Item {
private final int id;
... some mutable data
synchronized doSomething() {...}
synchronized doSomethingElse() {...}
}
The idea is to obtain the item from the cache and call a synchronized method on it. In case of a miss, the item can be recreated, that's fine.
A problem occurs when an item gets evicted from the cache and recreated while a thread runs a synchronized method. A new thread obtains a new item and synchronizes on it... so for a single id, there are two threads inside the synchronized method. FAIL.
Is there an easy way around it? It's Guava Cache, if it helps.
|
Synchronizing on cached items
|
I suggest taking a look at the talk
Symfony Routing Under the Hood - David Buchmann
which gives a great overview of the Symfony Routing component.
Routing is compiled to PHP code, which is cached in the prod environment.
The main points of optimization are:
Compile routes to PHP
Dump a cached matcher as a single class
Group similar routes
Prefer strpos; use regex only when needed
Possessive quantifiers in regex
Hope this helps
|
I'm planning my first project with the actual version of Symfony 3.0. So this won't be my last question :)
What I wonder most about at the moment are routes. In the Symfony book the default way to define routes is by using annotations in the controller classes. Does this mean that every single time someone hits my URL, all the classes are parsed to find the best-matching route? Wouldn't this be a real performance issue? Or is there a built-in cache?
|
Symfony3: Routing & Cache
|
The keychain (consider the name) is designed to hold keys and other reasonably small secure items. For data, encrypt it with AES using Common Crypto and save the key in the keychain. Create the key from random bytes. Save the encrypted data in the Documents directory or subdirectory.
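A minimal sketch of that approach in current Swift syntax (the service and account strings are placeholders; error handling is elided):
import Foundation
import Security

// Generate a random 256-bit key for AES.
var keyData = Data(count: 32)
let status = keyData.withUnsafeMutableBytes { buffer in
    SecRandomCopyBytes(kSecRandomDefault, 32, buffer.baseAddress!)
}
precondition(status == errSecSuccess)

// Store only the key in the keychain.
let query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.securecache", // placeholder
    kSecAttrAccount as String: "cache-key",               // placeholder
    kSecValueData as String: keyData
]
SecItemAdd(query as CFDictionary, nil)

// The payload itself would be AES-encrypted with this key (via Common Crypto)
// and written to a file under Documents, not stored in the keychain.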
|
In my app, I want to keep very sensitive data persisted on a client in an encrypted cache, and thought of using the keychain.
Potentially, we could end up putting quite a bit of information (a couple of MBs) into this cache and was wondering...
Are there any hard limits on the size of data that I can cram into the keychain?
Is there another/better place I can store this data? I only need a simple key/value interface similar to NSUserDefaults, but encrypted.
Thanks in advance!
|
Limits of iOS Keychain usage
|
I understand that you are trying to avoid the session lock on the page.
The cache does not lock full page access, so the answer is that cache access does not run sequentially.
There are two kinds of cache: one in memory that uses a static dictionary to keep the data, and one that saves the cache to a database, using files to store the data. Both of them lock the data only for the duration of a read/write, while the session locks full access to the page from start to end.
So use the cache, but disable the session on the pages where you have this issue (see the sketch below). Also keep in mind that if you use a web garden, the in-memory cache can hold different data per process, because the memory cache has its own static space in each pool.
Also, the session is different for each user, while the cache is the same for all users.
Some more to read: ASP.NET Server does not process pages asynchronously
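A sketch of opting an MVC controller out of the session lock, assuming it only ever reads the session (FeedController is a hypothetical name):
using System.Web.Mvc;
using System.Web.SessionState;

// ReadOnly session state: concurrent requests to this controller are
// no longer serialized by the session lock.
[SessionState(SessionStateBehavior.ReadOnly)]
public class FeedController : Controller
{
    public ActionResult Index()
    {
        // read from MemoryCache / ObjectCache here instead of Session
        return View();
    }
}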
|
I'm writing an ASP.NET MVC5 application. I know that the actions where session["foo"] = bar are run sequentially. To avoid this, I want to store some information in a MemoryCache object and not in the session, but my doubt is: is the cache managed like the session? That is, are the actions where I put ObjectCache.Set("foo", bar, null) run sequentially as with the session?
I know the scope difference between cache and session, but for me, in this case, it's not important.
Thanks to everyone
|
Access differences between cache and session in ASP.NET MVC
|
By having "only-if-cached" in the Cache-Control directive the client won't contact the server, it's just going to check the cache and return the 504 when it cannot find a cache entry match.
How to solve it?
Handling the 504 response with a new request without the client's Cache-Control header. Or just removing the "only-if-cached" if it's not the intented behaviour.
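A sketch of such a network interceptor for OkHttp 2.x (the 60-second max-age is an arbitrary example), added where you build the client:
import java.io.IOException;
import com.squareup.okhttp.Interceptor;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Response;

OkHttpClient okHttpClient = new OkHttpClient();
okHttpClient.networkInterceptors().add(new Interceptor() {
    @Override
    public Response intercept(Chain chain) throws IOException {
        Response originalResponse = chain.proceed(chain.request());
        // Overwrite the server's Cache-Control so OkHttp will store the response
        return originalResponse.newBuilder()
                .header("Cache-Control", "public, max-age=60")
                .build();
    }
});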
|
In my Android application I am using Retrofit for network downloads, and I am also using OkHttp to cache the responses. Given below is the code that I wrote:
private RestAdapter buildRestAdapter(String baseUrl, Context ctx) {
RestAdapter.LogLevel logLevel = RestAdapter.LogLevel.FULL;
RestAdapter.Builder builder = new RestAdapter.Builder();
builder.setLogLevel(logLevel);
if (!TextUtils.isEmpty(baseUrl))
builder.setEndpoint(baseUrl);
builder.setRequestInterceptor(new RequestInterceptor() {
@Override
public void intercept(RequestFacade request) {
int maxStale = 60 * 60 * 24 * 28;
request.addHeader("Cache-Control",
"public, only-if-cached, max-stale=" + maxStale);
}
});
File httpCacheDirectory = new File(ctx.getCacheDir(), "responses");
Cache cache = new Cache(httpCacheDirectory, 10 * 1024 * 1024);
OkHttpClient okHttpClient = new OkHttpClient();
if (cache != null) {
okHttpClient.setCache(cache);
}
builder.setClient(new OkClient(okHttpClient));
return builder.build();
}
The problem here is that when I make an API call I get a failure response, and the error says 504 Unsatisfiable Request (only-if-cached).
What could be the issue here? Any help will be appreciated.
|
OkHttp Response caching not works
|
The caching factory is designed for short-lived sessions such as operations performed with a JmsTemplate.
It's generally not needed with the listener container (because its sessions are generally long-lived), unless you perform JmsTemplate operations on the container's thread - to participate in the container's transaction.
In that case, consumers should not be cached by the factory, especially if variable concurrency is in use. See the container javadocs.
|
When using the spring-jms, there are 2 options given by spring for connection and session caching for performance gain.
Use the CachingConnectionFactory and cache the sessions, optionally you can cache producer and consumers as well.
Using the DefaultMessageListenerContainer you can set the cacheLevel to [1:connection caching, 2:session caching], with 3 you can cache consumers as well.
My question is: why did Spring create this redundant functionality? Which one is optimal and fast in terms of performance?
|
Which is better to use CachingConnectionFactory or caching in DefaultMessageListenerContainer?
|
There is no "Rails way" to do this. This is a problem no matter what framework/language you're using and is best solved with a content delivery network (aka CDN)
|
In a Rails 4 app, we have some big images on our homepage (the dimensions are like 2400px on width) and naturally, their loading is quite slow.
What are the options to speed up loading them? One way is to decrease their quality => their size => faster loading.
But is there a Rails way to pre-cache/compress them?
Thank you.
|
Rails 4 - is there a way to compress big images to make them load faster?
|
Deleting the gradle folder, quitting Android Studio and starting it again fixed this problem for me.
|
My Gradle build in Android Studio is failing because it can't access the cache.properties file. I have tried invalidating all caches and retrying, but that did not work. I also reinstalled all the SDK tools. Here is the error I'm getting:
Error:java.io.FileNotFoundException: C:\Users\Siddharth\AndroidStudioProjects\IntentAssignment\.gradle\2.4\taskArtifacts\cache.properties (The system cannot find the file specified)
C:\Users\Siddharth\AndroidStudioProjects\IntentAssignment\.gradle\2.4\taskArtifacts\cache.properties (The system cannot find the file specified)
Any advice on what to do, here is the complete console output:
FAILURE: Build failed with an exception.
* What went wrong:
java.io.FileNotFoundException: C:\Users\Siddharth\AndroidStudioProjects\IntentAssignment\.gradle\2.4\taskArtifacts\cache.properties (The system cannot find the file specified)
C:\Users\Siddharth\AndroidStudioProjects\IntentAssignment\.gradle\2.4\taskArtifacts\cache.properties (The system cannot find the file specified)
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 0.747 secs
|
Gradle build not executing because it can't find cache.properties
|
The function responsible for creating the cache key for the template tag is django.core.cache.utils.make_template_fragment_key. It takes your cache fragment name as the first argument (in this case "object_detail") and, as the second argument, the list of additional arguments passed to the cache tag (in this case object_detail.pk and request.LANGUAGE_CODE). It returns the complete key in the format: template.cache.__YOUR_CACHE_FRAGMENT_NAME__.__HEX_DIGEST_OF_FRAGMENT_NAME_AND_PARAMETERS__
If you want to know how that hex digest is computed, check source code
So your code should look like this:
from django.core.cache.utils import make_template_fragment_key

def clear_cache_block(obj, lang):
    key = make_template_fragment_key('object_detail', (obj.id, lang))
    cache.delete_pattern(key)

where lang is the language code for the language whose cache you're trying to clear. If you want to do it for all languages, you must do it in a loop. (Since make_template_fragment_key returns the exact key, a plain cache.delete(key) works as well.)
|
I use redis cache backend and caching templates via django cache.
I create cache with template tag
{% cache 43200 object_detail object_detail.pk request.LANGUAGE_CODE %}
{% endcache %}
and in redis-cli I see something like this:
1) ":1:template.cache.object_detail.89484b14b36a8d5329426a3d944d2983"
My cache invalidation is a function that is performed after saving the object in an UpdateView and takes this object:
def clear_cache_block(obj):
key = hashlib.md5()
obj_pk = obj.pk
key.update(str(obj))
cache.delete_pattern('*object_detail.'+str(key.hexdigest()))
but the generated hash is not the same as the hash in the Redis cache.
What should I use to clear the cache only for the object I update?
|
Django clear cache for only detail view
|
Reference it as JSF resource. Then JSF resource handler will automatically take care of caching. Its default expiration time is configurable in web.xml.
So instead of
<div data-src="resources/images/slide1.png">
do
<div data-src="#{resource['images/slide1.png']}">
See also:
How to reference CSS / JS / image resource in Facelets template?
|
I have a JSF page with very big images - 3.4 MB. I use this code to call the image:
<div data-src="resources/images/slide1.png">
Is there any way to cache the image in the client browser? Every time, it takes 3-4 seconds to download the picture.
|
Cache image in browser
|
Setting 777 permissions does nothing, because when the webserver user writes folders to the cache, those folders belong to that user and you, as the command-line user, have no access to them, and vice versa. Set permissions the correct way: http://symfony.com/doc/current/book/installation.html#book-installation-permissions
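For reference, that page uses ACLs so both users can write; a sketch (assuming your webserver runs as the nginx user; substitute your actual one):
rm -rf app/cache/* app/logs/*
sudo setfacl -R -m u:nginx:rwX -m u:$(whoami):rwX app/cache app/logs
sudo setfacl -dR -m u:nginx:rwX -m u:$(whoami):rwX app/cache app/logs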
|
RuntimeException in ClassCollectionLoader.php line 239:
Failed to write cache file "/var/www/fareast/app/cache/dev/classes.php".
I have this error when I run my project in symfony2.
I have nginx as my web server and centOS 7 as my OS.
I tried deleting the cache folder manually and by
php app/console cache:clear
then, as a bad practice, I applied this to app/cache and app/logs:
chmod -R 777
but I have the same error.
/var/log/nginx/error.log is also empty.
Do you recommend other solutions?
Or do I have to install some php5 libraries for cache?
|
Unable to write cache symfony2
|
Edit the file /etc/mysql/conf.d/mysql.cnf and add the following (the settings must sit under a [mysqld] section):
[mysqld]
query_cache_size = 268435456
query_cache_type=1
query_cache_limit=1048576
|
I'm trying to enable MySQL Query Cache on Ubuntu 15.04 with MySQL 5.6.25
I've added this to end end of /etc/mysql/my.cnf and /etc/mysql/conf.d/mysql.cnf:
query_cache_type = 1
query_cache_size = 4096M
query_cache_limit = 2M
query_cache_strip_comments =1
The whole server has been restarted more than once.
user@myhost:/$ mysql
mysql: unknown variable 'query_cache_type=1'
Using SHOW VARIABLES LIKE '%query_cache%' confirms that query_cache_type = OFF
SET GLOBAL query_cache_type = 1;
/* SQL Error (1651): Query cache is disabled; restart the server with query_cache_type=1 to enable it */
How can I solve this?
|
Can't enable MySQL 5.6 Query Cache
|
None of the cache drivers bundled with Laravel offers this kind of two-layer storage, so you'll need to implement a new driver yourself. Luckily, it won't be too complicated.
First, create your new driver:
class SessionRedisStore extends RedisStore {
    public function get($key) {
        return Session::has($key) ? Session::get($key) : parent::get($key);
    }
    public function put($key, $value, $minutes, $storeInSession = false) {
        // Mirror the value into the user's session for cheap reads
        if ($storeInSession) Session::put($key, $value);
        return parent::put($key, $value, $minutes);
    }
}
Next, register the new driver in your AppServiceProvider:
public function register()
{
$this->app['cache']->extend('session_redis', function(array $config)
{
$redis = $this->app['redis'];
$connection = array_get($config, 'connection', 'default') ?: 'default';
return Cache::repository(new SessionRedisStore($redis, $this->getPrefix($config), $connection));
});
}
provide config in your config/cache.php:
'session_redis' => [
'driver' => 'session_redis',
'connection' => 'default',
],
and set your cache driver to that driver in config/cache.php or .env file:
'default' => env('CACHE_DRIVER', 'session_redis'),
Please keep in mind that I've updated only get() and put() methods. You might need to override some more methods, but doing that should be just as simple as for get/put.
Another thing to keep in mind is that I've produced above snippets by looking at the Laravel code and didn't have a chance to test it :) Let me know if you have any issues and I'll be more than happy to get it working :)
|
For performance reasons, I'd like to store some data in the PHP session rather than my Redis cache.
I'm hoping to use the Laravel Cache facade to do this, but with some sort of syntax to indicate that I'd like a copy to be kept in the user's session in addition to the normal Redis cache.
Then on retrieval, I want the Cache store to look first in the Session, then only if not found do the network request to Redis.
I'm not looking for full code, but some direction would be appreciated.
|
How can I create a session-backed Laravel Cache store?
|
The quickest workaround is probably to modify the URI with every call; this will bypass caching. Just append a parameter like "?dummy=345" to your URI and change the parameter value (345) with every call.
This looks like a new URI to the caching mechanism, so it will fetch fresh content.
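Applied to a WP8 HttpWebRequest like yours, a sketch (the nocache parameter name is arbitrary):
// A unique query parameter defeats the WP8 HTTP cache on every refresh.
string AuthServiceUri = "http://" + Authentication.ipAddress
    + "/api/alerts/open?nocache=" + DateTime.UtcNow.Ticks;
HttpWebRequest alerts_request = HttpWebRequest.Create(new Uri(AuthServiceUri)) as HttpWebRequest;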
|
I am developing a mobile app using Rest API. I'm using Asynchronous calls as GetResponse method is not supported in Windows Phone 8 development.
When I launch the application, it fetches the correct data using GET method. I have implemented a 60 second refresh interval. When the refresh triggers, the JSON output that I receive is not the new one but the one fetched initially. Basically it is not refreshing. I went through some of the blogs here and found that it is an issue with caching. I need help with disabling this cache. Also, I checked and found that HttpRequestCachePolicy cannot be used as System.Net.Cache doesnt exist in the framework (I am new to development so please correct me if I'm wrong here)
Below is the code that I'm using.
Request Creation:
string AuthServiceUri = "http://" + Authentication.ipAddress + "/api/alerts/open";
HttpWebRequest alerts_request = HttpWebRequest.Create(AuthServiceUri) as HttpWebRequest;
alerts_request.Accept = "application/json";
alerts_request.Method = "GET";
alerts_request.Headers["AuthToken"] = Authentication.authToken;
alerts_request.BeginGetResponse(new AsyncCallback(AlertsGetResponsetStreamCallback), alerts_request);
GetResponseStreamCallback:
HttpWebRequest request = (HttpWebRequest)callbackResult.AsyncState;
HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(callbackResult);
response.Headers["Cache-Control"] = "no-cache";
string responseString = "";
Stream streamResponse = response.GetResponseStream();
StreamReader reader = new StreamReader(streamResponse);
responseString = reader.ReadToEnd();
streamResponse.Close();
reader.Close();
response.Close();
string result = responseString;
The code works just fine fetching the results; it's just that I'm having trouble clearing the cache. Am I implementing "no-cache" correctly here by adding it to the response header? Or am I missing something? Should it be added to the request header instead? Be my savior!!
|
How to disable cache for asynchronous HTTPWebRequest for Windows Phone application?
|